| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
20,216,055 | https://en.wikipedia.org/wiki/A-423579 | A-423,579 is one of a range of histamine antagonists developed by Abbott Laboratories which are selective for the H3 subtype, and have stimulant and anorectic effects in animal studies making them potentially useful treatments for obesity. A-423,579 has improved characteristics over earlier drugs in the series with both high efficacy and low toxicity in studies on mice, and is currently in clinical development.
References
Fluoroarenes
H3 receptor antagonists
Benzonitriles
4-Hydroxybiphenyl ethers
Pyrrolidines
Nitriles | A-423579 | [
"Chemistry"
] | 123 | [
"Nitriles",
"Functional groups"
] |
20,218,245 | https://en.wikipedia.org/wiki/Huabiao | Huabiao () is a type of ceremonial column used in traditional Chinese architecture. Huabiao are traditionally erected in pairs in front of palaces and tombs. The prominence of their placement have made them one of the emblems of traditional Chinese culture. When placed outside palaces, they can also be called bangmu (). When placed outside a tomb, they can also be called shendaozhu ().
Structure
Extant huabiao are typically made from white marble. A huabiao is typically made up of four components. At the bottom is a square base which is decorated with bas-relief depictions of dragons, lotuses, and other auspicious symbols. Above is a column, decorated with a coiled dragon and auspicious clouds. Near the top, the column is crossed by a horizontal stone board in the shape of a cloud (called the "cloud board"). The column is topped by a round cap, called the chenglupan (承露盤) "dew-collecting plate" (see fangzhu). At the top of the cap sits a mythical creature called the denglong (), one of the "Nine sons of the dragon", which is said to have the habit of watching the sky. Its role atop the huabiao is said to be to communicate the mood of the people to the Heavens above.
History
Classical texts in China attribute the beginning of the huabiao to Shun, a legendary leader traditionally dated to the 23rd–22nd century BC. Some say it developed from the totem poles of ancient tribes. The Huainanzi describes the feibangmu (), or bangmu for short, literally "commentary board", as a wooden board set up on main roads to allow the people to write criticism of government policies. However, tradition holds that by the mid-Xia dynasty, the king had moved the bangmu in front of the palace, in order to control public criticism. During the notorious reign of King Li of Zhou, the king would monitor those who wrote on the bangmu, and those who criticised the government would be killed. The practical use of the bangmu gradually diminished as a result of such practices.
In the Han dynasty, the bangmu became merely a symbol of the government's responsibility to the people. These were erected near bridges, palaces, city gates and tombs; the name huabiao arose during this time. During the Southern and Northern Dynasties, the Liang dynasty restored the institution of the bangmu, by installing boxes next to the bangmu. Those wishing to air grievances or to comment on government policies could post their writings in these boxes. However, by this time, the column itself was no longer treated as a bulletin board.
It is thought that, in their use on spirit roads, the huabiao replaced the ornate que towers, which were commonly used during the Eastern Han dynasty (25–220 AD).
Notable examples
Some prominent examples of ancient huabiao that can still be seen today include the following.
There are two pairs of huabiao at Tiananmen, with one pair located inside the gate, and one pair outside. These were erected in the Ming dynasty in the 15th century.
Pairs of huabiao flank the spirit way of most of the medieval and pre-modern imperial tombs which survive to this day, including the imperial tombs of the Ming and Qing dynasties.
A pair of huabiao are located outside the tomb of the Marquess Pingzhong of Wu of the Liang dynasty, located in Nanjing.
A pair of huabiao originally from the Old Summer Palace in Beijing are now located within the grounds of Peking University.
Two pairs of huabiao near the Lugou Bridge (Marco Polo Bridge) at the southwestern outskirts of Beijing. One pair is located at the eastern end of the bridge, the other at the western end, flanking the roadway.
In the early 20th century, the huabiao, in a Modernist form, was incorporated into the developing vocabulary of a modern Chinese architectural style. Examples of these modernist re-interpretations of the huabiao can be seen in front of a variety of institutions built during that period, such as Tongji University in Shanghai, or the Sun Yat-sen Mausoleum in Nanjing.
More recently, a trend has developed in some parts of China to create (often enlarged) replicas of the classical huabiao, though not often in the classical context. For example, Xinghai Square in Dalian, which was built in the 1990s, incorporated a single huabiao at its centre to commemorate China's resumption of sovereignty over Hong Kong. On 5 August 2016, Dalian's landmark huabiao was demolished in secret at 00:30 am, as the government considered it a vanity project of Bo Xilai (a former mayor of Dalian who was convicted of corruption in 2013). It had stood for 19 years and had become a landmark and symbol of Dalian. According to taxi drivers in the city and many online comments by Dalian locals, citizens were angered by the demolition. However, the media and government both remained silent.
During the 2008 Summer Olympics opening ceremony, a pair of huabiao were featured as part of the performance.
See also
Totem Pole
Obelisk
Paifang
References
External links
Architecture in China
Columns and entablature
Traditional Chinese architecture | Huabiao | [
"Technology"
] | 1,093 | [
"Structural system",
"Columns and entablature"
] |
20,220,596 | https://en.wikipedia.org/wiki/Comparison%20of%20force-field%20implementations | This is a table of notable computer programs implementing molecular mechanics force fields.
See also
Force field (chemistry)
List of software for Monte Carlo molecular modeling
Molecular mechanics
Molecular design software
Molecule editor
Comparison of software for molecular mechanics modeling
Molecular modeling on GPU
References
Force fields (chemistry)
Molecular modelling
Software comparisons | Comparison of force-field implementations | [
"Chemistry",
"Technology"
] | 60 | [
"Molecular physics",
"Computing comparisons",
"Theoretical chemistry",
"Molecular modelling",
"Molecular dynamics",
"Computational chemistry",
"Software comparisons",
"Force fields (chemistry)"
] |
20,220,938 | https://en.wikipedia.org/wiki/Osmotic%20coefficient | An osmotic coefficient is a quantity which characterises the deviation of a solvent from ideal behaviour, referenced to Raoult's law. It can be also applied to solutes. Its definition depends on the ways of expressing chemical composition of mixtures.
The osmotic coefficient based on molality m is defined by:

φ = (μ*A − μA) / (RT MA Σi mi)

and on a mole fraction basis by:

g = (μA − μ*A) / (RT ln xA)

where μ*A is the chemical potential of the pure solvent and μA is the chemical potential of the solvent in a solution, MA is its molar mass, xA its mole fraction, R the gas constant and T the temperature in kelvins. The latter osmotic coefficient is sometimes called the rational osmotic coefficient. The values for the two definitions are different, but since ln xA ≈ −MA Σi mi in dilute solution, the two definitions are similar, and in fact both approach 1 as the concentration goes to zero.
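As a quick consistency check (a sketch added here, not part of the original article; it assumes a single solvent A with molar mass MA and solutes i with molalities mi), the dilute-limit agreement of the two definitions follows from expanding the solvent mole fraction:

```latex
% Sketch: why the molality-based and rational osmotic coefficients coincide at high dilution.
% Assumption: solvent A with molar mass M_A (kg/mol), solutes i with molalities m_i.
x_A = \frac{n_A}{n_A + \sum_i n_i}, \qquad \frac{n_i}{n_A} = M_A m_i
\;\Longrightarrow\;
\ln x_A = -\ln\Bigl(1 + M_A \sum_i m_i\Bigr) \approx -M_A \sum_i m_i
\quad \Bigl(M_A \textstyle\sum_i m_i \ll 1\Bigr).
% Substituting this into g = (\mu_A - \mu_A^{*})/(RT \ln x_A) recovers the
% molality-based definition, so g \to \varphi (and both \to 1) as the solution becomes dilute.
```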
Applications
For liquid solutions, the osmotic coefficient is often used to calculate the salt activity coefficient from the solvent activity, or vice versa. For example, freezing point depression measurements, or measurements of deviations from ideality for other colligative properties, allow calculation of the salt activity coefficient through the osmotic coefficient.
Relation to other quantities
In a single solute solution, the (molality based) osmotic coefficient φ and the solute activity coefficient γ± are related to the excess Gibbs free energy Gex (per kilogram of solvent) by the relation:

Gex = ν m RT (1 − φ + ln γ±)

and there is thus a differential relationship between them (temperature and pressure held constant):

d[(φ − 1) m] = m d(ln γ±)
Liquid electrolyte solutions
For a single salt solute with molal activity aB, the osmotic coefficient can be written as

φ = −ln aA / (ν m MA)

where ν is the stoichiometric number of the salt (the number of ions per formula unit) and aA the activity of the solvent. aA can be calculated from the salt activity coefficient γ± via the Gibbs–Duhem relation.
Moreover, the activity coefficient of the salt can be calculated from the osmotic coefficient by:

ln γ± = (φ − 1) + ∫0m (φ − 1) dm′/m′

According to Debye–Hückel theory, which is accurate only at low concentrations, φ − 1 is asymptotic to −(A/3)√I, where I is ionic strength and A is the Debye–Hückel constant (equal to about 1.17 for water at 25 °C).
This means that, at least at low concentrations, the vapor pressure of the solvent will be greater than that predicted by Raoult's law. For instance, for solutions of magnesium chloride, the vapor pressure is slightly greater than that predicted by Raoult's law up to a concentration of 0.7 mol/kg, after which the vapor pressure is lower than Raoult's law predicts. For aqueous solutions, the osmotic coefficients can be calculated theoretically by Pitzer equations or TCPC model.
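The following short Python sketch (added for illustration; the limiting-law form φ ≈ 1 − (A/3)|z+z−|√I, the constant A ≈ 1.17 and the relation ln aA = −φ ν m MA are the assumptions stated above, not code from any cited source) shows how a Debye–Hückel estimate of the osmotic coefficient translates into a solvent (water) activity for a dilute 1:1 salt:

```python
import math

# Hedged illustration: Debye-Hueckel limiting-law estimate of the osmotic
# coefficient for a dilute 1:1 aqueous electrolyte at 25 degrees C.
A_DH = 1.17      # Debye-Hueckel constant for water at 25 degC, (kg/mol)**0.5
M_W = 0.018015   # molar mass of water, kg/mol

def osmotic_coefficient(m, z_plus=1, z_minus=1):
    """Limiting-law estimate phi = 1 - (A/3)|z+ z-| sqrt(I); valid only at low m."""
    ionic_strength = 0.5 * (m * z_plus**2 + m * z_minus**2)  # mol/kg for a 1:1 salt
    return 1.0 - (A_DH / 3.0) * abs(z_plus * z_minus) * math.sqrt(ionic_strength)

def water_activity(m, nu=2):
    """Solvent activity from ln a_A = -phi * nu * m * M_W (single salt, nu ions)."""
    return math.exp(-osmotic_coefficient(m) * nu * m * M_W)

for m in (0.001, 0.01, 0.1):
    print(f"m = {m:>5} mol/kg  phi ~ {osmotic_coefficient(m):.4f}  a_w ~ {water_activity(m):.6f}")
```

At 0.1 mol/kg the limiting law already deviates noticeably from measured values, which is one reason Pitzer equations or the TCPC model are used at higher concentrations.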
See also
Bromley equation
Pitzer equation
Davies equation
van 't Hoff factor
Law of dilution
Thermodynamic activity
Ion transport number
References
Physical chemistry | Osmotic coefficient | [
"Physics",
"Chemistry"
] | 552 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
25,671,436 | https://en.wikipedia.org/wiki/Neo-organ | In tissue engineering, neo-organ is the final structure of a procedure based on transplantation consisting of endogenous stem/progenitor cells grown ex vivo within predesigned matrix scaffolds. Current organ donation faces the problems of patients waiting to match for an organ and the possible risk of the patient's body rejecting the organ. Neo-organs are being researched as a solution to those problems with organ donation. Suitable methods for creating neo-organs are still under development. One experimental method is using adult stem cells, which use the patients own stem cells for organ donation. Currently this method can be combined with decellularization, which uses a donor organ for structural support but removes the donors cells from the organ. Similarly, the concept of 3-D bioprinting organs has shown experimental success in printing bioink layers that mimic the layer of organ tissues. However, these bioinks do not provide structural support like a donor organ. Current methods of clinically successful neo-organs use a combination of decellularized donor organs, along with adult stem cells of the organ recipient to account for both the structural support of a donor organ and the personalization of the organ for each individual patient to reduce the chance of rejection.
Background
The word neo-organ comes from the Greek word "neos," which means new. Organ transplants have been successfully used for medical purposes since 1954. The difficulty with the traditional process of organ transplants is that it requires waiting for a viable donor to donate an organ. The process of matching the organ to make sure it is compatible with the patient has also proven to be challenging. There are two main challenges: finding the right candidate for the patient and avoiding the patient rejecting the organ even if it is a match. Neo-organs can be used to avoid the process of organ matching and donating.
Neo-Organ Creation Methods
Research is being conducted into methods of creating neo-organs, including the use of adult stem cells, decellularization, and 3-D bioprinting:
Adult Stem Cells
One of the most studied methods is to use the patient's own cells to generate a new organ ex vivo. Specifically, researchers have chosen to focus on adult stem cells, or somatic stem cells, for the generation of new organ cells to create organs. There has been success in the production and use of some organs. The first stem-cell based organ, a tracheal graft, was transplanted successfully in 2008. The method involves obtaining a donor organ, removing the cells and MHC antigens from the donor organ, and colonizing it with stem cells obtained from the patient. This method does not create an entire organ from stem cells, and it still requires a donor to provide the decellularized graft. However, the first surgery done with this method was successful and the patient has shown no signs of rejection since. The current debate with this method is whether the decellularized graft only provided the shape of the organ, or whether it provided additional benefits as a donor graft. Current research seeks ways to use adult stem cells for neo-organs without relying on decellularized donor organs for structural support.
Decellularization
Researchers have begun to focus on decellularization for organ transplants since it reduces the chance of rejection to almost none. This process was used in the first successful stem-cell based organ transplant by removing the cells and MHC antigens from the donor organ. The cells can be removed from the organ by physical, chemical, or enzymatic treatments. This method is especially useful when trying to create a neo-heart, because the heart needs to be created in a way that preserves its structure. Since the stem cells used are currently not able to maintain a shape on their own, researchers have looked increasingly at decellularization of existing organs to perform successful transplant procedures without the problem of rejection. While this method may mitigate rejection, donors are still needed to provide the structure to patients.
3-D Bioprinting
The process of creating a 3-D organ from stem cells is thought not to be possible without the structural support of a donor organ. However, new studies discuss the process of 3-D bioprinting organs. 3-D bioprinting involves combining cells and growth factors to create a bioink, then using that bioink to print individual layers of tissue. Research is being done to find ways to use the formulated bioink to print organs that have the same structural support as donor organs, without the need for donors. While there has not yet been experimental success in printing structurally self-supporting organs, there has been success in using bioink to print tissue layers. A method for creating gelatin-based vascularized bone equivalents has been shown to work in a small-scale experiment, but it has not been used clinically.
References
Further reading
Tissue engineering | Neo-organ | [
"Chemistry",
"Engineering",
"Biology"
] | 1,010 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
25,671,808 | https://en.wikipedia.org/wiki/Gene-activated%20matrix | In gene-activated matrix technology (GAM), cytokines and growth factors could be delivered not as recombinant proteins but as plasmid genes. GAM is one of the tissue engineering approaches to wound healing. Following gene delivery, the recombinant cytokine could be expressed in situ by endogenous would healing cells – in small amounts but for a prolonged period of time – leading to reproducible tissue regeneration. The matrix can be modified by incorporating a viral vector, mRNA or DNA bound to a delivery system, or a naked plasmid.
References
External links
Cardium Presents Gene Activated Matrix Technology And Update On Excellarate Clinical Development Program At ASGT Annual Meeting
Tissue engineering | Gene-activated matrix | [
"Chemistry",
"Engineering",
"Biology"
] | 148 | [
"Biological engineering",
"Bioengineering stubs",
"Cloning",
"Chemical engineering",
"Biotechnology stubs",
"Tissue engineering",
"Medical technology"
] |
25,672,736 | https://en.wikipedia.org/wiki/List%20of%20restriction%20enzyme%20cutting%20sites | A restriction enzyme or restriction endonuclease is a special type of biological macromolecule that functions as part of the "immune system" in bacteria. One special kind of restriction enzymes is the class of "homing endonucleases", these being present in all three domains of life, although their function seems to be very different from one domain to another.
The classical restriction enzymes cut up, and hence render harmless, any unknown (non-cellular) DNA that enters a bacterial cell as a result of a viral infection. They recognize a specific DNA sequence, usually short (3 to 8 bp), and cut it, producing either blunt or overhanging ends, either at or near the recognition site.
Restriction enzymes are quite variable in the short DNA sequences they recognize. An organism often has several different enzymes, each specific to a distinct short DNA sequence.
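As a minimal illustration of what "recognizing a specific DNA sequence and cutting it" means in practice, here is a short Python sketch (added here, not part of the article) using EcoRI's well-known recognition site GAATTC, which is cut after the first G on each strand to leave 5′ AATT overhangs; the example sequence and the helper function are invented for the illustration:

```python
# Hedged sketch: locating a restriction site in a DNA string and showing the
# top-strand fragments produced. Uses EcoRI (recognition GAATTC, cut between
# G and AATTC, leaving 5' AATT overhangs) as the example enzyme.
SITE = "GAATTC"
CUT_OFFSET = 1  # EcoRI cuts the top strand after the first base of the site

def digest(seq: str, site: str = SITE, offset: int = CUT_OFFSET) -> list[str]:
    """Return top-strand fragments produced by cutting at every recognition site."""
    fragments, start, pos = [], 0, seq.find(site)
    while pos != -1:
        fragments.append(seq[start:pos + offset])   # the cut falls inside the site
        start = pos + offset
        pos = seq.find(site, pos + 1)
    fragments.append(seq[start:])
    return fragments

dna = "TTAGCGAATTCGGCATGAATTCAA"
print(digest(dna))  # ['TTAGCG', 'AATTCGGCATG', 'AATTCAA']
```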
Restriction enzymes catalog
The list includes some of the most studied examples of restriction endonucleases. The whole list contains more than 1,200 enzymes, but databases register about 4,000. To keep the list navigable, it has been divided into different pages, each containing roughly 120–150 entries, organized alphabetically.
Notes and references
See also
List of homing endonuclease cutting sites
Restriction enzyme.
Isoschizomer.
Detailed articles about certain restriction enzymes: EcoRI, HindIII, BglII.
Homing endonuclease.
Introns and inteins.
Intragenomic conflict: Homing endonuclease genes.
I-CreI homing endonuclease.
External links
Databases and lists of restriction enzymes:
Very comprehensive database of restriction enzymes supported by New England Biolabs. It includes all kinds of biological, structural, kinetic and commercial information about thousands of enzymes, and related literature for every molecule:
Detailed information for biochemical experiments:
Alphabetical list of enzymes and their restriction sites:
General information about restriction sites and biochemical conditions for restriction reactions:
Databases of proteins:
Database of structures of proteins, solved at atomic resolution:
Biotechnology
Restriction enzyme cutting sites
Restriction enzymes
Enzymes
Proteins | List of restriction enzyme cutting sites | [
"Chemistry",
"Biology"
] | 457 | [
"Genetics techniques",
"Biomolecules by chemical classification",
"Molecular-biology-related lists",
"Biotechnology",
"nan",
"Molecular biology",
"Proteins",
"Restriction enzymes"
] |
25,672,752 | https://en.wikipedia.org/wiki/Quantum%20inverse%20scattering%20method | In quantum physics, the quantum inverse scattering method (QISM), similar to the closely related algebraic Bethe ansatz, is a method for solving integrable models in 1+1 dimensions, introduced by Leon Takhtajan and L. D. Faddeev in 1979.
It can be viewed as a quantized version of the classical inverse scattering method pioneered by Norman Zabusky and Martin Kruskal used to investigate the Korteweg–de Vries equation and later other integrable partial differential equations. In both, a Lax matrix features heavily and scattering data is used to construct solutions to the original system.
While the classical inverse scattering method is used to solve integrable partial differential equations which model continuous media (for example, the KdV equation models shallow water waves), the QISM is used to solve many-body quantum systems, sometimes known as spin chains, of which the Heisenberg spin chain is the best-studied and most famous example. These are typically discrete systems, with particles fixed at different points of a lattice, but limits of results obtained by the QISM can give predictions even for field theories defined on a continuum, such as the quantum sine-Gordon model.
Discussion
The quantum inverse scattering method relates two different approaches:
the Bethe ansatz, a method of solving integrable quantum models in one space and one time dimension.
the inverse scattering transform, a method of solving classical integrable differential equations of the evolutionary type.
This method led to the formulation of quantum groups, in particular the Yangian. The center of the Yangian, given by the quantum determinant, plays a prominent role in the method.
An important concept in the inverse scattering transform is the Lax representation. The quantum inverse scattering method starts by the quantization of the Lax representation and reproduces the results of the Bethe ansatz. In fact, it allows the Bethe ansatz to be written in a new form: the algebraic Bethe ansatz. This led to further progress in the understanding of quantum integrable systems, such as the quantum Heisenberg model, the quantum nonlinear Schrödinger equation (also known as the Lieb–Liniger model or the Tonks–Girardeau gas) and the Hubbard model.
The theory of correlation functions was developed, relating determinant representations, descriptions by differential equations and the Riemann–Hilbert problem. Asymptotics of correlation functions which include space, time and temperature dependence were evaluated in 1991.
Explicit expressions for the higher conservation laws of the integrable models were obtained in 1989.
Essential progress was achieved in the study of ice-type models: the bulk free energy of the six-vertex model depends on boundary conditions even in the thermodynamic limit.
Procedure
The steps can be summarized as follows (the key relations are written out after this list):
Take an R-matrix which solves the Yang–Baxter equation.
Take a representation of an algebra satisfying the RTT relations.
Find the spectrum of the generating function of the centre of the algebra.
Find correlators.
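For reference, the two relations named in the first two steps can be written out as follows (a standard convention with additive spectral parameters; the notation is supplied here for illustration and is not taken from the article):

```latex
% Yang-Baxter equation for an R-matrix R(\lambda) acting on V \otimes V:
R_{12}(\lambda-\mu)\,R_{13}(\lambda)\,R_{23}(\mu)
  = R_{23}(\mu)\,R_{13}(\lambda)\,R_{12}(\lambda-\mu)
% RTT relation for the monodromy matrix T(\lambda):
R_{12}(\lambda-\mu)\,T_{1}(\lambda)\,T_{2}(\mu)
  = T_{2}(\mu)\,T_{1}(\lambda)\,R_{12}(\lambda-\mu)
% Consequence: the transfer matrices t(\lambda) = \mathrm{tr}\,T(\lambda) commute,
% [\,t(\lambda),\,t(\mu)\,] = 0, giving a family of conserved quantities that the
% algebraic Bethe ansatz diagonalises.
```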
References
Exactly solvable models
Quantum mechanics | Quantum inverse scattering method | [
"Physics"
] | 629 | [
"Theoretical physics",
"Quantum mechanics"
] |
25,673,677 | https://en.wikipedia.org/wiki/Second%20Green%20Revolution | The Second Green Revolution is a change in agricultural production widely thought necessary to feed and sustain the growing population on Earth.
These calls came about as a response to rising food commodity prices and fears of peak oil, among other factors.
It is named after the Green Revolution.
Usage
A 1981 article by Peter Steinhart used the term Second Green Revolution to describe future widespread adoption of genetic engineering of new food crops for increased crop yield and nutrition. Sakiko Fukuda-Parr's 2006 book The Gene Revolution: GM Crops and Unequal Development also explored this concept.
Others have used the term to refer to a combination of urban agriculture, smaller farm size and organic agriculture with the aim of increasing resource sustainability of crop production.
Proponents
Bill Gates has been among the proponents of a second green revolution, saying:
Three quarters of the world's poorest people get their food and income by farming small plots of land...if we can make smallholder farming more productive and more profitable, we can have a massive impact on hunger and nutrition and poverty...the charge is clear—we have to develop crops that can grow in a drought; that can survive in a flood; that can resist pests and disease...we need higher yields on the same land in harsher weather."
Gates made these remarks during the World Food Prize. He has made over US$1.4 billion in contributions towards agricultural developments.
Opponents
Some opponents of the Second Green Revolution believe that social inequity is a major factor leading to food insecurity, one which is not addressed by increasing food production capacity.
See also
Plant breeding
Transgenic plant
Urban agriculture
Future food technology
Agricultural robot
Cultured meat
Blue revolution, aquaculture
References
Sustainable agriculture
Agricultural economics
Agricultural revolutions
Genetic engineering and agriculture
Intensive farming | Second Green Revolution | [
"Chemistry",
"Engineering",
"Biology"
] | 361 | [
"Eutrophication",
"Genetic engineering and agriculture",
"Intensive farming",
"Genetic engineering"
] |
25,676,077 | https://en.wikipedia.org/wiki/Nucleonica | Nucleonica is a nuclear science web portal created by the European Commission's Joint Research Centre. which was later spun off to the company Nucleonica GmbH in March 2011.
History
The company Nucleonica GmbH was founded by Dr. Joseph Magill in 2011 as a spin-off from the European Commission's Joint Research Centre, Institute for Transuranium Elements. In addition to providing user friendly access to nuclear data, the main focus of Nucleonica is to provide professionals in the nuclear industry with a suite of validated scientific applications for everyday calculations.
The portal is also suitable for education and training in the nuclear field, both for technicians and for degree-level programmes in nuclear engineering technology.
Nucleonica GmbH also took responsibility for the management and development of the Karlsruhe Nuclide Chart print and online versions.
User access
Users can register for free access to Nucleonica. This free access gives the user access to most applications but is restricted to a limited number of nuclides. For full access to all nuclides and applications, the user can upgrade to Premium for which there is an annual user charge.
References
Notes
Sources
External links
Internet properties established in 2011
Nuclear physics
Science websites
Web portals | Nucleonica | [
"Physics"
] | 250 | [
"Nuclear physics"
] |
25,678,212 | https://en.wikipedia.org/wiki/Earth%20rainfall%20climatology | Earth rainfall climatology Is the study of rainfall, a sub-field of meteorology. Formally, a wider study includes water falling as ice crystals, i.e. hail, sleet, snow (parts of the hydrological cycle known as precipitation). The aim of rainfall climatology is to measure, understand and predict rain distribution across different regions of planet Earth, a factor of air pressure, humidity, topography, cloud type and raindrop size, via direct measurement and remote sensing data acquisition. Current technologies accurately predict rainfall 3–4 days in advance using numerical weather prediction. Geostationary orbiting satellites gather IR and visual wavelength data to measure realtime localised rainfall by estimating cloud albedo, water content, and the corresponding probability of rain.
Geographic distribution of rain is largely governed by climate type, topography and habitat humidity. In mountainous areas, heavy precipitation is possible where the upslope flow is maximized within windward sides of the terrain at elevation. On the leeward side of mountains, desert climates can exist due to the dry air caused by compressional heating. The movement of the monsoon trough, or Intertropical Convergence Zone, brings rainy seasons to savannah climes. The urban heat island effect leads to increased rainfall, both in amounts and intensity, downwind of cities. Warming may also cause changes in the precipitation pattern globally, including wetter conditions at high latitudes and in some wet tropical areas. Precipitation is a major component of the water cycle, and is responsible for depositing most of the fresh water on the planet. Approximately of water falls as precipitation each year; of it over the oceans. Given the Earth's surface area, that means the globally averaged annual precipitation is . Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes.
Most of Australia is semi-arid or desert, making it the world's driest continent (excluding Antarctica). Australia's rainfall is mainly regulated by the movement of the monsoon trough during the summer rainy season, with lesser amounts falling during the winter and spring in its southernmost sections.
Almost the whole of North Africa is semi-arid, arid or hyper-arid, containing the Sahara Desert, the largest hot desert in the world, while central Africa (known as Sub-Saharan Africa) sees an annual rainy season regulated by the movement of the Intertropical Convergence Zone or monsoon trough. The Sahel Belt, located at the southern edge of the Sahara Desert, has an intense and nearly permanent dry season and receives only minimal summer rainfall.
Across Asia, a large annual rainfall minimum, composed primarily of deserts, stretches from the Gobi Desert in Mongolia west-southwest through Pakistan and Iran into the Arabian Desert in Saudi Arabia. In Asia, rainfall is favored across its southern portion from India east and northeast across the Philippines and southern China into Japan due to the monsoon advecting moisture primarily from the Indian Ocean into the region. Similar, but weaker, monsoon circulations are present over North America and Australia.
In Europe, the wettest regions are in the Alps and downwind of bodies of water, particularly the Atlantic west coasts.
Within North America, the drier areas of the United States are the Desert Southwest, Great Basin, valleys of northeast Arizona, eastern Utah, central Wyoming, and the Columbia Basin. Other dry regions within the continent are far northern Canada and the Sonoran Desert of northwest Mexico. The Pacific Northwest United States, the Rockies of British Columbia, and the coastal ranges of Alaska are the wettest locations in North America. The equatorial region near the Intertropical Convergence Zone (ITCZ), or monsoon trough, is the wettest part of the world's continents. Annually, the rain belt within the tropics marches northward by August, then moves back southwards into the Southern Hemisphere by February and March.
Role in Köppen climate classification
The Köppen classification depends on average monthly values of temperature and precipitation. The most commonly used form of the Köppen classification has five primary types labeled A through E. Specifically, the primary types are A, tropical; B, dry; C, mild mid-latitude; D, cold mid-latitude; and E, polar. The five primary classifications can be further divided into secondary classifications such as rain forest, monsoon, tropical savanna, humid subtropical, humid continental, oceanic climate, Mediterranean climate, steppe, subarctic climate, tundra, polar ice cap, and desert.
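A minimal sketch (illustrative only, not an official implementation of the Köppen scheme) showing how the five primary groups listed above can be looked up from the leading letter of a Köppen code:

```python
# Illustrative sketch: the five primary Koeppen letters described above,
# with a lookup helper for a full code such as "Cfb" or "BWh".
KOPPEN_PRIMARY = {
    "A": "tropical",
    "B": "dry",
    "C": "mild mid-latitude",
    "D": "cold mid-latitude",
    "E": "polar",
}

def primary_type(code: str) -> str:
    """Return the primary climate group for a Koeppen code like 'Af' or 'BWh'."""
    letter = code[:1].upper()
    return KOPPEN_PRIMARY.get(letter, "unknown")

print(primary_type("Af"))   # tropical
print(primary_type("Csa"))  # mild mid-latitude
```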
Rain forests are characterized by high rainfall, with definitions setting minimum normal annual rainfall between and . A tropical savanna is a grassland biome located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes, with rainfall between and a year. Savannas are widespread in Africa, and are also found in India, the northern parts of South America, Malaysia, and Australia. In the humid subtropical climate zone, winter rainfall (and sometimes snowfall) is associated with large storms that the westerlies steer from west to east. Most summer rainfall occurs during thunderstorms and from occasional tropical cyclones. Humid subtropical climates lie on the east side of continents, roughly between latitudes 20° and 40° away from the equator.
An oceanic (or maritime) climate is typically found along the west coasts at the middle latitudes of all the world's continents, bordering cool oceans, as well as southeastern Australia, and is accompanied by plentiful precipitation year-round. The Mediterranean climate regime resembles the climate of the lands in the Mediterranean Basin, parts of western North America, parts of Western and South Australia, in southwestern South Africa and in parts of central Chile. The climate is characterized by hot, dry summers and cool, wet winters. A steppe is a dry grassland. Subarctic climates are cold with continuous permafrost and little precipitation.
The tropical zones have the highest number of storm events, followed by the temperate climates. In a recent study, researchers from 63 countries combined 30-minute rainfall data in order to estimate global rainfall erosivity (an index combining the amount, frequency and intensity of rainfall). The arid and cold climate zones have a very low number of erosive events.
Africa
The northern half of the continent is primarily desert, containing the vast Sahara Desert, while its southern areas contain both savanna and plains, and its central portion contains very dense jungle (rainforest) regions. The equatorial region near the Intertropical Convergence Zone is the wettest portion of the continent. Annually, the rain belt across the continent marches northward into Sub-Saharan Africa by August, then moves back southward into south-central Africa by March. Mesoscale convective systems which form in tandem with tropical waves that move along the Intertropical Convergence Zone during the summer months become the seedlings for tropical cyclones which form in the northern Atlantic and northeast Pacific oceans. Areas with a savannah climate in Sub-Saharan Africa, such as Ghana, Burkina Faso, Darfur, Eritrea, Ethiopia, and Botswana have a distinct rainy season.
Within of Madagascar, trade winds bring moisture up the eastern slopes of the island, which is deposited as rainfall, and brings drier downsloped winds to areas south and west leaving the western sections of the island in a rain shadow. This leads to significantly more rainfall over northeast sections of the island than the southwestern portions of Madagascar. Southern Africa receives most of its rainfall from summer convective storms and with extratropical cyclones moving through the Westerlies. Once a decade, tropical cyclones lead to excessive rainfall across the region.
Asia
A large annual rainfall minimum, composed primarily of deserts, stretches from the Gobi Desert in Mongolia west-southwest through Pakistan and Iran into the Arabian Desert in Saudi Arabia. Rainfall around the continent is favored across its southern portion from India east and northeast across the Philippines and southern China into Japan due to the monsoon advecting moisture primarily from the Indian Ocean into the region. The monsoon trough can reach as far north as the 40th parallel in East Asia during August before moving southward thereafter. Its poleward progression is accelerated by the onset of the summer monsoon which is characterized by the development of lower air pressure (a thermal low) over the warmest part of Asia. Cherrapunji, situated on the southern slopes of the Eastern Himalaya in Shillong, India is one of the wettest places on Earth, with an average annual rainfall of 11,430 mm (450 in). The highest recorded rainfall in a single year was 22,987 mm (904.9 in) in 1861. The 38-year average at Mawsynram, Meghalaya, India is 11,873 mm (467.4 in). Lower rainfall maxima are found on the Mediterranean and Black Sea coasts of Turkey and the mountains of Tajikistan.
Australia
Most of Australia is semi-arid or desert, making it the world's driest continent after Antarctica. The movement of the monsoon trough is linked to the peak of the rainy season within the continent. Northern portions of the continent see the most rainfall, which is concentrated in the summer months. During winter and spring, southern Australia sees its maximum rainfall. The interior desert sees its greatest rainfall during spring and summer. The wettest spot, Mount Bellenden Ker in the north-east of the country, records an average of per year, with over of rain recorded in the year 2000. While Melbourne is thought of as being significantly wetter than Sydney, Sydney receives an average of 1212 mm (47.8 in) of rain per year compared to Melbourne's 650 mm (25.5 in), although Sydney is significantly sunnier and receives fewer days of rain.
New Zealand
New Zealand's Cropp River has the fourth-highest rainfall in the world, with an average of 11,499 mm per year. The river is only about 9 km long but receives exceptionally heavy precipitation.
Europe
On an annual basis, rainfall across the continent is favored within the Alps, and from Slovenia southward to the western coast of Greece. Other maxima exist in western Georgia, northwest Iberia, western Great Britain, and western Norway. The maxima along the eastern coasts of water bodies is due to the westerly wind flow which dominates across the continent. A bulk of the precipitation across the Alps falls between March and November. The wet season in lands bordering the Mediterranean lasts from October through March, with November and December typically the wettest months. Summer rainfall across the continent evaporates completely into the warm atmosphere, leaving winter precipitation to be the source of groundwater for Europe. Mesoscale rain systems during the rainy season track south and eastward over the Mediterranean, with western portions of the sea experiencing 20 percent more rainfall than eastern sections of the sea.
The European Monsoon (more commonly known as the Return of the Westerlies) is the result of a resurgence of westerly winds from the Atlantic, where they become loaded with wind and rain. These westerly winds are a common phenomenon during the European winter, but they ease as spring approaches in late March and through April and May. The winds pick up again in June, which is why this phenomenon is also referred to as "the return of the westerlies". The rain usually arrives in two waves, at the beginning of June and again in mid to late June. The European monsoon is not a monsoon in the traditional sense in that it doesn't meet all the requirements to be classified as such. Instead the Return of the Westerlies is more regarded as a conveyor belt that delivers a series of low pressure centres to Western Europe where they create unseasonable weather. These storms generally feature significantly lower than average temperatures, fierce rain or hail, thunder and strong winds. The Return of the Westerlies affects Europe's Northern Atlantic coastline, more precisely Ireland, Great Britain, the Benelux countries, western Germany, northern France and parts of Scandinavia.
There are cycles seen within the rainfall data from Northern Europe between Great Britain and Germany, which are seen at a 16-year interval. Southern Europe experiences a 22-year cycle in rainfall variation. Other smaller term cycles are seen at 10-12 year and 6-7 year periods within the rainfall record. Places with significant impact by acid rain across the continent include most of eastern Europe from Poland northward into Scandinavia.
North America
Canada
Precipitation across Canada is highest in the mountain ranges in the western portions due to onshore flow bringing Pacific moisture into the mountains, which is subsequently forced to lift up their slopes and deposit significant precipitation, primarily between August and May. Mesoscale convective systems are common mid-summer near the central border with the United States from the Prairie provinces eastward towards the Great Lakes. Southeastern sections of the country are also wet, due to the development of extratropical cyclones along the east coast of the continent which move northward into Atlantic Canada. During the summer and fall months, tropical cyclones from the Atlantic basin are also possible across Atlantic Canada. Amounts decrease as one works farther inland from the Pacific and Atlantic coasts, and from south to north towards the Arctic.
Mexico
Rainfall varies widely both by location and season. Arid or semiarid conditions are encountered in the Baja California Peninsula, the northwestern state of Sonora, the northern altiplano, and also significant portions of the southern Altiplano. Rainfall in these regions averages between per year, with lower amounts across Baja California Norte. Average rainfall totals are between in most of the major populated areas of the southern altiplano, including Mexico City and Guadalajara. Low-lying areas along the Gulf of Mexico receive in excess of of rainfall in an average year, with the wettest region being the southeastern state of Tabasco, which typically receives approximately of rainfall on an annual basis. Parts of the northern altiplano, highlands and high peaks in the Sierra Madre Occidental and the Sierra Madre Oriental occasionally receive significant snowfalls.
Mexico has pronounced wet and dry seasons. Most of the country experiences a rainy season from June to mid-October and significantly less rain during the remainder of the year. February and July generally are the driest and wettest months, respectively. Mexico City, for example, receives an average of only of rain during February but more than in July. Coastal areas, especially those along the Gulf of Mexico, experience the largest amounts of rain in September. Tabasco typically records more than of rain during that month. A small coastal area of northwestern coastal Mexico around Tijuana has a Mediterranean climate with considerable coastal fog and a rainy season that occurs in winter.
Tropical cyclones track near and along the western Mexican coastline primarily between the months of July and September. These storms enhance the monsoon circulation over northwest Mexico and the southwest United States. On an average basis, eastern Pacific tropical cyclones contribute about one-third of the annual rainfall along the Mexican Riviera, and up to one-half of the rainfall seen annually across Baja California Sur. Mexico is twice as likely (18% of the basin total) to be impacted by a Pacific tropical cyclone on its west coast than an Atlantic tropical cyclone on its east coast (9% of the basin total). The three most struck states in Mexico in the 50 years at the end of the 20th century were Baja California Sur, Sinaloa, and Quintana Roo.
United States
Late summer and fall extratropical cyclones bring a majority of the precipitation which falls across western, southern, and southeast Alaska annually. During the fall, winter, and spring, Pacific storm systems bring most of Hawaii and the western United States much of their precipitation. Nor'easters moving up the East coast bring cold season precipitation to the Mid-Atlantic and New England states. During the summer, the Southwest monsoon combined with Gulf of California and Gulf of Mexico moisture moving around the subtropical ridge in the Atlantic Ocean bring the promise of afternoon and evening thunderstorms to the southern tier of the country as well as the Great Plains. Tropical cyclones enhance precipitation across southern sections of the country, as well as Puerto Rico, the United States Virgin Islands, the Northern Mariana Islands, Guam, and American Samoa. Over the top of the ridge, the jet stream brings a summer precipitation maximum to the Great Lakes. Large thunderstorm areas known as mesoscale convective complexes move through the Plains, Midwest, and Great Lakes during the warm season, contributing up to 10% of the annual precipitation to the region.
The El Niño–Southern Oscillation affects the precipitation distribution, by altering rainfall patterns across the West, Midwest, the Southeast, and throughout the tropics. There is also evidence that global warming is leading to increased precipitation to the eastern portions of North America, while droughts are becoming more frequent in the tropics and subtropics. The eastern half of the contiguous United States east of the 98th meridian, the mountains of the Pacific Northwest, and the Sierra Nevada range are the wetter portions of the nation, with average rainfall exceeding per year. The drier areas are the Desert Southwest, Great Basin, valleys of northeast Arizona, eastern Utah, central Wyoming, eastern Oregon and Washington and the northeast of the Olympic Peninsula. The Big Bog on the island of Maui receives, on average, every year, making it the wettest location in the US, and all of Oceania, apart from Cropp River in New Zealand.|
South America
The annual average rainfall maxima across the continent lie across the northwest from northwest Brazil into northern Peru, Colombia, and Ecuador, then along the Atlantic coast of the Guyanas and far northern Brazil, as well as within the southern half of Chile. Lloró, a town situated in Chocó, Colombia, is probably the place with the largest measured rainfall in the world, averaging 13,300 mm per year (523.6 in). In fact, the whole Department of Chocó is extraordinarily humid. Tutunendo, a small town situated in the same department, is one of the wettest places on earth, averaging 11,394 mm per year (448 in); in 1974 the town received 26,303 mm (86 ft 3½ in), the largest annual rainfall measured in Colombia. Unlike Cherrapunji, which receives most of its rainfall between April and September, Tutunendo receives rain almost uniformly distributed throughout the year. The months of January and February have somewhat less frequent storms. On average, Tutunendo has 280 days with rainfall per year. Over ⅔ of the rain (68%) falls during the night. The average relative humidity is 90% and the average temperature is 26.4 °C. Quibdó, the capital of Chocó, receives the most rain in the world among cities with over 100,000 inhabitants: per year. Storms in Chocó can drop 500 mm (19.7 in) of rainfall in a day. This amount is more than falls in many cities in a year's time. The Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desertlike climate just downwind across western Argentina.
Urban heat island impacts
Aside from the effect on temperature, urban heat islands (UHIs) can produce secondary effects on local meteorology, including the altering of local wind patterns, the development of clouds and fog, the humidity, and the rates of precipitation. The extra heat provided by the UHI leads to greater upward motion, which can induce additional shower and thunderstorm activity. Rainfall rates downwind of cities are increased between 48% and 116%. Partly as a result of this warming, monthly rainfall is about 28% greater between to downwind of cities, compared with upwind. Some cities show a total precipitation increase of 51%. Using satellite images, researchers discovered that city climates have a noticeable influence on plant growing seasons up to 10 kilometers (6 mi) away from a city's edges. Growing seasons in 70 cities in eastern North America were about 15 days longer in urban areas compared to rural areas outside of a city's influence.
See also
China tropical cyclone rainfall climatology
Mexico tropical cyclone rainfall climatology
Typhoons in the Philippines
United States rainfall climatology
References
Climate and weather statistics
Rain | Earth rainfall climatology | [
"Physics"
] | 4,064 | [
"Weather",
"Physical phenomena",
"Climate and weather statistics"
] |
2,801,004 | https://en.wikipedia.org/wiki/Handle%20decompositions%20of%203-manifolds | In mathematics, a handle decomposition of a 3-manifold allows simplification of the original 3-manifold into pieces which are easier to study.
Heegaard splittings
An important method used to decompose a 3-manifold into handlebodies is the Heegaard splitting, which gives a decomposition into two handlebodies of equal genus.
Examples
As an example: lens spaces are orientable 3-manifolds and allow decomposition into two solid tori, which are genus-one handlebodies. The genus-one non-orientable space is the union of two solid Klein bottles and corresponds to the twisted product of the 2-sphere and the 1-sphere (the twisted S² bundle over S¹).
Orientability
Each orientable 3-manifold is the union of exactly two orientable handlebodies; meanwhile, each non-orientable one needs three orientable handlebodies.
Heegaard genus
The minimal genus of the glueing boundary determines what is known as the Heegaard genus. For non-orientable spaces an interesting invariant is the tri-genus.
References
J.C. Gómez Larrañaga, W. Heil, V.M. Núñez. Stiefel-Whitney surfaces and decompositions of 3-manifolds into handlebodies, Topology Appl. 60 (1994), 267-280.
J.C. Gómez Larrañaga, W. Heil, V.M. Núñez. Stiefel-Whitney surfaces and the trigenus of non-orientable 3-manifolds, Manuscripta Math. 100 (1999), 405-422.
3-manifolds
Topology | Handle decompositions of 3-manifolds | [
"Physics",
"Mathematics"
] | 321 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
2,801,321 | https://en.wikipedia.org/wiki/Dynamic%20covalent%20chemistry | Dynamic covalent chemistry (DCvC) is a synthetic strategy employed by chemists to make complex molecular and supramolecular assemblies from discrete molecular building blocks. DCvC has allowed access to complex assemblies such as covalent organic frameworks, molecular knots, polymers, and novel macrocycles. Not to be confused with dynamic combinatorial chemistry, DCvC concerns only covalent bonding interactions. As such, it only encompasses a subset of supramolecular chemistries.
The underlying idea is that rapid equilibration allows the coexistence of a variety of different species, among which molecules can be selected with desired chemical, pharmaceutical and biological properties. For instance, the addition of a proper template will shift the equilibrium toward the component that forms the complex of higher stability (thermodynamic template effect). After the new equilibrium is established, the reaction conditions are modified to stop equilibration. The optimal binder for the template is then extracted from the reaction mixture by the usual laboratory procedures. The self-assembling and error-correcting properties that make DCvC useful in supramolecular chemistry rely on this dynamic, reversible bond formation.
Dynamic systems
Dynamic systems are collections of discrete molecular components that can reversibly assemble and disassemble. Systems may include multiple interacting species leading to competing reactions.
Thermodynamic control
In dynamic reaction mixtures, multiple products exist in equilibrium. Reversible assembly of molecular components generates products and semi-stable intermediates. Reactions can proceed along kinetic or thermodynamic pathways. Initial concentrations of kinetic intermediates are greater than those of thermodynamic products because the lower barrier of activation (ΔG‡), compared to the thermodynamic pathway, gives a faster rate of formation. A kinetic pathway is represented in figure 1 as a purple energy diagram. With time, the intermediates equilibrate towards the global minimum, corresponding to the lowest overall Gibbs free energy (ΔG°), shown in red on the reaction diagram in figure 1. The driving force for products to re-equilibrate towards the most stable products is referred to as thermodynamic control. The ratio of the products to one another at any equilibrium state is determined by the relative magnitudes of their free energies. This relationship between population and relative energies is called the Maxwell–Boltzmann distribution.
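To make the thermodynamic-control statement concrete, here is a small Python sketch (the free-energy values and product names are invented for illustration) that converts relative product stabilities into Boltzmann-weighted equilibrium populations:

```python
import math

# Illustrative sketch (values invented): equilibrium populations of competing
# products under thermodynamic control, weighted by exp(-dG/RT).
R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # temperature, K
# Relative standard Gibbs free energies of three hypothetical products, kJ/mol
dG = {"product_A": 0.0, "product_B": 5.0, "product_C": 12.0}

weights = {name: math.exp(-g * 1000 / (R * T)) for name, g in dG.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name}: {w / total:.1%} of the equilibrium mixture")
# The lowest-energy species (product_A) dominates, as described above.
```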
Thermodynamic template effect
The concept of a thermodynamic template is demonstrated in scheme 1. A thermodynamic template is a reagent that can stabilize one product over others by lowering its Gibbs free energy (ΔG°) relative to the other products. Cyclophane C2 can be prepared by the irreversible, highly dilute reaction of a diol with chlorobromomethane in the presence of sodium hydride. The dimer, however, is part of a series of equilibria between polyacetal macrocycles of different sizes brought about by acid-catalyzed (triflic acid) transacetalization. Regardless of the starting material, C2, C4 or a high molar mass product, the equilibrium will eventually produce a product distribution across many macrocycles and oligomers. In this system it is possible to amplify the presence of C2 in the mixture when the transacetalization catalyst is silver triflate, because the silver ion fits ideally and irreversibly in the C2 cavity.
Synthetic methods
Reactions used in DCvC must generate thermodynamically stable products to overcome the entropic cost of self-assembly. The reactions must form covalent linkages between building blocks. Finally, all possible intermediates must be reversible, and the reaction ideally proceeds under conditions that are tolerant of functional groups elsewhere in the molecule.
Reactions that can be used in DCvC are diverse and can be placed into two general categories. Exchange reactions involve the substitution of one reaction partner in an intermolecular reaction for another with an identical type of bonding; ester exchange and disulfide exchange reactions (schemes 5 and 8) are examples. The second type, formation reactions, rely on the formation of new covalent bonds; examples include Diels–Alder and aldol reactions. In some cases, a reaction can belong to both categories. For example, Schiff base formation can be categorized as the formation of a new covalent bond between a carbonyl and a primary amine. However, in the presence of two different amines the reaction becomes an exchange reaction, where the two imine derivatives compete in equilibrium.
Exchange and formation reactions can be further broken down into three categories:
Bonding between carbon–carbon
Bonding between carbon–heteroatom
Bonding between heteroatom–heteroatom
Carbon–carbon
Bond formation between carbon atoms forms very thermodynamically stable products. Therefore, they often require the use of a catalyst to improve kinetics and ensure reversibility.
Aldol reactions
Aldol reactions are commonly used in organic chemistry to form carbon-to-carbon bonds. The aldehyde-alcohol motif common to the reaction product is ubiquitous to synthetic chemistry and natural products. The reaction utilizes two carbonyl compounds to generate a β-hydroxy carbonyl. Catalysis is always necessary because the barrier of activation between kinetic products and starting materials makes the dynamic reversible process too slow. Catalysts that have been successfully employed include enzymatic aldolase and Al2O3 based systems.
Diels–Alder
[4+2] cycloadditions of a diene and an alkene have been used as DCvC reactions. These reactions are often reversible at high temperatures. In the case of furan–maleimide adducts, the retro-cycloaddition is accessible at temperatures as low as 40 °C.
Metathesis
Olefin and alkyne metathesis refers to a carbon–carbon bond forming reaction. In the case of olefin metathesis, the bond forms between two sp2-hybridized carbon centers. In alkyne metathesis it forms between two sp-hybridized carbon centers. Ring opening metathesis polymerization (ROMP) can be used in polymerization and macrocycle synthesis.
Carbon–heteroatom
A common dynamic covalent building motif is bond formation between a carbon center and a heteroatom such as nitrogen or oxygen. Because the bond formed between carbon and a heteroatom is less stable than a carbon-carbon bond, they offer more reversibility and reach thermodynamic equilibrium faster than carbon bond forming dynamic covalent reactions.
Ester exchange
Ester exchange takes place between an ester carbonyl and an alcohol. Reverse esterification can take place via hydrolysis. This method has been used extensively in polymer synthesis.
Imine and aminal formation
Bond forming reactions between carbon and nitrogen are the most widely used in dynamic covalent chemistry. They have been used more broadly in materials chemistry for molecular switches, covalent organic frameworks, and in self-sorting systems.
Imine formation takes place between an aldehyde or ketone and a primary amine. Similarly, aminal formation takes place between an aldehyde or ketone and a vicinal secondary amine. Both reactions are commonly used in DCvC. While both reactions can initially be categorized as formation reactions, in the presence of one or more of either reagent, the dynamic equilibrium between carbonyl and amine becomes an exchange reaction.
Heteroatom–heteroatom
Dynamic heteroatom bond formation presents useful reactions in the dynamic covalent reaction toolbox. Boronic acid condensation (BAC) and disulfide exchange constitute the two main reactions in this category.
Disulfide exchange
Disulfides can undergo dynamic exchange reactions with free thiols. The reaction is well documented within the realm of DCvC, and is one of the first reactions demonstrated to have dynamic properties. The application of disulfide chemistry has the added advantage of being a biological motif. Cysteine residues can form disulfide bonds in natural systems.
Boronic acid
Boronic acid self-condensation or condensation with diols is a well-documented dynamic covalent reaction. The boronic acid condensation has the characteristic of forming two dynamic bonds with various substrates. This is advantageous when designing systems where high rigidity is desired, such as 3-D cages and COFs.
Applications in research
Although dynamic covalent chemistry has yet to find widespread practical application, it has allowed access to a wide variety of supramolecular structures. Using the above reactions to link molecular fragments, higher order materials have been made. These materials include macrocycles, COFs, and molecular knots. The products have been explored for gas storage, catalysis, and biomedical sensing, among other uses.
Dynamic signaling cascades
Dynamic covalent reactions have recently been used in Systems chemistry to initiate signaling cascades by reversibly releasing protons. The dynamic nature of the reactions provides a suitable "on-off" switch-like nature to the cascade systems.
Macrocycles
Many examples exist that demonstrate the utility of DCvC in macrocycle synthesis. This type of chemistry is effective for large macrocycle synthesis because the thermodynamic template effect is well suited to stabilize ring structures. Furthermore, the error-correcting ability inherent to DCvC allows large structures to be made without flaws.
Covalent organic frameworks
All current methods of covalent organic framework (COF) synthesis use DCvC. Boronic acid dehydration, as demonstrated by Yaghi et al., is the most common type of reaction used. COFs have been used in applications such as gas storage and catalysis. Possible morphologies include infinite covalent 3D frameworks, 2D polymers, or discrete molecular cages.
Molecular knots
DCvC has been used to make molecules with complex topological properties. In the case of Borromean rings, DCvC is used to synthesize a three ring interlocking system. Thermodynamic templates are used to stabilize interlocking macrocycle growth.
See also
Organic chemistry
Supramolecular chemistry
Template effect
Boronic acids in supramolecular chemistry: Saccharide recognition
Dynamic combinatorial chemistry
Thermodynamic control
References
Organic chemistry
Supramolecular chemistry | Dynamic covalent chemistry | [
"Chemistry",
"Materials_science"
] | 2,134 | [
"Nanotechnology",
"nan",
"Supramolecular chemistry"
] |
2,801,560 | https://en.wikipedia.org/wiki/Ocean%20acidification | Ocean acidification is the ongoing decrease in the pH of the Earth's ocean. Between 1950 and 2020, the average pH of the ocean surface fell from approximately 8.15 to 8.05. Carbon dioxide emissions from human activities are the primary cause of ocean acidification, with atmospheric carbon dioxide () levels exceeding 422 ppm (). from the atmosphere is absorbed by the oceans. This chemical reaction produces carbonic acid () which dissociates into a bicarbonate ion () and a hydrogen ion (). The presence of free hydrogen ions () lowers the pH of the ocean, increasing acidity (this does not mean that seawater is acidic yet; it is still alkaline, with a pH higher than 8). Marine calcifying organisms, such as mollusks and corals, are especially vulnerable because they rely on calcium carbonate to build shells and skeletons.
A change in pH by 0.1 represents a 26% increase in hydrogen ion concentration in the world's oceans (the pH scale is logarithmic, so a change of one in pH units is equivalent to a tenfold change in hydrogen ion concentration). Sea-surface pH and carbonate saturation states vary depending on ocean depth and location. Colder and higher latitude waters are capable of absorbing more . This can cause acidity to rise, lowering the pH and carbonate saturation levels in these areas. There are several other factors that influence the atmosphere-ocean exchange, and thus local ocean acidification. These include ocean currents and upwelling zones, proximity to large continental rivers, sea ice coverage, and atmospheric exchange with nitrogen and sulfur from fossil fuel burning and agriculture.
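For illustration (this calculation is not part of the article text), the quoted figures follow directly from the definition of pH, since the hydrogen ion concentration is [H+] = 10^(−pH):

$$\frac{[\mathrm{H^+}]_{\mathrm{pH}=8.05}}{[\mathrm{H^+}]_{\mathrm{pH}=8.15}} = \frac{10^{-8.05}}{10^{-8.15}} = 10^{0.1} \approx 1.26$$

That is, a drop of 0.1 pH units corresponds to roughly a 26% increase in hydrogen ion concentration, and a drop of a full pH unit to a tenfold increase.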
A lower ocean pH has a range of potentially harmful effects for marine organisms. Scientists have observed for example reduced calcification, lowered immune responses, and reduced energy for basic functions such as reproduction. Ocean acidification can impact marine ecosystems that provide food and livelihoods for many people. About one billion people are wholly or partially dependent on the fishing, tourism, and coastal management services provided by coral reefs. Ongoing acidification of the oceans may therefore threaten food chains linked with the oceans.
The only solution that would address the root cause of ocean acidification is to reduce carbon dioxide emissions. This is one of the main objectives of climate change mitigation measures. The removal of carbon dioxide from the atmosphere would also help to reverse ocean acidification. In addition, there are some specific ocean-based mitigation methods, for example ocean alkalinity enhancement and enhanced weathering. These strategies are under investigation, but generally have a low technology readiness level and many risks.
Ocean acidification has happened before in Earth's geologic history. The resulting ecological collapse in the oceans had long-lasting effects on the global carbon cycle and climate.
Cause
Present-day (2021) atmospheric carbon dioxide (CO2) levels of around 415 ppm are around 50% higher than preindustrial concentrations. The current elevated levels and rapid growth rates are unprecedented in the past 55 million years of the geological record. The sources of this excess CO2 are clearly established as human driven: they include anthropogenic fossil fuel, industrial, and land-use/land-change emissions. A significant contributor is the burning of fossil fuels for energy, which releases CO2 into the atmosphere as a byproduct of combustion. The ocean acts as a carbon sink for anthropogenic CO2 and takes up roughly a quarter of total anthropogenic CO2 emissions. However, the additional CO2 in the ocean results in a wholesale shift in seawater acid-base chemistry toward more acidic, lower pH conditions and lower saturation states for carbonate minerals used in many marine organism shells and skeletons.
Accumulated since 1850, the ocean sink holds up to 175 ± 35 gigatons of carbon, with more than two-thirds of this amount (120 GtC) being taken up by the global ocean since 1960. Over the historical period, the ocean sink increased in pace with the exponential anthropogenic emissions increase. From 1850 until 2022, the ocean absorbed 26% of total anthropogenic emissions. Emissions during the period 1850–2021 amounted to 670 ± 65 gigatons of carbon and were partitioned among the atmosphere (41%), ocean (26%), and land (31%).
The carbon cycle describes the fluxes of carbon dioxide (CO2) between the oceans, terrestrial biosphere, lithosphere, and atmosphere. It involves both organic compounds such as cellulose and inorganic carbon compounds such as carbon dioxide, carbonate ions, and bicarbonate ions, the latter being referred to collectively as dissolved inorganic carbon (DIC). These inorganic compounds are particularly significant in ocean acidification, as they include many forms of dissolved CO2 present in the Earth's oceans.
When CO2 dissolves, it reacts with water to form a balance of ionic and non-ionic chemical species: dissolved free carbon dioxide (CO2(aq)), carbonic acid (H2CO3), bicarbonate (HCO3−) and carbonate (CO32−). The ratio of these species depends on factors such as seawater temperature, pressure and salinity (as shown in a Bjerrum plot). These different forms of dissolved inorganic carbon are transferred from an ocean's surface to its interior by the ocean's solubility pump. The resistance of an area of ocean to absorbing atmospheric CO2 is known as the Revelle factor.
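A minimal sketch of the speciation behind a Bjerrum plot is given below. It computes the fractions of dissolved CO2, bicarbonate and carbonate as a function of pH; the dissociation constants used (pK1 ≈ 5.9, pK2 ≈ 9.1) are rough, assumed surface-seawater values chosen only for illustration and are not taken from the article.

# Fractions of dissolved inorganic carbon species versus pH (Bjerrum-plot style).
# The constants below are assumed, approximate seawater values for illustration only.
PK1, PK2 = 5.9, 9.1
K1, K2 = 10**-PK1, 10**-PK2

def carbonate_fractions(ph):
    """Return the fractions (CO2, HCO3-, CO32-) of dissolved inorganic carbon."""
    h = 10**-ph                      # hydrogen ion concentration
    denom = h*h + K1*h + K1*K2
    return h*h/denom, K1*h/denom, K1*K2/denom

for ph in (8.15, 8.05, 7.7):
    co2, hco3, co3 = carbonate_fractions(ph)
    print(f"pH {ph}: CO2 {co2:.3f}  HCO3- {hco3:.3f}  CO32- {co3:.3f}")

Running this shows the expected pattern: as pH falls, the carbonate fraction shrinks while the dissolved CO2 and bicarbonate fractions grow.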
Main effects
The ocean's chemistry is changing due to the uptake of anthropogenic carbon dioxide (CO2). Ocean pH, carbonate ion concentrations ([CO32−]), and calcium carbonate mineral saturation states (Ω) have been declining as a result of the uptake of approximately 30% of the anthropogenic carbon dioxide emissions over the past 270 years (since around 1750). This process, commonly referred to as "ocean acidification", is making it harder for marine calcifiers to build a shell or skeletal structure, endangering coral reefs and the broader marine ecosystems.
Ocean acidification has been called the "evil twin of global warming" and "the other CO2 problem". Increased ocean temperatures and oxygen loss act concurrently with ocean acidification and constitute the "deadly trio" of climate change pressures on the marine environment. The impacts of this will be most severe for coral reefs and other shelled marine organisms, as well as those populations that depend on the ecosystem services they provide.
Reduction in pH value
Dissolving CO2 in seawater increases the hydrogen ion (H+) concentration in the ocean, and thus decreases ocean pH, as follows:

CO2 (aq) + H2O ⇌ H2CO3 ⇌ HCO3− + H+ ⇌ CO32− + 2 H+
In shallow coastal and shelf regions, a number of factors interplay to affect air-ocean exchange and resulting pH change. These include biological processes, such as photosynthesis and respiration, as well as water upwelling. Also, ecosystem metabolism in freshwater sources reaching coastal waters can lead to large, but local, pH changes.
Freshwater bodies also appear to be acidifying, although this is a more complex and less obvious phenomenon.
The absorption of CO2 from the atmosphere does not affect the ocean's alkalinity. This is important to know in this context as alkalinity is the capacity of water to resist acidification. Ocean alkalinity enhancement has been proposed as one option to add alkalinity to the ocean and therefore buffer against pH changes.
Decreased calcification in marine organisms
Changes in ocean chemistry can have extensive direct and indirect effects on organisms and their habitats. One of the most important repercussions of increasing ocean acidity relates to the production of shells out of calcium carbonate (CaCO3). This process is called calcification and is important to the biology and survival of a wide range of marine organisms. Calcification involves the precipitation of dissolved calcium and carbonate ions into solid CaCO3 structures, which are essential for many marine organisms, such as coccolithophores, foraminifera, crustaceans, and mollusks. After they are formed, these structures are vulnerable to dissolution unless the surrounding seawater contains saturating concentrations of carbonate ions (CO32−).
Very little of the extra carbon dioxide that is added into the ocean remains as dissolved carbon dioxide. The majority dissociates into additional bicarbonate and free hydrogen ions. The increase in hydrogen ions is larger than the increase in bicarbonate, creating an imbalance in the reaction:

HCO3− ⇌ CO32− + H+
To maintain chemical equilibrium, some of the carbonate ions already in the ocean combine with some of the hydrogen ions to make further bicarbonate (CO32− + H+ ⇌ HCO3−). Thus the ocean's concentration of carbonate ions is reduced, removing an essential building block for marine organisms to build shells, or calcify:

Ca2+ + CO32− ⇌ CaCO3
The increase in concentrations of dissolved carbon dioxide and bicarbonate, and reduction in carbonate, are shown in the Bjerrum plot.
Disruption of the food chain is also a possible effect as many marine organisms rely on calcium carbonate-based organisms at the base of the food chain for food and habitat. This can potentially have detrimental effects throughout the food web and potentially lead to a decline in availability of fish stocks which would have an impact on human livelihoods.
Decrease in saturation state
The saturation state (known as Ω) of seawater for a mineral is a measure of the thermodynamic potential for the mineral to form or to dissolve, and for calcium carbonate is described by the following equation:

Ω = [Ca2+][CO32−] / Ksp
Here Ω is the product of the concentrations (or activities) of the reacting ions that form the mineral (Ca2+ and CO32−), divided by the apparent solubility product at equilibrium (Ksp), that is, when the rates of precipitation and dissolution are equal. In seawater, a dissolution boundary is formed as a result of temperature, pressure, and depth, and is known as the saturation horizon. Above this saturation horizon, Ω has a value greater than 1, and CaCO3 does not readily dissolve. Most calcifying organisms live in such waters. Below this depth, Ω has a value less than 1, and CaCO3 will dissolve. The carbonate compensation depth is the ocean depth at which carbonate dissolution balances the supply of carbonate to the sea floor; sediment below this depth will therefore be devoid of calcium carbonate. Increasing CO2 levels, and the resulting lower pH of seawater, decrease the concentration of CO32− and the saturation state of CaCO3, therefore increasing CaCO3 dissolution.
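A minimal numerical sketch of this definition follows. All input values are assumed, order-of-magnitude figures chosen for illustration and are not taken from the article: an approximate surface calcium concentration, an illustrative carbonate ion concentration, and a rough stoichiometric solubility product for aragonite.

# Saturation state Omega = [Ca2+][CO32-]/Ksp; all values below are illustrative assumptions.
def saturation_state(ca, co3, ksp):
    """Return Omega for a carbonate mineral given ion concentrations and Ksp."""
    return (ca * co3) / ksp

ca = 0.0103      # mol/kg, approximate surface-seawater calcium (assumed)
co3 = 2.0e-4     # mol/kg, illustrative surface carbonate ion concentration (assumed)
ksp = 6.7e-7     # mol^2/kg^2, rough stoichiometric Ksp for aragonite (assumed)

print(saturation_state(ca, co3, ksp))        # roughly 3: supersaturated, Omega > 1
print(saturation_state(ca, 0.5 * co3, ksp))  # halving the carbonate ion halves Omega

Because calcium is nearly constant in seawater, Ω tracks the carbonate ion concentration almost directly, which is why falling CO32− levels translate into falling saturation states.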
Calcium carbonate most commonly occurs in two polymorphs (crystalline forms): aragonite and calcite. Aragonite is much more soluble than calcite, so the aragonite saturation horizon, and aragonite compensation depth, is always nearer to the surface than the calcite saturation horizon. This also means that those organisms that produce aragonite may be more vulnerable to changes in ocean acidity than those that produce calcite. Ocean acidification and the resulting decrease in carbonate saturation states raise the saturation horizons of both forms closer to the surface. This decrease in saturation state is one of the main factors leading to decreased calcification in marine organisms, because the inorganic precipitation of CaCO3 is directly proportional to its saturation state and calcifying organisms exhibit stress in waters with lower saturation states.
Natural variability and climate feedbacks
Large quantities of water undersaturated in aragonite are already upwelling close to the Pacific continental shelf area of North America, from Vancouver to Northern California. These continental shelves play an important role in marine ecosystems, since most marine organisms live or are spawned there. Other shelf areas may be experiencing similar effects.
At depths of 1000s of meters in the ocean, calcium carbonate shells begin to dissolve as increasing pressure and decreasing temperature shift the chemical equilibria controlling calcium carbonate precipitation. The depth at which this occurs is known as the carbonate compensation depth. Ocean acidification will increase such dissolution and shallow the carbonate compensation depth on timescales of tens to hundreds of years. Zones of downwelling are being affected first.
In the North Pacific and North Atlantic, saturation states are also decreasing (the depth of saturation is getting more shallow). Ocean acidification is progressing in the open ocean as the CO2 travels to deeper depth as a result of ocean mixing. In the open ocean, this causes carbonate compensation depths to become more shallow, meaning that dissolution of calcium carbonate will occur below those depths. In the North Pacific these carbonate saturations depths are shallowing at a rate of 1–2 m per year.
It is expected that ocean acidification in the future will lead to a significant decrease in the burial of carbonate sediments for several centuries, and even the dissolution of existing carbonate sediments.
Measured and estimated values
Present day and recent history
Between 1950 and 2020, the average pH value of the ocean surface is estimated to have decreased from approximately 8.15 to 8.05. This represents an increase of around 26% in hydrogen ion concentration in the world's oceans (the pH scale is logarithmic, so a change of one in pH unit is equivalent to a tenfold change in hydrogen ion concentration). For example, in the 15-year period 1995–2010 alone, acidity has increased 6 percent in the upper 100 meters of the Pacific Ocean from Hawaii to Alaska.
The IPCC Sixth Assessment Report in 2021 stated that "present-day surface pH values are unprecedented for at least 26,000 years and current rates of pH change are unprecedented since at least that time." The pH value of the ocean interior has declined over the last 20–30 years everywhere in the global ocean. The report also found that "pH in open ocean surface water has declined by about 0.017 to 0.027 pH units per decade since the late 1980s".
The rate of decline differs by region. This is due to complex interactions between different types of forcing mechanisms: "In the tropical Pacific, its central and eastern upwelling zones exhibited a faster pH decline of minus 0.022 to minus 0.026 pH unit per decade." This is thought to be "due to increased upwelling of -rich sub-surface waters in addition to anthropogenic uptake." Some regions exhibited a slower acidification rate: a pH decline of minus 0.010 to minus 0.013 pH unit per decade has been observed in warm pools in the western tropical Pacific.
The rate at which ocean acidification will occur may be influenced by the rate of surface ocean warming, because warm waters will not absorb as much CO2. Therefore, greater seawater warming could limit CO2 absorption and lead to a smaller change in pH for a given increase in CO2. The difference in changes in temperature between basins is one of the main reasons for the differences in acidification rates in different localities.
Current rates of ocean acidification have been likened to the greenhouse event at the Paleocene–Eocene boundary (about 56 million years ago), when surface ocean temperatures rose by 5–6 degrees Celsius. In that event, surface ecosystems experienced a variety of impacts, but bottom-dwelling organisms in the deep ocean actually experienced a major extinction. Currently, the rate of carbon addition to the atmosphere-ocean system is about ten times the rate that occurred at the Paleocene–Eocene boundary.
Extensive observational systems are now in place or being built for monitoring seawater chemistry and acidification for both the global open ocean and some coastal systems.
Geologic past
Ocean acidification has occurred previously in Earth's history. It happened during the Capitanian mass extinction, at the end-Permian extinction, during the end-Triassic extinction, and during the Cretaceous–Palaeogene extinction event.
Three of the big five mass extinction events in the geologic past were associated with a rapid increase in atmospheric carbon dioxide, probably due to volcanism and/or thermal dissociation of marine gas hydrates. Elevated CO2 levels impacted biodiversity. Decreased carbonate saturation due to seawater uptake of volcanogenic CO2 has been suggested as a possible kill mechanism during the marine mass extinction at the end of the Triassic. The end-Triassic biotic crisis is still the most well-established example of a marine mass extinction due to ocean acidification, because (a) carbon isotope records suggest enhanced volcanic activity that decreased carbonate sedimentation, which reduced the carbonate compensation depth and the carbonate saturation state, and a marine extinction coincided precisely with this in the stratigraphic record, and (b) there was pronounced selectivity of the extinction against organisms with thick aragonitic skeletons, which is predicted from experimental studies. Ocean acidification has also been suggested as one cause of the end-Permian mass extinction and the end-Cretaceous crisis. Overall, multiple climatic stressors, including ocean acidification, were likely the cause of geologic extinction events.
The most notable example of ocean acidification is the Paleocene-Eocene Thermal Maximum (PETM), which occurred approximately 56 million years ago when massive amounts of carbon entered the ocean and atmosphere, and led to the dissolution of carbonate sediments across many ocean basins. Relatively new geochemical methods of testing for pH in the past indicate the pH dropped 0.3 units across the PETM. One study that solves the marine carbonate system for saturation state shows that it may not change much over the PETM, suggesting the rate of carbon release at our best geological analogy was much slower than human-induced carbon emissions. However, stronger proxy methods to test for saturation state are needed to assess how much this pH change may have affected calcifying organisms.
Predicted future values
Importantly, the rate of change in ocean acidification is much higher than in the geological past. This faster change prevents organisms from gradually adapting, and prevents climate cycle feedbacks from kicking in to mitigate ocean acidification. Ocean acidification is now on a path to reach lower pH levels than at any other point in the last 300 million years. The rate of ocean acidification (i.e. the rate of change in pH value) is also estimated to be unprecedented over that same time scale. These expected changes are considered unprecedented in the geological record. In combination with other ocean biogeochemical changes, this drop in pH value could undermine the functioning of marine ecosystems and disrupt the provision of many goods and services associated with the ocean, beginning as early as 2100.
The extent of further ocean chemistry changes, including ocean pH, will depend on climate change mitigation efforts taken by nations and their governments. Different scenarios of projected socioeconomic global changes are modelled by using the Shared Socioeconomic Pathways (SSP) scenarios.
Under a very high emission scenario (SSP5-8.5), model projections estimate that surface ocean pH could decrease by as much as 0.44 units by the end of this century, compared to the end of the 19th century. This would mean a pH as low as about 7.7, and represents a further increase in H+ concentrations of two to four times beyond the increase to date.
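As a rough consistency check (not part of the article text), a decline of 0.44 pH units corresponds to

$$10^{0.44} \approx 2.8,$$

that is, close to a threefold rise in hydrogen ion concentration relative to the late 19th century, broadly consistent with the "two to four times" range quoted above.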
Impacts on oceanic calcifying organisms
Complexity of research findings
The full ecological consequences of the changes in calcification due to ocean acidification are complex, but it appears likely that many calcifying species will be adversely affected by ocean acidification. Increasing ocean acidification makes it more difficult for shell-accreting organisms to access carbonate ions, essential for the production of their hard exoskeletal shell. Oceanic calcifying organisms span the food chain from autotrophs to heterotrophs and include organisms such as coccolithophores, corals, foraminifera, echinoderms, crustaceans and molluscs.
Overall, all marine ecosystems on Earth will be exposed to changes in acidification and several other ocean biogeochemical changes. Ocean acidification may force some organisms to reallocate resources away from productive endpoints in order to maintain calcification. For example, the oyster Magallana gigas is recognized to experience metabolic changes alongside altered calcification rates due to energetic tradeoffs resulting from pH imbalances.
Under normal conditions, calcite and aragonite are stable in surface waters because surface seawater is supersaturated with carbonate ions with respect to these minerals. However, as ocean pH falls, the concentration of carbonate ions also decreases. Calcium carbonate thus becomes undersaturated, and structures made of calcium carbonate are vulnerable to calcification stress and dissolution. In particular, studies show that corals, coccolithophores, coralline algae, foraminifera, shellfish and pteropods experience reduced calcification or enhanced dissolution when exposed to elevated CO2. Even with active marine conservation practices it may be impossible to bring back many previous shellfish populations.
Some studies have found different responses to ocean acidification, with coccolithophore calcification and photosynthesis both increasing under elevated atmospheric pCO2, an equal decline in primary production and calcification in response to elevated CO2, or the direction of the response varying between species.
Similarly, the sea star, Pisaster ochraceus, shows enhanced growth in waters with increased acidity.
Reduced calcification from ocean acidification may affect the ocean's biologically driven sequestration of carbon from the atmosphere to the ocean interior and seafloor sediment, weakening the so-called biological pump. Seawater acidification could also reduce the size of Antarctic phytoplankton, making them less effective at storing carbon. Such changes are being increasingly studied and synthesized through the use of physiological frameworks, including the Adverse Outcome Pathway (AOP) framework.
Coccolithophores
A coccolithophore is a unicellular, eukaryotic phytoplankton (alga). Understanding calcification changes in coccolithophores may be particularly important because a decline in the coccolithophores may have secondary effects on climate: it could contribute to global warming by decreasing the Earth's albedo via their effects on oceanic cloud cover. A study in 2008 examined a sediment core from the North Atlantic and found that the species composition of coccolithophorids remained unchanged over the past 224 years (1780 to 2004). But the average coccolith mass had increased by 40% during the same period.
Corals
Warm water corals are clearly in decline, with losses of 50% over the last 30–50 years due to multiple threats from ocean warming, ocean acidification, pollution and physical damage from activities such as fishing, and these pressures are expected to intensify.
The fluid in the internal compartments (the coelenteron) where corals grow their exoskeleton is also extremely important for calcification. When the saturation state of aragonite in the external seawater is at ambient levels, the corals grow aragonite crystals rapidly in their internal compartments, and hence their exoskeleton grows rapidly. If the saturation state of aragonite in the external seawater is lower than the ambient level, the corals have to work harder to maintain the right balance in the internal compartment. When that happens, crystal growth slows down, and this slows the rate at which the exoskeleton grows. Depending on the aragonite saturation state in the surrounding water, the corals may halt growth, because pumping aragonite into the internal compartment will not be energetically favorable. Under the current progression of carbon emissions, around 70% of North Atlantic cold-water corals will be living in corrosive waters by 2050–60.
Acidified conditions primarily reduce the coral's capacity to build dense exoskeletons, rather than affecting the linear extension of the exoskeleton. The density of some species of corals could be reduced by over 20% by the end of this century.
An in situ experiment, conducted on a 400 m2 patch of the Great Barrier Reef, in which the seawater CO2 level was decreased (pH raised) to near the preindustrial value, showed a 7% increase in net calcification. A similar experiment that raised the in situ seawater CO2 level (lowered pH) to a level expected soon after 2050 found that net calcification decreased by 34%.
However, a field study of the coral reef in Queensland and Western Australia from 2007 to 2012 found that corals are more resistant to the environmental pH changes than previously thought, due to internal homeostasis regulation; this makes thermal change (marine heatwaves), which leads to coral bleaching, rather than acidification, the main factor for coral reef vulnerability due to climate change.
Studies at carbon dioxide seep sites
In some places carbon dioxide bubbles out from the sea floor, locally changing the pH and other aspects of the chemistry of the seawater. Studies of these carbon dioxide seeps have documented a variety of responses by different organisms. Coral reef communities located near carbon dioxide seeps are of particular interest because of the sensitivity of some corals species to acidification. In Papua New Guinea, declining pH caused by carbon dioxide seeps is associated with declines in coral species diversity. However, in Palau carbon dioxide seeps are not associated with reduced species diversity of corals, although bioerosion of coral skeletons is much higher at low pH sites.
Pteropods and brittle stars
Pteropods and brittle stars both form the base of the Arctic food webs and are both seriously damaged from acidification. Pteropods shells dissolve with increasing acidification and the brittle stars lose muscle mass when re-growing appendages. For pteropods to create shells they require aragonite which is produced through carbonate ions and dissolved calcium and strontium. Pteropods are severely affected because increasing acidification levels have steadily decreased the amount of water supersaturated with carbonate. The degradation of organic matter in Arctic waters has amplified ocean acidification; some Arctic waters are already undersaturated with respect to aragonite.
The brittle star's eggs die within a few days when exposed to expected conditions resulting from Arctic acidification. Similarly, when exposed in experiments to pH reduced by 0.2 to 0.4, larvae of a temperate brittle star, a relative of the common sea star, fewer than 0.1 percent survived more than eight days.
Other impacts on ecosystems
Other biological impacts
Aside from the slowing and/or reversal of calcification, organisms may suffer other adverse effects, either indirectly through negative impacts on food resources, or directly as reproductive or physiological effects. For example, elevated oceanic levels of CO2 may produce CO2-induced acidification of body fluids, known as hypercapnia.
Increasing acidity has been observed to reduce metabolic rates in jumbo squid and depress the immune responses of blue mussels.
Atlantic longfin squid eggs took longer to hatch in acidified water, and the squid's statolith was smaller and malformed in animals placed in sea water with a lower pH. However, these studies are ongoing and there is not yet a full understanding of these processes in marine organisms or ecosystems.
Acoustic properties
Another potential route to ecosystem impacts is through bioacoustics. This may occur as ocean acidification can alter the acoustic properties of seawater, allowing sound to propagate further, and increasing ocean noise. This impacts all animals that use sound for echolocation or communication.
Algae and seagrasses
Another possible effect would be an increase in harmful algal bloom events, which could contribute to the accumulation of toxins (domoic acid, brevetoxin, saxitoxin) in small organisms such as anchovies and shellfish, in turn increasing occurrences of amnesic shellfish poisoning, neurotoxic shellfish poisoning and paralytic shellfish poisoning. Although algal blooms can be harmful, other beneficial photosynthetic organisms may benefit from increased levels of carbon dioxide. Most importantly, seagrasses will benefit. Research found that as seagrasses increased their photosynthetic activity, calcifying algae's calcification rates rose, likely because localized photosynthetic activity absorbed carbon dioxide and elevated local pH.
Fish larvae
Ocean acidification can also have effects on marine fish larvae. It internally affects their olfactory systems, which is a crucial part of their early development. Orange clownfish larvae mostly live on oceanic reefs that are surrounded by vegetative islands. Larvae are known to use their sense of smell to detect the differences between reefs surrounded by vegetative islands and reefs not surrounded by vegetative islands. Clownfish larvae need to be able to distinguish between these two destinations to be able to find a suitable area for their growth. Another use for marine fish olfactory systems is to distinguish between their parents and other adult fish, in order to avoid inbreeding.
In an experimental aquarium facility, clownfish were sustained in non-manipulated seawater with pH 8.15 ± 0.07, which is similar to the current ocean pH. To test for effects of different pH levels, the seawater was modified to two other pH levels, which corresponded with climate change models that predict future atmospheric CO2 levels. In the year 2100 the model projects possible CO2 levels of 1,000 ppm, which correlates with a pH of 7.8 ± 0.05.
This experiment showed that when larvae are exposed to a pH of 7.8 ± 0.05 their reaction to environmental cues differs drastically from their reaction to cues at pH equal to current ocean levels. At pH 7.6 ± 0.05 larvae had no reaction to any type of cue. However, a meta-analysis published in 2022 found that the effect sizes of published studies testing for ocean acidification effects on fish behavior have declined by an order of magnitude over the past decade, and have been negligible for the past five years.
Eel embryos are also being affected by ocean acidification, specifically those of the European eel, a "critically endangered" species that is nevertheless important in aquaculture. Although European eels spend most of their lives in fresh water, usually in rivers, streams, or estuaries, they go to spawn and die in the Sargasso Sea. This is where they experience the effects of acidification in one of their key life stages.
Fish embryos and larvae are usually more sensitive to pH changes than adults, as their organs for pH regulation are not fully developed. Because of this, European eel embryos are more vulnerable to changes in pH in the Sargasso Sea. A study of the European eel in the Sargasso Sea was conducted in 2021 to analyze the specific effects of ocean acidification on embryos. The study found that exposure to predicted end-of-century ocean pCO2 conditions may affect normal development of this species in nature during sensitive early life history stages with limited physiological response capacities, while extreme acidification would negatively influence embryonic survival and development under hatchery conditions.
Compounded effects of acidification, warming and deoxygenation
There is a substantial body of research showing that a combination of ocean acidification and elevated ocean temperature have a compounded effect on marine life and the ocean environment. This effect far exceeds the individual harmful impact of either. In addition, ocean warming, along with increased productivity of phytoplankton from higher CO2 levels, exacerbates ocean deoxygenation. Deoxygenation of ocean waters, which is linked to increasing ocean stratification, is an additional stressor on marine organisms; it limits nutrient supply over time and reduces biological gradients.
Meta-analyses have quantified the direction and magnitude of the harmful effects of combined ocean acidification, warming and deoxygenation on the ocean. These meta-analyses have been further tested by mesocosm studies that simulated the interaction of these stressors and found a catastrophic effect on the marine food web: thermal stress more than negates any increase in productivity from primary producers to herbivores under elevated CO2.
Impacts on the economy and societies
The increase of ocean acidity decelerates the rate of calcification in salt water, leading to smaller and slower-growing coral reefs, which support approximately 25% of marine life. Impacts are far-reaching, from fisheries and coastal environments down to the deepest depths of the ocean. The increase in ocean acidity is not only killing the coral, but also the wildly diverse population of marine inhabitants which coral reefs support.
Fishing and tourism industry
The threat of acidification includes a decline in commercial fisheries and the coast-based tourism industry. Several ocean goods and services are likely to be undermined by future ocean acidification potentially affecting the livelihoods of some 400 to 800 million people, depending upon the greenhouse gas emission scenario.
Some 1 billion people are completely or partially dependent on the fishing, tourism, and coastal management services provided by coral reefs. Ongoing acidification of the oceans may therefore threaten future food chains linked with the oceans.
Arctic
In the Arctic, commercial fisheries are threatened because acidification harms calcifying organisms which form the base of the Arctic food webs (pteropods and brittle stars, see above). Acidification threatens Arctic food webs from the base up. Arctic food webs are considered simple, meaning there are few steps in the food chain from small organisms to larger predators. For example, pteropods are "a key prey item of a number of higher predators – larger plankton, fish, seabirds, whales". Both pteropods and sea stars serve as a substantial food source and their removal from the simple food web would pose a serious threat to the whole ecosystem. The effects on the calcifying organisms at the base of the food webs could potentially destroy fisheries.
US commercial fisheries
Fish caught from US commercial fisheries in 2007 were valued at $3.8 billion, and of that 73% was derived from calcifiers and their direct predators. Other organisms are directly harmed as a result of acidification. For example, decrease in the growth of marine calcifiers such as the American lobster, ocean quahog, and scallops means there is less shellfish meat available for sale and consumption. Red king crab fisheries are also under serious threat because crabs are calcifiers too. Juvenile red king crab, when exposed to increased acidification levels, experienced 100% mortality after 95 days. In 2006, red king crab accounted for 23% of the total guideline harvest levels and a serious decline in red crab population would threaten the crab harvesting industry.
Possible responses
Climate change mitigation
Reducing carbon dioxide emissions (i.e. climate change mitigation measures) is the only solution that addresses the root cause of ocean acidification. For example, some mitigation measures focus on carbon dioxide removal (CDR) from the atmosphere (e.g. direct air capture (DAC), bioenergy with carbon capture and storage (BECCS)). These would also slow the rate of acidification.
Approaches that remove carbon dioxide from the ocean include ocean nutrient fertilization, artificial upwelling/downwelling, seaweed farming, ecosystem recovery, ocean alkalinity enhancement, enhanced weathering and electrochemical processes. All of these methods use the ocean to remove CO2 from the atmosphere and store it in the ocean. These methods could assist with mitigation but they can have side-effects on marine life. The research field for all CDR methods has grown a lot since 2019.
In total, "ocean-based methods have a combined potential to remove 1–100 gigatons of CO2 per year". Their costs are in the order of USD 40–500 per ton of CO2. For example, enhanced weathering could remove 2–4 gigatons of CO2 per year. This technology comes with a cost of USD 50–200 per ton of CO2.
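For a sense of scale (an illustrative combination of the figures quoted above, not a sourced estimate), removing 2–4 gigatons of CO2 per year at USD 50–200 per ton would cost roughly

$$2\times10^{9}\ \mathrm{t/yr} \times 50\ \mathrm{USD/t} = 1\times10^{11}\ \mathrm{USD/yr} \quad\text{to}\quad 4\times10^{9}\ \mathrm{t/yr} \times 200\ \mathrm{USD/t} = 8\times10^{11}\ \mathrm{USD/yr},$$

that is, on the order of USD 100–800 billion per year.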
Carbon removal technologies which add alkalinity
Some carbon removal techniques add alkalinity to the ocean and therefore immediately buffer pH changes, which might help the organisms in the region where the extra alkalinity is added. The two technologies that fall into this category are ocean alkalinity enhancement and electrochemical methods. Eventually, due to diffusion, the effect of that alkalinity addition on distant waters will be quite small. This is why the term local ocean acidification mitigation is used. Both of these technologies have the potential to operate on a large scale and to be efficient at removing carbon dioxide. However, they are expensive, have many risks and side effects and currently have a low technology readiness level.
Ocean alkalinity enhancement
Ocean alkalinity enhancement (OAE) is a proposed "carbon dioxide removal (CDR) method that involves deposition of alkaline minerals or their dissociation products at the ocean surface". The process would increase surface total alkalinity. It would work to increase ocean absorption of CO2. The process involves increasing the amount of bicarbonate (HCO3−) through accelerated weathering (enhanced weathering) of rocks (silicate, limestone and quicklime). This process mimics the silicate-carbonate cycle. The CO2 either becomes bicarbonate, remaining in that form for more than 100 years, or may precipitate into calcium carbonate (CaCO3). When calcium carbonate is buried in the deep ocean, it can hold the carbon indefinitely when silicate rocks are used.
Enhanced weathering is one type of ocean alkalinity enhancement. Enhanced weathering increases alkalinity by scattering fine rock particles. This can happen on land and in the ocean (even though the outcome eventually affects the ocean).
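As an illustration of the underlying chemistry (the specific mineral is chosen here only as an example and is not named in the passage above), the dissolution of olivine (forsterite) consumes CO2 and releases alkalinity in the form of bicarbonate:

Mg2SiO4 + 4 CO2 + 4 H2O → 2 Mg2+ + 4 HCO3− + H4SiO4

Each formula unit of the mineral thus converts several molecules of CO2 into dissolved bicarbonate, which is the basis of the sequestration described above.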
In addition to sequestering CO2, alkalinity addition buffers the pH of the ocean, therefore reducing ocean acidification. However, little is known about how organisms respond to added alkalinity, even from natural sources. For example, weathering of some silicate rocks could release a large amount of trace metals at the weathering site.
Cost and energy consumed by ocean alkalinity enhancement (mining, pulverizing, transport) is high compared to other CDR techniques. The cost is estimated to be 20–50 USD per ton of CO2 (for "direct addition of alkaline minerals to the ocean").
Carbon sequestered as bicarbonate in the ocean amounts to about 30% of carbon emissions since the Industrial Revolution.
Experimental materials include limestone, brucite, olivine and alkaline solutions. Another approach is to use electricity to raise alkalinity during desalination to capture waterborne CO2.
Electrochemical methods
Electrochemical methods, or electrolysis, can strip carbon dioxide directly from seawater. Electrochemical processes are also a type of ocean alkalinity enhancement. Some methods focus on direct removal of CO2 (in the form of carbonate and gas) while others increase the alkalinity of seawater by precipitating metal hydroxide residues, which absorb CO2 in the manner described in the ocean alkalinity enhancement section. The hydrogen produced during direct carbon capture can then be upcycled for energy use or for manufacturing laboratory reagents such as hydrochloric acid.
However, implementation of electrolysis for carbon capture is expensive and the energy consumed for the process is high compared to other CDR techniques. In addition, research to assess the environmental impact of this process is ongoing. Some complications include toxic chemicals in wastewaters, and reduced DIC in effluents; both of these may negatively impact marine life.
Policies and goals
Global policies
As awareness about ocean acidification grows, policies geared towards increasing monitoring efforts of ocean acidification have been drafted. Previously in 2015, ocean scientist Jean-Pierre Gattuso had remarked that "The ocean has been minimally considered at previous climate negotiations. Our study provides compelling arguments for a radical change at the UN conference (in Paris) on climate change".
International efforts, such as the Wider Caribbean's Cartagena Convention (entered into force in 1986), may enhance the support provided by regional governments to highly vulnerable areas in response to ocean acidification. Many countries, for example in the Pacific Islands and Territories, have constructed regional policies, or National Ocean Policies, National Action Plans, National Adaptation Plans of Action and Joint National Action Plans on Climate Change and Disaster Risk Reduction, to help work towards SDG 14. Ocean acidification is now starting to be considered within those frameworks.
UN Ocean Decade
The UN Ocean Decade has a program called "Ocean acidification research for sustainability". It was proposed by the Global Ocean Acidification Observing Network (GOA-ON) and its partners, and has been formally endorsed as a program of the UN Decade of Ocean Science for Sustainable Development. The OARS program builds on the work of GOA-ON and has the following aims: to further develop the science of ocean acidification; to increase observations of ocean chemistry changes; to identify the impacts on marine ecosystems on local and global scales; and to provide decision makers with the information needed to mitigate and adapt to ocean acidification.
Global Climate Indicators
The importance of ocean acidification is reflected in its inclusion as one of seven Global Climate Indicators. These Indicators are a set of parameters that describe the changing climate without reducing climate change to only rising temperature. The Indicators include key information for the most relevant domains of climate change: temperature and energy, atmospheric composition, ocean and water as well as the cryosphere. The Global Climate Indicators have been identified by scientists and communication specialists in a process led by Global Climate Observing System (GCOS). The Indicators have been endorsed by the World Meteorological Organization (WMO). They form the basis of the annual WMO Statement of the State of the Global Climate, which is submitted to the Conference of Parties (COP) of the United Nations Framework Convention on Climate Change (UNFCCC). Additionally, the Copernicus Climate Change Service (C3S) of the European Commission uses the Indicators for their annual "European State of the Climate".
Sustainable Development Goal 14
In 2015, the United Nations adopted the 2030 Agenda and a set of 17 Sustainable Development Goals (SDG), including a goal dedicated to the ocean, Sustainable Development Goal 14, which calls to "conserve and sustainably use the oceans, seas and marine resources for sustainable development". Ocean acidification is directly addressed by the target SDG 14.3. The full title of Target 14.3 is: "Minimize and address the impacts of ocean acidification, including through enhanced scientific cooperation at all levels". This target has one indicator: Indicator 14.3.1 which calls for the "Average marine acidity (pH) measured at agreed suite of representative sampling stations".
The Intergovernmental Oceanographic Commission (IOC) of UNESCO was identified as the custodian agency for the SDG 14.3.1 Indicator. In this role, IOC-UNESCO is tasked with developing the SDG 14.3.1 Indicator Methodology, the annual collection of data towards the SDG 14.3.1 Indicator and the reporting of progress to the United Nations.
Policies at country level
United States
In the United States, the Federal Ocean Acidification Research And Monitoring Act of 2009 supports government coordination, such as the National Oceanic and Atmospheric Administration's (NOAA) "Ocean Acidification Program". In 2015, USEPA denied a citizens' petition that asked EPA to regulate CO2 under the Toxic Substances Control Act of 1976 in order to mitigate ocean acidification. In the denial, the EPA said that risks from ocean acidification were being "more efficiently and effectively addressed" under domestic actions, e.g., under the Presidential Climate Action Plan, and that multiple avenues are being pursued to work with and in other nations to reduce emissions and deforestation and promote clean energy and energy efficiency.
History
Research into the phenomenon of ocean acidification, as well as awareness raising about the problem, has been going on for several decades. The fundamental research really began with the creation of the pH scale by Danish chemist Søren Peder Lauritz Sørensen in 1909. By around the 1950s the massive role of the ocean in absorbing fossil fuel CO2 was known to specialists, but not appreciated by the greater scientific community. Throughout much of the 20th century, the dominant focus has been the beneficial process of oceanic CO2 uptake, which has enormously ameliorated climate change. The concept of "too much of a good thing" has been late in developing and was triggered only by some key events, and the oceanic sink for heat and CO2 is still critical as the primary buffer against climate change.
In the early 1970s questions over the long-term impact of the accumulation of fossil fuel CO2 in the sea were already arising around the world and causing strong debate. Researchers commented on the accumulation of fossil CO2 in the atmosphere and sea and drew attention to the possible impacts on marine life. By the mid-1990s, the likely impact of CO2 levels rising so high with the inevitable changes in pH and carbonate ion became a concern of scientists studying the fate of coral reefs.
By the end of the 20th century, the trade-off between the beneficial role of the ocean in absorbing some 90% of all heat created and some 50% of all fossil fuel CO2 emitted, and the impacts on marine life, was becoming clearer. By 2003, the time of planning for the "First Symposium on the Ocean in a High-CO2 World" meeting to be held in Paris in 2004, many new research results on ocean acidification were published.
In 2009, members of the InterAcademy Panel called on world leaders to "Recognize that reducing the build up of CO2 in the atmosphere is the only practicable solution to mitigating ocean acidification". The statement also stressed the need to "Reinvigorate action to reduce stressors, such as overfishing and pollution, on marine ecosystems to increase resilience to ocean acidification".
For example, research in 2010 found that in the 15-year period 1995–2010 alone, acidity had increased 6 percent in the upper 100 meters of the Pacific Ocean from Hawaii to Alaska.
According to a statement in July 2012 by Jane Lubchenco, head of the U.S. National Oceanic and Atmospheric Administration "surface waters are changing much more rapidly than initial calculations have suggested. It's yet another reason to be very seriously concerned about the amount of carbon dioxide that is in the atmosphere now and the additional amount we continue to put out."
A 2013 study found acidity was increasing at a rate 10 times faster than in any of the evolutionary crises in Earth's history.
The "Third Symposium on the Ocean in a High-CO2 World" took place in Monterey, California, in 2012. The summary for policy makers from the conference stated that "Ocean acidification research is growing rapidly".
In a synthesis report published in Science in 2015, 22 leading marine scientists stated that CO2 from burning fossil fuels is changing the oceans' chemistry more rapidly than at any time since the Great Dying (Earth's most severe known extinction event). Their report emphasized that the 2 °C maximum temperature increase agreed upon by governments reflects too small a cut in emissions to prevent "dramatic impacts" on the world's oceans.
A study done in 2020 argues that ocean acidification is negatively affecting not only marine life but also human health, through impacts on food quality and respiratory health.
See also
References
External links
Global Ocean Acidification Observing Network (GOA-ON)
United Nations Decade of Ocean Science for Sustainable Development (2021–2030)
US NOAA Ocean Acidification Program
Aquatic ecology
Biological oceanography
Carbon
Chemical oceanography
Fisheries science
Geochemistry
Oceanography
Effects of climate change
Environmental impact by effect
Articles containing video clips
Oceanographical terminology | Ocean acidification | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 9,658 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Chemical oceanography",
"Ecosystems",
"nan",
"Aquatic ecology"
] |
2,802,690 | https://en.wikipedia.org/wiki/National%20Research%20Universal%20reactor | The National Research Universal (NRU) reactor was a 135 MW nuclear research reactor built in the Chalk River Laboratories, Ontario, one of Canada’s national science facilities. It was a multipurpose science facility that served three main roles. It generated radionuclides used to treat or diagnose over 20 million people in 80 countries every year (and, to a lesser extent, other isotopes used for non-medical purposes). It was the neutron source for the NRC Canadian Neutron Beam Centre: a materials research centre that grew from the Nobel Prize-winning work of Bertram Brockhouse. It was the test bed for Atomic Energy of Canada Limited to develop fuels and materials for the CANDU reactor. At the time of its retirement on March 31, 2018, it was the world's oldest operating nuclear reactor.
History
The NRU reactor design was started in 1949. It is fundamentally a Canadian design, significantly advanced from NRX. It was built as the successor to the NRX reactor at the Atomic Energy Project of the National Research Council of Canada at Chalk River Laboratories. The NRX reactor was the world's most intense source of neutrons when it started operation in 1947. It was not known how long a research reactor could be expected to operate, so the management of Chalk River Laboratories began planning the NRU reactor to ensure continuity of the research programs.
NRU started self-sustained operation (or went "critical") on November 3, 1957, a decade after the NRX, and was ten times more powerful. It was initially designed as a 200 MW reactor, fueled with natural uranium. NRU was converted to 60 MW with highly-enriched uranium (HEU) fuel in 1964 and converted a third time in 1991 to 135 MW running on low-enriched uranium (LEU) fuel.
On Saturday, 24 May 1958, the NRU suffered a major accident. A damaged uranium fuel rod caught fire and was torn in two as it was being removed from the core. The fire was extinguished, but a sizeable quantity of radioactive combustion products had contaminated the interior of the reactor building and, to a lesser degree, an area of the surrounding laboratory site. The clean-up and repair took three months, and NRU was operating again in August 1958. Care was taken to ensure no one was exposed to dangerous levels of radiation, and staff involved in the clean-up were monitored over the following decades. A corporal named Bjarnie Hannibal Paulson, who took part in the clean-up, developed unusual skin cancers and received a disability pension.
NRU's calandria, the vessel which contains its nuclear reactions, is made of aluminum, and was replaced in 1971 because of corrosion. The calandria has not been replaced since, although a second replacement is likely needed. An advantage of NRU's design is that it can be taken apart to allow for upgrade and repair.
In October 1986, the NRU reactor was recognized as a nuclear historic landmark by the American Nuclear Society. Since NRX was decommissioned in 1992, after 45 years of service, there has been no backup for NRU.
In 1994, Bertram Brockhouse was awarded the Nobel Prize in Physics for his pioneering work carried out in the NRX and NRU reactors in the 1950s. The scientific technique he pioneered there is now used around the world.
In 1996, AECL informed the Canadian Nuclear Safety Commission (then known as the Atomic Energy Control Board) that operation of the NRU reactor would not continue beyond December 31, 2005. It was expected that a replacement facility would be built inside that time. However, no replacement was built and in 2003, AECL advised the CNSC that they intended to continue operation of the NRU reactor beyond December 2005. The operating licence was initially extended to July 31, 2006, and a 63-month licence renewal was obtained in July 2006, allowing operation of the NRU until October 31, 2011.
In May 2007, the NRU set a new record for the production of medical isotopes.
In June 2007, a new neutron scattering instrument was opened in NRU. The D3 Neutron Reflectometer is designed for examining surfaces, thin films and interfaces. The technique of Neutron Reflectometry is capable of providing unique information on materials in the nanometre length scale.
2007 shutdown
On November 18, 2007, the NRU reactor was shut down for routine maintenance. This shutdown was voluntarily extended when AECL decided to install seismically qualified emergency power systems (EPS) to two of the reactor's cooling pumps (in addition to the AC and DC backup power systems already in place), as required as part of its August 2006 operating license extension by the Canadian Nuclear Safety Commission (CNSC). This resulted in a worldwide shortage of radioisotopes for medical treatments because AECL had not pre-arranged for an alternate supply. On December 11, 2007, the House of Commons of Canada, acting on what the government described as "independent expert" advice, passed emergency legislation authorizing the restarting of the NRU reactor with one of the two seismic connections complete (one pump being sufficient to cool the core), and authorizing the reactor's operation for 120 days without CNSC approval. The legislation, C-38, was passed by the Senate and received Royal Assent on December 12. Prime Minister Stephen Harper accused the "Liberal-appointed" CNSC for this shutdown which "jeopardized the health and safety of tens of thousands of Canadians". Others viewed the actions and priorities of the Prime Minister and government in terms of protecting the eventual sale of AECL to private investors. The government later announced plans to sell part of AECL in May 2009.
The NRU reactor was restarted on December 16, 2007.
On January 29, 2008, the former President of the CNSC, Linda Keen, testified before a Parliamentary Committee that the risk of fuel failure in the NRU reactor was "1 in 1000 years", and claimed this to be a thousand times greater risk than the "international standard". These claims were refuted by AECL.
On February 2, 2008, the second seismic connection was complete. This timing was well within the above 120-day window afforded by Bill C-38.
2009 Shutdown
In mid-May 2009 a heavy water leak at the base of the reactor vessel was detected, prompting a temporary shutdown of the reactor. The leak was estimated to be 5 kg (<5 litres) per hour, a result of corrosion. This was the second heavy water leak since late 2008. The reactor was defuelled and drained of all of its heavy water moderator. No administrative levels of radioactivity were exceeded, during the leak or defuelling, and all leaked water was contained and treated on site.
The reactor remained shut down until August 2010. The lengthy shutdown was necessary to defuel the reactor, ascertain the full extent of the corrosion to the vessel, and finally to effect the repairs — all with remote and restricted access from a minimum distance of 8 metres due to the residual radioactive fields in the reactor vessel. The 2009 shutdown occurred at a time when only one of the other four worldwide regular medical isotope sourcing reactors was producing, resulting in a worldwide shortage.
Decommissioning
On March 31, 2018, following government direction to shut down operations, NRU was permanently shut down ahead of decommissioning scheduled to start in 2028.
Production of isotopes
With the construction of the earlier NRX reactor, it was possible for the first time to commercially manufacture isotopes that were not commonly found in nature. In the core of an operating reactor there are billions of free neutrons. By inserting a piece of target material into the core, atoms in the target can capture some of those neutrons and become heavier isotopes. Manufacturing medical isotopes was a Canadian medical innovation in the early 1950s.
The NRU reactor continued the legacy of NRX and until it was decommissioned March 31, 2018 produced more medical isotopes than any facility in the world.
Cobalt-60 from NRU is used in radiation therapy machines that treat cancer in 15 million patients in 80 countries each year. The NRU produced about 75% of the global supply. The decay of Cobalt-60 results in the emission of high energy photons.
Technetium-99m from NRU, used in the diagnosis of 5 million patients each year, represented about 80 per cent of all nuclear medicine procedures. The NRU produced over half of the world's total supply. Technetium-99m emits less energy as it decays than most gamma emitters, roughly as much as the X-rays from an X-ray tube. This allows it to act as an in situ source for a special camera that creates an image of the patient, called a SPECT scan. NRU actually produced the more stable parent isotope, molybdenum-99, which is shipped to medical labs. There it decays into technetium-99m, which is separated and used for testing.
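The shipping logistics hinge on simple exponential-decay arithmetic. The sketch below is a minimal illustration rather than anything from the article: it estimates how much of a molybdenum-99 shipment survives transport, assuming the commonly cited half-life of roughly 66 hours (a figure not given in the text above).

```python
import math

def remaining_fraction(elapsed_hours, half_life_hours):
    """Fraction of a radionuclide remaining after elapsed_hours of decay."""
    return 0.5 ** (elapsed_hours / half_life_hours)

# Assumption for illustration only: Mo-99 half-life of about 66 hours.
MO99_HALF_LIFE_H = 66.0

for hours in (24, 48, 72):
    frac = remaining_fraction(hours, MO99_HALF_LIFE_H)
    print(f"After {hours:3d} h in transit, roughly {frac:.0%} of the Mo-99 remains")
```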
NRU produced xenon-133, iodine-131 and iodine-125, which are used in a variety of diagnostic and therapeutic applications.
Carbon-14 produced in NRU was sold to chemistry, bioscience and environmental labs where it is used as a tracer.
Iridium-192 from NRU is used in several industries to inspect welds or other metal components to ensure they are safe for use.
The core of the NRU reactor was unusually large for a research reactor, in both diameter and height. That large volume enabled the bulk production of isotopes. Other research reactors in the world produce isotopes for medical and industrial uses, for example the European High Flux Reactor at Petten in the Netherlands, the Maria Reactor in Poland, and the OPAL reactor in Australia, which began operation in April 2007.
NRU was originally scheduled to shut down in October 2016. With no alternative isotope manufacturer ready to step in until 2018, the Canadian Government allowed the NRU to produce isotopes until March 2018.
Neutron beam research
The NRU reactor is home to Canada's national facility for neutron scattering: the NRC Canadian Neutron Beam Centre. Neutron scattering is a technique where a beam of neutrons shines through a sample of material, and depending on how the neutrons scatter from the atoms inside, scientists can determine many details about the crystal structure and movements of the atoms within the sample.
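The article gives no formulas, but the geometry of a scattering experiment is often summarized by Bragg's law, n·λ = 2d·sin θ. The sketch below is a generic illustration with made-up numbers, not a description of any NRU instrument.

```python
import math

def bragg_angle_deg(d_spacing_angstrom, wavelength_angstrom, order=1):
    """Bragg angle theta satisfying n*lambda = 2*d*sin(theta), in degrees."""
    s = order * wavelength_angstrom / (2.0 * d_spacing_angstrom)
    if s > 1.0:
        raise ValueError("no diffraction for this wavelength, spacing and order")
    return math.degrees(math.asin(s))

# Hypothetical values: a 1.8 angstrom thermal-neutron beam and 2.0 angstrom lattice planes.
print(f"Bragg angle: {bragg_angle_deg(2.0, 1.8):.1f} degrees")
```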
An early pioneer of the technique was Bertram Brockhouse who built some of the early neutron spectrometers in the NRX and NRU reactors and was awarded the 1994 Nobel Prize in physics for the development of neutron spectroscopy.
The NRC Canadian Neutron Beam Centre continues that field of science today, operating as an open-access user facility allowing scientists from across Canada and around the world to use neutrons in their research programs.
It is common for a developed country to support a national facility for neutron scattering and one for X-ray scattering. The two types of facility provide complementary information about materials.
An unusual feature of the NRU reactor as Canada's national neutron source is its multipurpose design: able to manufacture isotopes, and support nuclear R&D at the same time as it supplies neutrons to the suite of neutron scattering instruments.
The NRU reactor is sometimes (incorrectly) characterized as simply a nuclear research facility. Neutron scattering, however, is not nuclear research; it is materials research. Neutrons are an ideal probe of materials including metals, alloys, biomaterials, ceramics, magnetic materials, minerals, polymers, composites, glasses, nano-materials and many others. The neutron scattering instruments at the NRC Canadian Neutron Beam Centre are used by universities and industries from across Canada every year because knowledge of materials is important for innovation in many sectors.
Nuclear power research and development
The NRU reactor has test facilities built into its core that can replicate conditions inside a large electricity-producing reactor. NRU itself does not generate steam (or electricity); its cooling water heats up to approximately 55 degrees Celsius. Inside the test facilities though, high temperatures and pressures can be produced. It is essential to test out different materials before they are used in the construction of a nuclear generating station.
The fundamental knowledge gained from NRU enabled the development of the CANDU reactor, and is the foundation for the Canadian nuclear industry.
See also
Nuclear fission
Neutron scattering
Nuclear power
Nuclear waste
Nuclear power in Canada
References
External links
The NRU Reactor
NRC Canadian Neutron Beam Centre
Nuclear Accidents (Georgia State University)
"The Canadian Nuclear FAQ" by Dr. Jeremy Whitlock (in particular see the section "NRU Safety" regarding the Nov-Dec 2007 shutdown of isotope production)
The Society for the Preservation of Canada's Nuclear Heritage, Inc.
Atomic Energy of Canada Limited
Nuclear medicine organizations
Neutron-related techniques
Nuclear accidents and incidents
Nuclear research reactors
Nuclear technology in Canada | National Research Universal reactor | [
"Chemistry",
"Engineering"
] | 2,610 | [
"Nuclear accidents and incidents",
"Nuclear medicine organizations",
"Nuclear organizations",
"Radioactivity"
] |
30,280,089 | https://en.wikipedia.org/wiki/Bochner%E2%80%93Martinelli%20formula | In mathematics, the Bochner–Martinelli formula is a generalization of the Cauchy integral formula to functions of several complex variables, introduced by Enzo Martinelli and Salomon Bochner.
History
Bochner–Martinelli kernel
For ζ, z in ℂⁿ, the Bochner–Martinelli kernel ω(ζ, z) is a differential form in ζ of bidegree (n, n−1) defined by

\omega(\zeta, z) = \frac{(n-1)!}{(2\pi i)^n} \, \frac{1}{|\zeta - z|^{2n}} \sum_{1 \le j \le n} (\bar\zeta_j - \bar z_j) \, d\bar\zeta_1 \wedge d\zeta_1 \wedge \cdots \wedge d\zeta_j \wedge \cdots \wedge d\bar\zeta_n \wedge d\zeta_n

(where the term d\bar\zeta_j is omitted).
Suppose that f is a continuously differentiable function on the closure of a domain D in ℂⁿ with piecewise smooth boundary ∂D. Then the Bochner–Martinelli formula states that if z is in the domain D then

f(z) = \int_{\partial D} f(\zeta) \, \omega(\zeta, z) - \int_{D} \bar\partial f(\zeta) \wedge \omega(\zeta, z).

In particular, if f is holomorphic the second term vanishes, so

f(z) = \int_{\partial D} f(\zeta) \, \omega(\zeta, z).
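A quick sanity check, added here for orientation rather than taken from the original text: for n = 1 the kernel reduces to the Cauchy kernel, since (\bar\zeta - \bar z)/|\zeta - z|^2 = 1/(\zeta - z), and for holomorphic f the formula collapses to the classical one-variable statement:

\omega(\zeta, z) = \frac{1}{2\pi i} \, \frac{d\zeta}{\zeta - z}, \qquad f(z) = \frac{1}{2\pi i} \int_{\partial D} \frac{f(\zeta)}{\zeta - z} \, d\zeta.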
See also
Bergman–Weil formula
Notes
References
The first paper in which what is now called the Bochner–Martinelli formula was introduced and proved.
Available at the SEALS Portal. In this paper Martinelli gives a proof of Hartogs' extension theorem by using the Bochner–Martinelli formula.
Lecture notes of a course, published by the Accademia Nazionale dei Lincei, given by Martinelli during his stay at the Accademia as "Professore Linceo".
In this article, Martinelli gives another form of the Martinelli–Bochner formula.
Theorems in complex analysis
Several complex variables | Bochner–Martinelli formula | [
"Mathematics"
] | 262 | [
"Theorems in mathematical analysis",
"Functions and mappings",
"Several complex variables",
"Theorems in complex analysis",
"Mathematical objects",
"Mathematical relations"
] |
6,710,047 | https://en.wikipedia.org/wiki/Picamar | Picamar is a colorless, hydrocarbon oil extracted from the creosote of beechwood tar with a peculiar odor and bitter taste. It consists of derivatives of pyrogallol. It was discovered by German chemist Carl Reichenbach in the 1830s. Picamar can be used to lubricate machinery.
Chemical and physical properties
The exact composition of picamar is unknown. According to Pastrovich, picamar is a monomethyl ether of propyl-pyrogallol (). However, Gustav Niederist, who obtained an original sample of the oil as prepared by von Reichenbach himself, assigned it a formula of . Picamar is colorless with a peculiar, peppermint-like odor and bitter taste. It is soluble in alcohol and sparingly soluble in water. It has a melting point of . Picamar reduces the red oxide of mercury to its metallic state. It reacts with nitric acid to become a reddish-brown, greasy substance and can also dissolve camphor, resin, and benzoic acids.
History
The name "picamar" is derived from the Latin phrase in pice amarum (meaning "bitter principle of tar"). It was discovered by German chemist Carl Reichenbach in the 1830s as one of the six principles of beechwood tar, along with other substances as capnomor and eupione that were "met with less notice".
Applications
Picamar is used for greasing machinery and preventing it from rusting.
References
Hydrocarbons
Phenols | Picamar | [
"Chemistry"
] | 320 | [
"Organic compounds",
"Hydrocarbons"
] |
6,710,186 | https://en.wikipedia.org/wiki/Capnomor | Capnomor (from Greek smoke + part) is a colorless oil with an aromatic odor which is extracted by distillation from beechwood tar. Its specific gravity is 0.9775 at 20 °C and boiling point is 185 °C. It was discovered in the 1830s by the German chemist Baron Karl von Reichenbach.
References
Hydrocarbons | Capnomor | [
"Chemistry"
] | 72 | [
"Organic compounds",
"Hydrocarbons"
] |
6,710,586 | https://en.wikipedia.org/wiki/Taping%20knife | A taping knife or joint knife is a drywall tool with a wide blade for spreading joint compound, also known as "mud". It can be used to spread mud over nail and screw indents in new drywall applications and is also used when using paper or fiberglass drywall tape to cover seams. Other common uses include patching holes, smoothing wall-coverings and creating specialty artistic wall finishes. Common sizes range from 15cm to 30cm wide (five to 12 inches). Spackle knives are a smaller version, used for patching small holes.
A right-angle joint knife allows one to apply joint compound to inside corners where walls meet. The handle is offset to allow clearance for fingers.
References
The Reader's Digest Book of Skills and Tools
Josh Mars and His Tips For Home Repairs
Mechanical hand tools | Taping knife | [
"Physics"
] | 169 | [
"Mechanics",
"Mechanical hand tools"
] |
6,711,184 | https://en.wikipedia.org/wiki/VTLS | VTLS Inc. was a global company that provided library automation software and services to a diverse customer base of more than 1900 libraries in 44 countries. The for-profit company was founded in 1985 by Dr. Vinod Chachra, who became the President and CEO of the company. VTLS originated as "Virginia Tech Library Systems", an automated circulation and cataloging system created for Virginia Tech’s Newman Library in 1975. In addition to its headquarters in Blacksburg, Virginia, United States, VTLS had five international offices in Australia, Brazil, India, Malaysia and Spain. VTLS was one of the few ISO 9001:2008 quality-certified companies within the library industry for many years. The company was acquired by Innovative Interfaces in 2014.
History
VTLS Inc. was the offspring of a project launched in 1974 at Virginia Polytechnic Institute and State University's (Virginia Tech’s) Newman Library, a member of the Association of Research Libraries with more than 2 million cataloged volumes. Having explored available library automation alternatives and having found no system suitable for the needs of its libraries, Virginia Tech initiated a development project to create an automated library system, spearheaded by Dr. Vinod Chachra, head of Systems Development (SD). This forerunner of VTLS Classic and Virtua, consisting of an Online Public Access Catalog (OPAC) and an automated circulation system, was installed at Virginia Tech's Newman Library in September 1975.
By 1980, the software had evolved into the integrated library system (ILS) known as "VTLS Classic", a MARC based ILS including library automation. In 1983, VTLS became the first integrated library system to implement linked Authority Control and to feature full integration and support of the MARC Format for bibliographic records. On July 1, 1985, VTLS Inc. was formed by Dr. Vinod Chachra as a subsidiary corporation of Virginia Tech Intellectual Properties (VTIP), which granted VTLS Inc. exclusive, worldwide rights to enhance and market VTLS software. The ensuing years witnessed dramatic growth for VTLS and innovative development of the VTLS system. By 1987, VTLS became the first ILS to fully support the US MARC Format for Holdings and Locations in a fully integrated Serials Control Subsystem. In 1989, VTLS introduced a multilingual user interface design that allowed users to change language dynamically within their user session. VTLS also began offering imaging services and digital library solutions in 1993.
In 1998, VTLS launched their next-generation library automation suite with the first release of the "Virtua Integrated Library System" (ILS). Virtua incorporated all of the functionality of VTLS Classic but utilized an entirely new software architecture that included full Unicode support throughout the system as well as full native Z39.50 support. Since the introduction of Virtua, VTLS continued to develop support for new standards and emerging technologies. In 2000, VTLS introduced support for radio-frequency identification (RFID) technology for circulation and security, later expanding it into the "Fastrac" product division. In 2002, VTLS introduced full support for Functional Requirements for Bibliographic Records (FRBR), a key bibliographic standard enabling full Resource Description and Access (RDA) Level 1 implementation. In 2004, VTLS introduced the "VITAL Digital Asset Management System". Later developments focused on discovery software for patrons with the introduction of "Visualizer Discovery" in 2008, the "Chamo Social OPAC" in 2009, and a hybrid of the two products called "Chamo Discovery" in 2012. VTLS later introduced a service called "Vorpal Solutions", which offered custom Drupal modules that allowed deployment of a complete custom Drupal-based front-end for VTLS solutions. This was followed in 2013 by the introduction of the "Open Skies" unified software platform initiative, planning for the future interconnectivity of various VTLS-based software components as well as more open connectivity with external library services.
Throughout its history, VTLS was a member of many industry organizations, including: American Library Association (ALA), Book Industry Study Group (BISG), Coalition for Networked Information (CNI), EDUCAUSE, International Federation of Library Associations and Institutions (IFLA), MARBI, National Information Standards Organization (NISO) for Z39.50, Online Computer Library Center (OCLC), the Unicode Consortium, and the Virginia Business Pipeline (VBP). VTLS was also a founding member of the Coalition for Networked Information (CNI).
In June 2014, VTLS was purchased by Innovative Interfaces Inc., who continues to actively develop and support most of the VTLS-based software suites.
References
External links
Innovative Interfaces Inc. Website
Innovative Interfaces Company Profile on Library Technology Guides (Maintained by Marshal Breeding)
1985 establishments in Virginia
Virginia Tech
Blacksburg, Virginia
Companies established in 1985
Library automation
Library-related organizations
Software companies based in Virginia
Defunct software companies of the United States | VTLS | [
"Engineering"
] | 1,022 | [
"Library automation",
"Automation"
] |
6,713,413 | https://en.wikipedia.org/wiki/Logic%20probe | A logic probe is a low-cost hand-held test probe used for analyzing and troubleshooting the logical states (boolean 0 or 1) of a digital circuit. When many signals need to be observed or recorded simultaneously, a logic analyzer is used instead.
Overview
While most logic probes are powered by the circuit under test, some devices use batteries. They can be used on either TTL (transistor-transistor logic) or CMOS (complementary metal-oxide semiconductor) logic integrated circuit devices, such as 7400-series, 4000 series, and newer logic families that support similar voltages.
Most modern logic probes typically have one or more LEDs on the body of the probe:
an LED to indicate a high (1) logic state.
an LED to indicate a low (0) logic state.
an LED to indicate changing back and forth between low and high states. The pulse-detecting electronics usually has a pulse-stretcher circuit so that even very short pulses become visible on the LED.
A control on the logic probe allows either the capture and storage of a single event or continuous running.
When the logic probe is either connected to an invalid logic level (a fault condition or a tri-stated output) or not connected at all, none of the LEDs light up.
Another control on the logic probe allows selection of either TTL or CMOS family logic. This is required because these families have different thresholds for the logic-high (VIH) and logic-low (VIL) circuit voltages.
Some logic probes provide an audible tone, the behaviour of which varies across models. A model may (1) emit a tone for a high logic state and no tone otherwise, or (2) emit a higher-frequency tone for a high logic state, a lower-frequency tone for a low logic state, and no tone for no connection or tri-state. An oscillating signal causes the probe to alternate between the high-state and low-state tones.
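The high/low/invalid decision described above boils down to comparing the probed voltage against family-specific thresholds. The sketch below is a minimal illustration; the threshold voltages are typical textbook figures assumed for the example (roughly 0.8 V/2.0 V for TTL and 1.5 V/3.5 V for 5 V CMOS), not values taken from any particular probe.

```python
# Classify a measured voltage the way a simple logic probe would.
# Threshold values below are assumptions for illustration; real probes differ.
THRESHOLDS = {
    "TTL":  {"vil": 0.8, "vih": 2.0},
    "CMOS": {"vil": 1.5, "vih": 3.5},   # assuming a 5 V supply
}

def classify(voltage, family="TTL"):
    t = THRESHOLDS[family]
    if voltage <= t["vil"]:
        return "LOW (0)"                 # the low LED would light
    if voltage >= t["vih"]:
        return "HIGH (1)"                # the high LED would light
    return "INVALID / FLOATING"          # neither LED lights

for v in (0.2, 1.2, 3.0, 4.8):
    print(f"{v:>4} V  TTL: {classify(v, 'TTL'):<19} CMOS: {classify(v, 'CMOS')}")
```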
History
The logic probe was invented by Gary Gordon in 1968 while he was employed by Hewlett-Packard.
References
External links
Schematic of a simple logic tester
How to make a simple logic tester
How to make a logic tester using a 555 timer chip
Elenco logic probes
Electronic test equipment
Digital electronics
Measuring instruments | Logic probe | [
"Technology",
"Engineering"
] | 471 | [
"Electronic engineering",
"Electronic test equipment",
"Measuring instruments",
"Digital electronics"
] |
6,713,437 | https://en.wikipedia.org/wiki/Doubling%20time | The doubling time is the time it takes for a population to double in size/value. It is applied to population growth, inflation, resource extraction, consumption of goods, compound interest, the volume of malignant tumours, and many other things that tend to grow over time. When the relative growth rate (not the absolute growth rate) is constant, the quantity undergoes exponential growth and has a constant doubling time or period, which can be calculated directly from the growth rate.
This time can be calculated by dividing the natural logarithm of 2 by the exponent of growth, or approximated by dividing 70 by the percentage growth rate (more roughly but roundly, dividing 72; see the rule of 72 for details and derivations of this formula).
The doubling time is a characteristic unit (a natural unit of scale) for the exponential growth equation, and its converse for exponential decay is the half-life.
As an example, Canada's net population growth was 2.7 percent in the year 2022; dividing 72 by 2.7 gives an approximate doubling time of 27 years. Thus if that growth rate were to remain constant, Canada's population would double from its 2023 figure of about 39 million to about 78 million by 2050.
History
The notion of doubling time dates to interest on loans in Babylonian mathematics. Clay tablets from circa 2000 BCE include the exercise "Given an interest rate of 1/60 per month (no compounding), compute the doubling time." This yields an annual interest rate of 12/60 = 20%, and hence a doubling time of 100% growth/20% growth per year = 5 years. Further, repaying double the initial amount of a loan, after a fixed time, was common commercial practice of the period: a common Assyrian loan of 1900 BCE consisted of loaning 2 minas of gold, getting back 4 in five years, and an Egyptian proverb of the time was "If wealth is placed where it bears interest, it comes back to you redoubled."
Examination
Examining the doubling time can give a more intuitive sense of the long-term impact of growth than simply viewing the percentage growth rate.
For a constant growth rate of r % within time t, the formula for the doubling time Td is given by

T_d = t \cdot \frac{\ln 2}{\ln\left(1 + \frac{r}{100}\right)}
A common rule of thumb can be derived by Taylor-expanding the denominator around x = 0, using \ln(1 + x) \approx x and ignoring higher-order terms, which gives

T_d \approx t \cdot \frac{100 \ln 2}{r} \approx t \cdot \frac{70}{r}
This "Rule of 70" gives accurate doubling times to within 10% for growth rates less than 25% and within 20% for rates less than 60%. Larger growth rates result in the rule underestimating the doubling time by a larger margin.
Doubling times for a range of growth rates can be calculated directly from this formula.
Simple doubling time formula:

N(t) = N_0 \cdot 2^{t / T_d}
where
N(t) = the number of objects at time t
Td = doubling period (time it takes for object to double in number)
N0 = initial number of objects
t = time
For example, with an annual growth rate of 4.8% the doubling time is 14.78 years, and a doubling time of 10 years corresponds to a growth rate between 7% and 7.5% (actually about 7.18%).
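The worked numbers above are easy to verify. The sketch below is a minimal illustration of the exact formula, the rule-of-70 approximation, and the two-measurement form given a little further on.

```python
import math

def doubling_time(rate_percent, period=1.0):
    """Exact doubling time for a constant growth rate of rate_percent per period."""
    return period * math.log(2) / math.log(1 + rate_percent / 100)

def doubling_time_rule_of_70(rate_percent, period=1.0):
    """Rule-of-70 approximation to the same quantity."""
    return period * 70 / rate_percent

def doubling_time_from_measurements(q1, t1, q2, t2):
    """Doubling time inferred from two measurements of a growing quantity."""
    return (t2 - t1) * math.log(2) / math.log(q2 / q1)

print(round(doubling_time(4.8), 2))             # 14.78, matching the example above
print(round(doubling_time_rule_of_70(4.8), 2))  # 14.58, the rule-of-70 estimate
print(round(doubling_time_from_measurements(100, 0, 200, 14.78), 2))  # ~14.78
```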
When applied to the constant growth in consumption of a resource, the total amount consumed in one doubling period equals the total amount consumed in all previous periods. This enabled U.S. President Jimmy Carter to note in a speech in 1977 that in each of the previous two decades the world had used more oil than in all of previous history (The roughly exponential growth in world oil consumption between 1950 and 1970 had a doubling period of under a decade).
Given two measurements of a growing quantity, q1 at time t1 and q2 at time t2, and assuming a constant growth rate, the doubling time can be calculated as

T_d = (t_2 - t_1) \cdot \frac{\ln 2}{\ln\left(q_2 / q_1\right)}
Related concepts
The equivalent concept to doubling time for a material undergoing a constant negative relative growth rate or exponential decay is the half-life.
The equivalent concept in base-e is e-folding.
Cell culture doubling time
Cell doubling time can be calculated from the growth rate (the exponential rate of increase per unit of time) as follows.

Growth rate:

N(t) = N_0 e^{rt}

or, solving for the rate,

r = \frac{\ln N(t) - \ln N_0}{t}
where
N(t) = the number of cells at time t
N_0 = the number of cells at time 0
r = growth rate
t = time (usually in hours)
Doubling time:

T_d = \frac{\ln 2}{r}
The following is the known doubling time for the following cells:
See also
Albert Allen Bartlett
Binary logarithm
e-folding
Exponential decay
Exponential growth
Half-life
Relative growth rate
Rule of 72
References
External links
Doubling Time Calculator
http://geography.about.com/od/populationgeography/a/populationgrow.htm
Population ecology
Temporal exponentials
Economic growth
Epidemiology
Mathematics in medicine | Doubling time | [
"Physics",
"Mathematics",
"Environmental_science"
] | 967 | [
"Epidemiology",
"Physical quantities",
"Time",
"Applied mathematics",
"Temporal exponentials",
"Spacetime",
"Mathematics in medicine",
"Environmental social science"
] |
34,367,597 | https://en.wikipedia.org/wiki/Self-enforcing%20agreement | A self-enforcing agreement is an agreement that is enforced only by the parties to it; no external party can enforce or interfere with the agreement. (In this respect it differs from an enforceable contract.) The agreement will stand so long as the parties believe it is mutually beneficial and it is not breached by any party.
In game theory, games in which cooperative behaviour can only be enforced through self-enforcing agreements are called non-cooperative games, whereas games allowing strategies relying on external enforcement are called cooperative games. Nash equilibrium is the most common kind of self-enforcing agreement.
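As an illustration, added here and not part of the original text, the sketch below finds the pure-strategy Nash equilibria of a small two-player game; such an equilibrium is self-enforcing because neither party can gain by deviating unilaterally.

```python
from itertools import product

# Payoff matrices for a 2x2 game (row player, column player); the numbers
# form a Prisoner's Dilemma and are purely illustrative.
ROW = [[3, 0],
       [5, 1]]
COL = [[3, 5],
       [0, 1]]

def pure_nash_equilibria(row, col):
    """Return all (i, j) from which neither player gains by deviating alone."""
    n, m = len(row), len(row[0])
    eqs = []
    for i, j in product(range(n), range(m)):
        row_ok = all(row[i][j] >= row[k][j] for k in range(n))
        col_ok = all(col[i][j] >= col[i][k] for k in range(m))
        if row_ok and col_ok:
            eqs.append((i, j))
    return eqs

print(pure_nash_equilibria(ROW, COL))   # [(1, 1)]: mutual defection is self-enforcing
```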
References
Agreements
Game theory equilibrium concepts | Self-enforcing agreement | [
"Mathematics"
] | 124 | [
"Game theory",
"Game theory equilibrium concepts"
] |
34,368,888 | https://en.wikipedia.org/wiki/Myokine | A myokine is one of several hundred cytokines or other small proteins (~5–20 kDa) and proteoglycan peptides that are produced and released by skeletal muscle cells (muscle fibers) in response to muscular contractions. They have autocrine, paracrine and/or endocrine effects; their systemic effects occur at picomolar concentrations.
Receptors for myokines are found on muscle, fat, liver, pancreas, bone, heart, immune, and brain cells. The location of these receptors reflects the fact that myokines have multiple functions. Foremost, they are involved in exercise-associated metabolic changes, as well as in the metabolic changes following training adaptation. They also participate in tissue regeneration and repair, maintenance of healthy bodily functioning, immunomodulation; and cell signaling, expression and differentiation.
History
The definition and use of the term myokine first occurred in 2003.
In 2008, the first myokine, myostatin, was identified. The gp130 receptor cytokine IL-6 (Interleukin 6) was the first myokine found to be secreted into the blood stream in response to muscle contractions.
Secretion
In repetitive skeletal muscle contractions
There is an emerging understanding of skeletal muscle as a secretory organ, and of myokines as mediators of physical fitness through the practice of regular physical exercise (aerobic exercise and strength training), as well as new awareness of the anti-inflammatory and thus disease prevention aspects of exercise. Different muscle fiber types – slow twitch muscle fibers, oxidative muscle fibers, intermediate twitch muscle fibers, and fast twitch muscle fibers – release different clusters of myokines during contraction. This implies that variation of exercise types, particularly aerobic training/endurance training and muscle contraction against resistance (strength training) may offer differing myokine-induced benefits.
Functions
"Some myokines exert their effects within the muscle itself. Thus, myostatin, LIF, IL-6 and IL-7 are involved in muscle hypertrophy and myogenesis, whereas BDNF and IL-6 are involved in AMPK-mediated fat oxidation. IL-6 also appears to have systemic effects on the liver, adipose tissue and the immune system, and mediates crosstalk between intestinal L cells and pancreatic islets. Other myokines include the osteogenic factors IGF-1 and FGF-2; FSTL-1, which improves the endothelial function of the vascular system; and the PGC-1alpha-dependent myokine irisin, which drives brown fat-like development. Studies in the past few years suggest the existence of yet unidentified factors, secreted from muscle cells, which may influence cancer cell growth and pancreas function. Many proteins produced by skeletal muscle are dependent upon contraction; therefore, physical inactivity probably leads to an altered myokine response, which could provide a potential mechanism for the association between sedentary behaviour and many chronic diseases."
In brain functions related to neuroplasticity, memory, sleep and mood
Physical exercise rapidly triggers substantial changes at the organismal level, including the secretion of myokines and metabolites by muscle cells. For instance, aerobic exercise in humans leads to significant structural alterations in the brain, while wheel-running in rodents promotes neurogenesis and improves synaptic transmission in particular in the hippocampus. Moreover, physical exercise triggers histone modifications and protein synthesis which ultimately positively influence mood and cognitive abilities. Notably, regular exercise is somewhat associated with a better sleep quality, which could be mediated by the muscle secretome.
In regulating heart architecture
Heart muscle is subject to two kinds of stress: physiologic stress, i.e. exercise; and pathologic stress, i.e. disease related. Likewise, the heart has two potential responses to either stress: cardiac hypertrophy, which is a normal, physiologic, adaptive growth; or cardiac remodeling, which is an abnormal, pathologic, maladaptive growth. Upon being subjected to either stress, the heart "chooses" to turn on one of the responses and turn off the other. If it has chosen the abnormal path, i.e. remodeling, exercise can reverse this choice by turning off remodeling and turning on hypertrophy. The mechanism for reversing this choice is the microRNA miR-222 in cardiac muscle cells, which exercise up-regulates via unknown myokines. miR-222 represses genes involved in fibrosis and cell-cycle control.
In immunomodulation
Immunomodulation and immunoregulation were a particular focus of early myokine research, as, according to Dr. Bente Klarlund Pedersen and her colleagues, "the interactions between exercise and the immune system provided a unique opportunity to evaluate the role of underlying endocrine and cytokine mechanisms."
Muscle has an impact on the trafficking and inflammation of lymphocytes and neutrophils. During exercise, neutrophils, NK cells and other lymphocytes enter the blood. Long-duration, high-intensity exercise leads to a decrease in the number of lymphocytes, while the concentration of neutrophils increases through mechanisms including adrenaline and cortisol. Interleukin-6 has been shown to mediate the increase in cortisol: IL-6 stimulates the production of cortisol and therefore induces leukocytosis and lymphocytopenia.
Specific myokines
Myostatin
Both aerobic exercise and strength training (resistance exercise) attenuate myostatin expression, and myostatin inactivation potentiates the beneficial effects of endurance exercise on metabolism.
Interleukins
Aerobic exercise provokes a systemic cytokine response, including, for example, IL-6, IL-1 receptor antagonist (IL-1ra), and IL-10 (Interleukin 10) and the concentrations of chemokines, IL-8, macrophage inflammatory protein α (MIP-1α), MIP-1β, and MCP-1 rise after vigorous exercise. IL-6 was identified as a myokine based on the observation that it increased in an exponential fashion proportional to the length of exercise and the amount of muscle mass engaged in the exercise. This increase is followed by the appearance of IL-1ra and the anti-inflammatory cytokine IL-10. In general, the cytokine response to exercise and sepsis differs with regard to TNF-α. Thus, the cytokine response to exercise is not preceded by an increase in plasma-TNF-α. Following exercise, the basal plasma IL-6 concentration may increase up to 100-fold, but less dramatic increases are more frequent. The exercise-induced increase of plasma IL-6 occurs in an exponential manner and the peak IL-6 level is reached at the end of the exercise or shortly thereafter. It is the combination of mode, intensity, and duration of the exercise that determines the magnitude of the exercise-induced increase of plasma IL-6.
As studies have demonstrated IL-6 has pro-inflammatory functions when evaluated in regard to sepsis and obesity, it was initially hypothesized that the exercise-induced IL-6 response was related to muscle damage. However, a recent study suggests that eccentric exercise is not associated with a larger increase in plasma IL-6 than exercise involving concentric “nondamaging” muscle contractions. This finding supports the hypothesis that muscle damage is not required to provoke an increase in plasma IL-6 during exercise.
IL-6, among an increasing number of other recently identified myokines, remains an important topic of myokine research. It appears in muscle tissue and in the circulation during exercise at levels up to one hundred times basal rates, as noted, and may have a beneficial impact on health and bodily functioning with transient increases as P. Munoz-Canoves et al. write: "It appears consistently in the literature that IL-6, produced locally by different cell types, has a positive impact on the proliferative capacity of muscle stem cells. This physiological mechanism functions to provide enough muscle progenitors in situations that require a high number of these cells, such as during the processes of muscle regeneration and hypertrophic growth after an acute stimulus. IL-6 is also the founding member of the myokine family of muscle-produced cytokines. Indeed, muscle-produced IL-6 after repeated contractions also has important autocrine and paracrine benefits, acting as a myokine, in regulating energy metabolism, controlling, for example, metabolic functions and stimulating glucose production. It is important to note that these positive effects of IL-6 and other myokines are normally associated with its transient production and short-term action."
Interleukin 15
Interleukin-15 stimulates fat oxidation, glucose uptake, mitochondrial biogenesis and myogenesis in skeletal muscle and adipose tissue. In humans, basal concentrations of IL-15 and its alpha receptor (IL-15Rα) in blood have been inversely associated with physical inactivity and fat mass, particularly trunk fat mass. Moreover, in response to a single session of resistance exercise the IL-15/IL-15Rα complex has been related to myofibrillar protein synthesis (hypertrophy).
Brain-derived neurotrophic factor
Brain-derived neurotrophic factor (BDNF) is also a myokine, though BDNF produced by contracting muscle is not released into circulation. Rather, BDNF produced in skeletal muscle appears to enhance the oxidation of fat. Skeletal muscle activation through exercise also contributes to an increase in BDNF secretion in the brain. A beneficial effect of BDNF on neuronal function has been noted in multiple studies. Dr. Pedersen writes, "Neurotrophins are a family of structurally related growth factors, including brain-derived neurotrophic factor (BDNF), which exert many of their effects on neurons primarily through Trk receptor tyrosine kinases. Of these, BDNF and its receptor TrkB are most widely and abundantly expressed in the brain. However, recent studies show that BDNF is also expressed in non-neurogenic tissues, including skeletal muscle. BDNF has been shown to regulate neuronal development and to modulate synaptic plasticity. BDNF plays a key role in regulating survival, growth and maintenance of neurons, and BDNF has a bearing on learning and memory. However, BDNF has also been identified as a key component of the hypothalamic pathway that controls body mass and energy homeostasis.
"Most recently, we have shown that BDNF appears to be a major player not only in central metabolic pathways but also as a regulator of metabolism in skeletal muscle. Hippocampal samples from Alzheimer’s disease donors show decreased BDNF expression and individuals with Alzheimer’s disease have low plasma levels of BDNF. Also, patients with major depression have lower levels of serum BDNF than normal control subjects. Other studies suggest that plasma BDNF is a biomarker of impaired memory and general cognitive function in ageing women and a low circulating BDNF level was recently shown to be an independent and robust biomarker of mortality risk in old women. Low levels of circulating BDNF are also found in obese individuals and those with type 2 diabetes. In addition, we have demonstrated that there is a cerebral output of BDNF and that this is inhibited during hyperglycaemic clamp conditions in humans. This last finding may explain the concomitant finding of low circulating levels of BDNF in individuals with type 2 diabetes, and the association between low plasma BDNF and the severity of insulin resistance.
BDNF appears to play a role in both neurobiology and metabolism. Studies have demonstrated that physical exercise may increase circulating BDNF levels in humans. To identify whether the brain is a source of BDNF during exercise, eight volunteers rowed for 4 h while simultaneous blood samples were obtained from the radial artery and the internal jugular vein. To further identify the putative cerebral region(s) responsible for BDNF release, mouse brains were dissected and analysed for BDNF mRNA expression following treadmill exercise. In humans, a BDNF release from the brain was observed at rest and increased 2- to 3-fold during exercise. Both at rest and during exercise, the brain contributed 70–80% of the circulating BDNF, while this contribution decreased following 1 h of recovery. In mice, exercise induced a 3- to 5-fold increase in BDNF mRNA expression in the hippocampus and cortex, peaking 2 h after the termination of exercise. These results suggest that the brain is a major but not the sole contributor to circulating BDNF. Moreover, the importance of the cortex and hippocampus as sources of plasma BDNF becomes even more prominent in the response to exercise.”
With respect to studies of exercise and brain function, a 2010 report is of particular interest. Erickson et al. have shown that the volume of the anterior hippocampus increased by 2% in response to aerobic training in a randomized controlled trial with 120 older adults. The authors also summarize several previously-established research findings relating to exercise and brain function: (1) Aerobic exercise training increases grey and white matter volume in the prefrontal cortex of older adults and increases the functioning of key nodes in the executive control network. (2) Greater amounts of physical activity have been associated with sparing of prefrontal and temporal brain regions over a 9-y period, which reduces the risk for cognitive impairment. (3) Hippocampal and medial temporal lobe volumes are larger in higher-fit older adults (larger hippocampal volumes have been demonstrated to mediate improvements in spatial memory). (4) Exercise training increases cerebral blood volume and perfusion of the hippocampus.
Regarding the 2010 study, the authors conclude: "We also demonstrate that increased hippocampal volume is associated with greater serum levels of BDNF, a mediator of neurogenesis in the dentate gyrus. Hippocampal volume declined in the control group, but higher preintervention fitness partially attenuated the decline, suggesting that fitness protects against volume loss. Caudate nucleus and thalamus volumes were unaffected by the intervention. These theoretically important findings indicate that aerobic exercise training is effective at reversing hippocampal volume loss in late adulthood, which is accompanied by improved memory function."
Decorin
Decorin is an example of a proteoglycan which functions as a myokine. Kanzleiter et al have established that this myokine is secreted during muscular contraction against resistance, and plays a role in muscle growth. They reported on July 1, 2014: "The small leucine-rich proteoglycan decorin has been described as a myokine for some time. However, its regulation and impact on skeletal muscle (had) not been investigated in detail. In (our recent) study, we report decorin to be differentially expressed and released in response to muscle contraction using different approaches. Decorin is released from contracting human myotubes, and circulating decorin levels are increased in response to acute resistance exercise in humans. Moreover, decorin expression in skeletal muscle is increased in humans and mice after chronic training. Because decorin directly binds myostatin, a potent inhibitor of muscle growth, we investigated a potential function of decorin in the regulation of skeletal muscle growth. In vivo overexpression of decorin in murine skeletal muscle promoted expression of the pro-myogenic factor Mighty, which is negatively regulated by myostatin. We also found Myod1 and follistatin to be increased in response to decorin overexpression. Moreover, muscle-specific ubiquitin ligases atrogin1 and MuRF1, which are involved in atrophic pathways, were reduced by decorin overexpression. In summary, our findings suggest that decorin secreted from myotubes in response to exercise is involved in the regulation of muscle hypertrophy and hence could play a role in exercise-related restructuring processes of skeletal muscle."
Irisin
Discovery
Irisin is a cleaved version of FNDC5. Boström and coworkers named the cleaved product irisin, after the Greek messenger goddess Iris. FNDC5 was initially discovered in 2002 by two independent groups of researchers.
Function
Irisin (fibronectin type III domain-containing protein 5 or FNDC5), a recently described myokine hormone produced and secreted by acutely exercising skeletal muscles, is thought to bind white adipose tissue cells via undetermined receptors. Irisin has been reported to promote a brown adipose tissue-like phenotype upon white adipose tissue by increasing cellular mitochondrial density and expression of uncoupling protein-1, thereby increasing adipose tissue energy expenditure via thermogenesis. This is considered important, because excess visceral adipose tissue in particular distorts the whole body energy homeostasis, increases the risk of cardiovascular disease and raises exposure to a milieu of adipose tissue-secreted hormones (adipokines) that promote inflammation and cellular aging. The authors enquired whether the favorable impact of irisin on white adipose tissue might be associated with maintenance of telomere length, a well-established genetic marker in the aging process. They conclude that these data support the view that irisin may have a role in the modulation not only of energy balance but also the aging process.
However, exogenous irisin may aid in heightening energy expenditure, and thus in reducing obesity. Boström et al. reported on December 14, 2012: "Since the conservation of calories would likely provide an overall survival advantage for mammals, it appears paradoxical that exercise would stimulate the secretion of a polypeptide hormone that increases thermogenesis and energy expenditure. One explanation for the increased irisin expression with exercise in mouse and man may have evolved as a consequence of muscle contraction during shivering. Muscle secretion of a hormone that activates adipose thermogenesis during this process might provide a broader, more robust defense against hypothermia. The therapeutic potential of irisin is obvious. Exogenously administered irisin induces the browning of subcutaneous fat and thermogenesis, and it presumably could be prepared and delivered as an injectable polypeptide. Increased formation of brown or beige/brite fat has been shown to have anti-obesity, anti-diabetic effects in multiple murine models, and adult humans have significant deposits of UCP1-positive brown fat. (Our data show) that even relatively short treatments of obese mice with irisin improves glucose homeostasis and causes a small weight loss. Whether longer treatments with irisin and/or higher doses would cause more weight loss remains to be determined. The worldwide, explosive increase in obesity and diabetes strongly suggests exploring the clinical utility of irisin in these and related disorders. Another potentially important aspect of this work relates to other beneficial effects of exercise, especially in some diseases for which no effective treatments exist. The clinical data linking exercise with health benefits in many other diseases suggests that irisin could also have significant effects in these disorders."
While the murine findings reported by Boström et al. appear encouraging, other researchers have questioned whether irisin operates in a similar manner in humans. For example, Timmons et al. noted that over 1,000 genes are upregulated by exercise and examined how expression of FNDC5 was affected by exercise in ~200 humans. They found that it was upregulated only in highly active elderly humans, casting doubt on the conclusions of Boström et al. Further discussion of this issue can be found in .
Osteonectin (SPARC)
A novel myokine osteonectin, or SPARC (secreted protein acidic and rich in cysteine), plays a vital role in bone mineralization, cell-matrix interactions, and collagen binding. Osteonectin inhibits tumorigenesis in mice. Osteonectin can be classed as a myokine, as it was found that even a single bout of exercise increased its expression and secretion in skeletal muscle in both mice and humans.
PGC-1
Peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1 alpha) is a specific myokine: it stimulates satellite cells and also stimulates M1 and M2 macrophages. M1 macrophages release interleukin 6 (IL-6), insulin-like growth factor 1 (IGF-1) and vascular endothelial growth factor (VEGF), while M2 macrophages mainly secrete IGF-1, VEGF and monocyte chemoattractant protein 1 (MCP-1); through this process the muscle undergoes hypertrophy.
M2 macrophages stimulate satellite cell proliferation and growth, while M1 macrophages stimulate blood vessel formation and produce pro-inflammatory cytokines; only M2 macrophages produce anti-inflammatory cytokines in muscle.
Myokine in cancer treatments
The myokine oncostatin M has been shown to inhibit the proliferation of breast cancer cells. IL-6, IL-15, epinephrine and norepinephrine contribute to the recruitment of NK cells, to the replacement of old neutrophils with newer, more functional ones, and to limiting the inflammation induced by M1 macrophages while increasing anti-inflammatory M2 macrophages.
References
External links
TED 2012: MAKING MORE MINDS UP TO MOVE
Danish Centre of Inflammation and Metabolism - publications link
Cell biology
Cell communication
Cytokines
Molecular biology
Exercise physiology
Strength training | Myokine | [
"Chemistry",
"Biology"
] | 4,611 | [
"Cell communication",
"Cell biology",
"Signal transduction",
"Cytokines",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
34,372,981 | https://en.wikipedia.org/wiki/Bradsher%20cycloaddition | The Bradsher cycloaddition reaction, also known as the Bradsher cyclization reaction is a form of the Diels–Alder reaction which involves the [4+2] addition of a common dienophile with a cationic aromatic azadiene such as acridizinium or isoquinolinium.
The Bradsher cycloaddition was first reported by C. K. Bradsher and T. W. G. Solomons in 1958.
References
Name reactions
Cycloadditions | Bradsher cycloaddition | [
"Chemistry"
] | 108 | [
"Name reactions"
] |
34,373,016 | https://en.wikipedia.org/wiki/Metallacarboxylic%20acid | A metallacarboxylic acid is a metal complex with the ligand CO2H. These compounds are intermediates in reactions that involve carbon monoxide and carbon dioxide, these species are intermediates in the water gas shift reaction. Metallacarboxylic acids are also called hydroxycarbonyls.
Preparation
Metallacarboxylic acids mainly arise by the attack of hydroxide on electrophilic metal carbonyl complexes. An illustrative synthesis is the reaction of a cationic iron carbonyl with a stoichiometric amount of base:
[(C5H5)(CO)2FeCO]BF4 + NaOH → (C5H5)(CO)2FeCO2H + NaBF4
When applied to simple metal carbonyls, this kind of conversion is sometimes called the Hieber base reaction. Decarboxylation of the resulting anion gives the anionic hydride complex. This conversion is illustrated by the synthesis of [HFe(CO)4]− from iron pentacarbonyl.
Fe(CO)5 + NaOH → NaFe(CO)4CO2H
NaFe(CO)4CO2H → NaHFe(CO)4 + CO2
Related compounds
Metallacarboxylic acids exist in equilibria with the carboxylate anions, LnMCO2−.
Metallacarboxylate esters (LnMCO2R) arise by the addition of alkoxide to metal carbonyl:
[LnM-CO]+ + ROH → [LnM-CO2R] + H+
Metallacarboxylic amides (LnMC(O)NR2) arise by the addition of amide to metal carbonyl:
[LnM-CO]+ + 2 RNH2 → [LnM-C(O)N(H)R] + RNH3+
Derivatives of metalladithiacarboxylic acids are also known. They are prepared by treating anionic complexes with carbon disulfide.
References
Carbonyl complexes
Organometallic chemistry
Transition metals | Metallacarboxylic acid | [
"Chemistry"
] | 447 | [
"Organometallic chemistry"
] |
34,375,922 | https://en.wikipedia.org/wiki/Mattauch%20isobar%20rule | The Mattauch isobar rule, formulated by Josef Mattauch in 1934, states that if two adjacent elements on the periodic table have isotopes of the same mass number, one of the isotopes must be radioactive. Two nuclides that have the same mass number (isobars) can both be stable only if their atomic numbers differ by more than one. In fact, for currently observationally stable nuclides, the difference can only be 2 or 4, and in theory, two nuclides that have the same mass number cannot be both stable (at least to beta decay or double beta decay), but many such nuclides which are theoretically unstable to double beta decay have not been observed to decay, e.g. 134Xe. However, this rule cannot make predictions on the half-lives of these radioisotopes.
Technetium and promethium
A consequence of this rule is that technetium and promethium both have no stable isotopes, as each of the neighboring elements on the periodic table (molybdenum and ruthenium, and neodymium and samarium, respectively) have a beta-stable isotope for each mass number for the range in which the isotopes of the unstable elements usually would be stable to beta decay. (Note that although 147Sm is unstable, it is stable to beta decay; thus 147 is not a counterexample). These ranges can be calculated using the liquid drop model (for example the stability of technetium isotopes), in which the isobar with the lowest mass excess or greatest binding energy is shown to be stable to beta decay because energy conservation forbids a spontaneous transition to a less stable state.
Thus no stable nuclides have proton number 43 or 61, and by the same reasoning no stable nuclides have neutron number 19, 21, 35, 39, 45, 61, 71, 89, 115, or 123.
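The liquid-drop estimate mentioned above can be sketched in a few lines. The code below uses the semi-empirical mass formula with typical textbook coefficients (all of them assumptions for illustration) to pick the proton number Z that minimizes the estimated atomic mass of an isobar; because the formula ignores shell effects, its prediction can differ from the experimentally beta-stable nuclide by a unit or two of Z.

```python
# Semi-empirical (liquid drop) estimate of the most beta-stable Z for mass number A.
# Coefficients in MeV are typical textbook values, assumed here for illustration.
aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
M_H, M_N = 938.783, 939.565           # atomic hydrogen and neutron masses, MeV

def binding_energy(Z, A):
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        delta = aP / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -aP / A ** 0.5
    else:
        delta = 0.0
    return (aV * A - aS * A ** (2 / 3) - aC * Z * (Z - 1) / A ** (1 / 3)
            - aA * (A - 2 * Z) ** 2 / A + delta)

def most_stable_Z(A):
    """Z minimizing the estimated atomic mass of the isobar with mass number A."""
    return min(range(1, A), key=lambda Z: Z * M_H + (A - Z) * M_N - binding_energy(Z, A))

for A in (97, 99, 147):
    print(A, most_stable_Z(A))        # model prediction only; compare with real data
```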
Exceptions
The only known exceptions to the Mattauch isobar rule are the cases of antimony-123 and tellurium-123 and of hafnium-180 and tantalum-180m, where both nuclei are observationally stable. It is predicted that 123Te would undergo electron capture to form 123Sb, but this decay has not yet been observed; 180mTa should be able to undergo isomeric transition to 180Ta, beta decay to 180W, electron capture to 180Hf, or alpha decay to 176Lu, but none of these decay modes have been observed.
In addition, beta decay has been seen for neither curium-247 nor berkelium-247, though it is expected that the former should decay into the latter. Both nuclides are alpha-unstable.
As mentioned above, the Mattauch isobar rule cannot make predictions as to the half-lives of the beta-unstable isotopes. Hence there are a few cases where isobars of adjacent elements both occur primordially, as the half-life of the unstable isobar is over a billion years. This occurs for the following mass numbers:
40 (40Ar and 40Ca stable; 40K unstable)
50 (50Ti and 50Cr stable; 50V unstable)
87 (87Sr stable; 87Rb unstable)
113 (113In stable; 113Cd unstable)
115 (115Sn stable; 115In unstable)
138 (138Ba and 138Ce stable; 138La unstable)
176 (176Yb and 176Hf stable; 176Lu unstable)
187 (187Os stable; 187Re unstable)
See also
Beta-stable
References
Isotopes
Radioactivity | Mattauch isobar rule | [
"Physics",
"Chemistry"
] | 738 | [
"Radioactivity",
"Isotopes",
"Nuclear physics"
] |
3,758,415 | https://en.wikipedia.org/wiki/Nadcap | Nadcap (formerly NADCAP, the National Aerospace and Defense Contractors Accreditation Program) is a global cooperative accreditation program for aerospace engineering, defense and related industries.
History of Nadcap
The Nadcap program is administered by the Performance Review Institute (PRI). Nadcap was established in 1990 by SAE International. Nadcap's membership consists of "prime contractors" who coordinate with aerospace accredited suppliers to develop industry-wide audit criteria for special processes and products. Through PRI, Nadcap provides independent certification of manufacturing processes for the industry. PRI has its headquarters in Warrendale, Pennsylvania with branch offices for Nadcap located in London, Beijing, and Nagoya.
Fields of Nadcap activities
The Nadcap program provides accreditation for special processes in the aerospace and defense industry.
These include:
Aerospace Quality Systems (AQS)
Aero Structure Assembly (ASA)
Chemical Processing (CP)
Coatings (CT)
Composites (COMP)
Conventional Machining as a Special Process (CMSP)
Elastomer Seals (SEAL)
Electronics (ETG)
Fluids Distribution (FLU)
Heat Treating (HT)
Materials Testing Laboratories (MTL)
Measurement & Inspection (M&I)
Metallic Materials Manufacturing (MMM)
Nonconventional Machining and Surface Enhancement (NMSE)
Nondestructive Testing (NDT)
Non Metallic Materials Manufacturing (NMMM)
Non Metallic Materials Testing (NMMT)
Sealants (SLT)
Welding (WLD)
The Nadcap program and industry
PRI schedules an audit and assigns an industry approved auditor who will conduct the audit using an industry agreed checklist. At the end of the audit, any non-conformity issues will be raised through a non-conformance report. PRI will administer and close out the non-conformance reports with the Supplier. Upon completion PRI will present the audit pack to a 'special process Task Group’ made up of members from industry who will review it and vote on its acceptability for approval.
The Nadcap subscribers include:
309th Maintenance Wing-Hill AFB
Aerojet Rocketdyne
Airbus Group - Airbus
Airbus Group - Airbus Defence and Space
Airbus Group - Airbus Helicopters
Airbus Group - Premium AEROTEC GmbH
Airbus Group - Stelia Aerospace
Air Force
BAE Systems Military Air Information (MAI)
BAE Systems
The Boeing Company
Bombardier Inc.
COMAC
Defense Contract Management Agency (DCMA)
Eaton, Aerospace Group
Embraer S.A.
GARDNER AEROSPACE Group
GE Aviation
GE Aviation - GE Avio S.r.l.
General Dynamics - Gulfstream
GKN Aerospace
GKN Aerospace Sweden AB
Harris Corporation
Heroux-Devtek Landing Gear Division Inc.
Honeywell Aerospace
Howmet Aerospace
Israel Aerospace Industries
Latécoère
Leonardo S.p.A. Divisione Velivoli
Leonardo S.p.A. – Helicopter Division
Liebherr-Aerospace SAS
Lockheed Martin Corporation
Lockheed Martin - Sikorsky Aircraft
Mitsubishi Aircraft Corporation
Mitsubishi Heavy Industries LTD
MTU Aero Engines AG
NASA
Northrop Grumman Corporation
Parker Aerospace Group
Raytheon Company
Raytheon Technologies - Goodrich
Raytheon Technologies - Collins Aerospace (Hamilton Sundstrand)
Raytheon Technologies - Pratt & Whitney
Raytheon Technologies - Pratt & Whitney Canada
Raytheon Technologies - Collins Aerospace (Rockwell Collins)
Rolls-Royce
SAFRAN Group
Singapore Technologies Aerospace
Sonaca
Spirit AeroSystems
Swift Engineering
Textron Inc. - Textron Aviation
Textron Inc. - Bell Helicopter
Thales Group
Triumph Group Inc.
Zodiac Aerospace (SAFRAN)
Nadcap Meetings
Nadcap meetings are held several times a year in different locations worldwide. For example, the 2017 meetings were held in New Orleans, Louisiana (February), Berlin, Germany (June), and Pittsburgh, Pennsylvania. During these meetings there are open Task Group meetings and other workshops (with participation of Primes, Suppliers, and PRI staff). These meetings are used to discuss program development and changes to audit criteria, among other topics. Agendas and minutes are posted on the PRI website.
Nadcap Training
During the Nadcap meetings, training classes are provided on different topics such as:
Root Cause Corrective Action - RCCA
Special processes, such as, NDT, chemical processing, etc.
Internal auditing
AS/EN/JISQ 9100
Problem Solving Tools
Nadcap Audit Preparation – Chemical Processing
Nadcap Audit Preparation – Heat Treating
Nadcap Audit Preparation – Metallic Material Testing Laboratories
Nadcap Audit Preparation – Non-Destructive Testing
Nadcap Audit Preparation – Welding
References
External links
Boeing official site
ADS Group official site
Aerospace Manufacturing
Quality Manufacturing Today
Aerospace engineering | Nadcap | [
"Engineering"
] | 911 | [
"Aerospace engineering"
] |
3,758,605 | https://en.wikipedia.org/wiki/Martingale%20representation%20theorem | In probability theory, the martingale representation theorem states that a random variable that is measurable with respect to the filtration generated by a Brownian motion can be written in terms of an Itô integral with respect to this Brownian motion.
The theorem only asserts the existence of the representation and does not help to find it explicitly; it is possible in many cases to determine the form of the representation using Malliavin calculus.
Similar theorems also exist for martingales on filtrations induced by jump processes, for example, by Markov chains.
Statement
Let B_t be a Brownian motion on a standard filtered probability space (\Omega, \mathcal{F}, \mathcal{F}_t, P) and let \mathcal{G}_t be the augmented filtration generated by B. If X is a square-integrable random variable measurable with respect to \mathcal{G}_\infty, then there exists a predictable process C which is adapted with respect to \mathcal{G}_t such that

X = \operatorname{E}(X) + \int_0^\infty C_s \, dB_s.

Consequently,

\operatorname{E}(X \mid \mathcal{G}_t) = \operatorname{E}(X) + \int_0^t C_s \, dB_s.
Application in finance
The martingale representation theorem can be used to establish the existence of a hedging strategy.
Suppose that $S_t$ is a Q-martingale process whose volatility $\sigma_t$ is always non-zero.
Then, if $N_t$ is any other Q-martingale, there exists an $\mathcal{F}$-previsible process $\varphi$, unique up to sets of measure 0, such that $\int_0^T \varphi_t^2 \sigma_t^2\,dt < \infty$ with probability one, and N can be written as:
$$N_t = N_0 + \int_0^t \varphi_s\,dS_s.$$
The replicating strategy is defined to be:
hold $\varphi_t$ units of the stock at the time t, and
hold $\psi_t = C_t - \varphi_t Z_t$ units of the bond,
where $Z_t$ is the stock price discounted by the bond price to time $t$ and $C_t$ is the expected payoff of the option at time $t$.
At the expiration day T, the value of the portfolio is:
$$V_T = \varphi_T Z_T + \psi_T = C_T = X$$
and it is easy to check that the strategy is self-financing: the change in the value of the portfolio only depends on the change of the asset prices.
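To make the representation concrete numerically, the following minimal sketch (not from the source; the horizon, step count and the example claim being checked are illustrative choices) verifies the worked representation of $B_T^2$ with a discretised Itô integral:

```python
# Minimal sketch: check numerically that B_T**2 = T + int_0^T 2*B_s dB_s,
# i.e. the martingale representation of X = B_T^2 given by Ito's formula,
# up to discretisation error from the left-point (Ito) Riemann sum.
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
B = np.concatenate(([0.0], np.cumsum(dB)))  # Brownian path on the grid

ito_integral = np.sum(2.0 * B[:-1] * dB)    # left-point (Ito) Riemann sum
print(B[-1] ** 2, T + ito_integral)         # the two values agree closely
```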
See also
Backward stochastic differential equation
References
Montin, Benoît. (2002) "Stochastic Processes Applied in Finance"
Elliott, Robert (1976) "Stochastic Integrals for Martingales of a Jump Process with Partially Accessible Jump Times", Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 36, 213–226
Martingale theory
Probability theorems | Martingale representation theorem | [
"Mathematics"
] | 444 | [
"Theorems in probability theory",
"Mathematical theorems",
"Mathematical problems"
] |
3,762,839 | https://en.wikipedia.org/wiki/PelB%20leader%20sequence | The pelB leader sequence is a sequence of amino acids which, when attached to a protein, directs the protein to the bacterial periplasm, where the sequence is removed by a signal peptidase. Specifically, pelB refers to pectate lyase B of Erwinia carotovora CE. The leader sequence consists of the 22 N-terminal amino acid residues. This leader sequence can be attached to any other protein (on the DNA level) resulting in a transfer of such a fused protein to the periplasmic space of Gram-negative bacteria, such as Escherichia coli, often used in genetic engineering.
Protein secretion can increase the stability of cloned gene products. For instance, it has been shown that the half-life of recombinant proinsulin is increased 10-fold when the protein is secreted to the periplasmic space (V. Narne, R. S. Ramya).
One of pelB's possible applications is to direct coat protein-antigen fusions to the cell surface for the construction of engineered bacteriophages for the purpose of phage display.
The Pectobacterium carotovorum pelB leader sequence commonly used in molecular biology has the sequence MKYLLPTAAAGLLLLAAQPAMA (UniProt Q04085).
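As a simple illustration of how the leader is attached to a passenger protein at the sequence level (the passenger sequence below is a made-up placeholder, not a real protein):

```python
# Illustrative sketch only: building a pelB-passenger fusion at the sequence level.
# The 22-residue leader (UniProt Q04085) is cleaved off by signal peptidase
# after export to the periplasm, leaving the mature passenger protein.
PELB_LEADER = "MKYLLPTAAAGLLLLAAQPAMA"   # 22 N-terminal residues

def fuse_pelb(passenger: str) -> str:
    """Return the pelB leader fused to the N-terminus of the passenger."""
    return PELB_LEADER + passenger

print(fuse_pelb("MGSSHHHHHHSSGLVPR"))    # hypothetical passenger sequence
```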
References
SP Lei et al., Characterization of the Erwinia carotovora pelB gene and its product pectate lyase. J. Bacteriol. 169(9): 4379–4383 (1987).
Amino acids
Bacteria
Genetic engineering | PelB leader sequence | [
"Chemistry",
"Engineering",
"Biology"
] | 338 | [
"Biomolecules by chemical classification",
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Prokaryotes",
"Genetic engineering",
"Amino acids",
"Bacteria",
"Molecular biology",
"Microorganisms"
] |
21,294,790 | https://en.wikipedia.org/wiki/Generalized%20Environmental%20Modeling%20System%20for%20Surfacewaters | Generalized Environmental Modeling System for Surfacewaters or GEMSS is a public domain software application published by ERM. It has been used for hydrological studies throughout the world.
History
GEMSS has been used for ultimate heat sink analyses at Comanche Peak Nuclear Power Plant and Arkansas Nuclear One. In Pennsylvania it has been applied at PPL Corporation's Brunner Island Steam Electric Station on the lower Susquehanna River, Exelon's Cromby and Limerick Generating Stations on the Schuylkill River, and at several other electric power facilities. River applications for electric power facilities have been made on the Susquehanna (Brunner Island), the Missouri (Labadie Power Station), the Delaware (Mercer and Gilbert Generating Station), the Connecticut (Connecticut Yankee Nuclear Power Plant), and others.
Applications of GEMSS and its individual component modules have been accepted by regulatory agencies in the U.S. and Canada. It is the sole hydrodynamic model listed in the model selection tool database for hydrodynamic and chemical fate models that can perform 1-D, 2-D, and 3-D time-variable modeling for most waterbody types, consider all state variables and include the near- and far-fields. GEMSS can also provide GUIs, grid generation, and GIS linkage tools and has strong documentation.
Features
GEMSS includes a grid generator and editor, control file generator, 2-D and 3-D post processing viewers, and an animation tool. It uses a database approach to store and access model results. The database approach is also used for field data; as a result, the GEMSS viewers can be used to display model results, field data or both, a capability useful for understanding the behavior of the prototype as well as for calibrating the model. The field data analysis features can be used independently of the GEMSS modeling capability.
Modeling techniques
A GEMSS application requires two types of data: (1) spatial data (primarily the waterbody shoreline and bathymetry, but also locations, elevations, and configurations of man-made structures) and (2) temporal data (time-varying boundary condition data defining tidal elevation, inflow rate and temperature, inflow constituent concentration, outflow rate, and meteorological data). All deterministic models, including GEMSS, require uninterrupted time-varying boundary condition data. There can be no long gaps in the datasets and all required datasets must be available during the span of the proposed simulation period.
For input to the model, the spatial data is encoded primarily in two input files: the control and bathymetry files. These files are geo-referenced. The temporal data is encoded in many files, each file representing a set of time-varying boundary conditions, for example, meteorological data for surface heat exchange and wind shear, or inflow rates for a tributary stream. Each record in the boundary condition files is stamped with a year-month-day-hour-minute address. The data can be subjected to quality assurance procedures by using GEMSS to plot, then to visually inspect individual data points, trends and outliers. The set of input files and the GEMSS executable constitute the model application.
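The requirement for uninterrupted, timestamped boundary-condition records lends itself to a simple automated check. The sketch below is illustrative only (it is not part of GEMSS, and the timestamp format and gap tolerance are assumptions): it flags gaps longer than a chosen tolerance in a boundary-condition record.

```python
# Illustrative sketch only: flag gaps in a time-varying boundary-condition
# record before it is handed to a deterministic model. The record layout
# (ISO timestamps, one per observation) and the tolerance are assumptions.
from datetime import datetime, timedelta

def find_gaps(timestamps, max_gap=timedelta(hours=6)):
    """Return (start, end) pairs where consecutive records are farther apart than max_gap."""
    ts = sorted(datetime.fromisoformat(t) for t in timestamps)
    return [(a, b) for a, b in zip(ts, ts[1:]) if b - a > max_gap]

records = ["2017-06-01T00:00", "2017-06-01T01:00", "2017-06-02T00:00"]
print(find_gaps(records))   # reports the 23-hour gap in this toy record
```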
Notes
References
Further reading
Lauzon, Prakash, Salzsauler and Vandenberg. "Use of water quality models for design and evaluation of pit lakes." Mine Pit Lakes: Closure and Management. Australian Center for Geomechanics. Pages 63 to 81.
U. S. Army Engineer Waterways Experiment Station, Environmental Laboratory, Hydraulics Laboratory. "CE-QUAL-W2: A Numerical Two-Dimensional, Laterally Averaged Model of Hydrodynamics and Water Quality" (August 1986) User's Manual. Instruction Report E-86-5. Final Report.
Durand, Kruk, Kempa, Tjomsland. "Vistula Water Quality Modeling" (2011) Pages 165 to 180.
Cvetkovic and Dargahi. 2014. "Hydrodynamic and Transport Characterization of the Baltic Sea 2000-2009" (July 2014). TRITA-LWR Report 2014:03. KTH Royal Institute of Technology, Stockholm.
Kim and Park. "Multidimensional Hydrodynamic and Water Temperature Modeling of Han River System" (2012) Journal of Korean Society on Water Environment. Volume 28. Number 6. Pages 866 to 881.
Na and Park. "A Hydrodynamic and Water Quality Modeling Study of Spatial and Temporal Patterns of Phytoplankton Growth in a Stratified Lake with Buoyant Incoming Flow" (2006) Ecological Modelling 199. Pages 298 to 314.
Na and Park. "A Hydrodynamic Modeling Study to Determine the Optimum Water Intake Location in Lake Paldang, Korea" (2005) Journal of the American Water Resources Association. Volume 41. Issue 6. Pages 1315 to 1332.
HydroGeoLogic and Aqua Terra. "Selection of Water Quality Components for Eutrophication-Related Total Maximum Daily Load Assessments - Task 4: Documentation of Review and Evaluation of Eutrophication Models and Components". (June 1999) EPA Contract Number 68 C6 0020. Work Assignment Number 2 04.
Computational fluid dynamics
Environmental science software
Integrated hydrologic modelling
Public-domain software | Generalized Environmental Modeling System for Surfacewaters | [
"Physics",
"Chemistry",
"Environmental_science"
] | 1,087 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Environmental science software",
"Computational physics"
] |
21,296,590 | https://en.wikipedia.org/wiki/Ethyl%20chloroformate | Ethyl chloroformate is an organic compound with the chemical formula . It is the ethyl ester of chloroformic acid. It is a colorless, corrosive and highly toxic liquid. It is a reagent used in organic synthesis for the introduction of the ethyl carbamate protecting group and for the formation of carboxylic anhydrides.
Preparation
Ethyl chloroformate can be prepared using ethanol and phosgene:
CH3CH2OH + COCl2 → CH3CH2OC(O)Cl + HCl
Safety
Ethyl chloroformate is a highly toxic, flammable, corrosive substance. It causes severe burns on contact with the eyes or skin and can be harmful if swallowed or inhaled.
References
Chloroformates
Reagents for organic chemistry | Ethyl chloroformate | [
"Chemistry"
] | 159 | [
"Reagents for organic chemistry"
] |
21,301,778 | https://en.wikipedia.org/wiki/Gelation | In polymer chemistry, gelation (gel transition) is the formation of a gel from a system with polymers. Branched polymers can form links between the chains, which lead to progressively larger polymers. As the linking continues, larger branched polymers are obtained and at a certain extent of the reaction, links between the polymer result in the formation of a single macroscopic molecule. At that point in the reaction, which is defined as gel point, the system loses fluidity and viscosity becomes very large. The onset of gelation, or gel point, is accompanied by a sudden increase in viscosity. This "infinite" sized polymer is called the gel or network, which does not dissolve in the solvent, but can swell in it.
Background
Gelation is promoted by gelling agents.
Gelation can occur either by physical linking or by chemical crosslinking. While the physical gels involve physical bonds, chemical gelation involves covalent bonds. The first quantitative theories of chemical gelation were formulated in the 1940s by Flory and Stockmayer. Critical percolation theory was successfully applied to gelation in 1970s. A number of growth models (diffusion limited aggregation, cluster-cluster aggregation, kinetic gelation) were developed in the 1980s to describe the kinetic aspects of aggregation and gelation.
Quantitative approaches to determine gelation
It is important to be able to predict the onset of gelation, since it is an irreversible process that dramatically changes the properties of the system.
Average functionality approach
According to the Carothers equation the number-average degree of polymerization $\bar{X}_n$ is given by
$$\bar{X}_n = \frac{2}{2 - p f_{av}}$$
where $p$ is the extent of the reaction and $f_{av}$ is the average functionality of the reaction mixture. For the gel, $\bar{X}_n$ can be considered to be infinite, thus the critical extent of the reaction at the gel point is found as
$$p_c = \frac{2}{f_{av}}.$$
If $f_{av}$ is greater than or equal to 2, gelation occurs.
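A small numerical sketch of the average-functionality approach (the monomer mixture below is a hypothetical example, not from the source):

```python
# Sketch: critical extent of reaction from the Carothers relation p_c = 2 / f_av.
def critical_extent(moles, functionalities):
    """f_av is the mole-weighted average functionality of the mixture."""
    f_av = sum(n * f for n, f in zip(moles, functionalities)) / sum(moles)
    return 2.0 / f_av        # gelation is possible only if f_av >= 2 (so that p_c <= 1)

# Hypothetical mixture: 2 mol of a trifunctional monomer + 3 mol of a difunctional one
print(critical_extent([2, 3], [3, 2]))   # f_av = 2.4, p_c ≈ 0.833
```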
Flory Stockmayer approach
Flory and Stockmayer used a statistical approach to derive an expression to predict the gel point by calculating when $\bar{X}_n$ approaches infinite size. The statistical approach assumes that (1) the reactivity of the functional groups of the same type is the same and independent of the molecular size and (2) there are no intramolecular reactions between the functional groups on the same molecule.
Consider the polymerization of bifunctional molecules A–A and B–B together with the multifunctional branch unit A$_f$, where $f$ is the functionality. The extents of reaction of the A and B functional groups are $p_A$ and $p_B$, respectively. The ratio of all A groups, both reacted and unreacted, that are part of branched units, to the total number of A groups in the mixture is defined as $\rho$. This leads to chains of the form
$$\mathrm{A}_f\,(\mathrm{B{-}B}\;\mathrm{A{-}A})_i\,\mathrm{B{-}B}\,\mathrm{A}_f$$
The probability of obtaining the product of the reaction above is given by $p_A\left[(1-\rho)p_B p_A\right]^{i}\rho p_B$, since the probability that a B group reacts with a branched unit is $\rho p_A$ and the probability that a B group reacts with a non-branched A is $(1-\rho)p_A$.
This relation yields an expression for the extent of reaction of A functional groups at the gel point
$$p_c = \frac{1}{\left\{r\left[1 + \rho(f-2)\right]\right\}^{1/2}}$$
where $r$ is the ratio of all A groups to all B groups. If more than one type of multifunctional branch unit is present, an average $f$ value is used for all monomer molecules with functionality greater than 2.
Note that the relation does not apply to reaction systems containing monofunctional reactants and/or both A and B types of branch units.
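The Flory–Stockmayer gel point can be evaluated in the same way; the sketch below simply encodes the expression above for a hypothetical stoichiometry (the numbers are illustrative):

```python
# Sketch: Flory-Stockmayer critical extent of reaction of A groups,
# p_c = 1 / sqrt(r * (1 + rho * (f - 2))), for a hypothetical A-A / B-B / A_f mixture.
from math import sqrt

def flory_stockmayer_pc(r, rho, f):
    """r: ratio of A to B groups, rho: fraction of A groups on branch units,
    f: functionality of the branch units."""
    return 1.0 / sqrt(r * (1.0 + rho * (f - 2)))

print(flory_stockmayer_pc(r=1.0, rho=0.3, f=3))   # ≈ 0.877 for this example
```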
Erdős–Rényi model
Gelation of polymers can be described in the framework of the Erdős–Rényi model or the Lushnikov model, which answers the question when a giant component arises.
Random graph
The structure of a gel network can be conceptualised as a random graph. This analogy is exploited to calculate the gel point and gel fraction for monomer precursors with arbitrary types of functional groups. Random graphs can be used to derive analytical expressions for simple polymerisation mechanisms, such as step-growth polymerisation, or alternatively, they can be combined with a system of rate equations that are integrated numerically.
See also
Mechanics of gelation
References
Gels
Drug delivery devices
Dosage forms
Colloids
Organic chemistry
Physical chemistry
Polymer chemistry | Gelation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 807 | [
"Pharmacology",
"Applied and interdisciplinary physics",
"Drug delivery devices",
"Materials science",
"Colloids",
"Chemical mixtures",
"Condensed matter physics",
"nan",
"Gels",
"Polymer chemistry",
"Physical chemistry"
] |
21,302,154 | https://en.wikipedia.org/wiki/Costly%20state%20verification | Costly State Verification (CSV) is an approach in contract theory that considers a contract design problem in which verification (or disclosure) of enterprise performance is costly and a lender has to pay a monitoring cost.
A central result of the CSV approach is that it is generally optimal to commit to a partial, state-contingent disclosure rule. Robert M. Townsend (1979) has shown that under a few strong assumptions the optimal financing mechanism is a standard debt contract, for which there is no disclosure of the debtor's performance as long as the debt is honored, but there is full disclosure (verification) in case of default.
Viewed from the CSV perspective, the main function of bankruptcy institutions is to establish a clear inventory of all assets and liabilities and to assess the net value of the firm.
The standard setup for financial contracting problems in the CSV framework involves two risk-neutral agents: a wealth-constrained entrepreneur with an investment project, and a wealthy investor with capital available. The fixed capital invested in the project generates a random cash flow at future time t with a probability distribution over the possible range of profits. The entrepreneur has private information about the realized cash flows from the project, but can credibly disclose them to the investor by incurring a certain cost.
The solution to this problem should provide an ex-ante optimal contract structure which specifies in which scenarios the realized cash flow should be audited and certified.
With no audit the entrepreneur would never be able to raise any money from the investor, since a rational investor anticipates that the entrepreneur will lie about the realized profit to avoid repaying the investor.
However, in the CSV framework, regulated mandatory periodic disclosure of entrepreneurial performance is not efficient and imposes excessive disclosure costs.
The optimal financial contract in the CSV model gives the creditor the right to all assets of the project in the event of default, at a fixed bankruptcy cost that must be incurred to collect the proceeds.
The result that the standard debt contract is optimal does not hold in the case of multiple investors or multiple risky projects undertaken by the entrepreneur.
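A minimal numerical sketch of the standard debt contract described above (not from the source; the cash-flow grid, face value and verification cost are hypothetical numbers chosen for illustration): the lender is repaid the face value when the debtor can pay, and in default the lender verifies and seizes the realized cash flow, net of the monitoring cost.

```python
# Illustrative sketch: lender's expected payoff under a standard debt contract
# in a costly-state-verification setting. All numbers are hypothetical.
import numpy as np

def lender_expected_payoff(cash_flows, probs, face_value, verification_cost):
    """Debtor pays min(y, D); default states (y < D) are audited at a cost,
    after which the lender seizes the realized cash flow."""
    repaid = np.minimum(cash_flows, face_value)   # debtor pays min(y, D)
    audited = cash_flows < face_value             # verification only in default
    return np.sum(probs * (repaid - verification_cost * audited))

cash_flows = np.array([20.0, 60.0, 100.0, 140.0])   # possible realized profits y
probs      = np.array([0.25, 0.25, 0.25, 0.25])     # their probabilities
print(lender_expected_payoff(cash_flows, probs, face_value=80.0, verification_cost=5.0))
```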
See also
Complete contract
Contract theory (Economics)
Agency cost
References
Townsend, R.M., 1979. Optimal contracts and competitive markets with costly state verification. Journal of Economic Theory 21(2), 265–293.
Gale, Douglas and Hellwig, Martin (1985), "Incentive-Compatible Debt Contracts I: The One-Period Problem", Review of Economic Studies 52, 647-64
Bolton, Patrick and Dewatripont, Mathias. Contract Theory. MIT Press, 2005.
Game theory
Asymmetric information | Costly state verification | [
"Physics",
"Mathematics"
] | 512 | [
"Game theory",
"Asymmetric information",
"Symmetry",
"Asymmetry"
] |
21,304,415 | https://en.wikipedia.org/wiki/Sexual%20reproduction | Sexual reproduction is a type of reproduction that involves a complex life cycle in which a gamete (haploid reproductive cells, such as a sperm or egg cell) with a single set of chromosomes combines with another gamete to produce a zygote that develops into an organism composed of cells with two sets of chromosomes (diploid). This is typical in animals, though the number of chromosome sets and how that number changes in sexual reproduction varies, especially among plants, fungi, and other eukaryotes.
In placental mammals, sperm cells exit the penis through the male urethra and enter the vagina during copulation, while egg cells enter the uterus through the oviduct. Other vertebrates of both sexes possess a cloaca for the release of sperm or egg cells.
Sexual reproduction is the most common life cycle in multicellular eukaryotes, such as animals, fungi and plants. Sexual reproduction also occurs in some unicellular eukaryotes. Sexual reproduction does not occur in prokaryotes, unicellular organisms without cell nuclei, such as bacteria and archaea. However, some processes in bacteria, including bacterial conjugation, transformation and transduction, may be considered analogous to sexual reproduction in that they incorporate new genetic information. Some proteins and other features that are key for sexual reproduction may have arisen in bacteria, but sexual reproduction is believed to have developed in an ancient eukaryotic ancestor.
In eukaryotes, diploid precursor cells divide to produce haploid cells in a process called meiosis. In meiosis, DNA is replicated to produce a total of four copies of each chromosome. This is followed by two cell divisions to generate haploid gametes. After the DNA is replicated in meiosis, the homologous chromosomes pair up so that their DNA sequences are aligned with each other. During this period before cell divisions, genetic information is exchanged between homologous chromosomes in genetic recombination. Homologous chromosomes contain highly similar but not identical information, and by exchanging similar but not identical regions, genetic recombination increases genetic diversity among future generations.
During sexual reproduction, two haploid gametes combine into one diploid cell known as a zygote in a process called fertilization. The nuclei from the gametes fuse, and each gamete contributes half of the genetic material of the zygote. Multiple cell divisions by mitosis (without change in the number of chromosomes) then develop into a multicellular diploid phase or generation. In plants, the diploid phase, known as the sporophyte, produces spores by meiosis. These spores then germinate and divide by mitosis to form a haploid multicellular phase, the gametophyte, which produces gametes directly by mitosis. This type of life cycle, involving alternation between two multicellular phases, the sexual haploid gametophyte and asexual diploid sporophyte, is known as alternation of generations.
The evolution of sexual reproduction is considered paradoxical, because asexual reproduction should be able to outperform it as every young organism created can bear its own young. This implies that an asexual population has an intrinsic capacity to grow more rapidly with each generation. This 50% cost is a fitness disadvantage of sexual reproduction. The two-fold cost of sex includes this cost and the fact that any organism can only pass on 50% of its own genes to its offspring. However, one definite advantage of sexual reproduction is that it increases genetic diversity and impedes the accumulation of harmful genetic mutations.
Sexual selection is a mode of natural selection in which some individuals out-reproduce others of a population because they are better at securing mates for sexual reproduction. It has been described as "a powerful evolutionary force that does not exist in asexual populations".
Evolution
The first fossilized evidence of sexual reproduction in eukaryotes is from the Stenian period, about 1.05 billion years old.
Biologists studying evolution propose several explanations for the development of sexual reproduction and its maintenance. These reasons include reducing the likelihood of the accumulation of deleterious mutations, increasing rate of adaptation to changing environments, dealing with competition, DNA repair, masking deleterious mutations, and reducing genetic variation on the genomic level. All of these ideas about why sexual reproduction has been maintained are generally supported, but ultimately the size of the population determines if sexual reproduction is entirely beneficial. Larger populations appear to respond more quickly to some of the benefits obtained through sexual reproduction than do smaller population sizes.
However, newer models presented in recent years suggest a basic advantage for sexual reproduction in slowly reproducing complex organisms.
Sexual reproduction allows these species to exhibit characteristics that depend on the specific environment that they inhabit, and the particular survival strategies that they employ.
Sexual selection
In order to reproduce sexually, both males and females need to find a mate. Generally in animals mate choice is made by females while males compete to be chosen. This can lead organisms to extreme efforts in order to reproduce, such as combat and display, or produce extreme features caused by a positive feedback known as a Fisherian runaway. Thus sexual reproduction, as a form of natural selection, has an effect on evolution. Sexual dimorphism is where the basic phenotypic traits vary between males and females of the same species. Dimorphism is found in both sex organs and in secondary sex characteristics, body size, physical strength and morphology, biological ornamentation, behavior and other bodily traits. However, sexual selection is only implied over an extended period of time leading to sexual dimorphism.
Animals
Arthropods
Insects
Insect species make up more than two-thirds of all extant animal species. Most insect species reproduce sexually, though some species are facultatively parthenogenetic. Many insect species have sexual dimorphism, while in others the sexes look nearly identical. Typically they have two sexes with males producing spermatozoa and females ova. The ova develop into eggs that have a covering called the chorion, which forms before internal fertilization. Insects have very diverse mating and reproductive strategies most often resulting in the male depositing a spermatophore within the female, which she stores until she is ready for egg fertilization. After fertilization, and the formation of a zygote, and varying degrees of development, in many species the eggs are deposited outside the female; while in others, they develop further within the female and the young are born live.
Mammals
There are three extant kinds of mammals: monotremes, placentals and marsupials, all with internal fertilization. In placental mammals, offspring are born as juveniles: complete animals with the sex organs present although not reproductively functional. After several months or years, depending on the species, the sex organs develop further to maturity and the animal becomes sexually mature. Most female mammals are only fertile during certain periods during their estrous cycle, at which point they are ready to mate. For most mammals, males and females exchange sexual partners throughout their adult lives.
Fish
The vast majority of fish species lay eggs that are then fertilized by the male. Some species lay their eggs on a substrate like a rock or on plants, while others scatter their eggs and the eggs are fertilized as they drift or sink in the water column.
Some fish species use internal fertilization and then disperse the developing eggs or give birth to live offspring. Fish that have live-bearing offspring include the guppy and mollies or Poecilia. Fishes that give birth to live young can be ovoviviparous, where the eggs are fertilized within the female and the eggs simply hatch within the female body, or in seahorses, the male carries the developing young within a pouch, and gives birth to live young. Fishes can also be viviparous, where the female supplies nourishment to the internally growing offspring. Some fish are hermaphrodites, where a single fish is both male and female and can produce eggs and sperm. In hermaphroditic fish, some are male and female at the same time while in other fish they are serially hermaphroditic; starting as one sex and changing to the other. In at least one hermaphroditic species, self-fertilization occurs when the eggs and sperm are released together. Internal self-fertilization may occur in some other species. One fish species does not reproduce by sexual reproduction but uses sex to produce offspring; Poecilia formosa is a unisex species that uses a form of parthenogenesis called gynogenesis, where unfertilized eggs develop into embryos that produce female offspring. Poecilia formosa mate with males of other fish species that use internal fertilization, the sperm does not fertilize the eggs but stimulates the growth of the eggs which develops into embryos.
Plants
Animals have life cycles with a single diploid multicellular phase that produces haploid gametes directly by meiosis. Male gametes are called sperm, and female gametes are called eggs or ova. In animals, fertilization of the ovum by a sperm results in the formation of a diploid zygote that develops by repeated mitotic divisions into a diploid adult. Plants have two multicellular life-cycle phases, resulting in an alternation of generations. Plant zygotes germinate and divide repeatedly by mitosis to produce a diploid multicellular organism known as the sporophyte. The mature sporophyte produces haploid spores by meiosis that germinate and divide by mitosis to form a multicellular gametophyte phase that produces gametes at maturity. The gametophytes of different groups of plants vary in size. Mosses and other pteridophytic plants may have gametophytes consisting of several million cells, while angiosperms have as few as three cells in each pollen grain.
Flowering plants
Flowering plants are the dominant plant form on land and they reproduce either sexually or asexually. Often their most distinctive feature is their reproductive organs, commonly called flowers. The anther produces pollen grains which contain the male gametophytes that produce sperm nuclei. For pollination to occur, pollen grains must attach to the stigma of the female reproductive structure (carpel), where the female gametophytes are located within ovules enclosed within the ovary. After the pollen tube grows through the carpel's style, the sex cell nuclei from the pollen grain migrate into the ovule to fertilize the egg cell and endosperm nuclei within the female gametophyte in a process termed double fertilization. The resulting zygote develops into an embryo, while the triploid endosperm (one sperm cell plus two female cells) and female tissues of the ovule give rise to the surrounding tissues in the developing seed. The ovary, which produced the female gametophyte(s), then grows into a fruit, which surrounds the seed(s). Plants may either self-pollinate or cross-pollinate.
In 2013, flowers dating from the Cretaceous (100 million years before present) were found encased in amber, the oldest evidence of sexual reproduction in a flowering plant. Microscopic images showed tubes growing out of pollen and penetrating the flower's stigma. The pollen was sticky, suggesting it was carried by insects.
Ferns
Ferns produce large diploid sporophytes with rhizomes, roots and leaves. Fertile leaves produce sporangia that contain haploid spores. The spores are released and germinate to produce small, thin gametophytes that are typically heart shaped and green in color. The gametophyte prothalli produce motile sperm in the antheridia and egg cells in archegonia on the same or different plants. After rains or when dew deposits a film of water, the motile sperm are splashed away from the antheridia, which are normally produced on the top side of the thallus, and swim in the film of water to the archegonia where they fertilize the egg. To promote outcrossing or cross-fertilization, the sperm are released before the eggs are receptive to the sperm, making it more likely that the sperm will fertilize the eggs of a different thallus. After fertilization, a zygote is formed which grows into a new sporophytic plant. The condition of having separate sporophyte and gametophyte plants is called alternation of generations.
Bryophytes
The bryophytes, which include liverworts, hornworts and mosses, reproduce both sexually and vegetatively. They are small plants found growing in moist locations and like ferns, have motile sperm with flagella and need water to facilitate sexual reproduction. These plants start as a haploid spore that grows into the dominant gametophyte form, which is a multicellular haploid body with leaf-like structures that photosynthesize. Haploid gametes are produced in antheridia (male) and archegonia (female) by mitosis. The sperm released from the antheridia respond to chemicals released by ripe archegonia and swim to them in a film of water and fertilize the egg cells thus producing a zygote. The zygote divides by mitotic division and grows into a multicellular, diploid sporophyte. The sporophyte produces spore capsules (sporangia), which are connected by stalks (setae) to the archegonia. The spore capsules produce spores by meiosis and when ripe the capsules burst open to release the spores. Bryophytes show considerable variation in their reproductive structures and the above is a basic outline. Also in some species each plant is one sex (dioicous) while other species produce both sexes on the same plant (monoicous).
Fungi
Fungi are classified by the methods of sexual reproduction they employ. The outcome of sexual reproduction most often is the production of resting spores that are used to survive inclement times and to spread. There are typically three phases in the sexual reproduction of fungi: plasmogamy, karyogamy and meiosis. The cytoplasm of two parent cells fuse during plasmogamy and the nuclei fuse during karyogamy. New haploid gametes are formed during meiosis and develop into spores. The adaptive basis for the maintenance of sexual reproduction in the Ascomycota and Basidiomycota (dikaryon) fungi was reviewed by Wallen and Perlin. They concluded that the most plausible reason for maintaining this capability is the benefit of repairing DNA damage, caused by a variety of stresses, through recombination that occurs during meiosis.
Bacteria and archaea
Three distinct processes in prokaryotes are regarded as similar to eukaryotic sex: bacterial transformation, which involves the incorporation of foreign DNA into the bacterial chromosome; bacterial conjugation, which is a transfer of plasmid DNA between bacteria, but the plasmids are rarely incorporated into the bacterial chromosome; and gene transfer and genetic exchange in archaea.
Bacterial transformation involves the recombination of genetic material and its function is mainly associated with DNA repair. Bacterial transformation is a complex process encoded by numerous bacterial genes, and is a bacterial adaptation for DNA transfer. This process occurs naturally in at least 40 bacterial species. For a bacterium to bind, take up, and recombine exogenous DNA into its chromosome, it must enter a special physiological state referred to as competence (see Natural competence). Sexual reproduction in early single-celled eukaryotes may have evolved from bacterial transformation, or from a similar process in archaea (see below).
On the other hand, bacterial conjugation is a type of direct transfer of DNA between two bacteria mediated by an external appendage called the conjugation pilus. Bacterial conjugation is controlled by plasmid genes that are adapted for spreading copies of the plasmid between bacteria. The infrequent integration of a plasmid into a host bacterial chromosome, and the subsequent transfer of a part of the host chromosome to another cell do not appear to be bacterial adaptations.
Exposure of hyperthermophilic archaeal Sulfolobus species to DNA damaging conditions induces cellular aggregation accompanied by high frequency genetic marker exchange. Ajon et al. hypothesized that this cellular aggregation enhances species-specific DNA repair by homologous recombination. DNA transfer in Sulfolobus may be an early form of sexual interaction similar to the more well-studied bacterial transformation systems that also involve species-specific DNA transfer leading to homologous recombinational repair of DNA damage.
See also
Amphimixis (psychology)
Androgenesis
Anisogamy
Biological reproduction
Dioecy
Heterosexuality
Hermaphroditism
Isogamy
Mate choice
Mating in fungi
Operational sex ratio
Outcrossing
Allogamy
Self-incompatibility
Sex
Sexual intercourse
Transformation (genetics)
Gynogenesis
References
Further reading
Pang, K. "Certificate Biology: New Mastering Basic Concepts", Hong Kong, 2004
Journal of Biology of Reproduction, accessed in August 2005.
"Sperm Use Heat Sensors To Find The Egg; Weizmann Institute Research Contributes To Understanding Of Human Fertilization", Science Daily, 3 February 2003
External links
Khan Academy, video lecture
Sexual Reproduction and the Evolution of Sex (Archived (2023)) − Nature journal (2008)
Developmental biology
Fertility
Reproduction
Sexuality | Sexual reproduction | [
"Biology"
] | 3,656 | [
"Behavior",
"Developmental biology",
"Sex",
"Reproduction",
"Biological interactions",
"Sexual reproduction",
"Sexuality"
] |
21,304,461 | https://en.wikipedia.org/wiki/Steam | Steam is water vapour (water in the gas phase), often mixed with air and/or an aerosol of liquid water droplets. This may occur due to evaporation or due to boiling, where heat is applied until water reaches the enthalpy of vaporization. Steam that is saturated or superheated (water vapor) is invisible; however, wet steam, a visible mist or aerosol of water droplets, is often referred to as "steam".
When liquid water becomes steam, it increases in volume by 1,700 times at standard temperature and pressure; this change in volume can be converted into mechanical work by steam engines such as reciprocating piston type engines and steam turbines, which are a sub-group of steam engines. Piston type steam engines played a central role in the Industrial Revolution and modern steam turbines are used to generate more than 80% of the world's electricity. If liquid water comes in contact with a very hot surface or depressurizes quickly below its vapour pressure, it can create a steam explosion.
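The 1,700-fold figure can be checked with a rough back-of-the-envelope estimate, treating saturated steam at 100 °C and 1 atm as an ideal gas (an illustrative approximation rather than a steam-table value):

```python
# Sketch: estimate the liquid-to-vapour volume ratio of water at 100 °C and 1 atm
# using the ideal-gas law; steam-table values give a similar ~1,600-1,700x ratio.
R = 8.314          # J/(mol*K), gas constant
T = 373.15         # K, boiling point at 1 atm
P = 101_325        # Pa
M = 0.018          # kg/mol, molar mass of water

v_vapour = R * T / (P * M)     # specific volume of the vapour, m^3/kg (~1.7)
v_liquid = 1.0e-3              # specific volume of liquid water, m^3/kg
print(v_vapour / v_liquid)     # ≈ 1,700
```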
Types of steam and conversions
Steam is traditionally created by heating a boiler via burning coal and other fuels, but it is also possible to create steam with solar energy. Water vapour that includes water droplets is described as wet steam. As wet steam is heated further, the droplets evaporate, and at a high enough temperature (which depends on the pressure) all of the water evaporates and the system is in vapour–liquid equilibrium. When steam has reached this equilibrium point, it is referred to as saturated steam.
Superheated steam or live steam is steam at a temperature higher than its boiling point for the pressure, which only occurs when all liquid water has evaporated or has been removed from the system.
Steam tables contain thermodynamic data for water/saturated steam and are often used by engineers and scientists in design and operation of equipment where thermodynamic cycles involving steam are used. Additionally, thermodynamic phase diagrams for water/steam, such as a temperature-entropy diagram or a Mollier diagram shown in this article, may be useful. Steam charts are also used for analysing thermodynamic cycles.
Uses
Agricultural
In agriculture, steam is used for soil sterilization to avoid the use of harmful chemical agents and increase soil health.
Domestic
Steam's capacity to transfer heat is also used in the home: for cooking vegetables, steam cleaning of fabric, carpets and flooring, and for heating buildings. In each case, water is heated in a boiler, and the steam carries the energy to a target object. Steam is also used in ironing clothes to add enough humidity with the heat to take wrinkles out and put intentional creases into the clothing.
Electricity generation (and co-generation)
As of 2000 around 90% of all electricity was generated using steam as the working fluid, nearly all by steam turbines.
In electric generation, steam is typically condensed at the end of its expansion cycle, and returned to the boiler for re-use. However, in co-generation, steam is piped into buildings through a district heating system to provide heat energy after its use in the electric generation cycle. The world's biggest steam generation system is the New York City steam system, which pumps steam into 100,000 buildings in Manhattan from seven co-generation plants.
Energy storage
In other industrial applications steam is used for energy storage, which is introduced and extracted by heat transfer, usually through pipes. Steam is a capacious reservoir for thermal energy because of water's high heat of vaporization.
Fireless steam locomotives were steam locomotives that operated from a supply of steam stored on board in a large tank resembling a conventional locomotive's boiler. This tank was filled by process steam, as is available in many sorts of large factory, such as paper mills. The locomotive's propulsion used pistons and connecting rods, as for a typical steam locomotive. These locomotives were mostly used in places where there was a risk of fire from a boiler's firebox, but were also used in factories that simply had a plentiful supply of steam to spare.
Mechanical effort
Steam engines and steam turbines use the expansion of steam to drive a piston or turbine to perform mechanical work. The ability to return condensed steam as water-liquid to the boiler at high pressure with relatively little expenditure of pumping power is important. Condensation of steam to water often occurs at the low-pressure end of a steam turbine, since this maximizes the energy efficiency, but such wet-steam conditions must be limited to avoid excessive turbine blade erosion. Engineers use an idealised thermodynamic cycle, the Rankine cycle, to model the behaviour of steam engines. Steam turbines are often used in the production of electricity.
Sterilization
An autoclave, which uses steam under pressure, is used in microbiology laboratories and similar environments for sterilization.
Steam, especially dry (highly superheated) steam, may be used for antimicrobial cleaning even to the levels of sterilization. Steam is a non-toxic antimicrobial agent.
Steam in piping
Steam is used in piping for utility lines. It is also used in jacketing and tracing of piping to maintain the uniform temperature in pipelines and vessels.
Industrial processes
Steam is used across multiple industries for its ability to transfer heat to drive chemical reactions, sterilize or disinfect objects, and maintain constant temperatures. In the lumber industry, steam is used in the process of wood bending, killing insects, and increasing plasticity. Steam is also used to accelerate the drying of concrete, especially in prefabricated elements.
Care should be taken, since concrete produces heat during hydration and additional heat from the steam could be detrimental to the hardening reactions of the concrete. In chemical and petrochemical industries, steam is used in various chemical processes as a reactant. Steam cracking of long-chain hydrocarbons produces lower molecular weight hydrocarbons for fuel or other chemical applications. Steam reforming produces syngas or hydrogen.
Cleaning
Steam is used in cleaning of fibers and other materials, sometimes in preparation for painting. Steam is also useful in melting hardened grease and oil residues, so it is useful in cleaning kitchen floors and equipment as well as internal combustion engines and parts. Among the advantages of using steam versus a hot water spray are the facts that steam can operate at higher temperatures and it uses substantially less water per minute.
See also
Electrification
Food steamer or steam cooker
Geyser—geothermally-generated steam
IAPWS—an association that maintains international-standard correlations for the thermodynamic properties of steam, including IAPWS-IF97 (for use in industrial simulation and modelling) and IAPWS-95 (a general purpose and scientific correlation).
Industrial Revolution
Live steam
Mass production
Nuclear power—and power plants use steam to generate electricity
Oxyhydrogen
Psychrometrics—moist air–vapor mixtures, humidity, and air conditioning
Steam–electric power station
Steam locomotive
Sterilization (microbiology)
References
External links
Thermophysical Properties of Fluid Systems, Steam Tables & Charts by National Institute of Standards and Technology, NIST
Forms of water
Gases
Water in gas | Steam | [
"Physics",
"Chemistry"
] | 1,470 | [
"Matter",
"Physical quantities",
"Phases of matter",
"Steam power",
"Power (physics)",
"Forms of water",
"Statistical mechanics",
"Gases"
] |
21,306,150 | https://en.wikipedia.org/wiki/Random-access%20memory | Random-access memory (RAM; ) is a form of electronic computer memory that can be read and changed in any order, typically used to store working data and machine code. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of data inside the memory, in contrast with other direct-access data storage media (such as hard disks and magnetic tape), where the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement.
In today's technology, random-access memory takes the form of integrated circuit (IC) chips with MOS (metal–oxide–semiconductor) memory cells. RAM is normally associated with volatile types of memory where stored information is lost if power is removed. The two main types of volatile random-access semiconductor memory are static random-access memory (SRAM) and dynamic random-access memory (DRAM).
Non-volatile RAM has also been developed and other types of non-volatile memories allow random access for read operations, but either do not allow write operations or have other kinds of limitations. These include most types of ROM and NOR flash memory.
The use of semiconductor RAM dates back to 1965 when IBM introduced the monolithic (single-chip) 16-bit SP95 SRAM chip for their System/360 Model 95 computer, and Toshiba used bipolar DRAM memory cells for its 180-bit Toscal BC-1411 electronic calculator, both based on bipolar transistors. While it offered higher speeds than magnetic-core memory, bipolar DRAM could not compete with the lower price of the then-dominant magnetic-core memory. In 1966, Dr. Robert Dennard invented modern DRAM architecture in which there is a single MOS transistor per capacitor. The first commercial DRAM IC chip, the 1K Intel 1103, was introduced in October 1970. Synchronous dynamic random-access memory (SDRAM) was reintroduced with the Samsung KM48SL2000 chip in 1992.
History
Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines were serial devices which could only reproduce data in the order it was written. Drum memory could be expanded at relatively low cost but efficient retrieval of memory items requires knowledge of the physical layout of the drum to optimize speed. Latches built out of triode vacuum tubes, and later, out of discrete transistors, were used for smaller and faster memories such as registers. Such registers were relatively large and too costly to use for large amounts of data; generally only a few dozen or few hundred bits of such memory could be provided.
The first practical form of random-access memory was the Williams tube. It stored data as electrically charged spots on the face of a cathode-ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller, faster, and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented in the Manchester Baby computer, which first successfully ran a program on 21 June, 1948. In fact, rather than the Williams tube memory being designed for the Baby, the Baby was a testbed to demonstrate the reliability of the memory.
Magnetic-core memory was invented in 1947 and developed up until the mid-1970s. It became a widespread form of random-access memory, relying on an array of magnetized rings. By changing the sense of each ring's magnetization, data could be stored with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible. Magnetic core memory was the standard form of computer memory until displaced by semiconductor memory in integrated circuits (ICs) during the early 1970s.
Prior to the development of integrated read-only memory (ROM) circuits, permanent (or read-only) random-access memory was often constructed using diode matrices driven by address decoders, or specially wound core rope memory planes.
Semiconductor memory appeared in the 1960s with bipolar memory, which used bipolar transistors. Although it was faster, it could not compete with the lower price of magnetic core memory.
MOS RAM
In 1957, Frosch and Derick manufactured the first silicon dioxide field-effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, in 1960, a team demonstrated a working MOSFET at Bell Labs. This led to the development of metal–oxide–semiconductor (MOS) memory by John Schmidt at Fairchild Semiconductor in 1964. In addition to higher speeds, MOS semiconductor memory was cheaper and consumed less power than magnetic core memory. The development of silicon-gate MOS integrated circuit (MOS IC) technology by Federico Faggin at Fairchild in 1968 enabled the production of MOS memory chips. MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s.
Integrated bipolar static random-access memory (SRAM) was invented by Robert H. Norman at Fairchild Semiconductor in 1963. It was followed by the development of MOS SRAM by John Schmidt at Fairchild in 1964. SRAM became an alternative to magnetic-core memory, but required six MOS transistors for each bit of data. Commercial use of SRAM began in 1965, when IBM introduced the SP95 memory chip for the System/360 Model 95.
Dynamic random-access memory (DRAM) allowed replacement of a 4 or 6-transistor latch circuit by a single transistor for each memory bit, greatly increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor and had to be periodically refreshed every few milliseconds before the charge could leak away.
Toshiba's Toscal BC-1411 electronic calculator, which was introduced in 1965, used a form of capacitor bipolar DRAM, storing 180-bit data on discrete memory cells, consisting of germanium bipolar transistors and capacitors. Capacitors had also been used for earlier memory schemes, such as the drum of the Atanasoff–Berry Computer, the Williams tube and the Selectron tube. While it offered higher speeds than magnetic-core memory, bipolar DRAM could not compete with the lower price of the then-dominant magnetic-core memory.
In 1966, Robert Dennard invented modern DRAM architecture for which there is a single MOS transistor per capacitor. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent under IBM for a single-transistor DRAM memory cell, based on MOS technology. The first commercial DRAM IC chip was the Intel 1103, which was manufactured on an 8μm MOS process with a capacity of 1kbit, and was released in 1970.
The earliest DRAMs were often synchronized with the CPU clock (clocked) and were used with early microprocessors. In the mid-1970s, DRAMs moved to the asynchronous design, but in the 1990s returned to synchronous operation. In 1992, Samsung released the KM48SL2000, which had a capacity of 16 Mbit and was mass-produced in 1993. The first commercial DDR SDRAM (double data rate SDRAM) memory chip was Samsung's 64 Mbit DDR SDRAM chip, released in June 1998. GDDR (graphics DDR) is a form of DDR SGRAM (synchronous graphics RAM), which was first released by Samsung as a 16 Mbit memory chip in 1998.
Types
The two widely used forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six-transistor memory cell, typically using six MOSFETs. This form of RAM is more expensive to produce, but is generally faster and requires less dynamic power than DRAM. In modern computers, SRAM is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair (typically a MOSFET and MOS capacitor, respectively), which together comprise a DRAM cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers.
Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writable variants of ROM (such as EEPROM and NOR flash) share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment. ECC memory (which can be either SRAM or DRAM) includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction codes.
In general, the term RAM refers solely to solid-state memory devices (either DRAM or SRAM), and more specifically the main memory in most computers. In optical storage, the term DVD-RAM is somewhat of a misnomer since it is not random access; it behaves much like a hard disk drive, if somewhat slower. However, unlike CD-RW or DVD-RW, DVD-RAM does not need to be erased before reuse.
Memory cell
The memory cell is the fundamental building block of computer memory. The memory cell is an electronic circuit that stores one bit of binary information and it must be set to store a logic 1 (high voltage level) and reset to store a logic 0 (low voltage level). Its value is maintained/stored until it is changed by the set/reset process. The value in the memory cell can be accessed by reading it.
In SRAM, the memory cell is a type of flip-flop circuit, usually implemented using FETs. This means that SRAM requires very low power when not being accessed, but it is expensive and has low storage density.
A second type, DRAM, is based around a capacitor. Charging and discharging this capacitor can store a "1" or a "0" in the cell. However, the charge in this capacitor slowly leaks away, and must be refreshed periodically. Because of this refresh process, DRAM uses more power, but it can achieve greater storage densities and lower unit costs compared to SRAM.
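The refresh requirement can be illustrated with a deliberately simplified toy model (the leak rate, read threshold and timings below are arbitrary illustrative numbers, not real device parameters):

```python
# Toy sketch: a DRAM-like cell loses charge over time and must be refreshed
# before the stored value drops below the read threshold.
class DramCell:
    THRESHOLD = 0.5            # below this a stored "1" can no longer be read

    def __init__(self):
        self.charge = 0.0

    def write(self, bit: int):
        self.charge = 1.0 if bit else 0.0

    def leak(self, ms: float, rate=0.01):
        self.charge *= (1.0 - rate) ** ms      # exponential charge leakage

    def refresh(self):
        self.write(self.read())                # read the value and rewrite it

    def read(self) -> int:
        return 1 if self.charge > self.THRESHOLD else 0

cell = DramCell()
cell.write(1)
cell.leak(ms=32)        # charge ~0.72, still reads 1
cell.refresh()          # restores full charge
cell.leak(ms=128)       # too long without refresh: charge ~0.28
print(cell.read())      # prints 0, the stored bit has leaked away
```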
Addressing
To be useful, memory cells must be readable and writable. Within the RAM device, multiplexing and demultiplexing circuitry is used to select memory cells. Typically, a RAM device has a set of address lines A0, A1, ..., An−1, and for each combination of bits that may be applied to these lines, a set of memory cells is activated. Due to this addressing, RAM devices virtually always have a memory capacity that is a power of two.
Usually several memory cells share the same address. For example, a 4-bit "wide" RAM chip has four memory cells for each address. Often the width of the memory and that of the microprocessor are different; for a 32-bit microprocessor, eight 4-bit RAM chips would be needed.
Often more addresses are needed than can be provided by a device. In that case, external multiplexors to the device are used to activate the correct device that is being accessed. RAM is often byte-addressable, although it is also possible to make RAM that is word-addressable.
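A small sketch of the addressing arithmetic described above (the device capacity and widths are hypothetical examples, not a specific part):

```python
# Sketch: relate address-line count, word width and total capacity of a RAM device.
from math import log2

def address_lines(num_words: int) -> int:
    """Number of address lines needed; capacity is therefore a power of two."""
    n = log2(num_words)
    assert n.is_integer(), "RAM capacity is virtually always a power of two"
    return int(n)

# Hypothetical device: 65,536 addresses, 4 bits stored per address
words, width = 65_536, 4
print(address_lines(words))            # 16 address lines
print(words * width // 8, "bytes")     # 32,768 bytes of storage

# A 32-bit processor built from 4-bit-wide chips needs 32 // 4 = 8 chips per rank
print(32 // width, "chips")
```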
Memory hierarchy
One can read and over-write data in RAM. Many computer systems have a memory hierarchy consisting of processor registers, on-die SRAM caches, external caches, DRAM, paging systems and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components make the access time variable, although not to the extent that access time to rotating storage media or a tape is variable. The overall goal of using a memory hierarchy is to obtain the fastest possible average access time while minimizing the total cost of the entire memory system (generally, the memory hierarchy follows the access time with the fast CPU registers at the top and the slow hard drive at the bottom).
In many modern personal computers, the RAM comes in an easily upgraded form of modules called memory modules or DRAM modules about the size of a few sticks of chewing gum. These can be quickly replaced should they become damaged or when changing needs demand more storage capacity. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in the CPU and other ICs on the motherboard, as well as in hard-drives, CD-ROMs, and several other parts of the computer system.
Other uses of RAM
In addition to serving as temporary storage and working space for the operating system and applications, RAM is used in numerous other ways.
Virtual memory
Most modern operating systems employ a method of extending RAM capacity, known as "virtual memory". A portion of the computer's hard drive is set aside for a paging file or a scratch partition, and the combination of physical RAM and the paging file form the system's total memory. (For example, if a computer has 2 GB (1024³ bytes) of RAM and a 1 GB page file, the operating system has 3 GB total memory available to it.) When the system runs low on physical memory, it can "swap" portions of RAM to the paging file to make room for new data, as well as to read previously swapped information back into RAM. Excessive use of this mechanism results in thrashing and generally hampers overall system performance, mainly because hard drives are far slower than RAM.
RAM disk
Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster hard drive that is called a RAM disk. A RAM disk loses the stored data when the computer is shut down, unless memory is arranged to have a standby battery source, or changes to the RAM disk are written out to a nonvolatile disk. The RAM disk is reloaded from the physical disk upon RAM disk initialization.
Shadow RAM
Sometimes, the contents of a relatively slow ROM chip are copied to read/write memory to allow for shorter access times. The ROM chip is then disabled while the initialized memory locations are switched in on the same block of addresses (often write-protected). This process, sometimes called shadowing, is fairly common in both computers and embedded systems.
As a common example, the BIOS in typical personal computers often has an option called "use shadow BIOS" or similar. When enabled, functions that rely on data from the BIOS's ROM instead use DRAM locations (most can also toggle shadowing of video card ROM or other ROM sections). Depending on the system, this may not result in increased performance, and may cause incompatibilities. For example, some hardware may be inaccessible to the operating system if shadow RAM is used. On some systems the benefit may be hypothetical because the BIOS is not used after booting in favor of direct hardware access. Free memory is reduced by the size of the shadowed ROMs.
Memory wall
The "memory wall" is the growing disparity of speed between the CPU and the response time of memory (known as memory latency) outside the CPU chip. An important reason for this disparity is the limited communication bandwidth beyond chip boundaries, which is also referred to as the bandwidth wall. From 1986 to 2000, CPU speed improved at an annual rate of 55% while off-chip memory response time only improved at 10%. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance.
Another reason for the disparity is the enormous increase in the size of memory since the start of the PC revolution in the 1980s. Originally, PCs contained less than 1 mebibyte of RAM, which often had a response time of 1 CPU clock cycle, meaning that it required 0 wait states. Larger memory units are inherently slower than smaller ones of the same type, simply because it takes longer for signals to traverse a larger circuit. Constructing a memory unit of many gibibytes with a response time of one clock cycle is difficult or impossible. Today's CPUs often still have a mebibyte of 0 wait state cache memory, but it resides on the same chip as the CPU cores due to the bandwidth limitations of chip-to-chip communication. It must also be constructed from static RAM, which is far more expensive than the dynamic RAM used for larger memories. Static RAM also consumes far more power.
CPU speed improvements slowed significantly partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in a 2005 document.
First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat... Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address.
The RC delays in signal transmission were also noted in "Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures" which projected a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014.
A different concept is the processor-memory performance gap, which can be addressed by 3D integrated circuits that reduce the distance between the logic and memory aspects that are further apart in a 2D chip. Memory subsystem design requires a focus on the gap, which is widening over time. The main method of bridging the gap is the use of caches; small amounts of high-speed memory that houses recent operations and instructions nearby the processor, speeding up the execution of those operations or instructions in cases where they are called upon frequently. Multiple levels of caching have been developed to deal with the widening gap, and the performance of high-speed modern computers relies on evolving caching techniques. There can be up to a 53% difference between the growth in speed of processor and the lagging speed of main memory access.
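The benefit of caches over main-memory access can be made visible with a crude timing sketch: traversing a large array sequentially keeps accesses cache- and prefetch-friendly, while a shuffled traversal does not. Absolute timings depend entirely on the machine and on interpreter overhead; only the ratio is indicative.

# Illustrative sketch: time sequential versus shuffled traversal of a large
# list. Random access defeats caching and prefetching, so it typically runs
# noticeably slower, illustrating why the processor-memory gap matters.
import random
import time

N = 2_000_000
data = list(range(N))                      # array being traversed
seq_order = list(range(N))                 # sequential index order
rand_order = random.sample(range(N), N)    # the same indices, shuffled

def traverse(order):
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return time.perf_counter() - start

t_seq = traverse(seq_order)
t_rand = traverse(rand_order)
print(f"sequential: {t_seq:.3f} s, shuffled: {t_rand:.3f} s, ratio ~ {t_rand / t_seq:.1f}x")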
Solid-state hard drives have continued to increase in speed, from ~400 MB/s via SATA3 in 2012 up to ~7 GB/s via NVMe/PCIe in 2024, closing the gap between RAM and hard disk speeds, although RAM continues to be an order of magnitude faster, with dual-channel DDR5-8000 capable of roughly 128 GB/s, and modern GDDR even faster. Fast, cheap, non-volatile solid state drives have replaced some functions formerly performed by RAM, such as holding certain data for immediate availability in server farms: 1 terabyte of SSD storage can be had for $200, while 1 TB of RAM would cost thousands of dollars.
Timeline
SRAM
DRAM
SDRAM
See also
References
External links
American inventions
Computer architecture
Computer memory | Random-access memory | [
"Technology",
"Engineering"
] | 4,279 | [
"Computers",
"Computer engineering",
"Computer architecture"
] |
24,271,629 | https://en.wikipedia.org/wiki/Local%20tangent%20space%20alignment | Local tangent space alignment (LTSA) is a method for manifold learning, which can efficiently learn a nonlinear embedding into low-dimensional coordinates from high-dimensional data, and can also reconstruct high-dimensional coordinates from embedding coordinates. It is based on the intuition that when a manifold is correctly unfolded, all of the tangent hyperplanes to the manifold will become aligned. It begins by computing the k-nearest neighbors of every point. It computes the tangent space at every point by computing the first d principal components in each local neighborhood. It then optimizes to find an embedding that aligns the tangent spaces, but it ignores the label information conveyed by data samples, and thus cannot be used for classification directly.
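A minimal usage sketch with scikit-learn, whose LocallyLinearEmbedding estimator provides an LTSA variant via method="ltsa"; the dataset and the neighbour/dimension settings are illustrative only.

# Minimal sketch: embed a 3D swiss roll into 2D with LTSA.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

ltsa = LocallyLinearEmbedding(
    n_neighbors=12,      # k nearest neighbours used for each local patch
    n_components=2,      # d, the dimension of the embedding
    method="ltsa",       # local tangent space alignment variant
    random_state=0,
)
embedding = ltsa.fit_transform(X)   # shape (1500, 2)
print(embedding.shape)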
See also
Isomap
References
Further reading
Dimension reduction
Manifolds | Local tangent space alignment | [
"Mathematics"
] | 166 | [
"Topological spaces",
"Topology",
"Manifolds",
"Space (mathematics)"
] |
24,271,712 | https://en.wikipedia.org/wiki/Therapeutic%20Targets%20Database | Therapeutic Target Database (TTD) is a pharmaceutical and medical repository constructed by the Innovative Drug Research and Bioinformatics Group (IDRB) at Zhejiang University, China and the Bioinformatics and Drug Design Group at the National University of Singapore. It provides information about known and explored therapeutic protein and nucleic acid targets, the targeted disease, pathway information and the corresponding drugs directed at each of these targets. Detailed knowledge is also provided about target function, sequence, 3D structure, ligand binding properties, enzyme nomenclature and drug structure, therapeutic class, and clinical development status. TTD is freely accessible without any login requirement at https://idrblab.org/ttd/.
Statistics
This database contains 3,730 therapeutic targets (532 successful, 1,442 clinical trial, 239 preclinical/patented and 1,517 research targets) and 39,862 drugs (2,895 approved, 11,796 clinical trial, 5,041 preclinical/patented and 20,130 experimental drugs). The targets and drugs in TTD cover 583 protein biochemical classes and 958 drug therapeutic classes, respectively. The latest version of the International Classification of Diseases (ICD-11) codes released by the WHO is incorporated in TTD to facilitate the clear definition of disease/disease class.
Validation of Primary Therapeutic Target
Target validation normally requires the determination that the target is expressed in the disease-relevant cells/tissues, it can be directly modulated by a drug or drug-like molecule with adequate potency in biochemical assay, and that target modulation in cell and/or animal models ameliorates the relevant disease phenotype. Therefore, TTD collects three types of target validation data:
Experimentally determined potency of drugs against their primary target or targets.
Evident potency or effects of drugs against disease models (cell-lines, ex-vivo, in-vivo models) linked to their primary target or targets.
Observed effects of target knockout, knockdown, RNA interference, transgenic, antibody or antisense treated in-vivo models.
Categorization of Therapeutic Targets based on Clinical Status
The therapeutic targets in TTD are categorized into successful target, clinical trial target, preclinical target, patented target, and literature-reported target, which are defined by the highest status of their corresponding drugs.
Successful target: targeted by at least one approved drug;
Clinical trial target: not targeted by any approved drug, but targeted by at least one clinical trial drug;
Preclinical target: not targeted by any approved/clinical trial drug, but targeted by at least one preclinical drug;
Patented target: not targeted by any approved/clinical trial/preclinical drug, but targeted by at least one patented drug;
Literature-reported target: targeted by investigative drugs only.
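A minimal sketch of this rule is shown below: the target class is determined by the most advanced status found among its drugs. The status labels, ordering and function are illustrative only and are not part of TTD itself.

# Minimal sketch of the categorisation rule described above: a target's class
# is set by the highest-status drug directed at it. Labels follow the list
# in the text; the data structures are purely illustrative.
STATUS_RANK = ["approved", "clinical trial", "preclinical", "patented", "investigative"]
TARGET_CLASS = {
    "approved": "Successful target",
    "clinical trial": "Clinical trial target",
    "preclinical": "Preclinical target",
    "patented": "Patented target",
    "investigative": "Literature-reported target",
}

def categorise_target(drug_statuses):
    """Return the target category implied by the statuses of its drugs."""
    for status in STATUS_RANK:                 # scan from highest to lowest status
        if status in drug_statuses:
            return TARGET_CLASS[status]
    return "Unclassified"

print(categorise_target({"patented", "investigative"}))   # -> Patented target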
Classification of Therapeutic Targets based on Molecular Types
The molecular types of therapeutic targets in TTD include protein, nucleic acid, and other molecule.
Protein: the most common type of target in drug development
Nucleic acid: include DNA, mRNA, miRNA, lncRNA targets
Other molecule: such as uric acid, iron, and reactive oxygen species
Different Types of Drugs Collected in TTD
The main drug types in TTD include small molecule, antibody, nucleic acid drug, cell therapy, gene therapy and vaccine.
Small molecule: the most common medications in the pharmaceutical market
Antibody: includes monoclonal antibodies and several alternatives such as antibody-drug conjugates, bispecific antibodies, IgG mixtures, and antibody fusion proteins
Nucleic acid drug: mainly include antisense oligonucleotides, small interfering RNAs, small activating RNA, microRNAs, mRNAs and so on
Cell therapy: inject, graft or implant viable cells into a patient to effectuate a medicinal effect
Gene therapy: manipulate gene expression or alter the biological properties of living cells to produce the therapeutic effect
Vaccine: provide active acquired immunity to a particular infectious or malignant disease
Main Advancement in Different Versions of TTD
2024 Update (Nucleic Acids Res. 2023, doi: 10.1093/nar/gkad751)
■ Target druggability illustrated by molecular interactions or regulations;
■ Target druggability characterized by different human system features;
■ Target druggability reflected by diverse cell-based expression variations;
2022 Update (Nucleic Acids Res. 2022, 50: D1398-D1407)
■ Structure-based activity landscape and drug-like property profile of targets;
■ Prodrugs together with their parent drug and target;
■ Co-targets modulated by approved/clinical trial drugs;
■ Poor binders and non-binders of targets;
2020 Update (Nucleic Acids Res. 2020, 48: D1031-D1041)
■ Target regulators (microRNAs & transcription factors) and target-interacting proteins;
■ Patented agents and their targets (structures and experimental activity values if available);
2018 Update (Nucleic Acids Res. 2018, 46: D1121-D1127)
■ Differential expression profiles and downloadable data of targets in patients and healthy individuals;
■ Target combination of multitarget drugs and combination therapies;
2016 Update (Nucleic Acids Res. 2016, 44: D1069-D1074)
■ Cross-links of most TTD target and drug entries to the corresponding pathway entries;
■ Access of the multiple targets and drugs cross-linked to each of these pathway entries;
2014 Update (Nucleic Acids Res. 2014, 42: D1118-D1123)
■ Biomarkers for disease conditions;
■ Drug scaffolds for drugs/leads;
2012 Update (Nucleic Acids Res. 2012, 40: D1128-D1136)
■ Target validation information (drug-target-disease);
■ Quantitative structure activity relationship models (QSAR) for compounds;
2010 Update (Nucleic Acids Res. 2010, 38: D787-D791)
■ Clinical trial drugs and their targets;
■ Similarity target and drug search.
References
External links
Therapeutic Target Database home page
Article on PubMed
Innovative Drug Research and Bioinformatics Group
Proteins
Pharmacology literature
Chemical databases
Medical databases
Drugs in Singapore
"Chemistry"
] | 1,300 | [
"Biomolecules by chemical classification",
"Pharmacology",
"Chemical databases",
"Pharmacology literature",
"Molecular biology",
"Proteins"
] |
24,271,731 | https://en.wikipedia.org/wiki/Chemical%20WorkBench | Chemical WorkBench is a proprietary simulation software tool aimed at the reactor scale kinetic modeling of homogeneous gas-phase and heterogeneous processes and kinetic mechanism development. It can be effectively used for the modeling, optimization, and design of a wide range of industrially and environmentally important chemistry-loaded processes. Chemical WorkBench is a modeling environment based on advanced scientific approaches, complementary databases, and accurate solution methods. Chemical WorkBench is developed and distributed by Kintech Lab.
Chemical WorkBench models
Chemical WorkBench has an extensive library of physicochemical models:
Thermodynamic Models
Gas-Phase Kinetic Models
Flame model
Heterogeneous Kinetic Models
Non-Equilibrium Plasma Models
Detonation and Aerodynamic Models
Membrane Separation Models
Mechanism Analysis and Reduction
Fields of application
Chemical WorkBench can be used by researchers and engineers working in the following fields:
General chemical kinetics and thermodynamics
Kinetic mechanisms development
Thin films growth for microelectronics
Nanotechnology
Catalysis and chemical engineering
Combustion, detonation and pollution control
Waste treatment and recovery
Plasma light sources and plasma chemistry
High-temperature chemistry
Education
Combustion and detonation, clean power-generation technologies, safety analysis, CVD, heterogeneous and catalytic reactions and processes, and processes in non-equilibrium plasmas are the main areas of interest.
External links
Chemical WorkBench web page
Video Review of Chemical Workbench-Tool for Modeling Reactive Flows and Developing Chemical Mechanisms
See also
Chemical kinetics
Autochem
Cantera
CHEMKIN
Kinetic PreProcessor (KPP)
Laboratory information management system
References
Chemical engineering software
Chemical kinetics
Combustion
Computational chemistry software
Molecular modelling software | Chemical WorkBench | [
"Chemistry",
"Engineering"
] | 398 | [
"Chemical reaction engineering",
"Molecular modelling software",
"Computational chemistry software",
"Chemistry software",
"Chemical engineering",
"Molecular modelling",
"Combustion",
"Computational chemistry",
"Chemical engineering software",
"Chemical kinetics"
] |
24,274,027 | https://en.wikipedia.org/wiki/Paleobiota%20of%20the%20Solnhofen%20Limestone | The Solnhofen Limestone or Solnhofen Plattenkalk is a collective term for multiple Late Jurassic lithographic limestones in southeastern Germany, which is famous for its well preserved fossil flora and fauna dating to the late Jurassic (Kimmeridgian-Tithonian). The paleoenvironment is also often referred to as the Solnhofen Archipelago. The Solnhofen Archipelago was located at the northern edge of the Tethys Ocean as part of a shallow epicontinental sea and is firmly a part of the Mediterranean realm.
Chondrichthyes
Holocephali
Elasmobranchii
Neoselachii
Batomorphii
Heterodontiformes
Orectolobiformes
Carcharhiniformes
Lamniformes
Hexanchiformes
Squatiniformes
Other
Osteognathostomata
Chondrostei
Pycnodontiformes
Ginglymodi
Lepisosteiformes
Semionotiformes
Macrosemiidae
Dapediiformes
Halecomorphi
Ionoscopidae
Amiiformes
Pleuropholidae
Pachycormidae
Aspidorhynchidae
"Pholidophoridae"
Crossognathiformes
Ichthyodectiformes
Elopiformes
Osteoglossomorpha
Coelacanths
Reptiles
Lizards
Rhynchocephalians
Ichthyosaurs
Turtles
Crocodylomorphs
Dinosaurs
Pterosaurs
Sauropterygia
Echinodermata
Crinoids
Ophiuroidea
Asteroidea
Echinoidea
Cidaroidea
Euechinoidea
Other Echinoids
Holothuroidea
Molluscs
Bivalvia
Protobranchia
Autobranchia
Cephalopods
Nautiloidea
Ammonoidea
Belemnoidea
Vampyromorpha
Crustacea
Cirripedia
Malacostraca
Isopoda
Decapoda
Penaeoidea
Caridea
Astacidea
Glypheidea
Thalassinidae
Anomura
Brachyura
Insects
Other Arthropods
Xiphosura
Pantopoda
Thylacocephalans
References
General
Lambers, P. H. (1999). The actinopterygian fish fauna of the Late Kimmeridgian and Early Tithonian 'Plattenkalke' near Solnhofen (Bavaria, Germany): state of the art. Geologie en Mijnbouw 78:215-229.
Solnhofen Limestone
Jurassic life of Europe | Paleobiota of the Solnhofen Limestone | [
"Biology"
] | 529 | [
"Mesozoic paleobiotas",
"Prehistoric biotas"
] |
24,280,173 | https://en.wikipedia.org/wiki/Leakage%20%28electronics%29 | In electronics, leakage is the gradual transfer of electrical energy across a boundary normally viewed as insulating, such as the spontaneous discharge of a charged capacitor, magnetic coupling of a transformer with other components, or flow of current across a transistor in the "off" state or a reverse-polarized diode.
In capacitors
Gradual loss of energy from a charged capacitor is primarily caused by electronic devices attached to the capacitors, such as transistors or diodes, which conduct a small amount of current even when they are turned off. Even though this off current is an order of magnitude less than the current through the device when it is on, the current still slowly discharges the capacitor. Another contributor to leakage from a capacitor is from the undesired imperfection of some dielectric materials used in capacitors, also known as dielectric leakage. It is a result of the dielectric material not being a perfect insulator and having some non-zero conductivity, allowing a leakage current to flow, slowly discharging the capacitor.
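A minimal sketch of this self-discharge, modelling the dielectric leakage as a fixed resistance in parallel with the capacitor; the component values are invented for illustration only.

# Illustrative sketch: self-discharge of a capacitor through its own leakage
# ("insulation") resistance, modelled as V(t) = V0 * exp(-t / (R_leak * C)).
import math

V0 = 5.0            # initial voltage across the capacitor (V)
C = 100e-6          # capacitance (F)
R_leak = 50e6       # effective leakage resistance (ohms), hypothetical value

tau = R_leak * C    # time constant of the self-discharge (s)
for t in (0, 3600, 3 * 3600, 12 * 3600):
    v = V0 * math.exp(-t / tau)
    print(f"t = {t / 3600:4.1f} h  ->  V = {v:.3f} V")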
Another type of leakage occurs when current leaks out of the intended circuit, instead flowing through some alternate path. This sort of leakage is undesirable because the current flowing through the alternate path can cause damage, fires, RF noise, or electrocution. Leakage of this type can be measured by observing that the current flow at some point in the circuit does not match the flow at another. Leakage in a high-voltage system can be fatal to a human in contact with the leak, as when a person accidentally grounds a high-voltage power line.
Between electronic assemblies and circuits
Leakage may also mean an unwanted transfer of energy from one circuit to another. For example, magnetic lines of flux will not be entirely confined within the core of a power transformer; another circuit may couple to the transformer and receive some leaked energy at the frequency of the electric mains, which will cause audible hum in an audio application.
Leakage current is also any current that flows when the ideal current is zero. Such is the case in electronic assemblies when they are in standby, disabled, or "sleep" mode (standby power). These devices can draw one or two microamperes while in their quiescent state compared to hundreds or thousands of milliamperes while in full operation. These leakage currents are becoming a significant factor to portable device manufacturers because of their undesirable effect on battery run time for the consumer.
When mains filters are used in the power circuits supplying an electrical or electronic assembly, e.g., a variable frequency drive or an AC/DC power converter, leakage currents will flow through the "Y" capacitors that are connected between the live and neutral conductors to the earthing or grounding conductor. The current that flows through these capacitors is due to the capacitors' impedance at power line frequencies. Some amount of leakage current is generally considered acceptable, however excessive leakage current, exceeding 30 mA, can create a hazard for users of the equipment. In some applications, e.g. medical devices with patient contact, the acceptable amount of leakage current can be quite low, less than 10 μA.
In semiconductors
In semiconductor devices, leakage is a quantum phenomenon where mobile charge carriers (electrons or holes) tunnel through an insulating region. Leakage increases exponentially as the thickness of the insulating region decreases. Tunneling leakage can also occur across semiconductor junctions between heavily doped P-type and N-type semiconductors. Other than tunneling via the gate insulator or junctions, carriers can also leak between source and drain terminals of a Metal Oxide Semiconductor (MOS) transistor. This is called subthreshold conduction. The primary source of leakage occurs inside transistors, but electrons can also leak between interconnects. Leakage increases power consumption and if sufficiently large can cause complete circuit failure.
Leakage is currently one of the main factors limiting increased computer processor performance. Efforts to minimize leakage include the use of strained silicon, high-κ dielectrics, and/or stronger dopant levels in the semiconductor. Leakage reduction to continue Moore's law will not only require new material solutions but also proper system design.
Certain types of semiconductor manufacturing defects exhibit themselves as increased leakage. Thus measuring leakage, or Iddq testing, is a quick, inexpensive method for finding defective chips.
Increased leakage is a common failure mode resulting from non-catastrophic overstress of a semiconductor device, when the junction or the gate oxide suffers permanent damage not sufficient to cause a catastrophic failure. Overstressing the gate oxide can lead to stress-induced leakage current.
In bipolar junction transistors, the emitter current is the sum of the collector and base currents. Ie = Ic + Ib. The collector current has two components: minority carriers and majority carriers. The minority current is called the leakage current.
In heterostructure field-effect transistors (HFETs) the gate leakage is usually attributed to the high density of traps residing within the barrier. The gate leakage of GaN HFETs has so far been observed to remain at higher levels than in counterparts such as GaAs.
Leakage current is generally measured in microamperes. For a reverse-biased diode it is temperature sensitive. Leakage current must be carefully examined for applications that work in wide temperature ranges to know the diode characteristics.
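A small sketch of that temperature sensitivity, using the common rule of thumb that reverse leakage roughly doubles for every 10 °C rise; the reference leakage and temperatures below are hypothetical.

# Illustrative sketch of the temperature dependence of diode reverse leakage,
# using the rule of thumb that leakage roughly doubles per ~10 degC rise.
def leakage_uA(temp_c, i_ref_uA=0.1, t_ref_c=25.0, doubling_step_c=10.0):
    """Estimate reverse leakage (in microamperes) at temp_c."""
    return i_ref_uA * 2 ** ((temp_c - t_ref_c) / doubling_step_c)

for t in (25, 50, 75, 100, 125):
    print(f"{t:3d} degC -> ~{leakage_uA(t):.2f} uA")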
See also
Grid leak
Quiescent current
Losses in electrical systems
Parasitic losses
Residual-current circuit breaker
References
Electrical parameters
Electric current | Leakage (electronics) | [
"Physics",
"Engineering"
] | 1,169 | [
"Physical quantities",
"Electric current",
"Electrical engineering",
"Wikipedia categories named after physical quantities",
"Electrical parameters"
] |
24,280,199 | https://en.wikipedia.org/wiki/Metal%20L-edge | Metal L-edge spectroscopy is a spectroscopic technique used to study the electronic structures of transition metal atoms and complexes. This method measures X-ray absorption caused by the excitation of a metal 2p electron to unfilled d orbitals (e.g. 3d for first-row transition metals), which creates a characteristic absorption peak called the L-edge. Similar features can also be studied by Electron Energy Loss Spectroscopy. According to the selection rules, the transition is formally electric-dipole allowed, which not only makes it more intense than an electric-dipole forbidden metal K pre-edge (1s → 3d) transition, but also makes it more feature-rich as the lower required energy (~400-1000 eV from scandium to copper) results in a higher-resolution experiment.
In the simplest case, that of a cupric (CuII) complex, the 2p → 3d transition produces a 2p53d10 final state. The 2p5 core hole created in the transition has an orbital angular momentum L=1 which then couples to the spin angular momentum S=1/2 to produce J=3/2 and J=1/2 final states. These states are directly observable in the L-edge spectrum as the two main peaks (Figure 1). The peak at lower energy (~930 eV) has the greatest intensity and is called the L3-edge, while the peak at higher energy (~950 eV) has less intensity and is called the L2-edge.
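A toy sketch of this two-peak structure is given below: two Lorentzian lines at roughly the L3 and L2 energies quoted above, weighted 2:1 according to the 2J+1 degeneracies of the J = 3/2 and J = 1/2 final states. The line widths and the purely statistical branching ratio are illustrative simplifications; measured spectra deviate from them.

# Toy sketch of a CuII-like L-edge: Lorentzian L3 (~930 eV) and L2 (~950 eV)
# peaks with a 2:1 area ratio taken from the 2J+1 degeneracies.
import numpy as np

def lorentzian(e, e0, gamma, area):
    """Area-normalised Lorentzian of full width gamma centred at e0."""
    return area * (gamma / (2 * np.pi)) / ((e - e0) ** 2 + (gamma / 2) ** 2)

energy = np.linspace(910, 970, 1201)                  # photon energy axis (eV)
spectrum = (lorentzian(energy, 930.0, 2.0, 2.0)       # L3 edge, statistical weight 2
            + lorentzian(energy, 950.0, 3.0, 1.0))    # L2 edge, statistical weight 1

i_l3 = spectrum[np.argmin(np.abs(energy - 930.0))]
i_l2 = spectrum[np.argmin(np.abs(energy - 950.0))]
print(f"L3 peak height / L2 peak height = {i_l3 / i_l2:.2f}")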
Spectral components
As we move left across the periodic table (e.g. from copper to iron), we create additional holes in the metal 3d orbitals. For example, a low-spin ferric (FeIII) system in an octahedral environment has a ground state of (t2g)5(eg)0 resulting in transitions to the t2g (dπ) and eg (dσ) sets. Therefore, there are two possible final states: t2g6eg0 or t2g5eg1(Figure 2a). Since the ground-state metal configuration has four holes in the eg orbital set and one hole in the t2g orbital set, an intensity ratio of 4:1 might be expected (Figure 2b). However, this model does not take into account covalent bonding and, indeed, an intensity ratio of 4:1 is not observed in the spectrum.
In the case of iron, the d6 excited state will further split in energy due to d-d electron repulsion (Figure 2c). This splitting is given by the right-hand (high-field) side of the d6 Tanabe–Sugano diagram and can be mapped onto a theoretical simulation of a L-edge spectrum (Figure 2d). Other factors such as p-d electron repulsion and spin-orbit coupling of the 2p and 3d electrons must also be considered to fully simulate the data.
For a ferric system, all of these effects result in 252 initial states and 1260 possible final states that together will comprise the final L-edge spectrum (Figure 2e). Despite all of these possible states, it has been established that in a low-spin ferric system, the lowest energy peak is due to a transition to the t2g hole and the more intense and higher energy (~3.5 eV) peak is to that of the unoccupied eg orbitals.
Feature mixing
In most systems, bonding between a ligand and a metal atom can be thought of in terms of metal-ligand covalent bonds, where the occupied ligand orbitals donate some electron density to the metal. This is commonly known as ligand-to-metal charge transfer or LMCT. In some cases, low-lying unoccupied ligand orbitals (π*) can receive back-donation (or backbonding) from the occupied metal orbitals. This has the opposite effect on the system, resulting in metal-to-ligand charge transfer, MLCT, and commonly appears as an additional L-edge spectral feature.
An example of this feature occurs in low-spin ferric [Fe(CN)6]3−, since CN− is a ligand that can have backbonding. While backbonding is important in the initial state, it would only warrant a small feature in the L-edge spectrum. In fact, it is in the final state where the backbonding π* orbitals are allowed to mix with the very intense eg transition, thus borrowing intensity and resulting in the final dramatic three peak spectrum (Figure 3 and Figure 4).
Model construction
X-ray absorption spectroscopy (XAS), like other spectroscopies, looks at the excited state to infer information about the ground state. To make a quantitative assignment, L-edge data is fitted using a valence bond configuration interaction (VBCI) model where LMCT and MLCT are applied as needed to successfully simulate the observed spectral features. These simulations are then further compared to density functional theory (DFT) calculations to arrive at a final interpretation of the data and an accurate description of the electronic structure of the complex (Figure 4).
In the case of iron L-edge, the excited state mixing of the metal eg orbitals into the ligand π* make this method a direct and very sensitive probe of backbonding.
See also
Metal K-edge
Ligand K-edge
Extended X-ray absorption fine structure
References
X-ray absorption spectroscopy | Metal L-edge | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,134 | [
"X-ray absorption spectroscopy",
"Materials science",
"Laboratory techniques in condensed matter physics"
] |
24,280,337 | https://en.wikipedia.org/wiki/Detonation%20flame%20arrester | A detonation flame arrester (also spelled arrestor) is a device fitted to the opening of an enclosure or to the connecting pipe work of a system of enclosures and whose intended function is to allow flow but prevent the transmission of flame propagating at supersonic velocity.
Detonation Flame Arrester products were created in response to environmental regulations (such as The Clean Air Act) which required liquid product storage terminals and hydrocarbon processing plants to control evaporative hydrocarbon emissions from loading and storage operations. This process is called vapor control. Two types of recognized vapor control technologies are commonly used; carbon adsorption vapor recovery and vapor destruction or combustion. Vapor destruction systems include elevated flare systems, enclosed flare systems, burner and catalytic incineration systems, and waste gas boilers. Both systems require flame or detonation flame arresters to maximize safety.
Detonation flame arresters are used in many industries, including refining, pharmaceutical, chemical, and petrochemical, pulp and paper, oil exploration and production, sewage treatment, landfills, mining, power generation, and bulk liquids transportation.
Operation
Flame arresters are passive devices with no moving parts. They prevent the propagation of flame from the exposed side of the unit to the protected side by the use of a metal matrix creating a tortuous path, called a flame cell or element. All detonation flame arresters operate on the same principle: removing heat from the flame as it attempts to travel through narrow passages with walls of metal or other heat-conductive material. Unlike ordinary flame arresters, however, detonation flame arresters must be built to withstand the extreme pressures of flame fronts travelling at supersonic velocities; velocities of 2,500 m/s are not uncommon with a Group D gas.
Detonation flame arresters made by most manufacturers employ layers of metal ribbons with crimped corrugations. The internal narrow passages of the crimped corrugations make up the element matrix. These passages are measured as the hydraulic diameter and are made smaller for gases having smaller maximum experimental safe gaps (MESG).
Under normal operating conditions the flame arrester permits a relatively free flow of gas or vapor through the piping system. If the mixture is ignited and the flame begins to travel back through the piping, the arrester will prohibit the flame from moving back to the gas source.
Most detonation flame arrester applications are in systems which collect gases emitted by liquids and solids. These systems, commonly used in many industries, may be called vapor control systems. The gases which are vented to atmosphere or controlled via vapor control systems are typically flammable. If the conditions are such that ignition occurs, a flame propagation inside or outside of the system could result, with the potential to do catastrophic damage.
History
The first patented detonation flame arrester was developed by Nicholas Roussakis et al., and was issued on March 20, 1990. Its need was initially driven by new environmental legislation, namely the Clean Air Act of the USA. Regular flame arresters had been around for years, but they had very limited applications.
There have been at least a dozen more since then. A few are as follows:
Nicholas Roussakis & Dwight E Brooker, issued May 16, 1995
Dwight E Brooker, issued Nov 11, 2003
Dwight E Brooker, issued Sept 6, 2004
Dwight E Brooker, issued June 6, 2006
Standards
ISO/TC 21/WG 3 (ISO 16852)
EN 12874
USCG 33cfr154.1325
CSA-Z343 Flame Arrester Standard
See also
Flame arrester
Flashback arrester
Industrial safety devices
Fire prevention
Chemical safety | Detonation flame arrester | [
"Chemistry",
"Engineering"
] | 747 | [
"Chemical safety",
"Industrial safety devices",
"Chemical accident",
"nan"
] |
24,281,183 | https://en.wikipedia.org/wiki/Hand%20boiler | A hand boiler or (less commonly) love meter is a glass sculpture used as an experimental tool to demonstrate vapour-liquid equilibrium, or as a collector's item to whimsically "measure love." It consists of a lower bulb containing a volatile liquid and a mixture of gases that is connected usually by a twisting glass tube that connects to an upper or "receiving" glass bulb.
Mechanics
A hand boiler functions similar to the "drinking bird" toy: The upper and lower bulbs of the device are at different temperatures, and therefore the vapor pressure in the two bulbs is different. Since the lower bulb is warmer, the vapor pressure in it is higher. The difference in vapor pressure forces the liquid from the lower bulb to the upper bulb. Thus:

h = ΔP / (ρ g)

where:
h = the height of the column of fluid above the fluid's level in the lower bulb
ΔP = the difference in vapor pressure between the two bulbs (which can be determined via the Antoine equation)
ρ = the density of the liquid
g = the acceleration of gravity at the Earth's surface
The boiling is caused by the relationship between the temperature and pressure of a gas. As the temperature of a gas in a closed container rises, the pressure also rises. There must be a temperature (and pressure) difference between the two large chambers for the liquid to move. When held upright (with the smaller bulb on top), the liquid will move from the bulb with the higher pressure to the bulb with lower pressure. As the gas continues to expand, the gas will then bubble through the liquid, boiling. The fact that the liquid is volatile (easily vaporized) makes the hand boiler more effective. Adding heat to the liquid produces more gas, also increasing pressure in the closed container.
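The relation h = ΔP/(ρ g) can be made concrete with a short sketch. The Antoine constants, liquid density and bulb temperatures below are placeholder values chosen only to illustrate the calculation; they do not correspond to a specific working fluid and should be replaced with data for the actual liquid.

# Minimal sketch of the column-height estimate above, with the vapor pressure
# of each bulb taken from an Antoine-type expression log10(P) = A - B / (C + T).
import math

def antoine_pressure_pa(temp_c, A, B, C):
    """Vapor pressure from the Antoine equation (constants for P in mmHg, T in degC)."""
    p_mmhg = 10 ** (A - B / (C + temp_c))
    return p_mmhg * 133.322            # convert mmHg to Pa

# Placeholder Antoine constants and conditions (hypothetical, roughly
# representative of a volatile organic liquid):
A, B, C = 7.1, 1200.0, 230.0
rho = 1300.0                           # liquid density (kg/m^3), placeholder
g = 9.81
t_hand, t_room = 35.0, 20.0            # lower (hand-warmed) and upper bulb (degC)

dP = antoine_pressure_pa(t_hand, A, B, C) - antoine_pressure_pa(t_room, A, B, C)
h = dP / (rho * g)
# If h exceeds the tube length, the liquid is simply pushed into the upper bulb
# and the remaining gas bubbles through it, i.e. the "boiling" is observed.
print(f"dP = {dP / 1000:.1f} kPa, supported column height h = {h:.2f} m")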
Sometimes a hand boiler is used to show properties of distillation. Since the liquid both evaporates and condenses at relatively cool temperatures while in an enclosed system, the boiler can be turned upside down, and the top end can be placed in ice water. The gaseous form of the liquid will condense in the cooled chamber. Since the liquid is often colored with dye, but the dye does not evaporate or condense at the same temperature, the liquid that condenses in the cooled chamber is colorless, leaving the pigment behind.
Popular culture
In popular culture, hand boilers were sometimes known as "love meters" because the tube that separates the upper and lower bulbs is twisted into a heart shape and the volatile liquid is colored red. Love meters were a common collector's item or a souvenir. Depending on how the item was packaged, one would grasp the lower bulb to "prove" how passionate one was, or a couple would each grasp one end to see who would force the liquid into the other's bulb.
Hand boilers are much more commonly used as a scientific novelty today.
History
Hand boilers date back at least as early as 1767, when the American polymath Benjamin Franklin encountered them in Germany. He developed an improved version in 1768, after which they were called Franklin's pulse glass or palm glass or pulse hammer (German: Pulshammer) or water hammer (German: Wasserhammer).
See also
Charles's law
Drinking Bird
Gas laws
Vapor–liquid equilibrium
References
Physics education
Equilibrium chemistry
Chemistry education
Science demonstrations
Novelty items
Science education materials | Hand boiler | [
"Physics",
"Chemistry"
] | 673 | [
"Equilibrium chemistry",
"Applied and interdisciplinary physics",
"Physics education"
] |
24,283,737 | https://en.wikipedia.org/wiki/Leeb%20rebound%20hardness%20test | The Leeb Rebound Hardness Test (LRHT) invented by Swiss company Proceq SA is one of the four most used methods for testing metal hardness. This portable method is mainly used for testing sufficiently large workpieces (mainly above 1 kg).
It measures the coefficient of restitution. It is a form of nondestructive testing.
History
The Equotip rebound hardness test method (later also referred to as the Leeb method) was developed in 1975 by Leeb and Brandestini at Proceq SA to provide a portable hardness test for metals. It was developed as an alternative to the unwieldy and sometimes intricate traditional hardness measuring equipment. The first Leeb rebound product on the market was named "Equotip", a phrase that is still used synonymously with "Leeb rebound" due to the wide circulation of the "Equotip" product.
Traditional hardness measurements, e.g., those of Rockwell, Vickers, and Brinell, are stationary, requiring fixed workstations in segregated testing areas or laboratories. Most of the time, these methods are selective, involving destructive tests on samples. From individual results, these tests draw statistical conclusions for entire batches. The portability of Leeb testers can sometimes help to achieve higher testing rates without destruction of samples, which in turn simplifies processes and reduces cost.
Method
The traditional methods are based on well-defined physical indentation hardness tests. Very hard indenters of defined geometries and sizes are continuously pressed into the material under a particular force. Deformation parameters, such as the indentation depth in the Rockwell method, are recorded to give measures of hardness.
According to the dynamic Leeb principle, the hardness value is derived from the energy loss of a defined impact body after impacting on a metal sample, similar to the Shore scleroscope. The Leeb quotient vr/vi is taken as a measure of the energy loss by plastic deformation: the impact body rebounds faster from harder test samples than it does from softer ones, resulting in a greater value of 1000×vr/vi. A magnetic impact body permits the velocity to be deduced from the voltage induced by the body as it moves through the measuring coil. The quotient 1000×vr/vi is quoted in the Leeb rebound hardness unit HLx (where x indicates the probe and impact body type: D, DC, DL, C, G, S, E).
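A minimal sketch of the relation just described; the velocity pairs are invented for illustration and the result is expressed on the generic HL scale rather than a specific probe scale.

# Minimal sketch of the Leeb hardness number HL = 1000 * vr / vi, where vi and
# vr are the impact and rebound velocities of the impact body. In a real
# instrument these are deduced from the voltages induced in the measuring coil.
def leeb_hardness(v_impact, v_rebound):
    """Return the Leeb hardness number HL from impact and rebound velocities (same units)."""
    return 1000.0 * v_rebound / v_impact

for vi, vr in [(2.05, 1.23), (2.05, 0.82)]:      # harder vs softer sample (m/s, illustrative)
    print(f"vi = {vi} m/s, vr = {vr} m/s -> {leeb_hardness(vi, vr):.0f} HL")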
While in the traditional static tests the test force is applied uniformly with increasing magnitude, dynamic testing methods apply an instantaneous load. A test takes a mere 2 seconds and, using the standard probe D, leaves an indentation of just ~0.5 mm in diameter on steel or steel casting with a Leeb hardness of 600 HLD. By comparison, a Brinell indentation on the same material is ~3 mm (hardness value ~400 HBW 10/3000), with a standard-compliant measuring time of ~15 seconds plus the time for measuring the indentation.
The theoretical background of the rebound hardness test is discussed in detail in the literature.
Scales
Depending on the probe (“impact device”) and indenter (“impact body”) types that vary by geometry, size, weight, material and spring force, diverse impact devices and hardness units are distinguished, e.g.:
Equotip impact device D with hardness unit HLD
Equotip impact device G with hardness unit HLG
Equotip impact device C with hardness unit HLC
Equotip impact device E with hardness unit HLE
Equotip impact device DL with hardness unit HLDL
Equotip impact device S with hardness unit HLS
Equotip impact device DC with hardness unit HLDC
Generally, impact device types are optimized for certain application fields. This is similar to using various indenter geometries and test loads in Rockwell (e.g. HRA, HRB, HRC), Brinell and Vickers.
Equotip hardness results in HLx are often converted to the traditional hardness scales HRC, HB and HV mainly for convention reasons between supplier and customer.
Standards
German standards and specifications:
DIN 50156-1 “Metallic materials – Leeb hardness test - Part 1: Test Method”
DIN 50156-2 "Metallic materials - Leeb hardness test - Part 2: Verification and calibration of the testing devices"
DIN 50156-3 "Metallic materials - Leeb hardness test - Part 3: Calibration of reference blocks"
DGZfP Guideline “Mobile Härteprüfung“
VDI/VDE Guideline 2616 Part 1 “Hardness testing of metallic materials”
American standards:
ASTM A956 “Standard Test Method for Leeb Hardness Testing of Steel Products”
ASTM E140 - 12be1 "Standard Hardness Conversion Tables for Metals Relationship Among Brinell Hardness, Vickers Hardness, Rockwell Hardness, Superficial Hardness, Knoop Hardness, Scleroscope Hardness, and Leeb Hardness"
International standards:
ISO/DIS 16859-1 "Metallic materials - Leeb hardness test - Part 1: Test method"
ISO/DIS 16859-2 "Metallic materials - Leeb hardness test - Part 2: Verification and calibration of the testing devices"
ISO/DIS 16859-3 "Metallic materials - Leeb hardness test - Part 3: Calibration of reference test blocks"
See also
Meyer hardness test
References
External links
http://grhardnesstester.com/blog/methods-testing-hardness-steel/
https://www.baq.de/template.cgi?page=service_infos_ueber_messverfahren&rubrik=&id=&lang=en#rueckprall-verfahren
Hardness tests | Leeb rebound hardness test | [
"Materials_science"
] | 1,208 | [
"Hardness tests",
"Materials testing"
] |
28,828,990 | https://en.wikipedia.org/wiki/Magnetoelectric%20effect | In its most general form, the magnetoelectric effect (ME) denotes any coupling between the magnetic and the electric properties of a material. The first example of such an effect was described by Wilhelm Röntgen in 1888, who found that a dielectric material moving through an electric field would become magnetized. A material where such a coupling is intrinsically present is called a magnetoelectric.
Some promising applications of the ME effect are sensitive detection of magnetic fields, advanced logic devices and tunable microwave filters.
History
The first example of a magnetoelectric effect was discussed in 1888 by Wilhelm Röntgen, who showed that a dielectric material moving through an electric field would become magnetized. The possibility of an intrinsic magnetoelectric effect in a (non-moving) material was conjectured by Pierre Curie in 1894, while the term "magnetoelectric" was coined by Peter Debye in 1926.
A mathematical formulation of the linear magnetoelectric effect was included in Lev Landau and Evgeny Lifshitz's Course of Theoretical Physics. Only in 1959 did Igor Dzyaloshinskii, using an elegant symmetry argument, derive the form of a linear magnetoelectric coupling in chromium(III) oxide (Cr2O3). The experimental confirmation came just a few months later when the effect was observed for the first time by D. Astrov. The general excitement which followed the measurement of the linear magnetoelectric effect led to the organization of the series of Magnetoelectric Interaction Phenomena in Crystals (MEIPIC) conferences. Between the prediction of Dzyaloshinskii and the first MEIPIC edition (1973), more than 80 linear magnetoelectric compounds were found. Recently, technological and theoretical progress, driven in large part by the advent of multiferroic materials, has triggered a renaissance of these studies, and the magnetoelectric effect is still heavily investigated.
Linear magnetoelectric effect
Historically, the first and most studied example of this effect is the linear magnetoelectric effect. Mathematically, while the electric susceptibility and magnetic susceptibility describe the electric and magnetic polarization responses to an electric, resp. a magnetic field, there is also the possibility of a magnetoelectric susceptibility α which describes a linear response of the electric polarization to a magnetic field, and vice versa:

P_i = α_ij H_j
μ0 M_j = α_ij E_i

The tensor α must be the same in both equations. Here, P is the electric polarization, M the magnetization, E and H the electric and magnetic fields. In SI units, α has units of seconds per meter.
The first material where an intrinsic linear magnetoelectric effect was predicted theoretically and confirmed experimentally was Cr2O3. This is a single-phase material. Multiferroics are another example of single-phase materials that can exhibit a general magnetoelectric effect if their magnetic and electric orders are coupled. Composite materials are another way to realize magnetoelectrics. There, the idea is to combine, say a magnetostrictive and a piezoelectric material. These two materials interact by strain, leading to a coupling between magnetic and electric properties of the compound material.
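As a numerical illustration of the linear effect, the sketch below evaluates P = αH for a single scalar component. The coupling value of a few ps/m is an order-of-magnitude figure of the kind reported for Cr2O3 and is used here purely for illustration.

# Minimal sketch of the scalar linear magnetoelectric relation P = alpha * H.
import math

MU0 = 4e-7 * math.pi                # vacuum permeability (T m/A)

alpha = 4e-12                       # illustrative linear ME coefficient (s/m)
B = 1.0                             # applied magnetic flux density (T)
H = B / MU0                         # corresponding H field (A/m)

P = alpha * H                       # induced electric polarization (C/m^2)
print(f"H = {H:.3e} A/m -> induced P = {P:.3e} C/m^2")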
General phenomenology
If the coupling between magnetic and electric properties is analytic, then the magnetoelectric effect can be described by an expansion of the free energy F as a power series in the electric and magnetic fields E and H:

F(E, H) = F_0 − P^s_i E_i − M^s_i H_i − (1/2) ε_0 ε_ij E_i E_j − (1/2) μ_0 μ_ij H_i H_j − α_ij E_i H_j − (1/2) β_ijk E_i H_j H_k − (1/2) γ_ijk H_i E_j E_k − ...

Differentiating the free energy will then give the electric polarization P_i = −∂F/∂E_i and the magnetization M_i = −∂F/∂H_i.
Here, P^s and M^s are the static polarization, resp. magnetization of the material, whereas ε and μ are the electric, resp. magnetic susceptibilities. The tensor α describes the linear magnetoelectric effect, which corresponds to an electric polarization induced linearly by a magnetic field, and vice versa. The higher terms with coefficients β and γ describe quadratic effects. For instance, the tensor β describes a linear magnetoelectric effect which is, in turn, induced by an electric field.
The possible terms appearing in the expansion above are constrained by symmetries of the material. Most notably, the tensor α must be antisymmetric under time-reversal symmetry. Therefore, the linear magnetoelectric effect may only occur if time-reversal symmetry is explicitly broken, for instance by the explicit motion in Röntgen's example, or by an intrinsic magnetic ordering in the material. In contrast, the tensor β may be non-vanishing in time-reversal symmetric materials.
Microscopic origin
There are several ways in which a magnetoelectric effect can arise microscopically in a material.
Single-ion anisotropy
In crystals, spin–orbit coupling is responsible for single-ion magnetocrystalline anisotropy which determines preferential axes for the orientation of the spins (such as easy axes). An external electric field may change the local symmetry seen by magnetic ions and affect both the strength of the anisotropy and the direction of the easy axes. Thus, single-ion anisotropy can couple an external electric field to spins of magnetically ordered compounds.
Symmetric Exchange striction
The main interaction between spins of transition metal ions in solids is usually provided by superexchange, also called symmetric exchange. This interaction depends on details of the crystal structure such as the bond length between magnetic ions and the angle formed by the bonds between magnetic and ligand ions. In magnetic insulators it usually is the main mechanism for magnetic ordering, and, depending on the orbital occupancies and bond angles, can lead to ferro- or antiferromagnetic interactions. As the strength of symmetric exchange depends on the relative position of the ions, it couples the spin orientations to the lattice structure. Coupling of spins to a collective distortion with a net electric dipole can occur if the magnetic order breaks inversion symmetry. Thus, symmetric exchange can provide a handle to control magnetic properties through an external electric field.
Strain driven magnetoelectric heterostructured effect
Because materials exist that couple strain to electrical polarization (piezoelectrics, electrostrictives, and ferroelectrics) and that couple strain to magnetization (magnetostrictive/magnetoelastic/ferromagnetic materials), it is possible to couple magnetic and electric properties indirectly by creating composites of these materials that are tightly bonded so that strains transfer from one to the other.
Thin film strategy enables achievement of interfacial multiferroic coupling through a mechanical channel in heterostructures consisting of a magnetoelastic and a piezoelectric component. This type of heterostructure is composed of an epitaxial magnetoelastic thin film grown on a piezoelectric substrate. For this system, application of a magnetic field will induce a change in the dimension of the magnetoelastic film. This process, called magnetostriction, will alter residual strain conditions in the magnetoelastic film, which can be transferred through the interface to the piezoelectric substrate. Consequently, a polarization is introduced in the substrate through the piezoelectric process.
The overall effect is that the polarization of the ferroelectric substrate is manipulated by an application of a magnetic field, which is the desired magnetoelectric effect (the reverse is also possible). In this case, the interface plays an important role in mediating the responses from one component to another, realizing the magnetoelectric coupling. For an efficient coupling, a high-quality interface with optimal strain state is desired. In light of this interest, advanced deposition techniques have been applied to synthesize these types of thin film heterostructures. Molecular beam epitaxy has been demonstrated to be capable of depositing structures consisting of piezoelectric and magnetostrictive components. Materials systems studied included cobalt ferrite, magnetite, SrTiO3, BaTiO3, PMNT.
Flexomagnetoelectric effect
Magnetically driven ferroelectricity is also caused by inhomogeneous magnetoelectric interaction. This effect appears due to the coupling between inhomogeneous order parameters. It is also called the flexomagnetoelectric effect. It is usually described using the Lifshitz invariant (i.e. a single-constant coupling term). It was shown that in the general case of a cubic hexoctahedral crystal a four-phenomenological-constants approach is correct. The flexomagnetoelectric effect appears in spiral multiferroics or micromagnetic structures like domain walls and magnetic vortexes.
Ferroelectricity developed from a micromagnetic structure can appear in any magnetic material, even a centrosymmetric one. A symmetry classification of domain walls determines the type of electric polarization rotation in the volume of any magnetic domain wall. The existing symmetry classification of magnetic domain walls has been applied to predict the spatial distribution of electric polarization in their volumes. The predictions for almost all symmetry groups conform with the phenomenology in which inhomogeneous magnetization couples to homogeneous polarization. Full agreement between the symmetry-based and phenomenological approaches appears when energy terms containing spatial derivatives of the electric polarization are taken into account.
See also
Piezoelectricity
Multiferroics
Exchange interaction
References
Condensed matter physics
Materials science | Magnetoelectric effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,898 | [
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Matter"
] |
31,387,682 | https://en.wikipedia.org/wiki/Dynamic%20insulation | Dynamic insulation is a form of insulation where cool outside air flowing through the thermal insulation in the envelope of a building will pick up heat from the insulation fibres. Buildings can be designed to exploit this to reduce the transmission heat loss (U-value) and to provide pre-warmed, draft free air to interior spaces. This is known as dynamic insulation since the U-value is no longer constant for a given wall or roof construction but varies with the speed of the air flowing through the insulation (climate adaptive building shell). Dynamic insulation is different from breathing walls. The positive aspects of dynamic insulation need to be weighed against the more conventional approach to building design which is to create an airtight envelope and provide appropriate ventilation using either natural ventilation or mechanical ventilation with heat recovery. The air-tight approach to building envelope design, unlike dynamic insulation, results in a building envelope that provides a consistent performance in terms of heat loss and risk of interstitial condensation that is independent of wind speed and direction. Under certain wind conditions a dynamically insulated building can have a higher heat transmission loss than an air-tight building with the same thickness of insulation. Often the air enters at about 15 °C.
Introduction
The primary function of the walls and roof of a building is to be wind and watertight. Depending on the function of the building there will be also a requirement to maintain the inside within a suitable temperature range in a way that minimises both the use of energy and the associated carbon dioxide emissions.
Dynamic insulation is normally implemented in timber frame walls and in ceilings. It turns on its head the long accepted wisdom of building designers and building services engineers to "build tight and ventilate right". It requires air permeable walls and/or roof/ceiling so that when the building is depressurised air can flow from outside to inside through the insulation in the wall or roof or ceiling (Figs 1 and 2). The following explanation of dynamic insulation will, for simplicity, be set in the context of temperate or cold climates where the main energy use is for heating rather than cooling the building. In hot climates it may have application in increasing the heat loss from the building.
As air flows inwards through the insulation it picks up, via the insulation fibres, the heat that is being conducted to the outside. Dynamic insulation is thus able to achieve the dual function of reducing the heat loss through the walls and/or roof whilst at the same time supplying pre-warmed air to the indoor spaces. Dynamic insulation would appear, therefore, to overcome the major disadvantage of airtight envelopes which is that the quality of the indoor air will deteriorate unless there is natural or mechanical ventilation. However, dynamic insulation also requires mechanical ventilation with heat recovery (MVHR) in order to recover the heat in the exhaust air.
For the air to be continually drawn through the walls and/or roof/ceiling, a fan is needed to hold the building at a pressure of 5 to 10 Pascals below the ambient pressure. The air that is being continuously drawn through the wall or roof needs to be continuously vented to outside. This represents a heat loss which must be recovered. An air-to-air heat exchanger (Fig 2) is the simplest way to do this.
Annotation for Air Tight Timber Frame Construction
Annotation for Air Permeable Wall Construction
Science of dynamic insulation
All the main features of dynamic insulation can be understood by considering the ideal case of one-dimensional steady state heat conduction and air flow through a uniform sample of air permeable insulation. Equation (1), which determines the temperature T at a distance x measured from the cold side of the insulation, is derived from the total net flow of conductive and convective heat across a small element of insulation being constant:

λa d²T/dx² − ρa ca u dT/dx = 0      (1)

where
u = air speed through the insulation (m/s)
ca = specific heat of air (J/kg K)
ρa = density of air (kg/m³)
λa = thermal conductivity of the insulation (W/m K)
For two- and three-dimensional geometries computational fluid dynamics (CFD) tools are required to solve simultaneously the fluid flow and heat transfer equations through porous media. The idealised 1D model of dynamic insulation provides a great deal of physical insight into the conductive and convective heat transfer processes and provides a means of testing the validity of the results of CFD calculations. Furthermore, just as simple 1D steady state heat flow is assumed in the calculation of the heat transmission coefficients (U-values) that are used in the design, approval and building energy performance rating of buildings, so the simple 1D steady state model of dynamic insulation is adequate for designing and assessing the performance of a dynamically insulated building or element of the building.
Insulations such as polyurethane (PUR) boards, which due to their micro-structure are not air permeable, are not suitable for dynamic insulation. Insulations such as rock wool, glass wool, sheep's wool and cellulose are all air permeable and so can be used in a dynamically insulated envelope. In equation (1) the air speed through the insulation, u, is taken as positive when the air flow is in the opposite direction to the conductive heat flow (contra-flux). Equation (1) also applies to steady state heat flow in multi-layered walls.
Equation (1) has an analytical solution:

T(x) = To + (TL − To) (e^(x/A) − 1) / (e^(L/A) − 1)      (2)

for the boundary conditions
T(x) = To at x = 0
T(x) = TL at x = L
where the parameter A, with dimensions of length, is defined by:

A = λa / (ρa ca u)      (3)
The temperature profile as calculated using equation (2) for air flowing through a slab of cellulose insulation 0.2 m thick, in which one side is at a temperature of 20 °C and the other is at 0 °C, is shown in Fig 3. The thermal conductivity of the cellulose insulation was taken to be 0.04 W/m K.
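A minimal Python sketch of equation (2) under the Fig 3 conditions is given below; the air density and specific heat are assumed typical values, since the text does not state them.

# Minimal sketch of the contra-flux temperature profile of equation (2) for
# 0.2 m of cellulose insulation (lambda = 0.04 W/m K), 0 degC outside and
# 20 degC inside. Air density and specific heat are assumed typical values.
import math

LAMBDA = 0.04      # thermal conductivity of the insulation (W/m K)
RHO_A = 1.2        # density of air (kg/m^3), assumed
C_A = 1005.0       # specific heat of air (J/kg K), assumed
L = 0.2            # insulation thickness (m)
T_COLD, T_WARM = 0.0, 20.0

def temperature(x, u):
    """Temperature at depth x (m from the cold side) for inward air speed u (m/s)."""
    if u == 0:
        return T_COLD + (T_WARM - T_COLD) * x / L           # linear, static case
    A = LAMBDA / (RHO_A * C_A * u)                           # equation (3)
    return T_COLD + (T_WARM - T_COLD) * math.expm1(x / A) / math.expm1(L / A)

for u in (0.0, 0.5e-3, 1.0e-3):                              # air speeds in m/s
    temps = ", ".join(f"{temperature(f * L, u):5.1f}" for f in (0.0, 0.25, 0.5, 0.75, 1.0))
    print(f"u = {u * 1000:3.1f} mm/s: T at x/L = 0, 0.25, 0.5, 0.75, 1 -> {temps} degC")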
Contra-flux
Fig 3 shows the typical behaviour of the temperature profile through dynamic insulation where the air flows in the opposite direction to the heat flux. As the air flow increases from zero, the temperature profile becomes increasingly more curved. On the cold side of the insulation (x/L = 0) the temperature gradient becomes increasingly horizontal. As the conduction heat flow is proportional to the temperature gradient, the slope of the temperature profile on the cold side is a direct indication of the conduction heat loss through a wall or roof. On the cold side of the insulation the temperature gradient is close to zero, which is the basis for the claim often made that dynamic insulation can achieve a U-value of zero W/m²K.
On the warm side of the insulation the temperature gradient gets steeper with increasing air flow. This implies heat is flowing into the wall at a greater rate than for conventional insulation (air speed = 0 mm/s). For the case shown of air flowing through the insulation at 1 mm/s, the temperature gradient on the warm side of the insulation (x/L = 1) is 621 °C/m, which compares with only 100 °C/m for the conventional insulation. This implies that with an air flow of 1 mm/s the inner surface is absorbing about six times as much heat as it would with conventional insulation.
A consequence of this is that considerably more heat has to be put into the wall if there is air flowing through it from outside; specifically, a space heating system about six times larger than that for a conventionally insulated house would be needed. It is frequently stated that in dynamic insulation the outside air is being warmed up by heat that would be lost in any case, the implication being that the outside air is being warmed by "free" heat. In fact the heat flow into the wall increases with air speed, as evidenced by the decreasing temperature of the inner surface (Table 2 and Fig 4 below). A dynamically insulated house also requires an air-to-air heat exchanger, just as an airtight house does. The airtight house has the further advantage that, if it is well insulated, it will require only a minimal space heating system.
The temperature gradient at any point in dynamic insulation can be obtained by differentiating equation ()
From this the temperature gradient on the cold side of the insulation (x = 0) is given by
and the temperature gradient on the warm side of the insulation (x = L) is given by
From the temperature gradient on the cold side of the insulation (equation ()) a transmission heat loss or U-value for a dynamically insulated wall, Udyn can be calculated (Table 1)
This definition of dynamic U-value would appear to be consistent with Wallenten's definition.
The ratio of the dynamic U-value to the static U-value (u=0 m/s) is
Table 1 Dynamic U-value
With this definition, the U-value of the dynamic wall decreases exponentially with increasing air speed.
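As a rough numerical illustration of Table 1, the sketch below evaluates Udyn = ρa ca u / (e^(L/A) − 1), which follows from the cold-side temperature gradient of the assumed 1D solution used above; the air properties are again nominal assumed values, so the figures differ slightly from the table.

```python
# Sketch: dynamic U-value versus air speed for the same 0.2 m cellulose slab.
# Assumes U_dyn = rho_a*c_a*u / (exp(L/A) - 1) with A = lambda/(rho_a*c_a*u),
# so that U_dyn / U_static = (L/A) / (exp(L/A) - 1). Air properties are assumed values.
import math

LAMBDA, L = 0.04, 0.2
RHO_A, C_A = 1.2, 1005.0
U_STATIC = LAMBDA / L                      # 0.2 W/m2K for the bare insulation layer

for u_mm in (0.0, 0.1, 0.31, 0.5, 1.0):    # air speed in mm/s
    u = u_mm / 1000.0
    if u == 0:
        u_dyn = U_STATIC
    else:
        A = LAMBDA / (RHO_A * C_A * u)
        u_dyn = (RHO_A * C_A * u) / math.expm1(L / A)
    print(f"u = {u_mm:4.2f} mm/s  U_dyn = {u_dyn:.4f} W/m2K  (ratio {u_dyn / U_STATIC:.3f})")
```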
As stated above, the conductive heat flow into the insulation on the warm side is very much greater than that leaving the cold side. In this case it is 0.04 × 621 / 0.0504 ≈ 493 times greater for an air speed of 1 mm/s (Table 1). This imbalance in conductive heat flow is what raises the temperature of the incoming air.
This large heat flow into the wall has a further consequence. At the surface of a wall, floor or ceiling there is a thermal resistance which accounts for the convective and radiant heat transfer at these surfaces. For a vertical internal surface this thermal resistance has a value of 0.13 m2 K/W. In a dynamically insulated wall, as the conduction heat flow into the wall increases, so does the temperature drop across this internal thermal resistance. The wall surface temperature therefore becomes increasingly colder (Table 2). The temperature profiles through dynamic insulation, taking into account the decrease in surface temperature with increasing air flow, are shown in Fig 4.
Table 2 Temperature drop across air film thermal resistance
As the operative temperature of a room is a combination of the air temperature and the mean temperature of all the surfaces in the room, this implies that people will feel increasingly cooler as the air flow through the wall increases. Occupants may be tempted to turn up the room thermostat to compensate, thereby increasing the heat loss.
Pro-flux
Fig 5 shows the typical behaviour of the dynamic insulation temperature profile when the air flows in the same direction as the conductive heat flow (pro-flux). As air at room temperature flows outwards with increasing speed, the temperature profile becomes increasingly more curved. On the warm side of the insulation the temperature gradient becomes increasingly horizontal, as the warm air prevents the insulation cooling down in the linear way that would occur with no air flow. The conductive heat flow into the wall on the warm side is therefore very much less than that for conventional insulation. This does not mean that the transmission heat loss for the insulation is very low.
On the cold side of the insulation the temperature gradient gets steeper with increasing outward air flow, because the air, having now cooled, is no longer able to transfer heat to the insulation fibres. In pro-flux mode heat flows out of the wall at a greater rate than for conventional insulation. Warm moist air flowing out through the insulation and cooling rapidly also increases the risk of condensation occurring within the insulation, which will degrade the thermal performance of the wall and could, if prolonged, lead to mould growth and timber decay.
How the heat flow (W/m2K) from the outer or cold surface of the insulation varies with air flow through the insulation is shown in Fig 6. When the air, which is also cold, flows inwards (air speed is positive) then the heat loss decreases from that of conventional insulation towards zero. However, when warm air flows outwards through the insulation (air speed is negative) then the heat losses increase dramatically. This is why in a conventionally insulated building it is desirable to make the envelope airtight. In a dynamically insulated wall it is necessary to ensure the air flow is inward at all points of the building under all wind speeds and directions.
Influence of the wind
In general, when the wind blows on a building, the air pressure, Pw, varies over the building surface (Fig 7).
where
Po: a reference pressure (Pa)
Cp: wind pressure coefficient (dimensionless)
Liddament and CIBSE provide approximate wind pressure coefficient data for low rise buildings (up to 3 storeys). For a square plan building on an exposed site with the wind blowing directly on to the face of the building, the wind pressure coefficients are as shown in Fig 8. For a wind speed of 5.7 m/s at ridge height (taken as 8 m) there is zero pressure difference across the side walls when the building is depressurised to −10 Pa. The insulation in the windward and leeward walls is behaving dynamically in the contra-flux mode, with U-values of 0.0008 W/(m2K) and 0.1 W/(m2K) respectively. Since the building has a square footprint, the average U-value for the four walls is 0.1252 W/m2K. For other wind speeds and directions, the U-values will be different.
For wind speeds greater than 5.7 m/s at ridge height the side walls are in pro-flux mode, with a U-value that increases dramatically with wind speed (Fig 6). At wind speeds greater than 9.0 m/s at ridge height the leeward wall switches from contra-flux to pro-flux mode. The average U-value for the four walls is now 0.36 W/(m2K), which is significantly greater than the 0.2 W/(m2K) for an air-tight construction. These changes from contra-flux to pro-flux mode could be delayed by depressurising the building below −10 Pa.
If this building is placed in a particular geographical location, wind speed data for that site may be used to estimate the proportion of the year in which one or more of the walls will be operating in the risky and high heat loss pro-flux mode. From the Rayleigh distribution of wind speed at the site of the building, it is possible to estimate the number of hours in a year during which the wind speed at a height of 10.0 m exceeds 7.83 m/s (estimated from the wind speed of 5.7 m/s at ridge height of 8.0 m). This is the total time during an average year in which a building with dynamically insulated walls has significant heat losses.
If, by way of example, the building in Fig 8 were located in Footdee, Aberdeen, the Ordnance Survey Landranger grid reference is NJ955065. Entering NJ9506 into the UK wind speed database returns for this site an average annual wind speed of 5.8 m/s at a height of 10 m. The Rayleigh distribution for this mean wind speed indicates that wind speeds in excess of 8 m/s are likely to occur for 2348 hours in the year, or about 27% of the year. The wind pressure coefficients for the walls of the building also vary with wind direction, which changes throughout the year. Nevertheless, the above calculations indicate that a square plan building of 2 storeys located in Footdee, Aberdeen could have one or more of its walls operating in the risky and high heat loss pro-flux mode for about a quarter of the year.
A more robust way of introducing dynamic insulation to a building, one that avoids the pressure variation around the building envelope, is to make use of the fact that in a ventilated roof space the pressure is relatively uniform over the ceiling (Fig 9). Thus a building with a dynamically insulated ceiling would offer consistent performance independent of varying wind speed and direction.
Air control layer
The maximum depressurisation for a dynamically insulated building is normally limited to 10 Pa in order to avoid doors slamming shut or becoming difficult to open. Dalehaug also recommended that the pressure difference through the construction at the design minimum air flow (> 0.5 m3/m2h) should be about 5 Pa. The function of the air control layer (Fig 1) in a dynamically insulated wall or ceiling is to provide sufficient resistance to the air flow to achieve the required pressure drop at the design air flow rate. The air control layer needs to have a suitable air permeability, and this is the key to making dynamic insulation work.
The permeability of a material to air flow, Φ (m2/hPa), is defined as the volume of air that flows in one hour through a cube of material measuring 1 m × 1 m × 1 m under a pressure difference of 1 Pa:
Φ = V'L / (A ΔP)
where
A: area of material through which air flows (m2)
L: thickness of material through which air flows (m)
V': volume flow rate of air (m3/h)
ΔP: pressure difference along the length L of material (Pa)
Equation () is a simplified form of Darcy's Law. In building applications the air is at ambient pressure and temperature and small changes in the viscosity of air are not significant. Darcy's Law can be used to calculate the air permeability of a porous medium if the permeability of the medium (m2) is known.
The air permeability of some materials that could be used in dynamically insulated walls or ceiling are listed in Table 3. Air permeability data is crucial to the selection of the correct material for the air control layer. Further sources of air permeability data include ASHRAE and Kumaran.
Table 3: Measured Air Permeability of Building Materials
(1) Pressure drop calculated at flow rate of 1 m3/m2h
Design of a dynamic insulated building
The application of the theory of dynamic insulation is best explained by way of an example. Assume a house of 100 m2 floor area with a dynamically insulated ceiling. Putting dynamic insulation in the ceiling effectively limits the house to a single storey.
The first step is to decide on an appropriate air change rate for good air quality. As this air flow rate will be supplied partly through the dynamically insulated ceiling and partly through a mechanical ventilation and heat recovery system (MVHR), energy loss is not a major concern, so 1 air change per hour (ach) will be assumed. If the floor to ceiling height is 2.4 m this implies an air flow rate of 240 m3/h, part of which is supplied through the dynamically insulated ceiling and the rest through the MVHR.
Next the material for the air control layer is chosen to provide a suitable air flow rate at the chosen depressurisation, taken as 10 Pa in this case. (The air flow rate could alternatively be determined from the desired U-value at the depressurisation of 10 Pa.) From Table 3, fibreboard has an appropriate air permeability of 1.34x10−3 (m2/hPa).
For a 12 mm thick sheet of fibreboard this gives, for the maximum pressure difference of 10 Pa, an air flow rate of 1.12 m3/h per m2 of ceiling. This is equivalent to an air speed through the ceiling of 1.12 m/h, or 0.31 mm/s. The 100 m2 ceiling will thus provide 112 m3/h, and the air-to-air heat exchanger will therefore provide the balance of 128 m3/h.
Dynamic insulation works best with a good thickness of insulation, so taking 200 mm of cellulose insulation (k = 0.04 W/m °C), the dynamic U-value for an air flow of 0.31 mm/s is calculated using equation () above to be 0.066 W/m2 °C. If a lower dynamic U-value is required then a more air permeable material than fibreboard (or a thinner air control layer) would need to be selected, so that a higher air speed through the insulation can be achieved at 10 Pa.
The final step would be to select an air-to-air heat exchanger that had a good heat recovery efficiency with a supply air flow rate of 128 m3/h and an extract air flow rate of 240 m3/h.
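The arithmetic of this design example can be collected into a short script. The sketch below assumes the Darcy-type relation (flow per unit area = Φ ΔP / L) and the same nominal air properties as above, and approximately reproduces the figures in the text.

```python
# Sketch of the ceiling design example above. The Darcy-type relation
# (flow per unit area = permeability * dP / thickness) and the air properties
# are assumptions chosen to be consistent with the worked figures in the text.
import math

FLOOR_AREA = 100.0              # m2 of dynamically insulated ceiling
ACH, HEIGHT = 1.0, 2.4          # air changes per hour, floor-to-ceiling height (m)
DP = 10.0                       # design depressurisation (Pa)
PHI, T_BOARD = 1.34e-3, 0.012   # fibreboard permeability (m2/hPa) and thickness (m)
LAMBDA, L_INS = 0.04, 0.2       # cellulose conductivity (W/m K) and thickness (m)
RHO_A, C_A = 1.2, 1005.0        # assumed air density (kg/m3) and specific heat (J/kg K)

total_flow = ACH * FLOOR_AREA * HEIGHT            # 240 m3/h of fresh air required
ceiling_flow_per_m2 = PHI * DP / T_BOARD          # ~1.12 m3/h per m2 of ceiling
ceiling_flow = ceiling_flow_per_m2 * FLOOR_AREA   # ~112 m3/h through the ceiling
mvhr_flow = total_flow - ceiling_flow             # balance supplied by the MVHR

u = ceiling_flow_per_m2 / 3600.0                  # air speed through insulation (m/s)
A = LAMBDA / (RHO_A * C_A * u)
u_dyn = (RHO_A * C_A * u) / math.expm1(L_INS / A) # dynamic U-value of the ceiling

print(f"ceiling supply {ceiling_flow:.0f} m3/h, MVHR supply {mvhr_flow:.0f} m3/h")
print(f"air speed {u*1000:.2f} mm/s, dynamic U-value {u_dyn:.3f} W/m2K")
```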
See also
List of insulation material
Wind loads on buildings
Laminar flow
Porous medium
References
External links
“OpenAir@RGU” Further resources on the theory and applications of dynamic insulation can be found on OpenAIR@RGU, the open access institutional repository of Robert Gordon University.
Insulators
Engineering thermodynamics | Dynamic insulation | [
"Physics",
"Chemistry",
"Engineering"
] | 4,177 | [
"Engineering thermodynamics",
"Thermodynamics",
"Mechanical engineering"
] |
31,392,778 | https://en.wikipedia.org/wiki/Space%20tether | Space tethers are long cables which can be used for propulsion, momentum exchange, stabilization and attitude control, or maintaining the relative positions of the components of a large dispersed satellite/spacecraft sensor system. Depending on the mission objectives and altitude, spaceflight using this form of spacecraft propulsion is theorized to be significantly less expensive than spaceflight using rocket engines.
Main techniques
Tether satellites might be used for various purposes, including research into tether propulsion, tidal stabilization and orbital plasma dynamics. Five main techniques for employing space tethers are in development:
Electrodynamic tethers
Electrodynamic tethers are primarily used for propulsion. These are conducting tethers that carry a current that can generate either thrust or drag from a planetary magnetic field, in much the same way as an electric motor does.
Momentum exchange tethers
These can be either rotating tethers, or non-rotating tethers, that capture an arriving spacecraft and then release it at a later time into a different orbit with a different velocity. Momentum exchange tethers can be used for orbital maneuvering, or as part of a planetary-surface-to-orbit / orbit-to-escape-velocity space transportation system.
Tethered formation flying
This is typically a non-conductive tether that accurately maintains a set distance between multiple space vehicles flying in formation.
Electric sail
A form of solar wind sail with electrically charged tethers that will be pushed by the momentum of solar wind ions.
Universal Orbital Support System
A concept for suspending an object from a tether orbiting in space.
Many uses for space tethers have been proposed, including deployment as space elevators, as skyhooks, and for doing propellant-free orbital transfers.
History
Konstantin Tsiolkovsky (1857–1935) once proposed a tower so tall that it reached into space, so that it would be held there by the rotation of Earth. However, at the time, there was no realistic way to build it.
In 1960, another Russian, Yuri Artsutanov, wrote in greater detail about the idea of a tensile cable to be deployed from a geosynchronous satellite, downwards towards the ground, and upwards away, keeping the cable balanced. This is the space elevator idea, a type of synchronous tether that would rotate with the Earth. However, given the materials technology of the time, this too was impractical on Earth.
In the 1970s, Jerome Pearson independently conceived the idea of a space elevator, sometimes referred to as a synchronous tether, and, in particular, analyzed a lunar elevator that can go through the L1 and L2 points, and this was found to be possible with materials then existing.
In 1977, Hans Moravec and later Robert L. Forward investigated the physics of non-synchronous skyhooks, also known as rotating skyhooks, and performed detailed simulations of tapered rotating tethers that could pick objects off, and place objects onto, the Moon, Mars and other planets, with little loss, or even a net gain of energy.
In 1979, NASA examined the feasibility of the idea and gave direction to the study of tethered systems, especially tethered satellites.
In 1990, Eagle Sarmont proposed a non-rotating Orbiting Skyhook for an Earth-to-orbit / orbit-to-escape-velocity Space Transportation System in a paper titled "An Orbiting Skyhook: Affordable Access to Space". In this concept a suborbital launch vehicle would fly to the bottom end of a Skyhook, while spacecraft bound for higher orbit, or returning from higher orbit, would use the upper end.
In 2000, NASA and Boeing considered a HASTOL concept, where a rotating tether would take payloads from a hypersonic aircraft (at half of orbital velocity) to orbit.
Missions
A tether satellite is a satellite connected to another by a space tether. A number of satellites have been launched to test tether technologies, with varying degrees of success.
Types
There are many different (and overlapping) types of tether.
Momentum exchange tethers, rotating
Momentum exchange tethers are one of many applications for space tethers. Momentum exchange tethers come in two types; rotating and non-rotating. A rotating tether will create a controlled force on the end-masses of the system due to centrifugal acceleration. While the tether system rotates, the objects on either end of the tether will experience continuous acceleration; the magnitude of the acceleration depends on the length of the tether and the rotation rate. Momentum exchange occurs when an end body is released during the rotation. The transfer of momentum to the released object will cause the rotating tether to lose energy, and thus lose velocity and altitude. However, using electrodynamic tether thrusting, or ion propulsion the system can then re-boost itself with little or no expenditure of consumable reaction mass.
Skyhook
A skyhook is a theoretical class of orbiting tether propulsion intended to lift payloads to high altitudes and speeds. Proposals for skyhooks include designs that employ tethers spinning at hypersonic speed for catching high speed payloads or high altitude aircraft and placing them in orbit.
Electrodynamics
Electrodynamic tethers are long conducting wires, such as one deployed from a tether satellite, which can operate on electromagnetic principles as generators, by converting their kinetic energy to electrical energy, or as motors, converting electrical energy to kinetic energy. Electric potential is generated across a conductive tether by its motion through the Earth's magnetic field. The choice of the metal conductor to be used in an electrodynamic tether is determined by a variety of factors. Primary factors usually include high electrical conductivity and low density. Secondary factors, depending on the application, include cost, strength, and melting point.
An electrodynamic tether was profiled in the documentary film Orphans of Apollo as technology that was to be used to keep the Russian space station Mir in orbit.
Formation flying
This is the use of a (typically) non-conductive tether to connect multiple spacecraft. Tethered Experiment for Mars inter-Planetary Operations (TEMPO³) is a proposed 2011 experiment to study the technique.
Universal Orbital Support System
A theoretical type of non-rotating tethered satellite system, it is a concept for providing space-based support to things suspended above an astronomical object. The orbital system is a coupled mass system wherein the upper supporting mass (A) is placed in an orbit around a given celestial body such that it can support a suspended mass (B) at a specific height above the surface of the celestial body, but lower than (A).
Technical difficulties
Gravitational gradient stabilization
Instead of rotating end for end, tethers can also be kept straight by the slight difference in the strength of gravity over their length.
A non-rotating tether system has a stable orientation that is aligned along the local vertical (of the earth or other body). This can be understood by inspection of the figure on the right where two spacecraft at two different altitudes have been connected by a tether. Normally, each spacecraft would have a balance of gravitational (e.g. Fg1) and centrifugal (e.g. Fc1) forces, but when tied together by a tether, these values begin to change with respect to one another. This phenomenon occurs because, without the tether, the higher-altitude mass would travel slower than the lower mass. The system must move at a single speed, so the tether must therefore slow down the lower mass and speed up the upper one. The centrifugal force of the tethered upper body is increased, while that of the lower-altitude body is reduced. This results in the centrifugal force of the upper body and the gravitational force of the lower body being dominant. This difference in forces naturally aligns the system along the local vertical, as seen in the figure.
Atomic oxygen
Objects in low Earth orbit are subjected to noticeable erosion from atomic oxygen due to the high orbital speed with which the molecules strike as well as their high reactivity. This could quickly erode a tether.
Micrometeorites and space junk
Simple single-strand tethers are susceptible to micrometeoroids and space junk. Several systems have since been proposed and tested to improve debris resistance:
The US Naval Research Laboratory has successfully flown a long term long, diameter tether with an outer layer of Spectra 1000 braid and a core of acrylic yarn. This satellite, the Tether Physics and Survivability Experiment (TiPS), was launched in June 1996 and remained in operation over 10 years, finally breaking in July 2006.
Robert P. Hoyt patented an engineered circular net, such that a cut strand's strains would be redistributed automatically around the severed strand. This is called a Hoytether. Hoytethers have theoretical lifetimes of decades.
Researchers with JAXA have also proposed net-based tethers for their future missions.
Large pieces of junk would still cut most tethers, including the improved versions listed here, but these are currently tracked on radar and have predictable orbits. Although thrusters could be used to change the orbit of the system, a tether could also be temporarily wiggled in the right place, using less energy, to dodge known pieces of junk.
Radiation
Radiation, including UV radiation tend to degrade tether materials, and reduce lifespan. Tethers that repeatedly traverse the Van Allen belts can have markedly lower life than those that stay in low earth orbit or are kept outside Earth's magnetosphere.
Construction
Properties of useful materials
Tether properties and materials are dependent on the application. However, there are some common properties. To achieve maximum performance and low cost, tethers would need to be made of materials with the combination of high strength or electrical conductivity and low density. All space tethers are susceptible to space debris or micrometeoroids. Therefore, system designers will need to decide whether or not a protective coating is needed, including relative to UV and atomic oxygen.
For applications that exert high tensile forces on the tether, the materials need to be strong and light. Some current tether designs use crystalline plastics such as ultra-high-molecular-weight polyethylene, aramid or carbon fiber. A possible future material would be carbon nanotubes, which have an estimated tensile strength between , and a proven tensile strength in the range for some individual nanotubes. (A number of other materials obtain in some samples on the nano scale, but translating such strengths to the macro scale has been challenging so far, with, as of 2011, CNT-based ropes being an order of magnitude less strong, not yet stronger than more conventional carbon fiber on that scale).
For some applications, the tensile force on the tether is projected to be less than . Material selection in this case depends on the purpose of the mission and design constraints. Electrodynamic tethers, such as the one used on TSS-1R, may use thin copper wires for high conductivity (see EDT).
There are design equations for certain applications that may be used to aid designers in identifying typical quantities that drive material selection.
Space elevator equations typically use a "characteristic length", Lc, which is also known as its "self-support length" and is the length of untapered cable it can support in a constant 1 g gravity field:
Lc = σ / (ρ g0),
where σ is the stress limit (in pressure units), ρ is the density of the material and g0 is the gravitational acceleration.
Hypersonic skyhook equations use the material's "specific velocity", which is equal to the maximum tangential velocity a spinning hoop can attain without breaking:
Vs = √(σ/ρ)
For rotating tethers (rotovators) the value used is the material's "characteristic velocity", which is the maximum tip velocity a rotating untapered cable can attain without breaking:
Vc = √(2σ/ρ)
The characteristic velocity equals the specific velocity multiplied by the square root of two.
These values are used in equations similar to the rocket equation and are analogous to specific impulse or exhaust velocity. The higher these values are, the more efficient and lighter the tether can be in relation to the payloads that they can carry. Eventually however, the mass of the tether propulsion system will be limited at the low end by other factors such as momentum storage.
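As an illustration of these figures of merit, the sketch below evaluates the characteristic length, specific velocity and characteristic velocity for a few candidate fibres; the strength and density figures used are rough, representative values only, not authoritative data.

```python
# Sketch: figures of merit for candidate tether materials.
# Formulas follow the definitions above: Lc = sigma/(rho*g0), Vs = sqrt(sigma/rho),
# Vc = sqrt(2*sigma/rho). The strength/density figures are rough illustrative values.
import math

G0 = 9.81  # m/s2
materials = {   # name: (tensile strength in Pa, density in kg/m3) - approximate values
    "Kevlar": (3.6e9, 1440.0),
    "Spectra (UHMWPE)": (3.5e9, 970.0),
    "Zylon": (5.8e9, 1560.0),
}

for name, (sigma, rho) in materials.items():
    lc = sigma / (rho * G0)            # self-support length (m)
    vs = math.sqrt(sigma / rho)        # specific velocity (m/s)
    vc = math.sqrt(2 * sigma / rho)    # characteristic velocity (m/s)
    print(f"{name:18s} Lc = {lc/1000:6.0f} km   Vs = {vs:5.0f} m/s   Vc = {vc:5.0f} m/s")
```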
Practical materials
Proposed materials include Kevlar, ultra-high-molecular-weight polyethylene, carbon nanotubes and M5 fiber. M5 is a synthetic fiber that is lighter than Kevlar or Spectra. According to Pearson, Levin, Oldson, and Wykes in their article "The Lunar Space Elevator", an M5 ribbon wide and thick, would be able to support on the lunar surface. It would also be able to hold 100 cargo vehicles, each with a mass of , evenly spaced along the length of the elevator. Other materials that could be used are T1000G carbon fiber, Spectra 2000, or Zylon.
Shape
Tapering
For gravity stabilized tethers, to exceed the self-support length the tether material can be tapered so that the cross-sectional area varies with the total load at each point along the length of the cable. In practice this means that the central tether structure needs to be thicker than the tips. Correct tapering ensures that the tensile stress at every point in the cable is exactly the same. For very demanding applications, such as an Earth space elevator, the tapering can reduce the excessive ratios of cable weight to payload weight. In lieu of tapering a modular staged tether system maybe used to achieve the same goal. Multiple tethers would be used between stages. The number of tethers would determine the strength of any given cross-section.
Thickness
For rotating tethers not significantly affected by gravity, the thickness also varies, and it can be shown that the area, A, is given as a function of r (the distance from the centre) as follows:
A(r) = (M v² / (T R)) exp( ρ v² (R² − r²) / (2 T R²) ),
where R is the radius (length) of the tether, v is the tip velocity with respect to the centre, M is the tip mass, ρ is the material density, and T is the design tensile strength (stress, in pressure units).
Mass ratio
Integrating the area to give the volume, multiplying by the density and dividing by the payload mass gives a tether mass to payload mass ratio of:
Mt / Mp = √π (v/Vc) exp(v²/Vc²) erf(v/Vc),
where erf is the normal probability error function.
Let Δ = v / Vc,
then:
Mt / Mp = √π Δ exp(Δ²) erf(Δ).
This equation can be compared with the rocket equation, which is proportional to a simple exponent on a velocity, rather than a velocity squared. This difference effectively limits the delta-v that can be obtained from a single tether.
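The following sketch illustrates this comparison, using the tapered-tether mass ratio given above and, for contrast, the exponential mass ratio of the rocket equation; treating Vc as playing the role of exhaust velocity here is an illustrative assumption.

```python
# Sketch: tether-to-payload mass ratio for a tapered rotating tether versus tip velocity,
# using the erf expression above, compared with the rocket-equation ratio exp(dv/ve) - 1.
# The comparison is purely illustrative.
import math

def tether_ratio(delta):
    """Tapered tether mass / payload mass for delta = v / Vc."""
    return math.sqrt(math.pi) * delta * math.exp(delta ** 2) * math.erf(delta)

def rocket_ratio(delta):
    """Propellant mass / payload mass from the rocket equation with dv/ve = delta."""
    return math.exp(delta) - 1.0

for delta in (0.5, 1.0, 1.5, 2.0):
    print(f"v/Vc = {delta:3.1f}: tether/payload = {tether_ratio(delta):8.2f}, "
          f"rocket propellant/payload = {rocket_ratio(delta):6.2f}")
```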
Redundancy
In addition the cable shape must be constructed to withstand micrometeorites and space junk. This can be achieved with the use of redundant cables, such as the Hoytether; redundancy can ensure that it is very unlikely that multiple redundant cables would be damaged near the same point on the cable, and hence a very large amount of total damage can occur over different parts of the cable before failure occurs.
Material strength
Beanstalks and rotovators are currently limited by the strengths of available materials. Although ultra-high strength plastic fibers (Kevlar and Spectra) permit rotovators to pluck masses from the surface of the Moon and Mars, a rotovator from these materials cannot lift from the surface of the Earth. In theory, high flying, supersonic (or hypersonic) aircraft could deliver a payload to a rotovator that dipped into Earth's upper atmosphere briefly at predictable locations throughout the tropic (and temperate) zone of Earth. As of May 2013, all mechanical tethers (orbital and elevators) are on hold until stronger materials are available.
Cargo capture
Cargo capture for rotovators is nontrivial, and failure to capture can cause problems. Several systems have been proposed, such as shooting nets at the cargo, but all add weight, complexity, and another failure mode. At least one lab scale demonstration of a working grapple system has been achieved, however.
Life expectancy
Currently, the strongest materials in tension are plastics that require a coating for protection from UV radiation and (depending on the orbit) erosion by atomic oxygen. Disposal of waste heat is difficult in a vacuum, so overheating may cause tether failures or damage.
Control and modelling
Pendular motion instability
Electrodynamic tethers deployed along the local vertical ('hanging tethers') may suffer from dynamical instability. Pendular motion causes the tether vibration amplitude to build up under the action of electromagnetic interaction. As the mission time increases, this behavior can compromise the performance of the system. Over a few weeks, electrodynamic tethers in Earth orbit might build up vibrations in many modes, as their orbit interacts with irregularities in magnetic and gravitational fields.
One plan to control the vibrations is to actively vary the tether current to counteract the growth of the vibrations. Electrodynamic tethers can be stabilized by reducing their current when it would feed the oscillations, and increasing it when it opposes oscillations. Simulations have demonstrated that this can control tether vibration. This approach requires sensors to measure tether vibrations, which can either be an inertial navigation system on one end of the tether, or satellite navigation systems mounted on the tether, transmitting their positions to a receiver on the end.
Another proposed method is to use spinning electrodynamic tethers instead of hanging tethers. The gyroscopic effect provides passive stabilisation, avoiding the instability.
Surges
As mentioned earlier, conductive tethers have failed from unexpected current surges. Unexpected electrostatic discharges have cut tethers (e.g. see Tethered Satellite System Reflight (TSS‑1R) on STS‑75), damaged electronics, and welded tether handling machinery. It may be that the Earth's magnetic field is not as homogeneous as some engineers have believed.
Vibrations
Computer models frequently show tethers can snap due to vibration.
Mechanical tether-handling equipment is often surprisingly heavy, with complex controls to damp vibrations. The one ton climber proposed by Brad Edwards for his Space Elevator may detect and suppress most vibrations by changing speed and direction. The climber can also repair or augment a tether by spinning more strands.
The vibration modes that may be a problem include skipping rope, transverse, longitudinal, and pendulum.
Tethers are nearly always tapered, and this can greatly amplify the movement at the thinnest tip in whip-like ways.
Other issues
A tether is not a spherical object, and has significant extent. This means that, as an extended object, it is not directly modelable as a point mass, and that the center of mass and center of gravity are not usually colocated. Thus the inverse square law does not apply to the overall behaviour of a tether except at large distances. Hence the orbits are not completely Keplerian, and in some cases they are actually chaotic.
With bolus designs, rotation of the cable interacting with the non-linear gravity fields found in elliptical orbits can cause exchange of orbital angular momentum and rotation angular momentum. This can make prediction and modelling extremely complex.
See also
STARS-II
Spacecraft propulsion
Non-rocket spacelaunch
Orbital ring – theoretical artificial ring placed in Earth orbit
References
External links
Text
ProSEDS, a tether-based propulsion experiment
Special Projects Group
NASA tether overview
Tethers Unlimited Incorporated
"Tethers In Space Handbook" M. L. Cosmo and E. C. Lorenzini 3rd ed., December 1997
NASA IAC report on orbital systems
SpaceTethers.com, space tether simulator applet
USA National Public Radio – Space Tethers: Slinging Objects in Orbit?
ESA – The YES2 project
ESA – Students test 'space postal service' during Foton mission
The Space Show #531 Robert P. Hoyt discusses space tethers on the Space Show
NASA site on TSS-1R
NASA Tether Origami
New Scientist article
Tether Physics and Survivability Experiment
Tethers Unlimited • Publications
Tethers in Space Handbook (PDF)
Tethers in Space, a propellantless propulsion in-orbit demonstration
Video
Video animation explaining how a tether might work
Single-stage-to-orbit
Space elevator
Spacecraft propulsion
Vertical transport devices
Satellites
Spaceflight concepts
Hypothetical technology | Space tether | [
"Astronomy",
"Technology"
] | 4,202 | [
"Exploratory engineering",
"Astronomical hypotheses",
"Transport systems",
"Outer space",
"Space elevator",
"Vertical transport devices",
"Satellites"
] |
31,393,808 | https://en.wikipedia.org/wiki/Mehler%20kernel | The Mehler kernel is a complex-valued function found to be the propagator of the quantum harmonic oscillator.
Mehler's formula
defined a function
E(x, y) = (1 − ρ²)^(−1/2) exp( (2xyρ − (x² + y²)ρ²) / (1 − ρ²) )
and showed, in modernized notation, that it can be expanded in terms of the Hermite polynomials Hn(x), based on the weight function exp(−x²), as
E(x, y) = Σ_{n≥0} (ρ/2)^n Hn(x) Hn(y) / n!
This result is useful, in modified form, in quantum physics, probability theory, and harmonic analysis.
Physics version
In physics, the fundamental solution, (Green's function), or propagator of the Hamiltonian for the quantum harmonic oscillator is called the Mehler kernel. It provides the fundamental solution to
The orthonormal eigenfunctions of the operator are the Hermite functions,
with corresponding eigenvalues −(2n+1), furnishing particular solutions
The general solution is then a linear combination of these; when fitted to the initial condition , the general solution reduces to
where the kernel has the separable representation
Utilizing Mehler's formula then yields
On substituting this in the expression for with the value for , Mehler's kernel finally reads
When t = 0, the variables x and y coincide, resulting in the limiting formula required by the initial condition,
As a fundamental solution, the kernel is additive,
This is further related to the symplectic rotation structure of the kernel .
When using the usual physics conventions of defining the quantum harmonic oscillator instead via
and assuming natural length and energy scales, then the Mehler kernel becomes the Feynman propagator which reads
i.e.
When the in the inverse square-root should be replaced by and should be
multiplied by an extra Maslov phase factor
When the general solution is proportional to the Fourier transform of the initial conditions since
and the exact Fourier transform is thus obtained from the quantum harmonic oscillator's number operator written as
since the resulting kernel
also compensates for the phase factor still arising in and , i.e.
which shows that the number operator can be interpreted via the Mehler kernel as the generator of fractional Fourier transforms for arbitrary values of , and of the conventional Fourier transform for the particular value , with the Mehler kernel providing an active transform, while the corresponding passive transform is already embedded in the basis change from position to momentum space. The eigenfunctions of are the usual Hermite functions which are therefore also Eigenfunctions of .
Probability version
The result of Mehler can also be linked to probability. For this, the variables should be rescaled by a factor of √2, so as to change from the physicists' Hermite polynomials Hn(x) (with weight function exp(−x²)) to the "probabilists'" Hermite polynomials Hen(x) (with weight function exp(−x²/2)). Then, E becomes
The left-hand side here is p(x,y)/(p(x)p(y)), where p(x,y) is the bivariate Gaussian probability density function for variables x, y having zero means and unit variances:
and p(x), p(y) are the corresponding probability densities of x and y (both standard normal).
There follows the usually quoted form of the result (Kibble 1945)
This expansion is most easily derived by using the two-dimensional Fourier transform of , which is
This may be expanded as
The Inverse Fourier transform then immediately yields the above expansion formula.
This result can be extended to the multidimensional case.
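The expansion can be checked numerically. The sketch below compares a truncated sum of the probabilists' Hermite series against the closed-form ratio p(x,y)/(p(x)p(y)) for a standard bivariate normal with correlation ρ; the truncation order and the test points are arbitrary choices.

```python
# Sketch: numerical check of Kibble's expansion
#   sum_n rho^n/n! He_n(x) He_n(y) = p(x,y) / (p(x) p(y))
# for standard normal marginals with correlation rho, using numpy's
# probabilists' Hermite polynomials (hermite_e).
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def kibble_sum(x, y, rho, n_max=40):
    total = 0.0
    for n in range(n_max + 1):
        cn = [0.0] * n + [1.0]     # coefficient vector selecting He_n
        total += rho ** n / math.factorial(n) * He.hermeval(x, cn) * He.hermeval(y, cn)
    return total

def gaussian_ratio(x, y, rho):
    """p(x,y)/(p(x)p(y)) for a standard bivariate normal with correlation rho."""
    return (1.0 / math.sqrt(1 - rho ** 2)
            * math.exp((2 * rho * x * y - rho ** 2 * (x ** 2 + y ** 2)) / (2 * (1 - rho ** 2))))

for (x, y, rho) in [(0.3, -0.7, 0.5), (1.1, 0.4, -0.3), (0.0, 2.0, 0.8)]:
    print(x, y, rho, kibble_sum(x, y, rho), gaussian_ratio(x, y, rho))
```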
Fractional Fourier transform
Since Hermite functions are orthonormal eigenfunctions of the Fourier transform,
in harmonic analysis and signal processing, they diagonalize the Fourier operator,
Thus, the continuous generalization for real angle can be readily defined (Wiener, 1929; Condon, 1937), the fractional Fourier transform (FrFT), with kernel
This is a continuous family of linear transforms generalizing the Fourier transform, such that, for , it reduces to the standard Fourier transform, and for to the inverse Fourier transform.
The Mehler formula, for ρ = exp(−iθ), thus directly provides
The square root is defined such that the argument of the result lies in the interval [−π /2, π /2].
If θ is an integer multiple of π, then the above cotangent and cosecant functions diverge. In the limit, the kernel goes to a Dirac delta function in the integrand, δ(x−y) or δ(x+y), for θ an even or odd multiple of π, respectively. Since applying the Fourier transform twice maps f(y) to f(−y), the transform for an even or odd multiple of π must be simply the identity or the parity (reflection) operation, respectively.
See also
Heat kernel
Hermite polynomials
Parabolic cylinder functions
References
Nicole Berline, Ezra Getzler, and Michèle Vergne (2013). Heat Kernels and Dirac Operators'', (Springer: Grundlehren Text Editions) Paperback
Parabolic partial differential equations
Orthogonal polynomials
Mathematical physics
Multivariate continuous distributions | Mehler kernel | [
"Physics",
"Mathematics"
] | 1,013 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
31,394,768 | https://en.wikipedia.org/wiki/Glycopolymer | Glycopolymer is a synthetic polymer with pendant carbohydrates. Glycopolymers play an important role in many biological recognition events such as cell–cell adhesion, the development of new tissues and the infectious behavior of virus and bacteria. They have high potential in targeted drug delivery, tissue engineering and synthesis of bio-compatible materials.
The first glycopolymer was synthesized in 1978 by free-radical polymerization. Subsequent efforts have been devoted to synthesizing glycopolymers with various structures and sizes, and the synthesis techniques have widened to controlled/living radical polymerisation, ring-opening polymerization, ring-opening metathesis polymerization and post-functionalization.
References
Polymers
Carbohydrate chemistry | Glycopolymer | [
"Chemistry",
"Materials_science"
] | 157 | [
"Carbohydrate chemistry",
"Polymer chemistry",
"Chemical synthesis",
"nan",
"Glycobiology",
"Polymers"
] |
31,397,430 | https://en.wikipedia.org/wiki/D-value%20%28transport%29 | In transport, D-value is a rating in kN that is typically attributed to mechanical couplings, and reflects dynamic loading limits between a towing vehicle and a trailer.
The corresponding formula for a truck and trailer combination, used to determine the required D-value of a coupling, is:
T = weight of the towing vehicle, including the vertical load on the fifth wheel
R = total weight of the loaded semi-trailer
U = vertical load on the fifth wheel
g = acceleration due to gravity (taken as 9.81 m/s²)
D (kN) = g × (0.6 × T × R) / (T + R − U)
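A small worked example (with made-up weights) is given below; it assumes the masses T, R and U are entered in tonnes, so that multiplying by g gives the D-value directly in kN.

```python
# Sketch: evaluating the D-value formula above for an illustrative (hypothetical)
# tractor/semi-trailer combination. The masses below are made-up example figures.
G = 9.81  # acceleration due to gravity (m/s2)

def d_value_kn(t_tonnes, r_tonnes, u_tonnes):
    """D-value in kN; T, R, U entered in tonnes so that g*(mass ratio term) gives kN directly."""
    return G * (0.6 * t_tonnes * r_tonnes) / (t_tonnes + r_tonnes - u_tonnes)

# Hypothetical example: 18 t towing vehicle (including fifth-wheel load),
# 36 t loaded semi-trailer, 10 t vertical fifth-wheel load.
print(f"D = {d_value_kn(18.0, 36.0, 10.0):.1f} kN")
```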
References
Mechanical engineering | D-value (transport) | [
"Physics",
"Engineering"
] | 138 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
31,398,073 | https://en.wikipedia.org/wiki/Tetraethylmethane | Tetraethylmethane is a branched alkane with 9 carbon atoms. It is a highly flammable and volatile liquid at room temperature. It is one of the isomers of nonane. It is considered one of the most controversial alkanes due to its structural resemblance to the swastika.
References
See also
Neopentane
Tetraethynylmethane
Tetramethoxymethane
External links
Alkanes | Tetraethylmethane | [
"Chemistry"
] | 92 | [
"Organic compounds",
"Alkanes"
] |
35,568,024 | https://en.wikipedia.org/wiki/Seminal%20RNase | Bovine seminal RNase (BS-RNase) is a member of the ribonuclease superfamily produced by the bovine seminal vesicles. This enzyme can not be differentiated from its members distinctly since there are more features that this enzyme shares with its family members than features that it possess alone. The research on the question of how new functions arrive in proteins in evolution led the scientists to find an uncommon consequence for a usual biological event called gene conversion in the case of the ribonuclease (RNase) protein family. The most well-known member of this family, RNase A (also called pancreatic RNase), is expressed in the pancreas of oxen. It serves to digest RNA in intestine, and evolved from bacteria fermenting in the stomach of the first ox. The homologous RNase, called seminal RNase, differs from RNase A by 23 amino acids and is expressed in seminal plasma in the concentration of 1-1.5 mg/ml, which constitutes more than 3% of the fluid protein content. Bovine seminal ribonuclease (BS-RNase) is a homologue of RNase A with specific antitumor activity.
Functional properties of bovine seminal RNase
The physiological role of this enzyme has not yet been found, and it is therefore still a mystery why bovine seminal fluid contains such a high concentration of it. In the course of evolution it has acquired new behaviors that the ancestral RNase does not possess: it is a dimer with composite active sites, and it binds firmly to anionic glycolipids, including the seminolipid of bovine spermatozoa, a fusogenic sulfated galactolipid with immunosuppressive and cytostatic activities. The homolog of RNase A, bovine seminal ribonuclease (BS-RNase), has a specific antitumor activity. The seminal plasma plays a prominent role in immunosuppression in the immunoregulation of both the male and female genital systems. Direct or indirect interference of the seminal plasma with the function of many types of immunocompetent cells, including T cells, B cells, NK cells and macrophages, has been shown. These immunosuppressive effects are not species-specific and occur at the physiological concentrations normally seen in the female urogenital tract. RNase secretion has not been detected in the seminal fluid of any other mammal.
Origin of seminal RNase gene
The recruitment of established proteins after gene duplication allows them to take on new biomolecular functions. Among the different models that exist, one suggests that after gene duplication one of the two gene copies continues to evolve under the functional constraints dictated by the ancestral role, whereas the duplicate is not restricted by a functional role and is free to explore protein "structure space". In the end, it may come to encode new behaviors that are required for a new physiological function and thereby confer a selective advantage. This model is problematic, however, since most duplicates become pseudogenes, stretches of unexpressed genetic information (often referred to as "junk DNA"), within just a few million years. Because selective pressure cannot act on duplicated genes, they accumulate deleterious mutations that leave them unable to encode a protein useful for any function. This limits the use of a functionally unconstrained gene duplicate as a tool for exploring protein structure space for new behaviors that might confer a selectable physiological function. How, then, do new functions arise in proteins? One possibility is the resurrection of pseudogenes by biological events such as gene conversion. One such example is the resurrection of the bovine seminal RNase gene.
Laboratory reconstructions of ancient RNases have shown that each of these traits was absent in the most recent common ancestor of the seminal and pancreatic RNases and arose later in the seminal lineage, after the divergence of the two protein families. Researchers analysed the RNase genes from all taxa in a true ruminant phylogenetic tree constructed by parsimony analysis, and found that pancreatic RNases and seminal RNases separated early after the gene duplication, at about 35 million years ago (MYA). Several marker substitutions, including Pro 19, Cys 32 and Lys 62, have been introduced into the seminal RNase genes and distinguish them from their pancreatic cousins. On this basis, the seminal RNase family includes the taxa saiga, sheep, duiker, kudu and cape buffalo, while the peccary is excluded. Subsequent sequence analyses, mass spectrometry and western blotting studies on taxa within the seminal RNase gene family gave results consistent with a model in which the seminal RNase gene gained a physiological function immediately after duplication, retained this function throughout the divergent evolution of the two copies, and then lost it in all species, including modern kudu and cape buffalo, except modern oxen. This would require, however, that the function was lost independently multiple times in different lineages.
The seminal RNase gene was resurrected very recently, after the divergence of the cape buffalo in the lineage leading to modern oxen. It is intriguing to ask whether the domestication of the ox is related to the emergence of seminal RNase as a functioning protein. Does the seminal RNase gene have a function in modern oxen? To address this question we can consider the ratio of non-silent to silent substitutions in these gene families. The average ratio of non-silent to silent substitutions is 2:1 for unexpressed seminal RNase sequences, which is consistent with the model that these seminal RNases are pseudogenes and is close to that expected for random substitution in a gene that serves no selected function. On the other hand, the average ratio is less than 1:1 for pancreatic RNases, consistent with the model that pancreatic RNases are functional, with selective pressure constraining amino acid replacements. However, when the expressed ox seminal RNase is compared with its nearest unexpressed homologs in buffalo and kudu, a most remarkable ratio of non-silent to silent substitutions, 4:1, is observed. Such a ratio is expected only of pseudogenes that are searching protein "structure space" with rapidly introduced amino acid replacements in order to perform a new function and provide new selected properties. The resurrection of the seminal RNase gene is evidently associated with the introduction of Cys 31.
How, then, was this pseudogene resurrected? It is not entirely clear, but it can be noted that the region of similarity between the kudu deletion and the sequence of the expressed seminal RNase gene extends some 70 base pairs into the 3'-untranslated region, where the sequences are 89% identical (62 of the 70 nucleotides). One can therefore suppose that a gene conversion event between the damaged seminal RNase gene and the pancreatic gene repaired it and opened the way for new physiological evolution. Gene conversion is of two types, interallelic and interlocus. The resurrection of seminal RNase gene function is believed to be the unexpected consequence of an interlocus gene conversion event between the seminal RNase pseudogene and its homologous functional gene. In such recombination events, genetic information is transferred from a functional donor locus to a non-functional acceptor pseudogene. Thus the non-functional seminal RNase pseudogene acquired new physiological functions after having been dead for many millions of years. This may be the first example in the literature of the resurrection of a pseudogene by a gene conversion event, and it would be interesting to test it further with additional sequencing data. A further evolutionary proposal is that seminal RNase has been left with two quaternary forms, one exhibiting the special biological actions and the other acting simply as an RNA-degrading enzyme; on this view, the evolution of seminal RNase into two coexisting structures that are more versatile, both structurally and biologically, can be regarded as evolutionary progress.
Scientists from all over the world have studied and recognized a great number of pseudogenes, and several worldwide projects have been launched to identify pseudogenes and study their potential roles; ENCODE is one such project. Even though pseudogenes complicate molecular analysis, they are regarded as genomic fossils that provide a sound record of evolution and offer a wealth of information for molecular analysis. Researchers are developing different computational schemes and criteria for identifying pseudogenes so as to obtain consistent sets of them. Sometimes resurrected pseudogenes have been identified as functional; they may also revert to being non-functional, and this can again be reversed. Not all the pseudogenes in a genome should be considered "junk DNA". The evidence for functional pseudogenes strengthens their significance, and they have become a hotspot of research because of this significance and the possibility of resurrection. Researchers are continuing their efforts to study the characteristics of these silent fossils in humans and other organisms. In the near future, the real evolutionary fates of pseudogenes will be established within the wider picture of genome annotation.
See also
Molecular evolution
Gene duplication
Gene conversion
Pseudogenes
Ancestral gene resurrection
Bovinae
Ribonuclease A
ENCODE
Decapacitation factor
References
Molecular evolution
Evolutionary biology | Seminal RNase | [
"Chemistry",
"Biology"
] | 2,079 | [
"Evolutionary biology",
"Evolutionary processes",
"Molecular evolution",
"Molecular biology"
] |
35,572,339 | https://en.wikipedia.org/wiki/Magnesium%20nickel%20hydride | Magnesium nickel hydride is the chemical compound Mg2NiH4. It contains 3.6% by weight of hydrogen and has been studied as a potential hydrogen storage medium.
References
Metal hydrides
Magnesium compounds
Nickel compounds | Magnesium nickel hydride | [
"Chemistry"
] | 48 | [
"Metal hydrides",
"Inorganic compounds",
"Reducing agents"
] |
35,573,062 | https://en.wikipedia.org/wiki/Blahut%E2%80%93Arimoto%20algorithm | The term Blahut–Arimoto algorithm is often used to refer to a class of algorithms for computing numerically either the information theoretic capacity of a channel, the rate-distortion function of a source or a source encoding (i.e. compression to remove the redundancy). They are iterative algorithms that eventually converge to one of the maxima of the optimization problem that is associated with these information theoretic concepts.
History and application
For the case of channel capacity, the algorithm was independently invented by Suguru Arimoto and Richard Blahut. In addition, Blahut's treatment gives algorithms for computing rate distortion and generalized capacity with input constraints (i.e. the capacity-cost function, analogous to rate-distortion). These algorithms are most applicable to the case of arbitrary finite alphabet sources. Much work has been done to extend it to more general problem instances.
Recently, a version of the algorithm that accounts for continuous and multivariate outputs was proposed, with applications in cellular signaling. A version of the Blahut–Arimoto algorithm also exists for directed information.
Algorithm for Channel Capacity
A discrete memoryless channel (DMC) can be specified using two random variables with alphabet , and a channel law as a conditional probability distribution . The channel capacity, defined as , indicates the maximum efficiency that a channel can communicate, in the unit of bit per use. Now if we denote the cardinality , then is a matrix, which we denote the row, column entry by . For the case of channel capacity, the algorithm was independently invented by Suguru Arimoto and Richard Blahut. They both found the following expression for the capacity of a DMC with channel law:
where and are maximized over the following requirements:
is a probability distribution on , That is, if we write as
is a matrix that behaves like a transition matrix from to with respect to the channel law. That is, For all :
Every row sums up to 1, i.e. .
Then upon picking a random probability distribution on , we can generate a sequence iteratively as follows:
For .
Then, using the theory of optimization, specifically coordinate descent, Yeung showed that the sequence indeed converges to the required maximum. That is,
.
So given a channel law , the capacity can be numerically estimated up to arbitrary precision.
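A minimal numerical sketch of the iteration is given below. It implements the standard alternating update between the input distribution and the backward channel described above; the binary symmetric channel used as input is just an arbitrary example.

```python
# Sketch: Blahut-Arimoto iteration for the capacity of a discrete memoryless channel.
# P[x, y] is the channel law p(y|x); the binary symmetric channel below is an example.
import numpy as np

def blahut_arimoto_capacity(P, n_iter=200):
    """Return a capacity estimate in bits and the maximising input distribution."""
    n_x = P.shape[0]
    p = np.full(n_x, 1.0 / n_x)                  # initial input distribution p(x)
    for _ in range(n_iter):
        q_y = p @ P                              # output distribution p(y)
        Q = (p[:, None] * P) / q_y[None, :]      # backward channel q(x|y) = p(x)p(y|x)/p(y)
        # update p(x) proportional to exp( sum_y p(y|x) log q(x|y) )
        with np.errstate(divide="ignore", invalid="ignore"):
            r = np.exp(np.sum(P * np.where(Q > 0, np.log(Q), 0.0), axis=1))
        p = r / r.sum()
    q_y = p @ P
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(P > 0, P / q_y[None, :], 1.0)
    capacity_nats = np.sum(p[:, None] * P * np.log(ratio))   # mutual information I(X;Y)
    return capacity_nats / np.log(2), p

# Example: binary symmetric channel with crossover probability 0.1
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
C, p_opt = blahut_arimoto_capacity(bsc)
print(C, p_opt)   # capacity should approach 1 - H(0.1), about 0.531 bits, with uniform input
```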
Algorithm for Rate-Distortion
Suppose we have a source with probability of any given symbol. We wish to find an encoding that generates a compressed signal from the original signal while minimizing the expected distortion , where the expectation is taken over the joint probability of and . We can find an encoding that minimizes the rate-distortion functional locally by repeating the following iteration until convergence:
where is a parameter related to the slope in the rate-distortion curve that we are targeting and thus is related to how much we favor compression versus distortion (higher means less compression).
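A minimal sketch of this iteration is given below, using a Bernoulli(1/2) source with Hamming distortion as an arbitrary example; the parameter name beta and the stopping rule (a fixed number of iterations) are implementation choices, not part of the original formulation.

```python
# Sketch: Blahut-Arimoto iteration for the rate-distortion function of a Bernoulli(0.5)
# source with Hamming distortion, at a fixed trade-off parameter beta (related to the slope).
import numpy as np

def blahut_arimoto_rd(p_x, d, beta, n_iter=500):
    """Return (rate in bits, expected distortion) for source p_x and distortion matrix d[x, xhat]."""
    n_xhat = d.shape[1]
    q_xhat = np.full(n_xhat, 1.0 / n_xhat)       # reproduction distribution q(xhat)
    for _ in range(n_iter):
        # conditional encoder q(xhat|x) proportional to q(xhat) * exp(-beta * d(x, xhat))
        Q = q_xhat[None, :] * np.exp(-beta * d)
        Q /= Q.sum(axis=1, keepdims=True)
        q_xhat = p_x @ Q                         # marginal over xhat
    rate_nats = np.sum(p_x[:, None] * Q * np.log(Q / q_xhat[None, :]))
    distortion = np.sum(p_x[:, None] * Q * d)
    return rate_nats / np.log(2), distortion

p_x = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0],
              [1.0, 0.0]])                       # Hamming distortion
R, D = blahut_arimoto_rd(p_x, d, beta=2.0)
print(R, D)   # the point (D, R) should lie on the curve R(D) = 1 - H(D) for D <= 0.5
```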
References
Coding theory | Blahut–Arimoto algorithm | [
"Mathematics"
] | 589 | [
"Discrete mathematics",
"Coding theory"
] |
2,011,054 | https://en.wikipedia.org/wiki/Pentane%20interference | Pentane interference or syn-pentane interaction is the steric hindrance that the two terminal methyl groups experience in one of the chemical conformations of n-pentane. The possible conformations are combinations of anti conformations and gauche conformations and are anti-anti, anti-gauche+, gauche+ - gauche+ and gauche+ - gauche− of which the last one is especially energetically unfavorable. In macromolecules such as polyethylene pentane interference occurs between every fifth carbon atom. The 1,3-diaxial interactions of cyclohexane derivatives is a special case of this type of interaction, although there are additional gauche interactions shared between substituents and the ring in that case. A clear example of the syn-pentane interaction is apparent in the diaxial versus diequatorial heats of formation of cis 1,3-dialkyl cyclohexanes. Relative to the diequatorial conformer, the diaxial conformer is 2-3 kcal/mol higher in energy than the value that would be expected based on gauche interactions alone. Pentane interference helps explain molecular geometries in many chemical compounds, product ratios, and purported transition states. One specific type of syn-pentane interaction is known as 1,3 allylic strain or (A1,3 strain).
For instance, in certain aldol adducts with 2,6-disubstituted aryl groups, the molecular geometry has the vicinal hydrogen atoms in an antiperiplanar configuration, both in the crystal lattice (X-ray diffraction) and in solution (proton NMR coupling constants), an arrangement normally reserved for the most bulky groups, i.e. the two arenes:
The other contributing factor explaining this conformation is reduction in allylic strain by minimizing the dihedral angle between the arene double bond and the methine proton.
Syn-pentane interactions are responsible for the backbone-conformation dependence of protein side chain rotamer frequencies and their mean dihedral angles, which is evident from statistical analysis of protein side-chain rotamers in the Backbone-dependent rotamer library.
References
Stereochemistry | Pentane interference | [
"Physics",
"Chemistry"
] | 468 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |
2,011,503 | https://en.wikipedia.org/wiki/Steady%20state%20%28biochemistry%29 | In biochemistry, steady state refers to the maintenance of constant internal concentrations of molecules and ions in the cells and organs of living systems. Living organisms remain at a dynamic steady state where their internal composition at both cellular and gross levels are relatively constant, but different from equilibrium concentrations. A continuous flux of mass and energy results in the constant synthesis and breakdown of molecules via chemical reactions of biochemical pathways. Essentially, steady state can be thought of as homeostasis at a cellular level.
Maintenance of steady state
Metabolic regulation achieves a balance between the rate of input of a substrate and the rate that it is degraded or converted, and thus maintains steady state. The rate of metabolic flow, or flux, is variable and subject to metabolic demands. However, in a metabolic pathway, steady state is maintained by balancing the rate of substrate provided by a previous step and the rate that the substrate is converted into product, keeping substrate concentration relatively constant.
Thermodynamically speaking, living organisms are open systems, meaning that they constantly exchange matter and energy with their surroundings. A constant supply of energy is required for maintaining steady state, as maintaining a constant concentration of a molecule preserves internal order and thus is entropically unfavorable. When a cell dies and no longer utilizes energy, its internal composition will proceed toward equilibrium with its surroundings.
In some occurrences, it is necessary for cells to adjust their internal composition in order to reach a new steady state. Cell differentiation, for example, requires specific protein regulation that allows the differentiating cell to meet new metabolic requirements.
ATP
The concentration of ATP must be kept above equilibrium level so that the rates of ATP-dependent biochemical reactions meet metabolic demands. A decrease in ATP will result in a decreased saturation of enzymes that use ATP as substrate, and thus a decreased reaction rate. The concentration of ATP is also kept higher than that of AMP, and a decrease in the ATP/AMP ratio triggers AMPK to activate cellular processes that will return ATP and AMP concentrations to steady state.
In one step of the glycolysis pathway catalyzed by PFK-1, the equilibrium constant of the reaction is approximately 1000, but the steady-state concentration of products (fructose-1,6-bisphosphate and ADP) over reactants (fructose-6-phosphate and ATP) is only 0.1, indicating that the reaction is held at a steady state far from equilibrium. Regulation of PFK-1 maintains ATP levels above equilibrium.
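A back-of-the-envelope calculation (assuming body temperature, T = 310 K) makes the displacement from equilibrium concrete:

```python
import math

R, T = 8.314, 310.0              # gas constant (J/(mol*K)) and assumed body temperature (K)
K_eq, gamma = 1000.0, 0.1        # equilibrium constant vs. steady-state mass-action ratio
# dG = dG0' + R*T*ln(gamma), with dG0' = -R*T*ln(K_eq), so dG = R*T*ln(gamma / K_eq)
delta_G = R * T * math.log(gamma / K_eq)
print(round(delta_G / 1000, 1), "kJ/mol")   # about -23.7 kJ/mol, i.e. far from equilibrium (dG = 0)
```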
In the cytoplasm of hepatocytes, the steady state ratio of NADP+ to NADPH is approximately 0.1 while that of NAD+ to NADH is approximately 1000, favoring NADPH as the main reducing agent and NAD+ as the main oxidizing agent in chemical reactions.
Blood glucose
Blood glucose levels are maintained at a steady state concentration by balancing the rate of entry of glucose into the blood stream (i.e. by ingestion or released from cells) and the rate of glucose uptake by body tissues. Changes in the rate of input will be met with a change in consumption, and vice versa, so that blood glucose concentration is held at about 5 mM in humans. A change in blood glucose levels triggers the release of insulin or glucagon, which stimulates the liver to release glucose into the bloodstream or take up glucose from the bloodstream in order to return glucose levels to steady state. Pancreatic beta cells, for example, increase oxidative metabolism as a result of a rise in blood glucose concentration, triggering secretion of insulin. Glucose levels in the brain are also maintained at steady state, and glucose delivery to the brain relies on the balance between the flux of the blood brain barrier and uptake by brain cells. In teleosts, a drop of blood glucose levels below that of steady state decreases the intracellular-extracellular gradient in the bloodstream, limiting glucose metabolism in red blood cells.
Blood lactate
Blood lactate levels are also maintained at steady state. At rest or at low levels of exercise, the rate of lactate production in muscle cells and consumption in muscle or blood cells allows lactate to remain in the body at a certain steady-state concentration. If a higher level of exercise is sustained, however, blood lactate levels will increase before becoming constant, indicating that a new steady state of elevated concentration has been reached. Maximal lactate steady state (MLSS) refers to the maximum constant concentration of lactate reached during sustained high-intensity activity.
Nitrogen-containing molecules
Metabolic regulation of nitrogen-containing molecules, such as amino acids, is also kept at steady state. The amino acid pool, which describes the level of amino acids in the body, is maintained at a relatively constant concentration by balancing the rate of input (i.e. from dietary protein ingestion, production of metabolic intermediates) and rate of depletion (i.e. from formation of body proteins, conversion to energy-storage molecules). Amino acid concentration in lymph node cells, for example, is kept at steady state with active transport as the primary source of entry, and diffusion as the source of efflux.
Ions
One main function of plasma and cell membranes is to maintain asymmetric concentrations of inorganic ions in order to maintain an ionic steady state different from electrochemical equilibrium. In other words, there is a differential distribution of ions on either side of the cell membrane - that is, the amount of ions on either side is not equal and therefore a charge separation exists. However, ions move across the cell membrane such that a constant resting membrane potential is achieved; this is ionic steady state. In the pump-leak model of cellular ion homeostasis, energy is utilized to actively transport ions against their electrochemical gradient. The maintenance of this steady state gradient, in turn, is used to do electrical and chemical work, when it is dissipated through the passive movement of ions across the membrane.
In cardiac muscle, ATP is used to actively transport sodium ions out of the cell through a membrane ATPase. Electrical excitation of the cell results in an influx of sodium ions into the cell, temporarily depolarizing the cell. To restore the steady state electrochemical gradient, ATPase removes sodium ions and restores potassium ions in the cell. When an elevated heart rate is sustained, causing more depolarizations, sodium levels in the cell increase until becoming constant, indicating that a new steady state has been reached.
Stability of the steady-state
Steady-states can be stable or unstable. A steady-state is unstable if a small perturbation in one or more of the concentrations results in the system diverging from its state. In contrast, if a steady-state is stable, any perturbation will relax back to the original steady state. Further details can be found on the page Stability theory.
Simple Example
The following provides a simple example of computing the steady state, given a simple mathematical model.
Consider the open chemical system composed of two reactions with rates v1 and v2:
Xo → S → X1
We will assume that the chemical species Xo and X1 are fixed external species and S is an internal chemical species that is allowed to change. The fixed boundaries are to ensure the system can reach a steady state. If we assume simple irreversible mass-action kinetics with rate constants k1 and k2, the differential equation describing the concentration of S is given by:
dS/dt = v1 − v2 = k1·Xo − k2·S
To find the steady state, the differential equation is set to zero and the equation rearranged to solve for S:
S = k1·Xo / k2
This is the steady-state concentration of S.
The stability of this system can be determined by making a perturbation δS in S. This can be expressed as:
S → S + δS
Note that the perturbation δS will elicit a change in the rate of change of S.
At steady state dS/dt = 0; therefore the rate of change of δS as a result of this perturbation is:
d(δS)/dt = −k2·δS
This shows that the perturbation δS decays exponentially (δS(t) = δS(0)·e^(−k2·t)); hence the system is stable.
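A minimal numerical check of this example, assuming illustrative values k1 = 2, k2 = 0.5 and Xo = 1 (so the predicted steady state is S = k1·Xo/k2 = 4):

```python
# Euler integration of dS/dt = k1*Xo - k2*S with illustrative constants.
k1, k2, Xo = 2.0, 0.5, 1.0        # predicted steady state: S_ss = k1*Xo/k2 = 4
s, dt, steps = 0.0, 0.01, 2000    # start away from the steady state, integrate to t = 20
for _ in range(steps):
    s += (k1 * Xo - k2 * s) * dt
print(round(s, 3))                # approaches 4.0

# A small perturbation dS obeys d(dS)/dt = -k2*dS and relaxes back exponentially.
ds = 1.0
for _ in range(steps):
    ds += (-k2 * ds) * dt
print(f"{ds:.2e}")                # roughly exp(-k2*20), i.e. the perturbation has decayed away
```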
See also
Transition state
Steady state
Steady state (chemistry)
References
Biochemistry
Physical chemistry | Steady state (biochemistry) | [
"Physics",
"Chemistry",
"Biology"
] | 1,612 | [
"Biochemistry",
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
2,012,013 | https://en.wikipedia.org/wiki/Wirth%E2%80%93Weber%20precedence%20relationship | In computer science, a Wirth–Weber relationship between a pair of symbols is necessary to determine if a formal grammar is a simple precedence grammar. In such a case, the simple precedence parser can be used. The relationship is named after computer scientists Niklaus Wirth and Helmut Weber.
The goal is to identify when the viable prefixes have the pivot and must be reduced. A ⋗ means that the pivot is found, a ⋖ means that a potential pivot is starting, and a ≐ means that the relationship remains in the same pivot.
Formal definition
Precedence relations computing algorithm
We will define three sets for a symbol:
Head+(X) is the set of all symbols that can appear as the first symbol of a string derived from X in one or more derivation steps; Tail+(X) is the analogous set for the last symbol of such a string.
Head*(X) is X if X is a terminal, and if X is a non-terminal, Head*(X) is the set with only the terminals belonging to Head+(X). This set is equivalent to the First-set or Fi(X) described in LL parser.
Head+(X) and Tail+(X) are ∅ if X is a terminal.
The pseudocode for computing relations is:
RelationTable := ∅
For each production A → α of the grammar
  For each two adjacent symbols X Y in α
    add(RelationTable, X ≐ Y)
    add(RelationTable, X ⋖ Head+(Y))
    add(RelationTable, Tail+(X) ⋗ Head*(Y))
add(RelationTable, $ ⋖ Head+(S)) where S is the initial non terminal of the grammar, and $ is a limit marker
add(RelationTable, Tail+(S) ⋗ $) where S is the initial non terminal of the grammar, and $ is a limit marker
⋖ and ⋗ are used here with sets instead of single elements as they were defined; in this case you must add all the Cartesian product between the sets/elements.
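A small executable sketch of the construction above; the toy grammar S → aSSb | c is an illustrative choice (consistent with the Head/Tail sets listed in Example 1 below), and the helper names are invented for this sketch:

```python
from itertools import product

prods = {"S": ["aSSb", "c"]}             # illustrative grammar; uppercase = nonterminal
nonterminals = set(prods)
start = "S"

def reach(pick):
    """Fixed point computing Head+ (pick=0) or Tail+ (pick=-1) of each nonterminal."""
    out = {nt: set() for nt in prods}
    changed = True
    while changed:
        changed = False
        for nt, bodies in prods.items():
            for body in bodies:
                sym = body[pick]
                new = {sym} | out.get(sym, set())
                if not new <= out[nt]:
                    out[nt] |= new
                    changed = True
    return out

head_plus, tail_plus = reach(0), reach(-1)

def head_star(x):                        # the terminal itself, or the terminals of Head+
    return {x} if x not in nonterminals else {t for t in head_plus[x] if t not in nonterminals}

equal, less, greater = set(), set(), set()
for bodies in prods.values():
    for body in bodies:
        for x, y in zip(body, body[1:]):                             # adjacent symbol pairs
            equal.add((x, y))                                        # X = Y
            if y in nonterminals:
                less.update(product([x], head_plus[y]))              # X <. Head+(Y)
            if x in nonterminals:
                greater.update(product(tail_plus[x], head_star(y)))  # Tail+(X) .> Head*(Y)

less.update(product(["$"], head_plus[start]))                        # $ <. Head+(S)
greater.update(product(tail_plus[start], ["$"]))                     # Tail+(S) .> $

print("=: ", sorted(equal))    # [('S', 'S'), ('S', 'b'), ('a', 'S')]
print("<.:", sorted(less))
print(".>:", sorted(greater))
```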
Examples
Example 1
Head+(a) = ∅
Head+(S) = {a, c}
Head+(b) = ∅
Head+(c) = ∅
Tail+(a) = ∅
Tail+(S) = {b, c}
Tail+(b) = ∅
Tail+(c) = ∅
Head*(a) = a
Head*(S) = {a, c}
Head*(b) = b
Head*(c) = c
a Next to S
S Next to S
S Next to b
there is only one symbol, so no relation is added.
precedence table
Example 2
Head+( S ) = { a, [ }
Head+( a ) = ∅
Head+( T ) = { b }
Head+( [ ) = ∅
Head+( ] ) = ∅
Head+( b ) = ∅
Tail+( S ) = { a, T, ], b }
Tail+( a ) = ∅
Tail+( T ) = { b, T }
Tail+( [ ) = ∅
Tail+( ] ) = ∅
Tail+( b ) = ∅
Head*( S ) = { a, [ }
Head*( a ) = a
Head*( T ) = { b }
Head*( [ ) = [
Head*( ] ) = ]
Head*( b ) = b
a Next to T
[ Next to S
S Next to ]
b Next to T
precedence table
Further reading
Formal languages | Wirth–Weber precedence relationship | [
"Mathematics"
] | 589 | [
"Formal languages",
"Mathematical logic"
] |
2,012,564 | https://en.wikipedia.org/wiki/Karp%27s%2021%20NP-complete%20problems | In computational complexity theory, Karp's 21 NP-complete problems are a set of computational problems which are NP-complete. In his 1972 paper, "Reducibility Among Combinatorial Problems", Richard Karp used Stephen Cook's 1971 theorem that the boolean satisfiability problem is NP-complete (also called the Cook-Levin theorem) to show that there is a polynomial time many-one reduction from the boolean satisfiability problem to each of 21 combinatorial and graph theoretical computational problems, thereby showing that they are all NP-complete. This was one of the first demonstrations that many natural computational problems occurring throughout computer science are computationally intractable, and it drove interest in the study of NP-completeness and the P versus NP problem.
The problems
Karp's 21 problems are shown below, many with their original names. The nesting indicates the direction of the reductions used. For example, Knapsack was shown to be NP-complete by reducing Exact cover to Knapsack.
Satisfiability: the boolean satisfiability problem for formulas in conjunctive normal form (often referred to as SAT)
0–1 integer programming (A variation in which only the restrictions must be satisfied, with no optimization)
Clique (see also independent set problem)
Set packing
Vertex cover
Set covering
Feedback node set
Feedback arc set
Directed Hamilton circuit (Karp's name, now usually called Directed Hamiltonian cycle)
Undirected Hamilton circuit (Karp's name, now usually called Undirected Hamiltonian cycle)
Satisfiability with at most 3 literals per clause (equivalent to 3-SAT)
Chromatic number (also called the Graph Coloring Problem)
Clique cover
Exact cover
Hitting set
Steiner tree
3-dimensional matching
Knapsack (Karp's definition of Knapsack is closer to Subset sum)
Job sequencing
Partition
Max cut
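As a concrete illustration of one reduction from this list, Karp's step from Clique to Vertex cover (Node cover) passes to the complement graph: G has a clique of size k exactly when the complement of G has a vertex cover of size |V| − k. The brute-force check below, on a toy graph chosen purely for illustration, verifies this equivalence on one instance:

```python
from itertools import combinations

def has_clique(vertices, edges, k):
    """Brute force: is there a k-subset with every pair joined by an edge?"""
    return any(all(frozenset(p) in edges for p in combinations(s, 2))
               for s in combinations(vertices, k))

def has_vertex_cover(vertices, edges, k):
    """Brute force: is there a k-subset touching every edge?"""
    return any(all(set(e) & set(s) for e in edges)
               for s in combinations(vertices, k))

vertices = [1, 2, 3, 4]
edges = {frozenset(e) for e in [(1, 2), (1, 3), (2, 3), (3, 4)]}   # triangle plus a pendant edge
complement = {frozenset(p) for p in combinations(vertices, 2)} - edges

k = 3
print(has_clique(vertices, edges, k))                              # True: {1, 2, 3}
print(has_vertex_cover(vertices, complement, len(vertices) - k))   # True: {4}
```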
Approximations
As time went on it was discovered that many of the problems can be solved efficiently if restricted to special cases, or can be solved within any fixed percentage of the optimal result. However, David Zuckerman showed in 1996 that every one of these 21 problems has a constrained optimization version that is impossible to approximate within any constant factor unless P = NP, by showing that Karp's approach to reduction generalizes to a specific type of approximability reduction. Note however that these may be different from the standard optimization versions of the problems, which may have approximation algorithms (as in the case of maximum cut).
See also
List of NP-complete problems
Notes
References
Mathematics-related lists | Karp's 21 NP-complete problems | [
"Mathematics"
] | 527 | [
"NP-complete problems",
"Mathematical problems",
"Computational problems"
] |
2,013,448 | https://en.wikipedia.org/wiki/Scrubber | Scrubber systems (e.g. chemical scrubbers, gas scrubbers) are a diverse group of air pollution control devices that can be used to remove some particulates and/or gases from industrial exhaust streams. An early application of a carbon dioxide scrubber was in the submarine Ictíneo I in 1859, a role in which they continue to be used today. Traditionally, the term "scrubber" has referred to pollution control devices that use liquid to wash unwanted pollutants from a gas stream. Recently, the term has also been used to describe systems that inject a dry reagent or slurry into a dirty exhaust stream to "wash out" acid gases. Scrubbers are one of the primary devices that control gaseous emissions, especially acid gases. Scrubbers can also be used for heat recovery from hot gases by flue-gas condensation. They are also used for the high flows in solar, PV, or LED processes.
There are several methods to remove toxic or corrosive compounds from exhaust gas and neutralize it.
Combustion
Combustion is sometimes the cause of harmful exhausts, but, in many cases, combustion may also be used for exhaust gas cleaning if the temperature is high enough and enough oxygen is available.
Wet scrubbing
The exhaust gases of combustion may contain substances considered harmful to the environment, and the scrubber may remove or neutralize those.
A wet scrubber is used for cleaning air, fuel gas or other gases of various pollutants and dust particles. Wet scrubbing works via the contact of target compounds or particulate matter with the scrubbing solution. Water is the most common solvent used to remove inorganic contaminants, particularly for dust, but solutions of reagents that specifically target certain compounds may also be used.
Process exhaust gas can also contain water-soluble toxic and/or corrosive gases like hydrochloric acid (HCl) or ammonia (NH3). These can be removed very well by a wet scrubber.
Removal efficiency of pollutants is improved by increasing residence time in the scrubber or by the increase of surface area of the scrubber solution by the use of a spray nozzle, packed towers or an aspirator. Wet scrubbers may increase the proportion of water in the gas, resulting in a visible stack plume, if the gas is sent to a stack.
Wet scrubbers can also be used for heat recovery from hot gases by flue-gas condensation. In this mode, termed a condensing scrubber, water from the scrubber drain is circulated through a cooler to the nozzles at the top of the scrubber. The hot gas enters the scrubber at the bottom. If the gas temperature is above the water dew point, it is initially cooled by evaporation of water drops. Further cooling causes water vapors to condense, adding to the amount of circulating water.
The condensation of water releases significant amounts of low-temperature heat, due to the high value of the specific latent heat of vaporisation of water (more than 2 GJ per ton of water), which can be recovered by the cooler for e.g. district heating purposes.
Excess condensed water must continuously be removed from the circulating water.
Dry scrubbing
A dry or semi-dry scrubbing system, unlike the wet scrubber, does not saturate the flue gas stream that is being treated with moisture. In some cases no moisture is added, while in others only the amount of moisture that can be evaporated in the flue gas without condensing is added. Therefore, dry scrubbers generally do not have a stack steam plume or wastewater handling/disposal requirements. Dry scrubbing systems are used to remove acid gases (such as SO2 and HCl) primarily from combustion sources.
There are a number of dry type scrubbing system designs. However, all consist of two main sections or devices: a device to introduce the acid gas sorbent material into the gas stream and a particulate matter control device to remove reaction products, excess sorbent material as well as any particulate matter already in the flue gas.
Dry scrubbing systems can be categorized as dry sorbent injectors (DSIs) or as spray dryer absorbers (SDAs). Spray dryer absorbers are also called semi-dry scrubbers or spray dryers.
Dry scrubbing systems are often used for the removal of odorous and corrosive gases from wastewater treatment plant operations. The medium used is typically an activated alumina compound impregnated with materials to handle specific gases such as hydrogen sulfide. Media used can be mixed together to offer a wide range of removal for other odorous compounds such as methyl mercaptans, aldehydes, volatile organic compounds, dimethyl sulfide, and dimethyl disulfide.
Dry sorbent injection involves the addition of an alkaline material (usually hydrated lime, soda ash, or sodium bicarbonate) into the gas stream to react with the acid gases. The sorbent can be injected directly into several different locations: the combustion process, the flue gas duct (ahead of the particulate control device), or an open reaction chamber (if one exists). The acid gases react with the alkaline sorbents to form solid salts which are removed in the particulate control device. These simple systems can achieve only limited acid gas (SO2 and HCl) removal efficiencies. Higher collection efficiencies can be achieved by increasing the flue gas humidity (i.e., cooling using water spray). These devices have been used on medical waste incinerators and a few municipal waste combustors.
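As an illustration, assuming hydrated lime is the injected sorbent, the overall neutralization reactions take the form:
Ca(OH)2 + SO2 → CaSO3 + H2O
Ca(OH)2 + 2 HCl → CaCl2 + 2 H2O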
In spray dryer absorbers, the flue gases are introduced into an absorbing tower (dryer) where the gases are contacted with a finely atomized alkaline slurry. Acid gases are absorbed by the slurry mixture and react to form solid salts which are removed by the particulate control device. The heat of the flue gas is used to evaporate all the water droplets, leaving a non-saturated flue gas to exit the absorber tower. Spray dryers are capable of achieving high (80+%) acid gas removal efficiencies. These devices have been used on industrial and utility boilers and municipal waste incinerators.
Adsorber
Many chemicals can be removed from exhaust gas also by using adsorber material. The flue gas is passed through a cartridge which is filled with one or several adsorber materials and has been adapted to the chemical properties of the components to be removed. This type of scrubber is sometimes also called dry scrubber. The adsorber material has to be replaced after its surface is saturated. Note: adsorption is a surface phenomena, absorption involves the entire material. Ex: Activated carbon an adsorbent, used for the adsorption of odorous compounds.
Mercury removal
Mercury is a highly toxic element commonly found in coal and municipal waste. Wet scrubbers are only effective for removal of soluble mercury species, such as oxidized mercury, Hg2+. Mercury vapor in its elemental form, Hg0, is insoluble in the scrubber slurry and not removed. Therefore, an additional process of Hg0 conversion is required to complete mercury capture. Usually halogens are added to the flue gas for this purpose. The type of coal burned as well as the presence of a selective catalytic reduction unit both affect the ratio of elemental to oxidized mercury in the flue gas and thus the degree to which the mercury is removed.
In July 2015, one study found that some mercury scrubbers installed on coal power plants inadvertently capture PAH (polycyclic aromatic hydrocarbons) emissions as well.
Scrubber waste products
One side effect of scrubbing is that the process only moves the unwanted substance from the exhaust gases into a liquid solution, solid paste or powder form. This must be disposed of safely, if it can not be reused.
For example, mercury removal results in a waste product that either needs further processing to extract the raw mercury, or must be buried in a special hazardous-waste landfill that prevents the mercury from seeping out into the environment. This disposal is problematic because the waste is extremely dangerous to the environment, and many facilities cannot process it themselves or must have it moved to a suitable landfill.
As an example of reuse, limestone-based scrubbers in coal-fired power plants can produce a synthetic gypsum of sufficient quality that can be used to manufacture drywall and other industrial products.
Bacteria spread
Poorly maintained scrubbers have the potential to spread disease-causing bacteria, a problem that results from inadequate cleaning. For example, a 2005 outbreak of Legionnaires' disease in Norway was traced to a few contaminated scrubbers; the outbreak caused 10 deaths and more than 50 cases of infection.
Scrubbers on ships
Scrubbers were first used on board ships for the production of inert gas for oil tanker operations.
Later, in preparation for the global 0.5% sulfur cap in 2020, the International Maritime Organization (IMO) adopted guidelines on the approval, installation and use of exhaust gas scrubbers (exhaust gas cleaning systems) on board ships to ensure compliance with the sulfur regulation of MARPOL Annex VI. Flag states must approve such systems and port states can (as part of their port state control) ensure that such systems are functioning correctly. If a scrubber system is not functioning properly (and the IMO procedures for such malfunctions are not adhered to), port states can sanction the ship. The United Nations Convention on the Law of the Sea also bestows port states with a right to regulate (and even ban) the use of open loop scrubber systems within ports and internal waters.
See also
Flue-gas desulfurization
Flue-gas condensation
Mercury (element)
Mercury cycle
Oil desulfurization
Electrostatic precipitator
BS4994 Chemical Process Plant Equipments in FRP
Catalytic converter
Wet scrubber
Baffle spray scrubber
Ejector venturi scrubber
Liquid-to-gas ratio
Mechanically aided scrubber
Spray tower
Spray nozzle
Stripping (chemistry)
Venturi scrubber
References
Further reading
Jesper Jarl Fanø (2019). Enforcing International Maritime Legislation on Air Pollution through UNCLOS. Hart Publishing.
Pollution control technologies
Air pollution control systems
Acid gas control
Industrial processes | Scrubber | [
"Chemistry",
"Engineering"
] | 2,141 | [
"Chemical equipment",
"Scrubbers"
] |
2,013,573 | https://en.wikipedia.org/wiki/Sulfonyl%20group | In organosulfur chemistry, a sulfonyl group is either a functional group found primarily in sulfones, or a substituent obtained from a sulfonic acid by the removal of the hydroxyl group, similarly to acyl groups.
Group
Sulfonyl groups can be written as having the general formula R−S(=O)2−R′, where there are two double bonds between the sulfur and the oxygen atoms.
Sulfonyl groups can be reduced to the sulfide with diisobutylaluminium hydride (DIBALH). Lithium aluminium hydride (LiAlH4) reduces some but not all sulfones to sulfides.
In inorganic chemistry, when the group is not connected to any carbon atoms, it is referred to as sulfuryl.
Examples of sulfonyl group substituents
The names of sulfonyl groups typically end in -syl, such as:
{| class=wikitable
!Group name
!Full name
!Pseudoelement symbol
!Example
|-
|Tosyl
|p-toluenesulfonyl
|Ts
|Tosyl chloride (p-toluenesulfonyl chloride)CH3C6H4SO2Cl
|-
|Brosyl
|p-bromobenzenesulfonyl
|Bs
|
|-
|Nosyl
|o- or p-nitrobenzenesulfonyl
|Ns
|
|-
|Mesyl
|methanesulfonyl
|Ms
|Mesyl chloride (methanesulfonyl chloride)CH3SO2Cl
|-
|Triflyl
|trifluoromethanesulfonyl
|Tf
|
|-
|Tresyl
|2,2,2-trifluoroethyl-1-sulfonyl
|
|
|-
|Dansyl
|5-(dimethylamino)naphthalene-1-sulfonyl
|Ds
|Dansyl chloride
|}
See also
Sulfonyl halide
Sulfonamide
Sulfonate
Methylsulfonylmethane (MSM)
References
Functional groups
Organosulfur compounds | Sulfonyl group | [
"Chemistry"
] | 434 | [
"Organosulfur compounds",
"Substituents",
"Functional groups",
"Organic compounds",
"Sulfonyl groups"
] |
2,013,881 | https://en.wikipedia.org/wiki/Electra%20%28star%29 | Electra , designated 17 Tauri, is a blue-white giant star in the constellation of Taurus located approximately 400 light years away. It is the third-brightest star in the Pleiades open star cluster (M45), visible to the naked eye with an apparent magnitude of 3.7. Like the other bright stars of the Pleiades, it is named for one of the Seven Sisters of Greek mythology.
Properties
Electra has an apparent brightness of 3.72, the third-brightest of the stars in the group. It belongs to the spectral class B6 IIIe and is approximately 400 light-years from the Sun. The Pleiades cluster is thought to be 444 light-years away. A number of papers have reported Electra to be a multiple star, but these have been contradictory and never confirmed.
The projected rotational velocity of this star is , making it a fast rotator. This is the velocity component of the star's equatorial rotation along the line of sight to the Earth. The estimated inclination of the star's pole is , giving it a true equatorial rotational velocity of . The rapid rotation rate of this star flattens the poles and stretches the equator. This makes the surface gravity of the star non-uniform and causes temperature variation. This effect is known as gravity darkening, because it results in a variation of radiation by latitude. The rapid rotation extends the life span of the star by increasing the core density and reducing the radiation output.
This is classified as a Be star, which is a B-type star with prominent emission lines of hydrogen in its spectrum. The Be stars have a rotation rate that is 1.5–2 times the rotation of normal B-type stars. This high rate of rotation may allow mass loss during even minor prominences. Changes in the radial velocity measurements indicate that this star may have a companion, which would make Electra a spectroscopic binary. However, follow-up studies including interferometry have failed to confirm any companion star(s), so it is likely a single star.
Electra may be a variable star, and it appears in the New Catalogue of Suspected Variable Stars as NSV 15755.
Low amplitude variability of the brightness of Electra was detected by Kepler/K2, and Fourier analysis of the star's light curve shows several periods of oscillation, the strongest being 1.107 and 1.165 days. The International Variable Star Index classifies it as a slowly pulsating B star.
Infrared observations of this star showed an excess level of radiation equal to about 0.5 magnitudes. This emission is probably from a gaseous disk created by radiation-driven mass loss and rapid rotation of the star. These disks are created by an ejection of material roughly every ten years, which then settles into the equatorial plane about the star. However, the bright nebulosity that surrounds this star makes the observation uncertain.
Nomenclature
17 Tauri is the star's Flamsteed designation.
It bore the traditional name Electra. Electra was one of the Pleiades sisters in Greek mythology. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Electra for this star on 21 August 2016 and it is now so entered in the IAU Catalog of Star Names.
Military namesakes
USS Electra (1843) and USS Electra (AK-21/AKA-4), were both ships of the United States Navy.
References
External links
B-type giants
Tauri, 017
Pleiades
Taurus (constellation)
Be stars
1142
023302
017499
Durchmusterung objects | Electra (star) | [
"Astronomy"
] | 763 | [
"Taurus (constellation)",
"Constellations"
] |
2,014,413 | https://en.wikipedia.org/wiki/Anaplerotic%20reactions | Anaplerotic reactions, a term coined by Hans Kornberg and originating from the Greek ἀνά= 'up' and πληρόω= 'to fill', are chemical reactions that form intermediates of a metabolic pathway. Examples of such are found in the citric acid cycle (TCA cycle). In normal function of this cycle for respiration, concentrations of TCA intermediates remain constant; however, many biosynthetic reactions also use these molecules as a substrate. Anaplerosis is the act of replenishing TCA cycle intermediates that have been extracted for biosynthesis (in what are called anaplerotic reactions).
The TCA cycle is a hub of metabolism, with central importance in both energy production and biosynthesis. Therefore, it is crucial for the cell to regulate concentrations of TCA cycle metabolites in the mitochondria. Anaplerotic flux must balance cataplerotic flux in order to retain homeostasis of cellular metabolism.
Reactions of anaplerotic metabolism
There are five major reactions classed as anaplerotic, and it is estimated that the production of oxaloacetate from pyruvate has the most physiologic importance.
The malate is created by PEP carboxylase and malate dehydrogenase in the cytosol. Malate, in the mitochondrial matrix, can be used to make pyruvate (catalyzed by malic enzyme) or oxaloacetic acid, both of which can enter the citric acid cycle.
Glutamine can also be used to produce oxaloacetate during anaplerotic reactions in various cell types through "glutaminolysis", which is also seen in many c-Myc-transformed cells. Anaplerotic enzymes mediate an alternative pathway to insulin secretion by aiding the production of cytosolic signal molecules. Pancreatic β-cells, which regulate blood glucose levels by secreting insulin, contain high amounts of pyruvate carboxylase. A decrease in insulin secretion and anaplerotic activity has been found in β-cells that lack hypoxia-inducible factor-1 beta.
Diseases of anaplerotic metabolism
Pyruvate carboxylase deficiency is an inherited metabolic disorder where anaplerosis is greatly reduced. Other anaplerotic substrates such as the odd-carbon-containing triglyceride triheptanoin can be used to treat this disorder.
References
Biochemical reactions
Cellular respiration | Anaplerotic reactions | [
"Chemistry",
"Biology"
] | 528 | [
"Biochemistry",
"Cellular respiration",
"Metabolism",
"Biochemical reactions"
] |
2,014,417 | https://en.wikipedia.org/wiki/Clear-air%20turbulence | In meteorology, clear-air turbulence (CAT) is the turbulent movement of air masses in the absence of any visual clues such as clouds, and is caused when bodies of air moving at widely different speeds meet.
The atmospheric region most susceptible to CAT is the high troposphere at altitudes of around as it meets the tropopause. Here CAT is most frequently encountered in the regions of jet streams. At lower altitudes it may also occur near mountain ranges. Thin cirrus clouds can also indicate high probability of CAT.
CAT can be hazardous to the comfort, and occasionally the safety, of air travelers, as the aircraft pilots often cannot see and anticipate such turbulences, and a sudden encounter can impart significant stress to the airframe.
CAT in the jet stream is expected to become stronger and more frequent because of climate change, with transatlantic wintertime CAT increasing by 60% (light), 95% (moderate), and 150% (severe) by the time of doubling.
Definition
In meteorology, clear-air turbulence (CAT) is the turbulent movement of air masses in the absence of any visual clues, such as clouds, and is caused when bodies of air moving at widely different speeds meet.
In aviation, CAT is defined as "the detection by aircraft of high-altitude inflight bumps in patchy regions devoid of significant cloudiness or nearby thunderstorm activity". It was first noted in the 1940s.
Detection
Clear-air turbulence is not possible to detect with the naked eye and very difficult to detect with a conventional radar, with the result that it is difficult for aircraft pilots to detect and avoid it. However, it can be remotely detected with instruments that can measure turbulence with optical techniques, such as scintillometers, Doppler LIDARs, or N-slit interferometers.
Factors
At typical heights where it occurs, the intensity and location cannot be determined precisely. However, because this turbulence affects long range aircraft that fly near the tropopause, CAT has been intensely studied. Several factors affect the likelihood of CAT. Often more than one factor is present.
As of 1965 it had been noted that 64% of the non-light turbulences (not only CAT) were observed less than away from the core of a jet stream. Jet stream produces horizontal wind shear at its edges, caused by the different relative air speeds of the stream and the surrounding air. Wind shear, a difference in relative speed between two adjacent air masses, can produce vortices, and when of sufficient degree, the air will tend to move chaotically.
A strong anticyclone vortex can also lead to CAT.
Rossby waves caused by this jet stream shear and the Coriolis force cause it to meander.
Although the altitudes near the tropopause are usually cloudless, thin cirrus cloud can form where there are abrupt changes of air velocity, for example associated with jet streams. Lines of cirrus perpendicular to the jet stream indicate possible CAT, especially if the ends of the cirrus are dispersed, in which case the direction of dispersal can indicate if the CAT is stronger at the left or at the right of the jet stream.
A temperature gradient is the change of temperature over a distance in some given direction. Where the temperature of a gas changes, so does its density and where the density changes CAT can appear.
From the ground upwards through the troposphere temperature decreases with height; from the tropopause upwards through the stratosphere temperature increases with height. Such variations are examples of temperature gradients.
A horizontal temperature gradient may occur, and hence air density variations, where air velocity changes. An example: the speed of the jet stream is not constant along its length; additionally air temperature and hence density will vary between the air within the jet stream and the air outside.
As is explained elsewhere in this article, temperature decreases and wind velocity increases with height in the troposphere, and the reverse is true within the stratosphere. These differences cause changes in air density, and hence viscosity. The viscosity of the air thus presents both inertias and accelerations which cannot be determined in advance.
Vertical wind shear above the jet stream (i.e., in the stratosphere) is sharper when it is moving upwards, because wind speed decreases with height in the stratosphere. This is the reason CAT can be generated above the tropopause, despite the stratosphere otherwise being a region which is vertically stable. On the other hand, vertical wind shear moving downwards within the stratosphere is more moderate (i.e., because downwards wind shear within the stratosphere is effectively moving against the manner in which wind speed changes within the stratosphere) and CAT is never produced in the stratosphere. Similar considerations apply to the troposphere but in reverse.
When strong wind deviates, the change of wind direction implies a change in the wind speed. A stream of wind can change its direction by differences of pressure. CAT appears more frequently when the wind is surrounding a low pressure region, especially with sharp troughs that change the wind direction more than 100°. Extreme CAT has been reported without any other factor than this.
Mountain waves are formed when four requirements are met. When these factors coincide with jet streams, CAT can occur:
A mountain range, not an isolated mountain
Strong perpendicular wind
Wind direction maintained with altitude
Temperature inversion at the top of the mountain range
The tropopause is a layer which separates two very different types of air. Beneath it, the air gets colder and the wind gets faster with height. Above it, the air warms and wind velocity decreases with height. These changes in temperature and velocity can produce fluctuation in the altitude of the tropopause, called gravity waves.
Effects on aircraft
Pilot rules
When a pilot experiences CAT, a number of rules should be applied:
The aircraft must sustain the recommended velocity for turbulence.
When following the jet stream to escape from the CAT, the aircraft must change altitude and/or heading.
When the CAT arrives from one side of the airplane, the pilot must observe the thermometer to determine whether the aircraft is above or below the jet stream and then move away from the tropopause.
When the CAT is associated with a sharp trough, the plane must go through the low-pressure region instead of around it.
The pilot may issue a Pilot Report (PIREP), communicating position, altitude and severity of the turbulence to warn other aircraft entering the region.
Cases
Because aircraft move so quickly, they can experience sudden unexpected accelerations or 'bumps' from turbulence, including CAT – as the aircraft rapidly cross invisible bodies of air which are moving vertically at many different speeds. Although the vast majority of cases of turbulence are harmless, in rare cases cabin crew and passengers on aircraft have been injured when tossed around inside an aircraft cabin during extreme turbulence. In a small number of cases, people have been killed and at least one aircraft disintegrated mid-air.
On March 5, 1966, BOAC Flight 911 from Tokyo to Hong Kong, a Boeing 707, broke up in CAT, with the loss of all persons (124) on board after experiencing severe lee-wave turbulence just downwind of Mount Fuji, Japan. The sequence of failure started with the vertical stabilizer getting ripped off.
On December 28, 1997, on United Airlines Flight 826, one person died and 17 others were seriously injured in a CAT event.
On May 21, 2024, one passenger died and dozens were injured on Singapore Airlines Flight 321 from London to Singapore, causing the plane to divert to Bangkok.
See also
Continuous gusts
Dryden Wind Turbulence Model
Ellrod index
N-slit interferometer
von Kármán Wind Turbulence Model
Wake turbulence
References
External links
Brace for Turbulence
Clear Air Turbulence Forecast (USA)
Aerodynamics
Weather hazards to aircraft
Meteorological phenomena
Turbulence
Wind | Clear-air turbulence | [
"Physics",
"Chemistry",
"Engineering"
] | 1,605 | [
"Physical phenomena",
"Earth phenomena",
"Turbulence",
"Aerodynamics",
"Meteorological phenomena",
"Aerospace engineering",
"Fluid dynamics"
] |
2,015,367 | https://en.wikipedia.org/wiki/Two-hybrid%20screening | Two-hybrid screening (originally known as yeast two-hybrid system or Y2H) is a molecular biology technique used to discover protein–protein interactions (PPIs) and protein–DNA interactions by testing for physical interactions (such as binding) between two proteins or a single protein and a DNA molecule, respectively.
The premise behind the test is the activation of downstream reporter gene(s) by the binding of a transcription factor onto an upstream activating sequence (UAS). For two-hybrid screening, the transcription factor is split into two separate fragments, called the DNA-binding domain (DBD or often also abbreviated as BD) and activating domain (AD). The BD is the domain responsible for binding to the UAS and the AD is the domain responsible for the activation of transcription. The Y2H is thus a protein-fragment complementation assay.
History
Pioneered by Stanley Fields and Ok-Kyu Song in 1989, the technique was originally designed to detect protein–protein interactions using the Gal4 transcriptional activator of the yeast Saccharomyces cerevisiae. The Gal4 protein activated transcription of a gene involved in galactose utilization, which formed the basis of selection. Since then, the same principle has been adapted to describe many alternative methods, including some that detect protein–DNA interactions or DNA-DNA interactions, as well as methods that use different host organisms such as Escherichia coli or mammalian cells instead of yeast.
Basic premise
The key to the two-hybrid screen is that in most eukaryotic transcription factors, the activating and binding domains are modular and can function in proximity to each other without direct binding. This means that even though the transcription factor is split into two fragments, it can still activate transcription when the two fragments are indirectly connected.
The most common screening approach is the yeast two-hybrid assay. In this approach the researcher knows where each prey is located on the used medium (agar plates). Millions of potential interactions in several organisms have been screened in the last decade using high-throughput screening systems (often using robots), and many thousands of interactions have been detected and categorized in databases such as BioGRID. This system often utilizes a genetically engineered strain of yeast in which the biosynthesis of certain nutrients (usually amino acids or nucleic acids) is lacking. When grown on media that lacks these nutrients, the yeast fail to survive. This mutant yeast strain can be made to incorporate foreign DNA in the form of plasmids. In yeast two-hybrid screening, separate bait and prey plasmids are simultaneously introduced into the mutant yeast strain or a mating strategy is used to get both plasmids in one host cell.
The second high-throughput approach is the library screening approach. In this set up the bait and prey harboring cells are mated in a random order. After mating and selecting surviving cells on selective medium the scientist will sequence the isolated plasmids to see which prey (DNA sequence) is interacting with the used bait. This approach has a lower rate of reproducibility and tends to yield higher amounts of false positives compared to the matrix approach.
Plasmids are engineered to produce a protein product in which the DNA-binding domain (BD) fragment is fused onto a protein while another plasmid is engineered to produce a protein product in which the activation domain (AD) fragment is fused onto another protein. The protein fused to the BD may be referred to as the bait protein, and is typically a known protein the investigator is using to identify new binding partners. The protein fused to the AD may be referred to as the prey protein and can be either a single known protein or a library of known or unknown proteins. In this context, a library may consist of a collection of protein-encoding sequences that represent all the proteins expressed in a particular organism or tissue, or may be generated by synthesising random DNA sequences. Regardless of the source, they are subsequently incorporated into the protein-encoding sequence of a plasmid, which is then transfected into the cells chosen for the screening method. This technique, when using a library, assumes that each cell is transfected with no more than a single plasmid and that, therefore, each cell ultimately expresses no more than a single member from the protein library.
If the bait and prey proteins interact (i.e., bind), then the AD and BD of the transcription factor are indirectly connected, bringing the AD in proximity to the transcription start site and transcription of reporter gene(s) can occur. If the two proteins do not interact, there is no transcription of the reporter gene. In this way, a successful interaction between the fused protein is linked to a change in the cell phenotype.
The challenge of separating cells that express proteins that happen to interact with their counterpart fusion proteins from those that do not, is addressed in the following section.
Fixed domains
In any study, some of the protein domains, those under investigation, will be varied according to the goals of the study whereas other domains, those that are not themselves being investigated, will be kept constant. For example, in a two-hybrid study to select DNA-binding domains, the DNA-binding domain, BD, will be varied while the two interacting proteins, the bait and prey, must be kept constant to maintain a strong binding between the BD and AD. There are a number of domains from which to choose the BD, bait and prey and AD, if these are to remain constant. In protein–protein interaction investigations, the BD may be chosen from any of many strong DNA-binding domains such as Zif268. A frequent choice of bait and prey domains are residues 263–352 of yeast Gal11P with a N342V mutation and residues 58–97 of yeast Gal4, respectively. These domains can be used in both yeast- and bacterial-based selection techniques and are known to bind together strongly.
The AD chosen must be able to activate transcription of the reporter gene, using the cell's own transcription machinery. Thus, the variety of ADs available for use in yeast-based techniques may not be suited to use in their bacterial-based analogues. The herpes simplex virus-derived AD, VP16 and yeast Gal4 AD have been used with success in yeast whilst a portion of the α-subunit of E. coli RNA polymerase has been utilised in E. coli-based methods.
Whilst powerfully activating domains may allow greater sensitivity towards weaker interactions, conversely, a weaker AD may provide greater stringency.
Construction of expression plasmids
A number of engineered genetic sequences must be incorporated into the host cell to perform two-hybrid analysis or one of its derivative techniques. The considerations and methods used in the construction and delivery of these sequences differ according to the needs of the assay and the organism chosen as the experimental background.
There are two broad categories of hybrid library: random libraries and cDNA-based libraries. A cDNA library is constituted by the cDNA produced through reverse transcription of mRNA collected from specific cells of types of cell. This library can be ligated into a construct so that it is attached to the BD or AD being used in the assay. A random library uses lengths of DNA of random sequence in place of these cDNA sections. A number of methods exist for the production of these random sequences, including cassette mutagenesis. Regardless of the source of the DNA library, it is ligated into the appropriate place in the relevant plasmid/phagemid using the appropriate restriction endonucleases.
E. coli-specific considerations
By placing the hybrid proteins under the control of IPTG-inducible lac promoters, they are expressed only on media supplemented with IPTG. Further, by including different antibiotic resistance genes in each genetic construct, the growth of non-transformed cells is easily prevented through culture on media containing the corresponding antibiotics. This is particularly important for counter selection methods in which a lack of interaction is needed for cell survival.
The reporter gene may be inserted into the E. coli genome by first inserting it into an episome, a type of plasmid with the ability to incorporate itself into the bacterial cell genome with a copy number of approximately one per cell.
The hybrid expression phagemids can be electroporated into E. coli XL-1 Blue cells which after amplification and infection with VCS-M13 helper phage, will yield a stock of library phage. These phage will each contain one single-stranded member of the phagemid library.
Recovery of protein information
Once the selection has been performed, the primary structure of the proteins which display the appropriate characteristics must be determined. This is achieved by retrieval of the protein-encoding sequences (as originally inserted) from the cells showing the appropriate phenotype.
E. coli
The phagemid used to transform E. coli cells may be "rescued" from the selected cells by infecting them with VCS-M13 helper phage. The resulting phage particles that are produced contain the single-stranded phagemids and are used to infect XL-1 Blue cells. The double-stranded phagemids are subsequently collected from these XL-1 Blue cells, essentially reversing the process used to produce the original library phage. Finally, the DNA sequences are determined through dideoxy sequencing.
Controlling sensitivity
The Escherichia coli-derived Tet-R repressor can be used in line with a conventional reporter gene and can be controlled by tetracycline or doxycycline (Tet-R inhibitors). Thus the expression of Tet-R is controlled by the standard two-hybrid system but the Tet-R in turn controls (represses) the expression of a previously mentioned reporter such as HIS3, through its Tet-R promoter. Tetracycline or its derivatives can then be used to regulate the sensitivity of a system utilising Tet-R.
Sensitivity may also be controlled by varying the dependency of the cells on their reporter genes. For example, this may be affected by altering the concentration of histidine in the growth medium for his3-dependent cells and altering the concentration of streptomycin for aadA dependent cells. Selection-gene-dependency may also be controlled by applying an inhibitor of the selection gene at a suitable concentration. 3-Amino-1,2,4-triazole (3-AT) for example, is a competitive inhibitor of the HIS3-gene product and may be used to titrate the minimum level of HIS3 expression required for growth on histidine-deficient media.
Sensitivity may also be modulated by varying the number of operator sequences in the reporter DNA.
Non-fusion proteins
A third, non-fusion protein may be co-expressed with two fusion proteins. Depending on the investigation, the third protein may modify one of the fusion proteins or mediate or interfere with their interaction.
Co-expression of the third protein may be necessary for modification or activation of one or both of the fusion proteins. For example, S. cerevisiae possesses no endogenous tyrosine kinase. If an investigation involves a protein that requires tyrosine phosphorylation, the kinase must be supplied in the form of a tyrosine kinase gene.
The non-fusion protein may mediate the interaction by binding both fusion proteins simultaneously, as in the case of ligand-dependent receptor dimerization.
For a protein with an interacting partner, its functional homology to other proteins may be assessed by supplying the third protein in non-fusion form, which then may or may not compete with the fusion-protein for its binding partner. Binding between the third protein and the other fusion protein will interrupt the formation of the reporter expression activation complex and thus reduce reporter expression, leading to the distinguishing change in phenotype.
Split-ubiquitin yeast two-hybrid
One limitation of classic yeast two-hybrid screens is that they are limited to soluble proteins. It is therefore impossible to use them to study the protein–protein interactions between insoluble integral membrane proteins. The split-ubiquitin system provides a method for overcoming this limitation. In the split-ubiquitin system, two integral membrane proteins to be studied are fused to two different ubiquitin moieties: a C-terminal ubiquitin moiety ("Cub", residues 35–76) and an N-terminal ubiquitin moiety ("Nub", residues 1–34). These fused proteins are called the bait and prey, respectively. In addition to being fused to an integral membrane protein, the Cub moiety is also fused to a transcription factor (TF) that can be cleaved off by ubiquitin specific proteases. Upon bait–prey interaction, Nub and Cub-moieties assemble, reconstituting the split-ubiquitin. The reconstituted split-ubiquitin molecule is recognized by ubiquitin specific proteases, which cleave off the transcription factor, allowing it to induce the transcription of reporter genes.
Fluorescent two-hybrid assay
Zolghadr and co-workers presented a fluorescent two-hybrid system that uses two hybrid proteins that are fused to different fluorescent proteins as well as LacI, the lac repressor. The structure of the fusion proteins looks like this: FP2-LacI-bait and FP1-prey where the bait and prey proteins interact and bring the fluorescent proteins (FP1 = GFP, FP2=mCherry) in close proximity at the binding site of the LacI protein in the host cell genome. The system can also be used to screen for inhibitors of protein–protein interactions.
Enzymatic two-hybrid systems: KISS
While the original Y2H system used a reconstituted transcription factor, other systems create enzymatic activities to detect PPIs. For instance, the KInase Substrate Sensor ("KISS"), is a mammalian two-hybrid approach has been designed to map intracellular PPIs. Here, a bait protein is fused to a kinase-containing portion of TYK2 and a prey is coupled to a gp130 cytokine receptor fragment. When bait and prey interact, TYK2 phosphorylates STAT3 docking sites on the prey chimera, which ultimately leads to activation of a reporter gene.
One-, three- and one-two-hybrid variants
One-hybrid
The one-hybrid variation of this technique is designed to investigate protein–DNA interactions and uses a single fusion protein in which the AD is linked directly to the binding domain. The binding domain in this case however is not necessarily of fixed sequence as in two-hybrid protein–protein analysis but may be constituted by a library. This library can be selected against the desired target sequence, which is inserted in the promoter region of the reporter gene construct. In a positive-selection system, a binding domain that successfully binds the UAS and allows transcription is thus selected.
Note that selection of DNA-binding domains is not necessarily performed using a one-hybrid system, but may also be performed using a two-hybrid system in which the binding domain is varied and the bait and prey proteins are kept constant.
Three-hybrid
RNA-protein interactions have been investigated through a three-hybrid variation of the two-hybrid technique. In this case, a hybrid RNA molecule serves to adjoin together the two protein fusion domains—which are not intended to interact with each other but rather the intermediary RNA molecule (through their RNA-binding domains). Techniques involving non-fusion proteins that perform a similar function, as described in the 'non-fusion proteins' section above, may also be referred to as three-hybrid methods.
One-two-hybrid
Simultaneous use of the one- and two-hybrid methods (that is, simultaneous protein–protein and protein–DNA interaction) is known as a one-two-hybrid approach and expected to increase the stringency of the screen.
Host organism
Although theoretically, any living cell might be used as the background to a two-hybrid analysis, there are practical considerations that dictate which is chosen. The chosen cell line should be relatively cheap and easy to culture and sufficiently robust to withstand application of the investigative methods and reagents. The latter is especially important for doing high-throughput studies. Therefore, the yeast S. cerevisiae has been the main host organism for two-hybrid studies. However, it is not always the ideal system to study interacting proteins from other organisms. Yeast cells often do not have the same post-translational modifications, have a different codon usage or lack certain proteins that are important for the correct expression of the proteins. To cope with these problems several novel two-hybrid systems have been developed. Depending on the system used, agar plates or a specific growth medium is used to grow the cells and allow selection for interaction. The most commonly used method is agar plating, where cells are plated on selective medium to see if an interaction takes place. Cells whose proteins do not interact should not survive on this selective medium.
S. cerevisiae (yeast)
The yeast S. cerevisiae was the model organism used during the two-hybrid technique's inception. It is commonly known as the Y2H system. It has several characteristics that make it a robust organism to host the interaction, including the ability to form tertiary protein structures, a neutral internal pH, an enhanced ability to form disulfide bonds, and reduced-state glutathione among other cytosolic buffer factors that maintain a hospitable internal environment. The yeast model can be manipulated through non-molecular techniques and its complete genome sequence is known. Yeast systems are tolerant of diverse culture conditions and harsh chemicals that could not be applied to mammalian tissue cultures.
A number of yeast strains have been created specifically for Y2H screens, e.g. Y187 and AH109, both produced by Clontech. Yeast strains R2HMet and BK100 have also been used.
Candida albicans
C. albicans is a yeast with a particular feature: it translates the CUG codon into serine rather than leucine. Due to this different codon usage, it is difficult to use S. cerevisiae as a Y2H host to check for protein–protein interactions involving C. albicans genes. To provide a more native environment, a C. albicans two-hybrid (C2H) system was developed, with which protein–protein interactions can be studied in C. albicans itself. A recent addition was the creation of a high-throughput version of the system.
E. coli
Bacterial two-hybrid methods (B2H or BTH) are usually carried out in E. coli and have some advantages over yeast-based systems. For instance, the higher transformation efficiency and faster rate of growth lend E. coli to the use of larger libraries (in excess of 10^8). The absence of a requirement for a nuclear localisation signal to be included in the protein sequence and the ability to study proteins that would be toxic to yeast may also be major factors to consider when choosing an experimental background organism.
The methylation activity of certain E. coli DNA methyltransferase proteins may interfere with some DNA-binding protein selections. If this is anticipated, the use of an E. coli strain that is defective for a particular methyltransferase may be an obvious solution. The B2H may not be ideal when studying eukaryotic protein-protein interactions (e.g. human proteins) as proteins may not fold as in eukaryotic cells or may lack other processing.
Mammalian cells
In recent years a mammalian two-hybrid (M2H) system has been designed to study mammalian protein–protein interactions in a cellular environment that closely mimics the native protein environment. Transiently transfected mammalian cells are used in this system to find protein–protein interactions.
Using a mammalian cell line to study mammalian protein–protein interactions gives the advantage of working in a more native context. Post-translational modifications such as phosphorylation, acylation and glycosylation are similar, and the intracellular localization of the proteins is more accurate than in a yeast two-hybrid system. The mammalian two-hybrid system also makes it possible to study signal inputs. Another major advantage is that results can be obtained within 48 hours after transfection.
Arabidopsis thaliana
In 2005 a two-hybrid system in plants was developed. Using protoplasts of A. thaliana, protein–protein interactions can be studied in plants, which allows interactions to be examined in their native context. In this system the GAL4 AD and BD are under the control of the strong 35S promoter, and interaction is measured using a GUS reporter. To enable high-throughput screening, the vectors were made Gateway-compatible. The system is known as the protoplast two-hybrid (P2H) system.
Aplysia californica
The sea hare A. californica is a model organism in neurobiology used to study, among other things, the molecular mechanisms of long-term memory. To study interactions important in neurology in a more native environment, a two-hybrid system has been developed in A. californica neurons. The GAL4 AD and BD are used in this system.
Bombyx mori
An insect two-hybrid (I2H) system was developed in a silkworm cell line derived from the larva (caterpillar) of the domesticated silk moth, Bombyx mori (BmN4 cells). This system uses the GAL4 BD and the activation domain of mouse NF-κB P65, both under the control of the OpIE2 promoter.
Applications
Determination of sequences crucial for interaction
By changing specific amino acids by mutating the corresponding DNA base-pairs in the plasmids used, the importance of those amino acid residues in maintaining the interaction can be determined.
After using a bacterial cell-based method to select DNA-binding proteins, it is necessary to check the specificity of these domains, as there is a limit to the extent to which the bacterial cell genome can act as a sink for domains with an affinity for other sequences (or indeed, a general affinity for DNA).
Drug and poison discovery
Protein–protein signalling interactions pose suitable therapeutic targets due to their specificity and pervasiveness. The random drug discovery approach uses compound banks that comprise random chemical structures, and requires a high-throughput method to test these structures against their intended target.
The cell chosen for the investigation can be specifically engineered to mirror the molecular aspect that the investigator intends to study and then used to identify new human or animal therapeutics or anti-pest agents.
Determination of protein function
By determination of the interaction partners of unknown proteins, the possible functions of these new proteins may be inferred. This can be done using a single known protein against a library of unknown proteins or conversely, by selecting from a library of known proteins using a single protein of unknown function.
Zinc finger protein selection
To select zinc finger proteins (ZFPs) for protein engineering, methods adapted from the two-hybrid screening technique have been used with success. A ZFP is itself a DNA-binding protein used in the construction of custom DNA-binding domains that bind to a desired DNA sequence.
By using a selection gene with the desired target sequence included in the UAS, and randomising the relevant amino acid sequences to produce a ZFP library, cells that host a DNA-ZFP interaction with the required characteristics can be selected. Each ZFP typically recognises only 3–4 base pairs, so to prevent recognition of sites outside the UAS, the randomised ZFP is engineered into a 'scaffold' consisting of another two ZFPs of constant sequence. The UAS is thus designed to include the target sequence of the constant scaffold in addition to the sequence for which a ZFP is selected.
A number of other DNA-binding domains may also be investigated using this system.
Strengths
Two-hybrid screens are low-tech; they can be carried out in any lab without sophisticated equipment.
Two-hybrid screens can provide an important first hint for the identification of interaction partners.
The assay is scalable, which makes it possible to screen for interactions among many proteins. Furthermore, it can be automated, and by using robots many proteins can be screened against thousands of potentially interacting proteins in a relatively short time. Two types of large screens are used: the library approach and the matrix approach (see the rough scale sketch after this list).
Yeast two-hybrid data can be of similar quality to data generated by the alternative approach of coaffinity purification followed by mass spectrometry (AP/MS).
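To give a rough sense of the scale that motivates automation in the matrix approach mentioned above, the short calculation below counts the unordered bait–prey combinations in an exhaustive screen; the 6,000-protein figure is an assumed, illustrative proteome size rather than a value taken from this article.

```python
# Back-of-the-envelope scale of an exhaustive matrix screen.
# The proteome size below is an illustrative assumption.
n_proteins = 6_000
pairs = n_proteins * (n_proteins - 1) // 2   # unordered bait-prey combinations
plates_96 = -(-pairs // 96)                  # ceiling division: 96-well plates needed
print(f"{pairs:,} unique pairs, roughly {plates_96:,} 96-well plates")
```

A real matrix screen may test each pair in both bait and prey orientations, roughly doubling the number of matings, which is why pooling strategies and robotics are commonly used.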
Weaknesses
The main criticism of the yeast two-hybrid screen of protein–protein interactions is the possibility of a high number of false positive (and false negative) identifications. The exact rate of false positive results is not known, but earlier estimates were as high as 70%. This also partly explains the often very small overlap in results between (high-throughput) two-hybrid screens, especially when different experimental systems are used.
The reason for this high error rate lies in the characteristics of the screen:
Certain assay variants overexpress the fusion proteins which may cause unnatural protein concentrations that lead to unspecific (false) positives.
The hybrid proteins are fusion proteins; that is, the fused parts may inhibit certain interactions, especially if an interaction takes place at the N-terminus of a test protein (where the DNA-binding or activation domain is typically attached).
An interaction may not happen in yeast, the typical host organism for Y2H. For instance, if a bacterial protein is tested in yeast, it may lack a chaperone for proper folding that is only present in its bacterial host. Moreover, a mammalian protein is sometimes not correctly modified in yeast (e.g., missing phosphorylation), which can also lead to false results.
The Y2H takes place in the nucleus. If test proteins are not localized to the nucleus (because they have other localization signals) two interacting proteins may be found to be non-interacting.
Some proteins might specifically interact when they are co-expressed in the yeast, although in reality they are never present in the same cell at the same time. However, in most cases it cannot be ruled out that such proteins are indeed expressed in certain cells or under certain circumstances.
Each of these points alone can give rise to false results. Due to the combined effects of all error sources, yeast two-hybrid results have to be interpreted with caution. The probability of generating false positives means that all interactions should be confirmed by a high-confidence assay, for example co-immunoprecipitation of the endogenous proteins, which is difficult for large-scale protein–protein interaction data. Alternatively, Y2H data can be verified using multiple Y2H variants or bioinformatics techniques. The latter test whether interacting proteins are expressed at the same time, share some common features (such as gene ontology annotations or certain network topologies), or have homologous interactions in other species.
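As a rough illustration of the bioinformatic cross-checks mentioned above, the sketch below scores candidate Y2H pairs by shared Gene Ontology annotations and by co-expression across conditions. The protein names, annotations, expression values and cutoff are all hypothetical placeholders, and a real pipeline would draw on curated annotation and expression resources rather than hard-coded dictionaries.

```python
# Toy post-hoc filter for Y2H hits: combine GO-term overlap and co-expression.
# All data below are hypothetical placeholders for illustration only.
from math import sqrt

go_terms = {
    "BaitA": {"GO:0006355", "GO:0005634"},
    "PreyX": {"GO:0006355", "GO:0003677"},
    "PreyY": {"GO:0016192"},
}
expression = {
    "BaitA": [5.1, 4.8, 6.0, 5.5],
    "PreyX": [4.9, 4.5, 6.2, 5.8],
    "PreyY": [0.2, 7.9, 0.1, 8.3],
}

def jaccard(a, b):
    """Fraction of shared GO terms between two annotation sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def pearson(x, y):
    """Co-expression similarity between two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = sqrt(sum((yi - my) ** 2 for yi in y))
    return cov / (sx * sy) if sx and sy else 0.0

def support(bait, prey):
    """Combined evidence score; higher values suggest a more plausible hit."""
    return jaccard(go_terms[bait], go_terms[prey]) + pearson(expression[bait], expression[prey])

for prey in ("PreyX", "PreyY"):
    score = support("BaitA", prey)
    print(prey, round(score, 2), "keep" if score > 0.8 else "flag for re-testing")
```

Such scores cannot prove an interaction; they only prioritise which Y2H hits are worth confirming with an orthogonal assay such as co-immunoprecipitation.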
See also
Phage display, an alternative method for detecting protein–protein and protein–DNA interactions
Protein array, a chip-based method for detecting protein–protein interactions
Synthetic genetic array analysis, a yeast-based method for studying gene interactions
References
External links
Detail on sister technique two-hybrid system
Science Creative Quarterly's overview of the yeast two hybrid system
Gateway-Compatible Yeast One-Hybrid Screens
Video animation of the Yeast Two-Hybrid System
Yeast Two-Hybrid
BioGrid Database with protein-protein interactions
Cell biology
Molecular biology
Protein–protein interaction assays
Systems biology | Two-hybrid screening | [
"Chemistry",
"Biology"
] | 5,723 | [
"Biochemistry methods",
"Protein–protein interaction assays",
"Cell biology",
"Molecular biology",
"Biochemistry",
"Systems biology"
] |
5,097,491 | https://en.wikipedia.org/wiki/Atlantic%20meridional%20overturning%20circulation | The Atlantic meridional overturning circulation (AMOC) is the main ocean current system in the Atlantic Ocean. It is a component of Earth's ocean circulation system and plays an important role in the climate system. The AMOC includes Atlantic currents at the surface and at great depths that are driven by changes in weather, temperature and salinity. Those currents comprise half of the global thermohaline circulation that includes the flow of major ocean currents, the other half being the Southern Ocean overturning circulation.
The AMOC is composed of a northward flow of warm, more saline water in the Atlantic's upper layers and a southward, return flow of cold, salty, deep water. Warm water from the south is more saline ('halocline') because of the higher evaporation rate in the tropical zone. The warm saline water forms the upper layer of the ocean ('thermocline'), but when this layer cools down, the density of the salty water increases, making it sink into the deep. This is an important part of the motor of the AMOC system. The limbs are linked by regions of overturning in the Nordic Seas and the Southern Ocean. Overturning sites are associated with intense exchanges of heat, dissolved oxygen, carbon and other nutrients, and very important for the ocean's ecosystems and its function as a carbon sink. Changes in the strength of the AMOC can affect multiple elements of the climate system.
Climate change may weaken the AMOC through increases in ocean heat content and elevated flows of freshwater from melting ice sheets. Studies using oceanographic reconstructions suggest the AMOC is weaker than it was before the Industrial Revolution. There is debate over the relative contributions of different factors and it is unclear how much of this weakening is due to climate change or the circulation's natural variability over millennia. Climate models predict the AMOC will further weaken during the 21st century. This weakening would reduce average air temperatures over Scandinavia, Great Britain and Ireland because these regions are warmed by the North Atlantic Current. Weakening of the AMOC would also accelerate sea level rise around North America and reduce primary production in the North Atlantic.
Severe weakening of the AMOC may lead to a collapse of the circulation, which would not be easily reversible and thus constitutes one of the tipping points in the climate system. A collapse would substantially lower the average temperature and amount of rain and snowfall in Europe. It may also raise the frequency of extreme weather events and have other severe effects.
Overall structure
The Atlantic meridional overturning circulation (AMOC) is the main current system in the Atlantic Ocean and is also part of the global thermohaline circulation, which connects the world's oceans with a single "conveyor belt" of continuous water exchange. Normally, relatively warm, less-saline water stays on the ocean's surface while deep layers are colder, denser and more-saline, in what is known as ocean stratification. Deep water eventually gains heat and/or loses salinity in an exchange with the mixed ocean layer, and becomes less dense and rises towards the surface. Differences in temperature and salinity exist between ocean layers and between parts of the World Ocean, and together they drive the thermohaline circulation. The Pacific Ocean is less saline than the other oceans because it receives large quantities of fresh rainfall. Its surface water is insufficiently saline to sink lower than several hundred meters, meaning deep ocean water must come from elsewhere.
Ocean water in the North Atlantic is more saline than that in the Pacific, partly because extensive evaporation on the surface concentrates salt within the remaining water and partly because sea ice near the Arctic Circle expels salt as it freezes during winter. Even more importantly, evaporated moisture in the Atlantic is swiftly carried away by atmospheric circulation before it can fall back as rain. Trade winds move this moisture across Central America and to the eastern North Pacific, where it falls as rain. Major mountain ranges such as the Tibetan Plateau, the Rocky Mountains and the Andes prevent any equivalent moisture transport back to the Atlantic.
Due to this process, Atlantic surface water becomes salty and therefore dense, eventually downwelling to form the North Atlantic Deep Water (NADW). NADW formation primarily occurs in the Nordic Seas and involves a complex interplay of regional water masses such as the Denmark Strait Overflow Water (DSOW), Iceland-Scotland Overflow Water (ISOW) and Nordic Seas Overflow Water. Labrador Sea Water may play an important role as well but increasing evidence suggests water in Labrador and Irminger Seas primarily recirculates through the North Atlantic Gyre and has little connection with the rest of the AMOC.
The NADW is not the deepest water layer in the Atlantic Ocean; the Antarctic bottom water (AABW) is always the densest, deepest ocean layer in any basin deeper than . As the upper reaches of the AABW flow upwells, it melds into and reinforces the NADW. The formation of the NADW is also the beginning of the lower cell of the circulation. The downwelling that forms the NADW is balanced by an equal amount of upwelling. In the western Atlantic, Ekman transport, the increase in ocean-layer mixing caused by wind activity, results in strong upwelling in the Canary Current and the Benguela Current, which are located on the northwest and southwest coasts of Africa. , upwelling is substantially stronger around the Canary Current than the Benguela Current, though an opposite pattern existed until the closure of the Central American Seaway during the late Pliocene. In the Eastern Atlantic, significant upwelling occurs only during certain months of the year because this region's deep thermocline means it is more dependent on the state of sea surface temperature than on wind activity. There is also a multi-year upwelling cycle that occurs in synchronization with the El Niño/La Niña cycle.
At the same time, the NADW moves southward and at the southern end of the Atlantic transect, around 80% of it upwells in the Southern Ocean, connecting it with the Southern Ocean overturning circulation (SOOC). After upwelling, the water is understood to take one of two pathways. Water surfacing close to Antarctica will likely be cooled by Antarctic sea ice and sink back into the lower cell of the circulation. Some of this water will rejoin the AABW but the rest of the lower-cell flow will eventually reach the depths of the Pacific and Indian oceans. Water that upwells at lower, ice-free latitudes moves further northward due to Ekman transport and is committed to the upper cell. The warm water in the upper cell is responsible for the return flow to the North Atlantic, which occurs mainly around the coast of Africa and through the Indonesian archipelago. Once this water returns to the North Atlantic, it becomes cooler and denser, and sinks, feeding back into the NADW.
Role in the climate system
Equatorial areas are the hottest part of the globe; due to thermodynamics, this heat moves towards the poles. Most of this heat is transported by atmospheric circulation but warm, surface ocean currents play an important role. Heat from the equator moves either northward or southward; the Atlantic Ocean is the only ocean in which the heat flow is northward. Much of the heat transfer in the Atlantic occurs due to the Gulf Stream, a surface current that carries warm water northward from the Caribbean. While the Gulf Stream as a whole is driven by winds alone, its northern-most segment, the North Atlantic Current, obtains much of its heat from thermohaline exchange in the AMOC. Thus, the AMOC carries up to 25% of the total heat toward the northern hemisphere, and plays an important role in the climate around northwest Europe.
Because atmospheric patterns also play a large role in heat transfer, the idea that the climate in northern Europe would be as cold as that in northern North America without heat transport via ocean currents (i.e. up to colder) is generally considered incorrect. While one modeling study suggested collapse of the AMOC could result in Ice Age-like cooling, including sea-ice expansion and mass glacier formation, within a century, the accuracy of those results is questionable. There is a consensus that the AMOC keeps northern and western Europe warmer than it would otherwise be, with the difference of and depending on the area. For instance, studies of the Florida Current suggest the Gulf Stream was around 10% weaker from around 1200 to 1850 due to increased surface salinity, and this likely contributed to the conditions known as the Little Ice Age.
The AMOC makes the Atlantic Ocean into a more-effective carbon sink in two major ways. Firstly, the upwelling that takes place supplies large quantities of nutrients to the surface waters, supporting the growth of phytoplankton and therefore increasing marine primary production and the overall amount of photosynthesis in the surface waters. Secondly, upwelled water has low concentrations of dissolved carbon because the water is typically 1,000 years old and has not been exposed to anthropogenic increases of carbon dioxide in the atmosphere. This water absorbs larger quantities of carbon than the more-saturated surface waters and is prevented from releasing carbon back into the atmosphere when it is downwelled. While the Southern Ocean is by far the strongest ocean carbon sink, the North Atlantic is the largest single carbon sink in the northern hemisphere.
Abrupt changes during the Late Pleistocene
Because the Atlantic meridional overturning circulation (AMOC) is dependent on a series of interactions between layers of ocean water of varying temperature and salinity, it is not static but experiences small, cyclical changes and larger, long-term shifts in response to external forcings. Many of those shifts occurred during the Late Pleistocene (126,000 to 11,700 years ago), which was the final geological epoch before the current Holocene. It also includes the Last Glacial Period, which is colloquially known as the "last ice age". Twenty-five abrupt temperature oscillations between the hemispheres occurred during this period; these oscillations are known as Dansgaard–Oeschger events (D-O events) after Willi Dansgaard and Hans Oeschger, who discovered them by analyzing Greenland ice cores in the 1980s.
D-O events are best known for the rapid warming of between 8 °C (15 °F) and 15 °C (27 °F) that occurred in Greenland over several decades. Warming also occurred over the entire North Atlantic region but equivalent cooling over the Southern Ocean also occurred during these events. This is consistent with the strengthened AMOC transporting more heat from one hemisphere to another. The warming of the northern hemisphere would have caused ice-sheet melting and many D-O events appear to have been ended by Heinrich events, in which massive streams of icebergs broke off from the then-present Laurentide ice sheet. As the icebergs melted in the ocean, the ocean water would have become fresher, weakening the circulation and stopping the D-O warming.
There is not yet a consensus explanation for why the AMOC would have fluctuated so much, and only during this glacial period. Common hypotheses include cyclical patterns of salinity change in the North Atlantic or a wind-pattern cycle due to the growth and decline of the region's ice sheets, which are large enough to affect wind patterns. As of the late 2010s, some research suggests the AMOC is most sensitive to change during periods of extensive ice sheets and low , making the Last Glacial Period a "sweet spot" for such oscillations. It has been suggested the warming of the southern hemisphere would have initiated the pattern as warmer waters spread north through the overall thermohaline circulation. The paleoclimate evidence is not currently strong enough to say whether the D-O events started with changes in the AMOC or whether the AMOC changed in response to another trigger. For instance, some research suggests changes in sea-ice cover initiated the D-O events because they would have affected water temperature and circulation through ice–albedo feedback.
D-O events are numbered in reverse order; the largest numbers are assigned to the oldest events. The penultimate event, Dansgaard–Oeschger event 1, occurred some 14,690 years ago and marks the transition from the Oldest Dryas period to the Bølling–Allerød Interstadial (), which lasted until 12,890 years Before Present. It was named after the two sites in Denmark with vegetation fossils that could only have survived during a comparatively warm period in the northern hemisphere. The major warming in the northern hemisphere was offset by southern-hemisphere cooling and little net change in global temperature, which is consistent with changes in the AMOC. The onset of the interstadial also caused a period of sea level rise from ice-sheet collapse that is designated Meltwater pulse 1A.
The Bølling and Allerød stages of the interstadial were separated by two centuries of the opposite pattern – northern-hemisphere cooling, southern-hemisphere warming – which is known as the Older Dryas because the Arctic flower Dryas octopetala became dominant where forests had been able to grow during the interstadial. The interstadial ended with the onset of the Younger Dryas (YD) period (12,800–11,700 years ago), when northern-hemisphere temperatures returned to near-glacial levels, possibly within a decade. This happened due to an abrupt slowing of the AMOC, which, in a similar manner to Heinrich events, was caused by freshening due to ice loss from the Laurentide ice sheet. Unlike true Heinrich events, there was an enormous flow of meltwater through the Mackenzie River in what is now Canada rather than a mass iceberg loss. Major changes in the precipitation regime, such as the shift of the Intertropical Convergence Zone to the south, increased rainfall in North America, and the drying of South America and Europe, occurred. Global temperatures again barely changed during the Younger Dryas and long-term, post-glacial warming resumed after it ended.
Stability and vulnerability
The AMOC has not always existed; for much of Earth's history, overturning circulation in the northern hemisphere occurred in the North Pacific. Paleoclimate evidence shows the shift of overturning circulation from the Pacific to the Atlantic occurred 34 million years ago at the Eocene-Oligocene transition, when the Arctic-Atlantic gateway had closed. This closure fundamentally changed the thermohaline circulation structure; some researchers have suggested climate change may eventually reverse this shift and re-establish the Pacific circulation after the AMOC shuts down. Climate change affects the AMOC by making surface water warmer as a consequence of Earth's energy imbalance and by making surface water less saline due to the addition of large quantities of fresh water from melting ice – mainly from Greenland – and through increasing precipitation over the North Atlantic. Both of these causes would increase the difference between the surface and deep layers, thus making the upwelling and downwelling that drives the circulation more difficult.
In the 1960s, Henry Stommel did much of the early research into the AMOC with what later became known as the Stommel box model, which introduced the idea of the Stommel bifurcation, in which the AMOC could either exist in a strong state like the one throughout recorded history or effectively collapse to a much weaker state and not recover unless the increased warming and/or freshening that caused the collapse is reduced. The warming and/or freshening could directly cause the collapse or weaken the circulation to a state in which its ordinary fluctuations (noise) could push it past the tipping point. The possibility the AMOC is a bistable system that is either "on" or "off" and could suddenly collapse has been a topic of scientific discussion ever since. In 2004, The Guardian published the findings of a report commissioned by Pentagon defense adviser Andrew Marshall that suggested the average annual temperature in Europe would drop by between 2010 and 2020 as the result of an abrupt AMOC shutdown.
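The bistability behind the Stommel bifurcation can be illustrated with a very small numerical sketch. The nondimensional two-box balance below is a standard textbook-style reduction, not Stommel's original formulation or a calibrated AMOC model, and the forcing values are arbitrary illustrative choices; it simply locates the steady states of the salinity difference for a given freshwater forcing.

```python
# Nondimensional Stommel-type two-box balance: dx/dt = F - |1 - x| * x,
# where x stands for the (scaled) salinity difference between the boxes and
# F for freshwater forcing. Parameter values are illustrative assumptions.
import numpy as np

def equilibria(F, grid=4001):
    """Return approximate fixed points of dx/dt = F - |1 - x| * x."""
    x = np.linspace(0.0, 2.0, grid)
    dxdt = F - np.abs(1.0 - x) * x
    # sign changes of dx/dt between neighbouring grid points mark equilibria
    idx = np.where(np.sign(dxdt[:-1]) != np.sign(dxdt[1:]))[0]
    return x[idx]

for F in (0.10, 0.24, 0.40):
    print(f"F = {F:.2f}: equilibria near x = {np.round(equilibria(F), 3)}")
```

For small forcing the sketch finds three fixed points (a strong-circulation state, a collapsed state and an unstable state in between), while beyond a critical forcing only the collapsed state remains, which is the qualitative behaviour usually meant by the Stommel bifurcation.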
Modeling AMOC collapse
Some of the models developed after Stommel's work suggest the AMOC could have one or more intermediate stable states between full strength and full collapse. This is more-commonly seen in Earth Models of Intermediate Complexity (EMICs), which focus on certain parts of the climate system like AMOC and disregard others, rather than in the more-comprehensive general circulation models (GCMs) that represent the "gold standard" for simulating the entire climate but often have to simplify certain interactions. GCMs typically show the AMOC has a single equilibrium state and that it is difficult or impossible for it to collapse. Researchers have raised concerns this modeled resistance to collapse only occurs because GCM simulations tend to redirect large quantities of freshwater toward the North Pole, where it would no longer affect the circulation, a movement that does not occur in nature.
In 2024, three researchers performed a simulation with one of the Community Earth System Models (CESM) in which a classic AMOC collapse occurred, much like it does in intermediate-complexity models. Unlike some other simulations, they did not immediately subject the model to unrealistic meltwater levels but gradually increased the input. Their simulation had run for over 1,700 years before the collapse occurred, and they had eventually reached meltwater levels equivalent to a sea level rise of per year, about 20 times larger than the /year sea level rise between 1993 and 2017, and well above any level considered plausible. According to the researchers, those unrealistic conditions were intended to counterbalance the model's unrealistic stability, and the model's output should not be regarded as a prediction but rather as a high-resolution representation of the way currents would start changing before a collapse. Other scientists agreed this study's findings would mainly help with calibrating more-realistic studies, particularly once better observational data become available.
Some research indicates classic EMIC projections are biased toward AMOC collapse because they subject the circulation to an unrealistically constant flow of freshwater. In one study, the difference between constant and variable freshwater flux delayed collapse of the circulation in a typical Stommel-bifurcation EMIC by over 1,000 years. The researchers said this simulation is more consistent with reconstructions of the AMOC's response to Meltwater pulse 1A 13,500–14,700 years ago and indicates a similarly long delay. In 2022, a paleoceanographic reconstruction found a limited effect from massive freshwater forcing of the final Holocene deglaciation ~11,700–6,000 years ago, when the sea level rise was around . It suggested most models overestimate the effects of freshwater forcing on the AMOC. If the AMOC is more dependent on wind strength – which changes relatively little with warming – than is commonly understood, then it would be more resistant to collapse. According to some researchers, the less-studied Southern Ocean overturning circulation (SOOC) may be more vulnerable to collapse than the AMOC.
High-quality Earth system models indicate a collapse is unlikely and would only become probable if high levels of warming (≥) are sustained long after 2100. Some paleoceanographic research seems to support this idea. Some researchers fear the complex models are too stable and that lower-complexity projections pointing to an earlier collapse are more accurate. One of those projections suggests AMOC collapse could happen around 2057, but many scientists are skeptical of the projection. Some research also suggests the Southern Ocean overturning circulation may be more prone to collapse than the AMOC. In October 2024, 44 climate scientists published an open letter claiming that, according to scientific studies from the past few years, the risk of AMOC collapse has been greatly underestimated and that it could occur in the next few decades, with devastating impacts especially for the Nordic countries. They called on the Nordic countries to ensure the implementation of the Paris Agreement to prevent it.
Trends
Until 2024 there was a disagreement between observations showing a slowdown of the circulation and climate models showing a stable circulation. In November 2024, Nature Geoscience published a study which tried to resolve this discrepancy. The scientists used "Earth system and eddy-permitting coupled ocean–sea-ice models", with which the observations and the models corresponded to each other much better. The study found a slowdown of 0.46 sverdrups per decade since 1950.
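To put that trend into rough perspective, the calculation below simply accumulates a constant 0.46 Sv-per-decade decline over 1950–2020 and compares it with a nominal overturning strength of about 17 Sv, roughly the mean measured by the RAPID array at 26°N. Both the constant-rate extrapolation and the 17 Sv reference are simplifying assumptions for illustration, not figures taken from the study itself.

```python
# Back-of-the-envelope: cumulative weakening implied by a constant trend.
# The 17 Sv reference strength and constant-rate assumption are illustrative.
trend_sv_per_decade = 0.46
decades = (2020 - 1950) / 10                      # 7 decades
total_decline = trend_sv_per_decade * decades     # ~3.2 Sv
reference_strength_sv = 17.0                      # assumed nominal AMOC strength
print(f"cumulative decline ≈ {total_decline:.1f} Sv "
      f"(≈ {100 * total_decline / reference_strength_sv:.0f}% of {reference_strength_sv:.0f} Sv)")
```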
Observations
Direct observations of the strength of the AMOC have been available since 2004 from RAPID, an in situ mooring array at 26°N in the Atlantic. Observational data need to be collected for a prolonged period to be of use. Thus, some researchers have attempted to make predictions from smaller-scale observations; for instance, in May 2005, submarine-based research from Peter Wadhams indicated that downwelling in the Greenland Sea – a small part of the AMOC system in which giant water columns nicknamed "chimneys" transfer water downwards – was at less than a quarter of its normal strength. In 2000, other researchers focused on trends in the North Atlantic Gyre (NAG), which is also known as the Northern Subpolar Gyre (SPG). Measurements taken in 2004 found a 30% decline in the NAG relative to the measurement in 1992; some interpreted this as a sign of AMOC collapse. RAPID data have since shown this to be a statistical anomaly, and observations from 2007 and 2008 have shown a recovery of the NAG. It is now known the NAG is largely separate from the rest of the AMOC and could collapse independently of it.
By 2014, there was enough processed RAPID data up until the end of 2012; these data appeared to show a decline in circulation 10 times greater than that predicted by the most advanced models of the time. Scientific debate began about whether this indicated a strong impact of climate change or a large interdecadal variability of the circulation. Data up until 2017 showed the decline in 2008 and 2009 was anomalously large, but also that the circulation after 2008 remained weaker than it had been in 2004–2008.
The AMOC is also measured by tracking changes in heat transport that would be correlated with overall current flows. In 2017 and 2019, estimates derived from heat observations made by NASA's CERES satellites and international Argo floats suggested 15–20% less heat transport was occurring than was implied by the RAPID, and indicated a fairly stable flow with a limited indication of decadal variability.
The strength of Florida Current has been measured as stable over the last four decades after correction for changes in Earth's magnetic field.
Reconstructions
Recent past
Climate reconstructions allow researchers to assemble hints about the past state of the AMOC, though these techniques are necessarily less reliable than direct observations. In February 2021, RAPID data were combined with reconstructed trends from data recorded 25 years before RAPID. This study showed no evidence of an overall decline in the AMOC over the past 30 years. A Science Advances study published in 2020 found no significant change in the AMOC circulation compared to that in the 1990s, although substantial changes have occurred across the North Atlantic in the same period. A March 2022 review article concluded that while global warming may cause a long-term weakening of the AMOC, such weakening remains difficult to detect when analyzing changes since 1980 (a time frame that presents both periods of weakening and strengthening), and the magnitude of either change is uncertain, ranging between 5% and 25%. The review concluded with a call for more-sensitive and longer-term research.
20th century
Some reconstructions have attempted to compare the current state of the AMOC with that from a century or so earlier. For instance, a 2010 statistical analysis found a weakening of the AMOC has been continuing since the late 1930s with an abrupt shift of a North-Atlantic overturning cell around 1970. In 2015, a different statistical analysis interpreted a cold pattern in some years of temperature records as a sign of AMOC weakening. It concluded the AMOC has weakened by 15–20% in 200 years and that the circulation slowed during most of the 20th century. Between 1975 and 1995, the circulation was weaker than at any time over the past millennium. This analysis had also shown a limited recovery after 1990 but the authors cautioned another decline is likely to occur in the future.
In 2018, another reconstruction suggested a weakening of around 15% has occurred since the mid-twentieth century. A 2021 reconstruction used over a century of ocean-temperature-and-salinity data, which appeared to show significant changes in eight independent AMOC indices that could indicate "an almost complete loss of stability". This reconstruction was forced to omit all data from 35 years before 1900 and after 1980 to maintain consistent records of all eight indicators. These findings were challenged by 2022 research that used data recorded between 1900 and 2019 and found no change in the AMOC between 1900 and 1980, followed by only a single-sverdrup reduction in AMOC strength after 1980, a variation that remains within the range of natural variability.
Sediment analyses show a weakening of the AMOC by 20% since the middle of the 20th century.
Millennial scale
According to a 2018 study, in the last 150 years the AMOC has demonstrated exceptional weakness when compared to the previous 1,500 years, and the study indicated a discrepancy in the modeled timing of AMOC decline after the Little Ice Age. A 2017 review concluded there is strong evidence for past changes in the strength and structure of the AMOC during abrupt climate events, such as the Younger Dryas and many of the Heinrich events. In 2022, another millennial-scale reconstruction found the Atlantic multidecadal variability displayed strongly increasing "memory", meaning it is now less likely to return to the mean state and instead would proceed in the direction of past variation. Because this pattern is likely connected to the AMOC, it could indicate a "quiet" loss of stability that is not seen in most models.
In February 2021, a major study in Nature Geoscience reported the preceding millennium saw an unprecedented weakening of the AMOC, an indication the change was caused by human actions. The study's co-author said the AMOC had already slowed by about 15% and that effects were now being seen; according to them: "In 20 to 30 years it is likely to weaken further, and that will inevitably influence our weather, so we would see an increase in storms and heatwaves in Europe, and sea level rises on the east coast of the US." In February 2022, Nature Geoscience published a "Matters Arising" commentary article co-authored by 17 scientists that disputed those findings and said the long-term AMOC trend remains uncertain. The journal also published a response from the authors of the 2021 study, who defended their findings.
Possible indirect signs
Some researchers have interpreted a range of recently observed climatic changes and trends as being connected to a decline in the AMOC; for instance, a large area of the North Atlantic Gyre near Greenland has cooled by between 1900 and 2020, in contrast to substantial ocean warming elsewhere. This cooling is normally seasonal; it is most-pronounced in February, when cooling reaches at the area's epicenter but it still experiences warming relative to pre-industrial levels during warm months, particularly in August. Between 2014 and 2016, waters in the area stayed cool for 19 months before warming, and media described this phenomenon as the cold blob.
The cold-blob pattern occurs because sufficiently fresh, cool water avoids sinking into deeper layers. This freshening was immediately described as evidence of an AMOC slowdown. Later research found atmospheric changes, such as an increase in low cloud cover and a strengthening of the North Atlantic Oscillation (NAO), have also played a major role in this local cooling. The overall importance of the NAO in the phenomenon is disputed, but cold-blob trends alone cannot be used to analyze the strength of the AMOC.
Another possible early indication of a slowing of the AMOC is the relative reduction in the North Atlantic's potential to act as a carbon sink. Between 2004 and 2014, the amount of carbon sequestered in the North Atlantic declined by 20% relative to 1994–2004, which the researchers considered evidence of AMOC slowing. This decline was offset by a comparable increase in the South Atlantic, which is considered part of the Southern Ocean. While the total amount of carbon absorption by all carbon sinks is generally projected to increase throughout the 21st century, a continuing decline in the North Atlantic sink would have important implications. Other processes that were attributed in some studies to AMOC slowing include increasing salinity in the South Atlantic, rapid deoxygenation in the Gulf of St. Lawrence, and an approximately 10% decline in phytoplankton productivity across the North Atlantic over the past 200 years.
Projections
Individual models
Historically, CMIP models, the gold standard in climate science, show the AMOC is very stable; although it may weaken, it will always recover rather than permanently collapse – for example, in a 2014 idealized experiment in which concentrations abruptly double from 1990 levels and do not change afterward, the circulation declines by around 25% but does not collapse, although it recovers by only 6% over the next 1,000 years. In 2020, research estimated if warming stabilizes at , or by 2100; in all three cases, the AMOC declines for an additional 5–10 years after the temperature rise ceases but does not approach collapse, and partially recovers after about 150 years.
Many researchers have said collapse is only avoided due to biases that persist across the large-scale models. While models have improved over time, the sixth and current generation, CMIP6, retains some inaccuracies. On average, those models simulate much greater AMOC weakening in response to greenhouse warming than the previous generation; when four CMIP6 models simulated the AMOC under the SSP3-7.0 scenario, in which carbon dioxide levels more than double from 2015 values by 2100, from around 400 parts per million (ppm) to over 850 ppm, they found it declined by over 50% by 2100. The CMIP6 models are not yet capable of simulating North Atlantic Deep Water (NADW) without errors in its depth, area or both, reducing confidence in their projections.
To address these problems, some scientists experimented with bias correction. In another idealized doubling experiment, the AMOC collapsed after 300 years when bias correction was applied to the model. One 2016 experiment combined projections from eight then-state-of-the-art CMIP5 climate models with the improved Greenland ice-sheet melt estimates. It found by 2090–2100, the AMOC would weaken by around 18% (3%–34%) under the intermediate Representative Concentration Pathway 4.5, and by 37% (15%–65%) under the very high Representative Concentration Pathway 8.5, in which greenhouse gas emissions increase continuously. When the two scenarios were extended past 2100, the AMOC stabilized under RCP 4.5 but continued to decline under RCP 8.5, leading to an average decline of 74% by 2290–2300 and a 44% likelihood of a complete collapse.
In 2020, another team of researchers simulated RCP 4.5 and RCP 8.5 between 2005 and 2250 in a Community Earth System Model that was integrated with an advanced ocean physics module. Due to the module, the AMOC was subjected to four to ten times more freshwater when compared to the standard run. For RCP 4.5 it simulated results very similar to those of the 2016 study, while under RCP 8.5 the circulation declines by two-thirds soon after 2100 but does not collapse beyond that level.
In 2023, a statistical analysis of output from multiple intermediate-complexity models suggested an AMOC collapse would most likely happen around 2057 with 95% confidence of a collapse between 2025 and 2095. This study received a lot of attention and criticism because intermediate-complexity models are considered less reliable in general and may confuse a major slowing of the circulation with its complete collapse. The study relied on proxy temperature data from the Northern Subpolar Gyre region, which other scientists do not consider representative of the entire circulation, believing it may be subject to a separate tipping point. Some scientists have described this research as "worrisome" and noted it can provide a "valuable contribution" once better observational data is available but there was widespread agreement among experts the paper's proxy record was "insufficient". Some experts said the study used old observational data from five ship surveys that "has long been discredited" by the lack of major weakening seen in direct observations since 2004, "including in the reference they cite for it".
In November 2024, Nature Geoscience published a study which tried to resolve the discrepancy between observations showing a slowdown of the circulation and climate models showing a stable circulation. The scientists used new methods and models, with which the observations and the models corresponded to each other much better. The study found a slowdown of 0.46 sverdrups per decade since 1950. In the future, a 2 °C temperature rise would weaken the AMOC by 33% in comparison to a state without climate change, which could be reached over the coming decade. The result would be large changes in climate and ecosystems. The study also found that AMOC weakening raises temperature and salinity in the South Atlantic Ocean through the propagation of Kelvin and Rossby waves.
Major review studies
Large review papers and reports are capable of evaluating model output, direct observations and historical reconstructions to make expert judgements beyond what models alone can show. Around 2001, the IPCC Third Assessment Report projected with high confidence that the AMOC thermohaline circulation would weaken rather than stop and that warming effects would outweigh cooling, even over Europe. When the IPCC Fifth Assessment Report was published in 2014, a rapid transition of the AMOC was considered "very unlikely" and this assessment was offered at a high level of confidence.
In 2021, the IPCC Sixth Assessment Report again said the AMOC is "very likely" to decline within the 21st century and that there was a "high confidence" changes to it would be reversible within centuries if warming was reversed. Unlike the Fifth Assessment Report, it had only "medium confidence" rather than "high confidence" in the AMOC avoiding a collapse before the end of the 21st century. This reduction in confidence was likely influenced by several review studies that draw attention to the circulation stability bias within general circulation models, and simplified ocean-modelling studies suggesting the AMOC may be more vulnerable to abrupt change than larger-scale models suggest.
The synthesis report of the IPCC Sixth Assessment Report summarized the scientific consensus as follows: "The Atlantic Meridional Overturning Circulation is very likely to weaken over the 21st century for all considered scenarios (high confidence), however an abrupt collapse is not expected before 2100 (medium confidence). If such a low probability event were to occur, it would very likely cause abrupt shifts in regional weather patterns and water cycle, such as a southward shift in the tropical rain belt, and large impacts on ecosystems and human activities."
In 2022, an extensive assessment of all potential climate tipping points identified 16 plausible climate tipping points, including a collapse of the AMOC. It said a collapse would most likely be triggered by of global warming but that there is enough uncertainty to suggest it could be triggered at warming levels of between and . The assessment estimates that once AMOC collapse is triggered, it would unfold over 15 to 300 years, most likely around 50 years. The assessment also treated the collapse of the Northern Subpolar Gyre as a separate tipping point that could tip at between degrees and , although this is only simulated by a fraction of climate models. The most likely tipping point for the collapse of the Northern Subpolar Gyre is and, once triggered, the collapse of the gyre would unfold over 5 to 50 years, most likely around 10 years. The loss of this convection is estimated to lower the global temperature by while the average temperature in Europe would decrease by around . There would also be substantial effects on regional precipitation levels.
In October 2024, 44 climate scientists published an open letter to the Nordic Council of Ministers claiming that, according to scientific studies from the past few years, the risk of AMOC collapse has been greatly underestimated, that it could occur in the next few decades, and that some changes are already happening. It would have devastating and irreversible impacts especially for the Nordic countries, but also for other parts of the world. The letter says that tipping points like this one can be passed already at 1.5–2 degrees of warming. They called on the Nordic countries to ensure the implementation of the Paris Agreement to prevent it. Others disagree.
The "State of the cryosphere" report, dedicates significant space to AMOC, saying it may be enroute to collapse because of ice melt and water warming. Impacts will include cooling of Northern Europe faster than 3°C per decade, "with no realistic means of adaptation". At the same time, the Antarctic Circumpolar Current (ACC) is also slowing down and the Weddell Sea Bottom Water is losing volume, what can impact global ocean circulation and climate. UNESCO mentions that the report in the first time "notes a growing scientific consensus that melting Greenland and Antarctic ice sheets, among other factors, may be slowing important ocean currents at both poles, with potentially dire consequences for a much colder northern Europe and greater sea-level rise along the U.S. East Coast."
Effects of AMOC slowdown
There is no consensus on whether a consistent slowing of the AMOC circulation has already occurred, but there is little doubt it will occur in the event of continued climate change. According to the IPCC, the most likely effects of future AMOC decline are reduced precipitation in mid-latitudes, changing patterns of strong precipitation in the tropics and Europe, and strengthening storms that follow the North Atlantic track. In 2020, research found a weakened AMOC would slow the decline in Arctic sea ice and result in atmospheric trends similar to those that likely occurred during the Younger Dryas, such as a southward displacement of the Intertropical Convergence Zone. Changes in precipitation under high-emissions scenarios would be far larger.
A decline in the AMOC would be accompanied by an acceleration of sea level rise along the U.S. East Coast; at least one such event has been connected to a temporary slowing of the AMOC. This effect would be caused by increased warming and thermal expansion of coastal waters, which would transfer less of their heat toward Europe; it is one of the reasons sea level rise along the U.S. East Coast is estimated to be three-to-four times higher than the global average.
Some scientists believe a partial slowing of the AMOC would result in limited cooling of around in Europe. Other regions would be affected differently; according to 2022 research, 20th-century winter-weather extremes in Siberia were milder when the AMOC was weakened. According to one assessment, a slowing of the AMOC is one of the few climate tipping points that are likely to reduce the social cost of carbon, a common measure of the economic impacts of climate change (a change of −1.4%), rather than increase it, because Europe represents a larger fraction of global GDP than the regions that will be negatively affected by the slowing. This study's methods have been said to underestimate climate impacts in general. According to some research, the dominant effect of an AMOC slowdown would be a reduction in oceanic heat uptake, leading to increased global warming, but this is a minority opinion.
A 2021 study said other well-known tipping points, such as the Greenland ice sheet, the West Antarctic Ice Sheet and the Amazon rainforest would all be connected to the AMOC. According to this study, changes to the AMOC alone are unlikely to trigger tipping elsewhere but an AMOC slowdown would provide a connection between these elements and reduce the global-warming threshold beyond which any of those four elements – including the AMOC itself – could be expected to tip, rather than the thresholds that have been established from studying those elements in isolation. This connection could cause a cascade of tipping over several centuries.
Effects of an AMOC shutdown
Cooling
A complete collapse of the AMOC would be largely irreversible, and recovery would likely take thousands of years. A shutdown of the AMOC is expected to trigger substantial cooling in Europe, particularly in Britain and Ireland, France and the Nordic countries. In 2002, research compared AMOC shutdown to Dansgaard–Oeschger events – abrupt temperature shifts that occurred during the Last Glacial Period. According to that paper, local cooling of up to would occur in Europe. In 2022, a major review of tipping points concluded an AMOC collapse would lower global temperatures by around while regional temperatures in Europe would fall by between and .
A 2020 study assessed the effects of an AMOC collapse on farming and food production in Great Britain. It found within Great Britain an average temperature drop of after the effect of warming was subtracted from collapse-induced cooling. A collapse of the AMOC would also lower rainfall during the growing season by around , which would in turn reduce the area of land suitable for arable farming from 32% to 7%. The net value of British farming would decline by around £346 million per year – over 10% of its value in 2020.
In 2024, one study that modeled the effect of an AMOC collapse on a pre-industrial world, predicted a more severe cooling in Europe. It predicted the average sea surface temperatures in northwest Europe falling and the average February temperatures on land falling between and within a century in northern and western Europe. This change would result in sea ice reaching into the territorial waters of the British Isles and Denmark during winter while Antarctic sea ice would diminish. These findings do not include the counteracting warming from climate change, and the modeling approach used by the paper is controversial.
A 2015 study led by James Hansen found a shutdown or substantial slowing of the AMOC will intensify severe weather because it increases baroclinicity and accelerates northeasterly winds up to 10–20% throughout the mid-latitude troposphere. This could boost winter and near-winter cyclonic "superstorms" that are associated with near-hurricane-force winds and intense snowfall. This paper has also been controversial.
Other
Several studies have investigated the effect of a collapse of the AMOC on the El Niño–Southern Oscillation (ENSO); results have ranged from no overall impact to an increase in ENSO strength, and a shift to dominant La Niña conditions with an approximately 95% reduction in extreme El Niño events but more-frequent extreme rainfall in eastern Australia and intensified droughts and wildfire seasons in the southwestern U.S.
A 2021 study used a simplified modeling approach to evaluate the effects of an AMOC collapse on the Amazon rainforest, and on its hypothesized dieback and transition to a savanna state in some climate-change scenarios. The study found an AMOC collapse would increase rainfall in the southern Amazon due to the shift of the Intertropical Convergence Zone, and this would help to counter the dieback and potentially stabilize the southern part of the rainforest. A 2024 study found the seasonal cycle of the Amazon could reverse, with dry seasons becoming wet and vice versa.
A 2005 paper said severe disruption of the AMOC would collapse North Atlantic plankton counts to less than half of their normal biomass due to increased stratification and the large decline in nutrient exchange among ocean layers. A 2015 study simulated global ocean changes under AMOC slowing and collapse scenarios, and found these events would greatly decrease dissolved oxygen content in the North Atlantic, although dissolved oxygen would slightly increase globally due to greater increases across other oceans.
See also
8.2-kiloyear event
Climate security
Loop Current
North Atlantic Deep Water
Pacific decadal oscillation
Paleosalinity
Sverdrup balance
West Greenland Current
References
Climate change and the environment
Effects of climate change
Oceanography
Natural hazards
Future problems
Currents of the Atlantic Ocean | Atlantic meridional overturning circulation | [
"Physics",
"Environmental_science"
] | 9,034 | [
"Physical phenomena",
"Earth phenomena",
"Applied and interdisciplinary physics",
"Hydrology",
"Oceanography",
"Natural hazards"
] |
5,097,569 | https://en.wikipedia.org/wiki/Contact%20process%20%28mathematics%29 | The contact process is a stochastic process used to model population growth on the set of sites of a graph in which occupied sites become vacant at a constant rate, while vacant sites become occupied at a rate proportional to the number of occupied neighboring sites. Therefore, if we denote by λ the proportionality constant, each site remains occupied for a random time period which is exponentially distributed with parameter 1 and places descendants at every vacant neighboring site at times of events of a Poisson process with parameter λ during this period. All processes are independent of one another and of the random period of time sites remain occupied.
The contact process can also be interpreted as a model for the spread of an infection by
thinking of particles as a bacterium spreading over individuals positioned at the sites of the graph; occupied sites correspond to infected individuals, whereas vacant sites correspond to healthy ones.
The main quantity of interest is the number of particles in the process in the first interpretation, which corresponds to the number of infected sites in the second one. The process is said to survive whenever the number of particles is positive for all times, which corresponds to the case that there are always infected individuals in the second interpretation. For any infinite graph there exists a positive and finite critical value of λ so that if λ is above this value then survival of the process starting from a finite number of particles occurs with positive probability, while if λ is below it then extinction is almost certain. Note that, by the infinite monkey theorem, survival of the process is equivalent to the number of particles tending to infinity as time grows, whereas extinction is equivalent to the number of particles eventually reaching zero; it is therefore natural to ask about the rate at which the number of particles grows when the process survives.
Mathematical definition
If the state of the process at time t is ξt, then a site x in G is occupied, say by a particle, if ξt(x) = 1 and vacant if ξt(x) = 0.
The contact process is a continuous-time Markov process whose state space consists of the assignments of 0 or 1 to the sites of G, where G is a finite or countable graph, usually the d-dimensional integer lattice, and is a special case of an interacting particle system.
More specifically, the dynamics of the basic contact process is defined by the following transition rates: at site x,
1 → 0 at rate 1,
0 → 1 at rate λ times the number of occupied neighbors of x (that is, λ Σ ξt(y), with the sum over all neighbors y of x in G).
This means that each site waits an exponential time with the corresponding rate, and then flips (so 0 becomes 1 and vice versa).
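To make these rates concrete, the following short Python sketch (purely illustrative and not from the literature on the process; the lattice size, rate λ and run time are arbitrary choices) simulates the contact process on a finite segment of the integers with a Gillespie-style algorithm:

```python
import random

def contact_process(n_sites=200, lam=2.0, t_max=50.0, seed=0):
    """Gillespie simulation of the contact process on {0, ..., n_sites-1}.

    Occupied sites become vacant at rate 1; vacant sites become occupied
    at rate lam times the number of occupied neighbours.
    Returns a list of (time, number_of_occupied_sites) samples.
    """
    rng = random.Random(seed)
    occupied = {n_sites // 2}          # start from a single particle
    t, history = 0.0, [(0.0, 1)]
    while occupied and t < t_max:
        # Build the list of possible events and their rates.
        events = []                     # (site, new_state, rate)
        for s in occupied:
            events.append((s, 0, 1.0))  # death at rate 1
        frontier = {s + d for s in occupied for d in (-1, 1)} - occupied
        for s in frontier:
            if 0 <= s < n_sites:
                k = sum(1 for d in (-1, 1) if s + d in occupied)
                events.append((s, 1, lam * k))   # birth at rate lam * k
        total = sum(r for _, _, r in events)
        t += rng.expovariate(total)     # waiting time to the next event
        # Choose one event with probability proportional to its rate.
        u, acc = rng.uniform(0, total), 0.0
        for site, state, rate in events:
            acc += rate
            if u <= acc:
                (occupied.add if state else occupied.discard)(site)
                break
        history.append((t, len(occupied)))
    return history

if __name__ == "__main__":
    hist = contact_process()
    print("final time %.2f, particles %d" % hist[-1])
```

Running the sketch with λ well above the one-dimensional critical value (roughly 1.65) typically shows the particle count growing until it fills the segment, while small λ leads to quick extinction.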
Connection to percolation
The contact process is a stochastic process that is closely connected to percolation theory. Ted Harris (1974) noted that the contact process on the integer lattice, when infections and recoveries can occur only at discrete times, corresponds to one-step-at-a-time bond percolation on the graph obtained by orienting each edge of the lattice in the direction of increasing coordinate-value.
The law of large numbers on the integers
A law of large numbers for the number of particles in the process on the integers informally means that, for large times, the number of particles is approximately proportional to the elapsed time, with some positive constant of proportionality. Harris (1974) proved that, if the process survives, then the number of particles grows at least and at most linearly in time. A weak law of large numbers (that the process converges in probability) was shown by Durrett (1980). A few years later, Durrett and Griffeath (1983) improved this to a strong law of large numbers, giving almost sure convergence of the process.
Die out at criticality
Contact processes on all integer lattices die out almost surely at the critical value.
Durrett's conjecture and the central limit theorem
In survey papers and lecture notes during the 1980s and early 1990s, Durrett conjectured a central limit theorem for the Harris contact process: if the process survives, then for all large times the number of particles equals its typical linear-in-time value plus a lower-order term multiplied by a (random) error distributed according to a standard Gaussian distribution.
Durrett's conjecture turned out to be correct, though for a different value of the constant, as proved in 2018.
References
Further reading
Thomas M. Liggett, "Stochastic Interacting Systems: Contact, Voter and Exclusion Processes", Springer-Verlag, 1999.
Stochastic processes
Lattice models | Contact process (mathematics) | [
"Physics",
"Materials_science"
] | 804 | [
"Statistical mechanics",
"Condensed matter physics",
"Lattice models",
"Computational physics"
] |
5,103,200 | https://en.wikipedia.org/wiki/Darwin%20%28unit%29 | The darwin (d) is a unit of evolutionary change, defined by J. B. S. Haldane in 1949. One darwin is defined to be an e-fold (about 2.718) change in a trait over one million years. Haldane named the unit after Charles Darwin.
Equation
The equation for calculating evolutionary change in darwins (d) is:
d = (ln x2 − ln x1) / Δt,
where x1 and x2 are the initial and final values of the trait and Δt is the change in time in millions of years. An alternative form of this equation is:
d = ln(x2 / x1) / Δt.
Since the difference between two natural logarithms is dimensionless (it equals the logarithm of the ratio of the two values), the trait may be measured in any unit. Inexplicably, Haldane defined the millidarwin as 10−9 darwins, despite the fact that the prefix milli- usually denotes a factor of one thousandth (10−3).
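As a worked illustration of the definition (a small Python sketch; the trait values below are invented):

```python
import math

def darwins(x1, x2, delta_t_myr):
    """Rate of change in darwins: difference of natural logs per million years."""
    return (math.log(x2) - math.log(x1)) / delta_t_myr

# Hypothetical example: a tooth dimension growing from 10 mm to 12 mm
# over 2 million years.
print(darwins(10.0, 12.0, 2.0))   # about 0.09 darwins
```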
Application
The measure is most useful in palaeontology, where macroevolutionary changes in the dimensions of fossils can be compared. Where this is used it is an indirect measure as it relies on phenotypic rather than genotypic data. Several data points are required to overcome natural variation within a population. The darwin only measures the evolution of a particular trait rather than a lineage; different traits may evolve at different rates within a lineage. The evolution of traits can however be used to infer as a proxy the evolution of lineages.
See also
Evolutionary biology
Macroevolution
Microevolution
References
Evolutionary biology
Rate of evolution
Units of level | Darwin (unit) | [
"Physics",
"Mathematics",
"Biology"
] | 309 | [
"Evolutionary biology",
"Physical quantities",
"Units of level",
"Quantity",
"Logarithmic scales of measurement",
"Units of measurement"
] |
5,105,731 | https://en.wikipedia.org/wiki/Mechanism%20of%20action | In pharmacology, the term mechanism of action (MOA) refers to the specific biochemical interaction through which a drug substance produces its pharmacological effect. A mechanism of action usually includes mention of the specific molecular targets to which the drug binds, such as an enzyme or receptor. Receptor sites have specific affinities for drugs based on the chemical structure of the drug, as well as the specific action that occurs there.
Drugs that do not bind to receptors produce their corresponding therapeutic effect by simply interacting with chemical or physical properties in the body. Common examples of drugs that work in this way are antacids and laxatives.
In contrast, a mode of action (MoA) describes functional or anatomical changes, at the cellular level, resulting from the exposure of a living organism to a substance.
Importance
Elucidating the mechanism of action of novel drugs and medications is important for several reasons:
In the case of anti-infective drug development, the information permits anticipation of problems relating to clinical safety. Drugs disrupting the cytoplasmic membrane or electron transport chain, for example, are more likely to cause toxicity problems than those targeting components of the cell wall (peptidoglycan or β-glucans) or 70S ribosome, structures which are absent in human cells.
By knowing the interaction between a certain site of a drug and a receptor, other drugs can be formulated in a way that replicates this interaction, thus producing the same therapeutic effects. Indeed, this method is used to create new drugs.
It can help identify which patients are most likely to respond to treatment. Because the breast cancer medication trastuzumab is known to target protein HER2, for example, tumors can be screened for the presence of this molecule to determine whether or not the patient will benefit from trastuzumab therapy.
It can enable better dosing because the drug's effects on the target pathway can be monitored in the patient. Statin dosage, for example, is usually determined by measuring the patient's blood cholesterol levels.
It allows drugs to be combined in such a way that the likelihood of drug resistance emerging is reduced. By knowing what cellular structure an anti-infective or anticancer drug acts upon, it is possible to administer a cocktail that inhibits multiple targets simultaneously, thereby reducing the risk that a single mutation in microbial or tumor DNA will lead to drug resistance and treatment failure.
It may allow other indications for the drug to be identified. Discovery that sildenafil inhibits phosphodiesterase-5 (PDE-5) proteins, for example, enabled this drug to be repurposed for pulmonary arterial hypertension treatment, since PDE-5 is expressed in pulmonary hypertensive lungs.
Determination
Microscopy-based methods
Bioactive compounds induce phenotypic changes in target cells, changes that are observable by microscopy and that can give insight into the mechanism of action of the compound.
With antibacterial agents, the conversion of target cells to spheroplasts can be an indication that peptidoglycan synthesis is being inhibited, and filamentation of target cells can be an indication that PBP3, FtsZ, or DNA synthesis is being inhibited. Other antibacterial agent-induced changes include ovoid cell formation, pseudomulticellular forms, localized swelling, bulge formation, blebbing, and peptidoglycan thickening. In the case of anticancer agents, bleb formation can be an indication that the compound is disrupting the plasma membrane.
A current limitation of this approach is the time required to manually generate and interpret data, but advances in automated microscopy and image analysis software may help resolve this.
Direct biochemical methods
Direct biochemical methods include methods in which a protein or a small molecule, such as a drug candidate, is labeled and is traced throughout the body. This proves to be the most direct approach to finding the target proteins that bind small molecules of interest, such as a basic representation of a drug outline, in order to identify the pharmacophore of the drug. Due to the physical interactions between the labeled molecule and a protein, biochemical methods can be used to determine the toxicity, efficacy, and mechanism of action of the drug.
Computation inference methods
Typically, computation inference methods are primarily used to predict protein targets for small molecule drugs based on computer-based pattern recognition. However, this method can also be used for finding new targets for existing or newly developed drugs. By identifying the pharmacophore of the drug molecule, the profiling method of pattern recognition can be carried out and a new target identified. This provides insight into a possible mechanism of action, since it is known what certain functional components of the drug are responsible for when interacting with a certain area on a protein, thus leading to a therapeutic effect.
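As a toy illustration of this kind of pattern recognition (a purely hypothetical Python sketch: the fingerprints, compound names and target labels are invented, and real pipelines use dedicated cheminformatics toolkits and far richer descriptors), candidate targets can be ranked by fingerprint similarity to compounds with known targets:

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Invented fingerprints: each compound is a set of "on" bit positions.
known = {
    "compound_A": ({1, 4, 7, 9, 15}, "kinase_X"),
    "compound_B": ({2, 3, 7, 8, 20}, "protease_Y"),
    "compound_C": ({1, 4, 9, 16, 21}, "kinase_X"),
}
query = {1, 4, 7, 9, 21}   # fingerprint of the new compound

ranked = sorted(
    ((tanimoto(query, fp), target, name) for name, (fp, target) in known.items()),
    reverse=True,
)
for score, target, name in ranked:
    print(f"{name}: similarity {score:.2f} -> predicted target {target}")
```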
Omics based methods
Omics based methods use omics technologies, such as chemoproteomics, reverse genetics and genomics, transcriptomics, and proteomics, to identify the potential targets of the compound of interest. Reverse genetics and genomics approaches, for instance, use genetic perturbation (e.g. CRISPR-Cas9 or siRNA) in combination with the compound to identify genes whose knockdown or knockout abolishes the pharmacological effect of the compound. On the other hand, transcriptomics and proteomics profiles of the compound can be used to compare with profiles of compounds with known targets. Thanks to computation inference, it is then possible to make hypotheses about the mechanism of action of the compound, which can subsequently be tested.
Drugs with known MOA
There are many drugs in which the mechanism of action is known. One example is aspirin.
Aspirin
The mechanism of action of aspirin involves irreversible inhibition of the enzyme cyclooxygenase; therefore suppressing the production of prostaglandins and thromboxanes, thus reducing pain and inflammation. This mechanism of action is specific to aspirin and is not constant for all nonsteroidal anti-inflammatory drugs (NSAIDs). Rather, aspirin is the only NSAID that irreversibly inhibits COX-1.
Drugs with unknown MOA
Some drug mechanisms of action are still unknown. However, even though the mechanism of action of a certain drug is unknown, the drug still functions; it is just unknown or unclear how the drug interacts with receptors and produces its therapeutic effect.
Mode of action
In some literature articles, the terms "mechanism of action" and "mode of action" are used interchangeably, typically referring to the way in which the drug interacts and produces a medical effect. However, in actuality, a mode of action describes functional or anatomical changes, at the cellular level, resulting from the exposure of a living organism to a substance. This differs from a mechanism of action since it is a more specific term that focuses on the interaction between the drug itself and an enzyme or receptor and its particular form of interaction, whether through inhibition, activation, agonism, or antagonism. Furthermore, the term "mechanism of action" is the main term that is primarily used in pharmacology, whereas "mode of action" will more often appear in the field of microbiology or certain aspects of biology.
See also
Mode of action (MoA)
Pharmacodynamics
Chemoproteomics
References
Pharmacology
Pharmacodynamics
Medicinal chemistry | Mechanism of action | [
"Chemistry",
"Biology"
] | 1,539 | [
"Pharmacology",
"Pharmacodynamics",
"nan",
"Medicinal chemistry",
"Biochemistry"
] |
5,106,060 | https://en.wikipedia.org/wiki/Actinides%20in%20the%20environment | The actinide series is a group of chemical elements with atomic numbers ranging from 89 to 103, including notable elements such as uranium and plutonium. The nuclides (or isotopes) thorium-232, uranium-235, and uranium-238 occur primordially, while trace quantities of actinium, protactinium, neptunium, and plutonium exist as a result of radioactive decay and (in the case of neptunium and plutonium) neutron capture of uranium. These elements are far more radioactive than the naturally occurring thorium and uranium, and thus have much shorter half-lives. Elements with atomic numbers greater than 94 do not exist naturally on Earth, and must be produced in a nuclear reactor. However, certain isotopes of elements up to californium (atomic number 98) still have practical applications which take advantage of their radioactive properties.
While all actinides are radioactive, actinides and actinide compounds comprise a significant portion of the Earth's crust. There is enough thorium and uranium to be commercially mined, with thorium having a concentration in the Earth's crust about four times that of uranium. The global production of uranium in 2021 was about 48,000 tonnes, with Kazakhstan the leading supplier. Thorium is extracted as a byproduct of titanium, zirconium, tin, and rare-earth production from monazite, in which thorium is often treated as a waste product. Despite its greater abundance in the Earth's crust, the low demand for thorium compared with the metals extracted alongside it has led to a global surplus.
The primary hazard associated with actinides is their radioactivity, though they may also cause heavy metal poisoning if absorbed into the bloodstream. Generally, ingested insoluble actinide compounds, such as uranium dioxide and mixed oxide (MOX) fuel, will pass through the digestive tract with little effect since they have long half-lives, and cannot dissolve and be absorbed into the bloodstream. Inhaled actinide compounds, however, will be more damaging as they remain in the lungs and irradiate lung tissue.
Actinium
Actinium can be found naturally in traces in uranium ore as 227Ac, an α and β emitter with half-life 21.773 years. Uranium ore contains about 0.2 mg of actinium per ton of uranium. It is more commonly made in milligram amounts by neutron irradiation of 226Ra in a nuclear reactor. Natural actinium almost exclusively consists of one isotope, 227Ac, with only minute traces of other shorter-lived isotopes (225Ac and 228Ac) occurring in other decay chains.
Thorium
In India, a large amount of thorium ore can be found in the form of monazite in placer deposits of the Western and Eastern coastal dune sands, particularly in the Tamil Nadu coastal areas. The residents of this area are exposed to a naturally occurring radiation dose ten times higher than the worldwide average.
Occurrence
Thorium is found at low levels in most rocks and soils, where it is about three times more abundant than uranium and about as abundant as lead. On average, soil commonly contains around 6 parts per million (ppm) thorium. Thorium occurs in several minerals; the most common is the rare earth-thorium-phosphate mineral monazite, which contains up to 12% thorium oxide. Several countries have substantial deposits. 232Th decays very slowly (its half-life is about three times the age of the Earth). Other isotopes of thorium occur in the thorium and uranium decay chains. These are shorter-lived and hence much more radioactive than 232Th, though on a mass basis they are negligible.
Effects in humans
Thorium has been linked to liver cancer. In the past, thoria (thorium dioxide) was used as a contrast agent for medical X-ray radiography but its use has been discontinued. It was sold under the name Thorotrast.
Protactinium
Protactinium-231 occurs naturally in uranium ores such as pitchblende, to the extent of 3 ppm in some ores. Protactinium is naturally present in soil, rock, surface water, groundwater, plants and animals in very low concentrations (on the order of 1 ppt, or 0.1 picocuries per gram (pCi/g)).
Uranium
Uranium is a natural metal which is widely found. It is present in almost all soils and it is more plentiful than antimony, beryllium, cadmium, gold, mercury, silver, or tungsten, and is about as abundant as arsenic or molybdenum. Significant concentrations of uranium occur in some substances such as phosphate rock deposits, and minerals such as lignite, and monazite sands in uranium-rich ores (it is recovered commercially from these sources).
Seawater contains about 3.3 parts per billion of uranium by weight as uranium (VI) forms soluble carbonate complexes. Extraction of uranium from seawater has been considered as a means of obtaining the element. Because of the very low specific activity of uranium the chemical effects of it upon living things can often outweigh the effects of its radioactivity. Additional uranium has been added to the environment in some locations, from the nuclear fuel cycle and the use of depleted uranium in munitions.
Neptunium
Like plutonium, neptunium has a high affinity for soil. However, it is relatively mobile over the long term, and diffusion of neptunium-237 in groundwater is a major issue in designing a deep geological repository for permanent storage of spent nuclear fuel. 237Np has a half-life of 2.144 million years and is therefore a long-term problem; but its half-life is still much shorter than those of uranium-238, uranium-235, or uranium-236, and 237Np therefore has a higher specific activity than those nuclides. Its main practical use is the production of plutonium-238 by neutron irradiation.
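The comparison of specific activities can be checked with the standard relation A = ln(2)·NA/(T½·M); the short Python sketch below uses rounded half-lives and is purely illustrative:

```python
import math

AVOGADRO = 6.022e23
YEAR_S = 3.156e7   # seconds per year

def specific_activity(half_life_years, molar_mass_g):
    """Specific activity in becquerels per gram: ln(2) * N_A / (T_half * M)."""
    return math.log(2) * AVOGADRO / (half_life_years * YEAR_S * molar_mass_g)

print("Np-237: %.2e Bq/g" % specific_activity(2.144e6, 237))   # ~2.6e7 Bq/g
print("U-238 : %.2e Bq/g" % specific_activity(4.468e9, 238))   # ~1.2e4 Bq/g
```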
Plutonium
Sources
Plutonium in the environment has several sources. These include:
Atomic batteries
In space
In pacemakers
Bomb detonations
Bomb safety trials
Nuclear crime
Nuclear fuel cycle
Nuclear power plants
Environmental chemistry
Plutonium, like other actinides, readily forms a plutonium dioxide (plutonyl) core (PuO2). In the environment, this plutonyl core readily complexes with carbonate as well as other oxygen moieties (OH−, NO2−, NO3−, and SO42−) to form charged complexes which can be readily mobile with low affinities to soil.
PuO2CO32−
PuO2(CO3)24−
PuO2(CO3)36−
PuO2 formed from neutralizing highly acidic nitric acid solutions tends to form polymeric PuO2 which is resistant to complexation. Plutonium also readily shifts valences between the +3, +4, +5 and +6 states. It is common for some fraction of plutonium in solution to exist in all of these states in equilibrium.
Plutonium is known to bind to soil particles very strongly; see above for an X-ray spectroscopic study of plutonium in soil and concrete. While caesium has very different chemistry from the actinides, it is well known that both caesium and many actinides bind strongly to the minerals in soil. It has been possible to use 134Cs-labeled soil to study the migration of Pu and Cs in soils. It has been shown that colloidal transport processes control the migration of Cs (and will control the migration of Pu) in the soil at the Waste Isolation Pilot Plant.
Americium
Americium often enters landfills from discarded smoke detectors. The rules for the disposal of smoke detectors are very relaxed in most municipalities. For instance, in the UK it is permissible to dispose of a smoke detector containing americium by placing it in the dustbin with normal household rubbish, but each dustbin worth of rubbish is limited to only containing one smoke detector. The manufacture of products containing americium (such as smoke detectors) as well as nuclear reactors and explosions may also release the americium into the environment.
In 1999, a truck transporting 900 smoke detectors in France was reported to have caught fire; it is claimed that this led to a release of americium into the environment. In the U.S., the "Radioactive Boy Scout" David Hahn was able to buy thousands of smoke detectors at remainder prices and concentrate the americium from them.
There have been cases of humans being exposed to americium. The worst case was that of Harold McCluskey, who was exposed to an extremely high dose of americium-241 after an accident involving a glove box. He was subsequently treated with chelation therapy. It is likely that the medical care which he was given saved his life; despite similar biodistribution and toxicity to plutonium, the two radioactive elements have different solution-state chemistries. Americium is stable in the +3 oxidation state, while the +4 oxidation state of plutonium can form in the human body.
The most common isotope americium-241 decays (half-life 432 years) to neptunium-237 which has a much longer half-life, so in the long term, the issues discussed above for neptunium apply.
Americium released into the environment tends to remain in soil and water at relatively shallow depths and may be taken up by animals and plants during growth; shellfish such as shrimp take up americium-241 in their shells, and parts of grain plants can become contaminated by exposure. In a 2021 paper, J.D. Chaplin et al. reported advances in the diffusive gradients in thin films technique, which have provided a method to measure labile bioavailable americium in soils, as well as in freshwater and seawater.
Curium
Atmospheric curium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 4,000 times higher concentration of curium in the sandy soil particles than in water present in the soil pores. An even higher ratio of about 18,000 was measured in loam soils.
Californium
Californium is fairly insoluble in water, but it adheres well to ordinary soil, and concentrations of it in the soil can be 500 times higher than in the water surrounding the soil particles.
Notes
See also
Uranium in the environment
Radium in the environment
Background radiation
Radioecology
References
General references
Further reading
Hala, Jiri, and James D. Navratil. Radioactivity, Ionizing Radiation and Nuclear Energy. Konvoj: Brno, Czech Republic, 2003. .
External links
"Why do mechanisms matter in radioactive waste management?" – Royal Society for Chemistry
"Spectroscopies for Environmental Studies of Actinide Species" – Federation of American Scientists
Nuclear materials
Nuclear technology
Nuclear chemistry
Nuclear physics
Inorganic chemistry
Radiobiology
Radioactive contamination
Soil contamination | Actinides in the environment | [
"Physics",
"Chemistry",
"Technology",
"Biology",
"Environmental_science"
] | 2,277 | [
"Radioactive contamination",
"Nuclear chemistry",
"Radiobiology",
"Environmental chemistry",
"Nuclear technology",
"Materials",
"Nuclear materials",
"Soil contamination",
"nan",
"Nuclear physics",
"Radioactivity",
"Environmental impact of nuclear power",
"Matter"
] |
22,743,077 | https://en.wikipedia.org/wiki/Immunoglobulin%20Y | Immunoglobulin Y (abbreviated as IgY) is a type of immunoglobulin which is the major antibody in bird, reptile, and lungfish blood. It is also found in high concentrations in chicken egg yolk. As with the other immunoglobulins, IgY is a class of proteins which are formed by the immune system in reaction to certain foreign substances, and specifically recognize them.
IgY is often mislabelled as Immunoglobulin G (IgG) in older literature, and sometimes even in commercial product catalogues, due to its functional similarity to mammalian IgG and Immunoglobulin E (IgE). However, this older nomenclature is obsolete, since IgY differs both structurally and functionally from mammalian IgG, and does not cross-react with antibodies raised against mammalian IgG.
Since chickens can lay eggs almost every day, and the yolk of an immunised hen's egg contains a high concentration of IgY, chickens are gradually becoming popular as a source of customised antibodies for research. (Usually, mammals such as rabbits or goats are injected with the antigen of interest by the researcher or a contract laboratory.)
Ducks produce a truncated form of IgY which is missing part of the Fc region. As a result, it cannot bind complement or be picked up by macrophages.
IgY has also been analyzed in the Chinese soft-shelled turtle, Pelodiscus sinensis.
Characteristics
In chickens, immunoglobulin Y is the functional equivalent to Immunoglobulin G (IgG). Like IgG, it is composed of two light and two heavy chains. Structurally, these two types of immunoglobulin differ primarily in the heavy chains, which in IgY have a molecular mass of about 65,100 atomic mass units (amu), and are thus larger than in IgG. The light chains in IgY, with a molar mass of about 18,700 amu, are somewhat smaller than the light chains in IgG. The molar mass of IgY thus amounts to about 167,000 amu. The steric flexibility of the IgY molecule is less than that of IgG.
Functionally, IgY is partially comparable to Immunoglobulin E (IgE), as well as to IgG. However, in contrast to IgG, IgY does not bind to Protein A, to Protein G, or to cellular Fc receptors. Furthermore, IgY does not activate the complement system. The name Immunoglobulin Y was suggested in 1969 by G.A. Leslie and L.W. Clem, after they were able to show differences between the immunoglobulins found in chicken eggs, and immunoglobulin G. Other synonymous names are Chicken IgG, Egg Yolk IgG, and 7S-IgG.
Bioanalytic applications
As compared to mammalian antibodies, IgY offers various advantages for the targeted extraction of antibodies and their application in bioanalysis. Since the antibodies are extracted from the yolks of laid eggs, the method of antibody production is non-invasive. Thus, no blood must be taken from the animals for the extraction of blood serum.
The available quantity of a given antibody is considerably increased through repeated egg laying from the same hen. The cross-reactivity of IgY with proteins from mammals is also markedly less than that of IgG. Furthermore, the immune response against certain antigens in chickens is more strongly expressed than in rabbits or other mammals.
Of the immunoglobulins arising during the immune response, only IgY is found in chicken eggs. Thus, in preparations from chicken eggs, there is no contamination with Immunoglobulin A (IgA) or Immunoglobulin M (IgM). The yield of IgY from a chicken egg is comparable to that of IgG from rabbit serum.
One disadvantage of IgY, as compared to mammalian antibodies, is that the isolation of IgY from egg yolk is more difficult than the isolation of IgG from blood serum. This is due in large part to the fact that IgY cannot be bound with Protein A and Protein G. Thus, it cannot be separated from other components of the assay, for example from other proteins. Additionally, the egg yolk's rich store of lipids and lipoproteins must be removed. Antibody-containing blood serums, on the other hand, can sometimes be directly used in bioanalysis, i.e., without complicated isolation steps.
Utilization in foods
Particularly in Asian countries, IgY has been clinically tested as a food supplement and preservative. For example, yogurt products containing pathogen-specific IgY have been tested for their ability to reduce Helicobacter pylori in the stomach by hindering the attachment of the bacterium to the stomach lining. The IgY used for this purpose is extracted from the eggs of immunized hens. Antibodies against Salmonella and other bacteria, as well as against viruses, are produced in this manner, and employed as a nutritional component for protection against these pathogens. The Food Safety Lab of Ocean University of China has experimented with using IgY specific to the bacteria Shewanella putrefaciens and Pseudomonas fluorescens as a food preservative for fish. The shelf life of fish treated with the IgY was extended from 9 days to 12–15 days, demonstrating significant antimicrobial activity against the specific bacteria.
Anti-Fel d1 egg IgY immunoglobulin has been successfully tested to reduce active Fel d1 in cats' saliva, in order to lower the allergenic potential of treated cats.
Literature
Rüdiger Schade, Irene Behn, Michael Erhard: Chicken Egg Yolk Antibodies, Production and Application. Springer-Verlag, Berlin 2001,
G.A. Leslie, L.W. Clem: Phylogeny of immunoglobulin structure and function. 3. Immunoglobulins of the chicken. In: Journal of Experimental Medicine. 130(6)/1969. Rockefeller University Press, S. 1337-1352,
A. Polson, M.B. von Wechmar, M.H. van Regenmortel: Isolation of viral IgY antibodies from yolks of immunized hens. In: Immunological Communications. 9(5)/1980. Dekker New York, S. 475-493,
A. Polson, M.B. von Wechmar, G. Fazakerley: Antibodies to proteins from yolk of immunized hens. In: Immunological Communications. 9(5)/1980. Dekker New York, S. 495-514,
References
Table comparing mammalian IgG and IgE with avian IgY and duck truncated IgY. Gallus Immunotech, accessed 28 October 2010.
Glycoproteins
Antibodies | Immunoglobulin Y | [
"Chemistry"
] | 1,457 | [
"Glycoproteins",
"Glycobiology"
] |
22,744,244 | https://en.wikipedia.org/wiki/Ron%20Aharoni | Ron Aharoni (born 1952) is an Israeli mathematician, working in finite and infinite combinatorics. Aharoni is a professor at the Technion – Israel Institute of Technology, where he received his Ph.D. in mathematics in 1979. With Nash-Williams and Shelah he generalized Hall's marriage theorem by obtaining the right transfinite conditions for infinite bipartite graphs. He subsequently proved the appropriate versions of the Kőnig theorem and the Menger theorem for infinite graphs (the latter with Eli Berger).
Aharoni is the author of several nonspecialist books; the most successful is Arithmetic for Parents, a book helping parents and elementary school teachers in teaching basic mathematics. He also wrote a book on the connections between mathematics, poetry and beauty, and a book on philosophy, The Cat That Is Not There. His book "Man Detaches Meaning" is on a mechanism common to jokes and poetry. His most recent book is Circularity: A Common Secret to Paradoxes, Scientific Revolutions and Humor, which binds together mathematics, philosophy and the secrets of humor.
Books
1. Arithmetic for Parents, A book for grownups on children's mathematics, Schocken Press 2004
2. Mathematics, poetry and beauty (in Hebrew), Hakibutz Hameuchad 2008.
3. The cat that is not there - a non-philosophical book on philosophy, Magness Press (The Hebrew University Publishing House), 2009.
4. Man detaches meaning - poems, jokes and in between, Hakibutz Hameuchad 2011.
5. Mathematics, Poetry and Beauty (in English), World Scientific Publishing 2014.
6. Arithmetic for Parents (Revised Edition), World Scientific Publishing 2015
7. Circularity: A Common Secret to Paradoxes, Scientific Revolutions and Humor, World Scientific Publishing 2016.
References
External links
Ron Aharoni's home page on Elementary school mathematics
Vicious circles -- confusing, instructive, amusing?
Ron Aharoni: What I learnt in elementary school, Address at the British Mathematical Colloquium, Birmingham, 2003
Ron Aharoni: The Cat That is Not There, Magnes Press, December 2009.
Ron Aharoni: The cat that is not there, a summary
1952 births
Living people
Brandeis University alumni
Combinatorialists
Israeli mathematicians
Israeli Jews
Scientists from Haifa
Technion – Israel Institute of Technology alumni
Academic staff of Technion – Israel Institute of Technology | Ron Aharoni | [
"Mathematics"
] | 504 | [
"Combinatorialists",
"Combinatorics"
] |
22,745,050 | https://en.wikipedia.org/wiki/Winters%27s%20formula | Winters's formula, named after R. W. Winters, is a formula used to evaluate respiratory compensation when analyzing acid-base disorders in the presence of metabolic acidosis. It can be given as:
expected PCO2 = 1.5 × [HCO3−] + 8 ± 2,
where [HCO3−] is given in units of mEq/L and PCO2 will be in units of mmHg.
History
Dr. R. W. Winters was an American physician and a graduate of Yale Medical School. He was a professor of pediatrics at Columbia University College of Physicians and Surgeons. In 1974 he was awarded the Borden Award gold medal by the American Academy of Pediatrics.
Dr. R. W. Winters conducted an experiment in the 1960s on 60 patients with varying degrees of metabolic acidosis. He aimed to empirically determine a mathematical expression representing the effect of respiratory compensation during metabolic acidosis. He measured the blood pH, plasma PCO2, blood base excess, and plasma bicarbonate concentrations. He focused on the relationship between plasma PCO2 and plasma bicarbonate. Winters's formula was derived from a linear regression of this relationship between plasma PCO2 and plasma bicarbonate.
Physiology
There are four primary acid-base derangements that can occur in the human body - metabolic acidosis, metabolic alkalosis, respiratory acidosis, and respiratory alkalosis. These are characterized by a serum pH below 7.4 (acidosis) or above 7.4 (alkalosis), and whether the cause is from a metabolic process or respiratory process. If the body experiences one of these derangements, the body will try to compensate by inducing an opposite process (e.g. induced respiratory alkalosis for a primary metabolic acidosis).
Respiratory compensation is one of three major processes the body uses to react to derangements in acid-base status (above or below pH 7.4). It is slower than the initial bicarbonate buffer system in the blood, but faster than renal compensation. Respiratory compensation usually begins within minutes to hours, but alone will not completely return arterial pH to a normal value (7.4). Winters's formula quantifies the amount of respiratory compensation during metabolic acidosis.
During metabolic acidosis, a decrease in pH stimulates chemoreceptors. Peripheral chemoreceptors are found in the aortic and carotid bodies and respond to changes in the PaCO2, the arterial partial pressure of carbon dioxide. Central chemoreceptors are found in the brainstem and respond primarily to decreased pH in the cerebrospinal fluid. In response to decreased pH, these chemoreceptors lead to an increase in minute ventilation and increased elimination of carbon dioxide. A decrease in carbon dioxide lowers PaCO2 and pushes arterial pH towards normal.
Clinical use
One difficulty in evaluating acid-base derangements is the presence of multiple pathologies. A patient may present with a metabolic acidosis process alone, but they may also have a concomitant respiratory acidosis. Winters's formula gives an expected value for the patient's PCO2; the patient's actual (measured) PCO2 is then compared to this. Using this information, physicians may elucidate additional causes of the acid-base derangement and identify different treatment options which may not have otherwise been considered.
If the two values correspond, respiratory compensation is considered to be adequate.
If the measured PCO2 is higher than the calculated value, there is also a primary respiratory acidosis.
If the measured PCO2 is lower than the calculated value, there is also a primary respiratory alkalosis. (A simple calculation illustrating these cases is sketched below.)
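A minimal calculator along these lines might look as follows (an illustrative Python sketch, not a clinical tool; the ±2 mmHg tolerance follows the formula above and the patient values are invented):

```python
def expected_pco2(bicarbonate_meq_l):
    """Winters's formula: expected PCO2 (mmHg) for a given serum bicarbonate (mEq/L)."""
    return 1.5 * bicarbonate_meq_l + 8.0

def interpret(measured_pco2, bicarbonate_meq_l, tolerance=2.0):
    expected = expected_pco2(bicarbonate_meq_l)
    if measured_pco2 > expected + tolerance:
        return "concomitant respiratory acidosis"
    if measured_pco2 < expected - tolerance:
        return "concomitant respiratory alkalosis"
    return "adequate respiratory compensation"

# Hypothetical patient: bicarbonate 12 mEq/L, measured PCO2 33 mmHg.
print(expected_pco2(12))          # 26 mmHg expected (acceptable range 24-28)
print(interpret(33, 12))          # concomitant respiratory acidosis
```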
References
Respiratory therapy
Mathematics in medicine | Winters's formula | [
"Mathematics"
] | 731 | [
"Applied mathematics",
"Mathematics in medicine"
] |
22,745,487 | https://en.wikipedia.org/wiki/Precursor%20%28physics%29 | Precursors are characteristic wave patterns caused by dispersion of an impulse's frequency components as it propagates through a medium. Classically, precursors precede the main signal, although in certain situations they may also follow it. Precursor phenomena exist for all types of waves, as their appearance is only predicated on the prominence of dispersion effects in a given mode of wave propagation. This non-specificity has been confirmed by the observation of precursor patterns in different types of electromagnetic radiation (microwaves, visible light, and terahertz radiation) as well as in fluid surface waves and seismic waves.
History
Precursors were first theoretically predicted in 1914 by Arnold Sommerfeld for the case of electromagnetic radiation propagating through a neutral dielectric in a region of normal dispersion. Sommerfeld's work was expanded in the following years by Léon Brillouin, who applied the saddle point approximation to compute the integrals involved. However, it was not until 1969 that precursors were first experimentally confirmed for the case of microwaves propagating in a waveguide, and much of the experimental work observing precursors in other types of waves has only been done since the year 2000. This experimental lag is mainly due to the fact that in many situations, precursors have a much smaller amplitude than the signals that give rise to them (a baseline figure given by Brillouin is six orders of magnitude smaller). As a result, experimental confirmations could only be done after technology became available to detect precursors.
Basic theory
As a dispersive phenomenon, the amplitude at any distance and time of a precursor wave propagating in one dimension can be expressed by the Fourier integral
where is the Fourier transform of the initial impulse and the complex exponential represents the individual component wavelets summed in the integral. To account for the effects of dispersion, the phase of the exponential must include the dispersion relation (here, the factor) for the particular medium in which the wave is propagating.
The integral above can only be solved in closed form when idealized assumptions are made about the initial impulse and the dispersion relation, as in Sommerfeld's derivation below. In most realistic cases, numerical integration is required to compute the integral.
Sommerfeld's derivation for electromagnetic waves in a neutral dielectric
Assuming the initial impulse takes the form of a sinusoid turned on abruptly at time ,
then we can write the general-form integral given in the previous section as
For simplicity, we assume the frequencies involved are all in a range of normal dispersion for the medium, and we let the dispersion relation take the form
where the parameters entering this dispersion relation are the number of atomic oscillators in the medium, the charge and mass of each oscillator, the natural frequency of the oscillators, and the vacuum permittivity. This yields the integral
To solve this integral, we first express the time in terms of the retarded time , which is necessary to ensure that the solution does not violate causality by propagating faster than . We also treat as large and ignore the term in deference to the second-order term. Lastly, we substitute , getting
Rewriting this as
and making the substitutions
allows the integral to be transformed into
where is simply a dummy variable, and, finally
where is a Bessel function of the first kind. This solution, which is an oscillatory function with amplitude and period that both increase with increasing time, is characteristic of a particular type of precursor known as the Sommerfeld precursor.
Stationary-Phase-Approximation-Based Period Analysis
The stationary phase approximation can be used to analyze the form of precursor waves without solving the general-form integral given in the Basic Theory section above. The stationary phase approximation states that for any speed of wave propagation determined from a given distance and time, the dominant frequency of the precursor is the frequency whose group velocity equals that speed (the distance divided by the elapsed time).
Therefore, one can determine the approximate period of a precursor waveform at a particular distance and time by calculating the period of the frequency component that would arrive at that distance and time based on its group velocity. In a region of normal dispersion, high-frequency components have a faster group velocity than low-frequency ones, so the front of the precursor should have a period corresponding to that of the highest-frequency component of the original impulse; with increasing time, components with lower and lower frequencies arrive, so the period of the precursor becomes longer and longer until the lowest-frequency component arrives. As more and more components arrive, the amplitude of the precursor also increases. The particular type of precursor characterized by increasing period and amplitude is known as the high-frequency Sommerfeld precursor.
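As a concrete illustration of this period analysis, the Python sketch below numerically finds the frequency whose group velocity matches the propagation speed for several observation times; the Lorentz-type refractive index and all numerical parameters are assumptions chosen for illustration, not values from the original papers:

```python
import numpy as np

c = 3.0e8                       # speed of light in vacuum, m/s
omega_p = 2.0e16                # assumed "plasma" frequency of the medium, rad/s
omega_0 = 4.0e16                # assumed resonance frequency, rad/s

# Frequency grid above the absorption band, where this Lorentz-type model
# gives a real refractive index and the dispersion is normal.
omega = np.linspace(1.3 * omega_0, 30.0 * omega_0, 300_000)
n = np.sqrt(1.0 + omega_p**2 / (omega_0**2 - omega**2))
k = omega * n / c               # dispersion relation k(omega)
v_group = np.gradient(omega, k) # group velocity d(omega)/dk

x = 1.0e-3                      # propagation distance: 1 mm
for t in (x / (0.999 * c), x / (0.99 * c), x / (0.9 * c)):
    target_speed = x / t
    i = np.argmin(np.abs(v_group - target_speed))
    period = 2.0 * np.pi / omega[i]
    print(f"t = {t:.3e} s -> dominant period ~ {period:.3e} s")
```

Later observation times correspond to slower group velocities, hence lower dominant frequencies and longer local periods, as described above for the high-frequency Sommerfeld precursor.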
In a region of anomalous dispersion, where low-frequency components have faster group velocities than high-frequency ones, the opposite of the above situation occurs: the onset of the precursor is characterized by a long period, and the period of the signal decreases with time. This type of precursor is called a low-frequency Sommerfeld precursor.
In certain situations of wave propagation (for instance, fluid surface waves), two or more frequency components may have the same group velocity for particular ranges of frequency; this is typically accompanied by a local extremum in the group velocity curve. This means that for certain values of time and distance, the precursor waveform will consist of a superposition of both low- and high-frequency Sommerfeld precursors. Any local extrema only correspond to single frequencies, so at these points there will be a contribution from a precursor signal with a constant period; this is known as a Brillouin precursor.
References
Radiation | Precursor (physics) | [
"Physics",
"Chemistry"
] | 1,156 | [
"Transport phenomena",
"Waves",
"Physical phenomena",
"Radiation"
] |
22,748,103 | https://en.wikipedia.org/wiki/Microwave%20cavity | A microwave cavity or radio frequency cavity (RF cavity) is a special type of resonator, consisting of a closed (or largely closed) metal structure that confines electromagnetic fields in the microwave or RF region of the spectrum. The structure is either hollow or filled with dielectric material. The microwaves bounce back and forth between the walls of the cavity. At the cavity's resonant frequencies they reinforce to form standing waves in the cavity. Therefore, the cavity functions similarly to an organ pipe or sound box in a musical instrument, oscillating preferentially at a series of frequencies, its resonant frequencies. Thus it can act as a bandpass filter, allowing microwaves of a particular frequency to pass while blocking microwaves at nearby frequencies.
A microwave cavity acts similarly to a resonant circuit with extremely low loss at its frequency of operation, resulting in quality factors (Q factors) up to the order of 106, for copper cavities, compared to 102 for circuits made with separate inductors and capacitors at the same frequency. For superconducting cavities, quality factors up to the order of 1010 are possible. They are used in place of resonant circuits at microwave frequencies, since at these frequencies discrete resonant circuits cannot be built because the values of inductance and capacitance needed are too low. They are used in oscillators and transmitters to create microwave signals, and as filters to separate a signal at a given frequency from other signals, in equipment such as radar equipment, microwave relay stations, satellite communications, and microwave ovens.
RF cavities can also manipulate charged particles passing through them by application of acceleration voltage and are thus used in particle accelerators and microwave vacuum tubes such as klystrons and magnetrons.
Theory of operation
Most resonant cavities are made from closed (or short-circuited) sections of waveguide or high-permittivity dielectric material (see dielectric resonator). Electric and magnetic energy is stored in the cavity. This energy decays over time due to several possible loss mechanisms.
The section on 'Physics of SRF cavities' in the article on superconducting radio frequency contains a number of important and useful expressions which apply to any microwave cavity:
The energy stored in the cavity is given by the integral of field energy density over its volume,
U = (μ0/2) ∫ |H|² dV,
where:
H is the magnetic field in the cavity and
μ0 is the permeability of free space.
The power dissipated due just to the resistivity of the cavity's walls is given by the integral of resistive wall losses over its surface,
Pd = (Rs/2) ∮ |H|² dS,
where:
Rs is the surface resistance.
For copper cavities operating near room temperature, Rs is simply determined by the empirically measured bulk electrical conductivity σ (see Ramo et al., pp. 288–289).
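For a rough numerical sense of this quantity, the sketch below evaluates the normal-skin-effect expression Rs = √(ωμ0/(2σ)) — an assumed standard form, since the expression itself is not reproduced above — for copper at 10 GHz:

```python
import math

MU_0 = 4.0e-7 * math.pi           # vacuum permeability, H/m

def surface_resistance(freq_hz, conductivity_s_per_m):
    """Normal-skin-effect surface resistance Rs = sqrt(omega * mu0 / (2 * sigma))."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(omega * MU_0 / (2.0 * conductivity_s_per_m))

sigma_cu = 5.8e7                  # conductivity of copper, S/m
print("Rs of copper at 10 GHz: %.1f milliohms"
      % (1e3 * surface_resistance(10e9, sigma_cu)))   # about 26 milliohms
```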
A resonator's quality factor is defined by
Q = ω U / Pd,
where:
ω is the resonant frequency in [rad/s],
U is the energy stored in [J], and
Pd is the power dissipated in [W] in the cavity to maintain the energy U.
Basic losses are due to finite conductivity of cavity walls and dielectric losses of material filling the cavity. Other loss mechanisms exist in evacuated cavities, for example the multipactor effect or field electron emission. Both multipactor effect and field electron emission generate copious electrons inside the cavity. These electrons are accelerated by the electric field in the cavity and thus extract energy from the stored energy of the cavity. Eventually the electrons strike the walls of the cavity and lose their energy. In superconducting radio frequency cavities there are additional energy loss mechanisms associated with the deterioration of the electric conductivity of the superconducting surface due to heating or contamination.
Every cavity has numerous resonant frequencies that correspond to electromagnetic field modes satisfying necessary boundary conditions on the walls of the cavity. Because of these boundary conditions that must be satisfied at resonance (tangential electric fields must be zero at cavity walls), at resonance, cavity dimensions must satisfy particular values. Depending on the resonance transverse mode, transverse cavity dimensions may be constrained to expressions related to geometric functions, or to zeros of Bessel functions or their derivatives (see below), depending on the symmetry properties of the cavity's shape. Alternately it follows that cavity length must be an integer multiple of half-wavelength at resonance (see page 451 of Ramo et al). In this case, a resonant cavity can be thought of as a resonance in a short circuited half-wavelength transmission line.
The external dimensions of a cavity can be made considerably smaller at its lowest frequency mode by loading the cavity with either capacitive or inductive elements. Loaded cavities usually have lower symmetries and compromise certain performance indicators, such as the best Q factor. As examples, the reentrant cavity and helical resonator are capacitive and inductive loaded cavities, respectively.
Multi-cell cavity
Single-cell cavities can be combined in a structure to accelerate particles (such as electrons or ions) more efficiently than a string of independent single cell cavities. The figure from the U.S. Department of Energy shows a multi-cell superconducting cavity in a clean room at Fermi National Accelerator Laboratory.
Loaded microwave cavities
A microwave cavity has a fundamental mode, which exhibits the lowest resonant frequency of all possible resonant modes. For example, the fundamental mode of a cylindrical cavity is the TM010 mode. For certain applications, there is motivation to reduce the dimensions of the cavity. This can be done by using a loaded cavity, where a capacitive or an inductive load is integrated in the cavity's structure.
The precise resonant frequency of a loaded cavity must be calculated using finite element methods for Maxwell's equations with boundary conditions.
Loaded cavities (or resonators) can also be configured as multi-cell cavities.
Loaded cavities are particularly suited for accelerating low-velocity charged particles. This application uses many types of loaded cavities; some common types are listed below.
The reentrant cavity
The helical resonator
The spiral resonator
The split-ring resonator
The quarter wave resonator
The half wave resonator. A variant of the half-wave resonator is the spoke resonator.
The Radio-frequency quadrupole
Compact Crab cavity. Compact crab cavities are an important upgrade for the LHC.
The Q factor of a particular mode in a resonant cavity can be calculated analytically for cavities with a high degree of symmetry, using analytical expressions for the electric and magnetic fields, the surface currents in the conducting walls, and the electric field in any lossy dielectric material. For cavities with arbitrary shapes, finite element methods for Maxwell's equations with boundary conditions must be used. Measurement of the Q of a cavity is done using a vector network analyzer (electrical), or, in the case of a very high Q, by measuring the exponential decay time of the fields and using the corresponding relationship between Q and the decay time.
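One common form of that relationship is the exponential energy decay U(t) = U0·exp(−ω0t/Q); the Python sketch below (illustrative only, with synthetic data and assumed parameter values) recovers Q from such a ring-down:

```python
import numpy as np

def q_from_ringdown(t, energy, f0):
    """Estimate Q from sampled stored energy decaying as U0 * exp(-omega0 * t / Q)."""
    slope = np.polyfit(t, np.log(energy), 1)[0]   # d(ln U)/dt = -omega0 / Q
    return -2.0 * np.pi * f0 / slope

# Synthetic ring-down data for a 1.3 GHz cavity with Q = 1e10.
f0, q_true = 1.3e9, 1.0e10
t = np.linspace(0.0, 5.0, 200)                    # seconds
energy = np.exp(-2.0 * np.pi * f0 * t / q_true)
print("recovered Q = %.3e" % q_from_ringdown(t, energy, f0))
```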
The electromagnetic fields in the cavity are excited via external coupling. An external power source is usually coupled to the cavity by a small aperture, a small wire probe or a loop (see page 563 of Ramo et al). The external coupling structure has an effect on cavity performance and needs to be considered in the overall analysis (see Montgomery et al, page 232).
Resonant frequencies
The resonant frequencies of a cavity are a function of its geometry.
Rectangular cavity
Resonance frequencies of a rectangular microwave cavity for any TE or TM resonant mode can be found by imposing boundary conditions on the electromagnetic field expressions. This frequency is given at page 546 of Ramo et al:
f = (c / (2π √(μr εr))) · k, where k = √((mπ/a)² + (nπ/b)² + (lπ/d)²) is the wavenumber, with m, n, l being the mode numbers and a, b, d being the corresponding dimensions; c is the speed of light in vacuum; and μr and εr are the relative permeability and permittivity of the cavity filling respectively.
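A quick numerical check of this expression (an illustrative Python sketch; the example dimensions and mode indices are arbitrary):

```python
import math

C0 = 2.998e8   # speed of light in vacuum, m/s

def rect_cavity_freq(m, n, l, a, b, d, eps_r=1.0, mu_r=1.0):
    """Resonant frequency (Hz) of the (m, n, l) mode of an a x b x d rectangular cavity."""
    root = math.sqrt((m / a) ** 2 + (n / b) ** 2 + (l / d) ** 2)
    return C0 * root / (2.0 * math.sqrt(eps_r * mu_r))

# Air-filled cavity, 5 cm x 4 cm x 3 cm, TE101 mode (m=1, n=0, l=1).
print("TE101: %.2f GHz" % (rect_cavity_freq(1, 0, 1, 0.05, 0.04, 0.03) / 1e9))  # ~5.83 GHz
```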
Cylindrical cavity
The field solutions of a cylindrical cavity of length and radius follow from the solutions of a cylindrical waveguide with additional electric boundary conditions at the position of the enclosing plates. The resonance frequencies are different for TE and TM modes.
TM modes
See Jackson
TE modes
See Jackson
Here, χnp denotes the p-th zero of the n-th Bessel function, and χ'np denotes the p-th zero of the derivative of the n-th Bessel function; μr and εr are the relative permeability and permittivity respectively.
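The cylindrical-cavity frequencies can be evaluated numerically from the Bessel-function zeros; the Python sketch below (illustrative only, using SciPy for the zeros, with assumed example dimensions and the common textbook form of the mode frequencies) prints two low-order modes:

```python
import math
from scipy.special import jn_zeros, jnp_zeros

C0 = 2.998e8   # speed of light in vacuum, m/s

def cyl_cavity_freq(mode, n, m, p, radius, length, eps_r=1.0, mu_r=1.0):
    """Resonant frequency (Hz) of the TMnmp or TEnmp mode of a cylindrical cavity."""
    if mode == "TM":
        chi = jn_zeros(n, m)[-1]      # m-th zero of J_n
    else:
        chi = jnp_zeros(n, m)[-1]     # m-th zero of J_n'
    root = math.sqrt((chi / radius) ** 2 + (p * math.pi / length) ** 2)
    return C0 * root / (2.0 * math.pi * math.sqrt(eps_r * mu_r))

# Air-filled cavity, radius 5 cm, length 10 cm.
print("TM010: %.3f GHz" % (cyl_cavity_freq("TM", 0, 1, 0, 0.05, 0.10) / 1e9))  # ~2.30 GHz
print("TE111: %.3f GHz" % (cyl_cavity_freq("TE", 1, 1, 1, 0.05, 0.10) / 1e9))  # ~2.31 GHz
```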
Quality factor
The quality factor of a cavity can be decomposed into three parts, representing different power loss mechanisms.
Qc, resulting from the power loss in the walls, which have finite conductivity. The Q of the lowest-frequency mode, or "fundamental mode", is calculated in pp. 541–551 of Ramo et al for a rectangular cavity (Equation 3a) with the dimensions and parameters defined above, and for the fundamental mode of a cylindrical cavity (Equation 3b) with parameters as defined above,
where η is the intrinsic impedance of the dielectric and Rs is the surface resistivity of the cavity walls.
Qd, resulting from the power loss in the lossy dielectric material filling the cavity, where tan δ is the loss tangent of the dielectric.
Qext, resulting from power loss through unclosed surfaces (holes) of the cavity geometry.
The total Q factor of the cavity can then be found by combining these losses as 1/Q = 1/Qc + 1/Qd + 1/Qext (see page 567 of Ramo et al).
Comparison to LC circuits
Microwave resonant cavities can be represented and thought of as simple LC circuits, see Montgomery et al pages 207-239. For a microwave cavity, the stored electric energy is equal to the stored magnetic energy at resonance as is the case for a resonant LC circuit. In terms of inductance and capacitance, the resonant frequency for a given mode can be written as given in Montgomery et al page 209
where V is the cavity volume, k is the mode wavenumber, and ε and μ are the permittivity and permeability respectively.
To better understand the utility of resonant cavities at microwave frequencies, it is useful to note that conventional inductors and capacitors start to become impractically small with frequency in the VHF, and definitely so for frequencies above one gigahertz. Because of their low losses and high Q factors, cavity resonators are preferred over conventional LC and transmission-line resonators at high frequencies.
Losses in LC resonant circuits
Conventional inductors are usually wound from wire in the shape of a helix with no core. Skin effect causes the high frequency resistance of inductors to be many times their direct current resistance. In addition, capacitance between turns causes dielectric losses in the insulation which coats the wires. These effects make the high frequency resistance greater and decrease the Q factor.
Conventional capacitors use air, mica, ceramic or perhaps teflon for a dielectric. Even with a low loss dielectric, capacitors are also subject to skin effect losses in their leads and plates. Both effects increase their equivalent series resistance and reduce their Q.
Even if the Q factor of VHF inductors and capacitors is high enough to be useful, their parasitic properties can significantly affect their performance in this frequency range. The shunt capacitance of an inductor may be more significant than its desirable series inductance. The series inductance of a capacitor may be more significant than its desirable shunt capacitance. As a result, in the VHF or microwave regions, a capacitor may appear to be an inductor and an inductor may appear to be a capacitor. These phenomena are better known as parasitic inductance and parasitic capacitance.
Losses in cavity resonators
Dielectric loss of air is extremely low for high-frequency electric or magnetic fields. Air-filled microwave cavities confine electric and magnetic fields to the air spaces between their walls. Electric losses in such cavities are almost exclusively due to currents flowing in cavity walls. While losses from wall currents are small, cavities are frequently plated with silver to increase their electrical conductivity and reduce these losses even further. Copper cavities frequently oxidize, which increases their loss. Silver or gold plating prevents oxidation and reduces electrical losses in cavity walls. Even though gold is not quite as good a conductor as copper, it still prevents oxidation and the resulting deterioration of Q factor over time. However, because of its high cost, it is used only in the most demanding applications.
Some satellite resonators are silver-plated and covered with a gold flash layer. The current then mostly flows in the high-conductivity silver layer, while the gold flash layer protects the silver layer from oxidizing.
References
External links
Cavity Resonators, The Feynman Lectures on Physics Vol. II Ch. 23
Crab cavity for the LHC
Microwave technology
Accelerator physics | Microwave cavity | [
"Physics"
] | 2,667 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
22,751,916 | https://en.wikipedia.org/wiki/Frank%20Newman%20Speller%20Award | The Frank Newman Speller Award is an annual award for significant contributions to corrosion engineering and is administered by NACE International. (The organization was previously known as the National Association of Corrosion Engineers.) The award is named in honor of Frank Newman Speller, a Canadian-born American metallurgical engineer notable for his pioneering text on corrosion.
Recipients
Source: NACE International
1947 - Frank Newman Speller
1948 - John M. Pearson
1949 - Francis L. LaQue
1950 - O.C. Mudd
1951 - Kirk H. Logan
1952 - Starr Thayer
1953 - Scott P. Ewing
1954 - E.H. Dix, Jr.
1955 - Gordon N. Scott
1956 - Mars G. Fontana
1957 - Walter F. Rogers
1958 - Robert J. Kuhn
1959 - A. Wachter
1960 - J.C. Hudson
1961 - K.G. Compton
1962 - C.P. Larrabee
1963 - Thomas P. May
1964 - Hugh P. Godard
1965 - F.W. Wink
1966 - Richard S. Treseder
1967 - John D. Sudbury
1968 - Lee P. Sudrabin
1969 - Charles G. Munger
1970 - Arland W. Peabody
1971 - Andrew Dravnieks
1972 - No recipient
1973 - Fred M. Reinhart
1974 - K.N. Barnard
1975 - Bernard Husock
1976 - E.H. Phelps
1977 - Walter K. Boyd
1978 - Joseph B. Cotton
1979 - M.C. Miller
1980 - H. Spahn
1981 - J.H. Morgan
1982 - Richard F. Stratful
1983 - Ernest W. Haycock
1984 - Warren E. Berry
1985 - Stanley L. Lopata
1986 - R.N. Miller
1987 - Einar Mattsson
1988 - Robert A. Baboian
1989 - A.J. Sedricks
1990 - M.E. Indig
1991 - Sheldon W. Dean
1993 - Bryan E. Wilde
1994 - S. Evans
1995 - P.R. Rhodes
1996 - Peter L. Andresen
1997 - Jacques-Philippe Berge
1998 - H. Okada
1999 - Herbert E. Townsend
2000 - Peter M. Scott
2001 - G. Schick
2002 - G.M. Gordon
2003 - R.W. Schutz
2004 - Boris A. Miksic
2005 - D. Knotkova-Cermakova
2006 - Masakatsu Ueda
2007 - Jorge A. González
2008 - David C. Silverman
2009 - Bruce Hinton
2010 - Andrew Garner
2011 - Pierre Combrade
2012 - William Hartt
2013 - John Beavers
2014 - Shunichi Suzuki
2015 - Jeffrey Gorman
2016 - David Shifler
2017 - Narasi Sridhar
2018 - Robert Tapping
2019 - U. Kamachi Mudali
2020 - Roy Johnsen
See also
List of engineering awards
References
External links
Frank Newman Speller Award
Chemical engineering awards
Corrosion prevention | Frank Newman Speller Award | [
"Chemistry",
"Technology",
"Engineering"
] | 578 | [
"Corrosion prevention",
"Chemical engineering",
"Corrosion",
"Science award stubs",
"Chemical engineering awards",
"Science and technology awards"
] |
1,374,906 | https://en.wikipedia.org/wiki/Doublet%E2%80%93triplet%20splitting%20problem | In particle physics, the doublet–triplet (splitting) problem is a problem of some Grand Unified Theories, such as SU(5), SO(10), and . Grand unified theories predict Higgs bosons (doublets of ) arise from representations of the unified group that contain other states, in particular, states that are triplets of color. The primary problem with these color triplet Higgs is that they can mediate proton decay in supersymmetric theories that are only suppressed by two powers of GUT scale (i.e. they are dimension 5 supersymmetric operators). In addition to mediating proton decay, they alter gauge coupling unification. The doublet–triplet problem is the question 'what keeps the doublets light while the triplets are heavy?'
Doublet–triplet splitting and the μ-problem
In 'minimal' SU(5), the way one accomplishes doublet–triplet splitting is through a combination of interactions

\lambda \bar{H}_{\bar{5}} \Sigma H_{5} + m \bar{H}_{\bar{5}} H_{5}

where \Sigma is an adjoint of SU(5) and is traceless. When \Sigma acquires a vacuum expectation value

\langle \Sigma \rangle = \mathrm{diag}(2,\,2,\,2,\,-3,\,-3)\, f

that breaks SU(5) to the Standard Model gauge symmetry, the Higgs doublets and triplets acquire a mass

(2\lambda f + m)\,\bar{H}_{3} H_{3} + (-3\lambda f + m)\,\bar{H}_{2} H_{2} .

Since f is at the GUT scale (~10^{16} GeV) and the Higgs doublets need to have a weak scale mass (~100 GeV), this requires

m - 3\lambda f \sim \mathcal{O}(100\ \mathrm{GeV}) .

So solving this doublet–triplet splitting problem requires a tuning of the two terms to within roughly one part in 10^{14}.
This is also why the mu problem of the MSSM (i.e. why are the Higgs doublets so light) and doublet–triplet splitting are so closely intertwined.
Solutions to the doublet-triplet splitting
The missing partner mechanism
One solution to doublet–triplet splitting (DTS) in the context of supersymmetric SU(5) is called the missing partner mechanism (MPM). The main idea is that in addition to the usual fields there are two additional chiral superfields in the 50 and \overline{50} representations. The 50 contains a colour triplet but no weak doublet, so it has no component that could couple to the doublets of 5_H or \bar{5}_H. For group-theoretical reasons SU(5) then has to be broken by a 75 instead of the usual adjoint 24, at least at the renormalizable level. The superpotential couples 5_H and \bar{5}_H to the new fields through the 75, together with a mass term for the 50 and \overline{50}.
After breaking to the SM the colour triplet can get super heavy, suppressing proton decay, while the SM Higgs doublet does not. Note that nevertheless the SM Higgs will have to pick up a mass in order to reproduce the electroweak theory correctly.
Note that although solving the DTS problem the MPM tends to render models non-perturbative just above the GUT scale. This problem is addressed by the Double missing partner mechanism.
Dimopoulos–Wilczek mechanism
In an SO(10) theory, there is a potential solution to the doublet–triplet splitting problem known as the 'Dimopoulos–Wilczek' mechanism. In SO(10), the adjoint field acquires a vacuum expectation value of the form

\langle A \rangle = \mathrm{diag}(a,\, a,\, a,\, b,\, b) \otimes i\sigma_{2} .

Here b and a give masses to the Higgs doublet and triplet, respectively, and are independent of each other, because \langle A \rangle is traceless for any values they may have. If b = 0, then the Higgs doublet remains massless. This is very similar to the way that doublet–triplet splitting is done in either higher-dimensional grand unified theories or string theory.
To arrange for the VEV to align along this direction (and still not mess up the other details of the model) often requires very contrived models, however.
Higgs representations in Grand Unified Theories
In SU(5), the electroweak Higgs doublets reside in the 5_H and \bar{5}_H representations, each accompanied by a colour-triplet partner: under the Standard Model group, 5 = (3,1)_{-1/3} \oplus (1,2)_{+1/2}.
In SO(10), both doublets (and both triplets) fit into a single 10_H, which decomposes as 10 = 5 \oplus \bar{5} under SU(5).
Proton decay
Non-supersymmetric theories suffer from quadratically divergent radiative corrections to the mass squared of the electroweak Higgs boson (see hierarchy problem). In the presence of supersymmetry, the triplet Higgsino needs to be more massive than the GUT scale to prevent proton decay, because it generates dimension-5 operators in the MSSM; it is therefore not enough simply to require the triplet to have a GUT-scale mass.
References
'Supersymmetry at Ordinary Energies. 1. Masses and Conservation Laws.' Steven Weinberg. Published in Phys. Rev. D 26:287, 1982.
'Proton Decay in Supersymmetric Models.' Savas Dimopoulos, Stuart A. Raby, Frank Wilczek. Published in Phys. Lett. B 112:133,1982.
'Incomplete Multiplets in Supersymmetric Unified Models.' Savas Dimopoulos, Frank Wilczek.
External links
(In this video from 12:00 to 18:00, Arkani-Hamed gives a brief discussion of the relation between the doublet–triplet splitting problem and the hierarchy problem.)
Grand Unified Theory | Doublet–triplet splitting problem | [
"Physics"
] | 996 | [
"Unsolved problems in physics",
"Physics beyond the Standard Model",
"Grand Unified Theory"
] |
1,375,226 | https://en.wikipedia.org/wiki/Immunoassay | An immunoassay (IA) is a biochemical test that measures the presence or concentration of a macromolecule or a small molecule in a solution through the use of an antibody (usually) or an antigen (sometimes). The molecule detected by the immunoassay is often referred to as an "analyte" and is in many cases a protein, although it may be other kinds of molecules, of different sizes and types, as long as the proper antibodies that have the required properties for the assay are developed. Analytes in biological liquids such as serum or urine are frequently measured using immunoassays for medical and research purposes.
Immunoassays come in many different formats and variations. Immunoassays may be run in multiple steps with reagents being added and washed away or separated at different points in the assay. Multi-step assays are often called separation immunoassays or heterogeneous immunoassays. Some immunoassays can be carried out simply by mixing the reagents and samples and making a physical measurement. Such assays are called homogeneous immunoassays, or less frequently non-separation immunoassays.
The use of a calibrator is often employed in immunoassays. Calibrators are solutions that are known to contain the analyte in question, and the concentration of that analyte is generally known. Comparison of an assay's response to a real sample against the assay's response produced by the calibrators makes it possible to interpret the signal strength in terms of the presence or concentration of analyte in the sample.
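As an illustration of how calibrators are used in practice, the sketch below fits a four-parameter logistic (4PL) curve, a commonly used calibration model for immunoassays (though not the only one), to a set of hypothetical calibrators and then reads an unknown sample off the fitted curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = response at zero dose, d = response at
    infinite dose, c = concentration at the inflection point, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibrators: analyte concentration (ng/mL) and measured signal.
conc   = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
signal = np.array([0.06, 0.14, 0.35, 0.80, 1.40, 1.75])

popt, _ = curve_fit(four_pl, conc, signal, p0=[0.05, 1.0, 3.0, 2.0],
                    bounds=([0, 0.1, 0.01, 0.5], [1, 5, 100, 5]))

def conc_from_signal(y, a, b, c, d):
    """Invert the fitted curve to estimate the concentration of an unknown."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(conc_from_signal(0.60, *popt))  # signal measured for an unknown sample
```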
Principle
Immunoassays rely on the ability of an antibody to recognize and bind a specific macromolecule in what might be a complex mixture of macromolecules. In immunology the particular macromolecule bound by an antibody is referred to as an antigen and the area on an antigen to which the antibody binds is called an epitope.
In some cases, an immunoassay may use an antigen to detect for the presence of antibodies, which recognize that antigen, in a solution. In other words, in some immunoassays, the analyte may be an antibody rather than an antigen.
In addition to the binding of an antibody to its antigen, the other key feature of all immunoassays is a means to produce a measurable signal in response to the binding. Most, though not all, immunoassays involve chemically linking antibodies or antigens with some kind of detectable label. A large number of labels exist in modern immunoassays, and they allow for detection through different means. Many labels are detectable because they either emit radiation, produce a color change in a solution, fluoresce under light, or can be induced to emit light.
History
Rosalyn Sussman Yalow and Solomon Berson are credited with the development of the first immunoassays in the 1950s. Yalow accepted the Nobel Prize for her work in immunoassays in 1977, becoming the second American woman to have won the award.
Immunoassays became considerably simpler to perform and more popular when techniques for chemically linked enzymes to antibodies were demonstrated in the late 1960s.
In 1983, Professor Anthony Campbell at Cardiff University replaced radioactive iodine used in immunoassay with an acridinium ester that makes its own light: chemiluminescence. This type of immunoassay is now used in around 100 million clinical tests every year worldwide, enabling clinicians to measure a wide range of proteins, pathogens and other molecules in blood samples.
By 2012, the commercial immunoassay industry earned and was thought to have prospects of slow annual growth in the 2 to 3 percent range.
Labels
Immunoassays employ a variety of different labels to allow for detection of antibodies and antigens. Labels are typically chemically linked or conjugated to the desired antibody or antigen.
Enzymes
Possibly one of the most popular labels to use in immunoassays is enzymes. Immunoassays which employ enzymes are referred to as enzyme immunoassays (EIAs), of which enzyme-linked immunosorbent assays (ELISAs) and enzyme multiplied immunoassay technique (EMIT) are the most common types.
Enzymes used in ELISAs include horseradish peroxidase (HRP), alkaline phosphatase (AP) or glucose oxidase. These enzymes allow for detection often because they produce an observable color change in the presence of certain reagents. In some cases these enzymes are exposed to reagents which cause them to produce light or chemiluminescence. There are several types of ELISA: direct, indirect, sandwich, competitive.
Radioactive isotopes
Radioactive isotopes can be incorporated into immunoassay reagents to produce a radioimmunoassay (RIA). Radioactivity emitted by bound antibody-antigen complexes can be easily detected using conventional methods.
RIAs were some of the earliest immunoassays developed, but have fallen out of favor largely due to the difficulty and potential dangers presented by working with radioactivity.
DNA reporters
A newer approach to immunoassays combines real-time quantitative polymerase chain reaction (RT qPCR) with traditional immunoassay techniques. In this format, called real-time immunoquantitative PCR (iqPCR), the label used is a DNA probe.
Fluorogenic reporters
Fluorogenic reporters like phycoerythrin are used in a number of modern immunoassays. Protein microarrays are a type of immunoassay that often employ fluorogenic reporters.
Electrochemiluminescent tags
Some labels work via electrochemiluminescence (ECL), in which the label emits detectable light in response to electric current.
Label-free immunoassays
While some kind of label is generally employed in immunoassays, there are certain kinds of assays which do not rely on labels, but instead employ detection methods that do not require modification or labeling of the assay components. Surface plasmon resonance is an example of a technique that can detect binding between an unlabeled antibody and antigens. Another demonstrated label-free immunoassay involves measuring the change in resistance on an electrode as antigens bind to it.
Classifications and formats
Immunoassays can be run in a number of different formats. Generally, an immunoassay will fall into one of several categories depending on how it is run.
Competitive, homogeneous immunoassays
In a competitive, homogeneous immunoassay, unlabelled analyte in a sample competes with labeled analyte to bind an antibody. The amount of labelled, unbound analyte is then measured. In theory, the more analyte in the sample, the more labelled analyte gets displaced and then measured; hence, the amount of labelled, unbound analyte is proportional to the amount of analyte in the sample.
The fluorescence polarization immunoassay (FPIA) measures the fluorescence polarization signal after incubation, without separating bound and free labels. Free labeled analyte analog molecules are added to the sample, and their Brownian motion differs when bound to a large antibody (Ab) versus free in solution. The analyte competes for binding to the Ab, and if the labeled analyte binds to the Ab, a signal is produced. The signal intensity is inversely proportional to the analyte concentration.
In the enzyme multiplied immunoassay technique (EMIT), free analyte analog molecules labeled with an enzyme (e.g., glucose-6-phosphate dehydrogenase enzyme) compete with the analyte being tested. The active enzyme reduces NAD (no signal) to NADH (which absorbs at 340 nm), so absorbance is monitored at 340 nm. When the labeled analyte binds to the Ab, the enzyme becomes inactive, and a signal is generated by the free label. The signal intensity is directly proportional to the analyte concentration.
The luminescent oxygen channeling immunoassay (LOCI) generates singlet oxygen species in microbeads coupled to the analyte, and when the analyte binds to the respective Ab molecule, coupled to another kind of bead, the analyte reacts with singlet oxygen, generating chemiluminescence signals proportional to the concentration of the analyte-Ab complex.
In the kinetic interaction of microparticle in solution (KIMS) and particle enhanced turbidimetric inhibition immunoassay (PETINIA), free antibodies bind to drug microparticle conjugates to form aggregates that absorb in the visible range in the absence of the analyte. In the presence of the analyte, the Ab binds to the free analyte, preventing microparticle aggregation and causing a reduction in absorbance. The signal is inversely proportional to the analyte concentration.
The cloned enzyme donor immunoassay (CEDIA) involves genetically engineering an enzyme (e.g., beta-galactosidase) into two inactive fragments: a small enzyme donor (ED) conjugated with the drug analog, and a larger enzyme acceptor (EA). When the two fragments associate, the full enzyme converts a substrate into a cleaved colored product. If drug analyte molecules are present, they compete with the ED-labeled drug in solution for the limited Ab sites. Free ED-labeled drug analog will bind to EA, generating a colorimetric signal directly proportional to the amount of analyte.
Competitive, heterogeneous immunoassays
As in a competitive, homogeneous immunoassay, unlabelled analyte in a sample competes with labelled analyte to bind an antibody. In the heterogeneous assays, the labelled, unbound analyte is separated or washed away, and the remaining labelled, bound analyte is measured.
One-site, noncompetitive immunoassays
Mixing a sample with labelled antibodies, the targeted analyte is bound by labelled antibodies. The unbound, labelled antibodies are washed away, and the bound, labelled antibodies are measured. The intensity of the signal is directly proportional to the amount of analyte in the sample.
Two-site, noncompetitive immunoassays
The analyte in the unknown sample is bound to the antibody site, then the labelled antibody is bound to the analyte. The amount of labelled antibody on the site is then measured. It will be directly proportional to the concentration of the analyte because the labelled antibody will not bind if the analyte is not present in the unknown sample. This type of immunoassay is also known as a sandwich assay as the analyte is "sandwiched" between two antibodies.
Examples
Clinical tests
A wide range of medical tests are immunoassays, called immunodiagnostics in this context. Many home pregnancy tests are immunoassays, which detect the pregnancy marker human chorionic gonadotropin. More specifically, they are qualitative tests that detect whether hCG is present, using a lateral flow setup. The COVID-19 rapid antigen test is also a qualitative, lateral-flow test.
Other clinical immunoassays are quantitative; they measure amounts. Immunoassays can measure levels of CK-MB to assess heart disease, insulin to assess hypoglycemia, prostate-specific antigen to detect prostate cancer, and some are also used for the detection and/or quantitative measurement of some pharmaceutical compounds (see Enzyme multiplied immunoassay technique for more details).
Drug testing also starts with a quick qualitative immunoassay.
Sports anti-doping analysis
Immunoassays are used in sports anti-doping laboratories to test athletes' blood samples for prohibited recombinant human growth hormone (rhGH, rGH, hGH, GH).
Research
Photoacoustic Immunoassay
The photoacoustic immunoassay measures low-frequency acoustic signals generated by metal nanoparticle tags. Illuminated by a modulated light at a plasmon resonance wavelength, the nanoparticles generate strong acoustic signal, which can be measured using a microphone. The photoacoustic immunoassay can be applied to lateral flow tests, which use colloidal nanoparticles.
See also
ELISA
MELISA
ECLIA
Immunoscreening
Nephelometry
Lateral flow test
Magnetic immunoassay
Radioimmunoassay
Surround Optical Fiber Immunoassay (SOFIA)
CD/DVD based immunoassay
Agglutination-PCR
References
External links
"The Immunoassay Handbook", 3rd Edition, David Wild, Ed., Elsevier, 2008
Chapter 5 and 6 in the book "Bioanalytical Chemistry" by Susan R. Mikkelsen | Immunoassay | [
"Biology"
] | 2,726 | [
"Immunologic tests"
] |
1,375,452 | https://en.wikipedia.org/wiki/Fast%20ice | Fast ice (also called land-fast ice, landfast ice, and shore-fast ice) is sea ice or lake ice that is "fastened" to the coastline, to the sea floor along shoals, or to grounded icebergs. Fast ice may either grow in place from the sea water or by freezing pieces of drifting ice to the shore or other anchor sites. Unlike drift (or pack) ice, fast ice does not move with currents and winds.
The width (and the presence) of this ice zone is usually seasonal and depends on ice thickness, topography of the sea floor and islands. It ranges from a few meters to several hundred kilometers. Seaward expansion is a function of a number of factors, notably water depth, shoreline protection, time of year and pressure from the pack ice. In Arctic seas the fast ice extends down to depths of , while in the Subarctic seas, the zone extends to depths of about . In some coastal areas with abrupt shelf and no islands, e.g., in the Sea of Okhotsk off Hokkaidō, tides prevent the formation of any fast ice. Smaller ocean basins may contain only the fast ice zone with no pack ice (e.g. McMurdo Sound in Antarctica).
The topography of the fast ice varies from smooth and level to rugged (when submitted to large pressures). The ice foot refers to ice that has formed at the shoreline, through multiple freezing of water between ebb tides, and is separated from the remainder of the fast ice surface by tidal cracks. Further away from the coastline, the ice may become anchored to the sea bottom—it is then referred to as bottomfast ice. Fast ice can survive one or more melting seasons (i.e. summer), in which case it can be designated following the usual age-based categories: first-year, second-year, multiyear. The fast ice boundary is the limit between fast ice and drift (or pack) ice—in places, this boundary may coincide with a shear ridge. Fast ice may be delimited or enclose pressure ridges which extend sufficiently downward so as to be grounded—these features are known as stamukhi.
See also
Anchor ice, also called bottom-fast ice
Ice bridge
Sea ice
Stamukha
References
Sea ice
Glaciology
Bodies of ice | Fast ice | [
"Physics"
] | 474 | [
"Physical phenomena",
"Earth phenomena",
"Sea ice"
] |
1,375,635 | https://en.wikipedia.org/wiki/Adenylate%20kinase | Adenylate kinase (EC 2.7.4.3) (also known as ADK or myokinase) is a phosphotransferase enzyme that catalyzes the interconversion of the various adenosine phosphates (ATP, ADP, and AMP). By constantly monitoring phosphate nucleotide levels inside the cell, ADK plays an important role in cellular energy homeostasis.
Substrate and products
The reaction catalyzed is:
ATP + AMP ⇔ 2 ADP
The equilibrium constant varies with conditions, but it is close to 1. Thus, ΔG° for this reaction is close to zero. In muscle from a variety of species of vertebrates and invertebrates, the concentration of ATP is typically 7-10 times that of ADP, and usually greater than 100 times that of AMP. The rate of oxidative phosphorylation is controlled by the availability of ADP. Thus, the mitochondrion attempts to keep ATP levels high due to the combined action of adenylate kinase and the controls on oxidative phosphorylation.
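To make the second point concrete, here is a minimal sketch that takes the equilibrium constant as exactly 1 and uses illustrative (hypothetical) concentrations; it shows why AMP sits far below ATP and ADP at equilibrium, and why a modest fall in ATP produces a proportionally much larger rise in AMP.

```python
# Adenylate kinase equilibrium: ATP + AMP <=> 2 ADP, K = [ADP]^2 / ([ATP][AMP]).
K = 1.0  # taken as exactly 1 for illustration

def amp_at_equilibrium(atp_mM, adp_mM, K=K):
    """AMP concentration implied by the ATP and ADP levels at equilibrium."""
    return adp_mM ** 2 / (K * atp_mM)

atp, adp = 5.0, 0.5          # hypothetical concentrations in mM (ATP ~ 10x ADP)
amp = amp_at_equilibrium(atp, adp)
print(f"AMP = {amp:.3f} mM, ATP/AMP = {atp/amp:.0f}")    # 0.050 mM, ratio ~100

# A modest fall in ATP with a corresponding rise in ADP multiplies AMP:
print(f"AMP = {amp_at_equilibrium(4.5, 1.0):.3f} mM")    # ~0.222 mM, >4x higher
```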
Isozymes
To date there have been nine human ADK protein isoforms identified. While some of these are ubiquitous throughout the body, some are localized into specific tissues. For example, ADK7 and ADK8 are both found only in the cytosol of cells, and ADK7 is found in skeletal muscle whereas ADK8 is not. Not only do the locations of the various isoforms within the cell vary, but the binding of substrate to the enzyme and the kinetics of the phosphoryl transfer are different as well. ADK1, the most abundant cytosolic ADK isozyme, has a Km about a thousand times higher than the Km of ADK7 and 8, indicating a much weaker binding of ADK1 to AMP. Sub-cellular localization of the ADK enzymes is achieved by including a targeting sequence in the protein. Each isoform also has a different preference for NTPs. Some will only use ATP, whereas others will accept GTP, UTP, and CTP as the phosphoryl carrier.
Some of these isoforms prefer other NTP's entirely. There is a mitochondrial GTP:AMP phosphotransferase, also specific for the phosphorylation of AMP, that can only use GTP or ITP as the phosphoryl donor. ADK has also been identified in different bacterial species and in yeast. Two further enzymes are known to be related to the ADK family, i.e. yeast uridine monophosphokinase and slime mold UMP-CMP kinase. Some residues are conserved across these isoforms, indicating how essential they are for catalysis. One of the most conserved areas includes an Arg residue, whose modification inactivates the enzyme, together with an Asp that resides in the catalytic cleft of the enzyme and participates in a salt bridge.
Subfamilies
Adenylate kinase, subfamily
UMP-CMP kinase
Adenylate kinase, isozyme 1
Mechanism
Phosphoryl transfer only occurs on closing of the 'open lid'. This causes an exclusion of water molecules that brings the substrates into proximity to each other, lowering the energy barrier for the nucleophilic attack by the α-phosphoryl of AMP on the γ-phosphoryl group of ATP, resulting in formation of ADP by transfer of the γ-phosphoryl group to AMP. In the crystal structure of the ADK enzyme from E. coli with the inhibitor Ap5A, the Arg88 residue binds the Ap5A at the α-phosphate group. It has been shown that the mutation R88G results in 99% loss of catalytic activity of this enzyme, suggesting that this residue is intimately involved in the phosphoryl transfer. Another highly conserved residue is Arg119, which lies in the adenosine binding region of the ADK and acts to sandwich the adenine in the active site. It has been suggested that the promiscuity of these enzymes in accepting other NTPs is due to these relatively inconsequential interactions of the base in the ATP binding pocket. A network of positive, conserved residues (Lys13, Arg123, Arg156, and Arg167 in ADK from E. coli) stabilizes the buildup of negative charge on the phosphoryl group during the transfer. Two distal aspartate residues bind to the arginine network, causing the enzyme to fold and reducing its flexibility. A magnesium cofactor is also required, essential for increasing the electrophilicity of the phosphate on AMP, though this magnesium ion is only held in the active pocket by electrostatic interactions and dissociates easily.
Structure
Flexibility and plasticity allow proteins to bind to ligands, form oligomers, aggregate, and perform mechanical work. Large conformational changes in proteins play an important role in cellular signaling. Adenylate Kinase is a signal transducing protein; thus, the balance between conformations regulates protein activity. ADK has a locally unfolded state that becomes depopulated upon binding.
A 2007 study by Whitford et al. shows the conformations of ADK when binding with ATP or AMP. The study shows that there are three relevant conformations or structures of ADK—CORE, Open, and Closed. In ADK, there are two small domains called the LID and NMP. ATP binds in the pocket formed by the LID and CORE domains. AMP binds in the pocket formed by the NMP and CORE domains. The Whitford study also reported findings that show that localized regions of a protein unfold during conformational transitions. This mechanism reduces the strain and enhances catalytic efficiency. Local unfolding is the result of competing strain energies in the protein.
The local (thermodynamic) stability of the substrate-binding domains ATPlid and AMPlid has been shown to be significantly lower when compared with the CORE domain in ADK from E. coli. Furthermore, it has been shown that the two subdomains (ATPlid and AMPlid) can fold and unfold in a "non-cooperative manner." Binding of the substrates causes preference for 'closed' conformations amongst those that are sampled by ADK. These 'closed' conformations are hypothesized to help with removal of water from the active site to avoid wasteful hydrolysis of ATP in addition to helping optimize alignment of substrates for phosphoryl-transfer. Furthermore, it has been shown that the apoenzyme will still sample the 'closed' conformations of the ATPlid and AMPlid domains in the absence of substrates. When comparing the rate of opening of the enzyme (which allows for product release) and the rate of closing that accompanies substrate binding, closing was found to be the slower process.
Function
Metabolic monitoring
The ability for a cell to dynamically measure energetic levels provides it with a method to monitor metabolic processes. By continually monitoring and altering the levels of ATP and the other adenyl phosphates (ADP and AMP levels) adenylate kinase is an important regulator of energy expenditure at the cellular level. As energy levels change under different metabolic stresses adenylate kinase is then able to generate AMP; which itself acts as a signaling molecule in further signaling cascades. This generated AMP can, for example, stimulate various AMP-dependent receptors such as those involved in glycolytic pathways, K-ATP channels, and 5' AMP-activated protein kinase (AMPK). Common factors that influence adenine nucleotide levels, and therefore ADK activity are exercise, stress, changes in hormone levels, and diet. It facilitates decoding of cellular information by catalyzing nucleotide exchange in the intimate “sensing zone” of metabolic sensors.
ADK shuttle
Adenylate kinase is present in mitochondrial and myofibrillar compartments in the cell, and it makes two high-energy phosphoryls (β and γ) of ATP available to be transferred between adenine nucleotide molecules. In essence, adenylate kinase shuttles ATP to sites of high energy consumption and removes the AMP generated over the course of those reactions. These sequential phosphotransfer relays ultimately result in propagation of the phosphoryl groups along collections of ADK molecules. This process can be thought of as a bucket brigade of ADK molecules that results in changes in local intracellular metabolic flux without apparent global changes in metabolite concentrations. This process is extremely important for overall homeostasis of the cell.
Disease relevance
Nucleoside diphosphate kinase deficiency
Nucleoside diphosphate (NDP) kinase catalyzes in vivo ATP-dependent synthesis of ribo- and deoxyribonucleoside triphosphates. In mutated Escherichia coli that had a disrupted nucleoside diphosphate kinase, adenylate kinase performed dual enzymatic functions. ADK complements nucleoside diphosphate kinase deficiency.
AK1 and post-ischemic coronary reflow
Knock out of AK1 disrupts the synchrony between inorganic phosphate and turnover at ATP-consuming sites and ATP synthesis sites. This reduces the energetic signal communication in the post-ischemic heart and precipitates inadequate coronary reflow following ischemia-reperfusion.
ADK2 deficiency
Adenylate Kinase 2 (AK2) deficiency in humans causes hematopoietic defects associated with sensorineural deafness. Reticular dysgenesis is an autosomal recessive form of human combined immunodeficiency. It is also characterized by impaired lymphoid maturation and an early differentiation arrest in the myeloid lineage. AK2 deficiency results in absent or greatly reduced expression of the protein. AK2 is specifically expressed in the stria vascularis of the inner ear, which indicates why individuals with an AK2 deficiency will have sensorineural deafness.
Structural adaptations
AK1 genetic ablation decreases tolerance to metabolic stress. AK1 deficiency induces fiber-type specific variation in groups of transcripts in glycolysis and mitochondrial metabolism. This supports muscle energy metabolism.
Plastidial ADK deficiency in Arabidopsis thaliana
Enhanced growth and elevated photosynthetic amino acid is associated with plastidial adenylate kinase deficiency in Arabidopsis thaliana.
References
External links
Cellular respiration
EC 2.7.4 | Adenylate kinase | [
"Chemistry",
"Biology"
] | 2,189 | [
"Biochemistry",
"Cellular respiration",
"Metabolism"
] |
1,376,120 | https://en.wikipedia.org/wiki/Sackur%E2%80%93Tetrode%20equation | The Sackur–Tetrode equation is an expression for the entropy of a monatomic ideal gas.
It is named for Hugo Martin Tetrode (1895–1931) and Otto Sackur (1880–1914), who developed it independently as a solution of Boltzmann's gas statistics and entropy equations, at about the same time in 1912.
Formula
The Sackur–Tetrode equation expresses the entropy S of a monatomic ideal gas in terms of its thermodynamic state—specifically, its volume V, internal energy U, and the number of particles N:

\frac{S}{k_{\mathrm B}N} = \ln\!\left[\frac{V}{N}\left(\frac{4\pi m U}{3Nh^{2}}\right)^{3/2}\right] + \frac{5}{2}

where k_B is the Boltzmann constant, m is the mass of a gas particle and h is the Planck constant.
The equation can also be expressed in terms of the thermal wavelength \Lambda = h/\sqrt{2\pi m k_{\mathrm B}T}:

\frac{S}{k_{\mathrm B}N} = \ln\!\left(\frac{V}{N\Lambda^{3}}\right) + \frac{5}{2}
For a derivation of the Sackur–Tetrode equation, see the Gibbs paradox. For the constraints placed upon the entropy of an ideal gas by thermodynamics alone, see the ideal gas article.
The above expressions assume that the gas is in the classical regime and is described by Maxwell–Boltzmann statistics (with "correct Boltzmann counting"). From the definition of the thermal wavelength, this means the Sackur–Tetrode equation is valid only when

\frac{V}{N\Lambda^{3}} \gg 1 .
The entropy predicted by the Sackur–Tetrode equation approaches negative infinity as the temperature approaches zero.
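As a numerical check of the expressions above, the following sketch evaluates the equation for one mole of argon at room conditions; the choice of gas and conditions is only an example, and the physical constants are the exact 2019 SI values.

```python
import math

k_B = 1.380649e-23     # J/K   (exact)
h   = 6.62607015e-34   # J*s   (exact)
N_A = 6.02214076e23    # 1/mol (exact)
R   = k_B * N_A        # molar gas constant

def molar_entropy(molar_mass_kg, T, p):
    """Sackur-Tetrode molar entropy of a monatomic ideal gas at T (K), p (Pa)."""
    m = molar_mass_kg / N_A                          # mass of one atom
    lam = h / math.sqrt(2 * math.pi * m * k_B * T)   # thermal wavelength
    v = k_B * T / p                                  # V/N from the ideal gas law
    # crude check of the classical-regime condition V/(N*Lambda^3) >> 1
    assert v / lam**3 > 1e3, "outside the Maxwell-Boltzmann regime"
    return R * (math.log(v / lam**3) + 2.5)

# Argon (molar mass ~39.948 g/mol) at 298.15 K and 100 kPa:
print(f"{molar_entropy(39.948e-3, 298.15, 1e5):.1f} J/(mol*K)")   # ~154.8
```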
Sackur–Tetrode constant
The Sackur–Tetrode constant, written S0/R, is equal to S/kBN evaluated at a temperature of T = 1 kelvin, at standard pressure (100 kPa or 101.325 kPa, to be specified), for one mole of an ideal gas composed of particles of mass equal to the atomic mass constant (). Its 2018 CODATA recommended value is:
S0/R = for po = 100 kPa
S0/R = for po = 101.325 kPa.
Information-theoretic interpretation
In addition to the thermodynamic perspective of entropy, the tools of information theory can be used to provide an information perspective of entropy. In particular, it is possible to derive the Sackur–Tetrode equation in information-theoretic terms. The overall entropy is represented as the sum of four individual entropies, i.e., four distinct sources of missing information. These are positional uncertainty, momenta uncertainty, the quantum mechanical uncertainty principle, and the indistinguishability of the particles. Summing the four pieces recovers the Sackur–Tetrode equation given above.
The derivation uses Stirling's approximation, \ln N! \approx N\ln N - N. Strictly speaking, the use of dimensioned arguments to the logarithms is incorrect; however, their use is a "shortcut" made for simplicity. If each logarithmic argument were divided by an unspecified standard value expressed in terms of an unspecified standard mass, length and time, these standard values would cancel in the final result, yielding the same conclusion. The individual entropy terms will not be absolute, but will rather depend upon the standards chosen, and will differ with different standards by an additive constant.
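As one worked step of this derivation (assuming the standard treatment in which indistinguishability contributes a factor of 1/N! to the partition function), the indistinguishability term follows directly from Stirling's approximation:

```latex
S_{\text{indist}} = -k_{\mathrm B}\ln N!
  \approx -k_{\mathrm B}\left(N\ln N - N\right)
  = -k_{\mathrm B}N\left(\ln N - 1\right) .
```

Combined with the positional contribution, which is proportional to \ln V, this produces the extensive combination \ln(V/N) that appears in the equation above.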
References
Further reading
.
. (This derives a Sackur–Tetrode equation in a different way, also based on information.)
.
.
Equations of state
Ideal gas
Thermodynamic entropy | Sackur–Tetrode equation | [
"Physics"
] | 667 | [
"Thermodynamic systems",
"Physical quantities",
"Equations of physics",
"Statistical mechanics",
"Thermodynamic entropy",
"Physical systems",
"Entropy",
"Equations of state",
"Ideal gas"
] |
1,376,386 | https://en.wikipedia.org/wiki/Process%20Safety%20Management%20%28OSHA%20regulation%29 | Process Safety Management of Highly Hazardous Chemicals is a regulation promulgated by the U.S. Occupational Safety and Health Administration (OSHA). It defines and regulates a process safety management (PSM) program for plants using, storing, manufacturing, handling or carrying out on-site movement of hazardous materials above defined amount thresholds. Companies affected by the regulation usually build a compliant process safety management system and integrate it in their safety management system. Non-U.S. companies frequently choose on a voluntary basis to use the OSHA scheme in their business.
The PSM regulation was the culmination of a push for more comprehensive regulation of facilities storing and/or processing hazardous materials, which began in the wake of the 1984 Bhopal disaster. The regulation was promulgated by OSHA in 1992 in fulfilment of requirements set in the 1990 amendments to the Clean Air Act. The EPA followed suit with a similar and complementary regulation in 1996.
Compliance
Any U.S. facility that stores or uses a hazardous material above thresholds defined in section (a)(1) and Appendix A must comply with the PSM regulation. For individual chemical species listed in Appendix A, threshold quantities vary from as low as 100 lb (45 kg; e.g., methyl hydrazine, phosgene) to as much as 15,000 lb (6804 kg; e.g., ammonia solutions, methyl chloride). The threshold for flammable gases and liquids (the latter defined as having a flash point below 100°F or 37.8°C) is 10,000 lb (4536 kg).
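As a minimal illustration of how such a threshold screening might be automated, the sketch below reuses the quantities quoted above; the site inventory is hypothetical, and the code is a simplified sketch rather than a compliance determination (the regulation applies the thresholds per process, with further conditions not modeled here).

```python
# Threshold quantities (lb) quoted above from section (a)(1) and Appendix A.
THRESHOLDS_LB = {
    "methyl hydrazine": 100,
    "phosgene": 100,
    "ammonia solutions": 15000,
    "methyl chloride": 15000,
}
FLAMMABLE_THRESHOLD_LB = 10000  # flammable gases and low-flash-point liquids

def psm_covered(inventory_lb, flammables_lb=0):
    """Return substances whose on-site quantity meets or exceeds its threshold."""
    hits = {name: qty for name, qty in inventory_lb.items()
            if qty >= THRESHOLDS_LB.get(name, float("inf"))}
    if flammables_lb >= FLAMMABLE_THRESHOLD_LB:
        hits["flammable gas/liquid"] = flammables_lb
    return hits

site = {"phosgene": 150, "methyl chloride": 9000}   # hypothetical amounts, lb
print(psm_covered(site, flammables_lb=12000))
# -> {'phosgene': 150, 'flammable gas/liquid': 12000}
```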
Usually, these facilities are also subject to another, similar regulation issued by the Environmental Protection Agency (EPA), known as the Risk Management Program (RMP) rule (Title 40 CFR Part 68). The Center for Chemical Process Safety (CCPS) of the American Institute of Chemical Engineers (AIChE) publishes guidelines for building PSM systems that comply and exceed OSHA's expectations. These include for example guidelines on process safety documentation and implementing process safety management systems.
Process Safety Management elements
The Process Safety Management program is divided into 14 "elements":
Employee participation
Process safety information
Process hazard analysis
Operating procedures
Training
Contractors
Pre-startup safety review
Mechanical integrity
Hot work permit
Management of change
Incident investigation
Emergency planning and response
Compliance audits
Trade secrets
All the elements are interlinked and interdependent. Every element either contributes information to other elements for the completion or utilizes information from other elements in order to be completed.
Employee participation
Under PSM, employers must consult with employees and their representatives on the conduct and development of process hazard analyses and on the development of the other elements of process management, and they must provide to employees and their representatives access to process hazard analyses and to all other information required to be developed by the standard. Employee participation in process safety activities and processes helps the organization build a positive climate of collaboration across management and workers, which sustains in turn a strong process safety culture.
Process safety information
Process safety information (PSI) refers to key documentation for identifying and understanding the hazards posed by the plant activities involving highly hazardous chemicals. In order to be in compliance with the OSHA PSM regulation, process safety information should include information pertaining to three areas: hazardous chemicals used or produced, technology of the process, and equipment in the process.
Information pertaining to the material hazards (which is usually collected in dedicated Material Safety Data Sheets [MSDS]) shall consist of at least:
Toxicity information
Permissible exposure limits
Physical data
Reactivity data
Corrosivity data
Thermal and chemical stability data
Hazardous effects of inadvertent mixing of different materials that could foreseeably occur
Information pertaining to the technology of the process shall include at least:
A block flow diagram or simplified process flow diagram
Process chemistry and its properties
Maximum intended inventory
Safety upper and lower limits for such items as temperatures, pressures, flows or compositions
An evaluation of the consequences of deviations, including those affecting the safety and health of the employees
Information pertaining to the equipment in the process should include the following:
Materials of construction
Piping and instrumentation diagrams (P&IDs)
Electrical classification
Relief system design and design basis
Ventilation system design
Design codes and standards employed
Material and energy balances
Safety system (for example interlocks, detection and suppression systems)
The employer shall document that equipment complies with "recognized and generally accepted good engineering practices" (RAGAGEP).
Process hazard analysis
A process hazard analysis (PHA) (or process hazard evaluation) is an exercise for the identification of hazards of a process facility and the qualitative or semi-quantitative assessment of the associated risk. A PHA provides information intended to assist managers and employees in making decisions for improving safety and reducing the consequences of unwanted or unplanned releases of hazardous materials. A PHA is directed toward analyzing potential causes and consequences of fires, explosions, releases of toxic or flammable chemicals and major spills of hazardous chemicals, and it focuses on equipment, instrumentation, utilities, human actions, and external factors that might impact the process.
This element has been called "the heart of the program", as it "impacts or interfaces with all of the other elements". PHA relies on availability and completeness of process safety information; it requires employee participation in order to be effective; it may impact operating procedure through its findings and recommendations; it must be embedded in any management-of-change process and any pre-start-up safety review.
There are varieties of methodologies that can be used to conduct a PHA, including checklists, Preliminary Hazard Analysis (PreHA), Hazard Identification (HAZID) reviews, What-If reviews and SWIFT, Hazard and Operability (HAZOP) studies, Failure Mode and Effect Analysis (FMEA), etc.
Operating procedures
Operating procedures must be consistent with the process safety information and provide clear instructions for safely conducting activities involving hazardous materials. To ensure that a ready and up-to-date reference is available, and to form a foundation for needed employee training, operating procedures must be readily accessible to employees who work in or maintain a process. They must address at least the following elements:
Steps for each operating phase: initial startup, normal operations, temporary operations, emergency shutdown, emergency operations, normal shutdown, and startup following a planned or emergency shutdown.
Operating limits: consequences of deviation, and steps required to correct or avoid deviation.
Safety and health considerations: properties and hazards of the chemicals used in the process, precautions (including engineering controls, administrative controls, and personal protective equipment), control measures to be taken if physical contact or airborne exposure occurs, quality control for raw materials and control of hazardous chemical inventory levels, any special or unique hazards, and safety systems (e.g., interlocks, detection or suppression).
The operating procedures must be reviewed as often as necessary to ensure that they reflect current operating practices, including changes in process chemicals, technology, equipment, and facilities. To guard against outdated or inaccurate operating procedures, the employer must certify annually that these operating procedures are current and accurate.
It is mandatory that the following activities be covered in dedicated operating procedures: lockout/tagout, confined space entry, opening process equipment or piping, and control over entrance into a facility by maintenance, contractor, laboratory, or other support personnel. These safe work practices must apply both to employees and to contractor employees.
Training
Training relevant to PSM must include emphasis on the specific safety and health hazards of the process, emergency operations including shutdown, and other safe work practices that apply to the employee’s job tasks. The regulation distinguishes between two types of training relevant to PSM, i.e. initial training and refresher training. Training records must be kept and maintained.
Contractors
Contractor management is important in any safety management system, including process safety management programs. A contracting company has to be mindful that outsourced personnel are not necessarily aware of the work site hazards and/or the way the contracting company manages those hazards. Contractors may also introduce new hazards to the plant.
OSHA's PSM includes special provisions for contractors and their employees to emphasize the importance of everyone taking care that they do nothing to endanger those working nearby who may work for another employer. The contracting party must obtain and evaluate the contractor's safety performance and programs, inform the contracted personnel of the relevant fire, explosion, or toxic release hazards, explain to them the applicable provisions of the emergency action plan, evaluate periodically their performance in fulfilling their obligations, and maintain a contract employee injury and illness log. The contracted company must ensure that its employees have sufficient relevant training for the contracted job, ensure that its employees are instructed in the relevant site process hazards and the applicable provisions of the emergency action plan, document that they have received and understood required training, keep a record of key information about the contracted employees on the job and the activities carried out, and ensure that each contracted employee follows the safety rules of the facility.
Pre-startup safety review
A pre-startup safety review (PSSR) shall take place before any highly hazardous material is introduced into a process, i.e. before the plant start-up. The requirement applies to new facilities and modified ones, when the modification causes changes in the process safety information. The review must confirm that:
Construction and equipment are in accordance with design specifications.
Safety, operating, maintenance, and emergency procedures are in place and are adequate.
A process hazard analysis has been performed for new facilities and recommendations have been resolved or implemented.
Modified facilities meet the management of change requirements.
Training of each employee involved in operating a process has been completed.
Mechanical integrity
In the context of OSHA's PSM, mechanical integrity requirements apply to the following equipment:
Pressure vessels and storage tanks.
Piping systems (including piping components such as valves).
Relief and vent systems and devices.
Emergency shutdown systems.
Controls (including monitoring devices and sensors, alarms, and interlocks).
Pumps.
In order to minimize the risk of unwanted releases of hazardous materials, companies must establish and implement adequate maintenance strategies.
PSM schemes other than OSHA's usually extend this element to cover the integrity assurance of safety-critical systems in general, not just those directly responsible for fluid containment, according to a wider asset integrity management strategy that includes systems such as active and passive fire protection, fire and gas detection, sources of emergency power, etc.
Hot work permit
Among several safety systems of work relevant to hazardous process plants, OSHA's PSM singles out the permit-to-work for hot work as arguably the most critical for the prevention of major process safety accidents. Hot work provides ignition sources to potential flammable vapors, which can cause fires and/or explosions. The permit must document that the fire prevention and protection requirements in OSHA regulations have been implemented prior to beginning the hot work operations. It must indicate the date(s) authorized for hot work and identify the object on which hot work is to be performed. The permit must be kept on file until completion of the hot work.
Management of change
Undocumented and improperly risk-assessed changes to a plant handling hazardous materials are a recipe for disaster. A prominent example of this is the Flixborough disaster, where improvised changes involving the bypassing of a stage in a reactor train were at the origin of the accident. The change had not been properly thought out, documented and risk-assessed, so the possibility of a breach of containment had not been identified. Changes to a process must be thoroughly evaluated to fully assess their impact on employee safety and health and to determine needed changes to operating procedures. Written procedures to manage changes (except for "replacements in kind") to process chemicals, technology, equipment, and procedures must be established and implemented. The documentation must cover, at a minimum:
The technical basis for the change.
Impact of the change on safety and health.
Modifications to operating procedures.
Necessary time period for the change.
Authorization requirements.
Employees who operate a process and maintenance and contract employees whose job tasks will be affected by a change in the process must be informed of, and trained in, the change.
Incident investigation
Incident investigation provides a fundamental opportunity to learn from past mistakes and disseminate the new knowledge gathered throughout the organization and, if possible, to external stakeholders. Accordingly, thorough internal investigation of incidents to identify the chain of events and causes is crucial to OSHA's PSM. Investigation must be initiated as promptly as possible, not later than 48 hours following the incident. OSHA establishes requirements for the investigation team selection and the content of the investigation report, which has to conclude with a series of relevant lessons learnt in the form of recommendations. These shall be tracked and closed out accordingly.
Emergency planning and response
The consequences of an accident can be significantly reduced with effective emergency planning and response. By way of example, the response to the Tacoa disaster was largely unorganized and uninformed about the nature of the fire that was burning inside a fuel oil tank. As a result, the responders, as well as scores of bystanders and media workers, stayed well within the area impacted by the violent boilover that took place, which resulted in the death of more than 150 people. Additionally, robust emergency management helps an organization safeguard its public image in case of accidents. Accordingly, the PSM regulation mandates that emergency preparedness arrangements be put in place, including emergency pre-planning and training to make employees aware of, and able to execute, proper actions. The plan must comply with the provisions of other OSHA rules (29 CFR 1910.38).
Compliance audits
Similar to incident investigation, audits are an important tool an organization can use to assess whether its process safety management system is in place and it is effectively applied throughout its ranks. To be certain process safety management is effective, employers subject to the PSM regulation must certify by way of audits that they have evaluated compliance with the provisions of PSM at least every three years. The compliance audit must be conducted by at least one person knowledgeable in the process and a report of the findings of the audit must be developed and documented noting deficiencies that have been corrected.
Trade secrets
OSHA's PSM is the only major process safety management code to include trade secrets among its elements. Emphasis is given in the regulation to the fact that trade secrets may in principle restrict circulation of key information in several ambits of process safety management, such as process safety information, compliance audits, operating procedures, process hazard analysis, incident investigation, etc. The regulation makes it compulsory for organizations to release the information to the respective parties, irrespective of whether it is protected by trade secrecy. Nothing in PSM, however, precludes the employer from requiring those persons to enter into confidentiality agreements not to disclose the information.
See also
Process safety
Safety management systems
References
Process Safety Management (OSHA regulation) | Process Safety Management (OSHA regulation) | [
"Chemistry",
"Engineering"
] | 3,051 | [
"Chemical process engineering",
"Safety engineering",
"Process safety"
] |
1,376,805 | https://en.wikipedia.org/wiki/Laminate%20flooring | Laminate flooring (also called floating wood tile in the United States) is a multi-layer synthetic flooring product fused together with a lamination process. Laminate flooring simulates wood (or sometimes stone) with a photographic appliqué layer under a clear protective layer. The inner core layer is usually composed of melamine resin and fiber board materials. There is a European Standard No. EN 13329:2000 specifying laminate floor covering requirements and testing methods.
Laminate flooring has grown significantly in popularity, perhaps because it may be easier to install and maintain than more traditional surfaces such as hardwood flooring. It may also have the advantages of costing less and requiring less skill to install than alternative flooring materials. It is reasonably durable, hygienic (several brands contain an antimicrobial resin), and relatively easy to maintain.
Installation
Laminate floors are reasonably easy for a DIY homeowner to install. Laminate flooring is packaged as a number of tongue and groove planks, which can be clicked into one another. Sometimes a glue backing is provided for ease of installation. Installed laminate floors typically "float" over the sub-floor on top of a foam/film underlayment, which provides moisture- and sound-reducing properties. A small () gap is required between the flooring and any immovable object such as walls; this allows the flooring to expand without being obstructed. Baseboards (skirting boards) can be removed and then reinstalled after laying of the flooring is complete for a neater finish, or the baseboard can be left in place with the flooring butted into it, then small beading trims such as shoe moulding or the larger quarter-round moulding can be fitted to the bottoms of the baseboards. Saw cuts on the planks are usually required at edges and around cupboard and door entrances, but professional installers typically use door jamb undercut saws to cut out a space to a height that allows the flooring to go under the door jamb and casing for a cleaner look.
Improper installation can result in peaking, in which adjacent boards form an inverted V shape projecting from the floor, or gaps, in which two adjacent boards are separated from each other.
Care
It is important to keep laminate clean, as dust, dirt, and sand particles may scratch the surface over time in high-traffic areas. It is also important to keep laminate relatively dry, since sitting water/moisture can cause the planks to swell, warp, etc., though some brands are equipped with water-resistant coatings. Water spills are not a problem if they are wiped up quickly, and not allowed to sit for a prolonged period of time.
Adhesive felt pads are often placed on the feet of furniture on laminate floors to prevent scratching.
Inferior glueless laminate floors may gradually become separated, creating visible gaps between planks. It is important to "tap" the planks back together with the appropriate tool as soon as gaps are noticed, because dirt that settles in the gaps makes the planks more difficult to close back up.
Quality glueless laminate floors use joining mechanisms which hold the planks together under constant tension which prevent dirt entering the joints and do not need "tapping" back together periodically.
Advocacy
The North American Laminate Flooring Association (NALFA) is a trade association of laminate flooring manufacturers and laminate flooring manufacturer suppliers in the United States and Canada. It is a standards developing organization accredited by the American National Standards Institute (ANSI) to develop voluntary consensus standards for laminate flooring materials, and it has established testing and performance criteria that are used in North America.
NALFA issues a certification mark named the NALFA Certification Seal which signifies that the product has passed 10 performance tests, has been proven to meet these standards by an independent, third-party testing lab, and has been manufactured in North America. The certification review includes:
Static Load – Measures the ability of laminate flooring to resist residual indentation resulting from a static load.
Thickness Swell – Measures the ability of laminate flooring to resist increase in thickness after being exposed to water.
Light Resistance – Measures the ability of laminate flooring to retain its color when exposed to a light source having a frequency range approximating sunlight through window glass. It is not intended to show the resistance to continuous exposure to outdoor weathering conditions.
Cleanability and Stain Resistance – Measures both the ease of cleanability and stain resistance of laminate flooring to common household substances.
Large Ball Resistance – Measures the ability of laminate flooring to resist fracture due to impact by a large diameter ball.
Small Ball Resistance – Measures the ability of laminate flooring to resist fracture due to impact by a small diameter ball.
Wear Resistance – Measures the ability of the surface of laminate flooring to resist abrasive wear through the décor layer.
Dimension Tolerance – Measures the dimensional variance between tiles of laminate flooring in a manufactured free standing (unrestricted) shape in respect to thickness, length, width, straightness and squareness.
Caster Chair Resistance – Specifies a method for determining the change of appearance and stability of a laminate floor, including joints, under the movement of a caster chair.
Surface Bond – Measures the force required to delaminate or split away the surface of laminate flooring plank or tile.
Potential health effects and LEED status
Laminate flooring is often made of melamine resin, a compound made with formaldehyde. The formaldehyde is more tightly bound in melamine formaldehyde (MF) than it is in urea-formaldehyde (UF), reducing emissions and potential health effects. Thus LEED v2.2's EQ Credit 4.4 precludes the use of UF, but allows the use of MF.
Laminated flooring is commonly used in LEED residential and commercial applications.
History
Laminate flooring was invented in 1977 by the Swedish company Perstorp, and sold under the brand name Pergo. The company had been making floor surfaces since 1923. It first marketed its product to Europe in 1984, and later to the United States in 1994. Perstorp spun off its flooring division as the separate company named Pergo, now a subsidiary of Mohawk Industries. Pergo is the most widely known laminate flooring manufacturer, but the trademark PERGO is not a synonym for laminate flooring in general.
Glueless laminate flooring was invented in 1996 by the Swedish company Välinge Aluminium (now Välinge Innovation) and sold under the names Alloc and Fiboloc. However, a comparable system for holding flooring panels together was developed in parallel by the Belgian company Unilin and released in 1997.
The two companies have been involved in numerous legal conflicts over the years, and today most, if not all, glueless locking flooring is made under license from Välinge, Unilin, or both.
References
Composite materials
Floors
Swedish inventions
| Laminate flooring | [
"Physics",
"Engineering"
] | 1,451 | [
"Structural engineering",
"Floors",
"Composite materials",
"Materials",
"Matter"
] |
1,377,241 | https://en.wikipedia.org/wiki/Phragm%C3%A9n%E2%80%93Lindel%C3%B6f%20principle | In complex analysis, the Phragmén–Lindelöf principle (or method), first formulated by Lars Edvard Phragmén (1863–1937) and Ernst Leonard Lindelöf (1870–1946) in 1908, is a technique which employs an auxiliary, parameterized function to prove the boundedness of a holomorphic function $f$ (i.e., that $|f(z)| \le M$ for some constant $M$) on an unbounded domain $\Omega$ when an additional (usually mild) condition constraining the growth of $|f|$ on $\Omega$ is given. It is a generalization of the maximum modulus principle, which is only applicable to bounded domains.
Background
In the theory of complex functions, it is known that the modulus (absolute value) of a holomorphic (complex differentiable) function in the interior of a bounded region is bounded by its modulus on the boundary of the region. More precisely, if a non-constant function $f$ is holomorphic in a bounded region $\Omega$ and continuous on its closure $\overline{\Omega}$, then $|f(z_0)| < \sup_{z \in \partial\Omega} |f(z)|$ for all $z_0 \in \Omega$. This is known as the maximum modulus principle. (In fact, since $\overline{\Omega}$ is compact and $|f|$ is continuous, there actually exists some $w \in \partial\Omega$ such that $|f(w)| = \sup_{z \in \partial\Omega} |f(z)|$.) The maximum modulus principle is generally used to conclude that a holomorphic function is bounded in a region after showing that it is bounded on its boundary.
However, the maximum modulus principle cannot be applied to an unbounded region of the complex plane. As a concrete example, let us examine the behavior of the holomorphic function $F(z) = \exp(\exp(z))$ in the unbounded strip
$S = \left\{ z : \left|\operatorname{Im} z\right| < \tfrac{\pi}{2} \right\}$.
Although $|F(z)| = 1$ on the boundary $\partial S$, so that $F$ is bounded there, $F(z)$ grows rapidly without bound when $z \to \infty$ along the positive real axis. The difficulty here stems from the extremely fast growth of $|F|$ along the positive real axis. If the growth rate of $|f|$ is guaranteed to not be "too fast," as specified by an appropriate growth condition, the Phragmén–Lindelöf principle can be applied to show that boundedness of $|f|$ on the region's boundary implies that $|f|$ is in fact bounded in the whole region, effectively extending the maximum modulus principle to unbounded regions.
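To make the boundary claim concrete, here is a short worked computation (our addition, based on the reconstructed example above): on the edges of the strip the inner exponential becomes purely imaginary, so the outer exponential has modulus one, while on the real axis the function blows up.
\[
z = x \pm i\tfrac{\pi}{2} \;\Longrightarrow\; e^{z} = \pm i\,e^{x},
\qquad
|F(z)| = \bigl| e^{\pm i e^{x}} \bigr| = e^{\operatorname{Re}(\pm i e^{x})} = e^{0} = 1,
\]
\[
\text{whereas on the real axis } F(x) = e^{e^{x}} \longrightarrow \infty \text{ as } x \to +\infty .
\]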
Outline of the technique
Suppose we are given a holomorphic function and an unbounded region , and we want to show that on . In a typical Phragmén–Lindelöf argument, we introduce a certain multiplicative factor satisfying to "subdue" the growth of . In particular, is chosen such that (i): is holomorphic for all and on the boundary of an appropriate bounded subregion ; and (ii): the asymptotic behavior of allows us to establish that for (i.e., the unbounded part of outside the closure of the bounded subregion). This allows us to apply the maximum modulus principle to first conclude that on and then extend the conclusion to all . Finally, we let so that for every in order to conclude that on .
In the literature of complex analysis, there are many examples of the Phragmén–Lindelöf principle applied to unbounded regions of differing types, and also a version of this principle may be applied in a similar fashion to subharmonic and superharmonic functions.
Example of application
To continue the example above, we can impose a growth condition on a holomorphic function that prevents it from "blowing up" and allows the Phragmén–Lindelöf principle to be applied. To this end, we now include the condition that
$|f(z)| \le \exp\!\bigl(A\,e^{c\,\left|\operatorname{Re} z\right|}\bigr)$ for some real constants $c < 1$ and $A$, for all $z \in S$. It can then be shown that $|f(z)| \le 1$ for all $z \in \partial S$ implies that $|f(z)| \le 1$ in fact holds for all $z \in S$. Thus, we have the following proposition:
Proposition. Let
$S = \left\{ z : \left|\operatorname{Im} z\right| < \tfrac{\pi}{2} \right\}, \qquad \overline{S} = \left\{ z : \left|\operatorname{Im} z\right| \le \tfrac{\pi}{2} \right\}.$
Let $f$ be holomorphic on $S$ and continuous on $\overline{S}$, and suppose there exist real constants $c < 1$ and $A$ such that
$|f(z)| \le \exp\!\bigl(A\,e^{c\,\left|\operatorname{Re} z\right|}\bigr)$ for all $z \in S$ and $|f(z)| \le 1$ for all $z \in \partial S$. Then $|f(z)| \le 1$ for all $z \in \overline{S}$.
Note that this conclusion fails when $c = 1$, precisely as the motivating counterexample in the previous section demonstrates. The proof of this statement employs a typical Phragmén–Lindelöf argument:
Proof: (Sketch) We fix and define for each the auxiliary function by . Moreover, for a given , we define to be the open rectangle in the complex plane enclosed within the vertices . Now, fix and consider the function . Because one can show that for all , it follows that for . Moreover, one can show for that uniformly as . This allows us to find an such that whenever and . Now consider the bounded rectangular region . We have established that for all . Hence, the maximum modulus principle implies that for all . Since also holds whenever and , we have in fact shown that holds for all . Finally, because as , we conclude that for all . Q.E.D.
Phragmén–Lindelöf principle for a sector in the complex plane
A particularly useful statement proved using the Phragmén–Lindelöf principle bounds holomorphic functions on a sector of the complex plane if it is bounded on its boundary. This statement can be used to give a complex analytic proof of the Hardy's uncertainty principle, which states that a function and its Fourier transform cannot both decay faster than exponentially.
Proposition. Let be a function that is holomorphic in a sector
of central angle , and continuous on its boundary. If
for , and
for all , where and , then holds also for all .
Remarks
The condition () can be relaxed to
with the same conclusion.
Special cases
In practice the point 0 is often transformed into the point ∞ of the Riemann sphere. This gives a version of the principle that applies to strips, for example bounded by two lines of constant real part in the complex plane. This special case is sometimes known as Lindelöf's theorem.
Carlson's theorem is an application of the principle to functions bounded on the imaginary axis.
See also
Hadamard three-lines theorem
References
(Correction, vol. 21, 1921).
(See chapter 5)
Mathematical principles
Theorems in complex analysis | Phragmén–Lindelöf principle | [
"Mathematics"
] | 1,191 | [
"Mathematical principles",
"Theorems in mathematical analysis",
"Theorems in complex analysis"
] |
1,377,405 | https://en.wikipedia.org/wiki/Zeckendorf%27s%20theorem | In mathematics, Zeckendorf's theorem, named after Belgian amateur mathematician Edouard Zeckendorf, is a theorem about the representation of integers as sums of Fibonacci numbers.
Zeckendorf's theorem states that every positive integer can be represented uniquely as the sum of one or more distinct Fibonacci numbers in such a way that the sum does not include any two consecutive Fibonacci numbers. More precisely, if $N$ is any positive integer, there exist positive integers $c_i \ge 2$, with $c_{i+1} > c_i + 1$, such that
$N = \sum_{i=0}^{k} F_{c_i},$
where $F_n$ is the $n$th Fibonacci number. Such a sum is called the Zeckendorf representation of $N$. The Fibonacci coding of $N$ can be derived from its Zeckendorf representation.
For example, the Zeckendorf representation of 64 is
$64 = 55 + 8 + 1$.
There are other ways of representing 64 as the sum of Fibonacci numbers, for example
$64 = 34 + 21 + 8 + 1$ and $64 = 55 + 5 + 3 + 1$,
but these are not Zeckendorf representations because 34 and 21 are consecutive Fibonacci numbers, as are 5 and 3.
For any given positive integer, its Zeckendorf representation can be found by using a greedy algorithm, choosing the largest possible Fibonacci number at each stage.
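A minimal Python sketch of this greedy algorithm is shown below; the function name and the choice to return the Fibonacci numbers themselves (rather than their indices) are our own, purely for illustration.

```python
def zeckendorf(n):
    """Zeckendorf representation of a positive integer n as a list of
    non-consecutive Fibonacci numbers in decreasing order."""
    if n <= 0:
        raise ValueError("n must be a positive integer")
    # Fibonacci numbers starting from F(2) = 1, F(3) = 2, ... (F(1) is not used).
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    representation, remainder = [], n
    # Greedily take the largest Fibonacci number that still fits.
    for f in reversed(fibs):
        if f <= remainder:
            representation.append(f)
            remainder -= f
    return representation

print(zeckendorf(64))   # [55, 8, 1]
print(zeckendorf(100))  # [89, 8, 3]
```

Because the remainder left after subtracting the largest Fibonacci number not exceeding n is necessarily smaller than the preceding Fibonacci number, the greedy choice automatically avoids consecutive Fibonacci numbers.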
History
While the theorem is named after Zeckendorf, who published his paper in 1972, the same result had been published 20 years earlier by Gerrit Lekkerkerker. As such, the theorem is an example of Stigler's law of eponymy.
Proof
Zeckendorf's theorem has two parts:
Existence: every positive integer has a Zeckendorf representation.
Uniqueness: no positive integer has two different Zeckendorf representations.
The first part of Zeckendorf's theorem (existence) can be proven by induction. For $n = 1, 2, 3$ it is clearly true (as these are Fibonacci numbers), and for $n = 4$ we have $4 = 3 + 1$. If $n$ is a Fibonacci number then there is nothing to prove. Otherwise there exists $j$ such that $F_j < n < F_{j+1}$. Now suppose each positive integer $a < n$ has a Zeckendorf representation (induction hypothesis) and consider $a = n - F_j$. Since $a < n$, $a$ has a Zeckendorf representation by the induction hypothesis. At the same time, $a = n - F_j < F_{j+1} - F_j = F_{j-1}$ (we apply the definition of Fibonacci number in the last equality), so the Zeckendorf representation of $a$ does not contain $F_{j-1}$, and hence also does not contain $F_j$. As a result, $n$ can be represented as the sum of $F_j$ and the Zeckendorf representation of $a$, such that the Fibonacci numbers involved in the sum are distinct.
The second part of Zeckendorf's theorem (uniqueness) requires the following lemma:
Lemma: The sum of any non-empty set of distinct, non-consecutive Fibonacci numbers whose largest member is $F_j$ is strictly less than the next larger Fibonacci number $F_{j+1}$.
The lemma can be proven by induction on $j$.
Now take two non-empty sets and of distinct non-consecutive Fibonacci numbers which have the same sum, . Consider sets and which are equal to and from which the common elements have been removed (i. e. and ). Since and had equal sum, and we have removed exactly the elements from from both sets, and must have the same sum as well, .
Now we will show by contradiction that at least one of and is empty. Assume the contrary, i. e. that and are both non-empty and let the largest member of be and the largest member of be . Because and contain no common elements, . Without loss of generality, suppose . Then by the lemma, , and, by the fact that , , whereas clearly . This contradicts the fact that and have the same sum, and we can conclude that either or must be empty.
Now assume (again without loss of generality) that is empty. Then has sum 0, and so must . But since can only contain positive integers, it must be empty too. To conclude: which implies , proving that each Zeckendorf representation is unique.
Fibonacci multiplication
One can define the following operation on natural numbers $a$, $b$: given the Zeckendorf representations
$a = \sum_{i} F_{c_i}\ (c_i \ge 2)$ and $b = \sum_{j} F_{d_j}\ (d_j \ge 2)$ we define the Fibonacci product
$a \circ b = \sum_{i}\sum_{j} F_{c_i + d_j}.$
For example, the Zeckendorf representation of 2 is $F_3$, and the Zeckendorf representation of 4 is $F_4 + F_2$ ($F_1$ is disallowed from representations), so
$2 \circ 4 = F_{3+4} + F_{3+2} = 13 + 5 = 18.$
(The product is not always in Zeckendorf form. For example, )
A simple rearrangement of sums shows that this is a commutative operation; however, Donald Knuth proved the surprising fact that this operation is also associative.
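The following Python sketch computes this product directly from the definition above, assuming the usual indexing F(2) = 1, F(3) = 2, and so on; the helper functions are our own illustrative choices, not part of any standard library.

```python
def fib(i):
    """F(i) with F(0) = 0, F(1) = F(2) = 1, F(3) = 2, ..."""
    a, b = 0, 1
    for _ in range(i):
        a, b = b, a + b
    return a

def zeckendorf_indices(n):
    """Zeckendorf representation of n as a list of Fibonacci indices (all >= 2)."""
    fibs = [0, 1]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    indices, remainder = [], n
    for i in range(len(fibs) - 1, 1, -1):
        if fibs[i] <= remainder:
            indices.append(i)
            remainder -= fibs[i]
    return indices

def fibonacci_product(a, b):
    """Sum of F(c_i + d_j) over the Zeckendorf indices of a and b."""
    return sum(fib(ci + dj)
               for ci in zeckendorf_indices(a)
               for dj in zeckendorf_indices(b))

print(fibonacci_product(2, 4))   # 18, since 2 = F(3) and 4 = F(4) + F(2)
```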
Representation with negafibonacci numbers
The Fibonacci sequence can be extended to negative index $n$ using the rearranged recurrence relation
$F_{n-2} = F_n - F_{n-1},$
which yields the sequence of "negafibonacci" numbers satisfying
$F_{-n} = (-1)^{n+1} F_n.$
Any integer can be uniquely represented as a sum of negafibonacci numbers in which no two consecutive negafibonacci numbers are used. For example:
0 is represented by the empty sum.
Note that $F_{-1} + F_{-2} = 1 + (-1) = 0$, for example, so the uniqueness of the representation does depend on the condition that no two consecutive negafibonacci numbers are used.
This gives a system of coding integers, similar to the representation of Zeckendorf's theorem. In the string representing the integer $x$, the $i$th digit is 1 if $F_{-i}$ appears in the sum that represents $x$; that digit is 0 otherwise. For example, 24 may be represented by the string 100101001, which has the digit 1 in places 9, 6, 4, and 1, because $24 = F_{-1} + F_{-4} + F_{-6} + F_{-9} = 1 + (-3) + (-8) + 34$. The integer $x$ is represented by a string of odd length if and only if $x > 0$.
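The following Python sketch constructs such a representation; the strategy (include F(−i) exactly when the remaining target cannot be reached using smaller indices alone) is one possible implementation we chose for illustration, not a canonical algorithm from the literature.

```python
def fib(i):
    a, b = 0, 1
    for _ in range(i):
        a, b = b, a + b
    return a

def negafib(i):
    """F(-i) = (-1)**(i+1) * F(i): 1, -1, 2, -3, 5, -8, 13, -21, 34, ..."""
    return (-1) ** (i + 1) * fib(i)

def reachable(j):
    """Smallest and largest sums of non-consecutive negafibonacci numbers F(-1)..F(-j)."""
    lo = sum(negafib(i) for i in range(2, j + 1, 2))  # even indices are negative
    hi = sum(negafib(i) for i in range(1, j + 1, 2))  # odd indices are positive
    return lo, hi

def negafibonacci_indices(n):
    """Indices i (no two consecutive) such that n == sum of F(-i)."""
    N = 2
    while not (reachable(N)[0] <= n <= reachable(N)[1]):
        N += 1
    indices, target, i = [], n, N
    while target != 0:
        lo, hi = reachable(i - 1)
        if lo <= target <= hi:
            i -= 1                      # F(-i) is not needed
        else:
            indices.append(i)           # F(-i) is forced; index i-1 becomes unusable
            target -= negafib(i)
            i -= 2
    return indices

print(negafibonacci_indices(24))   # [9, 6, 4, 1]: 34 - 8 - 3 + 1 = 24
print(negafibonacci_indices(-11))  # [6, 4]: -8 - 3 = -11
```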
See also
Complete sequence
Fibonacci coding
Fibonacci nim
Ostrowski numeration
References
External links
Zeckendorf's theorem at cut-the-knot
Fibonacci numbers
Theorems in number theory
Articles containing proofs | Zeckendorf's theorem | [
"Mathematics"
] | 1,165 | [
"Mathematical theorems",
"Recurrence relations",
"Fibonacci numbers",
"Golden ratio",
"Theorems in number theory",
"Mathematical relations",
"Articles containing proofs",
"Mathematical problems",
"Number theory"
] |
1,378,439 | https://en.wikipedia.org/wiki/Whisker%20%28metallurgy%29 | Metal whiskering is a phenomenon that occurs in electrical devices when metals form long whisker-like projections over time. Tin whiskers were noticed and documented in the vacuum tube era of electronics early in the 20th century in equipment that used pure, or almost pure, tin solder in their production. It was noticed that small metal hairs or tendrils grew between metal solder pads, causing short circuits. Metal whiskers form in the presence of compressive stress. Germanium, zinc, cadmium, and even lead whiskers have been documented. Many techniques are used to mitigate the problem, including changes to the annealing process (heating and cooling), the addition of elements like copper and nickel, and the inclusion of conformal coatings. Traditionally, lead has been added to slow down whisker growth in tin-based solders.
Following the Restriction of Hazardous Substances Directive (RoHS), the European Union banned the use of lead in most consumer electronic products from 2006 due to health problems associated with lead and the "high-tech trash" problem, leading to a re-focusing on the issue of whisker formation in lead-free solders.
Mechanism
Metal whiskering is a crystalline metallurgical phenomenon involving the spontaneous growth of tiny, filiform hairs from a metallic surface. The effect is primarily seen on elemental metals but also occurs with alloys.
The mechanism behind metal whisker growth is not well understood, but seems to be encouraged by compressive mechanical stresses including:
energy gained due to electrostatic polarization of metal filaments in the electric field,
residual stresses caused by electroplating,
mechanically induced stresses,
stresses induced by diffusion of different metals,
thermally induced stresses, and
strain gradients in materials.
Metal whiskers differ from metallic dendrites in several respects: dendrites are fern-shaped and grow across the surface of the metal, while metal whiskers are hair-like and project normal to the surface. Dendrite growth requires moisture capable of dissolving the metal into a solution of metal ions, which are then redistributed by electromigration in the presence of an electromagnetic field. While the precise mechanism for whisker formation remains unknown, it is known that whisker formation does not require either dissolution of the metal or the presence of an electromagnetic field.
Effects
Whiskers can cause short circuits and arcing in electrical equipment. The phenomenon was discovered by telephone companies in the late 1940s and it was later found that the addition of lead to tin solder provided mitigation. The European Restriction of Hazardous Substances Directive (RoHS), which took effect on July 1, 2006, restricted the use of lead in various types of electronic and electrical equipment. This has driven the use of lead-free alloys with a focus on preventing whisker formation. Others have focused on the development of oxygen-barrier coatings to prevent whisker formation.
Airborne zinc whiskers have been responsible for increased system failure rates in computer server rooms. Zinc whiskers grow from galvanized (electroplated) metal surfaces at a rate of up to a millimeter per year with a diameter of a few micrometers. Whiskers can form on the underside of zinc electroplated floor tiles on raised floors. These whiskers can then become airborne within the floor plenum when the tiles are disturbed, usually during maintenance. Whiskers can be small enough to pass through air filters and can settle inside equipment, resulting in short circuits and system failure.
Tin whiskers do not have to be airborne to damage equipment, as they are typically already growing directly in the environment where they can produce short circuits, i.e., the electronic equipment itself. At frequencies above 6 GHz or in fast digital circuits, tin whiskers can act like miniature antennas, affecting the circuit impedance and causing reflections. In computer disk drives they can break off and cause head crashes or bearing failures. Tin whiskers often cause failures in relays and have been found upon examination of failed relays in nuclear power facilities. Pacemakers have been recalled due to tin whiskers. Research has also identified a particular failure mode for tin whiskers in vacuum (such as in space), where in high-power components a short-circuiting tin whisker is ionized into a plasma that is capable of conducting hundreds of amperes of current, massively increasing the damaging effect of the short circuit. The possible increase in the use of pure tin in electronics due to the RoHS directive drove JEDEC and IPC to release a tin whisker acceptance testing standard and mitigation practices guideline intended to help manufacturers reduce the risk of tin whiskers in lead-free products.
Silver whiskers often appear in conjunction with a layer of silver sulfide, which forms on the surface of silver electrical contacts operating in an atmosphere rich in hydrogen sulfide and high humidity. Such atmospheres can exist in sewage treatment plants and paper mills.
Whiskers over 20 μm in length were observed on gold-plated surfaces and noted in a 2003 NASA internal memorandum.
The effects of metal whiskering were chronicled on History Channel's program Engineering Disasters 19.
Mitigation and elimination
Several approaches are used to reduce or eliminate whisker growth, with ongoing research in the area.
Conformal coatings
Conformal compound coatings stop the whiskers from penetrating a barrier, reaching a nearby termination and forming a short.
Altering plating chemistry
Termination finishes of nickel, gold or palladium have been shown to eliminate whisker formation in controlled trials.
Tin whisker examples and incidents
Galaxy IV
Galaxy IV was a telecommunications satellite that was disabled and lost due to short circuits caused by tin whiskers in 1998. It was initially thought that space weather contributed to the failure, but it was later discovered that a conformal coating had been misapplied, allowing whiskers formed in the pure tin plating to find their way through a missing coating area, causing a failure of the main control computer. The manufacturer, Hughes, has moved to nickel plating, rather than tin, to reduce the risk of whisker growth; the trade-off has been an increase in weight per payload.
Millstone Nuclear Power Plant
On April 17, 2005, the Millstone Nuclear Power Plant in Connecticut was shut down due to a "false alarm" that indicated an unsafe pressure drop in the reactor's steam system when the steam pressure was actually nominal. The false alarm was caused by a tin whisker that short circuited the logic board responsible for monitoring the steam pressure lines in the power plant.
Toyota accelerator position sensors false positive
In September 2011, three NASA investigators claimed that they identified tin whiskers on the accelerator position sensors of sampled Toyota Camry models that could contribute to the "stuck accelerator" crashes affecting certain Toyota models during 2005–2010. This contradicted an earlier 10-month joint investigation by the National Highway Traffic Safety Administration (NHTSA) and a large group of other NASA researchers that found no electronic defects.
In 2012, NHTSA maintained: "We do not believe that tin whiskers are a plausible explanation for these incidents...[the likely cause was] pedal misapplication."
Toyota also maintains that tin whiskers were not the cause of any stuck accelerator issues: "In the words of U.S. Transportation Secretary Ray LaHood, 'The verdict is in. There is no electronic-based cause for unintended high-speed acceleration in Toyotas. Period.'" According to a Toyota press release, "no data indicates that tin whiskers are more prone to occur in Toyota vehicles than any other vehicle in the marketplace." Toyota also states that "their systems are designed to reduce the risk that tin whiskers will form in the first place."
See also
Monocrystalline whisker
Dendrite (metal)
Crystal growth
Gold-aluminium intermetallic
Impurity
References
External links
Images of silver whiskers NASA
Electronic engineering
Metallurgy | Whisker (metallurgy) | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 1,655 | [
"Computer engineering",
"Metallurgy",
"Materials science",
"Electronic engineering",
"nan",
"Electrical engineering"
] |
1,379,266 | https://en.wikipedia.org/wiki/Ring-imaging%20Cherenkov%20detector | The ring-imaging Cherenkov, or RICH, detector is a device for identifying the type of an electrically charged subatomic particle of known momentum, that traverses a transparent refractive medium, by measurement of the presence and characteristics of the Cherenkov radiation emitted during that traversal. RICH detectors were first developed in the 1980s and are used in high energy elementary particle-, nuclear- and astro-physics experiments.
Ring-imaging Cherenkov (RICH) detector
Origins
The ring-imaging detection technique was first proposed by Jacques Séguinot and Tom Ypsilantis, working at CERN in 1977. Their research and development of high-precision single-photon detectors and related optics laid the foundations for the design, development and construction of the first large-scale particle physics RICH detectors, at CERN's OMEGA facility
and the LEP (Large Electron–Positron Collider) DELPHI experiment.
Principles
A ring-imaging Cherenkov (RICH) detector allows the identification of electrically charged subatomic particle types through the detection of the Cherenkov radiation emitted (as photons) by the particle in traversing a medium with refractive index $n$ > 1. The identification is achieved by measurement of the angle of emission, $\theta_c$, of the Cherenkov radiation, which is related to the charged particle's velocity $v$ by
$\cos \theta_c = \frac{c}{n\,v},$
where $c$ is the speed of light.
Knowledge of the particle's momentum and direction (normally available from an associated momentum-spectrometer) allows a predicted $\theta_c$ for each hypothesis of the particle's type; using the known refractive index $n$ of the RICH radiator gives a corresponding prediction of $\theta_c$ that can be compared to the $\theta_c$ of the detected Cherenkov photons, thus indicating the particle's identity (usually as a probability per particle type). A typical (simulated) distribution of $\theta_c$ vs momentum of the source particle, for single Cherenkov photons, produced in a gaseous radiator (n~1.0005, angular resolution~0.6 mrad) is shown in the following Fig.1:
The different particle types follow distinct contours of constant mass, smeared by the effective angular resolution of the RICH detector; at higher momenta each particle emits a number of Cherenkov photons which, taken together, give a more precise measure of the average $\theta_c$ than does a single photon (see Fig.3 below), allowing effective particle separation to extend beyond 100 GeV in this example.
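As a numerical illustration of the curves in Fig.1, the Python sketch below evaluates the Cherenkov angle from cos θ_c = 1/(nβ) for a few particle hypotheses at a given momentum; the particle masses and the radiator index n = 1.0005 follow the values quoted in the text, while the function names and structure are our own.

```python
import math

MASSES_GEV = {"pion": 0.13957, "kaon": 0.49368, "proton": 0.93827}  # GeV/c^2

def cherenkov_angle(p_gev, mass_gev, n=1.0005):
    """Cherenkov angle in radians, or None if the particle is below threshold (n*beta <= 1)."""
    beta = p_gev / math.sqrt(p_gev ** 2 + mass_gev ** 2)
    if n * beta <= 1.0:
        return None
    return math.acos(1.0 / (n * beta))

for name, mass in MASSES_GEV.items():
    theta = cherenkov_angle(22.0, mass)
    if theta is None:
        print(f"{name:7s}: below Cherenkov threshold at 22 GeV/c")
    else:
        print(f"{name:7s}: theta_c = {1e3 * theta:5.2f} mrad")
```

Run at 22 GeV/c, this reproduces the situation described for Fig.2 below: pions and kaons radiate at clearly different angles while protons are still below threshold.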
This particle identification is essential for the detailed understanding of the intrinsic physics of the structure and interactions of elementary particles. The essence of the ring-imaging method is to devise an optical system with single-photon detectors that can isolate the Cherenkov photons that each particle emits, to form a single "ring image" from which an accurate $\theta_c$ can be determined.
A polar plot of the Cherenkov angles of photons associated with a 22 GeV/c particle in a radiator with $n$ = 1.0005 is shown in Fig.2; both pion and kaon are illustrated; protons are below Cherenkov threshold, $\beta < 1/n$, producing no radiation in this case (which would also be a very clear signal of particle type = proton, since fluctuations in the number of photons follow Poisson statistics about the expected mean, so that the probability of e.g. a 22 GeV/c kaon producing zero photons when ~12 were expected is very small; $e^{-12}$ or 1 in 162755). The number of detected photons shown for each particle type is, for illustration purposes, the average for that type in a RICH having a photon response $N_c$ ~ 25 (see below). The distribution in azimuth is random between 0 and 360 degrees; the distribution in $\theta_c$ is spread with RMS angular resolution ~ 0.6 milliradians.
Note that, because the points of emission of the photons can be at any place on the (normally straight line) trajectory of the particle through the radiator, the emerging photons occupy a light-cone in space.
In a RICH detector the photons within this light-cone pass through an optical system and impinge upon a position sensitive photon detector. With a suitably focusing optical system this allows reconstruction of a ring, similar to that above in Fig.2, the radius of which gives a measure of the Cherenkov emission angle .
The resolving power of this method is illustrated by comparing the Cherenkov angle per photon, see the first plot, Fig.1 above, with the mean Cherenkov angle per particle (averaged over all photons emitted by that particle) obtained by ring-imaging, shown in Fig.3; the greatly enhanced separation between particle types is very clear.
Optical Precision and Response
This ability of a RICH system to successfully resolve different hypotheses for the particle type depends on two principal factors, which in turn depend upon the listed sub-factors;
The effective angular resolution per photon, $\sigma_\theta$
Chromatic dispersion in the radiator ($n$ varies with photon frequency)
Aberrations in the optical system
Position resolution of the photon detector
The maximum number of detected photons in the ring-image, $N_c$
The length of radiator through which the particle travels
Photon transmission through the radiator material
Photon transmission through the optical system
Quantum efficiency of the photon detectors
$\sigma_\theta$ is a measure of the intrinsic optical precision of the RICH detector. $N_c$ is a measure of the optical response of the RICH; it can be thought of as the limiting case of the number of actually detected photons produced by a particle whose velocity approaches that of light, averaged over all relevant particle trajectories in the RICH detector. The average number of Cherenkov photons detected, for a slower particle, of charge $q$ (normally ±1), emitting photons at angle $\theta_c$ is then
$N = N_c \, q^2 \, \frac{\sin^2 \theta_c}{\sin^2 \theta_c^{\mathrm{max}}}$ (where $\theta_c^{\mathrm{max}} = \arccos(1/n)$ is the Cherenkov angle at $\beta = 1$),
and the precision with which the mean Cherenkov angle can be determined with these photons is approximately
$\sigma_{\langle\theta_c\rangle} = \frac{\sigma_\theta}{\sqrt{N}},$
to which the angular precision of the emitting particle's measured direction must be added in quadrature, if it is not negligible compared to $\sigma_{\langle\theta_c\rangle}$.
Particle Identification
Given the known momentum of the emitting particle and the refractive index of the radiator, the expected Cherenkov angle for each particle type can be predicted, and its difference from the observed mean Cherenkov angle calculated. Dividing this difference by $\sigma_{\langle\theta_c\rangle}$ then gives a measure of the 'number of sigma' deviation of the hypothesis from the observation, which can be used in computing a probability or likelihood for each possible hypothesis. The following Fig.4 shows the 'number of sigma' deviation of the kaon hypothesis from a true pion ring image (π not k) and of the pion hypothesis from a true kaon ring image (k not π), as a function of momentum, for a RICH with $n$ = 1.0005, $N_c$ = 25, $\sigma_\theta$ = 0.64 milliradians;
Also shown are the average number of detected photons from pions(Ngπ) or from kaons(Ngk). One can see that the RICH's ability to separate the two particle types exceeds 4-sigma everywhere between threshold and 80 GeV/c, finally dropping below 3-sigma at about 100 GeV.
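The same ingredients can be combined into a rough estimate of the separation power plotted in Fig.4. The sketch below is our own construction: it uses the n = 1.0005, N_c = 25 and σ_θ = 0.64 mrad figures quoted above, assumes the photon yield scales as sin²θ_c, and divides the π–K angle difference by the per-track resolution σ_θ/√N.

```python
import math

def theta_c(p, m, n=1.0005):
    beta = p / math.sqrt(p * p + m * m)
    return math.acos(1.0 / (n * beta)) if n * beta > 1.0 else None

def pi_k_separation(p, n=1.0005, n_c=25.0, sigma_photon=0.64e-3):
    """Approximate pi/K separation in 'number of sigma' at momentum p (GeV/c)."""
    t_pi, t_k = theta_c(p, 0.13957, n), theta_c(p, 0.49368, n)
    if t_pi is None or t_k is None:
        return None                      # one species below threshold: trivially separable
    theta_max = math.acos(1.0 / n)
    n_photons = n_c * math.sin(t_pi) ** 2 / math.sin(theta_max) ** 2
    sigma_track = sigma_photon / math.sqrt(n_photons)
    return abs(t_pi - t_k) / sigma_track

for p in (30.0, 80.0, 100.0):
    print(f"{p:5.1f} GeV/c: {pi_k_separation(p):4.1f} sigma")
```

With these inputs the estimate stays above 4 sigma up to roughly 80 GeV/c and falls to about 3 sigma near 100 GeV/c, in line with the description of Fig.4 above.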
It is important to note that this result is for an 'ideal' detector, with homogeneous acceptance and efficiency, normal error distributions and zero background. No such detector exists, of course, and in a real experiment much more sophisticated procedures are actually used to account for those effects; position dependent acceptance and efficiency; non-Gaussian error distributions; non negligible and variable event-dependent backgrounds.
In practice, for the multi-particle final states produced in a typical collider experiment, separation of kaons from other final state hadrons, mainly pions, is the most important purpose of the RICH. In that context the two most vital RICH functions, which maximise signal and minimise combinatorial backgrounds, are its ability to correctly identify a kaon as a kaon and its ability not to misidentify a pion as a kaon. The related probabilities, which are the usual measures of signal detection and background rejection in real data, are plotted in Fig.5 below to show their variation with momentum (simulation with 10% random background);
Note that the ~30% π → k misidentification rate at 100 GeV is, for the most part, due to the presence of 10% background hits (faking photons) in the simulated detector; the 3-sigma separation in the mean Cherenkov angle (shown in Fig.4 above) would, by itself, only account for about 6% misidentification. More detailed analyses of the above type, for operational RICH detectors, can be found in the published literature.
For example, the LHCb experiment at the CERN LHC studies, amongst other B-meson decays, the particular process B0 → π+π−. The following Fig.6 shows, on the left, the π+π− mass distribution without RICH identification, where all particles are assumed to be π; the B0 → π+π− signal of interest is the turquoise-dotted line and is completely swamped by background due to B and Λ decays involving kaons and protons, and combinatorial background from particles not associated with the B0 decay.
On the right are the same data with RICH identification used to select only pions and reject kaons and protons; the B0 → π+π− signal is preserved but all kaon- and proton-related backgrounds are greatly reduced, so that the overall B0 signal/background has improved by a factor ~ 6, allowing much more precise measurement of the decay process.
RICH Types
Both focusing and proximity-focusing detectors are in use (Fig.7). In a focusing RICH detector, the photons are collected by a spherical mirror with focal length $f$ and focused onto the photon detector placed at the focal plane. The result is a circle with a radius $r = f\,\theta_c$, independent of the emission point along the particle's track. This scheme is suitable for low refractive index radiators (i.e., gases) with their larger radiator length needed to create enough photons.
In the more compact proximity-focusing design a thin radiator volume emits a cone of Cherenkov light which traverses a small distance, the proximity gap, and is detected on the photon detector plane. The image is a ring of light the radius of which is defined by the Cherenkov emission angle and the proximity gap. The ring thickness is mainly determined by the thickness of the radiator. An example of a proximity gap RICH detector is the High Momentum Particle Identification (HMPID), one of the detectors of ALICE (A Large Ion Collider Experiment), which is one of the five experiments at the LHC (Large Hadron Collider) at CERN.
In a DIRC (Detection of Internally Reflected Cherenkov light, Fig.8), another design of a RICH detector, light that is captured by total internal reflection inside the solid radiator reaches the light sensors at the detector perimeter, the precise rectangular cross section of the radiator preserving the angular information of the Cherenkov light cone. One example is the DIRC of the BaBar experiment at SLAC.
The LHCb experiment on the Large Hadron Collider, Fig.9, uses two RICH detectors for differentiating between pions and kaons. The first (RICH-1) is located immediately after the Vertex Locator (VELO) around the interaction point and is optimised for low-momentum particles and the second (RICH-2) is located after the magnet and particle-tracker layers and optimised for higher-momentum particles.
The Alpha Magnetic Spectrometer device AMS-02, Fig.10, recently mounted on the International Space Station uses a RICH detector in combination with other devices to analyze cosmic rays.
References
Particle detectors | Ring-imaging Cherenkov detector | [
"Technology",
"Engineering"
] | 2,433 | [
"Particle detectors",
"Measuring instruments"
] |
1,379,315 | https://en.wikipedia.org/wiki/Time-of-flight%20detector | A time-of-flight (TOF) detector is a particle detector which can discriminate between a lighter and a heavier elementary particle of the same momentum using their time of flight between two scintillators. The first of the scintillators activates a clock upon being hit while the other stops the clock upon being hit. If the two masses are denoted by $m_1$ and $m_2$ and have velocities $v_1$ and $v_2$ then the time of flight difference is given by
$\Delta t = \frac{L}{v_1} - \frac{L}{v_2} = \frac{L}{c}\left(\frac{1}{\beta_1} - \frac{1}{\beta_2}\right) \approx \frac{L\,c\,(m_1^2 - m_2^2)}{2 p^2},$
where $L$ is the distance between the scintillators. The approximation is in the relativistic limit at momentum $p \gg m_{1,2}\,c$ and $c$ denotes the speed of light in vacuum.
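A small numerical illustration of this relation (our own, with an arbitrary 3 m flight path) shows why time-of-flight separation of pions and kaons is practical at low momentum but degrades quadratically as the momentum grows.

```python
import math

C = 299_792_458.0          # speed of light, m/s

def tof(length_m, p_gev, mass_gev):
    """Time of flight in seconds for momentum p (GeV/c) and mass (GeV/c^2)."""
    beta = p_gev / math.sqrt(p_gev ** 2 + mass_gev ** 2)
    return length_m / (beta * C)

L = 3.0                    # assumed distance between the two scintillators, m
for p in (1.0, 2.0, 4.0):
    dt = tof(L, p, 0.49368) - tof(L, p, 0.13957)   # kaon minus pion
    print(f"p = {p:3.1f} GeV/c: delta t = {dt * 1e12:6.0f} ps")
```

At 1 GeV/c the difference is about a nanosecond, comfortably resolved by typical scintillator timing, while at 4 GeV/c it has already shrunk to a few tens of picoseconds.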
See also
Time-of-flight mass spectrometry
References
Particle detectors | Time-of-flight detector | [
"Physics",
"Technology",
"Engineering"
] | 141 | [
"Particle physics stubs",
"Particle detectors",
"Particle physics",
"Measuring instruments"
] |
1,379,448 | https://en.wikipedia.org/wiki/Vacuum%20grease | Vacuum grease is a lubricant with low volatility and is used for applications in low pressure environments. Lubricants with higher volatility would evaporate, causing two problems:
They would not be present to provide lubrication.
They would make lowering the pressure below their vapor pressure difficult.
As well as being a lubricant, vacuum grease is also used as a sealant for joints in vacuum systems. This is usually limited to soft vacuums, as ultra-high vacuum or high temperatures may cause problems with the grease outgassing. Grease is most commonly used with glass vacuum systems; all-metal systems usually use knife-edge seals in soft metals instead. Where O-ring seals are used, these should not be greased (in static seals at least), as grease can cause the O-rings to become permanently distorted when compressed.
In electronics manufacturing processes, vacuum grease is often used to prevent corrosion.
One of the early vacuum greases is the Ramsay grease.
Examples
Perfluoropolyether
References
Grease
Lubricants | Vacuum grease | [
"Physics",
"Engineering"
] | 214 | [
"Materials stubs",
"Vacuum",
"Materials",
"Vacuum systems",
"Matter"
] |
34,377,958 | https://en.wikipedia.org/wiki/Petri%20TTL | Petri TTL was a manual 35 mm SLR camera with TTL metering. It was built by Petri Camera Company, Japan, from 1974. It is unknown when the production stopped.
Features
The Petri TTL was a no-frills and very conservative camera. It was quite big and of heavy, all-metal construction. The only 'luxury' item found on the camera was a self-timer.
The camera was fully manual, with a built-in CdS light meter. The battery was only for the metering circuit. The user needed to push a button on the front of the camera to close the aperture, and then set the aperture ring on the lens to a value where the meter needle would fit inside a marker ring. After this, the user could let go of the button, and have full light in the viewfinder to compose the picture.
On release of the shutter, the aperture would close to the correct setting.
As soon as the film was wound forwards, the light meter would switch on. It was not possible to switch it off manually, so the only way to conserve battery would be to delay advancing the film until the next exposure. It was not possible to attach a winder or motor to the camera.
The shutter was a horizontal cloth-curtain focal-plane shutter with a speed range of 1/1 to 1/1000 second. As it was fully mechanical, the camera could be used even if the battery was dead. Flash sync was set for 1/60 second.
The release button was placed in an uncommon spot, halfway down the front of the camera. If the user used the middle finger for the shutter release, it was possible to have an unusually solid grip on the housing.
For reasons unknown, the shutter release button did not activate the self-timer: the timer had a separate release button that became available when the self-timer arm was cocked. Even with the self-timer ready, the camera could still be used in the normal mode.
There were a wide range of lenses, bellows and other accessories available, both from Petri and from third-party producers.
References
Anonymous. "Petri TTL instruction book" ©Petri Camera Company, inc.
Cameras by type
Single-lens reflex cameras
Products introduced in 1974 | Petri TTL | [
"Technology"
] | 459 | [
"System cameras",
"Single-lens reflex cameras"
] |
34,385,670 | https://en.wikipedia.org/wiki/Photoelectron%20photoion%20coincidence%20spectroscopy | Photoelectron photoion coincidence spectroscopy (PEPICO) is a combination of photoionization mass spectrometry and photoelectron spectroscopy. It is largely based on the photoelectric effect. Free molecules from a gas-phase sample are ionized by incident vacuum ultraviolet (VUV) radiation. In the ensuing photoionization, a cation and a photoelectron are formed for each sample molecule. The mass of the photoion is determined by time-of-flight mass spectrometry, whereas, in current setups, photoelectrons are typically detected by velocity map imaging. Electron times-of-flight are three orders of magnitude smaller than those of ions, which allows electron detection to be used as a time stamp for the ionization event, starting the clock for the ion time-of-flight analysis. In contrast with pulsed experiments, such as REMPI, in which the light pulse must act as the time stamp, this allows to use continuous light sources, e.g. a discharge lamp or a synchrotron light source. No more than several ion–electron pairs are present simultaneously in the instrument, and the electron–ion pairs belonging to a single photoionization event can be identified and detected in delayed coincidence.
History
Brehm and von Puttkammer published the first PEPICO study on methane in 1967. In the early works, a fixed energy light source was used, and the electron detection was carried out using retarding grids or hemispherical analyzers: the mass spectra were recorded as a function of electron energy. Tunable vacuum ultraviolet light sources were used in later setups, in which fixed, mostly zero kinetic energy electrons were detected, and the mass spectra were recorded as a function of photon energy. Detecting zero kinetic energy or threshold electrons in threshold photoelectron photoion coincidence spectroscopy, TPEPICO, has two major advantages. Firstly, no kinetic energy electrons are produced in energy ranges with poor Franck–Condon factors in the photoelectron spectrum, but threshold electrons can still be emitted via other ionization mechanisms. Secondly, threshold electrons are stationary and can be detected with higher collection efficiencies, thereby increasing signal levels.
Threshold electron detection was first based on line-of-sight, i.e. a small positive field was applied towards the electron detector, and kinetic energy electrons with perpendicular velocities are stopped by small apertures. The inherent compromise between resolution and collection efficiency was resolved by applying velocity map imaging conditions. Most recent setups offer meV or better (0.1 kJ mol−1) resolution both in terms of photon energy and electron kinetic energy.
The 5–20 eV (500–2000 kJ mol−1, λ = 250–60 nm) energy range is of prime interest in valence photoionization. Widely tunable light sources are few and far between in this energy range. The only laboratory based one is the H2 discharge lamp, which delivers quasi-continuous radiation up to 14 eV. The few high resolution laser setups for this energy range are not easily tunable over several eV. Currently, VUV beamlines at third generation synchrotron light sources are the brightest and most tunable photon sources for valence ionization. The first high energy resolution PEPICO experiment at a synchrotron was the pulsed-field ionization setup at the Chemical Dynamics Beamline of the Advanced Light Source.
Overview
The primary application of TPEPICO is the production of internal energy selected ions to study their unimolecular dissociation dynamics as a function of internal energy. The electrons are extracted by a continuous electric field and are velocity map imaged depending on their initial kinetic energy. Ions are accelerated in the opposite direction and their mass is determined by time-of-flight mass spectrometry. The data analysis yields dissociation thresholds, which can be used to derive new thermochemistry for the sample.
The electron imager side can also be used to record photoionization cross sections, photoelectron energy and angular distributions. With the help of circularly polarized light, photoelectron circular dichroism (PECD) can be studied. A thorough understanding of PECD effects could help explain the homochirality of life. Flash pyrolysis can also be used to produce free radicals or intermediates, which are then characterized to complement e.g. combustion studies. In such cases, the photoion mass analysis is used to confirm the identity of the radical produced.
Photoelectron photoion coincidence spectroscopy can be used to shed light on reaction mechanisms, and can also be generalized to study double ionization in (photoelectron) photoion photoion coincidence ((PE)PIPICO), fluorescence using photoelectron photon coincidence (PEFCO), or photoelectron photoelectron coincidence (PEPECO). Times-of-flight of photoelectrons and photoions can be combined in a form of a map, which visualizes the dynamics of the dissociative ionization process. Ion–electron velocity vector correlation functions can be obtained in double imaging setups, in which the ion detector also delivers position information.
Energy selection
The relatively low intensity of the ionizing VUV radiation guarantees one-photon processes, in other words only one, fixed energy photon will be responsible for photoionization. The energy balance of photoionization comprises the internal energy and the adiabatic ionization energy of the neutral as well as the photon energy, the kinetic energy of the photoelectron and of the photoion. Because only threshold electrons are considered and the conservation of momentum holds, the last two terms vanish, and the internal energy of the photoion is known:
$E_{\mathrm{internal}}(\mathrm{ion}) = E_{\mathrm{internal}}(\mathrm{neutral}) + h\nu - \mathrm{IE}_{\mathrm{adiabatic}}.$
Scanning the photon energy corresponds to shifting the internal energy distribution of the parent ion. The parent ion sits in a potential energy well, in which the lowest energy exit channel often corresponds to the breaking of the weakest chemical bond, resulting in the formation of a fragment or daughter ion. A mass spectrum is recorded at every photon energy, and the fractional ion abundances are plotted to obtain the breakdown diagram. At low energies no parent ion is energetic enough to dissociate, and the parent ion corresponds to 100% of the ion signal. As the photon energy is increased, a certain fraction of the parent ions (in fact according to the cumulative distribution function of the neutral internal energy distribution) still has too little energy to dissociate, but some do. The parent ion fractional abundances decrease, and the daughter ion signal increases. At the dissociative photoionization threshold, E0, all parent ions, even the ones with initially 0 internal energy, can dissociate, and the daughter ion abundance reaches 100% in the breakdown diagram.
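A toy numerical model of such a breakdown diagram (entirely our own construction; the ionization energy, onset and the gamma-shaped thermal-energy distribution are placeholder values standing in for a real rovibrational density-of-states calculation) makes the mechanics explicit: the parent-ion fraction at each photon energy is simply the probability that the neutral's thermal energy is insufficient to lift the ion above the dissociation limit.

```python
import numpy as np

IE = 9.50   # assumed adiabatic ionization energy, eV (placeholder)
E0 = 10.40  # assumed dissociative photoionization onset, eV (placeholder)

rng = np.random.default_rng(1)
thermal = rng.gamma(shape=3.0, scale=0.02, size=200_000)   # toy neutral thermal energies, eV

for hv in np.arange(10.20, 10.46, 0.05):
    # Threshold electrons carry no kinetic energy, so E_ion = thermal + hv - IE,
    # and the ion dissociates once E_ion >= E0 - IE (equivalently thermal + hv >= E0).
    ion_internal = thermal + hv - IE
    parent = float(np.mean(ion_internal < E0 - IE))
    print(f"hv = {hv:5.2f} eV  parent = {parent:4.2f}  daughter = {1.0 - parent:4.2f}")
```

The printed fractions trace out the characteristic breakdown curve: the parent signal is complete well below the onset, falls over an interval set by the width of the thermal energy distribution, and vanishes at E0.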
If the potential energy well of the parent ion is shallow and the complete initial thermal energy distribution is broader than the depth of the well, the breakdown diagram can also be used to determine adiabatic ionization energies.
Data analysis
The data analysis becomes more demanding if there are competing parallel dissociation channels or if the dissociation at threshold is too slow to be observed on the time scale (several μs) of the experiment. In the first case, the slower dissociation channel will appear only at higher energies, an effect called competitive shift, whereas in the second, the resulting kinetic shift means that the fragmentation will only be observed at some excess energy, i.e. only when it is fast enough to take place on the experimental time scale. When several dissociation steps follow sequentially, the second step typically occurs at high excess energies: the system has much more internal energy than needed for breaking the weakest bond in the parent ion. Some of this excess energy is retained as internal energy of the fragment ion, some may be converted into the internal energy of the leaving neutral fragment (invisible to mass spectrometry) and the rest is released as kinetic energy, in that the fragments fly apart at some non-zero velocity.
More often than not, dissociative photoionization processes can be described within a statistical framework, similarly to the approach used in collision-induced dissociation experiments. If the ergodic hypothesis holds, the system will explore each region of the phase space with a probability according to its volume. A transition state (TS) can then be defined in the phase space, which connects the dissociating ion with the dissociation products, and the dissociation rates for the slow or competing dissociations can be expressed in terms of the TS phase space volume vs. the total phase space volume. The total phase space volume is calculated in a microcanonical ensemble using the known energy and the density of states of the dissociating ion. There are several approaches to defining the transition state, the most widely used being RRKM theory. The unimolecular dissociation rate curve as a function of energy, k(E), vanishes below the dissociative photoionization energy, E0.
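As a concrete, deliberately simplified illustration of such a microcanonical rate calculation, the sketch below counts harmonic vibrational states with the Beyer–Swinehart algorithm and evaluates k(E) = N‡(E − E0) / (h ρ(E)). The frequencies and barrier are made-up numbers; a real RRKM treatment would also handle rotations, symmetry numbers and anharmonicity.

```python
import numpy as np

C_CM = 2.99792458e10   # speed of light in cm/s; converts counts per cm^-1 into s^-1

def state_counts(freqs_cm, e_max_cm, grain=1.0):
    """Beyer-Swinehart direct count: harmonic vibrational states per energy grain."""
    n_bins = int(e_max_cm / grain) + 1
    counts = np.zeros(n_bins)
    counts[0] = 1.0                                # the vibrational ground state
    for nu in freqs_cm:
        step = int(round(nu / grain))
        for i in range(step, n_bins):
            counts[i] += counts[i - step]
    return counts

def rrkm_rate(e_cm, e0_cm, freqs_ion, freqs_ts, grain=1.0):
    """k(E) in s^-1 for one channel: TS sum of states over h times the ion density of states."""
    if e_cm <= e0_cm:
        return 0.0
    density = state_counts(freqs_ion, e_cm, grain)[-1] / grain     # states per cm^-1 at E
    ts_sum = state_counts(freqs_ts, e_cm - e0_cm, grain).sum()     # TS states up to E - E0
    return C_CM * ts_sum / density

# Made-up frequencies (cm^-1) and barrier, purely to show the steep rise of k(E):
freqs_ion = [300, 500, 800, 1000, 1200, 1500, 2900, 3000, 3100]
freqs_ts = [200, 400, 700, 900, 1100, 2900, 3000, 3100]           # reaction coordinate removed
for e in (16000, 20000, 25000):
    print(f"E = {e} cm^-1: k = {rrkm_rate(e, 15000, freqs_ion, freqs_ts):.2e} s^-1")
```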
Statistical theory can also be used in the microcanonical formalism to describe the excess energy partitioning in sequential dissociation steps, as proposed by Klots for a canonical ensemble. Such a statistical approach was used for more than a hundred systems to determine accurate dissociative photoionization onsets, and derive thermochemical information from them.
Furthermore, algorithms based on probabilistic Bayesian analyses are known to considerably reduce systematic biases induced by false coincidences. The intensity of these false coincidences can be strong enough for them to appear as separate peaks in the signal and complicate the analysis of the spectra.
Thermochemical applications
Dissociative photoionization processes can be generalized as:
If the enthalpies of formation of two of the three species are known, the third can be calculated with the help of the dissociative photoionization energy, E0, using Hess's law. This approach was used, for instance, to determine the enthalpy of formation of the methyl ion, CH3+, which in turn was used to obtain the enthalpy of formation of iodomethane, as 15.23 kJ mol−1, with an uncertainty of only 0.3 kJ mol−1.
If different sample molecules produce shared fragment ions, a complete thermochemical chain can be constructed, as was shown for some methyl trihalides, where the uncertainty in e.g. the Halon-1021 heat of formation was reduced from 20 to 2 kJ mol−1. Furthermore, dissociative photoionization energies can be combined with calculated isodesmic reaction energies to build thermochemical networks. Such an approach was used to revise primary alkylamine enthalpies of formation.
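The bookkeeping behind such a determination is plain Hess's-law arithmetic, as in the sketch below; the reaction is written generically for AB + hν → A⁺ + B + e⁻ and the numbers are placeholders rather than evaluated literature values.

```python
EV_TO_KJ_PER_MOL = 96.485

def parent_enthalpy(e0_ev, dhf_fragment_ion, dhf_neutral_fragment):
    """Enthalpy of formation of the parent AB (kJ/mol) from the onset E0 of
    AB + hv -> A+ + B + e- and the fragment enthalpies (kJ/mol), using
    E0 = dHf(A+) + dHf(B) - dHf(AB)."""
    return dhf_fragment_ion + dhf_neutral_fragment - e0_ev * EV_TO_KJ_PER_MOL

# Placeholder numbers for a CH3I -> CH3+ + I type cycle (illustrative only):
print(f"{parent_enthalpy(12.25, 1100.0, 107.0):.1f} kJ/mol")
```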
See also
Covariance mapping
Photoelectric effect
References
External links
PEPICO endstation at the Swiss Light Source
DELICIOUS2: a PEPICO experiment at SOLEIL, France
PEPICO page at the University of the Pacific
Physical chemistry
Chemical kinetics
Thermochemistry
Spectroscopy
Mass spectrometry
Electron spectroscopy | Photoelectron photoion coincidence spectroscopy | [
"Physics",
"Chemistry"
] | 2,296 | [
"Chemical reaction engineering",
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Molecular physics",
"Thermochemistry",
"Electron spectroscopy",
"Instrumental analysis",
"Mass",
"Physical chemistry",
"Mass spectrometry",
"nan",
"Chemical kinetics",
"Spectroscopy",
... |
6,717,182 | https://en.wikipedia.org/wiki/Ceramic%20glaze | Ceramic glaze, or simply glaze, is a glassy coating on ceramics. It is used for decoration, to ensure the item is impermeable to liquids and to minimize the adherence of pollutants.
Glazing renders earthenware impermeable to water, sealing the inherent porosity of earthenware. It also gives a tougher surface. Glaze is also used on stoneware and porcelain. In addition to their functionality, glazes can form a variety of surface finishes, including degrees of glossy or matte finish and color. Glazes may also enhance the underlying design or texture either unmodified or inscribed, carved or painted.
Most pottery produced in recent centuries has been glazed, other than pieces in bisque porcelain, terracotta, and some other types. Tiles are often glazed on the surface face, and modern architectural terracotta is often glazed. Glazed brick is also common. Sanitaryware is invariably glazed, as are many ceramics used in industry, for example ceramic insulators for overhead power lines.
The most important groups of traditional glazes, each named after its main ceramic fluxing agent, are:
Ash glaze, traditionally important in East Asia, simply made from wood or plant ash, which contains potash and lime.
Feldspathic glazes of porcelain.
Lead glazes, plain or coloured, are glossy and transparent after firing, which needs only a relatively low temperature. They have been used for about 2,000 years in China e.g. sancai, around the Mediterranean, and in Europe e.g. Victorian majolica.
Salt-glaze, mostly European stoneware. It uses ordinary salt.
Tin-glaze, which coats the ware with lead glaze made opaque white by the addition of tin. Known in the Ancient Near East and then important in Islamic pottery, from which it passed to Europe. Includes Hispano-Moresque ware, Italian Renaissance maiolica (also called majolica), faience and Delftware.
Glaze may be applied by spraying, dipping, trailing or brushing on an aqueous suspension of the unfired glaze. The colour of a glaze after it has been fired may be significantly different from before firing. To prevent glazed wares sticking to kiln furniture during firing, either a small part of the object being fired (for example, the foot) is left unglazed or, alternatively, special refractory "spurs" are used as supports. These are removed and discarded after the firing.
History
Historically, glazing of ceramics developed rather slowly, as appropriate materials needed to be discovered, and also firing technology able to reliably reach the necessary temperatures was needed. Glazes first appeared on stone materials in the 4th millennium BC, and Ancient Egyptian faience (fritware rather than a clay-based material) was self-glazing, as the material naturally formed a glaze-like layer during firing. Glazing of pottery followed the invention of glass around 1500 BC, in the Middle East and Egypt with alkali glazes including ash glaze, and in China, using ground feldspar. By around 100 BC lead-glazing was widespread in the Old World.
Glazed brick goes back to the Elamite Temple at Chogha Zanbil, dated to the 13th century BC. The Iron Pagoda, built in 1049 in Kaifeng, China, of glazed bricks is a well-known later example.
Lead glazed earthenware was probably made in China during the Warring States period (475 – 221 BC), and its production increased during the Han dynasty. High temperature proto-celadon glazed stoneware was made earlier than glazed earthenware, since the Shang dynasty (1600 – 1046 BCE).
During the Kofun period of Japan, Sue ware was decorated with greenish natural ash glazes. From 552 to 794 AD, differently colored glazes were introduced. The three colored glazes of the Tang dynasty were frequently used for a period, but were gradually phased out; the precise colors and compositions of the glazes have not been recovered. Natural ash glaze, however, was commonly used throughout the country.
In the 13th century, flower designs were painted with red, blue, green, yellow and black overglazes. Overglazes became very popular because of the particular look they gave ceramics.
From the eighth century, the use of glazed ceramics was prevalent in Islamic art and Islamic pottery, usually in the form of elaborate pottery. Tin-opacified glazing was one of the earliest new technologies developed by the Islamic potters. The first Islamic opaque glazes can be found as blue-painted ware in Basra, dating to around the 8th century. Another significant contribution was the development of stoneware, originating from 9th century Iraq. Other places for innovative pottery in the Islamic world included Fustat (from 975 to 1075), Damascus (from 1100 to around 1600) and Tabriz (from 1470 to 1550).
Composition
Glazes need to include a ceramic flux which functions by promoting partial liquefaction in the clay bodies and the other glaze materials. Fluxes lower the high melting point of the glass formers silica, and sometimes boron trioxide.
Raw materials for ceramic glazes generally include silica, which will be the main glass former. Various metal oxides, such as those of sodium, potassium and calcium, act as flux and therefore lower the melting temperature. Alumina, often derived from clay, stiffens the molten glaze to prevent it from running off the piece. Colorants, such as iron oxide, copper carbonate or cobalt carbonate, and sometimes opacifiers including tin oxide and zirconium oxide, are used to modify the visual appearance of the fired glaze.
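One common way potters and glaze chemists reason quantitatively about such compositions is the Seger, or unity molecular formula (UMF), calculation: the weight fraction of each oxide is converted to moles and rescaled so that the flux oxides sum to one. The Python sketch below illustrates the arithmetic with a simplified, invented oxide analysis; it is not a recipe or a material specification.

```python
MOLAR_MASS = {"SiO2": 60.08, "Al2O3": 101.96, "CaO": 56.08, "K2O": 94.20, "Na2O": 61.98}
FLUXES = {"CaO", "K2O", "Na2O"}

# Hypothetical fired-glaze analysis, weight percent of each oxide:
oxide_wt_pct = {"SiO2": 62.0, "Al2O3": 13.0, "CaO": 12.0, "K2O": 8.0, "Na2O": 5.0}

moles = {oxide: wt / MOLAR_MASS[oxide] for oxide, wt in oxide_wt_pct.items()}
flux_total = sum(m for oxide, m in moles.items() if oxide in FLUXES)
unity_formula = {oxide: m / flux_total for oxide, m in moles.items()}

for oxide, value in sorted(unity_formula.items()):
    print(f"{oxide:6s} {value:5.2f}")
# Ratios such as SiO2:Al2O3 from this table are compared against empirical
# limit charts to judge whether a recipe is likely to melt into a stable glaze.
```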
Process
Most commonly, glazes in aqueous suspension of various powdered minerals and metal oxides are applied by dipping pieces directly into the glaze. Other techniques include pouring the glaze over the piece, spraying it onto the piece with an airbrush or similar tool, or applying it directly with a tool such as a brush. Though mostly obsolete, salt glaze pottery is another form of glazing. Dry-dusting a mixture over the surface of the clay body or inserting salt or soda into the kiln at high temperatures creates an atmosphere rich in sodium vapor. This interacts with the aluminium and silicon oxides in the body to form and deposit glass.
To prevent the glazed article from sticking to the kiln during firing, either a small part of the item is left unglazed, or it is supported on small refractory supports such as kiln spurs and stilts. The supports are then removed and discarded after the firing. Small marks left by these spurs are sometimes visible on finished ware.
Colour and decoration
Underglaze decoration is applied before the glaze, usually to unfired pottery ("raw" or "greenware") but sometimes to "biscuit"-fired (an initial firing of some articles before the glazing and re-firing). A wet glaze—usually transparent—is applied over the decoration. The pigment fuses with the glaze, and appears to be underneath a layer of clear glaze; generally the body material used fires to a whitish colour. The best known type of underglaze decoration is the blue and white porcelain first produced in China, and then copied in other countries. The striking blue color uses cobalt as cobalt oxide or cobalt carbonate. However many of the imitative types, such as Delftware, have off-white or even brown earthenware bodies, which are given a white tin-glaze and either inglaze or overglaze decoration. With the English invention of creamware and other white-bodied earthenwares in the 18th century, underglaze decoration became widely used on earthenware as well as porcelain.
Overglaze decoration is applied on top of a fired layer of glaze, and generally uses colours in "enamel", essentially glass, which require a second firing at a relatively low temperature to fuse them with the glaze. Because it is only fired at a relatively low temperature, a wider range of pigments could be used in historic periods. Overglaze colors are low-temperature glazes that give ceramics a more decorative, glassy look. A piece is fired first, this initial firing being called the glost firing, then the overglaze decoration is applied, and it is fired again. Once the piece is fired and comes out of the kiln, its texture is smoother due to the glaze.
Other methods are firstly inglaze, where the paints are applied onto the glaze before firing, and then become incorporated within the glaze layer during firing. This works well with tin-glazed pottery, such as maiolica, but the range of colours was limited to those that could withstand a glost firing, as with underglaze. Coloured glazes, where the pigments are mixed into the liquid glaze before it is applied to the pottery, are mostly used to give a single colour to a whole piece, as in most celadons, but can also be used to create designs in contrasting colours, as in Chinese sancai ("three-colour") wares, or even painted scenes.
Many historical styles, for example Japanese Imari ware, Chinese doucai and wucai, combine the different types of decoration. In such cases the first firing for the body, any underglaze decoration and glaze is typically followed by a second firing after the overglaze enamels have been applied.
Environmental impact
Heavy metals are dense metals used in glazes to produce a particular color or texture. Glaze components are more likely to be leached into the environment when non-recycled ceramic products are exposed to warm or acidic water. Leaching of heavy metals occurs when ceramic products are glazed incorrectly or damaged. Lead and chromium are two heavy metals which can be used in ceramic glazes that are heavily monitored by government agencies due to their toxicity and ability to bioaccumulate.
Metal oxide chemistry
Metals used in ceramic glazes are typically in the form of metal oxides.
Lead(II) oxide
Ceramic manufacturers primarily use lead(II) oxide (PbO) as a flux for its low melting range, wide firing range, low surface tension, high index of refraction, and resistance to devitrification. Lead used in the manufacture of commercial glazes is molecularly bound to silica in a 1:1 ratio, or included in frit form, to ensure stabilization and reduce the risk of leaching.
In polluted environments, nitrogen dioxide (NO2) reacts with water (H2O) to produce nitrous acid (HNO2) and nitric acid (HNO3):
H2O + 2 NO2 → HNO2 + HNO3
Soluble lead(II) nitrate (Pb(NO3)2) forms when the lead(II) oxide (PbO) in leaded glazes is exposed to nitric acid (HNO3):
PbO + 2 HNO3 → Pb(NO3)2 + H2O
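To make the leaching stoichiometry concrete, here is a minimal sketch in Python (not from the source; the atomic masses are standard reference values and the function names are illustrative) that computes how much soluble lead(II) nitrate could form per gram of PbO, assuming the reaction above goes to completion.

```python
# Illustrative stoichiometry for PbO + 2 HNO3 -> Pb(NO3)2 + H2O.
# Atomic masses in g/mol are standard reference values (rounded).
ATOMIC_MASS = {"Pb": 207.2, "N": 14.007, "O": 15.999, "H": 1.008}

def molar_mass(formula):
    """Molar mass of a compound given as an {element: count} dict."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

M_PBO = molar_mass({"Pb": 1, "O": 1})                    # ~223.2 g/mol
M_LEAD_NITRATE = molar_mass({"Pb": 1, "N": 2, "O": 6})   # Pb(NO3)2, ~331.2 g/mol

def lead_nitrate_from_pbo(grams_pbo):
    """Mass of Pb(NO3)2 formed if all the PbO reacts (1:1 mole ratio)."""
    return grams_pbo / M_PBO * M_LEAD_NITRATE

# 1 g of unreacted PbO in a poorly stabilized glaze corresponds to
# roughly 1.48 g of soluble Pb(NO3)2, i.e. about 0.93 g of dissolved lead.
print(f"{lead_nitrate_from_pbo(1.0):.2f} g Pb(NO3)2 per g PbO")
```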
Because lead exposure is strongly linked to a variety of health problems, collectively referred to as lead poisoning, the disposal of leaded glass (chiefly in the form of discarded CRT displays) and lead-glazed ceramics is subject to toxic waste regulations.
Barium carbonate and strontium carbonate
Barium carbonate (BaCO3) is used to create a unique glaze color known as barium blue. However, the safety of using barium carbonate in glazes on food contact surfaces has come into question. Barium poisoning by ingestion can result in convulsions, paralysis, digestive discomfort, and death. Barium carbonate is also somewhat soluble in acid, and can contaminate water and soil for long periods of time. These concerns have led to attempts to substitute strontium carbonate (SrCO3) in glazes that require barium carbonate. Unlike barium carbonate, strontium carbonate is not considered a safety hazard by the NIH. Experiments in strontium substitution tend to be successful in gloss-type glazes, although there are some effects and colors produced in matte-type glazes that can only be obtained through the use of barium.
To reduce the likelihood of leaching, barium carbonate is used in frit form and bound to silica in a 1:1 ratio. It is also recommended that barium glazes not be used on food contact surfaces or outdoor items.
Chromium(III) oxide
Chromium(III) oxide (Cr2O3) is used as a colorant in ceramic glazes. Chromium(III) oxide can undergo a reaction with calcium oxide (CaO) and atmospheric oxygen at temperatures reached by a kiln to produce calcium chromate (CaCrO4). The oxidation reaction changes chromium from its +3 oxidation state to its +6 oxidation state. Chromium(VI) is very soluble and the most mobile of the stable forms of chromium.
Cr2O3 + 2 CaO + 3/2 O2 → 2 CaCrO4
Chromium may enter water systems via industrial discharge. Chromium(VI) can enter the environment directly or oxidants present in soils can react with chromium(III) to produce chromium(VI). Plants have reduced amounts of chlorophyll when grown in the presence of chromium(VI).
Uranium(IV) oxide (UO2)
Urania-based ceramic glazes are dark green or black when fired in reduction or when UO2 is used; more commonly, uranium is used in oxidation to produce bright yellow, orange and red glazes. Uranium glazes were used in the 1920s and 1930s for making uranium tile and for watch, clock and aircraft dials.
Uranium dioxide is produced by reducing uranium trioxide with hydrogen.
UO3 + H2 → UO2 + H2O at 700 °C (973 K)
Prevention
Chromium oxidation during manufacturing processes can be reduced with the introduction of compounds that bind to calcium. Ceramic industries are reluctant to use lead alternatives since leaded glazes provide products with a brilliant shine and smooth surface. The United States Environmental Protection Agency has experimented with a dual-glaze barium alternative to lead, but was unsuccessful in achieving the same optical effect as leaded glazes.
Gallery
See also
Celadon
Frit
Glaze defects
Pottery#Glazing and firing techniques
Shino ware
Swatow ware
Uranium tile
Vitreous enamel
References
Bibliography
Painting techniques
Artistic techniques
Pottery
Glass applications
Glass compositions
Ceramic glazes
Ceramic engineering | Ceramic glaze | [
"Chemistry",
"Engineering"
] | 3,000 | [
"Glass chemistry",
"Glass compositions",
"Coatings",
"Ceramic engineering",
"Ceramic glazes"
] |
6,717,193 | https://en.wikipedia.org/wiki/Compacted%20oxide%20layer%20glaze | Compacted oxide layer glaze describes the often shiny, wear-protective layer of oxide formed when two metals (or a metal and ceramic) are slid against each other at high temperature in an oxygen-containing atmosphere. The layer forms on either or both of the surfaces in contact and can protect against wear.
Background
A less common definition of glaze is the highly sintered, compacted oxide layer formed when two metallic surfaces (or sometimes a metal surface and a ceramic surface) slide against each other at high temperatures (normally several hundred degrees Celsius) in oxidizing conditions. The sliding or tribological action generates oxide debris that can be compacted against one or both sliding surfaces and, under the correct conditions of load, sliding speed, oxide chemistry and (high) temperature, sinter together to form a 'glaze' layer. The 'glaze' formed in such cases is actually a crystalline oxide, with a very small crystal or grain size that has been shown to approach nano-scale levels. Such 'glaze' layers were originally thought to be amorphous oxides of the same form as ceramic glazes, hence the name 'glaze' is still used.
Such 'glazes' have attracted limited attention due to their ability to protect the metallic surfaces on which they may form, from wear under the high temperature conditions in which they are generated. This high temperature wear protection allows potential use at temperatures beyond the range of conventional hydrocarbon-based, silicone-based or even solid lubricants such as molybdenum disulfide (the latter useful up to about short term). Once they form, little further damage occurs unless there is a dramatic change in sliding conditions.
Such 'glazes' work by providing a mechanically resistant layer, which prevents direct contact between the two sliding surfaces. For example, when two metals slide against each other, there can be a high degree of adhesion between the surfaces. The adhesion may be sufficient to result in metallic transfer from one surface to the other (or removal and ejection of such material) - effectively adhesive wear (also referred to as severe wear). With the 'glaze' layer present, such severe adhesive interactions cannot occur and wear may be greatly reduced. The continued generation of oxidized debris during the more gradual wear that results (termed mild wear) can sustain the 'glaze' layer and maintain this low wear regime.
However, their potential application has been hampered as they have only successfully been formed under the very sliding conditions where they are meant to offer protection. A limited amount of sliding damage (referred to as 'run in wear' - actually a brief period of adhesive or severe wear) needs to occur before the oxides are generated and such 'glaze' layers can form. Efforts at encouraging their early formation have met with very limited success, and the damage inflicted during the 'run in' period is one factor preventing this technique being used for practical applications.
As oxide generated is effectively the result of the tribochemical decay of one or both of the metallic (or ceramic) surfaces in contact, the study of compacted oxide layer glazes is sometimes referred to as part of the more general field of high temperature corrosion.
The generation of oxides during high temperature sliding wear does not automatically lead to the production of a compacted oxide layer 'glaze'. Under certain conditions (potentially due to non-ideal conditions of sliding speed, load, temperature or oxide chemistry / composition), the oxide may not sinter together and instead the loose oxide debris may assist or enhance the removal of material by abrasive wear. A change in conditions may also see a switch from the formation of a loose, abrasive oxide to the formation of wear protective compacted oxide glaze layers and vice versa, or even the reappearance of adhesive or severe wear. Due to the complexities of the conditions controlling the types of wear observed, there have been a number of attempts to map types of wear with reference to sliding conditions in order to help better understand and predict them.
Potential uses
Due to the potential for wear protection at high temperatures beyond which conventional lubricants can be used, possible uses have been speculated in applications such as car engines, power generation and even aerospace, where there is an increasing demand for ever higher efficiency and thus operating temperature.
Compacted oxide layers at low temperatures
Compacted oxide layers can form due to sliding at low temperatures and offer some wear protection; however, in the absence of heat as a driving force (either due to frictional heating or higher ambient temperature), they cannot sinter together to form more protective 'glaze' layers.
See also
Tribology
Wear
References
I.A. Inman. Compacted Oxide Layer Formation under Conditions of Limited Debris Retention at the Wear Interface during High Temperature Sliding Wear of Superalloys, Ph.D. Thesis (2003), Northumbria University, (preview)
S.R. Rose – Studies of the High Temperature Tribological Behaviour of Superalloys, Ph.D. Thesis, AMRI, Northumbria University (2000)
P.D. Wood – The Effect of the Counterface on the Wear Resistance of Certain Alloys at Room Temperature and 750°C, Ph.D. Thesis, SERG, Northumbria University (1997)
J.F. Archard and W. Hirst – The Wear of Metals under Unlubricated Conditions, Proc Royal Society London, A 236 (1956) 397-410
J.F. Archard and W. Hirst – An Examination of a Mild Wear Process Proc. Royal Society London, A 238 (1957) 515-528
J.K. Lancaster – The Formation of Surface Films at the Transition Between Mild and Severe Metallic Wear, Proc. Royal Society London, A 273 (1962) 466-483
T.F.J. Quinn – Review of Oxidational Wear. Part 1: The Origins of Oxidational Wear Tribo. Int., 16 (1983) 257-270
I.A. Inman, P.K. Datta, H.L. Du, Q Luo, S. Piergalski – Studies of high temperature sliding wear of metallic dissimilar interfaces, Tribology International 38 (2005) 812–823 (Elsevier / Science Direct)
F.H. Stott, D.S. Lin and G.C. Wood – The Structure and Mechanism of Formation of the ‘Glaze’ Oxide Layers Produced on Nickel-Based Alloys during Wear at High Temperatures, Corrosion Science, Vol. 13 (1973) 449-469
F.H. Stott, J.Glascott and G.C. Wood – Models for the Generation of Oxides during Sliding Wear, Proc Royal Society London A 402 (1985) 167-186
F.H. Stott – The Role of Oxidation in the Wear of Alloys, Tribology International, 31 (1998) 61-71
F.H. Stott – High-Temperature Sliding Wear of Metals, Trib. Int., 35 (2002) 489-495
J. Jiang, F.H. Stott and M.M. Stack – A Mathematical Model for Sliding Wear of Metals at Elevated Temperatures, Wear 181 (1995) 20-31
T.F.J. Quinn – “Oxidational Wear”, Wear 18 (1971) 413-419
S.C. Lim – Recent Development in Wear Maps, Tribo. Int., Vol. 31, Nos. 1-3 (1998) 87-97
Metallurgy | Compacted oxide layer glaze | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,561 | [
"Metallurgy",
"Materials science",
"nan"
] |
25,685,037 | https://en.wikipedia.org/wiki/Journal%20of%20Bioinformatics%20and%20Computational%20Biology | The Journal of Bioinformatics and Computational Biology was founded in 2003 and is published by World Scientific. The journal covers analysis of cellular information, especially in the technical aspect. The managing editor is Limsoon Wong (National University of Singapore).
Abstracting and indexing
The journal is abstracted and indexed in:
Index Medicus
BIOSIS Previews
Biological Abstracts
MEDLINE
CompuScience
Scopus
Inspec
English-language journals
Academic journals established in 2003
Bioinformatics and computational biology journals
World Scientific academic journals | Journal of Bioinformatics and Computational Biology | [
"Biology"
] | 107 | [
"Bioinformatics",
"Bioinformatics and computational biology journals"
] |
25,685,354 | https://en.wikipedia.org/wiki/Journal%20of%20Circuits%2C%20Systems%2C%20and%20Computers | The Journal of Circuits, Systems and Computers was founded in 1991 and is published eight times annually by World Scientific. It covers a wide range of topics regarding circuits, systems and computers, from basic mathematics to engineering and design.
The editor-in-chief of the journal is Professor Wai-Kai Chen and the five regional editors include Piero Malcovati from the University of Pavia, Emre Salman from Stony Brook University, Masakazu Sengoku from Niigata University, Zoran Stamenkovic from IHP GmbH, and Tongquan Wei from East China Normal University.
Abstracting and indexing
The journal is abstracted and indexed in:
SciSearch
Scopus
ISI Alerting Services
Current Contents/Engineering, Computing & Technology
Mathematical Reviews
Inspec
io-port.net
Compendex
Computer Abstracts
References
English-language journals
Academic journals established in 1991
Electrical and electronic engineering journals
Computer science journals
World Scientific academic journals | Journal of Circuits, Systems, and Computers | [
"Engineering"
] | 191 | [
"Electrical engineering",
"Electronic engineering",
"Electrical and electronic engineering journals"
] |
25,686,447 | https://en.wikipedia.org/wiki/Joint%20FAO/WHO%20Expert%20Committee%20on%20Food%20Additives | The Joint FAO/WHO Expert Committee on Food Additives (JECFA) is an international scientific expert committee that is administered jointly by the Food and Agriculture Organization of the United Nations (FAO) and the World Health Organization (WHO). It has been meeting since 1956 to provide independent scientific advice pertaining to the safety evaluation of food additives. Its current scope of work now also includes the evaluation of contaminants, naturally occurring toxicants and residues of veterinary drugs in food.
The role of JECFA
As the FAO/WTO publication describes, global food safety can be difficult to ensure without international reference standards. While all countries require access to reliable risk assessments of the various chemicals in our food, not all have the resources or the funds available to conduct such evaluations for a large number of substances. Through expert-driven risk assessments JECFA defines the safe exposure levels to chemicals found in food. JECFA plays a key role by providing scientific advice that is both reliable and independent, thereby contributing to the setting of standards on a global scale for the protection of consumer health while ensuring trade of safe food. Over time JECFA has developed and updated the methods for risk assessments of chemicals in food. The Environmental Health Criteria 240 or EHC 240 captures this work and constitutes the international point of reference recognized by national and regional food safety authorities.
JECFA organization
JECFA normally meets twice a year. The meetings either cover (i) food additives, contaminants and naturally occurring toxicants in food or (ii) residues of veterinary drugs in food. Different sets of experts (called Members for the purposes of the meeting) are invited to these meetings to solicit their expertise depending on the topics being discussed.
Sometimes FAO and WHO may also convene expert meetings to provide scientific advice on issues that are related to chemical food safety but fall outside the purview of JECFA. These ad hoc meetings are called either in response to specific requests from Codex, and/or to advise national authorities on risks or incidents that affect consumers’ health and have serious economic and trade repercussions.
Dissemination of information
The work of the Codex Alimentarius Commission (CAC), which is the most important international body in the field of food standards, is based on the scientific advice provided by bodies like JECFA. This advice to CAC is normally provided to the various Codex Committees, such as the Codex Committee on Food Additives (CCFA), the Codex Committee on Contaminants in Food (CCCF), and the Codex Committee on Residues of Veterinary Drugs in Foods (CCRVDF).
FAO, WHO and the member countries of both the organizations also benefit from the evaluations made by JECFA. Some use the information from JECFA to establish their own national food safety control programs.
The JECFA Committee also develops principles for the safety assessment of chemicals in food that are consistent with current scientific knowledge on risk assessments, while taking into account the recent developments in toxicology and other relevant scientific areas such as epidemiology, biotechnology, exposure assessment, food chemistry including analytical chemistry and assessment of maximum residue limits for veterinary drugs.
JECFA publications
Resources produced for or after the JECFA meetings include:
Summary and Conclusions report
Chemical & Technical Assessments (CTA)
Full JECFA Meeting reports published in the WHO Technical Report Series
Compendium of FAO Food Additive Specifications
Veterinary drug residues monographs published in the FAO JECFA Monograph series
Toxicological monographs published in the WHO Food Additive Series (FAS)
References
External links
About Codex Alimentarius
JECFA at FAO
JECFA at WHO
JECFA Evaluations Database at WHO
Codex Committee on Food Additives (CCFA)
Codex Committee on Contaminants in Foods (CCCF)
Codex Committee on Residues of Veterinary Drugs in Foods (CCRVDF)
Food chemistry organizations
World Health Organization | Joint FAO/WHO Expert Committee on Food Additives | [
"Chemistry"
] | 802 | [
"Food chemistry organizations",
"Food chemistry"
] |
25,687,055 | https://en.wikipedia.org/wiki/Gendicine | Gendicine is a gene therapy medication used to treat patients with head and neck squamous cell carcinoma linked to mutations in the TP53 gene. It consists of recombinant adenovirus engineered to code for p53 protein (rAd-p53) and is manufactured by Shenzhen SiBiono GeneTech.
Gendicine was the first gene therapy product to obtain regulatory approval for clinical use in humans after Chinese State Food and Drug Administration approved it in 2003. As of 2024, Gendicine has not been approved for use in the United States and the European Union.
Mechanism of action
Gendicine enters the tumour cells by way of receptor-mediated endocytosis and begins to over-express genes coding for the p53 protein needed to fight the tumour. Ad-p53 seems to act by stimulating the apoptotic pathway in tumour cells, which increases the expression of tumour suppressor genes and immune response factors (such as the ability of natural killer (NK) cells to exert "bystander" effects). It also decreases the expression of multi-drug resistance, vascular endothelial growth factor and matrix metalloproteinase-2 genes and blocks transcriptional survival signals.
p53 mutation status of the tumour cells and response to Ad-p53 treatment are not closely correlated. Ad-p53 appears to act synergistically with conventional treatments such as chemo- and radiotherapy. This synergy still exists in patients with chemotherapy and radiotherapy-resistant tumors. Gendicine produces fewer side effects than conventional therapy.
Related development
Contusugene ladenovec (Advexin), a similar gene therapy developed by Introgen Therapeutics that also uses an adenovirus to deliver the p53 gene, was turned down by the FDA in 2008 and withdrawn by its maker from the EMA approval process shortly afterwards.
References
Gene delivery
Adenoviridae
Immunotherapy
Gene therapy | Gendicine | [
"Chemistry",
"Engineering",
"Biology"
] | 399 | [
"Genetics techniques",
"Genetic engineering",
"Gene therapy",
"Molecular biology techniques",
"Gene delivery"
] |
25,687,934 | https://en.wikipedia.org/wiki/List%20of%20transport%20megaprojects | This is a list of megaprojects within the transport sector. Take care in comparing the cost of projects from different times—even a few years apart—due to inflation; comparing nominal costs without taking this into account can be highly misleading. Note that inflation-calculated values are current .
According to the Oxford Handbook of Megaproject Management in 2017, "Megaprojects are large-scale, complex ventures that typically cost $1 billion or more, take many years to develop and build, involve multiple public and private stakeholders, are transformational, and impact millions of people".
Completed projects
Partially completed and open
Under construction
Suspended or abandoned
Proposed
Airport projects
Notes
References
Megaprojects
Infrastructure-related lists
Megaprojects
Lists of most expensive things | List of transport megaprojects | [
"Physics",
"Engineering"
] | 153 | [
"Megaprojects",
"Physical systems",
"Transport",
"nan",
"Transport lists"
] |
30,283,143 | https://en.wikipedia.org/wiki/Air%20core%20gauge | An air core gauge is a specific type of rotary actuator in an analog display gauge that allows an indicator to rotate a full 360 degrees. It is used in gauges and displays, most commonly automotive instrument clusters.
A typical automotive application is shown at the right. The air core gauge is a type of "air-core motor". It may be considered a "gauge movement" or "pointer indication device".
Background
There are four common types of rotary actuators:
Physical gauges, in which the needle is attached directly to the value being measured; for example, a mechanical pressure gauge
Analog volt meters or d'Arsonval movements, which consist of a coil and a permanent magnet
Stepper motors, which move in one-notch increments or steps
Air-core motors, as described below.
Construction and operation
The air core gauge consists of two independent, perpendicular coils surrounding a hollow chamber. A needle shaft protrudes into the chamber, where a permanent magnet is affixed to the shaft. When current flows through the perpendicular coils, their magnetic fields superimpose and the magnet is free to align with the combined fields.
A typical air core gauge has four terminals, two for each coil, as shown. The two coils are identified as the sine coil and the cosine coil.
Theory
The direction of the overall magnetic field is approximately:
θ = arctan(I_sin / I_cos)
where I_sin and I_cos are the coils' sine and cosine currents respectively. The permanent magnet aligns itself with that field, eventually settling near θ. In this way, by proportioning the current through each coil, the needle can reach all 360° of rotation.
Example
If the sin coil current is 29 mA and the cos current is 50 mA:
The coil current ratio is 29/50 = 0.58, and arctan 0.58 ≈ 30 degrees, so the needle settles at about 30° of rotation.
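A small numeric sketch in Python (illustrative only; the function names and the 135° inverse example are not from the source) reproduces this calculation and shows the inverse problem a driver circuit must solve: choosing coil currents that place the needle at a desired angle.

```python
import math

def needle_angle_deg(i_sin, i_cos):
    """Needle angle from the two coil currents, covering all four quadrants."""
    # atan2 accounts for the sign of both currents, so the result spans 0..360 deg.
    return math.degrees(math.atan2(i_sin, i_cos)) % 360.0

def coil_currents(angle_deg, full_scale_ma=50.0):
    """Coil currents (sin, cos) in mA that place the needle at angle_deg."""
    rad = math.radians(angle_deg)
    return full_scale_ma * math.sin(rad), full_scale_ma * math.cos(rad)

# Worked example from the text: 29 mA in the sine coil, 50 mA in the cosine coil.
print(needle_angle_deg(29.0, 50.0))   # ~30.1 degrees

# Inverse: currents needed to point the needle at 135 degrees.
print(coil_currents(135.0))           # ~ (35.4 mA, -35.4 mA)
```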
Drivers
Air core gauges require special electronics to properly drive the coils. Some driver integrated circuits have a serial input data port and two pairs of output lines. One pair of the output lines drives the sin coil and one pair drives the cos coil.
The input data defines:
The quadrant to which the actuator will point. This defines the polarity of the voltage to the sin coil and the cos coil.
The desired number of degrees within the quadrant (see the sketch after this list).
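The exact serial word format differs between driver ICs and is defined in each datasheet; as a purely hypothetical illustration (not taken from any of the parts listed below), splitting a target angle into a quadrant number plus an in-quadrant angle can be sketched as follows.

```python
def encode_angle(angle_deg):
    """Split a target needle angle into (quadrant, degrees within quadrant).

    Quadrant 0 covers 0-90 deg, quadrant 1 covers 90-180 deg, and so on.
    The quadrant selects the polarity applied to the sin and cos coils; the
    in-quadrant angle sets the ratio of the two coil currents.
    """
    angle = angle_deg % 360.0
    quadrant = int(angle // 90)          # 0..3, selects coil polarities
    within = angle - 90.0 * quadrant     # 0..90 deg inside that quadrant
    return quadrant, within

# Example: 215 degrees falls in quadrant 2 (180-270 deg), 35 degrees into it.
print(encode_angle(215.0))  # (2, 35.0)
```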
Some typical driver ICs include:
On Semiconductor CS4172 16 pin dual inline package (DISCONTINUED)
On Semiconductor CS4192 surface mount package (DISCONTINUED)
On Semiconductor CS8190 DIP and SOIC surface mount packages
MLX10407
Melexis MLX10420
See also
Dashboard
Electronic instrument cluster
Electric motor
Gauge (instrument)
References
Actuators
Measuring instruments | Air core gauge | [
"Technology",
"Engineering"
] | 542 | [
"Measuring instruments"
] |