id int64 | url string | text string | source string | categories string | token_count int64
|---|---|---|---|---|---|
55,632,689 | https://en.wikipedia.org/wiki/Sergey%20Kitaev | Sergey Kitaev (Russian: Сергей Владимирович Китаев; born 1 January 1975 in Ulan-Ude) is a Professor of Mathematics at the University of Strathclyde, Glasgow, Scotland.
He obtained his Ph.D. in mathematics from the University of Gothenburg in 2003 under the supervision of Einar Steingrímsson.
Kitaev's research interests concern aspects of combinatorics and graph theory.
Contributions
Kitaev is best known for his book Patterns in permutations and words (2011), an introduction to the field of permutation patterns.
He is also the author (with Vadim Lozin) of Words and graphs (2015) on the theory of word-representable graphs which he pioneered.
Kitaev has written over 120 research articles in mathematics.
Of particular note is his work generalizing vincular patterns to having partially ordered entries, a classification (with Anders Claesson) of bijections between 321- and 132-avoiding permutations, and a solution (with Steve Seif) of the word problem for the Perkins semigroup, as well as his work on word-representable graphs.
Selected publications
External links
Sergey Kitaev's page at the University of Strathclyde
References
Combinatorialists
21st-century Russian mathematicians
Academics of the University of Strathclyde
University of Gothenburg alumni
Novosibirsk State University alumni
1975 births
Living people | Sergey Kitaev | Mathematics | 305 |
16,153,400 | https://en.wikipedia.org/wiki/Harding%20test | The term Harding test is generically understood to mean an automated test of television content for image sequences that may provoke photosensitive epilepsy (PSE). Since the publication of the Digital Production Partnership (DPP) technical requirements and the DPP PSE Devices document in the UK, updated in November 2018, this is properly known as a PSE test.
The Harding Flash and Pattern Analyser (FPA) is proprietary software used to analyse video content for flashing and stationary patterns which may cause harm to those who suffer from photosensitive epilepsy. It is an implementation of the guidelines set by the UK regulator Ofcom, largely based on the findings of Graham Harding, a professor at Aston University. It is available in both tape-based and file-based versions, allowing video streams from SDI, composite, component, HDMI, and files to be analysed, in resolutions up to 8K. Versions for both Microsoft Windows and Apple macOS are available. Other manufacturers offer similar and different solutions which are also approved on the DPP Devices list.
Photosensitive epilepsy
Photosensitive epilepsy affects approximately one in 4,000 people and is a form of epilepsy in which seizures are triggered by visual stimuli that form patterns in time or space, such as flashing lights, bold regular patterns, or regular moving patterns. In 1993, an advert for Pot Noodles induced seizures in three people in the United Kingdom, leading the then regulator, the ITC, to introduce these guidelines.
The Broadcast Code of Advertising Practice requires that TV ads are tested and pass a PSE test. Companies such as Clearcast are responsible for clearing ads for UK commercial broadcasters and will perform a PSE check on all ads before clearance.
Testing procedures
The algorithms behind PSE testing analyse video frame by frame for potentially provocative image sequences. Luminance flashes, red flashes and spatial patterns exceeding prescribed amplitude and frequency limits are logged. Any such over-limit violation causes the media to be failed; otherwise the media is passed fit for broadcast and a pass certificate can be automatically generated.
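The core frame-comparison step can be sketched in a few lines. This is a simplified illustration, not the Harding FPA algorithm: the luminance formula is the standard Rec. 709 approximation, while the numeric thresholds are invented placeholders rather than the actual Ofcom limits.

```python
import numpy as np

# Illustrative thresholds only -- the real Ofcom/Harding guidelines specify
# precise limits on luminance change, affected screen area, and flash frequency.
LUMA_DELTA_LIMIT = 20.0       # assumed luminance-change threshold (0-255 scale)
AREA_FRACTION_LIMIT = 0.25    # assumed fraction of the screen that must change

def luminance(frame_rgb: np.ndarray) -> np.ndarray:
    """Rec. 709 luma approximation for an (H, W, 3) frame."""
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def flash_timestamps(frames: list[np.ndarray], fps: float) -> list[float]:
    """Return times (s) of frame transitions whose luminance change exceeds the limits."""
    times = []
    for i in range(1, len(frames)):
        delta = np.abs(luminance(frames[i]) - luminance(frames[i - 1]))
        # Fraction of pixels whose luminance changed by more than the limit
        if np.mean(delta > LUMA_DELTA_LIMIT) > AREA_FRACTION_LIMIT:
            times.append(i / fps)
    return times
```

A real analyser additionally checks saturated-red transitions and stationary spatial patterns, and counts flashes per one-second window against the prescribed frequency limit.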
The first PSE test was developed by Cambridge Research Systems Ltd. and was based on research by Graham Harding. All Harding FPA products implement the same guidelines. Other approved manufacturers' products either use the same algorithm in different packages or have independently developed software and algorithms that broadly provide PSE checks to the same specifications.
PSE testing is currently used by all television stations in the UK to check for compliance with the guidelines. If a program fails, it usually requires re-editing of the offending scenes. Normally, problems can be rectified by reducing the number of flashes in the scene and/or reducing the intensity of colors (most notably saturated red). After re-editing the problem areas, the entire program must be re-tested in order to obtain a PSE test certificate. PSE testing is also used in Japan, particularly for anime content on both broadcast TV and online streaming platforms, following the Pokémon Shock incident in 1997.
In 2010, HardingTest.com was launched to provide users with a way of testing video remotely, without the need to have an in-house Harding FPA machine. This provided a much-needed service for freelance editors and production companies who previously had to export their movie to video tape to send to a larger post-production facility for testing, all of which increased time and expense. This service means users can upload a digital video file and have it tested and results returned within minutes rather than hours.
References
British inventions
Epilepsy types
Medical software
Television advertising | Harding test | Biology | 747 |
72,255,622 | https://en.wikipedia.org/wiki/Eva%20Zurek | Eva Dagmara Zurek (born 1976) is a theoretical chemist, solid-state physicist and materials scientist. As a professor of chemistry at the University at Buffalo, Zurek studies the electronic structure, properties, and reactivity of a wide variety of materials using quantum mechanical calculations. She is interested in high pressure science, superhard, superconducting, quantum and planetary materials, catalysis, as well as solvated electrons and electrides. She develops algorithms to predict the structures of crystals, interfaces them with machine learning models, and applies them in materials discovery.
Early life and education
Zurek was born in 1976 in Poland. She completed her Bachelor of Science and master's degree at the University of Calgary, where she carried out research with Tom Ziegler. While at the University of Calgary, Zurek was a recipient of one of the Alberta Ingenuity grants. Zurek's PhD was carried out in the group of Ole Krogh Andersen at the Max Planck Institute for Solid State Research, and she received her degree from the University of Stuttgart in Germany. Following her PhD, she accepted a postdoctoral associate position at Cornell University under Roald Hoffmann.
Career
Upon completing her postdoctoral work at Cornell, Zurek joined the faculty at the University at Buffalo (UB) in 2009. In October 2009, Zurek co-authored a paper with Hoffmann and other colleagues in the Proceedings of the National Academy of Sciences of the United States of America predicting that LiH6 could form as a stable metal at a pressure of around 1 million atmospheres. As an assistant professor of chemistry, Zurek and her research group wrote an algorithm called XtalOpt to predict hydrogen-rich compounds that may be superconducting metals under pressure.
By 2016, Zurek was promoted to associate professor where she continued her work on superconductors. Her research team used the algorithm XtalOpt to understand which combinations of phosphorus and hydrogen were stable at pressures of up to 200 gigapascals. Their results determined that phosphine's superconductivity under pressure likely arose due to the compound decomposing into other chemical products that contain phosphorus and hydrogen. In 2019, Zurek oversaw a research team which used computational techniques to identify 43 previously unknown forms of carbon that are thought to be stable and superhard.
In 2021, Zurek was named a Fellow of the American Physical Society (APS) for "the application of forefront computational electronic structure methods to reveal microscopic processes occurring in large molecules and nanostructures, for the design of hydride superconductors, and for related educational innovations in computational science." At the end of the 2020–21 academic year, Zurek was named a recipient of the 2021 SUNY Chancellor's Award for Excellence. She was also elected by the Division of Computational Physics of the APS as its Vice Chair for the 2022–23 year.
Awards and honors
2024 State University of New York (SUNY) Distinguished Professor
2023 WNYACS Schoellkopf Medal
2022 Fellow of the American Physical Society
2021 SUNY Chancellor's Award for Excellence in Scholarship
2014 The Minerals, Metals & Materials Society's Young Leader Professional Development Award
2014 Promising Young Scientist prize from the Centre de Mecanique Ondulatoire Appliquee
2013 Sloan Research Fellowship
References
External links
Living people
1976 births
Place of birth missing (living people)
Scientists from Alberta
21st-century Canadian women scientists
Theoretical chemists
University of Calgary alumni
University of Stuttgart alumni
Fellows of the American Physical Society | Eva Zurek | Chemistry | 719 |
18,383 | https://en.wikipedia.org/wiki/Leonids | The Leonids are a prolific annual meteor shower associated with the comet Tempel–Tuttle, and are also known for their spectacular meteor storms that occur about every 33 years. The Leonids get their name from the location of their radiant in the constellation Leo: the meteors appear to radiate from that point in the sky. Their proper Greek name would be Leontids, but the word was initially constructed as a Greek/Latin hybrid and has been in use since.
Earth moves through meteoroid streams left from passages of a comet. The streams consist of solid particles, known as meteoroids, normally ejected by the comet as its frozen gases evaporate under the heat of the Sun when it is near the Sun – typically closer than Jupiter's orbit. Due to the retrograde orbit of 55P/Tempel–Tuttle, the Leonids are fast-moving streams which encounter the path of Earth and impact at about 71 km/s; it is the fastest annual meteor shower. Larger Leonids, about 10 mm across with a mass of roughly half a gram, are known for generating bright (apparent magnitude −1.5) meteors. An annual Leonid shower may deposit 12 or 13 tons of particles across the entire planet.
The meteoroids left by the comet are organized in trails in orbits similar to, though different from, that of the comet. They are differentially disturbed by the planets, in particular Jupiter, and to a lesser extent by radiation pressure from the Sun, the Poynting–Robertson effect and the Yarkovsky effect. These trails of meteoroids cause meteor showers when Earth encounters them. Old trails are spatially not dense and compose the meteor shower with a few meteors per minute. In the case of the Leonids, that tends to peak around 18 November, but some are spread through several days on either side and the specific peak changes every year. Conversely, young trails are spatially very dense and the cause of meteor outbursts when the Earth enters one.
The Leonids also produce meteor storms (very large outbursts) about every 33 years, during which activity exceeds 1,000 meteors per hour, with some events exceeding 100,000 meteors per hour, in contrast to the sporadic background (5 to 8 meteors per hour) and the shower background (several meteors per hour).
History
1800s
The Leonids are famous because their meteor showers, or storms, can be among the most spectacular. Because of the storm of 1833 and the developments in scientific thought of the time (see for example the identification of Halley's Comet), the Leonids have had a major effect on the scientific study of meteors, which had previously been thought to be atmospheric phenomena. Although it has been suggested the Leonid meteor shower and storms have been noted in ancient times, it was the meteor storm of November 12–13, 1833 that broke into people's modern-day awareness. One estimate of the peak rate is over one hundred thousand meteors an hour, while another, done as the storm abated, estimated in excess of 240,000 meteors during the nine hours of the storm, over the entire region of North America east of the Rocky Mountains.
The event was marked by several nations of Native Americans: the Cheyenne established a peace treaty and the Lakota calendar was reset. Many Native American birthdays were calculated by reference to the 1833 Leonid event. Abolitionists including Harriet Tubman and Frederick Douglass, as well as slave-owners and others, took note. The New York Evening Post carried a series of articles on the event, including reports from Canada to Jamaica; it made news in several states beyond New York and, though it appeared in North America, was talked about in Europe. The journalism of the event tended to rise above the partisan debates of the time and reviewed facts as they could be sought out. Abraham Lincoln commented on it years later. Near Independence, Missouri, in Clay County, a refugee Mormon community watched the meteor shower on the banks of the Missouri River after having been driven from their homes by local settlers. Joseph Smith, the founder and first leader of Mormonism, afterwards noted in his journal for November 1833 his belief that this event was "a litteral [sic] fulfillment of the word of God" and a harbinger of the imminent second coming of Christ. Besides the midwest and eastern areas, it was also noted in Far West, Missouri.
Denison Olmsted explained the event most accurately. After spending the last weeks of 1833 collecting information, he presented his findings in January 1834 to the American Journal of Science and Arts, published in January–April 1834 and January 1836. He noted the shower was of short duration and was not seen in Europe, and that the meteors radiated from a point in the constellation of Leo, and he speculated the meteors had originated from a cloud of particles in space. Accounts of the 1866 repeat of the Leonids counted hundreds of meteors per minute (a few thousand per hour) in Europe. The Leonids were again seen in 1867, when moonlight reduced the rates to 1,000 meteors per hour. Another strong appearance of the Leonids in 1868 reached an intensity of 1,000 meteors per hour in dark skies. It was in 1866–67 that information on Comet Tempel–Tuttle was gathered, identifying it as the source of the meteor shower and meteor storms. When the storms failed to return in 1899, it was generally thought that the dust had moved on and the storms were a thing of the past.
1900s
In 1966, a spectacular meteor storm was seen over the Americas. Historical accounts were gathered, tracing the Leonids back to 900 AD. Radar studies showed the 1966 storm included a relatively high percentage of smaller particles, while 1965's lower activity had a much higher proportion of larger particles. In 1981, Donald K. Yeomans of the Jet Propulsion Laboratory reviewed the history of meteor showers for the Leonids and the history of the dynamic orbit of Comet Tempel–Tuttle. A graph from it was adapted and re-published in Sky and Telescope. It showed relative positions of the Earth and Tempel–Tuttle and marked where Earth encountered dense dust. This showed that the meteoroids are mostly behind and outside the path of the comet, but that paths of the Earth through the cloud of particles resulting in powerful storms were very near paths of nearly no activity. Overall, the 1998 Leonids were in a favorable position, so interest was rising.
Leading up to the 1998 return, an airborne observing campaign, organized by Peter Jenniskens at NASA Ames Research Center, mobilized modern observing techniques. In 1999, there were also efforts to observe impacts of meteoroids on the Moon, as examples of transient lunar phenomena. A particular reason to observe the Moon is that a vantage point on Earth sees only meteors coming into the atmosphere relatively close by, while impacts across the whole near side of the Moon are visible in a single view. The sodium tail of the Moon tripled just after the 1998 Leonid shower, which was composed of larger meteoroids (witnessed on Earth as fireballs). However, in 1999 the sodium tail of the Moon did not change from the Leonid impacts.
Research by Kondrat'eva, Reznikov and colleagues at Kazan University had shown how meteor storms could be accurately predicted, but for some years the worldwide meteor community remained largely unaware of these results. The work of David J. Asher, Armagh Observatory, and Robert H. McNaught, Siding Spring Observatory, and independently of Esko Lyytinen in 1999, following on from the Kazan research, is considered by most meteor experts the breakthrough in modern analysis of meteor storms. Whereas previously it was hazardous to guess whether there would be a storm or little activity, the predictions of Asher and McNaught timed bursts in activity down to ten minutes by narrowing the clouds of particles down to individual streams from each passage of the comet, with their trajectories amended by subsequent passages near planets. However, whether a specific meteoroid trail will be primarily composed of small or large particles, and thus the relative brightness of the meteors, was not understood. McNaught did extend the work to examine the placement of the Moon relative to the trails and saw a large chance of a storm impacting the Moon in 1999 from one trail, while there were fewer direct impacts from trails in 2000 and 2001 (successive contact with trails through 2006 showed no hits).
2000s
Viewing campaigns resulted in spectacular footage from the 1999, 2001, and 2002 storms, which produced up to 3,000 Leonid meteors per hour. Predictions for the Moon's Leonid impacts also noted that in 2000 the side of the Moon facing the stream was away from the Earth, but that impacts should be numerous enough to raise a cloud of particles kicked off the Moon which could cause a detectable increase in the sodium tail of the Moon. Research using the explanation of meteor trails/streams has explained the storms of the past. The 1833 storm was not due to the recent passage of the comet, but to a direct impact with the previous, 1800, dust trail. The meteoroids from the 1733 passage of Comet Tempel–Tuttle resulted in the 1866 storm, and the 1966 storm was from the 1899 passage of the comet. The double spikes in Leonid activity in 2001 and in 2002 were due to the passage of the comet's dust ejected in 1767 and 1866. This groundbreaking work was soon applied to other meteor showers – for example the 2004 June Bootids. Peter Jenniskens has published predictions for the next 50 years. However, a close encounter with Jupiter is expected to perturb the comet's path, and many streams, making storms of historic magnitude unlikely for many decades. Recent work tries to take into account the roles of differences in parent bodies and the specifics of their orbits, ejection velocities off the solid mass of the core of a comet, radiation pressure from the Sun, the Poynting–Robertson effect, and the Yarkovsky effect on particles of different sizes and rates of rotation, to explain differences between meteor showers in terms of being predominantly fireballs or small meteors.
Predictions until the end of the 21st century have been published by Mikhail Maslov.
See also
List of meteor showers
"Stars Fell on Alabama", based on the 1833 Leonid shower
Perseids, associated with the comet Swift–Tuttle
References
Further reading
External links
The Discovery of the Perseid Meteors (prior to 1837, nobody realized the Perseids were an annual event), by Mark Littmann
Lunar Leonids: Encounters of the Moon with Leonid dust trails by Robert H. McNaught
NASA: Background facts on meteors and meteor showers
NASA: Estimate the best viewing times for your part of the world
How to hear the Leonid Meteor Shower
Observatorio ARVAL – The Leonid Meteors
Animation of the Leonid Meteor Shower at shadow&substance.com
Leo (constellation)
Meteor showers
November | Leonids | Astronomy | 2,223 |
125,769 | https://en.wikipedia.org/wiki/Binding%20energy | In physics and chemistry, binding energy is the smallest amount of energy required to remove a particle from a system of particles or to disassemble a system of particles into individual parts. In the former meaning the term is predominantly used in condensed matter physics, atomic physics, and chemistry, whereas in nuclear physics the term separation energy is used. A bound system is typically at a lower energy level than its unbound constituents. According to relativity theory, a decrease in the total energy of a system is accompanied by a decrease in the total mass, where $\Delta m = \Delta E / c^2$.
Types
There are several types of binding energy, each operating over a different distance and energy scale. The smaller the size of a bound system, the higher its associated binding energy.
Mass–energy relation
A bound system is typically at a lower energy level than its unbound constituents because its mass must be less than the total mass of its unbound constituents. For systems with low binding energies, this "lost" mass after binding may be fractionally small, whereas for systems with high binding energies, the missing mass may be an easily measurable fraction. This missing mass may be lost during the process of binding as energy in the form of heat or light, with the removed energy corresponding to the removed mass through Einstein's equation $E = mc^2$. In the process of binding, the constituents of the system might enter higher energy states of the nucleus/atom/molecule while retaining their mass, and because of this, it is necessary that they are removed from the system before its mass can decrease. Once the system cools to normal temperatures and returns to ground states regarding energy levels, it will contain less mass than when it first combined and was at high energy. This loss of heat represents the "mass deficit", and the heat itself retains the mass that was lost (from the point of view of the initial system). This mass will appear in any other system that absorbs the heat and gains thermal energy.
For example, if two objects are attracting each other in space through their gravitational field, the attraction force accelerates the objects, increasing their velocity, which converts their potential energy (gravity) into kinetic energy. When the particles either pass through each other without interaction or elastically repel during the collision, the gained kinetic energy (related to speed) begins to revert into potential energy, driving the collided particles apart. The decelerating particles will return to the initial distance and beyond into infinity, or stop and repeat the collision (oscillation takes place). This shows that a system which loses no energy does not combine (bind) into a solid object, parts of which oscillate at short distances. Therefore, to bind the particles, the kinetic energy gained due to the attraction must be dissipated by a resistive force. Complex objects in collision ordinarily undergo inelastic collision, transforming some kinetic energy into internal energy (heat content, which is atomic movement), which is further radiated in the form of photons, i.e. light and heat. Once the energy to escape the gravity is dissipated in the collision, the parts will oscillate at a closer, possibly atomic, distance, thus looking like one solid object. This lost energy, necessary to overcome the potential barrier to separate the objects, is the binding energy. If this binding energy were retained in the system as heat, its mass would not decrease, whereas binding energy lost from the system as heat radiation would itself have mass. It directly represents the "mass deficit" of the cold, bound system.
Closely analogous considerations apply in chemical and nuclear reactions. Exothermic chemical reactions in closed systems do not change mass, but do become less massive once the heat of reaction is removed, though this mass change is too small to measure with standard equipment. In nuclear reactions, the fraction of mass that may be removed as light or heat, i.e. binding energy, is often a much larger fraction of the system mass. It may thus be measured directly as a mass difference between rest masses of reactants and (cooled) products. This is because nuclear forces are comparatively stronger than the Coulombic forces associated with the interactions between electrons and protons that generate heat in chemistry.
Mass change
Mass change (decrease) in bound systems, particularly atomic nuclei, has also been termed mass defect, mass deficit, or mass packing fraction.
The difference between the calculated mass of the unbound system and the experimentally measured mass of the nucleus (the mass change) is denoted as Δm. It can be calculated as follows:
Mass change = (calculated mass of unbound system) − (measured mass of system)
e.g. (sum of masses of protons and neutrons) − (measured mass of nucleus)
After a nuclear reaction occurs that results in an excited nucleus, the energy that must be radiated or otherwise removed as binding energy in order to decay to the unexcited state may be in one of several forms. This may be electromagnetic waves, such as gamma radiation; the kinetic energy of an ejected particle, such as an electron, in internal conversion decay; or partly as the rest mass of one or more emitted particles, such as the particles of beta decay. No mass deficit can appear, in theory, until this radiation or this energy has been emitted and is no longer part of the system.
When nucleons bind together to form a nucleus, they must lose a small amount of mass, i.e. there is a change in mass to stay bound. This mass change must be released as various types of photon or other particle energy as above, according to the relation $E = \Delta m \, c^2$. Thus, after the binding energy has been removed, binding energy = mass change × $c^2$. This energy is a measure of the forces that hold the nucleons together. It represents energy that must be resupplied from the environment for the nucleus to be broken up into individual nucleons.
For example, an atom of deuterium has a mass defect of 0.0023884 Da, and its binding energy is nearly equal to 2.23 MeV. This means that energy of 2.23 MeV is required to disintegrate an atom of deuterium.
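As a quick consistency check on these figures, the mass defect converts to binding energy via $E = \Delta m c^2$, i.e. with the standard factor of roughly 931.494 MeV per dalton; a minimal sketch:

```python
# Conversion factor from mass (dalton) to energy (MeV), via E = mc^2.
DA_TO_MEV = 931.494  # MeV per dalton (CODATA value, rounded)

mass_defect_da = 0.0023884                     # deuterium mass defect from the text
binding_energy_mev = mass_defect_da * DA_TO_MEV
print(f"{binding_energy_mev:.3f} MeV")         # -> 2.225 MeV, matching the quoted ~2.23 MeV
```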
The energy given off during either nuclear fusion or nuclear fission is the difference of the binding energies of the "fuel", i.e. the initial nuclide(s), from that of the fission or fusion products. In practice, this energy may also be calculated from the substantial mass differences between the fuel and products, which uses previous measurements of the atomic masses of known nuclides, which always have the same mass for each species. This mass difference appears once evolved heat and radiation have been removed, which is required for measuring the (rest) masses of the (non-excited) nuclides involved in such calculations.
See also
Semi-empirical mass formula
Separation energy (binding energy of one nucleon)
Virial mass
Prout's hypothesis, an early model of the atom that did not account for mass defect
References
External links
Nuclear Binding Energy
Mass and Nuclide Stability
Experimental atomic mass data compiled Nov. 2003
Energy (physics)
Mass spectrometry
Nuclear physics
Forms of energy | Binding energy | Physics,Chemistry,Mathematics | 1,446 |
21,421,469 | https://en.wikipedia.org/wiki/Bicarbonate%20indicator | A bicarbonate indicator (hydrogencarbonate indicator) is a type of pH indicator that is sensitive enough to show a color change as the concentration of carbon dioxide gas in an aqueous solution increases. The indicator is used in photosynthesis and respiration experiments to find out whether carbon dioxide is being liberated. It is also used to test the carbon dioxide content during gaseous exchange of organisms. When the carbon dioxide content is higher than 0.04%, the initial red colour changes to yellow as the pH becomes more acidic. If the carbon dioxide content is lower than 0.04%, it changes from red to magenta and, at very low carbon dioxide concentrations, to purple. Carbon dioxide, even in the concentrations found in exhaled air, will dissolve in the indicator to form carbonic acid, a weak acid, which will lower the pH and give the characteristic colour change. A colour change to purple during photosynthesis shows a reduction in the percentage of carbon dioxide and is sometimes taken to imply production of oxygen, although the test provides no direct evidence for it.
Great care must be taken to avoid acidic or alkaline contamination of the apparatus in such experiments, since the test is not directly specific to gases like carbon dioxide.
Composition
Two solutions are prepared separately:
Solution A: 0.02 g of thymol blue, 0.01 g cresol red and 2 mL of ethanol
Solution B: 0.8 g of sodium bicarbonate, 7.48 g of potassium chloride and 90 mL of water
Mix solutions A and B, then dilute 9 mL of the mixture to 1000 mL with distilled water.
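The working strength implied by this recipe follows from straightforward dilution arithmetic. The sketch below assumes the volumes of solutions A and B are simply additive, which is an approximation:

```python
# Volumes and masses from the recipe above; volumes assumed simply additive.
stock_volume_ml = 2 + 90        # solution A (ethanol basis) + solution B (water basis)
diluted_ml = 9                  # portion of the stock taken
final_volume_ml = 1000          # final working volume

def final_concentration(mass_g: float) -> float:
    """Concentration (g/mL) of a stock component in the working indicator."""
    stock_conc = mass_g / stock_volume_ml
    return stock_conc * diluted_ml / final_volume_ml

print(f"thymol blue: {final_concentration(0.02) * 1e6:.2f} ug/mL")  # ~1.96 ug/mL
print(f"NaHCO3:      {final_concentration(0.8)  * 1e6:.1f} ug/mL")  # ~78.3 ug/mL
```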
This method of determining the concentration of bicarbonates and carbonates is also called "Magni's method".
Color change
References
PH indicators | Bicarbonate indicator | Physics,Chemistry,Materials_science,Astronomy | 364 |
870,711 | https://en.wikipedia.org/wiki/Triple%20conjunction | A triple conjunction is an astronomical event when two planets or a planet and a star appear to meet each other three times during a brief period, either in opposition or at the time of inferior conjunction, if an inferior planet is involved. The visible movement of the planet or the planets in the sky appears therefore normally prograde at the first conjunction, retrograde at the second conjunction, and again prograde at the third conjunction.
The lining-up of three planets is a particular case of syzygy.
There are three possible cases of triple conjunctions.
Between Mercury and Venus
At nearly every superior conjunction of Venus (when Venus passes behind the Sun) there is a triple conjunction between Mercury and Venus. In most cases the second conjunction is not visible, because both planets are at too small an elongation from the Sun.
Triple conjunctions between Mercury and Venus are also possible when they are passing between Earth and the Sun at the same time. This event is much rarer, and also in this case the second conjunction is usually not observable.
Of inferior planets with superior planets or stars
If Mars is in conjunction with the Sun, there is often a triple conjunction between Mars and Mercury or between Mars and Venus. In events involving Mercury, the second conjunction is invisible because of the small elongation from the Sun; both other events are difficult to see because of their nearness to the horizon and the relatively low brightness of Mars, which is then always near its greatest distance from Earth and barely visible.
For a Mars–Venus triple conjunction all three events can almost always be seen, but Mars is dim because of its great distance from the Earth.
Triple conjunctions between the inferior planets Mercury and Venus and the superior planets Jupiter, Saturn, Uranus, Neptune, dwarf planet Pluto, or stars take place when these objects are in conjunction with the Sun at the same time that Mercury or Venus is at inferior conjunction.
Frequently the second conjunction takes place when both bodies are too close to the Sun to be seen, while the other conjunctions are easily visible, especially if the other body is Jupiter, Saturn or a bright star.
With the dim planets Uranus and Neptune and dwarf planet Pluto, such an event is difficult to observe because of the low elongation from the Sun.
Triple conjunctions of Mercury and Venus with the exterior planets Jupiter, Saturn, Uranus, Neptune, and dwarf planet Pluto happen relatively frequently (approximately once in 10 years).
Between two exterior planets
These are the most interesting triple conjunctions, since all three conjunctions can be seen very easily owing to the great elongation of the planets or stars involved.
Triple conjunctions between the bright exterior planets are very rare: the last triple conjunctions between Mars and Jupiter occurred in 1789–1790, in 1836–1837 and in 1979–1980. The next events of this kind will be again in 2123 and in 2169–2170.
The last triple conjunctions between Mars and Saturn took place in 1779, 1877 (only in right ascension) and in 1945–1946. The next triple conjunction between these planets will occur in 2148–2149, in 2185 and in 2187.
For both Mars–Jupiter and Mars–Saturn triple conjunctions, it is possible that two such events follow at an interval of only two years. This last happened for Mars and Jupiter in 927 and 929 and will happen again in 2742 and 2744. It last happened for Mars and Saturn in 1742–1743 and 1744–1745 and will occur again in 2185 and 2187.
Conjunctions between Jupiter and Saturn are known as great conjunctions, and are sometimes triple (seven times between AD 1200 and 2400). The three conjunctions occur several months apart, over a broad range of elongations from the Sun.
The most historically important triple conjunction was the one between Jupiter and Saturn in 7–5 BCE, which has been proposed as an explanation for the Star of Bethlehem. Triple conjunctions between Jupiter and Saturn last took place in 1682–1683, 1821 (only in right ascension), 1940–1941 and 1981. The next will not occur until 2238–2239.
There are more frequent triple conjunctions of Jupiter with Uranus or Neptune. They are unspectacular, but offer a good possibility for amateur astronomers to find these dim planets. The last triple conjunction between Jupiter and Uranus was in 2010–2011 and the next will be in 2037–2038. The last between Jupiter and Neptune was in 2009 and the next will be in 2047–2048.
At each opposition, because of the visible loop movement of the planets, there are triple conjunctions between the planet and some stars. Triple conjunctions between planets and bright stars close to the zodiac are not so frequent (approximately 2 events in 10 years).
Of the planets Mars, Jupiter, Saturn, Uranus, and Neptune in right ascension between 1800 and 2100
Of the planets Mars, Jupiter, Saturn, Uranus, and Neptune in ecliptic longitude between 1800 and 2100
Note that conjunctions in right ascension and ecliptic longitude need not take place on the same date. It is possible that there is a triple conjunction in right ascension, but not in ecliptic longitude and vice versa.
Some triple conjunctions between 2100 and 3000
See also
Celestial mechanics
Great conjunction
Positional astronomy
References
Astrometry
Astrological aspects
Conjunctions (astronomy and astrology) | Triple conjunction | Astronomy | 1,134 |
11,422,330 | https://en.wikipedia.org/wiki/Tombusvirus%205%E2%80%B2%20UTR | Tombusvirus 5′ UTR is an important cis-regulatory region of the Tombusvirus genome.
Tomato bushy stunt virus is the prototype member of the Tombusviridae family. The genome of this virus is positive sense single stranded RNA. Replication occurs via a negative strand RNA intermediate. In addition to viral proteins p33 and the RNA-dependent RNA polymerase p92, and unknown host factors, conserved and structural regions within the 5′ untranslated region (5′ UTR) are important for regulating genome replication.
Two RNA domains in the 5′ UTR have been reported: a 5′ T-shaped domain (TSD) followed by a stem-loop (SL5) and a downstream domain (DSD). TSD–DSD interactions are proposed to be involved in mediating viral RNA replication.
An interesting feature of Tombusvirus is its ability to support the replication of defective interfering (DI) RNAs. These sub-viral replicons are small, non-coding deletion mutants of the viral genome that maintain the cis-acting RNA elements necessary for replication.
Other non-coding RNA structures in Tombusvirus include the 3′ UTR region IV and an internal replication element.
References
External links
Cis-regulatory RNA elements
Tombusviridae | Tombusvirus 5′ UTR | Chemistry | 264 |
55,381,816 | https://en.wikipedia.org/wiki/Juan%20de%20Fuca%20Channel | Juan de Fuca Channel is a submarine channel off the shore of Washington state, United States, extending into the Strait of Juan de Fuca.
The geography of Juan de Fuca Channel
The Juan de Fuca Channel is a submarine canyon running from the shelf break off southern Vancouver Island to the Strait of Juan de Fuca. The canyon is both narrow and deep and has steep sides, dropping sharply in depth from the rim to the thalweg.
Seismic profiles along a track over Juan de Fuca Channel show the canyon consists of two distinct parts. The upper canyon is narrow, extending southwestward down the continental slope at a comparatively steep average gradient, and is carved into consolidated or semi-consolidated material of the slope.
The lower part of the channel trends northwestward, parallel to the shelf edge, at a much gentler gradient, terminating at the apex of Nitinat Fan. The lower channel represents a small fan-and-valley feature. Nitinat Fan itself was constructed on the deep-sea floor at what is presently the terminus of Juan de Fuca Channel.
Most of the shelf-break canyons from Oregon northward cross only part of the continental shelf, cutting from the shelf break toward the coast and ending on the continental shelf, well below the mixed layer. Unlike these other submarine canyons, the Juan de Fuca Channel cuts across the continental shelf and continues into the Strait of Juan de Fuca.
Size and flow
The Juan de Fuca Channel reaches to the opening of the Juan de Fuca Strait, which separates the United States and Canada. The canyon is narrow, and at least twice as deep as the surrounding seafloor.
For decades, it has been known that 20 to 30 times more deep water flows into Puget Sound than arrives from all Earth's rivers combined, a flow far bigger than that of the Amazon River. This flow is towards land, not away.
2017 measurements show this canyon may supply most of the water coming into Puget Sound, the Strait of Juan de Fuca, and Canada's Georgia Strait. The pattern of water circulation sends a dense lower layer of ocean water towards land, while the upper layer flows out to sea.
Its role
Juan de Fuca Channel appears to be a pathway, bringing deep Pacific water into the Salish Sea.
Life within
Puget Sound has famously rich water. There is reason for this; the channel pulls nutrient-rich water from the deep ocean. Recent measurements (as of 2017) may explain why this canyon helps the Pacific Northwest’s inland waters support so many shellfish, salmon runs and even pods of whales.
Water surges up the channel, mixing at surprisingly high rates. The intense flow and mixing measured inside the Juan de Fuca Channel may help explain the formerly mysterious productivity of the shores of Washington; coastal winds usually bring in some nutrients, but the numbers don’t add up: "Washington is several times more productive – has more phytoplankton – than Oregon or California, and yet the winds here are several times weaker. That’s been kind of a puzzle, for years," observes Matthew Alford, an oceanographer with the University of Washington's Applied Physics Laboratory.
Deep below the surface, water has been measured flowing fast enough to produce mixing up to 1,000 times the normal rate in the deep ocean. The flow is hydraulically controlled, which means it flows smoothly over a shallow ridge just off Cape Flattery and then, on the other side, forms a turbulent breaking undersea wave, mixing with surface water far above. The deep water forced up through the channel is rich in nutrients, which support the growth of marine plants, which then feed other marine life. The water is more acidic and lower in oxygen, both of which contribute to water conditions in the Strait of Juan de Fuca.
Fishing in the Juan de Fuca Canyon
"The location of this sill would be an outstanding place to fish," observes Matthew Alford, the oceanographer with the University of Washington's Applied Physics Laboratory. "People fish in Juan de Fuca Canyon pretty actively, and that’s probably no coincidence."
See also
Nearby submarine canyons
All of the following submarine canyons are nearby, listed from north to south:
Barkley Canyon
Clayoquot Canyon
Father Charles Canyon
Loudon Canyon
Nitinat Canyon
Juan de Fuca Canyon
Juan de Fuca Trough
Quileute Canyon
Quinault Canyon
Grays Canyon
Guide Canyon
Willapa Canyon
Astoria Canyon
Local geography
Abyssal fan
Astoria Canyon
Astoria Fan
Cascadia Basin
Cascadia Channel
Cascadia Margin
Cascadia Subduction Zone
Grays Canyon
Juan de Fuca Canyon
Juan de Fuca Plate
Juan de Fuca Channel
Nitinat Canyon
Nitinat Fan
Quileute Canyon
Willapa Canyon
References
External links and references
Submarine canyon flow feeds Strait of Juan de Fuca like an underwater Amazon River
Research on the Juan de Fuca Canyon
More research
Article from the San Francisco Chronicle
Coastal and oceanic landforms | Juan de Fuca Channel | Chemistry | 1,021 |
14,725,503 | https://en.wikipedia.org/wiki/Alpha-2-HS-glycoprotein | Alpha-2-HS-glycoprotein (AHSG, alpha-2-Heremans-Schmid glycoprotein), also known as fetuin-A, is a protein that in humans is encoded by the AHSG gene. Fetuin-A belongs to the fetuin class of plasma binding proteins and is more abundant in fetal than adult blood.
Function
Alpha-2-HS glycoprotein is a serum glycoprotein synthesized by hepatocytes and adipocytes; the choroid plexus is an established extrahepatic expression site. The mature circulating AHSG molecule consists of two polypeptide chains, which are both cleaved from a proprotein encoded from a single mRNA, and multiple post-translational modifications have been reported. AHSG is thus a secreted, partially phosphorylated glycoprotein with complex proteolytic processing that circulates in blood and extracellular fluids. In the test tube AHSG can bind multiple ligands and has therefore been claimed to be involved in several functions, such as endocytosis, brain development and the formation of bone tissue. The protein is commonly present in the cortical plate of the immature cerebral cortex and bone marrow hemopoietic matrix, and it has therefore been postulated that it participates in the development of those tissues. However, most of these functions await confirmation in vivo, and its exact significance is still obscure.
Clinical significance
Fetuins are carrier proteins like albumin. Fetuin-A forms soluble complexes with calcium and phosphate and thus is a carrier of otherwise insoluble calcium phosphate. Fetuin-A is therefore a potent inhibitor of pathological calcification, in particular calciphylaxis. Mice deficient in fetuin-A show systemic calcification of soft tissues. Fetuin-A can inhibit calcification and inhibits osteogenesis in bone. Fetuin-A appears to promote calcification in coronary artery disease, but to oppose calcification in peripheral artery disease.
High levels of Fetuin-A are associated with obesity and insulin resistance. Fetuin-A promotes insulin resistance by enhancing the binding of free fatty acids to TLR4. In adipose tissue, Fetuin-A downregulates the expression of adiponectin, thereby increasing inflammation and insulin resistance. Also in adipose tissue, Fetuin-A reduces lipogenesis and increases lipolysis, thereby increasing obesity and insulin resistance.
Supervised exercise (that is not associated with weight reduction) reduces Fetuin-A.
See also
Fetuin-B
References
External links
Further reading
Glycoproteins | Alpha-2-HS-glycoprotein | Chemistry | 621 |
34,167,371 | https://en.wikipedia.org/wiki/Georgy%20Pfeiffer | Georgy Pfeiffer, also Yurii or Yury Pfeiffer (23 December 1872 – 10 October 1946), was a mathematician of German descent from the Russian Empire and later the Soviet Union. Pfeiffer was known as a specialist in the integration of differential equations and systems of partial differential equations. He was also interested in algebraic geometry.
He was an Invited Speaker of the International Congress of Mathematicians (ICM) in 1908 at Rome, in 1928 at Bologna, and in 1932 at Zurich. He was chairman of the Academic Council of the Faculty of Physics and Mathematics at the University of Kiev in the Russian Empire. Pfeiffer was also attached to the Institute of Mathematics of the Academy of Sciences of the Ukrainian SSR in Kiev and served as its Director during two periods, from 1934 to 1941 and again from 1944 until his death in 1946. In the three years 1941–44, Pfeiffer was in Ufa, the capital of the republic of Bashkortostan in western Russia, where he was Director of the Institute of Mathematics and Physics.
References
External links
Biography
Soviet mathematicians
Ukrainian mathematicians
Mathematical analysts
19th-century mathematicians from the Russian Empire
1872 births
1946 deaths
People from Chernihiv Oblast
People from Poltava Governorate
Taras Shevchenko National University of Kyiv alumni
Academic staff of the Taras Shevchenko National University of Kyiv
Academic staff of Kyiv Polytechnic Institute
NASU Institute of Physics
NASU Institute of Mathematics
Burials at Lukianivka Cemetery | Georgy Pfeiffer | Mathematics | 303 |
11,101,076 | https://en.wikipedia.org/wiki/Inch%20of%20water | Inches of water is a non-SI unit for pressure. It is also given as inches of water gauge (iwg or in.w.g.), inches water column (inch wc, in. WC, " wc, etc., or just wc or WC), inAq, Aq, or inH2O. The units are conventionally used for measurement of certain pressure differentials such as small pressure differences across an orifice, in a pipeline or shaft, or before and after a coil in an HVAC unit.
It is defined as the pressure exerted by a column of water of 1 inch in height at defined conditions. At a temperature of 4 °C (39.2 °F) pure water has its highest density (1000 kg/m3). At that temperature and assuming the standard acceleration of gravity, 1 inAq is approximately 249.089 Pa.
Alternative standards in less common use are 60 °F (15.6 °C) or 68 °F (20 °C); the choice depends on industry convention rather than on international standards.
Feet of water is an alternative way to specify pressure as the height of a water column; one foot of water is conventionally equated to 12 inches of water, approximately 2,989 Pa.
In North America, air and other industrial gases are often measured in inches of water when at low pressure. This is in contrast to inches of mercury or pounds per square inch (psi, lbf/in²) for larger pressures. One usage is in the measurement of air ("wind") that supplies a pipe organ, referred to simply as inches. It is also used in natural gas distribution for measuring utilization pressure (U.P., i.e. the residential point of use), which is typically between 6 and 7 inches WC, or about 0.25 lbf/in².
1 inAq ≈ 0.036 lbf/in², or 27.7 inAq ≈ 1 lbf/in².
1 inH2O = 249.0889 pascals
= 2.490889 mbar or hectopascals
= 2.54 cmH2O
≈ 0.0361 lbf/in²
≈ 1.8683 torr or mmHg
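The 249.0889 Pa figure follows directly from the defining hydrostatic relation $p = \rho g h$, using the 4 °C density and standard gravity stated above; a minimal check:

```python
# Hydrostatic definition p = rho * g * h, using the values stated above.
RHO_WATER = 1000.0    # kg/m^3, pure water at 4 degC (its maximum density)
G_STANDARD = 9.80665  # m/s^2, standard acceleration of gravity
INCH = 0.0254         # m

pressure_pa = RHO_WATER * G_STANDARD * INCH
print(f"1 inH2O = {pressure_pa:.4f} Pa")              # -> 249.0889 Pa
print(f"        = {pressure_pa / 6894.757:.4f} psi")  # -> ~0.0361 psi
```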
See also
Pressure head
Barometer
Centimetre of water
Inch of mercury
Millimetre of mercury
References
Units of pressure | Inch of water | Mathematics | 477 |
227,686 | https://en.wikipedia.org/wiki/Lennard-Jones%20potential | In computational chemistry, molecular physics, and physical chemistry, the Lennard-Jones potential (also termed the LJ potential or 12-6 potential; named for John Lennard-Jones) is an intermolecular pair potential. Out of all the intermolecular potentials, the Lennard-Jones potential is probably the one that has been the most extensively studied. It is considered an archetype model for simple yet realistic intermolecular interactions. The Lennard-Jones potential is often used as a building block in molecular models (a.k.a. force fields) for more complex substances. Many studies of the idealized "Lennard-Jones substance" use the potential to understand the physical nature of matter.
Overview
The Lennard-Jones potential is a simple model that still manages to describe the essential features of interactions between simple atoms and molecules: Two interacting particles repel each other at very close distance, attract each other at moderate distance, and eventually stop interacting at infinite distance, as shown in the Figure. The Lennard-Jones potential is a pair potential, i.e. no three- or multi-body interactions are covered by the potential.
The general Lennard-Jones potential combines a repulsive potential, $A/r^n$, with an attractive potential, $-B/r^m$, using empirically determined coefficients $A$ and $B$:

$$V(r) = \frac{A}{r^n} - \frac{B}{r^m}$$

In his 1931 review Lennard-Jones suggested using $m = 6$ to match the London dispersion force and $n = 12$ based on matching experimental data. Setting $A = 4\varepsilon\sigma^{12}$ and $B = 4\varepsilon\sigma^{6}$ gives the widely used Lennard-Jones 12-6 potential:

$$V_\mathrm{LJ}(r) = 4\varepsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right] \qquad (1)$$

where $r$ is the distance between two interacting particles, $\varepsilon$ is the depth of the potential well, and $\sigma$ is the distance at which the particle–particle potential energy is zero. The Lennard-Jones 12-6 potential has its minimum at a distance of $r_\mathrm{min} = 2^{1/6}\sigma$, where the potential energy has the value $V = -\varepsilon$.
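Eq. (1) translates directly into code. The sketch below (function and variable names are illustrative) also verifies the location and depth of the minimum:

```python
import numpy as np

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones 12-6 potential, Eq. (1)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

r_min = 2.0 ** (1.0 / 6.0)                     # r_min = 2^(1/6) * sigma
assert np.isclose(lj_potential(r_min), -1.0)   # well depth is -epsilon at the minimum
```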
The Lennard-Jones potential is usually the standard choice for the development of theories for matter (especially soft-matter) as well as for the development and testing of computational methods and algorithms.
Numerous intermolecular potentials have been proposed in the past for the modeling of simple soft repulsive and attractive interactions between spherically symmetric particles, i.e. the general shape shown in the Figure. Examples for other potentials are the Morse potential, the Mie potential, the Buckingham potential and the Tang–Toennies potential. While some of these may be more suited to modelling real fluids, the simplicity of the Lennard-Jones potential, as well as its often surprising ability to accurately capture real fluid behavior, has historically made it the pair-potential of greatest general importance.
History
In 1924, the year that Lennard-Jones received his PhD from Cambridge University, he published a series of landmark papers on the pair potentials that would ultimately be named for him. In these papers he adjusted the parameters of the potential and then used the result in a model of gas viscosity, seeking a set of values consistent with experiment. His initial results suggested repulsive and attractive exponents that differed from the now-standard 12 and 6.
Before Lennard-Jones, back in 1903, Gustav Mie had worked on effective field theories; Eduard Grüneisen built on Mie's work for solids, showing that a repulsive exponent exceeding the attractive one ($n > m$) is required for solids. As a result of this work the Lennard-Jones potential is sometimes called the Mie–Grüneisen potential in solid-state physics.
In 1930, after the discovery of quantum mechanics, Fritz London showed that theory predicts the long-range attractive force should have $m = 6$. In 1931, Lennard-Jones applied this form of the potential to describe many properties of fluids, setting the stage for many subsequent studies.
Dimensionless (reduced units)
Dimensionless reduced units can be defined based on the Lennard-Jones potential parameters, which is convenient for molecular simulations. From a numerical point of view, the advantages of this unit system include computing values which are closer to unity, using simplified equations and being able to easily scale the results. This reduced-unit system requires the specification of the size parameter $\sigma$ and the energy parameter $\varepsilon$ of the Lennard-Jones potential and the mass of the particle $m$. All physical properties can be converted straightforwardly by taking the respective dimension into account; for example, temperature reduces as $T^* = k_\mathrm{B}T/\varepsilon$, density as $\rho^* = \rho\sigma^3$, and time as $t^* = t/(\sigma\sqrt{m/\varepsilon})$. The reduced units are often abbreviated and indicated by an asterisk.
In general, reduced units can also be built up on other molecular interaction potentials that consist of a length parameter and an energy parameter.
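As a concrete illustration, conversions to reduced units might look as follows. The argon parameters are common literature values quoted here as assumptions, not definitive constants:

```python
K_B = 1.380649e-23  # J/K, Boltzmann constant

# Lennard-Jones parameters commonly quoted for argon (assumed for illustration)
EPSILON = 120.0 * K_B  # J  (epsilon / k_B = 120 K)
SIGMA = 3.4e-10        # m
MASS = 6.63e-26        # kg, mass of an argon atom

def reduced_temperature(T):  # T* = k_B T / epsilon
    return K_B * T / EPSILON

def reduced_density(rho):    # rho* = rho * sigma^3  (rho in particles/m^3)
    return rho * SIGMA ** 3

def reduced_time(t):         # t* = t / (sigma * sqrt(m / epsilon))
    return t / (SIGMA * (MASS / EPSILON) ** 0.5)

print(reduced_temperature(150.0))  # -> 1.25
```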
Long-range interactions
The Lennard-Jones potential, cf. Eq. (1) and the Figure at the top, has an infinite range, and only when this is taken into account are the 'true' and 'full' Lennard-Jones potential examined. For the evaluation of an observable of an ensemble of particles interacting by the Lennard-Jones potential using molecular simulations, the interactions can only be evaluated explicitly up to a certain distance – simply due to the fact that the number of particles will always be finite. The maximum distance applied in a simulation is usually referred to as the 'cut-off' radius (because the Lennard-Jones potential is radially symmetric). To obtain thermophysical properties (both macroscopic and microscopic) of the 'true' and 'full' Lennard-Jones (LJ) potential, the contribution of the potential beyond the cut-off radius has to be accounted for.
Different correction schemes have been developed to account for the influence of the long-range interactions in simulations and to sustain a sufficiently good approximation of the 'full' potential. They are based on simplifying assumptions regarding the structure of the fluid. For simple cases, such as in studies of the equilibrium of homogeneous fluids, simple correction terms yield excellent results. In other cases, such as in studies of inhomogeneous systems with different phases, accounting for the long-range interactions is more tedious. These corrections are usually referred to as 'long-range corrections'. For most properties, simple analytical expressions are known and well established. For a given observable $x$, the 'corrected' simulation result is then simply computed from the actually sampled value $x_\mathrm{sim}$ and the long-range correction value $x_\mathrm{lrc}$, e.g. for the internal energy $U = U_\mathrm{sim} + U_\mathrm{lrc}$. The hypothetical true value of the observable of the Lennard-Jones potential at truly infinite cut-off distance (thermodynamic limit) can in general only be estimated.
Furthermore, the quality of the long-range correction scheme depends on the cut-off radius. The assumptions made with the correction schemes are usually not justified at (very) short cut-off radii. This is illustrated in the example shown in Figure on the right. The long-range correction scheme is said to be converged, if the remaining error of the correction scheme is sufficiently small at a given cut-off distance, cf. Figure.
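For a homogeneous fluid the most common analytic correction is the energy 'tail' integral, obtained by assuming the radial distribution function is unity beyond the cut-off; a minimal sketch in reduced units:

```python
import math

def lj_energy_tail(rho_star: float, rc_star: float) -> float:
    """Standard analytic LJ long-range (tail) correction to the potential energy,
    per particle, in reduced units; assumes a homogeneous fluid with g(r) ~ 1
    beyond the cut-off radius rc_star:
        u_tail* = (8/3) * pi * rho* * [ (1/3) rc*^-9 - rc*^-3 ]
    """
    rc3 = rc_star ** -3
    return (8.0 / 3.0) * math.pi * rho_star * ((rc3 ** 3) / 3.0 - rc3)

# The correction shrinks quickly with growing cut-off radius:
for rc in (2.5, 4.0, 6.0):
    print(f"rc* = {rc}: u_tail* = {lj_energy_tail(0.8, rc):.4f}")
```

At the common cut-off of 2.5σ and a liquid-like density, this correction is of order −0.4 ε per particle, which illustrates why simply truncating without correcting yields measurably different properties.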
Extensions and modifications
The Lennard-Jones potential – as an archetype for intermolecular potentials – has been used numerous times as starting point for the development of more elaborate or more generalized intermolecular potentials. Various extensions and modifications of the Lennard-Jones potential have been proposed in the literature; a more extensive list is given in the 'interatomic potential' article. The following list refers only to several example potentials that are directly related to the Lennard-Jones potential and are of both historic importance and still relevant for present research.
Mie potential The Mie potential is the generalized version of the Lennard-Jones potential, i.e. the exponents 12 and 6 are introduced as parameters $n$ and $m$. Especially thermodynamic derivative properties, e.g. the compressibility and the speed of sound, are known to be very sensitive to the steepness of the repulsive part of the intermolecular potential, which can therefore be modeled more flexibly by the Mie potential. The first explicit formulation of the Mie potential is attributed to Eduard Grüneisen. Hence, the Mie potential was actually proposed before the Lennard-Jones potential. The Mie potential is named after Gustav Mie.
Buckingham potential The Buckingham potential was proposed by Richard Buckingham. The repulsive part of the Lennard-Jones potential is therein replaced by an exponential function and it incorporates an additional parameter.
Stockmayer potential The Stockmayer potential is named after W.H. Stockmayer. The Stockmayer potential is a combination of a Lennard-Jones potential superimposed by a dipole. Hence, Stockmayer particles are not spherically symmetric, but rather have an important orientational structure.
Two center Lennard-Jones potential The two center Lennard-Jones potential consists of two identical Lennard-Jones interaction sites (same $\varepsilon$ and $\sigma$) that are bonded as a rigid body. It is often abbreviated as 2CLJ. Usually, the elongation (the distance between the two Lennard-Jones sites) is significantly smaller than the size parameter $\sigma$. Hence, the two interaction sites are significantly fused.
Lennard-Jones truncated & splined potential The Lennard-Jones truncated & splined potential is a rarely used yet useful potential. Similar to the more popular LJTS potential, it is truncated at a certain 'end' distance and no long-range interactions are considered beyond it. Opposite to the LJTS potential, which is shifted such that the potential is continuous, the Lennard-Jones truncated & splined potential is made continuous by using an arbitrary but favorable spline function.
Lennard-Jones truncated & shifted (LJTS) potential
The Lennard-Jones truncated & shifted (LJTS) potential is an often used alternative to the 'full' Lennard-Jones potential (see Eq. (1)). The 'full' and the 'truncated & shifted' Lennard-Jones potential have to be kept strictly separate. They are simply two different intermolecular potentials yielding different thermophysical properties. The Lennard-Jones truncated & shifted potential is defined as
u_LJTS(r) = u_LJ(r) - u_LJ(r_c) for r <= r_c, and u_LJTS(r) = 0 for r > r_c,
with u_LJ(r) being the 'full' Lennard-Jones potential of Eq. (1) and r_c the truncation (cut-off) radius.
Hence, the LJTS potential is truncated at r = r_c and shifted by the corresponding energy value u_LJ(r_c). The latter is applied to avoid a discontinuity jump of the potential at r = r_c. For the LJTS potential, no long-range interactions beyond r_c are required – neither explicitly nor implicitly. The most frequently used version of the Lennard-Jones truncated & shifted potential is the one with r_c = 2.5σ. Nevertheless, different r_c values have been used in the literature. Each LJTS potential with a given truncation radius has to be considered as a potential and accordingly a substance of its own.
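A minimal sketch of the truncation and shift (in Python; the reduced units and function names are assumptions of this illustration):

    def u_lj(r, epsilon=1.0, sigma=1.0):
        # 'Full' Lennard-Jones potential, cf. Eq. (1).
        return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

    def u_ljts(r, r_c=2.5, epsilon=1.0, sigma=1.0):
        # Identical force to the full LJ potential inside r_c, but shifted
        # by u_lj(r_c) so the potential is continuous at the cut-off, and
        # exactly zero beyond it.
        if r > r_c:
            return 0.0
        return u_lj(r, epsilon, sigma) - u_lj(r_c, epsilon, sigma)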
The LJTS potential is computationally significantly cheaper than the 'full' Lennard-Jones potential, but still covers the essential physical features of matter (the presence of a critical and a triple point, soft repulsive and attractive interactions, phase equilibria etc.). Therefore, the LJTS potential is used for the testing of new algorithms, simulation methods, and new physical theories.
Interestingly, for homogeneous systems, the intermolecular forces that are calculated from the LJ and the LJTS potential at a given distance are the same (since the derivative du/dr is the same), whereas the potential energy and the pressure are affected by the shifting. Also, the properties of the LJTS substance may furthermore be affected by the chosen simulation algorithm, i.e. MD or MC sampling (this is in general not the case for the 'full' Lennard-Jones potential).
For the LJTS potential with r_c = 2.5σ, the potential energy shift is approximately 1/60 of the dispersion energy at the potential well: u_LJ(r_c = 2.5σ) ≈ -0.0163 ε. The figure on the right shows the comparison of the vapor–liquid equilibrium of the 'full' Lennard-Jones potential and the 'Lennard-Jones truncated & shifted' potential. The 'full' Lennard-Jones potential results exhibit a significantly higher critical temperature and pressure compared to the LJTS potential results, but the critical density is very similar. The vapor pressure and the enthalpy of vaporization are influenced more strongly by the long-range interactions than the saturated densities. This is due to the fact that the potential is manipulated mainly energetically by the truncation and shifting.
Applications
The Lennard-Jones potential is not only of fundamental importance in computational chemistry and soft-matter physics, but also for the modeling of real substances. The Lennard-Jones potential is used for fundamental studies on the behavior of matter and for elucidating atomistic phenomena. It is also often used for somewhat special use cases, e.g. for studying thermophysical properties of two- or four-dimensional substances (instead of the classical three spatial directions of our universe).
There are two main applications of the Lennard-Jones potentials: (i) for studying the hypothetical Lennard-Jones substance and (ii) for modeling interactions in real substance models. These two applications are discussed in the following.
Lennard-Jones substance
A Lennard-Jones substance or "Lennard-Jonesium" is the name given to an idealized substance which would result from atoms or molecules interacting exclusively through the Lennard-Jones potential.
Statistical mechanics and computer simulations can be used to study the Lennard-Jones potential and to obtain thermophysical properties of the 'Lennard-Jones substance'. The Lennard-Jones substance is often referred to as 'Lennard-Jonesium', suggesting that it is viewed as a (fictive) chemical element. Moreover, its energy and length parameters can be adjusted to fit many different real substances. Both the Lennard-Jones potential and, accordingly, the Lennard-Jones substance are simplified yet realistic models, in that they accurately capture essential physical principles like the presence of a critical and a triple point, condensation, and freezing. Due in part to its mathematical simplicity, the Lennard-Jones potential has been extensively used in studies on matter since the early days of computer simulation.
Thermophysical properties of the Lennard-Jones substance
Thermophysical properties of the Lennard-Jones substance, i.e. particles interacting with the Lennard-Jones potential, can be obtained using statistical mechanics. Some properties can be computed analytically, i.e. with machine precision, whereas most properties can only be obtained by performing molecular simulations. The latter will in general be superimposed by both statistical and systematic uncertainties. The virial coefficients, for example, can be computed directly from the Lennard-Jones potential using algebraic expressions, and reported data therefore has no uncertainty. Molecular simulation results, e.g. the pressure at a given temperature and density, have both statistical and systematic uncertainties. Molecular simulations of the Lennard-Jones potential can in general be performed using either molecular dynamics (MD) simulations or Monte Carlo (MC) simulation. For MC simulations, the Lennard-Jones potential is directly used, whereas MD simulations are always based on the derivative of the potential, i.e. the force F = -du/dr. These differences, in combination with differences in the treatment of the long-range interactions (see below), can influence computed thermophysical properties.
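As an example of a property that follows directly from the potential, the second virial coefficient is given in reduced units by B2*(T*) = -2π ∫ (exp(-u(r)/T*) - 1) r² dr. A minimal numerical sketch (assuming SciPy is available; function names are illustrative):

    import numpy as np
    from scipy.integrate import quad

    def u_lj(r):
        # Lennard-Jones potential in reduced units (epsilon = sigma = 1).
        return 4.0 * (r**-12 - r**-6)

    def b2(T):
        # Second virial coefficient from the Mayer function; the integral
        # is split at r = 1 for numerical robustness.
        f = lambda r: (np.exp(-u_lj(r) / T) - 1.0) * r**2
        inner, _ = quad(f, 1e-8, 1.0)
        outer, _ = quad(f, 1.0, np.inf)
        return -2.0 * np.pi * (inner + outer)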
Since the Lennard-Jonesium is the archetype for the modeling of simple yet realistic intermolecular interactions, a large number of thermophysical properties were studied and reported in the literature. Computer experiment data of the Lennard-Jones potential is presently considered the most accurately known data in classical mechanics computational chemistry. Hence, such data is also mostly used as a benchmark for validating and testing new algorithms and theories. The Lennard-Jones potential has been constantly used since the early days of molecular simulations. The first results from computer experiments for the Lennard-Jones potential were reported by Rosenbluth and Rosenbluth and Wood and Parker after molecular simulations on "fast computing machines" became available in 1953. Since then many studies reported data of the Lennard-Jones substance; approximately 50,000 data points are publicly available. The current state of research on the thermophysical properties of the Lennard-Jones substance is summarized by Stephan et al. (which did not cover transport and mixture properties). The US National Institute of Standards and Technology (NIST) provides examples of molecular dynamics and Monte Carlo codes along with results obtained from them. Transport property data of Lennard-Jones fluids have been compiled by Bell et al. and Lautenschaeger and Hasse.
The figure on the right shows the phase diagram of the Lennard-Jones fluid. Phase equilibria of the Lennard-Jones potential have been studied numerous times and are accordingly known today with good precision. The figure shows correlations derived from computer experiment results (hence, lines instead of data points are shown).
The mean intermolecular interaction of a Lennard-Jones particle strongly depends on the thermodynamic state, i.e. temperature and pressure (or density). For solid states, the attractive Lennard-Jones interaction plays a dominant role – especially at low temperatures. For liquid states, no ordered structure is present compared to solid states, and the mean potential energy per particle is negative. For gaseous states, attractive interactions of the Lennard-Jones potential play a minor role, since the particles are far apart; the main part of the internal energy is then stored as kinetic energy. At supercritical states, the attractive Lennard-Jones interaction likewise plays a minor role. With increasing temperature, the mean kinetic energy of the particles increases and exceeds the energy well of the Lennard-Jones potential. Hence, the particles mainly interact through the potential's soft repulsive interactions, and the mean potential energy per particle is accordingly positive.
Overall, because the Lennard-Jones potential has been studied over a long timespan, during much of which computational resources were insufficient for simulations that are accurate by modern standards, a noticeable amount of the reported thermophysical property data is known to be dubious. Nevertheless, in many studies such data is used as reference. The lack of data repositories and data assessment is a crucial element for future work in the long-going field of Lennard-Jones potential research.
Characteristic points and curves
The most important characteristic points of the Lennard-Jones potential are the critical point and the vapor–liquid–solid triple point. They have been studied numerous times and compiled in the literature. The critical point was thereby assessed to be located at
The given uncertainties were calculated from the standard deviation of the critical parameters derived from the most reliable available vapor–liquid equilibrium data sets. These uncertainties can be assumed to be a lower limit of the accuracy with which the critical point of a fluid can be obtained from molecular simulation results.
The triple point is presently assumed to be located at
The uncertainties represent the scattering of data from different authors. The critical point of the Lennard-Jones substance has been studied far more often than the triple point. For both the critical point and the vapor–liquid–solid triple point, several studies reported results outside the above stated ranges. The above stated data are presently assumed to be correct and reliable. Nevertheless, the precision with which the critical temperature and the triple point temperature are known is still unsatisfactory.
Evidently, the phase coexistence curves (cf. figures) are of fundamental importance to characterize the Lennard-Jones potential. Furthermore, Brown's characteristic curves yield an illustrative description of essential features of the Lennard-Jones potential. Brown's characteristic curves are defined as curves on which a certain thermodynamic property of the substance matches that of an ideal gas. For a real fluid, the compressibility factor Z and its derivatives can match the values of the ideal gas only for special temperature–pressure combinations, as a result of Gibbs' phase rule. The resulting points collectively constitute a characteristic curve. Four main characteristic curves are defined: one 0th-order (named the Zeno curve) and three 1st-order curves (named the Amagat, Boyle, and Charles curves). The characteristic curves are required to have a negative or zero curvature throughout and a single maximum in a double-logarithmic pressure–temperature diagram. Furthermore, Brown's characteristic curves and the virial coefficients are directly linked in the limit of the ideal gas and are therefore known exactly at zero density. Both computer simulation results and equation of state results have been reported in the literature for the Lennard-Jones potential.
Points on the Zeno curve Z have a compressibility factor of unity, Z = 1. The Zeno curve originates at the Boyle temperature, surrounds the critical point, and has a slope of unity in the low temperature limit. Points on the Boyle curve B have (∂Z/∂V)_T = 0. The Boyle curve originates with the Zeno curve at the Boyle temperature, faintly surrounds the critical point, and ends on the vapor pressure curve. Points on the Charles curve (a.k.a. the Joule–Thomson inversion curve) have (∂Z/∂T)_p = 0 and, more importantly, (∂T/∂p)_H = 0, i.e. no temperature change upon isenthalpic throttling. It originates in the ideal gas limit, crosses the Zeno curve, and terminates on the vapor pressure curve. Points on the Amagat curve A have (∂Z/∂T)_V = 0. It also starts in the ideal gas limit, surrounds the critical point and the other three characteristic curves, and passes into the solid phase region. A comprehensive discussion of the characteristic curves of the Lennard-Jones potential is given by Stephan and Deiters.
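For instance, the reduced Boyle temperature, at which the Zeno and Boyle curves originate, is the root of the second virial coefficient. Reusing the b2 sketch above (the bracketing interval below is an assumption of this illustration):

    from scipy.optimize import brentq

    # B2*(T*) changes sign at the Boyle temperature (roughly T* ~ 3.4 for
    # the Lennard-Jones fluid), so bracket the root accordingly.
    T_boyle = brentq(b2, 2.0, 6.0)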
Properties of the Lennard-Jones fluid
Properties of the Lennard-Jones fluid have been studied extensively in the literature due to the outstanding importance of the Lennard-Jones potential in soft-matter physics and related fields. About 50 datasets of computer experiment data for the vapor–liquid equilibrium have been published to date. Furthermore, more than 35,000 data points at homogeneous fluid states have been published over the years and recently been compiled and assessed for outliers in an open access database.
The vapor–liquid equilibrium of the Lennard-Jones substance is presently known with a certain precision (in the sense of mutual agreement of thermodynamically consistent data) for the vapor pressure, the saturated liquid density, the saturated vapor density, the enthalpy of vaporization, and the surface tension. This status quo cannot be considered satisfactory, considering the fact that the statistical uncertainties usually reported for single data sets are significantly below the mutual deviations among data sets (even for far more complex molecular force fields).
Both phase equilibrium properties and homogeneous state properties at arbitrary density can in general only be obtained from molecular simulations, whereas virial coefficients can be computed directly from the Lennard-Jones potential. Numerical data for the second and third virial coefficients are available in a wide temperature range. For higher virial coefficients (up to the sixteenth), the number of available data points decreases with increasing order of the virial coefficient. Transport properties (viscosity, heat conductivity, and self-diffusion coefficient) of the Lennard-Jones fluid have also been studied, but the database is significantly less dense than for homogeneous equilibrium properties like pvT or internal energy data. Moreover, a large number of analytical models (equations of state) have been developed for the description of the Lennard-Jones fluid (see below for details).
Properties of the Lennard-Jones solid
The database and knowledge for the Lennard-Jones solid are significantly poorer than for the fluid phases. It was realized early that the interactions in solid phases should not be approximated as pair-wise additive – especially for metals.
Nevertheless, the Lennard-Jones potential is used in solid-state physics due to its simplicity and computational efficiency. Hence, the basic properties of the solid phases and the solid–fluid phase equilibria have been investigated several times in the literature.
The Lennard-Jones substance forms fcc (face centered cubic), hcp (hexagonal close-packed), and other close-packed polytype lattices – depending on temperature and pressure, cf. the figure above with the phase diagram. At low temperature and up to moderate pressure, the hcp lattice is energetically favored and therefore the equilibrium structure. The fcc lattice structure is energetically favored at both high temperature and high pressure and therefore overall the equilibrium structure in a wider state range. The coexistence line between the fcc and hcp phases starts at low temperature and pressure, passes through a temperature maximum, and then ends on the vapor–solid phase boundary, thereby forming a triple point. Hence, only the fcc solid phase exhibits phase equilibria with the liquid and supercritical phase, cf. the figure above with the phase diagram.
The triple point of the two solid phases (fcc and hcp) and the vapor phase has been located in the literature, although some of its coordinates have not been reported yet.
Note that other and significantly differing values have also been reported in the literature. Hence, the database for the fcc–hcp–vapor triple point should be further solidified in the future.
Mixtures of Lennard-Jones substances
Mixtures of Lennard-Jones particles are mostly used as a prototype for the development of theories and methods of solutions, but also to study properties of solutions in general. This dates back to the fundamental work on conformal solution theory of Longuet-Higgins, and of Leland and Rowlinson and co-workers. Those works are today the basis of most theories for mixtures.
Mixtures of two or more Lennard-Jones components are set up by changing at least one potential interaction parameter (σ or ε) of one of the components with respect to the other. For a binary mixture, this yields three types of pair interactions that are all modeled by the Lennard-Jones potential: 1-1, 2-2, and 1-2 interactions. For the cross interactions 1-2, additional assumptions are required for the specification of the parameters σ_12 and ε_12 from σ_11, σ_22 and ε_11, ε_22. Various choices (all more or less empirical and not rigorously based on physical arguments) can be used for these so-called combination rules. The most widely used combination rule is the one of Lorentz and Berthelot: σ_12 = η (σ_11 + σ_22)/2 and ε_12 = ξ (ε_11 ε_22)^(1/2).
The parameter ξ is an additional state-independent interaction parameter for the mixture. The parameter η is usually set to unity, since the arithmetic mean can be considered physically plausible for the cross-interaction size parameter. The parameter ξ, on the other hand, is often used to adjust the geometric mean so as to reproduce the phase behavior of the model mixture. For analytical models, e.g. equations of state, the deviation parameter is usually written as k_12 = 1 - ξ. For ξ > 1, the cross-interaction dispersion energy and accordingly the attractive force between unlike particles is intensified, and the attractive forces between unlike particles are diminished for ξ < 1.
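A minimal sketch of these combination rules (in Python; the function name is an assumption of this illustration):

    import math

    def lorentz_berthelot(sigma_1, epsilon_1, sigma_2, epsilon_2, xi=1.0):
        # Arithmetic mean for the cross size parameter (Lorentz rule) and an
        # adjustable geometric mean for the cross energy parameter (Berthelot
        # rule); xi = 1 recovers the unmodified rules.
        sigma_12 = 0.5 * (sigma_1 + sigma_2)
        epsilon_12 = xi * math.sqrt(epsilon_1 * epsilon_2)
        return sigma_12, epsilon_12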
For Lennard-Jones mixtures, both fluid and solid phase equilibria can be studied, i.e. vapor–liquid, liquid–liquid, gas–gas, solid–vapor, solid–liquid, and solid–solid. Accordingly, different types of triple points (three-phase equilibria) and critical points can exist, as well as different eutectic and azeotropic points. Binary Lennard-Jones mixtures in the fluid region (various types of equilibria of liquid and gas phases) have been studied more comprehensively than phase equilibria comprising solid phases. A large number of different Lennard-Jones mixtures have been studied in the literature. To date, no standard set of such mixtures has been established. Usually, the binary interaction parameters and the two component parameters are chosen such that a mixture with properties convenient for a given task is obtained. Yet, this often makes comparisons tricky.
For the fluid phase behavior, mixtures exhibit practically ideal behavior (in the sense of Raoult's law) for ξ = 1. For ξ > 1, attractive interactions prevail and the mixtures tend to form high-boiling azeotropes, i.e. a lower pressure than the pure components' vapor pressures is required to stabilize the vapor–liquid equilibrium. For ξ < 1, repulsive interactions prevail and mixtures tend to form low-boiling azeotropes, i.e. a higher pressure than the pure components' vapor pressures is required to stabilize the vapor–liquid equilibrium, since the mean dispersive forces are decreased. Particularly low values of ξ will furthermore result in liquid–liquid miscibility gaps. Various types of phase equilibria comprising solid phases have also been studied in the literature, e.g. by Carol and co-workers. Cases also exist where the solid phase boundaries interrupt fluid phase equilibria. However, for phase equilibria that comprise solid phases, the amount of published data is sparse.
Equations of state
A large number of equations of state (EOS) for the Lennard-Jones potential/substance have been proposed since its characterization and evaluation became available with the first computer simulations. Due to the fundamental importance of the Lennard-Jones potential, most currently available molecular-based EOS are built around the Lennard-Jones fluid. They have been comprehensively reviewed by Stephan et al.
Equations of state for the Lennard-Jones fluid are of particular importance in soft-matter physics and physical chemistry; they are used as a starting point for the development of EOS for complex fluids, e.g. polymers and associating fluids. The monomer units of these models are usually directly adapted from a Lennard-Jones EOS as a building block, e.g. the PHC EOS, the BACKONE EOS, and SAFT type EOS.
More than 30 Lennard-Jones EOS have been proposed in the literature. A comprehensive evaluation of such EOS showed that several EOS describe the Lennard-Jones potential with good and similar accuracy, but none of them is outstanding. Three of those EOS show unacceptable unphysical behavior in some fluid region, e.g. multiple van der Waals loops, while being otherwise reasonably precise. Only the Lennard-Jones EOS of Kolafa and Nezbeda was found to be robust and precise for most thermodynamic properties of the Lennard-Jones fluid. Furthermore, the Lennard-Jones EOS of Johnson et al. was found to be less precise for practically all available reference data than the Kolafa and Nezbeda EOS.
Lennard-Jones potential as building block for force fields
The Lennard-Jones potential is extensively used for molecular modeling of real substances. There are essentially two ways the Lennard-Jones potential can be used for molecular modeling: (1) a real substance atom or molecule is modeled directly by the Lennard-Jones potential, which yields very good results for noble gases and methane, i.e. dispersively interacting spherical particles. In the case of methane, the molecule is assumed to be spherically symmetric and the hydrogen atoms are fused with the carbon atom into a common unit. This simplification can in general also be applied to more complex molecules, but usually yields poor results. (2) A real substance molecule is built of multiple Lennard-Jones interaction sites, which can be connected either by rigid bonds or by flexible additional potentials (and possibly also consists of other potential types, e.g. partial charges). Molecular models (often referred to as 'force fields') for practically all molecular and ionic particles can be constructed using this scheme, for example for alkanes.
Upon using the first outlined approach, the molecular model has only the two parameters σ and ε of the Lennard-Jones potential that can be used for the fitting; e.g. approximately σ = 0.34 nm and ε/k_B = 120 K can be used for argon. Upon adjusting the model parameters σ and ε to real substance properties, the Lennard-Jones potential can be used to describe simple substances (like noble gases) with good accuracy. Evidently, this approach is only a good approximation for spherical and simply dispersively interacting molecules and atoms. The direct use of the Lennard-Jones potential has the great advantage that simulation results and theories for the Lennard-Jones potential can be used directly. Hence, available results for the Lennard-Jones potential and substance can be directly scaled using the appropriate σ and ε (see reduced units). The Lennard-Jones potential parameters σ and ε can in general be fitted to any desired real substance property. In soft-matter physics, usually experimental data for the vapor–liquid phase equilibrium or the critical point are used for the parametrization; in solid-state physics, rather the compressibility, heat capacity, or lattice constants are employed.
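A minimal sketch of this scaling from reduced to real units (in Python; the default argon parameters below are the commonly quoted approximate values and are given purely for illustration):

    def from_reduced(T_star, rho_star, eps_over_kB=119.8, sigma_nm=0.3405):
        # Scale reduced LJ temperature and number density to real units;
        # the defaults are classic (approximate) argon parameters.
        T = T_star * eps_over_kB          # temperature in K
        sigma_m = sigma_nm * 1e-9
        rho = rho_star / sigma_m**3       # number density in 1/m^3
        return T, rho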
The second outlined approach of using the Lennard-Jones potential as a building block of elongated and complex molecules is far more sophisticated. Molecular models are thereby tailor-made in the sense that simulation results are only applicable for that particular model. This development approach for molecular force fields is today mainly performed in soft-matter physics and associated fields such as chemical engineering, chemistry, and computational biology. A large number of force fields are based on the Lennard-Jones potential, e.g. the TraPPE force field, the OPLS force field, and the MolMod force field (an overview of molecular force fields is out of the scope of the present article). For the state-of-the-art modeling of solid-state materials, more elaborate multi-body potentials (e.g. EAM potentials) are used.
The Lennard-Jones potential yields a good approximation of intermolecular interactions for many applications: the macroscopic properties computed using the Lennard-Jones potential are in good agreement with experimental data for simple substances like argon on the one hand, and the potential function is in fair agreement with results from quantum chemistry on the other hand. The Lennard-Jones potential gives a good description of molecular interactions in fluid phases, whereas molecular interactions in solid phases are only roughly described. This is mainly due to the fact that multi-body interactions play a significant role in solid phases, and these are not comprised in the Lennard-Jones potential. Therefore, the Lennard-Jones potential is extensively used in soft-matter physics and associated fields, whereas it is less frequently used in solid-state physics. Due to its simplicity, the Lennard-Jones potential is often used to describe the properties of gases and simple fluids and to model dispersive and repulsive interactions in molecular models. It is especially accurate for noble gas atoms and methane. It is furthermore a good approximation for molecular interactions at long and short distances for neutral atoms and molecules. Therefore, the Lennard-Jones potential is very often used as a building block of molecular models of complex molecules, e.g. alkanes or water. The Lennard-Jones potential can also be used to model the adsorption interactions at solid–fluid interfaces, i.e. physisorption or chemisorption.
It is well accepted that the main limitations of the Lennard-Jones potential lie in the fact that the potential is a pair potential (it does not cover multi-body interactions) and that the r^(-12) term is used for the repulsion. Results from quantum chemistry suggest that a higher exponent than 12 has to be used, i.e. a steeper potential. Furthermore, the Lennard-Jones potential has limited flexibility, i.e. only the two model parameters σ and ε can be used for the fitting to describe a real substance.
See also
Comparison of force-field implementations
Embedded atom model
Force field (chemistry)
Molecular mechanics
Morse potential and Morse/Long-range potential
Virial expansion
References
External links
Lennard-Jones model on SklogWiki.
Chemical bonding
Computational chemistry
Intermolecular forces
Quantum mechanical potentials
Theoretical chemistry
Thermodynamics | Lennard-Jones potential | Physics,Chemistry,Materials_science,Mathematics,Engineering | 7,200 |
8,008,410 | https://en.wikipedia.org/wiki/Downtime | In computing and telecommunications, downtime (also (system) outage or (system) drought colloquially) is a period when a system is unavailable. The unavailability is the proportion of a time-span that a system is unavailable or offline.
This is usually a result of the system failing to function because of an unplanned event, or because of routine maintenance (a planned event).
The terms are commonly applied to networks and servers. The common reasons for unplanned outages are system failures (such as a crash) or communications failures (commonly known as network outage or network drought colloquially). For outages due to issues with general computer systems, the term computer outage (also IT outage or IT drought) can be used.
The term is also commonly applied in industrial environments in relation to failures in industrial production equipment. Some facilities measure the downtime incurred during a work shift, or during a 12- or 24-hour period. Another common practice is to identify each downtime event as having an operational, electrical or mechanical origin.
The opposite of downtime is uptime.
Types
Industry standards for the terms "outage duration" or "maintenance duration" can have different points of initiation and completion; thus the following clarifications should be used to avoid conflicts in contract execution:
"Turnkey" this is the most engrossing of all outage types. Outage or Maintenance starts with operator of the plant or equipment pressing the shutdown or stop button to initiate a halt in operation. Unless otherwise noted, Outage or Maintenance is considered completed when the plant or equipment is back in normal operation ready to begin manufacturing or ready be synchronized with system or grid or ready to perform duties as pump or compressor.
"Breaker to Breaker" This Outage or Maintenance starts with operator of the plant or equipment removing the power circuit (Main power breaker at "off" or "disengaged" or "On-Cooldown"), not the control circuit from operation. This still would allow for the equipment to be cooled down or brought to ambient such that outage/maintenance work can be prepared or initiated. Depending on equipment types, "Breaker to Breaker" outage can be advantageous if contracting out controls related maintenance as this type of maintenance work can be performed while main equipment is still on cool-down or on stand-by. Unless otherwise noted, this type of outage is considered complete when power circuit is re-energized via engaging of the power breaker.
"Completion of Lock-out/Tag-out" This Outage or Maintenance (sometimes mistaken for "Off-Cooldown" but not the same) starts with operator of the plant or equipment removing the power circuit, disengaging the control circuit and performing other neutralization of potential power and hazard sources (typically called Lock-Out, Tag-Out "LOTO") This point of maintenance period is typically the last phase of the outage initiation stage before actual work starts on the facility, plant or equipment. Safety briefing should always follow the LOTO activity, before any work is conducted. Unless otherwise noted, this type of outage is considered complete when the equipment has reached mechanical completion and ready to be placed on slow-roll for many heavy rotating equipment, Bump-test or rotation check for motors, etc., but must follow return or work permit per LOTO procedures.
Any on-line testing, performance testing, and tuning required should not count towards the outage duration, as these activities are typically conducted after the completion of the outage or maintenance event and are outside the control of most maintenance contractors.
Characteristics
Unplanned downtime may be the result of an equipment malfunction, etc.
Telecommunication outage classifications
Downtime can be caused by failure in
hardware (physical equipment),
software (logic controlling equipment),
interconnecting equipment (such as cables, facilities, routers, ...),
transmission (wireless, microwave, satellite), and/or
capacity (system limits).
The failures can occur because of
damage,
failure,
design,
procedural (improper use by humans),
engineering (how to use and deployment),
overload (traffic or system resources stressed beyond designed limits),
environment (support systems like power and HVAC),
planned (outages designed into the system for a purpose such as software upgrades and equipment growth),
other (none of the above but known), or
unknown.
The failures can be the responsibility of
customer/service provider,
vendor/supplier,
utility,
government,
contractor,
end customer,
public individual,
act of nature,
other (none of the above but known), or
unknown.
Impact
Outages caused by system failures can have a serious impact on the users of computer/network systems, in particular those industries that rely on a nearly 24-hour service:
Medical informatics
Nuclear power and other infrastructure
Banks and other financial institutions
Aeronautics, airlines
News reporting
E-commerce and online transaction processing
Persistent online games
Also affected can be the users of an ISP and other customers of a telecommunication network.
Corporations can lose business due to a network outage, or they may default on a contract, resulting in financial losses. According to Veeam's 2019 cloud data management report, organizations encounter unplanned downtime, on average, 5–10 times per year, with the average cost of one hour of downtime being $102,450.
Those people or organizations that are affected by downtime can be more sensitive to particular aspects:
some are more affected by the length of an outage - it matters to them how much time it takes to recover from a problem
others are sensitive to the timing of an outage - outages during peak hours affect them the most
The most demanding users are those that require high availability.
Famous outages
On Mother's Day, Sunday, May 8, 1988, a fire broke out in the main switching room of the Hinsdale Central Office of the Illinois Bell telephone company. One of the largest switching systems in the state, the facility processed more than 3.5 million calls each day while serving 38,000 customers, including numerous businesses, hospitals, and Chicago's O'Hare and Midway Airports.
Virtually the entire AT&T network of 4ESS toll tandems switches went in and out of service over and over again on January 15, 1990, disrupting long-distance service for the entire United States. The problem dissipated by itself when traffic slowed down. A software bug was found.
AT&T lost its Frame Relay network for 26 hours on April 13, 1998. This affected many thousands of customers, and bank transactions were one casualty. AT&T failed to meet the service level agreement on their contracts with customers and had to refund 6,600 customer accounts, costing millions of dollars.
Xbox Live had intermittent downtime during the 2007–2008 holiday season which lasted thirteen days. Increased demand from Xbox 360 purchasers (the largest number of new user sign-ups in the history of Xbox Live) was given as the reason for the downtime; in order to make amends for the service issues, Microsoft offered their users the opportunity to receive a free game.
Sony's PlayStation Network April 2011 outage began on April 20, 2011, and service was gradually restored starting May 14, 2011, beginning in the United States. This outage is the longest amount of time the PSN has been offline since its inception in 2006. Sony has stated the problem was caused by an external intrusion which resulted in the compromise of personal information. Sony reported on April 26, 2011, that a large amount of user data had been obtained by the same hack that resulted in the downtime.
Telstra's Ryde switch failed in late 2011 after water leaked into the electrical switchboard during continuing wet weather. The Ryde switch is one of the largest switches by area in Australia, and the failure affected more than 720,000 services.
The Miami datacenter of ServerAxis went offline unannounced on February 29, 2016, and was never restored. This impacted multiple providers and hundreds of websites. The outage impacted coverage of the 2016 NCAA Division I women's basketball tournament as WBBState, one of the affected sites, was by far the most comprehensive provider of women's basketball statistics available.
The game platform Roblox had an outage around October 2021, during its Chipotle event. Many users thought the outage was caused by the event, which received massive attention because users could get a free Chipotle burrito during it. The outage was Roblox's longest downtime, lasting three days.
On July 8, 2022, Rogers suffered a major nationwide outage in Canada. This simultaneously affected cell phone and internet access, causing 911 calls and interbank transactions to fail, and also disrupting government services.
On July 19, 2024, CrowdStrike issued a faulty device driver update for their Falcon software, causing Windows PCs, servers, and virtual machines to crash and boot-loop. The incident unintentionally affected approximately 8.5 million Windows machines worldwide, including critical infrastructure such as 911 services in various states. It is considered to be the largest outage in the history of information technology.
Service levels
In service level agreements, it is common to mention a percentage value (per month or per year) that is calculated by dividing the sum of all downtime timespans by the total time of a reference time span (e.g. a month). 0% downtime means that the server was available all the time.
For Internet servers, a downtime of 1% per year or worse can be regarded as unacceptable, as this means a downtime of more than 3 days per year. For e-commerce and other industrial use, any value above 0.1% is usually considered unacceptable.
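A minimal sketch of this calculation (the function names are illustrative):

    def downtime_fraction(outage_hours, period_hours):
        # Fraction of a reference period that a system was unavailable, e.g.
        # downtime_fraction([1.5, 0.5], 30 * 24) -> ~0.0028, i.e. ~0.28%.
        return sum(outage_hours) / period_hours

    def availability(outage_hours, period_hours):
        return 1.0 - downtime_fraction(outage_hours, period_hours)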
Response and reduction of impact
It is the duty of the network designer to make sure that a network outage does not happen. When it does happen, a well-designed system will further reduce the effects of an outage by having localized outages which can be detected and fixed as soon as possible.
A process needs to be in place to detect a malfunction – network monitoring – and to restore the network to a working condition; this generally involves a help desk team of trained engineers that can troubleshoot the problem. A separate help desk team is usually necessary to field user input, which can be particularly demanding during a downtime.
A network management system can be used to detect faulty or degrading components prior to customer complaints, with proactive fault rectification.
Risk management techniques can be used to determine the impact of network outages on an organisation and what actions may be required to minimise risk. Risk may be minimised by using reliable components, by performing maintenance, such as upgrades, by using redundant systems or by having a contingency plan or business continuity plan.
Technical means can reduce errors with error correcting codes, retransmission, checksums, or diversity schemes.
One of the biggest causes of downtime is misconfiguration, where a planned change goes wrong. Typically organisations rely on manual effort to manage the process of configuration backups, but this requires highly skilled engineers with the time to manage the process across a multi-vendor network. Automation tools are available to manage backups, but there are very few solutions that handle configuration recovery which is needed to minimize the overall impact of the outage.
Planning
A planned outage is the result of a planned activity by the system owner and/or by a service provider. These outages, often scheduled during the maintenance window, can be used to perform tasks including the following:
Deferred maintenance, e.g., a deferred hardware repair or a deferred restart to clean up a garbled memory
Diagnostics to isolate a detected fault
Hardware fault repair
Fixing an error or omission in a configuration database or omission in a recent configuration database change
Fixing an error in application database or an error in a recent application database change
Software patching/software updates to fix a software fault.
Outages can also be planned as a result of a predictable natural event, such as Sun outage.
Maintenance downtimes have to be carefully scheduled in industries that rely on computer systems. In many cases, system-wide downtimes can be averted using what is called a "rolling upgrade" - the process of incrementally taking down parts of the system for upgrade, without affecting the overall functionality.
Avoidance
For most websites, website monitoring is available. Website monitoring (synthetic or passive) is a service that "monitors" downtime and users on the site.
Other usage
Downtime can also refer to time when human capital or other assets go down. For instance, if employees are in meetings or unable to perform their work due to another constraint, they are down. This can be equally expensive, and can be the result of another asset (i.e. computer/systems) being down. This is also commonly known as "dead time".
Downtime is also generalized in a personal sense, being used to refer to a period of sleep or recreation.
This term is used also in factories or industrial use. See total productive maintenance (TPM).
Measuring downtime
There are many external services which can be used to monitor the uptime and downtime as well as availability of a service or a host.
See also
High availability
Uptime
Mean down time
Planned downtime
Carrier grade
References
External links
Engineering failures
Maintenance
System administration
| Downtime | Technology,Engineering | 2,760 |
44,915,309 | https://en.wikipedia.org/wiki/Psi2%20Draconis |
Psi2 Draconis is a solitary giant star in the northern circumpolar constellation of Draco, also designated 34 Draconis. It lies just over a degree east of the brighter Psi1 Draconis. Psi2 Draconis has a yellow-white hue and is dimly visible to the naked eye with an apparent visual magnitude of 5.45. It is located at a distance of from the Sun based on parallax, but is drifting closer with a radial velocity of −2 km/s.
According to R. O. Gray and associates (2001), the stellar classification of Psi2 Draconis is F2III+; a star that has used up its core hydrogen, cooled, and expanded away from the main sequence. A. P. Cowley and W. P. Bidelman (1979) found a similar class of F3 II-III, with the comment that the spectrum showed "many weak lines". Based on the abundance of iron, the metallicity of this star is much lower than in the Sun. It is about 800 million years old and is spinning with a projected rotational velocity of 50 km/s. The star has double the mass of the Sun but has expanded to 15 times the Sun's radius. It is radiating 448 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 6,925 K.
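The quoted radius, temperature, and luminosity are consistent with the Stefan–Boltzmann law, L/L_Sun = (R/R_Sun)^2 (T/T_Sun)^4. A quick sketch in Python (using 5772 K as an assumed reference value for the Sun's effective temperature):

    SUN_T_EFF = 5772.0  # K, assumed reference value for the Sun

    def luminosity_in_suns(r_in_suns, t_eff):
        # Stefan-Boltzmann law expressed in solar units.
        return r_in_suns**2 * (t_eff / SUN_T_EFF) ** 4

    # luminosity_in_suns(15, 6925) -> ~467, close to the quoted 448;
    # the difference reflects rounding of the quoted parameters.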
References
F-type giants
Draco (constellation)
Draconis, Psi2
Durchmusterung objects
Draconis, 34
164613
087728
6725 | Psi2 Draconis | Astronomy | 336 |
11,225,948 | https://en.wikipedia.org/wiki/Spitzenk%C3%B6rper | The Spitzenkörper (German for 'pointed body', SPK) is a structure found in fungal hyphae that is the organizing center for hyphal growth and morphogenesis. It consists of many small vesicles and is present in growing hyphal tips, during spore germination, and where branch formation occurs. Its position in the hyphal tip correlates with the direction of hyphal growth. The Spitzenkörper is a part of the endomembrane system in fungi.
The vesicles are organized around a central area that contains a dense meshwork of microfilaments. Polysomes are often found close to the posterior boundary of the Spitzenkörper core within the Ascomycota; microtubules extend into and often through the Spitzenkörper; and within the Ascomycota, Woronin bodies are found in the apical region near the Spitzenkörper.
The cytoplasm of the extreme apex is occupied almost exclusively by secretory vesicles. In the higher fungi (Ascomycota and Basidiomycota), secretory and endocytic vesicles are arranged into a dense, spherical aggregation called the Spitzenkörper or ‘apical body’. The Spitzenkörper may be seen in growing hyphae even with a light microscope. Hyphae of the Oomycota and some lower Eumycota (notably the Zygomycota) do not contain a recognizable Spitzenkörper, and the vesicles are instead distributed more loosely often in a crescent-shaped arrangement beneath the apical plasma membrane.
This structure is most commonly found in Dikarya and was at first thought to occur only among them. Vargas et al. (1993), however, were the first to find a Spitzenkörper in another clade, specifically in Allomyces (Blastocladiomycota); subsequently Basidiobolus ranarum, which has been placed in several different phyla, was also found to have an SPK. These and Blastocladiella (also in Blastocladiomycota) are the only known taxa to bear this structure.
References
Cell biology
Fungal morphology and anatomy
Articles containing video clips | Spitzenkörper | Biology | 465 |
2,426,119 | https://en.wikipedia.org/wiki/Criticality%20matrix | In operations research and engineering, a criticality matrix is a representation (often graphical) of failure modes along with their probabilities and severities.
Severity may be classified in four categories, with Level I as most severe or "catastrophic"; Level II for "critical"; Level III for "marginal"; and Level IV for "minor".
Example
For example, an aircraft might have the following matrix:
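As a hypothetical sketch (all failure modes, probabilities, and probability bands below are invented for illustration, not real aircraft figures), such a matrix could be assembled as follows:

    # All data below are hypothetical illustrations, not real aircraft figures.
    failure_modes = {
        "engine flame-out":     {"probability": 1e-6, "severity": "I"},
        "hydraulic pump leak":  {"probability": 1e-4, "severity": "II"},
        "cabin display glitch": {"probability": 1e-3, "severity": "IV"},
    }

    def criticality_matrix(modes):
        # Group each failure mode into a cell keyed by (severity, band).
        bands = [(1e-3, "frequent"), (1e-5, "occasional"), (0.0, "remote")]
        matrix = {}
        for name, fm in modes.items():
            band = next(label for threshold, label in bands
                        if fm["probability"] >= threshold)
            matrix.setdefault((fm["severity"], band), []).append(name)
        return matrix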
References
Industrial engineering | Criticality matrix | Engineering | 89 |
32,497,546 | https://en.wikipedia.org/wiki/Fern%20sports | Fern sports are plants that show marked change from the normal type or parent stock as a result of mutation. The term Morphotype is also used for any of a group of different types of individuals of the same species in a population. Fern fronds in sports are typically altered in several ways, such as the frond apex divided and pinnae similarly duplicated.
Occurrence
Soft Shield Fern Polystichum setiferum, Lady Fern Athyrium filix-femina and Hart's Tongue Fern Asplenium scolopendrium are known to have around three hundred varieties or sports. Scaly Male Fern Dryopteris affinis and Male Fern Dryopteris filix-mas have a number of commercially available and naturally occurring sports or subspecies. Examples are D. affinis polydactyla Dadds, A. filix-femina plumosum, A. filix-femina corymbiferum, and D. filix-mas Barnesii.
Lady Fern (Athyrium filix-femina)
Characteristics
The frond of a sport may be branched at the tip and at the tips of the pinnae, the colour may vary, and variegation may occur; fronds generally remain bilaterally symmetrical. Fern sports remain normal in certain respects, such as viability, with sori and indusia appearing normal. The frond stipe may be a different colour.
Hart's Tongue fern (Asplenium scolopendrium)
Misidentification
Galls on ferns and other physical damage to fern fronds can be mistaken for sports; however, such damage is usually asymmetric, ferns generally being bilaterally symmetrical. In Athyrium and Dryopteris species, white maggots of Chirosia betuleti create mop-head galls on fern frond tips that look somewhat like fern sports; however, this is physical damage and not a growth form.
Rarity
Fern sports particularly suffered during the Victorian-era Pteridomania ('fern fever') craze, when over-collecting of fern species extended to unusual fern varieties.
See also
Knotting gall
References
Notes
Sources
Redfern, Margaret & Shirley, Peter (2002). British Plant Galls: Identification of Galls on Plants & Fungi. AIDGAP. Shrewsbury: Field Studies Council.
External links
Ferns sports
Ferns
Plant anatomy
Plant morphology | Fern sports | Biology | 494 |
850,663 | https://en.wikipedia.org/wiki/Trust%20%28social%20science%29 | Trust is the belief that another person will do what is expected. It brings with it a willingness for one party (the trustor) to become vulnerable to another party (the trustee), on the presumption that the trustee will act in ways that benefit the trustor. In addition, the trustor does not have control over the actions of the trustee. Scholars distinguish between generalized trust (also known as social trust), which is the extension of trust to a relatively large circle of unfamiliar others, and particularized trust, which is contingent on a specific situation or a specific relationship.
As the trustor is uncertain about the outcome of the trustee's actions, the trustor can only develop and evaluate expectations. Such expectations are formed with a view to the motivations of the trustee, dependent on their characteristics, the situation, and their interaction. The uncertainty stems from the risk of failure or harm to the trustor if the trustee does not behave as desired.
In the social sciences, the subtleties of trust are a subject of ongoing research. In sociology and psychology, the degree to which one party trusts another is a measure of belief in the honesty, fairness, or benevolence of another party. The term "confidence" is more appropriate for a belief in the competence of the other party. A failure in trust may be forgiven more easily if it is interpreted as a failure of competence rather than a lack of benevolence or honesty. In economics, trust is often conceptualized as reliability in transactions. In all cases, trust is a heuristic decision rule, allowing a person to deal with complexities that would require unrealistic effort in rational reasoning.
Types
Types of trust identified in academic literature include contractual trust, competence trust and goodwill trust. American lawyer Charles Fried speaks of "contractual trust" as a "humdrum" experience based on the voluntary acceptance of contractual obligations: for example, people keep appointments and undertake commercial transactions. "Competence trust" can be defined as "a belief in the other's ability to do the job or complete a task"; this term is applied, for example, in relation to cultural competence in healthcare. In working relationships, "goodwill trust" has been described as "trust regarding the benevolence and integrity of [a] counterpart".
Four types of social trust are recognized:
Generalized trust, or a dispositional trait geared towards trusting others, is an important form of trust in modern society, which involves much social interaction with strangers. Schilke et al. refer to generalized and particularized trust (trust exhibited in a specific situation or a specific relationship) as two significant research streams in the sociology of trust.
Out-group trust is the trust a person has in members of a different group. This could be members of a different ethnic group, or citizens of a different country, for example.
In-group trust is placed in members of one's own group.
Trust in neighbors considers the relationships between people with a common residential environment.
Sociology
Sociology claims trust is one of several social constructs; an element of the social reality. Other constructs frequently discussed together with trust include control, confidence, risk, meaning and power. Trust is attributable to relationships between social actors, both individuals and groups (social systems). Sociology is concerned with the position and role of trust in social systems. Interest in trust has grown significantly since the early 1980s, from the early works of Luhmann, Barber, and Giddens (see Sztompka for a more detailed overview). This growth of interest in trust has been stimulated by ongoing changes in society, known as late modernity and post-modernity.
Sviatoslav contended that society needs trust because it increasingly finds itself operating at the edge between confidence in what is known from everyday experience and contingency of new possibilities. Without trust, one should always consider all contingent possibilities, leading to paralysis by analysis. Trust acts as a decisional heuristic, allowing the decision-maker to overcome bounded rationality and process what would otherwise be an excessively complex situation. Trust can be seen as a bet on one of many contingent futures, specifically, the one that appears to deliver the greatest benefits. Once the bet is decided (i.e. trust is granted), the trustor suspends his or her disbelief, and the possibility of a negative course of action is not considered at all. Hence trust acts as a reducing agent of social complexity, allowing for cooperation.
Sociology tends to focus on two distinct views: the macro view of social systems, and a micro view of individual social actors (where it borders with social psychology). Views on trust follow this dichotomy. On one side, the systemic role of trust can be discussed with a certain disregard for the psychological complexity underpinning individual trust. A behavioral approach to trust is usually assumed here: since the actions of social actors are measurable, statistical modelling of trust is possible. This systemic approach can be contrasted with studies on social actors and their decision-making process, in anticipation that understanding of such a process will explain (and allow modelling of) the emergence of trust.
Sociology acknowledges that the contingency of the future creates a dependency between social actors and, specifically, that the trustor becomes dependent on the trustee. Trust is seen as one of the possible methods to resolve such a dependency, being an attractive alternative to control. Trust is valuable if the trustee is much more powerful than the trustor, yet the trustor is under social obligation to support the trustee.
Modern information technologies have not only facilitated the transition to a post-modern society but have also challenged traditional views on trust. Information systems research has identified that people have come to trust in technology via two primary constructs: the first consists of human-like constructs, including benevolence, honesty, and competence, while the second employs system-like constructs, such as usefulness, reliability, and functionality. The discussion surrounding the relationship between information technologies and trust is still in progress, as research remains in its infancy.
High- and low-trust societies
Influence of ethnic diversity
Several dozen studies have examined the impact of ethnic diversity on social trust. Research published in the Annual Review of Political Science concluded that there were three key debates on the subject:
Why does ethnic diversity modestly reduce social trust?
Can contact reduce the negative association between ethnic diversity and social trust?
Is ethnic diversity a stand-in for social disadvantage?
The review's meta-analysis of 87 studies showed a consistent, though modest, negative relationship between ethnic diversity and social trust. Ethnic diversity has the strongest negative impact on neighbor trust, in-group trust, and generalized trust. It did not appear to have a significant impact on out-group trust. The authors present a warning about the modest size of the effect, stating, "However, the rather modest size of the [overall negative relationship] implies that apocalyptic claims regarding the severe threat of ethnic diversity for social trust in contemporary societies are exaggerated."
Psychology
In psychology, trust is believing that the trusted person will do what is expected. According to the psychoanalyst Erik Erikson, development of basic trust is the first stage of psychosocial development, occurring, or failing, during the first two years of life. Success results in feelings of security and optimism, while failure leads towards an orientation of insecurity and mistrust, possibly resulting in attachment disorders. A person's dispositional tendency to trust others can be considered a personality trait and as such is one of the strongest predictors of subjective well-being. Trust increases subjective well-being because it enhances the quality of one's interpersonal relationships; happy people are skilled at fostering good relationships.
Trust is integral to the idea of social influence: it is easier to influence or persuade someone who is trusting. The notion of trust is increasingly adopted to predict acceptance of behaviors by others, institutions (e.g. government agencies), and objects such as machines. Yet once again, perceptions of honesty, competence and value similarity (slightly similar to benevolence) are essential.
There are three forms of trust commonly studied in psychology:
Trust is being vulnerable to someone even when they are trustworthy.
Trustworthiness are the characteristics or behaviors of one person that inspire positive expectations in another person.
Trust propensity is the tendency to make oneself vulnerable to others in general. Research suggests that this general tendency can change over time in response to key life events.
Once trust is lost by violation of one of these three determinants, it is very hard to regain. There is asymmetry in the building versus destruction of trust.
Research has been conducted into the social implications of trust, for instance:
Barbara Misztal attempted to combine all notions of trust. She described three functions of trust: it makes social life predictable, it creates a sense of community, and it makes it easier for people to work together.
In the context of sexual trust, Riki Robbins describes four stages. These consist of perfect trust, damaged trust, devastated trust, and restored trust.
In the context of information theory, Ed Gerck relates trust to social functions such as power, surveillance, and accountability.
From a social identity perspective, the propensity to trust strangers (see in-group favoritism) arises from the mutual knowledge of a shared group membership, stereotypes, or the need to maintain the group's positive distinctiveness.
Despite the centrality of trust to the positive functioning of people and relationships, very little is known about how and why trust evolves, is maintained, and is destroyed.
One factor that enhances trust among people is facial resemblance. Experimenters who digitally manipulated facial resemblance in a two-person sequential trust game found evidence that people have more trust in a partner who has similar facial features. Facial resemblance also decreased sexual desire for a partner. In a series of tests, digitally manipulated faces were presented to subjects who evaluated them for attractiveness within a long-term or short-term relationship. The results showed that within the context of a short-term relationship dependent on sexual desire, similar facial features caused a decrease in desire. Within the context of a long-term relationship, which is dependent on trust, similar facial features increased a person's attractiveness. This suggests that facial resemblance and trust have great effects on relationships.
Interpersonal trust literature investigates "trust-diagnostic situations": situations that test partners' abilities to act in the best interests of the other person or the relationship while rejecting a conflicting option which is merely in their self-interest. Trust-diagnostic situations occur throughout everyday life, though they can also be deliberately engineered by people who want to test the current level of trust in a relationship.
A low-trust relationship is one in which a person has little confidence their partner is truly concerned about them or the relationship. People in low trust relationships tend to make distress-maintaining attributions whereby they place their greatest focus on the consequences of their partner's negative behavior, and any impacts of positive actions are minimized. This feeds into the overarching notion that the person's partner is uninterested in the relationship, and any positive acts on their part are met with skepticism, leading to further negative outcomes.
Distrusting people may miss opportunities for trusting relationships. Someone subject to an abusive childhood may have been deprived of any evidence that trust is warranted in future relationships. An important key to treating sexual victimization of a child is the rebuilding of trust between parent and child. Failure by adults to validate that sexual abuse occurred contributes to the child's difficulty in trusting self and others. A child's trust can also be affected by the erosion of the marriage of their parents. Children of divorce do not exhibit less trust in mothers, partners, spouses, friends, and associates than their peers of intact families. The impact of parental divorce is limited to trust in the father.
People may trust non-human agents. For instance, people may trust animals, the scientific process, and machines. Trust helps create a social contract that allows humans and domestic animals to live together. Trust in the scientific process is associated with increased trust in innovations such as biotechnology. When it comes to trust in social machines, people are more willing to trust intelligent machines with humanoid morphologies and female cues, when they are focused on tasks (versus socialization), and when they behave morally well. More generally, they may be trusted as a function of the "machine heuristic"—a mental shortcut with which people assume that machines are less biased, more accurate, and more reliable than people—such that people may sometimes trust a robot more than a person.
People are disposed to trust and to judge the trustworthiness of other people or groups—for instance, in developing relationships with potential mentors. One example would be as part of interprofessional work in the referral pathway from an emergency department to a hospital ward. Another would be building knowledge on whether new practices, people, and things introduced into our lives are indeed accountable or worthy of investing confidence and trust in. This process is captured by the empirically grounded construct of "Relational Integration" within Normalization Process Theory. This can be traced in neuroscience terms to the neurobiological structure and activity of a human brain. Some studies indicate that trust can be altered by the application of oxytocin.
Social identity approach
The social identity approach explains a person's trust in strangers as a function of their group-based stereotypes or in-group favoring behaviors which they base on salient group memberships. With regard to ingroup favoritism, people generally think well of strangers but expect better treatment from in-group members in comparison to out-group members. This greater expectation translates into a propensity to trust a member of the in-group more than a member of the out-group. It is only advantageous for one to form such expectations of an in-group stranger if the stranger also knows one's own group membership.
The social identity approach has been empirically investigated. Researchers have employed allocator studies to understand group-based trust in strangers. They may be operationalized as unilateral or bilateral relationships of exchange. General social categories such as university affiliation, course majors, and even ad-hoc groups have been used to distinguish between in-group and out-group members. In unilateral studies of trust, the participant is asked to choose between envelopes containing money that an in-group or out-group member previously allocated. Participants have no prior or future opportunities for interaction, thereby testing Brewer's notion that group membership is sufficient to bring about group-based trust and hence cooperation. Participants could expect an amount ranging from nothing to the maximum value an allocator could give out. Bilateral studies of trust have employed an investment game devised by Berg and colleagues in which people choose to give a portion or none of their money to another. Any amount given would be tripled and the receiver would then decide whether they would return the favor by giving money back to the sender. This was meant to test trusting behavior on the sender's part and the receiver's eventual trustworthiness.
Empirical research demonstrates that when group membership is salient to both parties, trust is granted more readily to in-group members than out-group members. This occurs even when the in-group's stereotype was comparatively less positive than the out-group's, and when participants had the option of a sure sum of money (in essence opting out of the need to trust a stranger to gain some monetary reward). When only one party was made aware of group membership, trust became reliant upon group stereotypes: the group with the more positive stereotype was trusted (e.g. one's university affiliation over another's), even over that of the in-group (e.g. nursing over psychology majors).
Another explanation for in-group-favoring behaviors could be the need to maintain in-group positive distinctiveness, particularly in the presence of social identity threat. Trust in out-group strangers increased when personal cues to identity were revealed.
Philosophy
Many philosophers have written about different forms of trust. Most agree that interpersonal trust is the foundation on which these forms can be modeled. For an act to be an expression of trust, it must not betray the expectations of the trustee. Some philosophers, such as Lagerspetz, argue that trust is a kind of reliance, though not merely reliance. Gambetta argued that trust is the inherent belief that others generally have good intentions, which is the foundation for our reliance on them. Philosophers such as Annette Baier challenged this view, asserting a difference between trust and reliance by saying that trust can be betrayed, whereas reliance can only be disappointed. Carolyn McLeod explains Baier's argument with the following examples: we can rely on our clock to give the time, but we do not feel betrayed when it breaks, thus, we cannot say that we trusted it; we are not trusting when we are suspicious of another person, because this is in fact an expression of distrust. The violation of trust warrants this sense of betrayal. Thus, trust is different from reliance in the sense that a trustor accepts the risk of being betrayed.
Karen Jones proposed an emotional aspect to trust—optimism that the trustee will do the right thing by the trustor, which is also described as "affective trust". People sometimes trust others even without this optimistic expectation, instead hoping that by extending trust this will prompt trustworthy behavior in the trustee. This is known as "therapeutic trust" and gives both the trustee a reason to be trustworthy, and the trustor a reason to believe they are trustworthy.
The definition of trust as a belief in something or a confident expectation about something eliminates the notion of risk because it does not include whether the expectation or belief is favorable or unfavorable. For example, to expect a friend to arrive to dinner late because she has habitually arrived late for the last fifteen years is a confident expectation (whether or not we find her late arrivals to be annoying). The trust is not about what we wish for, but rather it is in the consistency of the data. As a result, there is no risk or sense of betrayal because the data exists as collective knowledge. Faulkner contrasts such "predictive trust" with the aforementioned affective trust, proposing that predictive trust may only warrant disappointment as a consequence of an inaccurate prediction, not a sense of betrayal.
Economics
Trust in economics explains the difference between actual human behavior and behavior that could be explained by people's desire to maximize utility. In economic terms, trust can explain a difference between Nash equilibrium and the observed equilibrium. Such an approach can be applied to individual people as well as to societies.
Trust is important to economists for many reasons. Taking the "Market for Lemons" transaction popularized by George Akerlof as an example, if a potential buyer of a car does not trust the seller not to sell a lemon, the transaction will not take place. The buyer will not buy without trust, even if the product would be of great value to the buyer. Trust can act as an economic lubricant, reducing the cost of transactions between parties, enabling new forms of cooperation, and generally furthering business activities, employment, and prosperity. This observation prompted interest in trust as a form of social capital and research into the process of creation and distribution of such capital. A higher level of social trust may be positively correlated with economic development: Even though the original concept of "high trust" and "low trust" societies may not necessarily hold, social trust benefits the economy and a low level of trust inhibits economic growth. The absence of trust restricts growth in employment, wages, and profits, thus reducing the overall welfare of society. The World Economic Forums of 2022 and 2024 both adopted the rebuilding of trust as their themes.
Theoretical economical modelling demonstrates that the optimum level of trust that a rational economic agent should exhibit in transactions is equal to the trustworthiness of the other party. Such a level of trust leads to an efficient market. Trusting less leads to losing economic opportunities, while trusting more leads to unnecessary vulnerabilities and potential exploitation. Economics is also interested in quantifying trust, usually in monetary terms. The level of correlation between an increase in profit margin and a decrease in transactional costs can be used as an indicator of the economic value of trust.
Economic "trust games" empirically quantify trust in relationships under laboratory conditions. Several games and game-like scenarios related to trust have been tried, those that allow the estimation of confidence in monetary terms. In games of trust the Nash equilibrium differs from Pareto optimum so that no player alone can maximize their own utility by altering their selfish strategy without cooperation. Cooperating partners can also benefit. The classical version of the game of trust has been described as an abstract investment game, using the scenario of an investor and a broker. The investor can invest some fraction of his money, and the broker can return to the investor some fraction of the investor's gains. If both players follow their naive economic best interest, the investor should never invest, and the broker will never be able to repay anything. Thus the flow of money, its volume, is attributable entirely to the existence of trust. Such a game can be played as a once-off, or repeatedly with the same or different sets of players to distinguish between a general propensity to trust and trust within particular relationships. Several variants of this game exist. Reversing rules leads to the game of distrust, pre-declarations can be used to establish intentions of players, while alterations to the distribution of gains can be used to manipulate the perceptions of both players. The game can be played by several players on the closed market, with or without information about reputation.
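A minimal sketch of the payoff structure of this investment game may make the equilibrium argument concrete. The endowment, tripling factor, and return fraction below are illustrative parameters chosen for the sketch, not values from any particular study.

```python
# Minimal sketch of a Berg-style investment ("trust") game.
# All parameter values are illustrative, not from any particular study.

def trust_game(endowment: float, invested: float, return_fraction: float):
    """Return (investor_payoff, broker_payoff) for one round.

    The invested amount is tripled in transit; the broker keeps
    (1 - return_fraction) of the tripled sum and returns the rest.
    """
    assert 0 <= invested <= endowment and 0 <= return_fraction <= 1
    tripled = 3 * invested
    returned = return_fraction * tripled
    investor = endowment - invested + returned
    broker = tripled - returned
    return investor, broker

# Nash equilibrium of the one-shot game: a selfish broker returns nothing,
# so a selfish investor invests nothing and no surplus is created.
print(trust_game(10, 0, 0.0))    # (10, 0)

# With trust and reciprocity, both parties can end up better off:
print(trust_game(10, 10, 0.5))   # (15.0, 15.0) -- a Pareto-superior outcome
```

Backward induction makes the first outcome the selfish prediction, while the second call illustrates the Pareto-superior outcome that trust makes reachable.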
Other interesting games include binary-choice trust games and the gift-exchange game. Games based on the Prisoner's Dilemma link trust with economic utility and demonstrate the rationality behind reciprocity.
The popularization of e-commerce led to new challenges related to trust within the digital economy and the desire to understand buyers' and sellers' decisions to trust one another. For example, interpersonal relationships between buyers and sellers have been disintermediated by the technology, and consequently new means of establishing trust between the parties were required. Websites can influence the buyer to trust the seller, regardless of the seller's actual trustworthiness. Reputation-based systems can improve trust assessment by capturing a collective perception of trustworthiness; this has generated interest in various models of reputation.
Management and organization science
In management and organization science, trust is studied as a factor which organizational actors can manage and influence. Scholars have researched how trust develops across individual and organizational levels of analysis. They suggest a reciprocal process in which organizational structures influence people's trust and, at the same time, people's trust manifests in organizational structures. Trust is also one of the conditions of an organizational culture that supports knowledge sharing. An organizational culture that supports knowledge sharing allows employees to feel secure and comfortable to share their knowledge, their work, and their expertise. Structure often creates trust in a person, and this encourages them to feel comfortable and excel in the workplace; it makes an otherwise stressful environment manageable.
Management and organization science scholars have also studied how trust is influenced by contracts and how trust interacts with formal mechanisms. Scholars in management and related disciplines have also made a case for the importance of distrust as a related but distinct construct. Similarly scholars have assessed the relationship between monitoring and trust, for example in a trust game context, and in shareholder-management relations.
Since the mid-1990s, organizational research has followed two distinct but nonexclusive paradigms of trust research:
The first distinguishes between two major dimensions of trust: Trust in another can be characterized as cognition-based trust (based on rational calculation) and affect-based trust (based on emotional attachment). For example, trust in an auto repair shop could come in the form of an assessment of the capabilities of the shop to do a good job repairing one's car (cognition-based trust) or of having a longstanding relationship with the shop's owner (affect-based trust).
The second distinguishes between the trustworthiness factors that give rise to trust (i.e., one's perceived ability, benevolence, and integrity) and trust itself.
Together, these paradigms predict how different dimensions of trust form in organizations by demonstrating various trustworthiness attributes.
Systems
In systems, a trusted component has a set of properties that another component can rely on. If A trusts B, a violation in B's properties might compromise A's correct operation. Observe that those properties of B trusted by A might not correspond quantitatively or qualitatively to B's actual properties. This occurs when the designer of the overall system does not consider the relation. Consequently, trust should be placed to the extent of the component's trustworthiness. The trustworthiness of a component is thus, not surprisingly, defined by how well it secures a set of functional and non-functional properties, deriving from its architecture, construction, and environment, and evaluated as appropriate.
Other
In politics, trust in government and in one's own ability to influence it is termed political efficacy.
See also
References
Further reading
Bachmann, Reinhard and Zaheer, Akbar (eds) (2006). Handbook of Trust Research. Cheltenham: Edward Elgar.
Bicchieri, Cristina, Duffy, John and Tolle, Gil (2004). "Trust among strangers", Philosophy of Science 71: 1–34.
Herreros, Francisco (2023). "The State and Trust". Annual Review of Political Science 26(1).
Kelton, Kari; Fleischmann, Kenneth R. & Wallace, William A. (2008). "Trust in Digital Information". Journal of the American Society for Information Science and Technology, 59(3):363–374.
Maister, David H., Green, Charles H. & Galford, Robert M. (2000). The Trusted Advisor. Free Press, New York
Schilke, Oliver; Reimann, Martin; Cook, Karen S. (2021). "Trust in Social Relations". Annual Review of Sociology. 47(1).
External links
Trust at Psychology Today
The Neuroscience of Trust
New Research Determines Who You Can Trust the Most
Trust Building Activities
Trust: Making and Breaking Cooperative Relations, edited by Diego Gambetta
Am I Trustworthy? (1950) Educational video clip
Stony Brook University weekly seminars on the issue of trust in the personal, religious, social, and scientific realms
World Database of Trust Harvey S. James Jr., Ph.D. (Updated August 2007) A variety of definitions of trust are collected and listed.
Interpersonal relationships
Reputation management
Concepts in ethics
Accountability
Social constructionism
Social epistemology
Sociological terminology
Emotions
Moral psychology | Trust (social science) | Technology,Biology | 5,484 |
38,763,918 | https://en.wikipedia.org/wiki/Truncated%20infinite-order%20square%20tiling | In geometry, the truncated infinite-order square tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t{4,∞}.
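As a brief worked check (standard uniform-tiling bookkeeping, not stated explicitly above), truncation replaces each square face by an octagon, and the vertex-angle test confirms the tiling is hyperbolic:

```latex
% Truncating {4,inf} turns each square face into an octagon, giving the
% vertex configuration inf.8.8 (one apeirogon and two octagons per vertex).
\[
  t\{4,\infty\}:\quad \infty.8.8,
  \qquad
  \frac{1}{\infty} + \frac{1}{8} + \frac{1}{8} = \frac{1}{4} < \frac{1}{2},
\]
% so the corner angles of the regular faces sum to more than 2*pi at each
% vertex, and the tiling can only be realized in the hyperbolic plane.
```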
Uniform color
In (*∞44) symmetry this tiling has 3 colors. Bisecting the isosceles triangle domains can double the symmetry to *∞42 symmetry.
Symmetry
The dual of the tiling represents the fundamental domains of (*∞44) orbifold symmetry. From [(∞,4,4)] (*∞44) symmetry, there are 15 small index subgroups (11 unique) obtained by mirror removal and alternation operators. A mirror can be removed if its branch orders are all even, and removing it cuts the neighboring branch orders in half. Removing two mirrors leaves a half-order gyration point where the removed mirrors met. In these images, fundamental domains are alternately colored black and white, and mirrors exist on the boundaries between colors. The symmetry can be doubled to *∞42 by adding a bisecting mirror across the fundamental domains. The subgroup of index 8, [(1+,∞,1+,4,1+,4)] (∞22∞22), is the commutator subgroup of [(∞,4,4)].
Related polyhedra and tiling
See also
Uniform tilings in hyperbolic plane
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
Hyperbolic tilings
Infinite-order tilings
Isogonal tilings
Square tilings
Truncated tilings
Uniform tilings | Truncated infinite-order square tiling | Physics | 352 |
49,127,301 | https://en.wikipedia.org/wiki/MuSIASEM | MuSIASEM or Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism, is a method of accounting used to analyse socio-ecosystems and to simulate possible patterns of development. It is based on maintaining coherence across scales and different dimensions (e.g. economic, demographic, energetic) of quantitative assessments generated using different metrics.
History
MuSIASEM is designed to detect and analyze patterns in the societal use of resources, making a distinction between:
the internal end uses (who is using which resources, how much, how, and why);
the resulting internal environmental pressures associated with the various end uses, allowing an analysis of the impacts they create on the environment; and
the level of externalization through trade of both requirements of additional end uses and resulting environmental pressures that are moved to the social-ecological systems producing the imports.
The ability to integrate quantitative assessments across dimensions and scales makes MuSIASEM particularly suited for different types of sustainability analysis: the nexus between food, energy, water and land uses; urban metabolism; waste metabolism; tourism metabolism; and rural development.
The approach was created around 1997 by Mario Giampietro and Kozo Mayumi, and has been developed since then by the members of the IASTE (Integrated Assessment: Sociology, Technology and the Environment) group at the Institute of Environmental Science and Technology of the Autonomous University of Barcelona and its external collaborators.
The purpose of MuSIASEM is to characterize metabolic patterns of socio-ecological systems (how and why humans use resources and how this use depends on and affects the stability of the ecosystems embedding the society). This integrated approach allows for a quantitative implementation of the DPSIR framework (Drivers, Pressures, States, Impacts and Responses) and application as a decision support tool. Different alternatives of the option space can be checked in terms of feasibility (compatibility with processes outside human control), viability (compatibility with processes under human control) and desirability (compatibility with normative values and institutions).
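As an illustration only (this is not part of the MuSIASEM toolkit; all variable names and threshold values below are hypothetical), the feasibility/viability/desirability check described above can be thought of as a set of constraint tests over different dimensions of a development option:

```python
# Illustrative sketch of checking one option against the three MuSIASEM
# criteria. All names and numbers are hypothetical, for exposition only.

def check_option(option: dict, external_limits: dict, internal_limits: dict,
                 acceptable: set) -> dict:
    """Classify a development option as feasible / viable / desirable."""
    # Feasibility: compatibility with processes outside human control.
    feasible = all(option[k] <= external_limits[k] for k in external_limits)
    # Viability: compatibility with processes under human control.
    viable = all(option[k] <= internal_limits[k] for k in internal_limits)
    # Desirability: compatibility with normative values and institutions.
    desirable = option["label"] in acceptable
    return {"feasible": feasible, "viable": viable, "desirable": desirable}

option = {"label": "irrigated maize", "water_Mm3": 40, "land_kha": 12,
          "energy_PJ": 3, "labour_Mh": 20}
result = check_option(
    option,
    external_limits={"water_Mm3": 50, "land_kha": 15},   # ecological bounds
    internal_limits={"energy_PJ": 5, "labour_Mh": 25},   # societal bounds
    acceptable={"irrigated maize", "agroforestry"},      # normative choice
)
print(result)  # {'feasible': True, 'viable': True, 'desirable': True}
```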
The original version of the accounting scheme has been improved using theoretical concepts from complex systems theory leading to the generation of MuSIASEM version 2.0, tested in several case studies.
Applications
MuSIASEM accounting has been used for the integrated assessment of agricultural systems, biofuels, nuclear power, energetics, sustainability of water use, mining, urban waste management systems, and urban metabolism in developing countries. Moreover, the methodology has been applied to assess societal metabolism at the municipal, regional (rural Laos, Catalonia, China, Europe, the Galapagos Islands), national, and supranational scale. An application of MuSIASEM to the nexus between natural resources is found in the book Resource Accounting for Sustainability: The Nexus between Energy, Food, Water and Land Use. This work has been tested in collaboration with FAO. The Ecuadorian National Secretariat for Development and Planning (SENPLADES) has included the MuSIASEM approach in the training of its personnel. Finally, several master's courses on the application of the approach to energy systems have been developed at various Southern African universities under the Participia project. MuSIASEM has also been applied to the analysis of Shanghai's urban metabolism.
See also
Anthropogenic metabolism
Industrial metabolism
Material flow analysis
Social metabolism
Urban metabolism
References
External links
Institute of Environmental Science and Technology (ICTA) at the Universitat Autònoma de Barcelona.
MAGIC Nexus: an EU H2020 project applying the MuSIASEM approach.
The Nexus between Energy, Food, Land Use, and Water: Application of a Multi-Scale Integrated Approach.
About MuSIASEM rationale and methodology.
The Sustainability Sudoku: Simplified application of the MuSIASEM approach to the energy-food-land nexus for didactic purposes.
The Participia Home Page: Participatory Integrated Assessment of Energy Systems to Promote Energy Access and Efficiency.
Presentation of the MuSIASEM 2.0 approach, August 2020 report from the MAGIC Nexus project.
Scientific models
Ecological economics
Industrial ecology
Natural resources
Energy economics
Water management
Systems ecology
Environmental science
Environmental social science
Systems theory
Water and the environment | MuSIASEM | Chemistry,Engineering,Environmental_science | 833 |
4,754,533 | https://en.wikipedia.org/wiki/Particle%20counter | A particle counter is used for monitoring and diagnosing particle contamination within specific clean media, including air, water, and chemicals. Particle counters are used to support clean manufacturing practices in a variety of industrial applications. Clean manufacturing is required for the production of many electronic components and assemblies, pharmaceutical drug products and medical devices, and industrial technologies such as oil and gas.
Technology
Particle counters function primarily using the principles of light scattering, although other technologies may also be employed. Particle counters based on light scattering use instrumentation comprising a high-intensity light source (a laser), a controlled media flow (air, gas, or liquid), and a highly sensitive light-gathering detector (a photodetector).
Laser optical particle counters employ five major systems:
Lasers and optics: The laser operates at a single wavelength, providing a consistent light source with constant power output to illuminate the particle sampling region.
Controlled flow: The viewing volume is a small chamber illuminated by the laser. The sample medium (air, liquid or gas) is drawn into the viewing volume, the laser passes through the medium, the particles scatter (reflect) light, and a photodetector tallies the scattered light sources (the particles).
Photodetector: The photodetector is an electric device that is sensitive to light, and when particles scatter light, the photodetector observes the flash of light and converts it to an electric signal, or pulse. An amplifier converts the pulses to a proportional control voltage.
Pulse height analyzer (PHA): The pulses from the photodetector are sent to a pulse height analyzer (PHA). The PHA examines the magnitude of each pulse and places its value into an appropriate sizing channel, called a bin. The bins contain data about each pulse, and this data correlates to particle sizes (a code sketch of this binning step follows the list).
Black box: The black box, or support circuitry, looks at the number of pulses in each bin and converts the information into particle data.
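The binning step performed by the pulse height analyzer can be sketched in a few lines. The voltage thresholds and size channels below are invented for illustration; real instruments derive these thresholds from calibration against particles of known size.

```python
import bisect

# Illustrative pulse-height analysis: map photodetector pulse amplitudes
# (volts) to particle-size channels. The threshold and label values here
# are made up for the sketch, not taken from any instrument.
SIZE_THRESHOLDS_V = [0.1, 0.3, 0.9, 2.7]              # pulse-height cut points
BIN_LABELS_UM = ["0.3", "0.5", "1.0", "5.0", "10.0"]  # ">= size" channels

def bin_pulses(pulse_heights):
    """Tally pulses into size channels ('bins') by amplitude."""
    counts = {label: 0 for label in BIN_LABELS_UM}
    for v in pulse_heights:
        # Find the highest threshold the pulse exceeds; that picks the bin.
        idx = bisect.bisect_right(SIZE_THRESHOLDS_V, v)
        counts[BIN_LABELS_UM[idx]] += 1
    return counts

print(bin_pulses([0.05, 0.2, 0.2, 1.5, 3.0]))
# {'0.3': 1, '0.5': 2, '1.0': 0, '5.0': 1, '10.0': 1}
```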
Light obscuration by particles works on the principle where the presence of particles blocks some of the light from the photodetector, typically through either absorbance or light scattering. The photodetector records the obscuration of light and converts this to an electrical signal, this signal is then correlated to a specific sized particle using a PHA as with the scattering description above.
Direct imaging particle counting employs the use of a high-resolution camera and a light source to detect particles. Vision-based particle sizing units obtain two-dimensional images that are analyzed by computer software to obtain particle size measurements; images can be retained and replayed for additional analysis.
A Coulter counter is an apparatus for counting and sizing particles suspended in electrolytes. It is typically used for cellular particles. The Coulter principle, and the Coulter counter that is based on it, is the commercial term for the technique known as resistive pulse sensing or electrical zone sensing.
Detection methods
There are several methods used for detecting and measuring particle size or size distribution — light blocking (obscuration), light scattering, Coulter principle and direct imaging. A high intensity light source is used to illuminate the particle as it passes through the detection chamber.
The light blocking optical particle counter method is typically useful for detecting and sizing particles greater than 1 micrometre in size and is based upon the amount of light a particle blocks when passing through the detection area of the particle counter. This type of technique allows high resolution and reliable measurement.
If light scattering is used, then the redirected light is detected by a photo detector. The light scattering method is capable of detecting smaller-sized particles. This technique is based upon the amount of light that is deflected by a particle passing through the detection area of the particle counter. This deflection is called light scattering. Typical detection sensitivity of the light scattering method is 0.05 micrometre or larger. However, employment of the condensation nuclei counter (CNC) technique would allow a higher detection sensitivity in particle sizes down to nanometre range. A typical application is monitoring of ultrapure water in semiconductor fabrication facilities.
If light blocking (obscuration) is used the loss of light is detected. The amplitude of the light scattered or light blocked is measured and the particle is counted and tabulated into standardized counting bins. The light blocking method is specified for particle counters that are used for counting in hydraulic and lubricating fluids. Particle counters are used here to measure contamination of hydraulic oil, and therefore allow the user to maintain their hydraulic system, reduce breakdowns, schedule maintenance during no or slow work periods, monitor filter performance, etc. Particle counters used for this purpose typically use ISO Standard 4406:1999 as their reporting standard, and ISO 11171 as the calibration standard. Others also in use are NAS 1638 and its successor SAE AS4059D.
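The ISO 4406:1999 reporting scheme maps particle counts per millilitre at three sizes (≥4 µm, ≥6 µm, ≥14 µm) to three scale numbers. The sketch below approximates this mapping with a logarithmic formula; note that the published standard defines the boundaries in a rounded lookup table, so this is an approximation of it rather than the standard's exact table.

```python
import math

# Approximate ISO 4406:1999 coding: each count per millilitre is mapped
# to a scale number R such that the count falls in roughly
# (2**(R-1)/100, 2**R/100]. The standard's table uses rounded boundary
# values, so this formula is an approximation for illustration.

def iso4406_scale(count_per_ml: float) -> int:
    return max(1, math.ceil(math.log2(count_per_ml * 100)))

def iso4406_code(c4: float, c6: float, c14: float) -> str:
    """Build the three-part code from counts at >=4, >=6 and >=14 um."""
    return "/".join(str(iso4406_scale(c)) for c in (c4, c6, c14))

# Roughly 9000 / 2000 / 200 particles per mL reproduces the commonly
# quoted cleanliness level 20/18/15 mentioned later in this article.
print(iso4406_code(9000, 2000, 200))  # -> "20/18/15"
```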
If direct imaging is used, a halogen light illuminates particles from the back within a cell while a high-definition, high-magnification camera records passing particles; a pulsed laser diode can also serve as the source, freezing the particle motion. Recorded video is then analyzed by computer software to measure particle attributes, including size, shape, and color, both in the laboratory and online. The light transmitted through the fluid is imaged onto an electronic camera with macro focusing optics; the particles in the sample block the light, and the resulting silhouettes are imaged onto the digital camera chip. The technique does not measure the light blocked by the particles, but rather measures the area of the particles, functioning like an automated microscope.
Matrices
Applications of particle counters are separated into three primary categories:
Aerosol particle counters
Liquid particle counters
Solid particle counters
Aerosol particle counters
Aerosol particle counters are used to determine air quality by counting and sizing the particles in the air. This information is useful in determining the quantity of particles inside a building or in the ambient air, and in understanding the cleanliness level in a controlled environment. A common controlled environment in which aerosol particle counters are used is the cleanroom. Cleanrooms are used extensively in semiconductor device fabrication, biotechnology, pharmaceuticals, disk drives, aerospace and other fields that are very sensitive to environmental contamination. Cleanrooms have defined particle count limits, and aerosol particle counters are used to test and classify a cleanroom to ensure its performance meets a specific cleanroom classification standard. Several such standards exist. The first, and still the most frequently referenced, is the United States Federal Standard 209E; although it was replaced in 1999 by an international standard, it remains the most widely referenced standard in the world.
There are several direct-reading instruments for measuring aerosol particle emissions. The CPC and differential mobility particle sizers, including the scanning mobility particle sizer and fast mobility particle sizer, can measure aerosol concentration; the diffusion charger and electric low pressure impactor can measure surface area; the size selective static sampler and tapered element oscillating microbalance can measure mass.
For cleanrooms, the replacement standard is ISO 14644-1 and is meant to completely replace Federal Standard 209E. This ISO Standard can be found through the non-profit organization, Institute of Environmental Sciences and Technology (IEST). Each of these standards represents the maximum allowable number of particles in a unit of air. The typical unit is either cubic feet or cubic meters. The particle counts are always listed as cumulative.
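The class limits in ISO 14644-1 follow a simple published formula relating class number and particle size. The sketch below evaluates it for the familiar ISO class 5 at 0.5 µm case; the exponent 2.08 is from the standard, while the rounding conventions for the reported table values are also defined there.

```python
# ISO 14644-1 class concentration limit (particles per cubic metre) for
# particles of size D micrometres and larger, in a class N cleanroom:
#     C_N(D) = 10**N * (0.1 / D) ** 2.08

def iso14644_limit(n_class: float, d_um: float) -> float:
    return 10 ** n_class * (0.1 / d_um) ** 2.08

# An ISO class 5 room at the 0.5 um size:
print(round(iso14644_limit(5, 0.5)))
# -> 3517; the standard's published table rounds this to 3,520/m^3.
```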
Liquid particle counters
Liquid particle counters are used to determine the quality of the liquid passing through them. The size and number of particles can determine if the liquid is clean enough to be used for the designed application. Liquid particle counters can be used to test the quality of drinking water or cleaning solutions, or the cleanliness of power generation equipment, manufacturing parts, or injectable drugs.
Liquid particle counters are also used to determine the cleanliness level of hydraulic fluids in various systems (engines, gears and compressors), since 75–80% of hydraulic breakdowns can be attributed to contamination. There are various types: units installed on the equipment, laboratory units operated as part of an oil analysis programme, and portable units that can be transported to site, e.g., a construction site, and then used on the machine, e.g., a bulldozer, to determine fluid cleanliness. By determining and monitoring these levels, and following a proactive or predictive maintenance program, the user can reduce hydraulic failures, increase uptime and machine availability, and reduce oil consumption. Particle counters can also be used to verify that hydraulic fluids have been filtered to acceptable or target cleanliness levels. Various standards are in use in the hydraulic industry, of which ISO 4406:1999, NAS 1638 and SAE AS 4059 are probably the most common.
A typical hydraulic oil cleanliness level to ISO 4406 is 20/18/15.
Solid particle counters
Solid particle counters are used to measure dry particles for various industrial applications, for example detecting the particle sizes produced by a rock crusher in a mining quarry. Sieves are the standard instruments used to measure dry particle size. Vision-based systems are also used, and allow quick and accurate sizing of dry particles.
Specialized types
Remote particle counters
Small particle counters are used to monitor a fixed location, typically inside a cleanroom or mini-environment, to continuously monitor particle levels. These smaller counters typically do not have a local display and are connected to a network of other particle counters and other types of sensors to monitor the overall cleanroom performance. This network of sensors is typically connected to a facility monitoring system (FMS), data acquisition system or programmable logic controller.
This computer based system can integrate into a database, alarming and may have e-mail capability to notify facility or process personnel when conditions inside the cleanroom have exceeded predetermined environmental limits. Remote particle counters are available in several different configurations, from single channel to models that detect up to 8 channels simultaneously. Remote particle counters can have a particle size detection range from 0.1 to 100 micrometres and may feature one of a variety of output options including 4-20 mA, RS-485 Modbus, Ethernet and pulse output.
Manifold particle counters
A modified aerosol portable particle counter attached to a sequencing sampling system. The sequencing sampling system allows one particle counter to sample multiple locations, via a series of tubes drawing air from up to 32 locations inside a cleanroom. This approach is typically less expensive than deploying remote particle counters; each tube is monitored in sequence.
Hand-held particle counter
A hand-held particle counter is a small, self-contained device that is easily transported and used, designed for indoor air quality (IAQ) investigations. Though they have lower flow rates of 0.1 ft³/min (0.2 m³/h), compared with 1 ft³/min (2 m³/h) for larger portables, hand-helds are useful for most of the same applications; however, longer sample times may be required when performing cleanroom certification and testing, and hand-held counters are not generally recommended for cleanrooms. Most hand-held particle counters have direct-mount isokinetic sampling probes. One may use a barbed probe on a short piece of sample tubing, but it is recommended that the length of the tubing not exceed the specified maximum, due to loss of larger particles in the sample tubing.
Applications
Particle counters are used in applications where contamination control in manufacturing is required. Examples of these industries include: semiconductor manufacturing; electronic component manufacture and assembly; photonic and optics manufacture and assembly; aerospace; pharmaceutical and biotech production; medical device manufacturing; cosmetics production; and food and beverage production. They are also used in industrial applications such as oil and gas, hydraulic fluids and automotive assembly and painting.
A primary use of aerosol particle counters is in the determination of contamination levels within a cleanroom or clean containment device. Cleanrooms and clean containment devices maintain low levels of particulate-free air through the use of filters and are classified according to the number of particles permitted; the primary standard for cleanroom or clean air devices is ISO 14644-1, other local standards may also exist such as FED-STD-209E.
Electronics
Electronics manufacturing and electronics assembly require stringent environmental controls, especially where processes are performed within reactive conditions. Yields are reduced when components are contaminated with particles and trace elements. Particle counters demonstrate that these controls are effective and that the production environments are optimized for the quality required.
Depending on the application and size of the particles of interest, different instrumentation is required.
General environments
Air particle monitoring is required to ensure the manufacturing environment is free from contamination levels that will cause defects. It is performed either for entire cleanroom areas (ballrooms, bays and chases) or for specific local controlled environments (tools and mini-environments).
Where large areas are to be monitored, a manifold can be used: a device that connects many sample locations, via lengths of sample tubing, to a central stepper device and a central particle counter; it moves sequentially between tube locations, taking a reading from each. Smaller spaces can be monitored using small point-of-use particle sensors, which are dedicated to sampling at a single location and rely on either a central vacuum supply or an internal sample pump. The contaminant particle size and frequency of measurement are factors in determining which method is most suitable.
Liquid systems
There are two primary liquid applications in the electronics manufacturing processes, fabrication process chemicals and ultra clean water for cleaning and rinsing.
Process chemicals are used in semiconductor and other critical product processing steps (chemical etch, mask removal and chemical mechanical polishing). Particle monitoring in process chemicals, from manufacture through to the point-of-use, is extremely important for these clean processes to be controlled to ensure yield and throughput quality. The use of on-line continuous particle monitoring enables both process engineers and facility engineers to respond rapidly to changes in chemical purity levels throughout the chemical distribution process.
Ultra-Pure Water (UPW) / DI Water is used for critical cleaning and rinsing steps, UPW processes must maintain very low particle concentration levels, typically measured at the 20 nm level. UPW is also commonly used for chemical dilution and flushing steps within chemical blending and distribution systems. The use of on-line continuous particle monitoring, either at the final water purification step or at the wafer point-of-use, provides process engineers the critical particle data needed to effectively manage the water purification and wafer cleaning processes.
Gas systems
High purity gases are critical to advanced component manufacturing. Products such as integrated circuits require many process gases for etching, deposition, oxidation, doping, and inert overlaying applications. Impurities in these gas streams can create failures in critical processes and impact yield and throughput. Gases that are explosive or hazardous are tested at pressure using particle counters contained within an inert-gas, pressurized enclosure. Non-reactive gases can be depressurized using a clean-path gas diffusion device and tested using a portable particle counter.
Life sciences
Life Science applications include industries such as pharmaceutical manufacturing, biotech manufacturing, compounding facilities, medical devices, nutraceuticals and food processing; they are those industries that create products to improve the lives of living organisms. Manufacturing environments should remove or reduce contaminants to minimize the risk of finished product contamination, which may lead to chemical reactions within the product or undesired quality of the product.
The industry is controlled through government oversight for the formulation, manufacture and release of all product, and controls are established and monitored to ensure production is maintained to the agreed quality criteria. Good Manufacturing Practices (GMP) ensures that product is manufactured to national and international standards by organizations such as the Food and Drug Administration (FDA), European Medicine Agency (EMA) and the World Health Organization (WHO), other national governmental bodies also regulate the manufacture of product for their countries.
General environments
Environments for the manufacture of drug products require controls to be used to ensure that total particulate and microbial aerosol burden are maintained at suitable levels to reduce risk of contamination to product. Environmental design considers the contamination in various process steps, including: raw material purification, formulation of product, final filling and packaging. Depending on the type of product being manufactured the level of clean controlled space is initially determined using the cleanroom classification standards, the higher the risk of contamination the cleaner the environment, e.g., aseptic filling is performed in an ISO 5 controlled environment, whereas terminally sterilized product is finished in an ISO 7 area (prior to final sterilization).
The classification of risk also determines the type of instrument used. General monitoring on a periodic basis uses portable equipment, moved from location to location as determined by a risk assessment. More risk-critical production is performed in machines that isolate the process environment from the general environment; the removal of personnel from the direct area, using isolators or RABS, increases confidence of control. These machines are monitored continuously using point-of-sample instruments, giving continuous feedback on the quality of the environment and on any contamination events in real time. The primary concern of contamination control is the risk of adverse effects on the end user; a demonstrated state of control also results in increased production. The general environments are also monitored for microbial contaminants using traditional techniques such as settle plates and volumetric air samplers.
Liquid systems
Liquid systems are used primarily in a laboratory to demonstrate the absence of particles in finished liquid products. Any particles present may be a contaminant or undesired agglomerations of insoluble product. Liquids for injection have regulated limits for maximum particle concentrations, standards contained within the United States Pharmacopeia (USP), European Pharmacopeia (EP) and Japanese Pharmacopeia (JP) define these limits.
Gas systems
Compressed gases used in formulation, conveying and overlaying are required to meet the same standards for GMP compliance as all environmental air quality and should be tested at the point of use. Particle counters fitted with gas pressure diffusion devices reduce line pressure to atmospheric without impacting the flow path of particles within the airstream; the gas is then tested at atmospheric pressure.
Industrial
Other industries also use particle counters to demonstrate either the cleanliness of manufacturing environments or the quality of finished product, reducing the need for additional cleaning processes.
Automotive
Painting automobiles in clean environments reduces the need to rework defects in paint finishes. Particle counters located within the clean areas give continuous feedback to quality engineers, ensuring clean conditions are maintained. Engines built to exacting tolerances are cleaned and assembled in clean areas, using cleaning agents verified using particle counters.
Hydraulic fluids
Hydraulic fluids and oils must meet specific ISO 4406 standards. The applications of hydraulic fluids vary from aerospace and turbine cooling and lubrication to heavy machinery. The build-up and presence of particles can cause failure of bearings, pumps and seals.
Water
Water is a universal product with an unlimited number of applications and can be contaminated due to intentional interactions with processes or unintentional and seasonal variances. Monitoring the quality of water using particle counters, either by spot checking at a sample location or continuously monitoring a distribution system, allows quality engineers to react to changes in the processes where water is being used.
Particle counters are used to determine the filtration rates, chemical addition requirements, flushing intervals, sedimentation information, cooling flowrates and other process variables that allow for continuous feedback, ensuring a consistent quality of water to a process.
Environmental
Particulates exist in the atmosphere at concentrations that can be deleterious to health, and have been shown to be a factor in many respiratory illnesses such as asthma. Types of atmospheric particles include suspended particulate matter; thoracic and respirable particles; inhalable coarse particles, designated PM10, which are coarse particles with a diameter of 10 micrometers (μm) or less; fine particles, designated PM2.5, with a diameter of 2.5 μm or less; ultrafine particles; and soot.
Particle counters are used to monitor atmospheric contamination levels of these suspended particulates, allowing for the reduction of particles associated with a specific source (e.g. combustion) or technology (e.g. power generation). The modelling of particulate data from particle counters distributed globally gives trend information to the state of quality of air and its migration.
See also
Particle mass analyser
Particulate matter sampler
Aerosol mass spectrometry
References
External links
www.iest.org — Institute of Environmental Sciences and Technology
Meteorological instrumentation and equipment
Counting instruments
Aerosols
Occupational safety and health | Particle counter | Chemistry,Mathematics,Technology,Engineering | 4,317 |
28,958,968 | https://en.wikipedia.org/wiki/Aptina | Aptina Imaging Corporation was a company that sold CMOS imaging products. Its CMOS sensors were used in the Nikon V1 (10.1 MP, CX format, 13.2 × 8.8 mm), Nikon J1, and Nikon V2. By 2009, Aptina had a 16% share of the CMOS image sensor market, with revenue estimated at $671 million. The company was acquired in 2014 by ON Semiconductor.
History
Aptina Imaging was created as a spin-off of Micron Technology's Image Sensor Division in March 2008. Aptina remained a division within Micron until July 2009, when it became an independent, privately held company, partially sold to a group including TPG and Riverwood Capital. ON Semiconductor Corporation completed the acquisition of Aptina Imaging in August 2014.
Milestones
2014 – ON Semiconductor completes acquisition of Aptina Imaging
2014 – Aptina Imaging bought color filter array processing and imager probe assets from Micron Technology; close to 100 Micron employees joined Aptina's manufacturing facility in Nampa, Idaho, on Aug. 4
2011 – Shipped 2 billionth sensor
2009 – Aptina spins out as an independent privately held company
2008 – Shipped 1 billionth sensor
2008 – Micron Technology launches Aptina: a CMOS image sensor division
2006 – Micron Imaging Group acquires Avago Technologies' image sensor business
1992–1995 – JPL team invented CMOS active pixel sensor technology
Products
The predecessors of Aptina's products were the CMOS image sensors of Photobit and Micron Technology. Photobit was founded in 1995 and acquired by Micron in 2001, which started selling its own image sensors a year later. The first commercially available sensors from Photobit were the PB-159 in 1998 and the PB-100 in 1999. Micron showcased a 1.75 μm pixel CMOS sensor in 2005 and launched it in 2007. Micron also developed and built a prototype of a 1.4 μm pixel CMOS sensor by 2007.
The Nikon 1 series used Aptina sensors with dual conversion gain sensors, allowing users to choose from a mode with high dynamic range (DR) but low ISO, and a low light mode with low read noise but also less DR. In 2014 the company started offering a 1-Inch 4K Image Sensor for security and surveillance cameras.
Awards
2011 – AET (China) Best Product Award Winner: AR0331
2010 – Winner, EDN Innovation Award: MT9H004
2009 – Finalist, EDN Innovation Award: MT9M033
2008 – Takayanagi Award: Presented to Dr. Junichi Nakamura
2008 – Best Supplier Award: Foxconn
References
External links
Aptina web-site (with ON Semiconductor)
Aptina Corporate Fact Sheet
Semiconductor companies of the United States
Equipment semiconductor companies
Private equity portfolio companies
2014 mergers and acquisitions
Micron Technology | Aptina | Engineering | 587 |
28,893,297 | https://en.wikipedia.org/wiki/Aminoacetonitrile | Aminoacetonitrile is the organic compound with the formula . The compound is a colorless liquid. It is unstable at room temperature, owing to the incompatibility of the amine nucleophile and the nitrile electrophile. For this reason it is usually encountered as the chloride and bisulfate salts of the ammonium derivative, i.e., [NCCH2NH3]+Cl− and [NCCH2NH3]+HSO4−.
Production and applications
Industrially, aminoacetonitrile is produced from glycolonitrile by reaction with ammonia:
HOCH2CN + NH3 → H2NCH2CN + H2O
The aminoacetonitrile can be hydrolysed to give glycine:
H2NCH2CN + 2 H2O → H2NCH2COOH + NH3
Being bifunctional, it is useful in the synthesis of diverse nitrogen-containing heterocycles.
Aminoacetonitrile derivatives are useful anthelmintics. They act as nematode-specific acetylcholine (ACh) agonists, causing spastic paralysis and rapid expulsion from the host.
Occurrence in the interstellar medium
Using radio astronomy, aminoacetonitrile was discovered in the Large Molecule Heimat, a giant gas cloud near the Galactic Center in the constellation Sagittarius. This discovery is significant to the debate on whether glycine exists widely in the universe.
References
External links
Property data at the National Institute of Standards and Technology (NIST)
Nitriles
Amines | Aminoacetonitrile | Chemistry | 309 |
223,430 | https://en.wikipedia.org/wiki/Monito%20del%20monte | The monito del monte (Dromiciops gliroides), or colocolo opossum, is a diminutive species of marsupial native only to south-western South America (Argentina and Chile). It is the only extant species in the ancient order Microbiotheria, and the sole New World representative of the superorder Australidelphia, being more closely related to Australian marsupials than to other American marsupials. The species is nocturnal and arboreal, and lives in thickets of South American mountain bamboo in the Valdivian temperate forests of the southern Andes, aided by its partially prehensile tail. It consumes an omnivorous diet based on insects and fruit.
Taxonomy and etymology
Dromiciops gliroides is the sole extant member of the order Microbiotheria. It was first described by British zoologist Oldfield Thomas in 1894. The generic name Dromiciops is based on the resemblance of the monito del monte to the eastern pygmy possum (Cercartetus nanus), one of the synonyms of which is Dromicia nana. The specific name gliroides is a combination of the Latin glis, gliris ("dormouse", more generally "rodent") and Greek oides ("similar to"). The name australis in a synonym (D. australis) refers to the southern distribution of the animal. The common name monito del monte is Spanish for "little monkey of the bush".
In his 1943 Mammals of Chile, American zoologist Wilfred Hudson Osgood identified two subspecies of the monito del monte:
Dromiciops gliroides australis F. Philippi, 1893: It occurs in the Valdivian temperate rain forest in southcentral Chile.
Dromiciops gliroides gliroides Thomas, 1894: It occurs in the northeastern Chiloé Island.
Phylogeny and biogeography
South American marsupials have long been suspected to be ancestral to those of Australia, consistent with the fact that the two continents were connected via Antarctica in the early Cenozoic. Australia's earliest known marsupial is Djarthia, a primitive mouse-like animal that lived in the early Eocene about 55 million years ago (mya). Djarthia had been identified as the earliest known australidelphian, and this research suggested that the monito del monte was the last of a clade that included Djarthia. This relationship suggests that the ancestors of the monito del monte might have reached South America by back-migration from Australia. The time of divergence between the monito del monte and Australian marsupials was estimated to have been 46 mya.
Dromiciops is thought to have evolved from members of the genus Microbiotherium, known from the early Miocene of South America, with some authors considering the genera indistinguishable. All other genera, like Pachybiotherium, had become extinct by the late Miocene.
However, in 2010, analysis of retrotransposon insertion sites in the nuclear DNA of a variety of marsupials, while confirming the placement of the monito del monte in Australidelphia, also clarified that its lineage is the most basal of that superorder. The study further confirmed that the most basal of all marsupial orders are the other two South American lineages (Didelphimorphia and Paucituberculata, with the former probably branching first). This conclusion indicates that Australidelphia arose in South America (along with the ancestors of all other living marsupials), and probably reached Australia in a single dispersal event after Microbiotheria split off. Fossils of another Eocene australidelphian, the microbiotherian Woodburnodon casei, have been described from the Antarctic Peninsula, and fossils of a related early Eocene woodburnodontid have been found in Patagonia.
Habitat
Monitos del monte live in the dense forests of highland Argentina and Chile, mainly in trees, where they construct spherical nests of water-resistant colihue leaves. These leaves are then lined with moss or grass, and placed in well-protected areas of the tree, such as underbrush, tree cavities, or fallen timber. The nests are sometimes covered with gray moss as a form of camouflage. These nests provide the monito del monte with some protection from cold, both when it is active and when it hibernates.
Fragmentation of Valdivian temperate rainforests into non-contiguous areas is known to reduce the abundance of monitos del monte in a given area, but has little or no impact on whether it occurs in an area or not.
Morphology
Monitos del monte are small marsupials that look like mice. Dromiciops has the same dental formula as didelphids, 5.1.3.4 / 4.1.3.4, a total of 50 teeth. Their size ranges from . They have short and dense fur that is primarily brown-gray with patches of white at their shoulders and back, and their underside is more of a cream or light gray color. Monitos del monte also have distinct black rings around their eyes. Their small furred ears are well-rounded and their rostrums are short. The head-to-body length is around , and their tail length is between . Their tails are somewhat prehensile and mostly furred, with the exception of of the underside. The naked underside of their tails may contribute to increased friction when the mammal is on a tree. The base of their tail also functions as a fat storage organ, which they use during winter hibernation. In a week, monitos del monte can store enough fat to double their body size.
Sexual dimorphism
At the end of the summer, female monitos del monte tend to be larger and heavier than males. The tails of the sexes also vary in size during this time; females have a thicker tail, which is where they store fat; the difference suggests that females need more energy than males during hibernation. The sexual dimorphism is only seen during this time and not year-round.
Reproduction
Monitos del monte have a monogamous mating system. The females have a well-formed, fur-lined pouch containing four teats. They normally reproduce in the spring once a year and can have a litter size varying from one to five. They can feed a maximum of four offspring, so if there are five young, one will not survive. When the young are mature enough to leave the pouch, at approximately five months, they are nursed in a distinctive nest. They are then carried on the mother's back. The young remain in association with the mother after weaning. Males and females both reach sexual maturity after two years.
Habits
The monito del monte is adapted to arboreal life; its tail and paws are prehensile. It is largely nocturnal and, depending on the ambient and internal temperature, and on the availability of food, it spends much of the day in a state of torpor. Such behaviour enables it to survive periods of extreme weather and food shortage, conserving energy instead of foraging to no effect.
The animal covers its nest with moss for concealment, and for insulation and protection from bad weather.
Diet
The monito del monte depends on consuming both insects and fruit, with either component individually being nutritionally unbalanced. Fruit consumed comes from 16 species of plant, with the mistletoe species Tristerix corymbosus being a preferred source of fruit. A study performed in the temperate forests of southern Argentina showed a mutualistic seed dispersal relationship between D. gliroides and Tristerix corymbosus. The monito del monte is the sole dispersal agent for this plant, and without it the plant would likely become extinct. The monito del monte eats the fruit of T. corymbosus, and germination takes place in the gut.
Conservation
In recent years the number of Dromiciops has declined, and the species is now classified as "near threatened". Many factors contribute to the decline.
The monito del monte is not the only organism which will be affected if it becomes endangered. Dromiciops illustrate parasite-host specificity with the tick Ixodes neuquenensis. This tick can only be found on the monito del monte, so it depends on the survival of this nearly endangered mammal. T. corymbosus also depends on the survival of this species, because without the seed dispersal agency of the monito del monte, it would not be able to reproduce.
Currently, there are minimal conservation efforts. Ecological studies are being conducted on Chiloé Island that might help future conservation efforts. Dromiciops has been found in the Los Ruiles National Reserve and the Valdivian Coastal Reserve, which are protected areas in Chile, and in the Nothofagus forest of Parque Nacional Los Alerces, Chubut, southern Argentina.
References
Sources
Microbiotheriidae
Mammals described in 1894
EDGE species
Marsupials of Argentina
Marsupials of Chile
Mammals of Patagonia
Mammals of the Andes
Taxa named by Oldfield Thomas
Fauna of the Valdivian temperate forests | Monito del monte | Biology | 1,903 |
39,374,174 | https://en.wikipedia.org/wiki/Fermat%27s%20and%20energy%20variation%20principles%20in%20field%20theory | In general relativity, light is assumed to propagate in a vacuum along a null geodesic in a pseudo-Riemannian manifold. Besides the geodesic principle in classical field theory, there also exists Fermat's principle for stationary gravitational fields.
Fermat's principle
In the case of a conformally stationary spacetime with coordinates (t, x^1, x^2, x^3), a Fermat metric takes the form
where the conformal factor depends on time and space coordinates and does not affect the lightlike geodesics apart from their parametrization.
Fermat's principle for a pseudo-Riemannian manifold states that the light ray path between two fixed points corresponds to a stationary action,
where the parameter ranges over an interval and varies along the curve between the fixed endpoints.
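For orientation, a standard way these objects are written in the literature on conformally stationary spacetimes is sketched below; the symbols f, φ_i and ĝ_ik are notational assumptions for this sketch rather than quotations from the article:

    ds^2 = e^{2f(t,x)} \left[ -\left( dt - \phi_i(x)\,dx^i \right)^2 + \hat{g}_{ik}(x)\,dx^i\,dx^k \right]
    \delta \int_{\lambda_1}^{\lambda_2} \left( \phi_i \frac{dx^i}{d\lambda} + \sqrt{ \hat{g}_{ik} \frac{dx^i}{d\lambda}\frac{dx^k}{d\lambda} } \right) d\lambda = 0

Here ĝ_ik is the Fermat (optical) metric, e^{2f} is the conformal factor, the indices i, k run over 1, 2, 3, and the stationary value of the integral is the arrival time of the light ray.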
Principle of stationary integral of energy
In the principle of the stationary integral of energy for a light-like particle's motion, the pseudo-Riemannian metric with coefficients is defined by a transformation
With a time coordinate and space coordinates with indices k, q = 1, 2, 3, the line element is written in the form
where a certain quantity, assumed equal to 1, appears. Solving the light-like interval equation under this condition gives two solutions
where the elements of the four-velocity appear. One of the solutions, in accordance with the definitions made, is .
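As a sketch under the usual conventions, the two solutions arise from treating the null condition g_{μν} (dx^μ/dλ)(dx^ν/dλ) = 0 as a quadratic equation in dx^0/dλ:

    \frac{dx^0}{d\lambda} = \frac{ -g_{0k}\frac{dx^k}{d\lambda} \pm \sqrt{ \left( g_{0k}g_{0q} - g_{00}g_{kq} \right) \frac{dx^k}{d\lambda}\frac{dx^q}{d\lambda} } }{ g_{00} }

with k, q = 1, 2, 3; the two signs of the square root give the two solutions referred to above.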
With these definitions the energy takes the form
In both cases, for the freely moving particle, the Lagrangian is
Its partial derivatives give the canonical momenta
and the forces
The momenta satisfy the energy condition for a closed system,
which means that this quantity is the energy of the system that combines the light-like particle and the gravitational field.
The standard variational procedure according to Hamilton's principle is applied to the action,
which is the integral of energy. Stationary action is conditional upon vanishing variational derivatives and leads to the Euler–Lagrange equations,
which are rewritten in the form
After substitution of the canonical momenta and forces, they yield the equations of motion of the light-like particle in free space,
and
where the Christoffel symbols of the first kind appear and the indices take values 0, 1, 2, 3.
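In standard notation (assumed here for illustration), the Euler–Lagrange equations and the resulting equations of motion take the form

    \frac{d}{d\lambda}\frac{\partial L}{\partial \dot{x}^\mu} - \frac{\partial L}{\partial x^\mu} = 0
    g_{\mu\nu}\,\ddot{x}^\nu + \Gamma_{\mu,\nu\rho}\,\dot{x}^\nu \dot{x}^\rho = 0, \qquad \Gamma_{\mu,\nu\rho} = \tfrac{1}{2}\left( \partial_\nu g_{\mu\rho} + \partial_\rho g_{\mu\nu} - \partial_\mu g_{\nu\rho} \right)

Raising the free index with the inverse metric g^{σμ} turns the second line into the familiar geodesic equation \ddot{x}^\sigma + \Gamma^\sigma_{\nu\rho}\,\dot{x}^\nu\dot{x}^\rho = 0.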
The energy integral variation and Fermat principles give identical curves for light in stationary space-times.
Generalized Fermat's principle
In the generalized Fermat's principle the time is used as a functional and, at the same time, as a variable. Pontryagin's minimum principle of optimal control theory is applied to obtain an effective Hamiltonian for the light-like particle's motion in a curved spacetime, and it is shown that the curves obtained are null geodesics.
The stationary energy integral for a light-like particle in a gravity field and the generalized Fermat principle give identical velocities. The virtual displacements of coordinates keep the path of the light-like particle null in the pseudo-Riemannian space-time, i.e. they do not lead to a local violation of Lorentz invariance, and they correspond to the variational principles of mechanics. The equivalence of the solutions produced by the generalized Fermat principle to geodesics means that using the latter also yields geodesics. The stationary energy integral principle gives a system of equations that has one equation more. This makes it possible to uniquely determine the canonical momenta of the particle and the forces acting on it in a given reference frame.
Euler–Lagrange equations in contravariant form
The equations
can be transformed into a contravariant form
where the second term in the left part is the change in the energy and momentum transmitted to the gravitational field
when the particle moves in it. The force vector for the principle of the stationary integral of energy is written in the form
In general relativity, the energy and momentum of a particle are ordinarily associated with a contravariant energy-momentum vector . The quantities do not form a tensor. However, for the photon in the Newtonian limit of the Schwarzschild field described by the metric in isotropic coordinates, they correspond to its passive gravitational mass, equal to twice the rest mass of a massive particle of equivalent energy. This is consistent with the result of Tolman, Ehrenfest and Podolsky for the active gravitational mass of the photon in the case of interaction between a directed flow of radiation and a massive particle, which was obtained by solving the Einstein–Maxwell equations.
After replacing the affine parameter
the expression for the momenta turns out to be
where the 4-velocity is defined as . The equations with contravariant momenta
are rewritten as follows
These equations are identical in form to the ones obtained from the Euler-Lagrange equations with Lagrangian by raising the indices. In turn, these equations are identical to the geodesic equations, which confirms that the solutions given by the principle of stationary integral of energy are geodesic. The quantities
and
appear as tensors for linearized metrics.
See also
Fermat's principle
Variational methods in general relativity
References
General relativity
Variational principles | Fermat's and energy variation principles in field theory | Physics,Mathematics | 979 |
53,658,651 | https://en.wikipedia.org/wiki/Magnetic%20drug%20delivery | Magnetic nanoparticle-based drug delivery is a means in which magnetic particles such as iron oxide nanoparticles are a component of a delivery vehicle for magnetic drug delivery, due to the simplicity with which the particles can be drawn to (external) magnetopuissant targets. Magnetic nanoparticles can impart imaging and controlled release capabilities to drug delivery materials such as micelles, liposomes, and polymers.
Synopsis
Molecular magnets (single-molecule magnets) are a platform that incorporates insoluble (toxic) drugs into biocompatible carrier materials without adding magnetic iron oxide nanoparticles, which might adversely affect patients susceptible to iron overdose. The drawbacks of conventional magnetic drug delivery methods can be overcome by switching from typical iron oxide nanoparticles to ones based on molecular magnets, such as the Fe(salen)-based "anticancer nanomagnet" with demonstrated cancer-fighting ability. However, insoluble drugs including Fe(salen) also have some inherent drawbacks, such as poor water solubility, loss of magnetic activity in solvents, and potential cytotoxicity when accumulated in tissues and organs.
As an alternative synthetic method of magnetic drug delivery, a "non-iron oxide"-based smart delivery platform has been very recently developed by self-assembly of the Fe(salen) drugs into nano-cargoes encapsulated by a smart polymer, exhibiting bio-safe multifunctional magnetic capabilities, including MRI, magnetic field- and pH-responsive heat-releasing hyperthermia effects, and controlled release.
References
Drug delivery devices
Magnetic devices | Magnetic drug delivery | Chemistry | 338 |
26,831,746 | https://en.wikipedia.org/wiki/Metaterol | Metaterol (), also known as isofenefrine, isopropylnoradrianol, and 3,β-dihydroxy-N-isopropylphenethylamine, is a sympathomimetic and bronchodilator of the phenethylamine family that was never marketed. It is structurally related to norfenefrine, phenylephrine, and etilefrine.
References
Abandoned drugs
Bronchodilators
Isopropyl compounds
3-Hydroxyphenyl compounds
Phenylethanolamines
Sympathomimetics | Metaterol | Chemistry | 128 |
72,718 | https://en.wikipedia.org/wiki/Prototype | A prototype is an early sample, model, or release of a product built to test a concept or process. It is a term used in a variety of contexts, including semantics, design, electronics, and software programming. A prototype is generally used to evaluate a new design to enhance precision by system analysts and users. Prototyping serves to provide specifications for a real, working system rather than a theoretical one. Physical prototyping has a long history, and paper prototyping and virtual prototyping now extensively complement it. In some design workflow models, creating a prototype (a process sometimes called materialization) is the step between the formalization and the evaluation of an idea.
A prototype can also mean a typical example of something such as in the use of the derivation 'prototypical'. This is a useful term in identifying objects, behaviours and concepts which are considered the accepted norm and is analogous with terms such as stereotypes and archetypes.
The word prototype derives from the Greek πρωτότυπον prototypon, "primitive form", neutral of πρωτότυπος prototypos, "original, primitive", from πρῶτος protos, "first" and τύπος typos, "impression" (originally in the sense of a mark left by a blow, then by a stamp struck by a die (note "typewriter"); by implication a scar or mark; by analogy a shape i.e. a statue, (figuratively) style, or resemblance; a model for imitation or illustrative example—note "typical").
Types
Prototypes explore different aspects of an intended design:
A proof-of-principle prototype serves to verify some key functional aspects of the intended design, but usually does not have all the functionality of the final product.
A working prototype represents all or nearly all of the functionality of the final product.
A visual prototype represents the size and appearance, but not the functionality, of the intended design. A form study prototype is a preliminary type of visual prototype in which the geometric features of a design are emphasized, with less concern for color, texture, or other aspects of the final appearance.
A user experience prototype represents enough of the appearance and function of the product that it can be used for user research.
A functional prototype captures both function and appearance of the intended design, though it may be created with different techniques and even different scale from final design.
A paper prototype is a printed or hand-drawn representation of the user interface of a software product. Such prototypes are commonly used for early testing of a software design, and can be part of a software walkthrough to confirm design decisions before more costly levels of design effort are expended.
Differences in creating a prototype vs. a final product
In general, the creation of prototypes will differ from creation of the final product in some fundamental ways:
Material: The materials that will be used in a final product may be expensive or difficult to fabricate, so prototypes may be made from different materials than the final product. In some cases, the final production materials may still be undergoing development themselves and not yet available for use in a prototype.
Process: Mass-production processes are often unsuitable for making a small number of parts, so prototypes may be made using different fabrication processes than the final product. For example, a final product that will be made by plastic injection molding will require expensive custom tooling, so a prototype for this product may be fabricated by machining or stereolithography instead. Differences in fabrication process may lead to differences in the appearance of the prototype as compared to the final product.
Verification: The final product may be subject to a number of quality assurance tests to verify conformance with drawings or specifications. These tests may involve custom inspection fixtures, statistical sampling methods, and other techniques appropriate for ongoing production of a large quantity of the final product. Prototypes are generally made with much closer individual inspection and the assumption that some adjustment or rework will be part of the fabrication process. Prototypes may also be exempted from some requirements that will apply to the final product.
Engineers and prototype specialists attempt to minimize the impact of these differences on the intended role for the prototype. For example, if a visual prototype is not able to use the same materials as the final product, they will attempt to substitute materials with properties that closely simulate the intended final materials.
Characteristics and limitations of prototypes
Engineers and prototyping specialists seek to understand the limitations of prototypes to exactly simulate the characteristics of their intended design.
Prototypes represent some compromise from the final production design. This is due to the skill and choices of the designer(s), and the inevitable inherent limitations of a prototype. Due to differences in materials, processes and design fidelity, it is possible that a prototype may fail to perform acceptably although the production design may have been sound. Conversely, prototypes may perform acceptably but the production design and outcome may prove unsuccessful.
In general, it can be expected that individual prototype costs will be substantially greater than the final production costs due to inefficiencies in materials and processes. Prototypes are also used to revise the design for the purposes of reducing costs through optimization and refinement.
It is possible to use prototype testing to reduce the risk that a design may not perform as intended, however prototypes generally cannot eliminate all risk.
Building the full design is often expensive and can be time-consuming, especially when repeated several times—building the full design, figuring out what the problems are and how to solve them, then building another full design. As an alternative, rapid prototyping or rapid application development techniques are used for the initial prototypes, which implement part, but not all, of the complete design. This allows designers and manufacturers to rapidly and inexpensively test the parts of the design that are most likely to have problems, solve those problems, and then build the full design.
Engineering sciences
In technology research, a technology demonstrator is a prototype serving as proof-of-concept and demonstration model for a new technology or future product, proving its viability and illustrating conceivable applications.
In large development projects, a testbed is a platform and prototype development environment for rigorous experimentation and testing of new technologies, components, scientific theories and computational tools.
With recent advances in computer modeling it is becoming practical to eliminate the creation of a physical prototype (except possibly at greatly reduced scales for promotional purposes), instead modeling all aspects of the final product as a computer model. An example of such a development can be seen in Boeing 787 Dreamliner, in which the first full sized physical realization is made on the series production line. Computer modeling is now being extensively used in automotive design, both for form (in the styling and aerodynamics of the vehicle) and in function—especially for improving vehicle crashworthiness and in weight reduction to improve mileage.
Mechanical and electrical engineering
The most common use of the word prototype is a functional, although experimental, version of a non-military machine (e.g., automobiles, domestic appliances, consumer electronics) whose designers would like to have built by mass production means, as opposed to a mockup, which is an inert representation of a machine's appearance, often made of some non-durable substance.
An electronics designer often builds the first prototype from breadboard or stripboard or perfboard, typically using "DIP" packages.
However, more and more often the first functional prototype is built on a "prototype PCB" almost identical to the production PCB, as PCB manufacturing prices fall and as many components are not available in DIP packages, but only available in SMT packages optimized for placing on a PCB.
Builders of military machines and aviation prefer the terms "experimental" and "service test".
Electronics
Computer programming and computer science
Prototype software is often referred to as alpha grade, meaning it is the first version to run. Often only a few functions are implemented; the primary focus of the alpha is to have a functional base code onto which features may be added. Once alpha grade software has most of the required features integrated into it, it becomes beta software for testing of the entire software and to adjust the program to respond correctly during situations unforeseen during development.
Often the end users may not be able to provide a complete set of application objectives, detailed input, processing, or output requirements in the initial stage. After the user evaluation, another prototype will be built based on feedback from users, and again the cycle returns to customer evaluation. The cycle starts by listening to the user, followed by building or revising a mock-up and letting the user test the mock-up; then the cycle repeats. There is now a new generation of tools called Application Simulation Software which help quickly simulate applications before their development.
Extreme programming uses iterative design to gradually add one feature at a time to the initial prototype.
Other programming/computing concepts
In many programming languages, a function prototype is the declaration of a subroutine or function (and should not be confused with software prototyping). This term is rather C/C++-specific; other terms for this notion are signature, type and interface. In prototype-based programming (a form of object-oriented programming), new objects are produced by cloning existing objects, which are called prototypes.
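As a minimal illustration in C (the function name and values are invented for this example), a prototype declares the signature so that calls can be type-checked before the definition is seen:

    #include <math.h>
    #include <stdio.h>

    /* Function prototype: declares name, return type and parameter types only. */
    double hypotenuse(double a, double b);

    int main(void)
    {
        /* This call is checked against the prototype above. */
        printf("%f\n", hypotenuse(3.0, 4.0));
        return 0;
    }

    /* The definition may appear later, or in another translation unit. */
    double hypotenuse(double a, double b)
    {
        return sqrt(a * a + b * b);
    }

Compiled with a C compiler (linking the math library), this prints 5.000000; calling the function without any visible declaration would provoke a diagnostic in modern C.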
The term may also refer to the Prototype JavaScript Framework.
Additionally, the term may refer to the prototype design pattern.
Continuous learning approaches within organizations or businesses may also use the concept of business or process prototypes through software models.
The concept of prototypicality is used to describe how much a website deviates from the expected norm; greater deviation leads to a lowering of user preference for that site's design.
Data prototyping
A data prototype is a form of functional or working prototype. The justification for its creation is usually a data migration, data integration or application implementation project and the raw materials used as input are an instance of all the relevant data which exists at the start of the project.
The objectives of data prototyping are to produce:
A set of data cleansing and transformation rules which have been seen to produce data which is all fit for purpose.
A dataset which is the result of those rules being applied to an instance of the relevant raw (source) data.
To achieve this, a data architect uses a graphical interface to interactively develop and execute transformation and cleansing rules using raw data. The resultant data is then evaluated and the rules refined. Beyond the obvious visual checking of the data on-screen by the data architect, the usual evaluation and validation approaches are to use Data profiling software and then to insert the resultant data into a test version of the target application and trial its use.
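By way of a hedged sketch (the record layout, field names and rules are invented for illustration, not taken from any particular tool), a cleansing-and-transformation rule of the kind described above might look like this in C:

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* A raw source record: key and country code, possibly dirty. */
    struct record { char key[16]; char country[8]; };

    /* Rule 1: trim surrounding whitespace in place. */
    static void trim(char *s)
    {
        char *p = s;
        while (isspace((unsigned char)*p)) p++;
        memmove(s, p, strlen(p) + 1);
        size_t n = strlen(s);
        while (n > 0 && isspace((unsigned char)s[n - 1])) s[--n] = '\0';
    }

    /* Rule 2: normalise country codes to upper case; reject empty keys. */
    static int cleanse(struct record *r)
    {
        trim(r->key);
        trim(r->country);
        for (char *p = r->country; *p; p++) *p = (char)toupper((unsigned char)*p);
        return r->key[0] != '\0'; /* 1 = fit for purpose, 0 = reject */
    }

    int main(void)
    {
        struct record r = { " a-100 ", " ch " };
        if (cleanse(&r))
            printf("key=%s country=%s\n", r.key, r.country); /* key=a-100 country=CH */
        return 0;
    }

In a real data prototyping exercise such rules are developed interactively against an instance of the source data and refined as the resulting dataset is evaluated.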
Prototyping for Human-Computer Interaction
When developing software or digital tools that humans interact with, a prototype is an artifact that is used to ask and answer a design question. Prototypes provide the means for examining design problems and evaluating solutions.
HCI practitioners can employ several different types of prototypes:
"Wizard of Oz" prototype: named after the Wizard of Oz in the film The Wizard of Oz. This is a prototyping method with which the computer-side of the interaction is faked by an offsite or hidden human. This prototyping technique is particularly useful for demonstrating functionality that is difficult or lengthy to engineer, such as applications like voice user interface.
role prototype: this prototype may not be engineered or look and feel like a finished product, but the purpose of this type of prototype is to investigate and evaluate a user need, or what the prototype could do for the user. They can present features and functionality that the user might benefit from, to demonstrate what role an artifact like the prototype might fulfill for the user. A famous example of this kind of prototype would be the block of wood carried by Jeff Hawkins, when developing the Palm Pilot.
paper prototype: this prototype may use cut paper, cardboard, or other inexpensive materials to demonstrate an interface. The purpose of this prototype is to test with users, without having to use a digital tool or develop a program to test functionality. Recently, paper prototyping has fallen out of favor within certain design circles, particularly because of the low-fidelity nature of this method and its lack of effectiveness when testing with users.
Scale modeling
In the field of scale modeling (which includes model railroading, vehicle modeling, airplane modeling, military modeling, etc.), a prototype is the real-world basis or source for a scale model—such as the real EMD GP38-2 locomotive—which is the prototype of Athearn's (among other manufacturers) locomotive model. Technically, any non-living object can serve as a prototype for a model, including structures, equipment, and appliances, and so on, but generally prototypes have come to mean full-size real-world vehicles including automobiles (the prototype 1957 Chevy has spawned many models), military equipment (such as M4 Shermans, a favorite among US Military modelers), railroad equipment, motor trucks, motorcycles, and space-ships (real-world such as Apollo/Saturn Vs, or the ISS).
As of 2014, basic rapid prototype machines (such as 3D printers) cost about $2,000, but larger and more precise machines can cost as much as $500,000.
Architecture
In architecture, prototyping refers to either architectural model making (as form of scale modelling) or as part of aesthetic or material experimentation, such as the Forty Wall House open source material prototyping centre in Australia.
Architects prototype to test ideas structurally, aesthetically and technically. Whether the prototype works or not is not the primary focus: architectural prototyping is the revelatory process through which the architect gains insight.
Metrology
In the science and practice of metrology, a prototype is a human-made object that is used as the standard of measurement of some physical quantity to base all measurement of that physical quantity against. Sometimes this standard object is called an artifact. In the International System of Units (SI), there has been no prototype standard since May 20, 2019. Before that date, the last prototype used was the international prototype of the kilogram, a solid platinum-iridium cylinder kept at the Bureau International des Poids et Mesures (International Bureau of Weights and Measures) in Sèvres, France (a suburb of Paris) that by definition was the mass of exactly one kilogram. Copies of this prototype were fashioned and issued to many nations to represent the national standard of the kilogram and were periodically compared to the Paris prototype. Now the kilogram is redefined in such a way that the Planck constant is prescribed a value of exactly 6.62607015×10−34 joule-seconds.
Until 1960, the meter was defined by a platinum-iridium prototype bar with two marks on it (that were, by definition, spaced apart by one meter), the international prototype of the metre, and in 1983 the meter was redefined to be the distance in free space covered by light in 1/299,792,458 of a second (thus defining the speed of light to be 299,792,458 meters per second).
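Both redefinitions can be summarized as fixing a constant exactly; the numbers below are the defined values:

    c = 299\,792\,458\ \mathrm{m\,s^{-1}} \quad\Rightarrow\quad 1\ \mathrm{m} = \text{the distance light travels in } \tfrac{1}{299\,792\,458}\ \mathrm{s}
    h = 6.626\,070\,15 \times 10^{-34}\ \mathrm{J\,s} = 6.626\,070\,15 \times 10^{-34}\ \mathrm{kg\,m^2\,s^{-1}} \quad\Rightarrow\quad \text{the kilogram follows once the metre and second are fixed}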
Natural sciences
In many sciences, from pathology to taxonomy, prototype refers to a disease, species, etc. which sets a good example for the whole category. In biology, prototype is the ancestral or primitive form of a species or other group; an archetype. For example, the Senegal bichir is regarded as the prototype of its genus, Polypterus.
See also
3D printing
Clay modeling
Minimum viable product
Rapid prototyping
Test article (disambiguation)
References
Industrial design
Product development
Prototype | Prototype | Engineering | 3,146 |
31,848,417 | https://en.wikipedia.org/wiki/Upper%20motor%20neuron%20syndrome | Upper motor neuron syndrome (UMNS) is the motor control changes that can occur in skeletal muscle after an upper motor neuron lesion.
Following upper motor neuron lesions, affected muscles potentially have many features of altered performance including:
weakness (decreased ability for the muscle to generate force)
decreased motor control including decreased speed, accuracy and dexterity
altered muscle tone (hypotonia or hypertonia) – a decrease or increase in the baseline level of muscle activity
decreased endurance
exaggerated deep tendon reflexes including spasticity, and clonus (a series of involuntary rapid muscle contractions)
Such signs are collectively termed the "upper motor neuron syndrome". Affected muscles typically show multiple signs, with severity depending on the degree of damage and other factors that influence motor control. In neuroanatomical circles, it is often joked, for example, that hemisection of the cervical spinal cord leads to an "upper lower motor neuron syndrome and a lower upper motor neuron syndrome". The saying refers to lower motor neuron symptoms in the upper extremity (arm) and upper motor neurons symptoms in the lower extremity (leg).
Health professionals' understanding of impairments in muscles after an upper motor neuron lesion has progressed considerably in recent decades. However, a diagnosis of "spasticity" is still often used interchangeably with upper motor neuron syndrome, and it is not unusual to see patients labeled as spastic who demonstrate an array of UMN findings.
Spasticity is an exaggerated stretch reflex, which means that a muscle has a reflex contraction when stretched, and that this contraction is stronger when the stretch is applied more quickly. The commonly quoted definition by Lance (1980) describes "a motor disorder, characterised by a velocity-dependent increase in tonic stretch reflexes with exaggerated tendon jerks, resulting from hyper-excitability of the stretch reflex as one component of the upper motor neurone (UMN) syndrome".
Spasticity is a common feature of muscle performance after upper motor neuron lesions, but is generally of much less clinical significance than other features such as decreased strength, decreased control and decreased endurance. The confusion in the use of the terminology complicates assessment and treatment planning by health professionals, as many confuse the other findings of upper motor neuron syndrome and describe them as spasticity. This confusion potentially leaves health professionals attempting to inhibit an exaggerated stretch reflex to improve muscle performance, potentially leaving more significant UMNS changes such as weakness unaddressed. Improved understanding of the multiple features of the upper motor neuron syndrome supports more rigorous assessment, and improved treatment planning.
Presentation
The upper motor neuron syndrome signs are seen in conditions where motor areas in the brain and/or spinal cord are damaged or fail to develop normally. These include spinal cord injury, cerebral palsy, multiple sclerosis and acquired brain injury including stroke. The impact of impairment of muscles for an individual is problems with movement, and posture, which often affects their function.
Diagnosis
Assessment of motor control may involve several health professionals depending on the affected individual's situation, and the severity of their condition. This may include physical therapists, physicians (including neurologists and physiatrists) and rehabilitation physicians, orthotists, occupational therapists, and speech-language pathologists. Assessment is needed of the affected individual's goals, their function, and any symptoms that may be related to the movement disorder, such as pain. A thorough assessment then uses a clinical reasoning approach to determine why difficulties are occurring. Elements of assessment will include analysis of posture, active movement, muscle strength, movement control and coordination, and endurance, as well as muscle tone and spasticity. Impaired muscles typically demonstrate a loss of selective movement, including a loss of eccentric control (decreased ability to actively lengthen); this decreased active lengthening of a muscle is a key factor that limits motor control. While multiple muscles in a limb are usually affected in the Upper Motor Neuron Syndrome, there is usually an imbalance of muscle activity (muscle tone), such that there is a stronger pull on one side of a joint, such as into elbow flexion. Decreasing the degree of this imbalance is a common focus of muscle strengthening programs. Impaired motor control also typically features a loss of stabilisation of an affected limb or the head from the trunk, so a thorough assessment requires this to be analysed as well, and exercise to improve proximal stability may be indicated.
Secondary effects are likely to impact on assessment of impaired muscles. If muscle tone is assessed with passive muscle lengthening, increased muscle stiffness may affect the feeling of resistance to passive stretch, in addition to neurological resistance to stretch. Other secondary changes such as loss of muscle fibres following acquired muscle weakness are likely to compound the weakness arising from the upper motor neuron lesion. In severely affected muscles, there may be marked secondary changes, such as muscle contracture, particularly if management has been delayed or absent.
Treatment
Treatment should be based on assessment by the relevant health professionals. For muscles with mild-to-moderate impairment, exercise should be the mainstay of management, and is likely to need to be prescribed by a physical therapist or other health professional skilled in neurological rehabilitation.
Muscles with severe impairment are likely to be more limited in their ability to exercise, and may require help to do this. They may require additional interventions, to manage the greater neurological impairment and also greater secondary complications. These interventions may include serial casting, flexibility exercise such as sustained positioning programs, and medical interventions.
Research has clearly shown that exercise is beneficial for impaired muscles, even though it was previously believed that strength exercise would increase muscle tone and impair muscle performance further. Also, in previous decades there has been a strong focus on other interventions for impaired muscles, particularly stretching and splinting, but the evidence does not support these as effective. One of the challenges for health professionals working with UMNS movement disorders is that the degree of muscle weakness makes developing an exercise programme difficult. For muscles that lack any volitional control, such as after complete spinal cord injury, exercise may be assisted, and may require equipment, such as using a standing frame to sustain a standing position. Often, muscles require specific stimulation to achieve small amounts of activity, which is most often achieved by weight-bearing (e.g. positioning and supporting a limb such that it supports body weight) or by stimulation to the muscle belly (such as electrical stimulation or vibration).
Medical interventions may include such medications as baclofen, diazepam, dantrolene, or clonazepam. Phenol injections or botulinum toxin injections into a muscle belly of the upper or lower extremities can be used to attempt to dampen the signals between nerve and muscle. The effectiveness of medications varies between individuals, and varies based on location of the upper motor neuron lesion (in the brain or the spinal cord). Medications are commonly used for movement disorders, but research has not shown functional benefit for some drugs. Some studies have shown that medications have been effective in decreasing spasticity, but that this has not been accompanied by functional benefits.
See also
Upper motor neuron lesion
Stroke rehabilitation
Strength training
Cerebral palsy
Spinal cord injury
Acquired brain injury
Physiotherapy
Traumatic brain injury
Motor control
References
Motor control
Syndromes | Upper motor neuron syndrome | Biology | 1,501 |
244,391 | https://en.wikipedia.org/wiki/Flash%20flood | A flash flood is a rapid flooding of low-lying areas: washes, rivers, dry lakes and depressions. It may be caused by heavy rain associated with a severe thunderstorm, hurricane, or tropical storm, or by meltwater from ice and snow. Flash floods may also occur after the collapse of a natural ice or debris dam, or a human structure such as a man-made dam, as occurred before the Johnstown Flood of 1889. Flash floods are distinguished from regular floods by having a timescale of fewer than six hours between rainfall and the onset of flooding.
Flash floods are a significant hazard, causing more fatalities in the U.S. in an average year than lightning, tornadoes, or hurricanes. They can also deposit large quantities of sediments on floodplains and destroy vegetation cover not adapted to frequent flood conditions.
Causes
Flash floods most often occur in dry areas that have recently received precipitation, but they may be seen anywhere downstream from the source of the precipitation, even many miles from the source. In areas on or near volcanoes, flash floods have also occurred after eruptions, when glaciers have been melted by the intense heat. Flash floods are known to occur in the highest mountain ranges of the United States and are also common in the arid plains of the Southwestern United States. Flash flooding can also be caused by extensive rainfall released by hurricanes and other tropical storms, as well as the sudden thawing effect of ice dams. Human activities can also cause flash floods to occur. When dams fail, a large quantity of water can be released and destroy everything in its path.
Hazards
The United States National Weather Service gives the advice "Turn Around, Don't Drown" for flash floods; that is, it recommends that people get out of the area of a flash flood, rather than trying to cross it. Many people tend to underestimate the dangers of flash floods. What makes flash floods most dangerous is their sudden nature and fast-moving water. A vehicle provides little to no protection against being swept away; it may make people overconfident and less likely to avoid the flash flood. More than half of the fatalities attributed to flash floods are people swept away in vehicles when trying to cross flooded intersections. As little as of water is enough to carry away most SUV-sized vehicles. The U.S. National Weather Service reported in 2005 that, using a national 30-year average, more people die yearly in floods, 127 on average, than by lightning (73), tornadoes (65), or hurricanes (16).
In deserts, flash floods can be particularly deadly for several reasons. First, storms in arid regions are infrequent, but they can deliver an enormous amount of water in a very short time. Second, these rains often fall on poorly absorbent and often clay-like soil, which greatly increases the amount of runoff that rivers and other water channels have to handle. These regions tend not to have the infrastructure that wetter regions have to divert water from structures and roads, such as storm drains, culverts, and retention basins, either because of sparse population or poverty, or because residents believe the risk of flash floods is not high enough to justify the expense. In fact, in some areas, desert roads frequently cross a dry river and creek beds without bridges. From the driver's perspective, there may be clear weather, when a river unexpectedly forms ahead of or around the vehicle in a matter of seconds. Finally, the lack of regular rain to clear water channels may cause flash floods in deserts to be headed by large amounts of debris, such as rocks, branches, and logs.
Deep slot canyons can be especially dangerous to hikers as they may be flooded by a storm that occurs on a mesa miles away. The flood sweeps through the canyon; the canyon makes it difficult to climb up and out of the way to avoid the flood. For example, a cloudburst in southern Utah on 14 September 2015 resulted in 20 flash flood fatalities, of which seven fatalities occurred at Zion National Park when hikers were trapped by floodwaters in a slot canyon.
Impacts
Flash floods induce severe impacts in both the built and the natural environment. The effects of flash floods can be catastrophic and show extensive diversity, ranging from damages in buildings and infrastructure to impacts on vegetation, human lives and livestock. The effects are particularly difficult to characterize in urban areas.
Researchers have used datasets such as the Severe Hazards Analysis and Verification Experiment (SHAVE) and the U.S. National Weather Service (NWS) Storm Data datasets to connect the impact of flash floods with the physical processes involved in flash flooding. This should increase the reliability of flash flood impact forecasting models. Analysis of flash floods in the United States between 2006 and 2012 shows that injuries and fatalities are most likely in small, rural catchments, that the shortest events are also the most dangerous, that the hazards are greatest after nightfall, and that a very high fraction of injuries and fatalities involve vehicles.
An impact severity scale was proposed in 2020, providing a coherent overview of flash flood effects through the classification of impact types and severity and the mapping of their spatial extent in a continuous way across the floodplain. Depending on the affected elements, the flood effects are grouped into four categories: (i) impacts on the built environment, (ii) impacts on man-made mobile objects, (iii) impacts on the natural environment (including vegetation, agriculture, geomorphology, and pollution), and (iv) impacts on the human population (entrapments, injuries, fatalities). The scale was proposed as a tool for prevention planning, as the resulting maps offer insights on future impacts, highlighting the high-severity areas.
Flash floods can cause rapid soil erosion. Much of the Nile delta sedimentation may come from flash flooding in the desert areas that drain into the Nile River. However, flash floods of short duration produce relatively little bedrock erosion or channel widening, having their greatest impact from sedimentation on the floodplain.
Some wetlands plants, such as certain varieties of rice, are adapted to endure flash flooding. However, plants that thrive in drier areas can be harmed by flooding, as the plants can become stressed by the large amount of water.
See also
References
Further reading
External links
Public clip of the Fochabers flood in Moray September 9
Decision tree to choose an uncertainty method for hydrological and hydraulic modelling, choosing an uncertainty analysis for flood modelling.
Great footage of flash floods in the arid midwest heading down dry washes after heavy rain.
Map of central Texas flash flood alley.
Workshop Proceedings Flash Flood Management
Workshop Proceedings Flash Flood Forecasting
Hydrologic Research Center
Flood
Hydrology
Hazards of outdoor recreation | Flash flood | Chemistry,Engineering,Environmental_science | 1,347 |
56,526,740 | https://en.wikipedia.org/wiki/Edward%20Je%C5%82owicki | Edward Bożeniec Jełowicki born 1803 in Hubnik now in Western Ukraine, died 10 November 1848 in Vienna, was a Polish landowner, decorated Colonel in the Polish army, insurgent, officer in the Foreign Legion and commander of the Vienna artillery. He was an engineer and inventor.
Biography
Family
Descended from Ruthenian aristocracy, his family had been integrated into the Polish Szlachta and converted from Orthodoxy to Roman Catholicism during the Republic of Two Nations. Edward was the eldest son of Wacław Jełowicki and his wife Franciszka née Izdebska. He had two younger brothers, the publisher, writer and priest, Aleksander and Eustachy and a sister, Hortensja, who married Piotr Sobański.
Career
An alumnus of the Vienna Theresian Military Academy, he was elected Marshal of the Haisyn district. He took a leading part in the November Uprising in Ukraine, with his father and two brothers, until its undoing in 1831, when with his younger brother, Aleksander, he evaded capture by escaping into the Austrian Empire. After much travel across Europe and Algeria, he pursued further studies at the postgraduate École d'état-major in Paris and the École Centrale Paris. In 1836, during a quiet spell in London, he designed his steam turbine and took out two British patents on it, one in England and the other granted in Edinburgh for "certain improvements to his steam engine" on 16 July 1836.
Back in Paris, he moved in the circle of Adam Mickiewicz, whose Paris publisher was Edward's brother, Aleksander Jełowicki. Like his brother, he was also a friend of Frédéric Chopin.
Caught up in the Spring of Nations that swept over Europe in 1848, he was executed in Vienna on the order of Alfred I, Prince of Windisch-Grätz. He left a widow and two children.
Distinctions
Order of Virtuti Militari
Légion d'Honneur
See also
Jełowicki family
Great Emigration
References
Further reading
Joseph Straszewicz (1839). Les Polonais et les Polonaises de la révolution du 29 novembre 1830 - biographie, Paris: chez l'Editeur, rue des Colombiers, 12, pp.1-10. (in French).
Polytechnisches Journal. 63. Band, Jahrgang 1837, N.F. 13. Band, Hefte 1-6 komplett. (= 18. Jahrgang, 1.-6. Heft ). Eine Zeitschrift zur Verbreitung gemeinnüziger Kenntnisse im Gebiete der Naturwissenschaft, der Chemie, der Pharmacie, der Mechanik, der Manufakturen, Fabriken, Künste, Gewerbe, der Handlung, der Haus- und Landwirthschaft etc. Herausgegeben von Johann Gottfried und Emil Maximilian Dingler.
Polytechnisches Journal. Hrsg. v. Johann Gottfried Dingler, Emil Maximilian Dingler und Julius Hermann Schultes:
Published by Stuttgart in der J. G. Cotta'schen Buchhandlung (1837)., 1837 (in German)
External links
The Gocla collection Museum of Warsaw has a bronze medallion of Edward Jełowicki
British Museum information entry
- Genealogy of Edward Jełowicki in Polish
1803 births
1848 deaths
People from Vinnytsia Oblast
Dukes of Poland
Polish nobility
Members of Polish government (November Uprising)
Activists of the Great Emigration
Polish Army officers
École Centrale Paris alumni
Polish exiles
Polish inventors
Mechanical engineers
Polish Roman Catholics
Recipients of the Virtuti Militari
Executed Polish people
19th-century executions by Austria
Polish recipients of the Legion of Honour
Engineers from the Austrian Empire | Edward Jełowicki | Engineering | 789 |
47,369,768 | https://en.wikipedia.org/wiki/Artmotion | Artmotion Ltd. is a Swiss-based data housing provider. It operates two data centers near the Swiss Alps that are designed for security-seeking businesses, motivated by the country's political neutrality and ironclad privacy laws.
History
Artmotion was founded in 2000 by Mateo Meier, who is also the current CEO of the company. In 2003, Artmotion acquired Cyberhost, a Swiss Cloud hosting services provider. Since 2011, the Swiss data center provider Everyware Ltd. holds 25% of Artmotion's shares. Artmotion increased its capital in 2015.
Artmotion acquired 58% of the shares in Citadelo and became a shareholder and board member in 2020. However, Artmotion and Citadelo continued to operate as separate companies. Artmotion also owns a stake in Citadelo Switzerland.
NSA PRISM scandal
After the 2013 PRISM leak, several cloud computing companies faced criticism for their lack of security, especially those based in the US. Artmotion, among other Swiss-based cloud computing providers, gained the most from this scandal. Mostly due to the country's stringent data privacy laws, its customer base increased and the company's revenue grew 45 percent in the month following the scandal.
According to the CEO Mateo Meier, except in a few European countries like Luxembourg and Switzerland, personal privacy is difficult to safeguard in the modern world. Switzerland is not a member of the European Union, hence it is exempt from pan-European agreements to share data with member states, as well as with the United States.
References
Information technology companies of Switzerland
Companies based in the canton of Zug
Data centers
Cloud computing providers
Swiss companies established in 2000
Technology companies established in 2000 | Artmotion | Technology | 340 |
7,300,829 | https://en.wikipedia.org/wiki/Pi%20Josephson%20junction | A Josephson junction (JJ) is a quantum mechanical device which is made of two superconducting electrodes separated by a barrier (thin insulating tunnel barrier, normal metal, semiconductor, ferromagnet, etc.).
A π Josephson junction is a Josephson junction in which the Josephson phase φ equals π in the ground state, i.e. when no external current or magnetic field is applied.
Background
The supercurrent Is through a Josephson junction is generally given by Is = Icsin(φ),
where φ is the phase difference of the superconducting wave functions of the two
electrodes, i.e. the Josephson phase.
The critical current Ic is the maximum supercurrent that can exist through the Josephson junction.
In experiments, one usually passes some current through the Josephson junction, and the junction reacts by changing the Josephson phase. From the above formula it is clear that the phase φ = arcsin(I/Ic), where I is the applied (super)current.
Since the phase is 2π-periodic, i.e. φ and φ + 2πn are physically equivalent, without losing generality, the discussion below refers to the interval 0 ≤ φ < 2π.
When no current (I = 0) exists through the Josephson junction, e.g. when the junction is disconnected, the junction is in the ground state and the Josephson phase across it is zero (φ = 0). The phase can also be φ = π, also resulting in no current through the junction. It turns out that the state with φ = π is unstable and corresponds to the Josephson energy maximum, while the state φ = 0 corresponds to the Josephson energy minimum and is a ground state.
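These stability statements follow from the textbook expression for the Josephson energy (quoted here as a sketch under the usual conventions, not from this article):

    U(\varphi) = \frac{\Phi_0 I_c}{2\pi}\left( 1 - \cos\varphi \right) = E_J \left( 1 - \cos\varphi \right)

For Ic > 0 this has its minimum at φ = 0 and its maximum at φ = π, as stated above.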
In certain cases, one may obtain a Josephson junction where the critical current is negative (Ic < 0). In this case, the first Josephson relation becomes Is = −|Ic| sin(φ) = |Ic| sin(φ + π).
The ground state of such a Josephson junction is φ = π and corresponds to the Josephson energy minimum, while the conventional state φ = 0 is unstable and corresponds to the Josephson energy maximum. Such a Josephson junction with φ = π in the ground state is called a π Josephson junction.
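Under the same conventions, writing Ic = −|Ic| makes the shift of the ground state explicit:

    I_s = I_c \sin\varphi = |I_c| \sin(\varphi + \pi), \qquad U(\varphi) = \frac{\Phi_0 |I_c|}{2\pi}\left( 1 + \cos\varphi \right)

so the energy is minimal at φ = π and maximal at φ = 0.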
π Josephson junctions have quite unusual properties. For example, if one connects (shorts) the superconducting electrodes with an inductance L (e.g. a superconducting wire), one may expect a spontaneous supercurrent circulating in the loop, passing through the junction and through the inductance clockwise or counterclockwise. This supercurrent is spontaneous and belongs to the ground state of the system. The direction of its circulation is chosen at random. This supercurrent will of course induce a magnetic field, which can be detected experimentally. The magnetic flux passing through the loop will have a value between 0 and half of the magnetic flux quantum, i.e. between 0 and Φ0/2, depending on the value of the inductance L.
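A hedged energy-balance sketch shows where the range 0 to Φ0/2 comes from: for a ring with one π junction, flux quantization ties the junction phase to the flux as φ = 2πΦ/Φ0, so the total energy is

    E(\Phi) = \frac{\Phi^2}{2L} + \frac{\Phi_0 |I_c|}{2\pi}\left( 1 + \cos\frac{2\pi\Phi}{\Phi_0} \right)

Its minimum sits at Φ = 0 for small inductance and moves toward ±Φ0/2 as the screening parameter β_L = 2πL|I_c|/Φ0 grows, which reproduces the quoted range of the spontaneous flux.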
Technologies and physical principles
Ferromagnetic Josephson junctions. Consider a Josephson junction with a ferromagnetic barrier, i.e. the multilayers superconductor-ferromagnet-superconductor (SFS) or superconductor-insulator-ferromagnet-superconductor (SIFS). In such structures the superconducting order parameter inside the F-layer oscillates in the direction perpendicular to the junction plane. As a result, for certain thicknesses of the F-layer and temperatures, the order parameter may become +1 at one superconducting electrode and −1 at the other. In this situation one gets a π Josephson junction. Note that inside the F-layer a competition of different solutions takes place and the one with the lower energy wins out. Various ferromagnetic π junctions have been fabricated: SFS junctions with weak ferromagnetic interlayers; SFS junctions with strong ferromagnetic interlayers, such as Co, Ni, PdFe and NiFe; SIFS junctions; and S-Fi-S junctions.
Josephson junctions with unconventional order parameter symmetry. Novel superconductors, notably high temperature cuprate superconductors, have an anisotropic superconducting order parameter which can change its sign depending on the direction. In particular, a so-called d-wave order parameter has a value of +1 if one looks along the crystal axis a and −1 if one looks along the crystal axis b. If one looks along the ab direction (45° between a and b) the order parameter vanishes. By making Josephson junctions between d-wave superconducting films with different orientations or between d-wave and conventional isotropic s-wave superconductors, one can get a phase shift of π. Nowadays there are several realizations of π Josephson junctions of this type:
tri-crystal grain boundary Josephson junctions,
tetra-crystal grain boundary Josephson junctions,
d-wave/s-wave ramp zigzag Josephson junctions,
tilt-twist grain boundary Josephson junctions,
p-wave based Josephson junctions.
Superconductor–normal metal–superconductor (SNS) Josephson junctions with non-equilibrium electron distribution in N-layer.
Superconductor–quantum dot–superconductor (S-QuDot-S) Josephson junctions (implemented by carbon nanotube Josephson junctions).
Historical developments
Theoretically, the possibility of creating a π Josephson junction was first discussed by Bulaevskii et al.,
who considered a Josephson junction with paramagnetic scattering in the barrier. Almost one decade later, the possibility of having a π Josephson junction was discussed in the context of heavy fermion p-wave superconductors.
Experimentally, the first π Josephson junction was a corner junction made of yttrium barium copper oxide (d-wave) and Pb (s-wave) superconductors. The first unambiguous proof of a π Josephson junction with a ferromagnetic barrier was given only a decade later. That work used a weak ferromagnet consisting of a copper-nickel alloy (CuxNi1−x, with x around 0.5) and optimized it so that the Curie temperature was close to the superconducting transition temperature of the niobium leads.
See also
Josephson effect
φ Josephson junction
Semifluxon
Fractional vortices
Brian D. Josephson
References
Superconductivity
Josephson effect | Pi Josephson junction | Physics,Materials_science,Engineering | 1,369 |
6,084,563 | https://en.wikipedia.org/wiki/Plutonium-240 | Plutonium-240 ( or Pu-240) is an isotope of plutonium formed when plutonium-239 captures a neutron. The detection of its spontaneous fission led to its discovery in 1944 at Los Alamos and had important consequences for the Manhattan Project.
240Pu undergoes spontaneous fission as a secondary decay mode at a small but significant rate. The presence of 240Pu limits plutonium's use in a nuclear bomb, because the neutron flux from spontaneous fission initiates the chain reaction prematurely, causing an early release of energy that physically disperses the core before full implosion is reached.
It decays by alpha emission to uranium-236.
Nuclear properties
About 62% to 73% of the time when 239Pu captures a neutron, it undergoes fission; the remainder of the time, it forms 240Pu. The longer a nuclear fuel element remains in a nuclear reactor, the greater the relative percentage of 240Pu in the fuel becomes.
The isotope 240Pu has about the same thermal neutron capture cross section as 239Pu, but only a tiny thermal neutron fission cross section (0.064 barns). When the isotope 240Pu captures a neutron, it is about 4500 times more likely to become plutonium-241 than to fission. In general, isotopes of odd mass numbers are more likely to absorb a neutron, and can undergo fission upon neutron absorption more easily than isotopes of even mass number. Thus, even-mass isotopes tend to accumulate, especially in a thermal reactor.
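As a quick consistency check using only the numbers quoted above:

    \frac{\sigma_{\text{capture}}}{\sigma_{\text{fission}}} \approx 4500 \quad\Rightarrow\quad \sigma_{\text{capture}} \approx 4500 \times 0.064\ \text{b} \approx 290\ \text{b}

which is of the same order as the thermal capture cross section of 239Pu, consistent with the statement that the two are about equal.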
Nuclear weapons
The inevitable presence of some 240Pu in a plutonium-based nuclear warhead core complicates its design, and pure 239Pu is considered optimal. This is for a few reasons:
240Pu has a high rate of spontaneous fission. A single stray neutron that is introduced while the core is supercritical will cause it to detonate almost immediately, even before it has been crushed to an optimal configuration. The presence of 240Pu would thus randomly cause fizzles, with an explosive yield well below the potential yield.
Isotopes besides 239Pu release significantly more radiation, which complicates its handling by workers.
Isotopes besides 239Pu produce more decay heat, which can cause phase change distortions of the precision core if allowed to build up.
The spontaneous fission problem was extensively studied by the scientists of the Manhattan Project during World War II. It blocked the use of plutonium in gun-type nuclear weapons in which the assembly of fissile material into its optimal supercritical mass configuration can take up to a millisecond to complete, and made it necessary to develop implosion-style weapons where the assembly occurs in a few microseconds. Even with this design, it was estimated in advance of the Trinity test that 240Pu impurity would cause a 12% chance of the explosion failing to reach its maximum yield.
The minimization of the amount of 240Pu, as in weapons-grade plutonium (less than 7% 240Pu), is achieved by reprocessing the fuel after just 90 days of use. Such rapid fuel cycles are highly impractical for civilian power reactors and are normally only carried out with dedicated weapons plutonium production reactors. Plutonium from spent civilian power reactor fuel typically has under 70% 239Pu and around 26% 240Pu, the rest being made up of other plutonium isotopes, making it more difficult to use it for the manufacturing of nuclear weapons. For nuclear weapon designs introduced after the 1940s, however, there has been considerable debate over the degree to which 240Pu poses a barrier for weapons construction; see the article Reactor-grade plutonium.
See also
Burnup
Isotopes of plutonium
References
External links
NLM Hazardous Substances Databank – Plutonium, Radioactive
Actinides
Isotopes of plutonium
Fertile materials | Plutonium-240 | Chemistry | 763 |
20,554,835 | https://en.wikipedia.org/wiki/Atriplex%20truncata | Atriplex truncata is a species of saltbush known by the common names wedgeleaf saltbush, wedgescale, and wedge orach, native to western North America from British Columbia to California and to New Mexico. It grows in montane to desert habitats with saline soils, such as dry lake beds.
Description
Atriplex truncata is an annual herb producing erect, angled stems which can exceed 70 centimeters in height. Leaves are 1 to 4 centimeters long and wedge-shaped. The stems and herbage are generally very scaly and scurfy. Male and female flowers are produced in small clusters in the leaf axils.
References
External links
Jepson Manual Treatment
USDA Plants Profile for Atriplex truncata (wedgescale saltbush)
Photo gallery
truncata
Halophytes
Flora of the Northwestern United States
Flora of Western Canada
Flora of British Columbia
Flora of California
Flora of Colorado
Flora of Nevada
Flora of New Mexico
Flora of Utah
Flora of the California desert regions
Flora of the Great Basin
Flora of the Rocky Mountains
Flora of the Sierra Nevada (United States)
Natural history of the Mojave Desert
Natural history of the Transverse Ranges
Plants described in 1871
Flora without expected TNC conservation status | Atriplex truncata | Chemistry | 252 |
613,749 | https://en.wikipedia.org/wiki/Entablature | An entablature (a nativization of Italian intavolatura, from in "in" and tavola "table") is the superstructure of moldings and bands which lies horizontally above columns, resting on their capitals. Entablatures are major elements of classical architecture, and are commonly divided into the architrave (the supporting member immediately above the columns; equivalent to the lintel in post and lintel construction), the frieze (an unmolded strip that may or may not be ornamented), and the cornice (the projecting member below the pediment). The Greek and Roman temples are believed to be based on wooden structures, the design transition from wooden to stone structures being called petrification.
Overview
The structure of an entablature varies with the orders of architecture. In each order, the proportions of the subdivisions (architrave, frieze, cornice) are defined by the proportions of the column. In Roman and Renaissance interpretations, it is usually approximately a quarter of the height of the column. Variants of entablature that do not fit these models are usually derived from them.
Doric
In the pure classical Doric order entablature is simple. The architrave, the lowest band, is split, from bottom to top, into the guttae, the regulae, and the taenia.
The frieze is dominated by the triglyphs, vertically channelled tablets, separated by metopes, which may or may not be decorated. The triglyphs sit on top of the taenia, a flat, thin, horizontal protrusion, and are finished at the bottom by decoration (often ornate) of 'drops' called guttae, which belong to the top of the architrave. The top of the triglyphs meet the protrusion of the cornice from the entablature. The underside of this protrusion is decorated with mutules, tablets that are typically finished with guttae.
The cornice is split into the soffit, the corona, and the cymatium. The soffit is simply the exposed underside. The corona and the cymatium are the principal parts of the cornice.
Ionic
The Ionic order of entablature adds the fascia in the architrave, which are flat horizontal protrusions, and the dentils under the cornice, which are tooth-like rectangular block moldings.
Corinthian
The Corinthian order adds a far more ornate cornice, divided, from bottom to top, into the cyma reversa, the dentils, the ovolo, the modillions, the fascia, and the cyma recta. The modillions are ornate brackets, similar in use to dentils, but often in the shape of acanthus leaves.
The frieze is sometimes omitted—for example, on the portico of the caryatides of the Erechtheum—and probably did not exist as a structure in the temple of Diana at Ephesus. Neither is it found in the Lycian tombs, which are reproductions in the rock of timber structures based on early Ionian work. The entablature is essentially an evolution of the primitive lintel, which spans two posts, supporting the ends of the roof rafters.
Non-classical architecture
The entablature together with the system of classical columns occurs rarely outside classical architecture. It is often used to complete the upper portion of a wall where columns are not present, and in the case of pilasters (flattened columns projecting from a wall) or of detached or engaged columns, it is sometimes profiled around them. The use of the entablature, irrespective of columns, appeared after the Renaissance.
See also
Classical order
Classical architecture
Prastara, an entablature in the Hindu temple architecture
References
External links
Architectural elements | Entablature | Technology,Engineering | 795 |
662,229 | https://en.wikipedia.org/wiki/MOST%20%28spacecraft%29 | The Microvariability and Oscillations of Stars/Microvariabilité et Oscillations STellaire (MOST) was Canada's first space telescope. For nearly 10 years after its launch it was also the smallest space telescope in orbit (for which its creators nicknamed it the "Humble Space Telescope", in reference to one of the largest, the Hubble). MOST was the first spacecraft dedicated to the study of asteroseismology, subsequently followed by the now-completed CoRoT and Kepler missions. It was also the first Canadian science satellite launched since ISIS II, 32 years previously.
Description
As its name suggests, its primary mission was to monitor variations in star light, which it did by observing a single target for a long period of time (up to 60 days). Typically, larger space telescopes cannot afford to remain focused on a single target for so long due to the demand for their resources.
Roughly the size and weight of a small chest or an extra-large suitcase filled with electronics, it falls into the microsatellite category.
MOST was developed as a joint effort of the Canadian Space Agency, Dynacon Enterprises Limited (now Microsatellite Systems Canada Inc), the Space Flight Laboratory (SFL) at the University of Toronto Institute for Aerospace Studies, and the University of British Columbia. Led by Principal Investigator Jaymie Matthews, the MOST science team's plan was to use observations from MOST to use asteroseismology to help date the age of the universe, and to search for visible-light signatures from extrasolar planets.
The original SFL application to the CSA is available at https://www.astro.utoronto.ca/~rucinski/MOST_proposal_1997.pdf
MOST featured an instrument comprising a visible-light dual-CCD camera, fed by a 15-cm aperture Maksutov telescope. One CCD gathered science images, while the other provided images used by star-tracking software that, along with a set of four reaction wheels (computer-controlled motorized flywheels that are similar to gyroscopes) maintained pointing with an error of less than 1 arc-second, better pointing by far than any other microsatellite to date.
The design of the rest of MOST was inspired by and based on microsatellite bus designs pioneered by AMSAT, and first brought to commercial viability by the microsatellite company SSTL (based at the University of Surrey in the United Kingdom); during the early stages of MOST development, the core group of AMSAT microsatellite satellite designers advised and mentored the MOST satellite design team, via a know-how transfer arrangement with UTIAS. This approach to satellite design is notable for making use of commercial-grade electronics, along with a "small team," "early prototyping" engineering development approach rather different from that used in most other space-engineering programs, to achieve relatively very low costs: MOST's life-cycle cost (design, build, launch and operate) was less than $10 million in Canadian funds (about 7 million Euros or 6 million USD, at exchange rates at time of launch).
Operation history
Development of the satellite was managed by the Canadian Space Agency's Space Science Branch, and was funded under its Small Payloads Program; its operations were (as of 2012) managed by the CSA's Space Exploration Branch. It was operated by SFL (where the primary MOST ground station is located) jointly with Microsat Systems Canada Inc. (since the sale of Dynacon's space division to MSCI in 2008). Ten years after launch, despite failures of two of its components (one of the four reaction wheels and one of the two CCD driver boards), the satellite was still operating well, as a result of both ongoing on-board software upgrades and built-in hardware redundancy, which allowed performance improvements and reconfiguration around failed hardware units.
In 2008, the MOST Satellite Project Team won the Canadian Aeronautics and Space Institute's Alouette Award, which recognizes outstanding contributions to advancement in Canadian space technology, applications, science or engineering.
Termination of operations funding by CSA
On 30 April 2014, the Canadian Space Agency announced that funding to continue operating MOST would be withdrawn as of 9 September 2014, apparently as a result of funding cuts to the Canadian Space Agency's budget by the Harper government, despite the fact that the satellite continues to be fully operational and capable of making on-going science observations. P.I. Jaymie Matthews responded by saying that "he will consider all options to keep the satellite in orbit, and that includes a direct appeal to the public."
Post-CSA-funded operations
In October 2014, the MOST Satellite was acquired by MSCI, which then commenced commercial operation of the satellite, offering a variety of potential uses including continuing the original MOST mission in partnership with Dr. Matthews, but also other planetary studies, attitude control system algorithm R&D, and Earth observation. MOST was finally decommissioned in March 2019, after an apparent failure of its power subsystem.
Discoveries
The MOST team has reported a number of discoveries. In 2004 they reported that the star Procyon does not oscillate to the extent that had been expected, although this has been disputed.
In 2006 observations revealed a previously unknown class of variable stars, the "slowly pulsating B supergiants" (SPBsg). In 2011, MOST detected transits by exoplanet 55 Cancri e of its primary star, based on two weeks of nearly continuous photometric monitoring, confirming an earlier detection of this planet, and allowing investigations into the planet's composition. In 2019, MOST photometry was used to disprove claims of permanent starspots alleged to be caused by interactions between a star's magnetic field and its "hot Jupiter" exoplanet.
MOST target campaigns
Procyon
HD 209458 b
HD 163899 (guide star)
55 Cancri e
HIP 116454 b
See also
References
External links
MOST website by the Canadian Space Agency
MOST website by the University of British Columbia
MOST website by the University of Toronto
MOST website by Microsat Systems Canada Inc.
2003 in Canada
Derelict satellites orbiting Earth
Exoplanet search projects
Optical telescopes
Satellites of Canada
Space telescopes
Spacecraft launched by Rokot rockets
Spacecraft launched in 2003 | MOST (spacecraft) | Astronomy | 1,329 |
37,195,431 | https://en.wikipedia.org/wiki/Epsilon%20Hydri | Epsilon Hydri, Latinized from ε Hydri, is a single, blue-white hued star in the southern constellation of Hydrus. It is a faint star with an apparent visual magnitude of 4.12, but it can be seen with the naked eye. Measurements made by the Hipparcos spacecraft showed an annual parallax shift of 21.48 mas, which provides a distance estimate of 152 light years. The star is moving away from the Sun with a radial velocity of +13.6 km/s. It is a member of the Tucana-Horologium moving group, an association of stars that share a common motion through space.
The stellar classification for this star is B9 Va, indicating that it is a B-type main-sequence star that is generating energy through hydrogen fusion at its core. It is a young star, just 133 million years in age, and has a high rate of spin with a projected rotational velocity of 96 km/s. This is giving the star a mildly oblate shape with an equatorial bulge that is 5% greater than the polar radius. Epsilon Hydri has an estimated 2.64 times the mass of the Sun and 2.2 times the Sun's radius. It is radiating 60 times the Sun's luminosity from its photosphere at an effective temperature of around 10,970 K.
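A minimal Python sketch checking the quoted distance against the Hipparcos parallax, using the standard relation d[pc] = 1 / p[arcsec]; the light-years-per-parsec constant is the usual 3.2616:

```python
# Distance from annual parallax: d (parsecs) = 1 / p (arcseconds).
parallax_mas = 21.48                    # milliarcseconds, from the text
parallax_arcsec = parallax_mas / 1000.0

distance_pc = 1.0 / parallax_arcsec     # ~46.6 parsecs
distance_ly = distance_pc * 3.2616      # light years per parsec

print(f"{distance_pc:.1f} pc = {distance_ly:.0f} ly")  # ~46.6 pc = 152 ly
```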
References
B-type main-sequence stars
Hydrus
Hydri, Epsilon
Durchmusterung objects
016978
012394
0806 | Epsilon Hydri | Astronomy | 318 |
31,556,270 | https://en.wikipedia.org/wiki/Olimunllum | Olimunllum is a thermoplastic composite material containing a quasi-isotropic endless carbon fiber reinforcement and a semi-crystalline thermoplastic polymer matrix from the Polyaryletherketone (PAEK) family.
Recycling
Endless fiber-reinforced PAEK composite, as used in the Olimunllum sheets, can be recycled by chopping the old material, resulting in a composite with short fiber reinforcement that can be thermoformed again. The mechanical properties of pressed or injected CF/PEEK chopped compounds are significantly lower than the original endless fiber composite but exceed those of new injection-molded compounds.
References
Niederstadt, G.: Ökonomischer und ökologischer Leichtbau mit faserverstärkten Polymeren, Expert Verlag, 1997
External links
Olimunllum America LLC website
Polyethers
Organic polymers
Thermoplastics
Technical fabrics
Brand name materials | Olimunllum | Chemistry | 187 |
41,957 | https://en.wikipedia.org/wiki/Electrical%20impedance | In electrical engineering, impedance is the opposition to alternating current presented by the combined effect of resistance and reactance in a circuit.
Quantitatively, the impedance of a two-terminal circuit element is the ratio of the complex representation of the sinusoidal voltage between its terminals, to the complex representation of the current flowing through it. In general, it depends upon the frequency of the sinusoidal voltage.
Impedance extends the concept of resistance to alternating current (AC) circuits, and possesses both magnitude and phase, unlike resistance, which has only magnitude.
Impedance can be represented as a complex number, with the same units as resistance, for which the SI unit is the ohm ().
Its symbol is usually Z, and it may be represented by writing its magnitude and phase in the polar form |Z|∠θ. However, Cartesian complex number representation is often more powerful for circuit analysis purposes.
The notion of impedance is useful for performing AC analysis of electrical networks, because it allows relating sinusoidal voltages and currents by a simple linear law.
In multiple port networks, the two-terminal definition of impedance is inadequate, but the complex voltages at the ports and the currents flowing through them are still linearly related by the impedance matrix.
The reciprocal of impedance is admittance, whose SI unit is the siemens, formerly called mho.
Instruments used to measure the electrical impedance are called impedance analyzers.
History
Perhaps the earliest use of complex numbers in circuit analysis was by Johann Victor Wietlisbach in 1879 in analysing the Maxwell bridge. Wietlisbach avoided using differential equations by expressing AC currents and voltages as exponential functions with imaginary exponents (see the section on the validity of the complex representation below). Wietlisbach found the required voltage was given by multiplying the current by a complex number (impedance), although he did not identify this as a general parameter in its own right.
The term impedance was coined by Oliver Heaviside in July 1886. Heaviside recognised that the "resistance operator" (impedance) in his operational calculus was a complex number. In 1887 he showed that there was an AC equivalent to Ohm's law.
Arthur Kennelly published an influential paper on impedance in 1893. Kennelly arrived at a complex number representation in a rather more direct way than using imaginary exponential functions. Kennelly followed the graphical representation of impedance (showing resistance, reactance, and impedance as the lengths of the sides of a right angle triangle) developed by John Ambrose Fleming in 1889. Impedances could thus be added vectorially. Kennelly realised that this graphical representation of impedance was directly analogous to graphical representation of complex numbers (Argand diagram). Problems in impedance calculation could thus be approached algebraically with a complex number representation. Later that same year, Kennelly's work was generalised to all AC circuits by Charles Proteus Steinmetz. Steinmetz not only represented impedances by complex numbers but also voltages and currents. Unlike Kennelly, Steinmetz was thus able to express AC equivalents of DC laws such as Ohm's and Kirchhoff's laws. Steinmetz's work was highly influential in spreading the technique amongst engineers.
Introduction
In addition to resistance as seen in DC circuits, impedance in AC circuits includes the effects of the induction of voltages in conductors by the magnetic fields (inductance), and the electrostatic storage of charge induced by voltages between conductors (capacitance). The impedance caused by these two effects is collectively referred to as reactance and forms the imaginary part of complex impedance whereas resistance forms the real part.
Complex impedance
The impedance of a two-terminal circuit element is represented as a complex quantity Z. The polar form conveniently captures both magnitude and phase characteristics as
Z = |Z| e^(jθ)
where the magnitude |Z| represents the ratio of the voltage difference amplitude to the current amplitude, while the argument (commonly given the symbol θ) gives the phase difference between voltage and current. j is the imaginary unit, and is used instead of i in this context to avoid confusion with the symbol for electric current.
In Cartesian form, impedance is defined as
Z = R + jX
where the real part of impedance is the resistance R and the imaginary part is the reactance X.
Where it is needed to add or subtract impedances, the cartesian form is more convenient; but when quantities are multiplied or divided, the calculation becomes simpler if the polar form is used. A circuit calculation, such as finding the total impedance of two impedances in parallel, may require conversion between forms several times during the calculation. Conversion between the forms follows the normal conversion rules of complex numbers.
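Python's standard cmath module implements exactly these conversion rules, so a minimal sketch of switching between the Cartesian and polar forms (with an arbitrary example impedance) might look like:

```python
import cmath

# Cartesian form: Z = R + jX (Python writes the imaginary unit as 1j).
Z = 3 + 4j

# Cartesian -> polar: magnitude |Z| and phase theta (in radians).
magnitude, theta = cmath.polar(Z)
print(magnitude, theta)        # 5.0, 0.927... rad

# Polar -> Cartesian.
Z_back = cmath.rect(magnitude, theta)
print(Z_back)                  # (3+4j), up to floating-point rounding
```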
Complex voltage and current
To simplify calculations, sinusoidal voltage and current waves are commonly represented as complex-valued functions of time denoted as V(t) = |V| e^(j(ωt + φV)) and I(t) = |I| e^(j(ωt + φI)).
The impedance of a bipolar circuit is defined as the ratio of these quantities:
Z = V(t) / I(t) = |Z| e^(jθ)
Hence, writing Z in this polar form, we have
|V| = |Z| |I|  and  φV = φI + θ
The magnitude equation is the familiar Ohm's law applied to the voltage and current amplitudes, while the second equation defines the phase relationship.
Validity of complex representation
This representation using complex exponentials may be justified by noting that (by Euler's formula):
cos(ωt + φ) = (1/2) [e^(j(ωt + φ)) + e^(−j(ωt + φ))]
The real-valued sinusoidal function representing either voltage or current may be broken into two complex-valued functions. By the principle of superposition, we may analyse the behaviour of the sinusoid on the left-hand side by analysing the behaviour of the two complex terms on the right-hand side. Given the symmetry, we only need to perform the analysis for one right-hand term. The results are identical for the other. At the end of any calculation, we may return to real-valued sinusoids by further noting that
cos(ωt + φ) = Re{ e^(j(ωt + φ)) }
Ohm's law
The meaning of electrical impedance can be understood by substituting it into Ohm's law. Assuming a two-terminal circuit element with impedance Z is driven by a sinusoidal voltage or current as above, there holds
V = I Z = I |Z| e^(jθ)
The magnitude of the impedance |Z| acts just like resistance, giving the drop in voltage amplitude across an impedance Z for a given current I. The phase factor tells us that the current lags the voltage by a phase θ (i.e., in the time domain, the current signal is shifted later with respect to the voltage signal).
Just as impedance extends Ohm's law to cover AC circuits, other results from DC circuit analysis, such as voltage division, current division, Thévenin's theorem and Norton's theorem, can also be extended to AC circuits by replacing resistance with impedance.
Phasors
A phasor is represented by a constant complex number, usually expressed in exponential form, representing the complex amplitude (magnitude and phase) of a sinusoidal function of time. Phasors are used by electrical engineers to simplify computations involving sinusoids (such as in AC circuits), where they can often reduce a differential equation problem to an algebraic one.
The impedance of a circuit element can be defined as the ratio of the phasor voltage across the element to the phasor current through the element, as determined by the relative amplitudes and phases of the voltage and current. This is identical to the definition from Ohm's law given above, recognising that the factors of e^(jωt) cancel.
Device examples
Resistor
The impedance of an ideal resistor is purely real and is called resistive impedance:
Z_R = R
In this case, the voltage and current waveforms are proportional and in phase.
Inductor and capacitor
Ideal inductors and capacitors have a purely imaginary reactive impedance:
the impedance of inductors, Z_L = jωL, increases as frequency increases;
the impedance of capacitors, Z_C = 1/(jωC), decreases as frequency increases.
In both cases, for an applied sinusoidal voltage, the resulting current is also sinusoidal, but in quadrature, 90 degrees out of phase with the voltage. However, the phases have opposite signs: in an inductor, the current is lagging; in a capacitor the current is leading.
Note the following identities for the imaginary unit and its reciprocal:
j = e^(jπ/2)  and  1/j = −j = e^(−jπ/2)
Thus the inductor and capacitor impedance equations can be rewritten in polar form:
Z_L = ωL e^(jπ/2)  and  Z_C = (1/(ωC)) e^(−jπ/2)
The magnitude gives the change in voltage amplitude for a given current amplitude through the impedance, while the exponential factors give the phase relationship.
Deriving the device-specific impedances
What follows below is a derivation of impedance for each of the three basic circuit elements: the resistor, the capacitor, and the inductor. Although the idea can be extended to define the relationship between the voltage and current of any arbitrary signal, these derivations assume sinusoidal signals. In fact, this applies to any arbitrary periodic signals, because these can be approximated as a sum of sinusoids through Fourier analysis.
Resistor
For a resistor, there is the relation
v(t) = i(t) R
which is Ohm's law.
Considering the voltage signal to be
v(t) = V_p sin(ωt)
it follows that
v(t) / i(t) = R
This says that the ratio of AC voltage amplitude to alternating current (AC) amplitude across a resistor is R, and that the AC voltage leads the current across a resistor by 0 degrees.
This result is commonly expressed as
Z_resistor = R
Capacitor
For a capacitor, there is the relation:
i(t) = C dv(t)/dt
Considering the voltage signal to be
v(t) = V_p e^(jωt)
it follows that
i(t) = jωC V_p e^(jωt)
and thus, as previously,
Z_capacitor = v(t) / i(t) = 1/(jωC)
Conversely, if the current through the circuit is assumed to be sinusoidal, its complex representation being
i(t) = I_p e^(jωt)
then integrating the differential equation
dv(t)/dt = i(t) / C
leads to
v(t) = I_p e^(jωt) / (jωC) + Const
The Const term represents a fixed potential bias superimposed on the AC sinusoidal potential that plays no role in AC analysis. For this purpose, this term can be assumed to be 0, hence again the impedance
Z_capacitor = 1/(jωC)
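The cancellation of the common exponential factor can also be checked symbolically; a small sketch using SymPy:

```python
import sympy as sp

t = sp.symbols('t', real=True)
w, C = sp.symbols('omega C', positive=True)

# Complex-exponential voltage across the capacitor.
v = sp.exp(sp.I * w * t)

# Capacitor law: i = C * dv/dt.
i = C * sp.diff(v, t)

# Impedance is the ratio v/i; the exponential factor cancels.
Z = sp.simplify(v / i)
print(Z)   # -I/(C*omega), which equals 1/(j*omega*C)
```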
Inductor
For the inductor, we have the relation (from Faraday's law):
v(t) = L di(t)/dt
This time, considering the current signal to be:
i(t) = I_p e^(jωt)
it follows that:
v(t) = jωL I_p e^(jωt)
This result is commonly expressed in polar form as
Z_inductor = ωL e^(jπ/2)
or, using Euler's formula, as
Z_inductor = jωL
As in the case of capacitors, it is also possible to derive this formula directly from the complex representations of the voltages and currents, or by assuming a sinusoidal voltage between the two poles of the inductor. In the latter case, integrating the differential equation above leads to a constant term for the current, that represents a fixed DC bias flowing through the inductor. This is set to zero because AC analysis using frequency domain impedance considers one frequency at a time and DC represents a separate frequency of zero hertz in this context.
Generalised s-plane impedance
Impedance defined in terms of jω can strictly be applied only to circuits that are driven with a steady-state AC signal. The concept of impedance can be extended to a circuit energised with any arbitrary signal by using complex frequency instead of jω. Complex frequency is given the symbol s and is, in general, a complex number. Signals are expressed in terms of complex frequency by taking the Laplace transform of the time domain expression of the signal. The impedance of the basic circuit elements in this more general notation is as follows: for a resistor, Z_R(s) = R; for an inductor, Z_L(s) = sL; and for a capacitor, Z_C(s) = 1/(sC).
For a DC circuit, this simplifies to s = 0. For a steady-state sinusoidal AC signal, s = jω.
Formal derivation
The impedance of an electrical component is defined as the ratio between the Laplace transforms of the voltage over it and the current through it, i.e.
Z(s) = V(s) / I(s)
where s is the complex Laplace parameter. As an example, according to the I–V law of a capacitor, I(s) = sC V(s), from which it follows that Z(s) = 1/(sC).
In the phasor regime (steady-state AC, meaning all signals are represented mathematically as simple complex exponentials v(t) = V e^(jωt) and i(t) = I e^(jωt) oscillating at a common frequency ω), impedance can simply be calculated as the voltage-to-current ratio, in which the common time-dependent factor cancels out:
Z = v(t) / i(t) = V e^(jωt) / (I e^(jωt)) = V / I
Again, for a capacitor, one gets that I = jωC V, and hence Z = 1/(jωC). The phasor domain is sometimes dubbed the frequency domain, although it lacks one of the dimensions of the Laplace parameter. For steady-state AC, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular:
The magnitude of the complex impedance is the ratio of the voltage amplitude to the current amplitude;
The phase of the complex impedance is the phase shift by which the current lags the voltage.
These two relationships hold even after taking the real part of the complex exponentials (see phasors), which is the part of the signal one actually measures in real-life circuits.
Resistance vs reactance
Resistance R and reactance X together determine the magnitude and phase of the impedance through the following relations:
|Z| = √(R² + X²)
θ = arctan(X / R)
In many applications, the relative phase of the voltage and current is not critical so only the magnitude of the impedance is significant.
Resistance
Resistance is the real part of impedance; a device with a purely resistive impedance exhibits no phase shift between the voltage and current.
Reactance
Reactance is the imaginary part of the impedance; a component with a finite reactance induces a phase shift between the voltage across it and the current through it.
A purely reactive component is distinguished by the sinusoidal voltage across the component being in quadrature with the sinusoidal current through the component. This implies that the component alternately absorbs energy from the circuit and then returns energy to the circuit. A pure reactance does not dissipate any power.
Capacitive reactance
A capacitor has a purely reactive impedance that is inversely proportional to the signal frequency. A capacitor consists of two conductors separated by an insulator, also known as a dielectric. Its reactance is
X_C = −1/(ωC)
The minus sign indicates that the imaginary part of the impedance is negative.
At low frequencies, a capacitor approaches an open circuit so no current flows through it.
A DC voltage applied across a capacitor causes charge to accumulate on one side; the electric field due to the accumulated charge is the source of the opposition to the current. When the potential associated with the charge exactly balances the applied voltage, the current goes to zero.
Driven by an AC supply, a capacitor accumulates only a limited charge before the potential difference changes sign and the charge dissipates. The higher the frequency, the less charge accumulates and the smaller the opposition to the current.
Inductive reactance
Inductive reactance X_L is proportional to the signal frequency f and the inductance L:
X_L = ωL = 2πfL
An inductor consists of a coiled conductor. Faraday's law of electromagnetic induction gives the back emf (voltage opposing current) due to a rate-of-change of magnetic flux density through a current loop.
For an inductor consisting of a coil with N loops this gives:
emf = −N dΦ_B/dt
The back-emf is the source of the opposition to current flow. A constant direct current has a zero rate-of-change, and sees an inductor as a short-circuit (it is typically made from a material with a low resistivity). An alternating current has a time-averaged rate-of-change that is proportional to frequency; this causes the increase in inductive reactance with frequency.
Total reactance
The total reactance is given by
X = X_L + X_C
(note that the capacitive reactance X_C is negative)
so that the total impedance is
Z = R + jX
Combining impedances
The total impedance of many simple networks of components can be calculated using the rules for combining impedances in series and parallel. The rules are identical to those for combining resistances, except that the numbers in general are complex numbers. The general case, however, requires equivalent impedance transforms in addition to series and parallel.
Series combination
For components connected in series, the current through each circuit element is the same; the total impedance is the sum of the component impedances:
Z_eq = Z_1 + Z_2 + ... + Z_n
Or explicitly in real and imaginary terms:
Z_eq = R_eq + jX_eq = (R_1 + R_2 + ... + R_n) + j(X_1 + X_2 + ... + X_n)
Parallel combination
For components connected in parallel, the voltage across each circuit element is the same; the ratio of currents through any two elements is the inverse ratio of their impedances.
Hence the inverse total impedance is the sum of the inverses of the component impedances:
1/Z_eq = 1/Z_1 + 1/Z_2 + ... + 1/Z_n
or, when n = 2:
Z_eq = Z_1 Z_2 / (Z_1 + Z_2)
The equivalent impedance Z_eq can be calculated in terms of the equivalent series resistance R_eq and reactance X_eq: Z_eq = R_eq + jX_eq.
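A minimal sketch of both combination rules using Python complex numbers (the component impedances below are arbitrary example values):

```python
def series(*impedances):
    """Series combination: impedances add."""
    return sum(impedances)

def parallel(*impedances):
    """Parallel combination: reciprocals of impedances add."""
    return 1 / sum(1 / Z for Z in impedances)

# Example: a 100-ohm resistor in series with the parallel combination
# of an inductive (+50j ohm) and a capacitive (-200j ohm) impedance.
Z_total = series(100, parallel(50j, -200j))
print(Z_total)   # approximately (100+66.67j) ohms
```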
Measurement
The measurement of the impedance of devices and transmission lines is a practical problem in radio technology and other fields. Measurements of impedance may be carried out at one frequency, or the variation of device impedance over a range of frequencies may be of interest. The impedance may be measured or displayed directly in ohms, or other values related to impedance may be displayed; for example, in a radio antenna, the standing wave ratio or reflection coefficient may be more useful than the impedance alone. The measurement of impedance requires the measurement of the magnitude of voltage and current, and the phase difference between them. Impedance is often measured by "bridge" methods, similar to the direct-current Wheatstone bridge; a calibrated reference impedance is adjusted to balance off the effect of the impedance of the device under test. Impedance measurement in power electronic devices may require simultaneous measurement and provision of power to the operating device.
The impedance of a device can be calculated by complex division of the voltage and current. The impedance of the device can be calculated by applying a sinusoidal voltage to the device in series with a resistor, and measuring the voltage across the resistor and across the device. Performing this measurement by sweeping the frequencies of the applied signal provides the impedance phase and magnitude.
The use of an impulse response may be used in combination with the fast Fourier transform (FFT) to rapidly measure the electrical impedance of various electrical devices.
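A minimal sketch of the underlying idea on synthetic data (not any specific instrument's API): sample the voltage and current, transform both, and take the complex ratio at the bin of the drive frequency:

```python
import numpy as np

fs, f0 = 10_000.0, 50.0                   # sample rate and drive frequency (Hz)
t = np.arange(0, 1.0, 1.0 / fs)           # one second of samples

# Synthetic measurement: current lags voltage by 30 degrees, smaller amplitude.
v = 10.0 * np.cos(2 * np.pi * f0 * t)
i = 2.0 * np.cos(2 * np.pi * f0 * t - np.pi / 6)

V, I = np.fft.rfft(v), np.fft.rfft(i)
k = int(round(f0 * len(t) / fs))          # FFT bin of the drive frequency

Z = V[k] / I[k]                           # complex impedance at f0
print(abs(Z), np.degrees(np.angle(Z)))    # ~5.0 ohms, ~+30 degrees
```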
The LCR meter (Inductance (L), Capacitance (C), and Resistance (R)) is a device commonly used to measure the inductance, resistance and capacitance of a component; from these values, the impedance at any frequency can be calculated.
Example
Consider an LC tank circuit.
The complex impedance of the circuit is
Z(ω) = jωL / (1 − ω²LC)
It is immediately seen that the value of 1/|Z| is minimal (actually equal to 0 in this case) whenever
ω²LC = 1
Therefore, the fundamental resonance angular frequency is
ω = 1/√(LC)
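A quick numerical check of the resonance formula, for arbitrary example component values:

```python
import math

L = 1e-3    # henries (example value)
C = 1e-6    # farads  (example value)

omega0 = 1.0 / math.sqrt(L * C)              # resonance angular frequency, rad/s
f0 = omega0 / (2.0 * math.pi)                # the same resonance in hertz
print(f"{omega0:.0f} rad/s = {f0:.0f} Hz")   # ~31623 rad/s = ~5033 Hz
```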
Variable impedance
In general, neither impedance nor admittance can vary with time, since they are defined for complex exponentials in which −∞ < t < +∞. If the complex exponential voltage-to-current ratio changes over time or amplitude, the circuit element cannot be described using the frequency domain. However, many components and systems (e.g., varicaps that are used in radio tuners) may exhibit non-linear or time-varying voltage-to-current ratios that seem to be linear time-invariant (LTI) for small signals and over small observation windows, so they can be roughly described as if they had a time-varying impedance. This description is an approximation: over large signal swings or wide observation windows, the voltage-to-current relationship will not be LTI and cannot be described by impedance.
See also
Transmission line impedance
Notes
References
Kline, Ronald R., Steinmetz: Engineer and Socialist, Plunkett Lake Press, 2019 (ebook reprint of Johns Hopkins University Press, 1992 ).
External links
ECE 209: Review of Circuits as LTI Systems – Brief explanation of Laplace-domain circuit analysis; includes a definition of impedance.
Electrical resistance and conductance
Physical quantities
Antennas (radio) | Electrical impedance | Physics,Mathematics | 3,945 |
39,183,674 | https://en.wikipedia.org/wiki/Bioregenerative%20life%20support%20system | Bioregenerative life support systems (BLSS) are artificial ecosystems consisting of many complex symbiotic relationships among higher plants, animals, and microorganisms. As the most advanced life support technology, BLSS can provide a habitation environment similar to Earth's biosphere for space missions of extended duration, in deep space, and with multiple crews. These systems are artificial ecosystems incorporating plants and microorganisms that provide oxygen production, fixation of carbon dioxide, water purification, waste recycling, and food production. In such systems, photosynthetic organisms such as plants and algae supply biomass for food and oxygen, while microorganisms degrade and recycle the waste compounds generated by human activity as well as inedible plant debris.
See also
References
Space research | Bioregenerative life support system | Astronomy | 175 |
60,602,011 | https://en.wikipedia.org/wiki/Ana%20Ach%C3%BAcarro | Ana Achúcarro Jiménez (born 1962) is a Spanish researcher, academic, and professor of particle astrophysics and quantum field theory at the University of Leiden in Leiden, Netherlands. Her research considers the early universe, supergravity, black holes and solitons.
Early life and education
Achúcarro graduated from the University of the Basque Country in Spain with a B.A. in physics in 1984. She moved to the United Kingdom for her doctoral studies, completing Part III of the Mathematical Tripos in 1985. She was awarded the St Catharine's College, Cambridge graduate prize in mathematics. She remained in Cambridge, England for her PhD, working with Paul Townsend and Stephen Hawking. She was awarded the J. T. Knight Prize and a British Council Fleming Scholarship. She completed her PhD in 1988, with a thesis titled Classical Properties of Supersymmetric Extended Objects. Achúcarro remains a member of the Isaac Newton Institute for Mathematical Sciences, based in Cambridge.
Research and career
Achúcarro joined Imperial College London as a postdoctoral research associate in 1988. A year later she was appointed assistant professor at Tufts University in the United States where she worked on cosmic strings. Her research there showed that the presence of global symmetries in gauge theory can result in stringlike defects. Achúcarro's research also concerns string theory and the early universe. She moved to the Netherlands in 2002 to join Leiden University. She was awarded a National Science Foundation ADVANCE lectureship at Case Western Reserve University in 2004.
Achúcarro leads the theoretical cosmology group at the University of Leiden, working on astroparticle physics and quantum field theory. She was a pioneer in the application of string theory to cosmology. In 2011 she participated in Science of the Cosmos, an annual lecture series hosted by the BBVA Foundation. The lectures considered the origins of the universe. She was part of the European Cooperation in Science and Technology (COST) public directive on the string theory universe. She also serves on the European Science Foundation's steering committee for Cosmology in the Laboratory.
In 2015 she was awarded a €2.3 million grant from the Netherlands Organisation for Scientific Research (NWO). She was appointed to the Galileo Galilei Institute for Theoretical Physics in 2016. Achúcarro leads the Leiden de Sitter cosmology programme, which trains young scientists in the interdisciplinary area of modern cosmology. She serves on the advisory council of the Spanish National Research Council.
Awards and honours
She was elected a Member of the Academia Europaea (MAE) in 2011.
See also
List of Spanish inventors and discoverers
References
External links
Homepage, Leiden University.
Spanish women scientists
Academic staff of Leiden University
University of the Basque Country alumni
Tufts University faculty
Members of Academia Europaea
Academics of Imperial College London
Astroparticle physics
Spanish cosmologists
1962 births
Living people
20th-century Spanish physicists
Expatriates in England | Ana Achúcarro | Physics | 603 |
679,929 | https://en.wikipedia.org/wiki/Stone%20%28unit%29 | The stone or stone weight (abbreviation: st.) is an English and British imperial unit of mass equal to 14 avoirdupois pounds (6.35 kg). The stone continues in customary use in the United Kingdom and Ireland for body weight.
England and other Germanic-speaking countries of Northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (2.3 to 18.1 kg) depending on the location and objects weighed. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century onward.
Antiquity
The name "stone" derives from the historical use of stones for weights, a practice that dates back into antiquity. The Biblical law against the carrying of "diverse weights, a large and a small" is more literally translated as "you shall not carry a stone and a stone (), a large and a small". There was no standardised "stone" in the ancient Jewish world, but in Roman times stone weights were crafted to multiples of the Roman pound. Such weights varied in quality: the Yale Medical Library holds 10- and 50-pound examples of polished serpentine, while a 40-pound example at the Eschborn Museum is made of sandstone.
Great Britain and Ireland
The 1772 edition of the Encyclopædia Britannica defined the stone: "STONE also denotes a certain quantity or weight of some commodities. A stone of beef, in London, is the quantity of eight pounds; in Hertfordshire, twelve pounds; in Scotland sixteen pounds."
The Weights and Measures Act 1824 (5 Geo. 4. c. 74), which applied to all of the United Kingdom of Great Britain and Ireland, consolidated the weights and measures legislation of several centuries into a single document. It revoked the provision that bales of wool should be made up of 20 stones, each of 14 pounds, but made no provision for the continued use of the stone. Ten years later, a stone still varied from 5 pounds (glass) to 8 pounds (meat and fish) to 14 pounds (wool and "horseman's weight").
The Weights and Measures Act 1835 permitted using a stone of 14 pounds for trade, but other values remained in use. James Britten, in 1880 for example, catalogued a number of different values of the stone in various British towns and cities, ranging from 4 lb to 26 lb. The value of the stone and associated units of measure that were legalised for purposes of trade were clarified by the Weights and Measures Act 1835.
England
The English stone under law varied by commodity and in practice varied according to local standards. The Assize of Weights and Measures, a statute of uncertain date, describes stones of 5 merchants' pounds used for glass; stones of 8 lb. used for beeswax, sugar, pepper, alum, cumin, almonds, cinnamon, and nutmegs; stones of 12 lb. used for lead; and the stone of 12½ lb. used for wool. In 1350 Edward III issued a new statute defining the stone weight, to be used for wool and "other Merchandizes", at 14 pounds, reaffirmed by Henry VII in 1495.
In England, merchants traditionally sold potatoes in half-stone increments of 7 pounds. Live animals were weighed in stones of 14 lb; but, once slaughtered, their carcasses were weighed in stones of 8 lb. Thus, if the animal's carcass accounted for four-sevenths of the animal's weight, the butcher could return the dressed carcasses to the animal's owner stone for stone, keeping the offal, blood and hide as his due for slaughtering and dressing the animal. Smithfield market continued to use the 8 lb stone for meat until shortly before the Second World War. The Oxford English Dictionary also lists further commodity-specific stones.
Scotland
The Scottish stone was equal to 16 Scottish pounds (17 lb 8 oz avoirdupois or 7.936 kg). In 1661, the Royal Commission of Scotland recommended that the Troy stone be used as a standard of weight and that it be kept in the custody of the burgh of Lanark. The tron (or local) stone of Edinburgh, also standardised in 1661, was 16 tron pounds (22 lb 1 oz avoirdupois or 9.996 kg). In 1789 an encyclopedic enumeration of measurements was printed for the use of "his Majesty's Sheriffs and Stewards Depute, and Justices of Peace, ... and to the Magistrates of the Royal Boroughs of Scotland" and provided a county-by-county and commodity-by-commodity breakdown of values and conversions for the stone and other measures. The Scots stone ceased to be used for trade when the Weights and Measures Act 1824 (5 Geo. 4. c. 74) established a uniform system of measure across the whole of the United Kingdom, which at that time included all of Ireland.
Ireland
Before the early 19th century, as in England, the stone varied both with locality and with commodity. For example, the Belfast stone for measuring flax equaled 16.75 avoirdupois pounds. The most usual value was 14 pounds. Among the oddities related to the use of the stone was the practice in County Clare of a stone of potatoes being 16 lb in the summer and 18 lb in the winter.
Modern use
In 1965, the Federation of British Industry informed the British government that its members favoured adopting the metric system. The Board of Trade, on behalf of the government, agreed to support a ten-year metrication programme. There would be minimal legislation, as the programme was to be voluntary and costs were to be borne where they fell. Under the guidance of the Metrication Board, the agricultural product markets achieved a voluntary switchover by 1976. The stone was not included in the Directive 80/181/EEC as a unit of measure that could be used within the EEC for "economic, public health, public safety or administrative purposes", though its use as a "supplementary unit" was permitted. The scope of the directive was extended to include all aspects of the EU internal market from 1 January 2010.
With the adoption of metric units by the agricultural sector, the stone was, in practice, no longer used for trade; and, in the Weights and Measures Act 1985, passed in compliance with EU directive 80/181/EEC, the stone was removed from the list of units permitted for trade in the United Kingdom. In 1983, in response to the same directive, similar legislation was passed in Ireland. The act repealed earlier acts that defined the stone as a unit of measure for trade. (British law had previously been silent regarding other uses of the stone.)
The stone remains widely used in the United Kingdom and Ireland for human body weight: in those countries people may commonly be said to weigh, e.g., "11 stone 4" (11 stones and 4 pounds), rather than "72 kilograms" as in most of the other countries, or "158 pounds", the conventional way of expressing the same weight in the US and in Canada. The invariant plural form of stone in this context is stone (as in, "11 stone" or "12 stone 6 pounds"); in other contexts, the correct plural is stones (as in, "Please enter your weight in stones and pounds"). In Australia and New Zealand, metrication has entirely displaced stones and pounds since the 1970s.
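A minimal sketch of the body-weight conversion described above (the function name is illustrative); "11 stone 4" works out to the figures quoted in the text:

```python
LB_PER_STONE = 14
KG_PER_LB = 0.45359237                 # exact by definition of the pound

def stones_pounds_to_kg(stones, pounds=0):
    """Convert a weight given in stones and pounds to kilograms."""
    total_lb = stones * LB_PER_STONE + pounds
    return total_lb * KG_PER_LB

print(stones_pounds_to_kg(11, 4))      # 158 lb -> ~71.7 kg, i.e. "about 72 kg"
```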
In many sports in both the UK and Ireland, such as professional boxing, wrestling, and horse racing, the stone is used to express body weights.
Elsewhere
The use of the stone in the former British Empire was varied. In Canada for example, it never had a legal status.
Shortly after the United States declared independence, Thomas Jefferson, then Secretary of State, presented a report on weights and measures to the U.S. House of Representatives. Even though all the weights and measures in use in the United States at the time were derived from English weights and measures, his report made no mention of the stone being used. He did, however, propose a decimal system of weights in which his "[decimal] pound" and "[decimal] stone" would have differed from their customary counterparts.
Before the advent of metrication, units called "stone" (such as the Dutch steen and the German Stein) were used in many northwestern European countries. Its value, usually between 3 and 10 kg, varied from city to city and sometimes from commodity to commodity. The number of local "pounds" in a stone also varied from city to city. During the early 19th century, states such as the Netherlands (including Belgium) and the South Western German states, which had redefined their system of measures using the kilogram as a reference for weight (mass), also redefined their stone to align it with the kilogram.
Metric stone
In the Netherlands, where the metric system was adopted in 1817, the pond (pound) was set equal to half a kilogram, and the steen (stone), which had previously been 8 Amsterdam pond (3.953 kg), was redefined as being 3 kg. In modern colloquial Dutch, a pond is used as an alternative for 500 grams or half a kilogram, while the ons is used for a weight of 100 grams, being one-fifth of a pond.
See also
English, imperial, and German units of measurement
Sack, a unit of wool equal to 28 stone
Notes
References
External links
UK: The Units of Measurement Regulations 1995
"Stone" entry on the Encyclopædia Britannica
Units of mass
Stone (weight)
Imperial units | Stone (unit) | Physics,Mathematics | 1,986 |
19,320,593 | https://en.wikipedia.org/wiki/Imazapyr | Imazapyr is a non-selective herbicide used for the control of a broad range of weeds including terrestrial annual and perennial grasses and broadleaved herbs, woody species, and riparian and emergent aquatic species. It is used to control annual and perennial grass and broadleaved weeds, brush, vines and many deciduous trees. Imazapyr is absorbed by the leaves and roots, and moves rapidly through the plant. It accumulates in the meristem region (active growth region) of the plant. In plants, imazapyr disrupts protein synthesis and interferes with cell growth and DNA synthesis.
Imazapyr is an ingredient of the commercial product Ortho GroundClear. A related herbicide, imazapic is an ingredient in Roundup Extended Control. Both chemicals are non-selective, long-lasting, and effective in weed control. They are, however, water-soluble, and depending on soil type and moisture they can move into parts of the landscape where they were not sprayed. Some desirable landscape plants are especially sensitive to them and can be damaged.
References
Carboxylic acids
Herbicides
Pyridines
Imidazolines
Isopropyl compounds | Imazapyr | Chemistry,Biology | 246 |
7,219,407 | https://en.wikipedia.org/wiki/Open-Architecture-System | Open-Architecture-System (OAS) is the main user interface and synthesizer software of the Wersi keyboard line. OAS improves on prior organ interfaces by allowing the user to add sounds, rhythms, third-party programs and future software enhancements without changing hardware. Compared to previous organs, which relied on buttons, OAS uses a touch screen to make programming easier. OAS can host up to 4 separate VST software instruments, allowing for an expandable system similar to the Korg OASYS. OAS can support dynamic touch and aftertouch, but cannot support horizontal touch like the Yamaha Stagea Electone.
OAS Version 7
OAS Version 7 expands on previous versions by adding a new effects section. Separate effects are available for the accompaniment section, sequencer and drums. Added effects include delay, reverb, phasing, wah wah, distortion, compressor, and flanger. In addition, version 7 includes 300 new sounds, 700 sounds in total.
Version 7 adds the Wersi Open Art Arranger. This software enables the Wersi to use all Yamaha styles, including those from the Tyros 2.
References
External links
Wersi USA home page
Wersi international home
Global OAS Users Group
Electric and electronic keyboard instruments
Software synthesizers | Open-Architecture-System | Technology | 258 |
33,615,399 | https://en.wikipedia.org/wiki/Panathlon%20International | Panathlon International (PI) is a global umbrella organization of "Panathlon clubs", which are nonprofit non-governmental organizations promoting sports ethics and fair play and opposing discrimination and politicisation in sport. Panathlon International is recognized by the International Olympic Committee (IOC). It is a member of the International Fair Play Committee (CIFP), and an associate member of the Global Association of International Sports Federations. The name "Panathlon" is from Greek pan "all" + athlon "contest, sport".
PI has more than 300 clubs in 30 countries in 4 continents, with a head office in Rapallo, Italy. As well as promotional work, PI and its members support research on topics concerning sport and its relations with society.
History
The first club was created on 12 June 1951 in Venice by Mario Viali of the Italian National Olympic Committee and some friends, mostly members of the local Rotary Club. Ludovico Foscari coined the name "Panathlon" and its Latin motto. In 1953 seven clubs formed the Italian Panathlon Association. In 1960 Panathlon International was formed by members from Italy, Switzerland, Spain, and France.
There is no Panathlon International club in the United Kingdom. PI has met with the Panathlon Foundation, a UK charity promoting youth disabled sports, to discuss co-operation to enable a UK PI club to use the name "Panathlon".
Goals
As an independent organization, Panathlon aims at:
Promoting culture and ethics in sport
Working together with organizations that have the same goals
Presenting suggestions to handle acute and chronic problems in sport
Stimulating reflection and discussion on "ethics and integrity" (both values-based and rules-based approaches) in modern sports based on scientific research
Actions
Panathlon's actions on integrity in sport have been fueled by the fact that sport is often beset by poor practice, corruption, and harmful behaviors. Sport has to remain credible and must be continuously proactive if it wants to sustain its positive values. Panathlon is therefore considering what should be done to ensure that the positive potential of sport can prevail in the complex commercialized and globalized sporting landscape of the 21st century. Its position is that it would be naïve to think that sport automatically elicits and promotes positive effects and that remaining silent on obvious aberrations would condone complicity.
A modern integrity management framework aims at preventing serious integrity violations on the one hand (the rules-based approach), and at promoting integrity by stimulating understanding, commitment and capacity for ethical decision-making on the other (the values-based approach). To direct its actions, Panathlon International adopts the values-based approach, which is about supporting and stimulating (code of ethics), and limits itself to encouraging sports federations and sports authorities to address controlling and sanctioning (code of ethics). The Panathlon Declaration of Ethics in Youth Sports adopted by UNICEF, the IOC, SportAccord, international federations (FIFA, UCI, IAAF, FIBA, FIG and others), organizations (ENGSO, EUPEA and others) as well as National Olympic Committees (Belgium, Netherlands, Uruguay and others) exemplifies this values-based approach.
Presidents
Flambeau d'Or
The Flambeau d'Or (Golden Torch) is an award that is presented every four years. It aims to reward distinguished international sports personalities. It is awarded in three categories for outstanding achievements in sport promotion, sport culture and organisation.
References
Citations
Sources
External links
Official Site
Discrimination
International organisations based in Italy
International sports organizations
Politics and sports
Sports organizations established in 1960
Sportsmanship | Panathlon International | Biology | 736 |
52,942,558 | https://en.wikipedia.org/wiki/Dobesilic%20acid | Dobesilic acid (2,5-dihydroxybenzenesulfonic acid) is a chemical compound with the molecular formula C6H6O5S. It is classified as both a phenol and a sulfonic acid.
Uses
Salts of dobesilic acid are used as pharmaceutical drugs. The calcium salt, calcium dobesilate, is used as a vasoprotective drug. The diethylamine salt, etamsylate, is an antihemorrhagic agent.
References
Phenols
Sulfonic acids | Dobesilic acid | Chemistry | 95 |
3,562,876 | https://en.wikipedia.org/wiki/Diffraction%20spike | Diffraction spikes are lines radiating from bright light sources, causing what is known as the starburst effect or sunstars in photographs and in vision. They are artifacts caused by light diffracting around the support vanes of the secondary mirror in reflecting telescopes, or edges of non-circular camera apertures, and around eyelashes and eyelids in the eye.
While similar in appearance, this is a different effect to "vertical smear" or "blooming" that appears when bright light sources are captured by a charge-coupled device (CCD) image sensor.
Causes
Support vanes
In the vast majority of reflecting telescope designs, the secondary mirror has to be positioned at the central axis of the telescope and so has to be held by struts within the telescope tube. No matter how fine these support rods are, they diffract the incoming light from a subject star, and this appears as diffraction spikes which are the Fourier transform of the support struts. The spikes represent a loss of light that could have been used to image the star.
Although diffraction spikes can obscure parts of a photograph and are undesired in professional contexts, some amateur astronomers like the visual effect they give to bright stars – the "Star of Bethlehem" appearance – and even modify their refractors to exhibit the same effect, or to assist with focusing when using a CCD.
A small number of reflecting telescopes designs avoid diffraction spikes by placing the secondary mirror off-axis. Early off-axis designs such as the Herschelian and the Schiefspiegler telescopes have serious limitations such as astigmatism and long focal ratios, which make them useless for research. The brachymedial design by Ludwig Schupmann, which uses a combination of mirrors and lenses, is able to correct chromatic aberration perfectly over a small area and designs based on the Schupmann brachymedial are currently used for research of double stars.
There are also a small number of off-axis unobstructed all-reflecting anastigmats which give optically perfect images.
Refracting telescopes and their photographic images do not have the same problem as their lenses are not supported with spider vanes.
Non-circular aperture
Iris diaphragms with moving blades are used in most modern camera lenses to restrict the light received by the film or sensor. While manufacturers attempt to make the aperture circular for a pleasing bokeh, when stopped down to high f-numbers (small apertures), its shape tends towards a polygon with the same number of sides as blades. Diffraction spreads out light waves passing through the aperture perpendicular to the roughly-straight edge, each edge yielding two spikes 180° apart. As the blades are uniformly distributed around the circle, on a diaphragm with an even number of blades, the diffraction spikes from blades on opposite sides overlap. Consequently, a diaphragm with n blades yields n spikes if n is even, and 2n spikes if n is odd.
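The blade-count rule translates directly into code; a minimal sketch:

```python
def diffraction_spikes(n_blades):
    """Number of visible spikes from an n-bladed iris diaphragm.

    Each straight blade edge yields two spikes 180 degrees apart; with an
    even blade count, opposite edges are parallel and their spikes overlap.
    """
    return n_blades if n_blades % 2 == 0 else 2 * n_blades

for n in (5, 6, 7, 8):
    print(n, "blades ->", diffraction_spikes(n), "spikes")
# 5 -> 10, 6 -> 6, 7 -> 14, 8 -> 8
```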
Segmented mirrors
Images from telescopes with segmented mirrors also exhibit diffraction spikes due to diffraction from the mirrors' edges. As before, two spikes are perpendicular to each edge orientation, resulting in six spikes (plus two fainter ones due to the spider supporting the secondary mirror) in photographs taken by the James Webb Space Telescope.
Dirty optics
An improperly cleaned lens or cover glass, or one with a fingerprint, may have parallel lines which diffract light similarly to support vanes. They can be distinguished from spikes due to a non-circular aperture as they form a prominent smear in a single direction, and from CCD bloom by their oblique angle.
In vision
In normal vision, diffraction through eyelashes – and through the edges of the eyelids if one is squinting – produces many diffraction spikes. If it is windy, the motion of the eyelashes causes spikes that move around and scintillate. After a blink, the eyelashes may come back in a different position and cause the diffraction spikes to jump around. This is classified as an entoptic phenomenon.
Diffraction spikes in normal human vision can also be caused by some fibers in the eye's lens, sometimes called suture lines.
Other uses
Special effects
A cross screen filter, also known as a star filter, creates a star pattern using a very fine diffraction grating embedded in the filter, or sometimes by the use of prisms in the filter. The number of stars varies by the construction of the filter, as does the number of points each star has.
A similar effect is achieved by photographing bright lights through a window screen with vertical and horizontal wires. The angles of the bars of the cross depend on the orientation of the screen relative to the camera.
Bahtinov mask
In amateur astrophotography, a Bahtinov mask can be used to focus small astronomical telescopes accurately. Light from a bright point such as an isolated bright star reaching different quadrants of the primary mirror or lens is first passed through grilles at three different orientations. Half of the mask generates a narrow "X" shape from four diffraction spikes (blue and green in the illustration); the other half generates a straight line from two spikes (red). Changing the focus causes the shapes to move with respect to each other. When the line passes exactly through the middle of the "X", the telescope is in focus and the mask can be removed.
References
External links
Diffraction spikes explained by Astronomy Picture of the Day.
Astrophotography
Science of photography
Diffraction | Diffraction spike | Physics,Chemistry,Materials_science | 1,143 |
64,027,350 | https://en.wikipedia.org/wiki/Biodiversity%20and%20conservation%20in%20Manitoba | Manitoba is home to a variety of ecosystems across the province that need to be considered in development and conservation plans. There are terrestrial ecosystems, which include prairies, boreal forest, and tundra. Manitoba is also home to a number of aquatic ecosystems, including wetlands, rivers, and lakes. There is also a wide variety of wildlife and plants that thrive in this particular region. However, human impacts have become more apparent, and the need to protect and conserve is becoming clear.
The Province of Manitoba created a protection act in March 1990 called The Endangered Species Ecosystem Act. The Act protects animals, plants, and ecosystems by supporting and monitoring development at a provincial level. This includes monitoring the land use, protection areas, planning, environmental assessment, and natural resources harvesting policies by incorporating the biodiversity values in their decision making.
In June 2003, the federal government created the Species at Risk Act (SARA). The Act aims to prevent wildlife species in Canada from disappearing, to provide for the recovery of wildlife species that are endangered or threatened as a result of human activity, and to prevent future loss of species (Canada.ca, 2020). A key component of the success of SARA is the participation of other levels of government in enforcing the Act and protecting wildlife. Consultation with, and cooperation of, Aboriginal communities and all stakeholders that may be affected by the protection of wildlife is another component of the Act's success. There are a number of Manitoban species on the SARA list that are now protected at a federal level, which will help reinforce their safety.
More information about what Manitoba is doing to protect the environment, beyond biodiversity, can be found on the provincial government's website. The website highlights a number of subcategories of concern regarding the environment, including:
Invasive species
Air quality management
Climate change
Pollution prevention
Pesticide use
Management of protected areas
Protection of at-risk species
Biodiversity conservation
Management of invasive species
Ecological reserve programs
Storage and handling of hazardous products like petroleum
References
Benitez-Lopez, A., Alkemade, R., & Verweij, P. (2010, June). The impacts of roads and other infrastructure on mammal and bird populations: A meta-analysis. Biological Conservation, 143(6), 1307–1316.
Canada.ca. (2020, May 14). About the Species at Risk Act. Retrieved from https://www.canada.ca/en/environment-climate-change/services/environmental-enforcement/acts-regulations/about-species-at-risk-act.html
Cardinale, B. D. (2012). Biodiversity loss and its impact on humanity. Nature, 486, 59–67.
Danish, Recep, U., & Ud-Din, K. (2020, March). Determinants of the ecological footprint: Role of renewable energy, natural resources, and urbanization. Sustainable Cities and Society, 54.
Kimmins, J. (2004). Forest ecology: A foundation for sustainable forest management and environmental ethics in forestry (3rd ed.). Pearson Education Inc.
Manitoba Government. (2020, May 14). Environment and biodiversity. Retrieved from https://www.gov.mb.ca/sd/environment_and_biodiversity/species_ecosystems/index.html
Miller, T., Hackett, D., & Wolfe, C. (2017). Living in the environment (4th Canadian ed.). Toronto, Canada: Nelson Education.
UNESCO. (2011). UNESCO Biodiversity Initiative. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000213313?posInSet=3&queryId=0001ebeb-008e-44a8-8da2-7bff026ed89a
WWF. (2018). Living Planet Report.
Biodiversity | Biodiversity and conservation in Manitoba | Biology | 806 |
553,950 | https://en.wikipedia.org/wiki/X%20band | The X band is the designation for a band of frequencies in the microwave radio region of the electromagnetic spectrum. In some cases, such as in communication engineering, the frequency range of the X band is rather indefinitely set at approximately 7.0–11.2 GHz. In radar engineering, the frequency range is specified by the Institute of Electrical and Electronics Engineers (IEEE) as 8.0–12.0 GHz. The X band is used for radar, satellite communication, and wireless computer networks.
Radar
X band is used in radar applications, including continuous-wave, pulsed, single-polarization, dual-polarization, synthetic aperture radar, and phased arrays. X-band radar frequency sub-bands are used in civil, military, and government institutions for weather monitoring, air traffic control, maritime vessel traffic control, defense tracking, and vehicle speed detection for law enforcement.
X band is often used in modern radars. The shorter wavelengths of the X band provide higher-resolution imagery from high-resolution imaging radars for target identification and discrimination. X-band weather radars offer significant potential for short-range observations, but the loss of signal strength (attenuation) under rainy conditions limits their use at longer range.
Terrestrial communications and networking
The X band 10.15 to 10.7 GHz segment is used for terrestrial broadband in many countries, such as Brazil, Mexico, Saudi Arabia, Denmark, Ukraine, Spain and Ireland. Alvarion, CBNL, CableFree and Ogier make systems for this, though each has a proprietary airlink. DOCSIS (Data Over Cable Service Interface Specification), the standard used for providing cable internet to customers, uses some X band frequencies. The home / business customer-premises equipment (CPE) has a single coaxial cable with a power adapter connecting to an ordinary cable modem. The local oscillator is usually 9750 MHz, the same as for Ku band satellite TV LNB. Two-way applications such as broadband typically use a 350 MHz TX offset.
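The downconversion arithmetic implied here is simple: with a low-side local oscillator, the intermediate frequency is the received frequency minus the LO frequency. A brief sketch (illustrative values only; the 10,450 MHz carrier is an assumed example within the band quoted above, and whether the TX offset is added or subtracted depends on the system plan):

def intermediate_frequency(rf_mhz: float, lo_mhz: float = 9750.0) -> float:
    # Low-side downconversion: IF = RF - LO
    return rf_mhz - lo_mhz

print(intermediate_frequency(10450.0))  # 700.0 MHz seen by an ordinary cable modem
print(10450.0 + 350.0)                  # 10800.0 MHz: paired carrier with the 350 MHz TX offset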
Space communications
Space communications for science and research
Small portions of the X band are assigned by the International Telecommunication Union (ITU) exclusively for deep space telecommunications. The primary user of this allocation is the American NASA Deep Space Network (DSN). DSN facilities are in Goldstone, California (in the Mojave Desert), near Canberra, Australia, and near Madrid, Spain, and provide continual communications from the Earth to almost any point in the Solar System independent of Earth rotation. (DSN stations are also capable of using the older and lower S band deep-space radio communications allocations, and some higher frequencies on a more-or-less experimental basis, such as in the K band.)
Notable deep space probe programs that have employed X band communications include the Viking Mars landers; the Voyager missions to Jupiter, Saturn, and beyond; the Galileo Jupiter orbiter; the New Horizons mission to Pluto and the Kuiper belt, the Curiosity rover and the Cassini-Huygens Saturn orbiter.
An important use of X band communications came with the two Viking program landers. When the planet Mars was passing near or behind the Sun, as seen from the Earth, a Viking lander would transmit two simultaneous continuous-wave carriers, one in the S band and one in the X band, in the direction of the Earth, where they were picked up by DSN ground stations. By making simultaneous measurements at the two different frequencies, the resulting data enabled theoretical physicists to verify the mathematical predictions of Albert Einstein's general theory of relativity. These results are some of the best confirmations of the general theory of relativity.
The new European double Mars Mission ExoMars will also use X band communication, on the instrument LaRa, to study the internal structure of Mars, and to make precise measurements of the rotation and orientation of Mars by monitoring two-way Doppler frequency shifts between the surface platform and Earth. It will also detect variations in angular momentum due to the redistribution of masses, such as the migration of ice from the polar caps to the atmosphere.
X band NATO frequency requirements
The International Telecommunication Union (ITU), the international body which allocates radio frequencies for civilian use, is not authorised to allocate frequency bands for military radio communication. This is also the case pertaining to X band military communications satellites. However, in order to meet military radio spectrum requirements, e.g. for fixed-satellite service and mobile-satellite service, the NATO nations negotiated the NATO Joint Civil/Military Frequency Agreement (NJFA).
Amateur radio
The Radio Regulations of the International Telecommunication Union allow amateur radio operations in the frequency range 10.000 to 10.500 GHz, and amateur satellite operations are allowed in the range 10.450 to 10.500 GHz. This is known as the 3-centimeter band by amateurs and the X-band by AMSAT.
Other uses
Motion detectors often use 10.525 GHz. 10.4 GHz is proposed for traffic light crossing detectors. Comreg in Ireland has allocated 10.450 GHz for traffic sensors as SRD.
Many electron paramagnetic resonance (EPR) spectrometers operate near 9.8 GHz.
Particle accelerators may be powered by X-band RF sources. The frequencies are then standardized at 11.9942 GHz (Europe) or 11.424 GHz (US), which is the second harmonic of C-band and fourth harmonic of S-band. The European X-band frequency is used for the Compact Linear Collider (CLIC).
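The harmonic relationships quoted above can be verified with simple arithmetic (a check added here for illustration; 2.856 GHz is the customary US accelerator S-band frequency implied by the text's figures):

s_band_us = 2.856e9        # US accelerator S-band frequency (Hz)
c_band_us = 2 * s_band_us  # 5.712 GHz
x_band_us = 4 * s_band_us  # 11.424 GHz, the US X-band value quoted above
assert x_band_us == 2 * c_band_us == 11.424e9

x_band_eu = 11.9942e9      # European X-band standard, used for CLIC
print(x_band_eu / 4)       # implied European S-band: 2.99855 GHz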
See also
Cassegrain reflector
Directional antenna
XTAR
Sea-based X band Radar
New Horizons telecommunications
Voyager program#Spacecraft design
Earth observation satellites transmission frequencies
TerraSAR-X: a German Earth observation satellite
References
External links
United States Frequency Allocations
10GHz wideband transceiver
Microwave bands
Radar
Radio frequency propagation | X band | Physics | 1,190 |
1,043,263 | https://en.wikipedia.org/wiki/Excitotoxicity | In excitotoxicity, nerve cells suffer damage or death when the levels of otherwise necessary and safe neurotransmitters such as glutamate become pathologically high, resulting in excessive stimulation of receptors. For example, when glutamate receptors such as the NMDA receptor or AMPA receptor encounter excessive levels of the excitatory neurotransmitter glutamate, significant neuronal damage might ensue. Excess glutamate allows high levels of calcium ions (Ca2+) to enter the cell. Ca2+ influx into cells activates a number of enzymes, including phospholipases, endonucleases, and proteases such as calpain. These enzymes go on to damage cell structures such as components of the cytoskeleton, membrane, and DNA. Mechanisms in complex biological systems are rarely simple or direct, however: NMDA, in subtoxic amounts, can block glutamate toxicity and thereby induce neuronal survival.
Excitotoxicity may be involved in cancers, spinal cord injury, stroke, traumatic brain injury, hearing loss (through noise overexposure or ototoxicity), and in neurodegenerative diseases of the central nervous system such as multiple sclerosis, Alzheimer's disease, amyotrophic lateral sclerosis (ALS), Parkinson's disease, alcoholism, alcohol withdrawal or hyperammonemia and especially over-rapid benzodiazepine withdrawal, and also Huntington's disease. Another common condition that causes excessive glutamate concentrations around neurons is hypoglycemia, since blood sugar is the primary means by which glutamate is removed from inter-synaptic spaces at the NMDA and AMPA receptor sites. Persons in excitotoxic shock must never fall into hypoglycemia. Patients should be given a 5% glucose (dextrose) IV drip during excitotoxic shock to avoid a dangerous build-up of glutamate around NMDA and AMPA neurons; when a 5% glucose (dextrose) IV drip is not available, high levels of fructose are given orally. Treatment is administered during the acute stages of excitotoxic shock along with glutamate antagonists. Dehydration should be avoided, as this also contributes to the concentration of glutamate in the inter-synaptic cleft, and "status epilepticus can also be triggered by a build up of glutamate around inter-synaptic neurons".
History
The harmful effects of glutamate on the central nervous system were first observed in 1954 by T. Hayashi, a Japanese scientist who stated that direct application of glutamate caused seizure activity, though this report went unnoticed for several years. D. R. Lucas and J. P. Newhouse, after noting that "single doses of [20–30 grams of sodium glutamate in humans] have ... been administered intravenously without permanent ill-effects", observed in 1957 that a subcutaneous dose described as "a little less than lethal", destroyed the neurons in the inner layers of the retina in newborn mice. In 1969, John Olney discovered that the phenomenon was not restricted to the retina, but occurred throughout the brain, and coined the term excitotoxicity. He also assessed that cell death was restricted to postsynaptic neurons, that glutamate agonists were as neurotoxic as their efficiency to activate glutamate receptors, and that glutamate antagonists could stop the neurotoxicity.
In 2002, Hilmar Bading and co-workers found that excitotoxicity is caused by the activation of NMDA receptors located outside synaptic contacts. The molecular basis for toxic extrasynaptic NMDA receptor signaling was uncovered in 2020, when Hilmar Bading and co-workers described a death signaling complex that consists of the extrasynaptic NMDA receptor and TRPM4. Disruption of this complex using NMDAR/TRPM4 interface inhibitors (also known as "interface inhibitors") renders extrasynaptic NMDA receptors non-toxic.
Pathophysiology
Excitotoxicity can occur from substances produced within the body (endogenous excitotoxins). Glutamate is a prime example of an excitotoxin in the brain, and it is also the major excitatory neurotransmitter in the central nervous system of mammals. During normal conditions, the glutamate concentration in the synaptic cleft can rise to as much as 1 mM, and is rapidly decreased within milliseconds. When the glutamate concentration around the synaptic cleft cannot be decreased or reaches higher levels, the neuron kills itself by a process called apoptosis.
This pathologic phenomenon can also occur after brain injury and spinal cord injury. Within minutes after spinal cord injury, damaged neural cells within the lesion site spill glutamate into the extracellular space where glutamate can stimulate presynaptic glutamate receptors to enhance the release of additional glutamate. Brain trauma or stroke can cause ischemia, in which blood flow is reduced to inadequate levels. Ischemia is followed by accumulation of glutamate and aspartate in the extracellular fluid, causing cell death, which is aggravated by lack of oxygen and glucose. The biochemical cascade resulting from ischemia and involving excitotoxicity is called the ischemic cascade. Because of the events resulting from ischemia and glutamate receptor activation, a deep chemical coma may be induced in patients with brain injury to reduce the metabolic rate of the brain (its need for oxygen and glucose) and save energy to be used to remove glutamate actively. (The main aim in induced comas is to reduce the intracranial pressure, not brain metabolism).
Increased extracellular glutamate levels leads to the activation of Ca2+ permeable NMDA receptors on myelin sheaths and oligodendrocytes, leaving oligodendrocytes susceptible to Ca2+ influxes and subsequent excitotoxicity. One of the damaging results of excess calcium in the cytosol is initiating apoptosis through cleaved caspase processing. Another damaging result of excess calcium in the cytosol is the opening of the mitochondrial permeability transition pore, a pore in the membranes of mitochondria that opens when the organelles absorb too much calcium. Opening of the pore may cause mitochondria to swell and release reactive oxygen species and other proteins that can lead to apoptosis. The pore can also cause mitochondria to release more calcium. In addition, production of adenosine triphosphate (ATP) may be stopped, and ATP synthase may in fact begin hydrolysing ATP instead of producing it, which is suggested to be involved in depression.
Inadequate ATP production resulting from brain trauma can eliminate electrochemical gradients of certain ions. Glutamate transporters require the maintenance of these ion gradients to remove glutamate from the extracellular space. The loss of ion gradients results in not only the halting of glutamate uptake, but also in the reversal of the transporters. The Na+-glutamate transporters on neurons and astrocytes can reverse their glutamate transport and start secreting glutamate at a concentration capable of inducing excitotoxicity. This results in a buildup of glutamate and further damaging activation of glutamate receptors.
On the molecular level, calcium influx is not the only factor responsible for apoptosis induced by excitotoxicity. Recently, it has been noted that extrasynaptic NMDA receptor activation, triggered by either glutamate exposure or hypoxic/ischemic conditions, activates a CREB (cAMP response element binding) protein shut-off, which in turn causes loss of mitochondrial membrane potential and apoptosis. On the other hand, activation of synaptic NMDA receptors activates only the CREB pathway, which activates BDNF (brain-derived neurotrophic factor), not apoptosis.
Exogenous excitotoxins
Exogenous excitotoxins refer to neurotoxins that also act at postsynaptic cells but are not normally found in the body. These toxins may enter the body of an organism from the environment through wounds, food intake, aerial dispersion etc. Common excitotoxins include glutamate analogs that mimic the action of glutamate at glutamate receptors, including AMPA and NMDA receptors.
BMAA
The L-alanine derivative β-methylamino-L-alanine (BMAA) has long been identified as a neurotoxin which was first associated with the amyotrophic lateral sclerosis/parkinsonism–dementia complex (Lytico-bodig disease) in the Chamorro people of Guam. The widespread occurrence of BMAA can be attributed to cyanobacteria which produce BMAA as a result of complex reactions under nitrogen stress. Following research, excitotoxicity appears to be the likely mode of action for BMAA which acts as a glutamate agonist, activating AMPA and NMDA receptors and causing damage to cells even at relatively low concentrations of 10 μM. The subsequent uncontrolled influx of Ca2+ then leads to the pathophysiology described above. Further evidence of the role of BMAA as an excitotoxin is rooted in the ability of NMDA antagonists like MK801 to block the action of BMAA. More recently, evidence has been found that BMAA is misincorporated in place of L-serine in human proteins. A considerable portion of the research relating to the toxicity of BMAA has been conducted on rodents. A study published in 2016 with vervets (Chlorocebus sabaeus) in St. Kitts, which are homozygous for the apoE4 (APOE-ε4) allele (a condition which in humans is a risk factor for Alzheimer's disease), found that vervets orally administered BMAA developed hallmark histopathology features of Alzheimer's Disease including amyloid beta plaques and neurofibrillary tangle accumulation. Vervets in the trial fed smaller doses of BMAA were found to have correlative decreases in these pathology features. This study demonstrates that BMAA, an environmental toxin, can trigger neurodegenerative disease as a result of a gene/environment interaction. While BMAA has been detected in brain tissue of deceased ALS/PDC patients, further insight is required to trace neurodegenerative pathology in humans to BMAA.
See also
Glutamatergic system
Glutamic acid (flavor)
NMDA receptor antagonist
Dihydropyridine
References
Further reading
Invited Review
Food safety
Neurochemistry
Neurotrauma
Toxins | Excitotoxicity | Chemistry,Biology,Environmental_science | 2,357 |
1,904,007 | https://en.wikipedia.org/wiki/Wave%20front%20set | In mathematical analysis, more precisely in microlocal analysis, the wave front (set) WF(f) characterizes the singularities of a generalized function f, not only in space, but also with respect to its Fourier transform at each point. The term "wave front" was coined by Lars Hörmander around 1970.
Introduction
In more familiar terms, WF(f) tells not only where the function f is singular (which is already described by its singular support), but also how or why it is singular, by being more exact about the direction in which the singularity occurs. This concept is mostly useful in dimension at least two, since in one dimension there are only two possible directions. The complementary notion of a function being non-singular in a direction is microlocal smoothness.
Intuitively, as an example, consider a function ƒ whose singular support is concentrated on a smooth curve in the plane at which the function has a jump discontinuity. In the direction tangent to the curve, the function remains smooth. By contrast, in the direction normal to the curve, the function has a singularity. To decide on whether the function is smooth in another direction v, one can try to smooth the function out by averaging in directions perpendicular to v. If the resulting function is smooth, then we regard ƒ to be smooth in the direction of v. Otherwise, v is in the wavefront set.
Formally, in Euclidean space, the wave front set of ƒ is defined as the complement of the set of all pairs (x0, v) such that there exists a test function φ with φ(x0) ≠ 0 and an open cone Γ containing v such that the estimate

$$|\widehat{\varphi f}(\xi)| \leq C_N (1 + |\xi|)^{-N} \quad \text{for all } \xi \in \Gamma$$

holds for all positive integers N. Here $\widehat{\varphi f}$ denotes the Fourier transform of φf. Observe that the wavefront set is conical in the sense that if (x,v) ∈ WF(ƒ), then (x,λv) ∈ WF(ƒ) for all λ > 0. In the example discussed in the previous paragraph, the wavefront set is the set-theoretic complement of the image of the tangent bundle of the curve inside the tangent bundle of the plane.
Because the definition involves cutoff by a compactly supported function, the notion of a wave front set can be transported to any differentiable manifold X. In this more general situation, the wave front set is a closed conical subset of the cotangent bundle T*(X), since the ξ variable naturally localizes to a covector rather than a vector. The wave front set is defined such that its projection on X is equal to the singular support of the function.
Definition
In Euclidean space, the wave front set of a distribution ƒ is defined as

$$\mathrm{WF}(f) = \{ (x, \xi) \in \mathbb{R}^n \times (\mathbb{R}^n \setminus \{0\}) : \xi \in \Sigma_x(f) \}$$

where $\Sigma_x(f)$ is the singular fibre of ƒ at x. The singular fibre is defined to be the complement of all directions $\xi$ such that the Fourier transform of f, localized at x, is sufficiently regular when restricted to an open cone containing $\xi$. More precisely, a direction v is in the complement of $\Sigma_x(f)$ if there is a compactly supported smooth function φ with φ(x) ≠ 0 and an open cone Γ containing v such that the following estimate holds for each positive integer N:

$$|\widehat{\varphi f}(\xi)| \leq C_N (1 + |\xi|)^{-N} \quad \text{for all } \xi \in \Gamma.$$
Once such an estimate holds for a particular cutoff function φ at x, it also holds for all cutoff functions with smaller support, possibly for a different open cone containing v.
On a differentiable manifold M, using local coordinates on the cotangent bundle, the wave front set WF(f) of a distribution ƒ can be defined in the following general way:

$$\mathrm{WF}(f) = \{ (x, \xi) \in T^*M \setminus 0 : \xi \in \Sigma_x(f) \}$$

where the singular fibre $\Sigma_x(f)$ is again the complement of all directions $\xi$ such that the Fourier transform of f, localized at x, is sufficiently regular when restricted to a conical neighbourhood of $\xi$. The problem of regularity is local, and so it can be checked in the local coordinate system, using the Fourier transform on the x variables. The required regularity estimate transforms well under diffeomorphism, and so the notion of regularity is independent of the choice of local coordinates.
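A standard illustration (added here; it is not in the original text but follows directly from the definition): for the Dirac delta δ0 on R^n, localizing with any φ satisfying φ(0) ≠ 0 gives $\widehat{\varphi \delta_0}(\xi) = \varphi(0)$, a constant, which decays rapidly in no direction, so

$$\mathrm{WF}(\delta_0) = \{ (0, \xi) : \xi \in \mathbb{R}^n \setminus \{0\} \},$$

i.e. the delta is singular only at the origin, but there in every direction.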
Generalizations
The notion of a wave front set can be adapted to accommodate other notions of regularity of a function. Localized can here be expressed by saying that f is truncated by some smooth cutoff function not vanishing at x. (The localization process could be done in a more elegant fashion, using germs.)
More concretely, this can be expressed as

$$\mathrm{WF}_O(u) = \complement \{ (x, \xi) : \exists \varphi,\ \exists \Gamma \ni \xi \ \text{such that}\ \widehat{\varphi u}|_\Gamma \in O(\Gamma) \}$$

where
φ ranges over compactly supported smooth functions not vanishing at x,
Γ ranges over conical neighbourhoods of $\xi$, i.e. neighbourhoods V such that $\lambda V \subset V$ for all $\lambda > 0$,
$\widehat{\varphi u}|_V$ denotes the Fourier transform of the (compactly supported generalized) function u, restricted to V,
O is a fixed presheaf of functions (or distributions) whose choice enforces the desired regularity of the Fourier transform.
Typically, sections of O are required to satisfy some growth (or decrease) condition at infinity, e.g. such that they belong to some $L^p$ space.
This definition makes sense, because the Fourier transform becomes more regular (in terms of growth at infinity) when f is truncated with the smooth cutoff φ. The most difficult "problem", from a theoretical point of view, is finding the adequate sheaf O characterizing functions belonging to a given subsheaf E of the space G of generalized functions.
Example
If we take G = D′ the space of Schwartz distributions and want to characterize distributions which are locally functions, we must take for O(Ω) the classical function spaces called O′M(Ω) in the literature.
Then the projection on the first component of a distribution's wave front set is nothing else than its classical singular support, i.e. the complement of the set on which its restriction would be a smooth function.
Applications
The wave front set is useful, among others, when studying propagation of singularities by pseudodifferential operators.
See also
FBI transform
Singular spectrum
Essential support
References
Lars Hörmander, Fourier integral operators I, Acta Math. 127 (1971), pp. 79–183.
Chapter VIII, Spectral Analysis of Singularities
Mathematical analysis
Generalized functions | Wave front set | Mathematics | 1,220 |
22,459,005 | https://en.wikipedia.org/wiki/EPPML | EPPML (Extensible Postal Product Model and Language) is a conceptual model for the interactions between parties of a postal communication system. Examples of such parties are mailers, posts, mail aggregators, providers of postal services and equipment and recipients. They create, publish, consume and deliver postal products.
The central concept of EPPML is the postal product. EPPML defines the structure and meaning for the information that represents a postal product. The postal product definition may be viewed as an interface between posts, their customers and other parties. The formal representation of postal products allows automated systems to efficiently consume new postal products as they are introduced by postal operators.
The current implementation of EPPML is based on XML technology, but the EPPML concepts may be implemented with other technologies (e.g. relational databases, semantic web). Each postal product is fully represented by one (and only one) postal product definition file (PPDF) which is an XML document. A PPDF must be valid under the EPPML schema.
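In practice, checking that a PPDF is valid under the EPPML schema can be automated with any schema-aware XML library. A minimal sketch in Python using lxml (the file names here are hypothetical illustrations, not part of the standard):

from lxml import etree

# Hypothetical file names; the real schema is published as UPU standard S54.
schema = etree.XMLSchema(etree.parse("eppml_schema.xsd"))
ppdf = etree.parse("example_product.ppdf.xml")

if schema.validate(ppdf):
    print("PPDF is valid under the EPPML schema")
else:
    for error in schema.error_log:
        print(error.message)  # explain why the product definition is invalid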
EPPML as a model of interaction
The 'M' in EPPML stands for Model. EPPML was defined by its authors as a model for the interaction between parties that create, publish, consume or deliver postal products.
The interactions defined by the model include actions on physical items, information exchanges and financial transactions. The typical action on a physical item is a change of location (e.g. delivery). The information exchanges may use either electronic channels (e.g. web services, email) or physical channels (e.g. bar codes printed on mail items by mailers and read by posts). An example of a financial transaction is a refund for late delivery of a mail item.
EPPML as a markup language
The first practical implementation of EPPML relies on an XML schema to define the structure of the information that describes each postal product. The use of the XML schema makes EPPML a markup language for defining postal products. This is the reason why EPPML is sometimes referred to as Extensible Postal Product Markup Language.
Postal Product Definition File (PPDF)
A postal product definition file (PPDF) is an XML document which fully describes a postal product. The PPDF contains all necessary and sufficient information for customers to purchase and use the product. The structure and meaning of the XML elements in a PPDF are defined by an XML schema, known as the EPPML schema.
EPPML Schema
The EPPML schema is an XML Schema which provides sufficient information for postal operators to generate postal product definition files to describe all aspects of their postal products, both existing and new. The EPPML schema also provides information necessary and sufficient for mailing equipment providers to create automated mailing systems capable of using postal product definition files for all mailer's activities, including postal product selection, the production of mail units compatible with new postal products and requesting new features of postal products.
The EPPML schema is a Universal Postal Union standard, UPU S54. The standard contains a complete description of the EPPML schema, its hierarchical structure, information types and semantics of its elements.
References
DESIGNING AND USING PRODUCTS IN THE EPPML- ENABLED OPEN INNOVATION ENVIRONMENT (pg 188) in Handbook of Worldwide Postal Reform (Edited by Michael Crew, Crew, M.A. Kleindorfer, P.R. Campbell Jr.) Published by Edward Elgar Publishing, 2009, ,
Overview of EPPML at POSTCOM
EPPML concepts presented at UPU Forum, December 5, 2007, Bern, Switzerland
UPU Standards Board status 0 proposal
Postal systems | EPPML | Technology | 749 |
313,384 | https://en.wikipedia.org/wiki/Long%20division | In arithmetic, long division is a standard division algorithm suitable for dividing multi-digit Hindu-Arabic numerals (positional notation) that is simple enough to perform by hand. It breaks down a division problem into a series of easier steps.
As in all division problems, one number, called the dividend, is divided by another, called the divisor, producing a result called the quotient. It enables computations involving arbitrarily large numbers to be performed by following a series of simple steps. The abbreviated form of long division is called short division, which is almost always used instead of long division when the divisor has only one digit.
History
Related algorithms have existed since the 12th century.
Al-Samawal al-Maghribi (1125–1174) performed calculations with decimal numbers that essentially require long division, leading to infinite decimal results, but without formalizing the algorithm.
Caldrini (1491) is the earliest printed example of long division, known as the Danda method in medieval Italy, and it became more practical with the introduction of decimal notation for fractions by Pitiscus (1608).
The specific algorithm in modern use was introduced by Henry Briggs around 1600.
Education
Inexpensive calculators and computers have become the most common way to solve division problems, eliminating a traditional mathematical exercise and decreasing the educational opportunity to show how to do so by paper and pencil techniques. (Internally, those devices use one of a variety of division algorithms, the faster of which rely on approximations and multiplications to achieve the tasks.) In North America, long division has been especially targeted for de-emphasis or even elimination from the school curriculum by reform mathematics, though it has been traditionally introduced in the 4th, 5th or even 6th grades.
Method
In English-speaking countries, long division does not use the division slash or division sign symbols but instead constructs a tableau. The divisor is separated from the dividend by a right parenthesis or vertical bar; the dividend is separated from the quotient by a vinculum (i.e., an overbar). The combination of these two symbols is sometimes known as a long division symbol or division bracket. It developed in the 18th century from an earlier single-line notation separating the dividend from the quotient by a left parenthesis.
The process is begun by dividing the left-most digit of the dividend by the divisor. The quotient (rounded down to an integer) becomes the first digit of the result, and the remainder is calculated (this step is notated as a subtraction). This remainder carries forward when the process is repeated on the following digit of the dividend (notated as 'bringing down' the next digit to the remainder). When all digits have been processed and no remainder is left, the process is complete.
An example is shown below, representing the division of 500 by 4 (with a result of 125).
125 (Explanations)
4)500
4 ( 4 × 1 = 4)
10 ( 5 - 4 = 1)
8 ( 4 × 2 = 8)
20 (10 - 8 = 2)
20 ( 4 × 5 = 20)
0 (20 - 20 = 0)
A more detailed breakdown of the steps goes as follows (a short code sketch follows these steps):
Find the shortest sequence of digits starting from the left end of the dividend, 500, that the divisor 4 goes into at least once. In this case, this is simply the first digit, 5. The largest number that the divisor 4 can be multiplied by without exceeding 5 is 1, so the digit 1 is put above the 5 to start constructing the quotient.
Next, the 1 is multiplied by the divisor 4, to obtain the largest whole number that is a multiple of the divisor 4 without exceeding the 5 (4 in this case). This 4 is then placed under and subtracted from the 5 to get the remainder, 1, which is placed under the 4 under the 5.
Afterwards, the first as-yet unused digit in the dividend, in this case the first digit 0 after the 5, is copied directly underneath itself and next to the remainder 1, to form the number 10.
At this point the process is repeated enough times to reach a stopping point: The largest number by which the divisor 4 can be multiplied without exceeding 10 is 2, so 2 is written above as the second leftmost quotient digit. This 2 is then multiplied by the divisor 4 to get 8, which is the largest multiple of 4 that does not exceed 10; so 8 is written below 10, and the subtraction 10 minus 8 is performed to get the remainder 2, which is placed below the 8.
The next digit of the dividend (the last 0 in 500) is copied directly below itself and next to the remainder 2 to form 20. Then the largest number by which the divisor 4 can be multiplied without exceeding 20, which is 5, is placed above as the third leftmost quotient digit. This 5 is multiplied by the divisor 4 to get 20, which is written below and subtracted from the existing 20 to yield the remainder 0, which is then written below the second 20.
At this point, since there are no more digits to bring down from the dividend and the last subtraction result was 0, we can be assured that the process finished.
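The same digit-by-digit process can be expressed in a few lines of code. This sketch (added here for illustration) maintains the running remainder exactly as the tableau does, using divmod at each step:

def long_division_digits(dividend: int, divisor: int):
    # Yield (quotient_digit, remainder) pairs, one per dividend digit,
    # mirroring the paper-and-pencil process described above.
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)  # "bring down" the next digit
        q_digit, remainder = divmod(remainder, divisor)
        yield q_digit, remainder

# 500 / 4: quotient digits 1, 2, 5 with running remainders 1, 2, 0
print(list(long_division_digits(500, 4)))  # [(1, 1), (2, 2), (5, 0)]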
If the last remainder when we ran out of dividend digits had been something other than 0, there would have been two possible courses of action:
We could just stop there and say that the dividend divided by the divisor is the quotient written at the top with the remainder written at the bottom, and write the answer as the quotient followed by a fraction that is the remainder divided by the divisor.
We could extend the dividend by writing it as, say, 500.000... and continue the process (using a decimal point in the quotient directly above the decimal point in the dividend), in order to get a decimal answer, as in the following example.
31.75
4)127.00
12 (12 ÷ 4 = 3)
07 (0 remainder, bring down next figure)
4 (7 ÷ 4 = 1 r 3)
3.0 (bring down 0 and the decimal point)
2.8 (7 × 4 = 28, 30 ÷ 4 = 7 r 2)
20 (an additional zero is brought down)
20 (5 × 4 = 20)
0
In this example, the decimal part of the result is calculated by continuing the process beyond the units digit, "bringing down" zeros as being the decimal part of the dividend.
This example also illustrates that, at the beginning of the process, a step that produces a zero can be omitted. Since the first digit 1 is less than the divisor 4, the first step is instead performed on the first two digits 12. Similarly, if the divisor were 13, one would perform the first step on 127 rather than 12 or 1.
Basic procedure for long division of n ÷ m
Find the location of all decimal points in the dividend n and divisor m.
If necessary, simplify the long division problem by moving the decimals of the divisor and dividend by the same number of decimal places, to the right (or to the left), so that the decimal of the divisor is to the right of the last digit.
When doing long division, keep the numbers lined up straight from top to bottom under the tableau.
After each step, be sure the remainder for that step is less than the divisor. If it is not, there are three possible problems: the multiplication is wrong, the subtraction is wrong, or a greater quotient is needed.
In the end, the remainder, r, is added to the growing quotient as a fraction, r/m.
Invariant property and correctness
The basic presentation of the steps of the process (above) focuses on what steps are to be performed, rather than on the properties of those steps that ensure the result will be correct (specifically, that q × m + r = n, where q is the final quotient and r the final remainder). A slight variation of presentation requires more writing, and requires that we change, rather than just update, digits of the quotient, but can shed more light on why these steps actually produce the right answer by allowing evaluation of q × m + r at intermediate points in the process. This illustrates the key property used in the derivation of the algorithm (below).

Specifically, we amend the above basic procedure so that we fill the space after the digits of the quotient under construction with 0's, to at least the 1's place, and include those 0's in the numbers we write below the division bracket.

This lets us maintain an invariant relation at every step: q × m + r = n, where q is the partially-constructed quotient (above the division bracket) and r the partially-constructed remainder (bottom number below the division bracket). Note that, initially, q = 0 and r = n, so this property holds initially; the process reduces r and increases q with each step, eventually stopping when r < m if we seek the answer in quotient + integer remainder form.
Revisiting the 500 ÷ 4 example above, we find
125 (q, changes from 000 to 100 to 120 to 125 as per notes below)
4)500
400 ( 4 × 100 = 400)
100 (500 - 400 = 100; now q=100, r=100; note q×4+r = 500.)
80 ( 4 × 20 = 80)
20 (100 - 80 = 20; now q=120, r= 20; note q×4+r = 500.)
20 ( 4 × 5 = 20)
0 ( 20 - 20 = 0; now q=125, r= 0; note q×4+r = 500.)
Example with multi-digit divisor
A divisor of any number of digits can be used. In this example, 1260257 is to be divided by 37. First the problem is set up as follows:
37)1260257
Digits of the number 1260257 are taken until a number greater than or equal to 37 occurs. So 1 and 12 are less than 37, but 126 is greater. Next, the greatest multiple of 37 less than or equal to 126 is computed. So 3 × 37 = 111 < 126, but 4 × 37 > 126. The multiple 111 is written underneath the 126 and the 3 is written on the top where the solution will appear:
3
37)1260257
111
Note carefully which place-value column these digits are written into. The 3 in the quotient goes in the same column (ten-thousands place) as the 6 in the dividend 1260257, which is the same column as the last digit of 111.
The 111 is then subtracted from the line above, ignoring all digits to the right:
3
37)1260257
111
15
Now the digit from the next smaller place value of the dividend is copied down and appended to the result 15:
3
37)1260257
111
150
The process repeats: the greatest multiple of 37 less than or equal to 150 is subtracted. This is 148 = 4 × 37, so a 4 is added to the top as the next quotient digit. Then the result of the subtraction is extended by another digit taken from the dividend:
34
37)1260257
111
150
148
22
The greatest multiple of 37 less than or equal to 22 is 0 × 37 = 0. Subtracting 0 from 22 gives 22; this subtraction step is often not written. Instead, we simply take another digit from the dividend:
340
37)1260257
111
150
148
225
The process is repeated until 37 divides the last line exactly:
34061
37)1260257
111
150
148
225
222
37
Mixed mode long division
For non-decimal currencies (such as the British £sd system before 1971) and measures (such as avoirdupois) mixed mode division must be used. Consider dividing 50 miles 600 yards into 37 pieces:
mi - yd - ft - in
1 - 634 1 9 r. 15"
37) 50 - 600 - 0 - 0
37 22880 66 348
13 23480 66 348
1760 222 37 333
22880 128 29 15
===== 111 348 ==
170 ===
148
22
66
==
Each of the four columns is worked in turn. Starting with the miles: 50/37 = 1 remainder 13. No further division is possible, so perform a long multiplication by 1,760 to convert miles to yards; the result is 22,880 yards. Carry this to the top of the yards column and add it to the 600 yards in the dividend, giving 23,480. Long division of 23,480 / 37 now proceeds as normal, yielding 634 with remainder 22. The remainder is multiplied by 3 to get feet and carried up to the feet column. Long division of the feet gives 1 remainder 29, which is then multiplied by twelve to get 348 inches. Long division continues with the final remainder of 15 inches being shown on the result line.
Interpretation of decimal results
When the quotient is not an integer and the division process is extended beyond the decimal point, one of two things can happen (a short code sketch for detecting each case follows this list):
The process can terminate, which means that a remainder of 0 is reached; or
A remainder could be reached that is identical to a previous remainder that occurred after the decimal points were written. In the latter case, continuing the process would be pointless, because from that point onward the same sequence of digits would appear in the quotient over and over. So a bar is drawn over the repeating sequence to indicate that it repeats forever (i.e., every rational number is either a terminating or repeating decimal).
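Both outcomes can be detected programmatically by tracking the remainders: a remainder of 0 means the expansion terminates, while a repeated remainder marks the start of the repetend. A sketch added here for illustration:

def decimal_expansion(numerator: int, denominator: int) -> str:
    # Exact decimal expansion of a fraction, with any repeating part in
    # parentheses, found by watching for a previously seen remainder.
    integer_part, remainder = divmod(numerator, denominator)
    digits, seen = [], {}
    while remainder and remainder not in seen:
        seen[remainder] = len(digits)       # where this remainder first occurred
        digit, remainder = divmod(remainder * 10, denominator)  # bring down a zero
        digits.append(str(digit))
    if remainder:                           # cycle detected: mark the repetend
        start = seen[remainder]
        return f"{integer_part}." + "".join(digits[:start]) + "(" + "".join(digits[start:]) + ")"
    return f"{integer_part}." + ("".join(digits) or "0")

print(decimal_expansion(127, 4))  # 31.75 (terminating)
print(decimal_expansion(1, 7))    # 0.(142857) (repeating)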
Notation in non-English-speaking countries
China, Japan and Korea use the same notation as English-speaking nations, including India. Elsewhere, the same general principles are used, but the figures are often arranged differently.
Latin America
In Latin America (except Argentina, Bolivia, Mexico, Colombia, Paraguay, Venezuela, Uruguay and Brazil), the calculation is almost exactly the same, but is written down differently as shown below with the same two examples used above. Usually the quotient is written under a bar drawn under the divisor. A long vertical line is sometimes drawn to the right of the calculations.
500 ÷ 4 = 125 (Explanations)
4 ( 4 × 1 = 4)
10 ( 5 - 4 = 1)
8 ( 4 × 2 = 8)
20 (10 - 8 = 2)
20 ( 4 × 5 = 20)
0 (20 - 20 = 0)
and
127 ÷ 4 = 31.75
124
30 (bring down 0; decimal to quotient)
28 (7 × 4 = 28)
20 (an additional zero is added)
20 (5 × 4 = 20)
0
In Mexico, the English-speaking world notation is used, except that only the result of the subtraction is annotated and the calculation is done mentally, as shown below:
125 (Explanations)
4)500
10 ( 5 - 4 = 1)
20 (10 - 8 = 2)
0 (20 - 20 = 0)
In Bolivia, Brazil, Paraguay, Venezuela, French-speaking Canada, Colombia, and Peru, the European notation (see below) is used, except that the quotient is not separated by a vertical line, as shown below:
127|4
−124 31,75
30
−28
20
−20
0
Same procedure applies in Mexico, Uruguay and Argentina, only the result of the subtraction is annotated and the calculation is done mentally.
Eurasia
In Spain, Italy, France, Portugal, Lithuania, Romania, Turkey, Greece, Belgium, Belarus, Ukraine, and Russia, the divisor is to the right of the dividend, and separated by a vertical bar. The division also occurs in the column, but the quotient (result) is written below the divider, and separated by the horizontal line. The same method is used in Iran, Vietnam, and Mongolia.
127|4
−124|31,75
30
−28
20
−20
0
In Cyprus, as well as in France, a long vertical bar separates the dividend and subsequent subtractions from the quotient and divisor, as in the example below of 6359 divided by 17, which is 374 with a remainder of 1.
6359|17
−51 |374
125 |
−119 |
69|
−68|
1|
Decimal numbers are not divided directly, the dividend and divisor are multiplied by a power of ten so that the division involves two whole numbers. Therefore, if one were dividing 12,7 by 0,4 (commas being used instead of decimal points), the dividend and divisor would first be changed to 127 and 4, and then the division would proceed as above.
In Austria, Germany and Switzerland, the notational form of a normal equation is used. <dividend> : <divisor> = <quotient>, with the colon ":" denoting a binary infix symbol for the division operator (analogous to "/" or "÷"). In these regions the decimal separator is written as a comma. (cf. first section of Latin American countries above, where it's done virtually the same way):
127 : 4 = 31,75
−12
07
−4
30
−28
20
−20
0
The same notation is adopted in Denmark, Norway, Bulgaria, North Macedonia, Poland, Croatia, Slovenia, Hungary, Czech Republic, Slovakia, Vietnam and in Serbia.
In the Netherlands, the following notation is used:
12 / 135 \ 11,25
12
15
12
30
24
60
60
0
In Finland, the Italian method detailed above was replaced by the Anglo-American one in the 1970s. In the early 2000s, however, some textbooks have adopted the German method as it retains the order between the divisor and the dividend.
Algorithm for arbitrary base
Every natural number n can be uniquely represented in an arbitrary number base b > 1 as a sequence of digits $n = \alpha_0 \alpha_1 \dots \alpha_{k-1}$, where $0 \leq \alpha_i < b$ for all $0 \leq i < k$ and k is the number of digits in n. The value of n in terms of its digits and the base is

$$n = \sum_{i=0}^{k-1} \alpha_i b^{k-1-i}.$$

Let n be the dividend and m be the divisor, where l is the number of digits in m. If k < l, then the quotient is q = 0 and the remainder is r = n. Otherwise, we iterate over $0 \leq i \leq k - l$ before stopping.

For each iteration i, let $q_i$ be the quotient extracted so far, $d_i$ be the intermediate dividend, $r_i$ be the intermediate remainder, $\alpha_{i+l-1}$ be the next digit of the original dividend, and $\beta_i$ be the next digit of the quotient. By definition of digits in base b, $0 \leq \beta_i < b$. By definition of remainder, $0 \leq r_i < m$. All values are natural numbers. We initiate $q_{-1} = 0$ and

$$r_{-1} = \sum_{i=0}^{l-2} \alpha_i b^{l-2-i},$$

the first l − 1 digits of n.

With every iteration, the three equations

$$d_i = b\, r_{i-1} + \alpha_{i+l-1}, \qquad r_i = d_i - m \beta_i, \qquad q_i = b\, q_{i-1} + \beta_i$$

are true, and there exists only one $\beta_i$ such that $0 \leq r_i < m$.

The final quotient is $q = q_{k-l}$ and the final remainder is $r = r_{k-l}$.
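The iteration translates directly into code. The sketch below (an illustration using the variable names of the description; a real implementation might select each β by binary search over the b candidates rather than by integer division) represents numbers as digit lists in base b:

def long_divide(digits, m, b):
    # Divide the number given by `digits` (most significant first, base b)
    # by the integer m, returning (quotient_digits, remainder).
    # Implements d_i = b*r + alpha, r_i = d_i - m*beta_i, q_i = b*q + beta_i.
    quotient, r = [], 0
    for alpha in digits:
        d = b * r + alpha          # intermediate dividend d_i
        beta = d // m              # the unique beta_i with 0 <= r_i < m
        r = d - m * beta           # intermediate remainder r_i
        quotient.append(beta)
    while len(quotient) > 1 and quotient[0] == 0:
        quotient.pop(0)            # strip leading zeros
    return quotient, r

print(long_divide([1, 2, 6, 0, 2, 5, 7], 37, 10))       # ([3, 4, 0, 6, 1], 0)
print(long_divide([0xf, 4, 1, 2, 0xd, 0xf], 0x12, 16))  # ([13, 8, 15, 4, 5], 5)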
Examples
In base 10, using the example above with n = 1260257 and m = 37 (so b = 10, k = 7 and l = 2), the initial values are $q_{-1} = 0$ and $r_{-1} = 1$, the first digit of n. Thus, q = 34061 and r = 0.

In base 16, with n = f412df and m = 12 (so b = 16, k = 6 and l = 2), the initial values are $q_{-1} = 0$ and $r_{-1} = \mathrm{f}$. Thus, q = d8f45 and r = 5.

If one doesn't have the addition, subtraction, or multiplication tables for base b memorised, then this algorithm still works if the numbers are converted to decimal and at the end are converted back to base b. For example, with the above example, f412df (base 16) equals 15995615 (decimal) and 12 (base 16) equals 18 (decimal), with b = 16. The initial values are $q_{-1} = 0$ and $r_{-1} = 15$. Thus, q = 888645 = d8f45 (base 16) and r = 5.
This algorithm can be done using the same kind of pencil-and-paper notations as shown in above sections.
d8f45 r. 5
12 ) f412df
ea
a1
90
112
10e
4d
48
5f
5a
5
Rational quotients
If the quotient is not constrained to be an integer, then the algorithm does not terminate for i > k − l. Instead, if i > k − l, then $\alpha_{i+l-1} = 0$ by definition. If the remainder $r_i$ is equal to zero at any iteration, then the quotient is a b-adic fraction, and is represented as a finite decimal expansion in base b positional notation. Otherwise, it is still a rational number but not a b-adic rational, and is instead represented as an infinite repeating decimal expansion in base b positional notation.
Binary division
Performance
On each iteration, the most time-consuming task is to select $\beta_i$. We know that there are b possible values, so we can find $\beta_i$ using $O(\log b)$ comparisons. Each comparison will require evaluating $d_i - m\beta_i$. Let k be the number of digits in the dividend n and l be the number of digits in the divisor m. The number of digits in $d_i$ is at most l + 1. The multiplication $m \times \beta_i$ is therefore $O(l)$, and likewise the subtraction $d_i - m\beta_i$. Thus it takes $O(l \log b)$ time to select $\beta_i$. The rest of each iteration consists of addition and the digit-shifting of $q_i$ and $r_i$ to the left by one digit, which take time $O(k)$ and $O(l)$ in base b, so each iteration takes $O(l \log b + k)$. Over all k − l + 1 iterations, the algorithm takes time $O((k - l + 1)(l \log b + k))$.
Generalizations
Rational numbers
Long division of integers can easily be extended to include non-integer dividends, as long as they are rational. This is because every rational number has a recurring decimal expansion. The procedure can also be extended to include divisors which have a finite or terminating decimal expansion (i.e. decimal fractions). In this case the procedure involves multiplying the divisor and dividend by the appropriate power of ten so that the new divisor is an integer – taking advantage of the fact that a ÷ b = (ca) ÷ (cb) – and then proceeding as above.
Polynomials
A generalised version of this method called polynomial long division is also used for dividing polynomials (sometimes using a shorthand version called synthetic division).
See also
Algorism
Arbitrary-precision arithmetic
Egyptian multiplication and division
Elementary arithmetic
Fourier division
Polynomial long division
Short division
References
External links
Long Division Algorithm
Long Division and Euclid's Lemma
Algorithms
Computer arithmetic algorithms
Digit-by-digit algorithms
Division (mathematics) | Long division | Mathematics | 4,541 |
399,127 | https://en.wikipedia.org/wiki/51%20%28number%29 | 51 (fifty-one) is the natural number following 50 and preceding 52.
In mathematics
Fifty-one is
a pentagonal number as well as a centered pentagonal number and an 18-gonal number
the 6th Motzkin number, telling the number of ways to draw non-intersecting chords between any six points on a circle's boundary, no matter where the points may be located on the boundary (a short computation of the Motzkin and Perrin sequences appears after this list)
a Perrin number, coming after 22, 29, 39 in the sequence (and the sum of the first two)
a Størmer number, since the greatest prime factor of 51² + 1 = 2602 is 1301, which is substantially more than twice 51.
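Both the Motzkin and Perrin memberships can be checked by generating the sequences from their standard recurrences (an illustrative sketch, not part of the source):

def motzkin_numbers(count: int) -> list[int]:
    # M_{n+1} = M_n + sum(M_i * M_{n-1-i} for i in range(n)); M_0 = 1.
    m = [1]
    for n in range(count - 1):
        m.append(m[n] + sum(m[i] * m[n - 1 - i] for i in range(n)))
    return m

def perrin(n: int) -> int:
    # P(n) = P(n-2) + P(n-3), with P(0) = 3, P(1) = 0, P(2) = 2.
    a, b, c = 3, 0, 2
    for _ in range(n):
        a, b, c = b, c, a + b
    return a

assert motzkin_numbers(7) == [1, 1, 2, 4, 9, 21, 51]              # 51 is the 6th Motzkin number
assert [perrin(k) for k in (11, 12, 13, 14)] == [22, 29, 39, 51]  # 51 = 22 + 29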
There are 51 different cyclic Gilbreath permutations on 10 elements, and therefore there are 51 different real periodic points of order 10 on the Mandelbrot set.
Since 51 is the product of the distinct Fermat primes 3 and 17, a regular polygon with 51 sides is constructible with compass and straightedge, the angle 2π/51 is constructible, and the number cos 2π/51 is expressible in terms of square roots.
References
Integers | 51 (number) | Mathematics | 228 |
10,204,161 | https://en.wikipedia.org/wiki/Syncrude%20Tailings%20Dam | The Syncrude Tailings Dam, impounding the Mildred Lake Settling Basin (MLSB), is an embankment dam that is, by volume of construction material, the largest earth structure in the world in 2001. It is located north of Fort McMurray, Alberta, Canada, at the northern end of the Mildred Lake lease area owned by Syncrude Canada Ltd. The dam and the tailings reservoir within it are constructed and maintained as part of ongoing operations by Syncrude in extracting oil from the Athabasca oil sands. Other tailings dams constructed and operated in the same area by Syncrude include the Southwest Sand Storage (SWSS), which is the third largest dam in the world by volume of construction material after the Tarbela Dam.
Oil sands tailings pond water
According to Canada’s Oil Sands Innovation Alliance (COSIA), an alliance of oil sands producers formed in 2012, who share research on Environmental Priority Areas (EPAs) such as tailing pond water and greenhouse gases, "Tailings are the sand, silt, clay, soil and water found naturally in oil sands that remain following the mining and bitumen extraction process." The Clark Hot Water Extraction (CHWE) process used by Suncor and Syncrude in their open-pit mining operations, to extract bitumen from the Athabasca Oil Sands (AOS) produces large quantities of tailings pond sludge which remains stable for decades. By 1990 it was considered to be the "imminent environmental constraint to future use of the hot water process." Oil sands tailings pond water contains toxic chemicals such as "naphthenic acids (NAs) and process chemicals (e.g., alkyl sulphates, quaternary ammonium compounds, and alkylphenol ethoxylates)."
Other Syncrude tailings dams
By 2012 Syncrude Canada Ltd had oilsands mining operations on three lease areas (Mildred Lake, Aurora North and Aurora South), all about 40 km north of Fort McMurray. There are many tailings dams on those leases. The lease that has the greatest number of tailings dams, and the largest tailings dams, is the Mildred Lake lease. According to Syncrude's 2010 Baseline Report, submitted to the Energy Resources Conservation Board (since replaced by the Alberta Energy Regulator (AER)), the Mildred Lake and Aurora North leases together contain: the Mildred Lake Settling Basin (MLSB), Southwest Sand Storage (SWSS), West In-Pit (WIP), East In-Pit (EIP), Southwest In-Pit (SWIP), Aurora Settling Basin (ASB) and Aurora East Pit Northeast (AEPN-E). Those referred to as "in pit" have only small containing embankments. In the Aurora South lease the main tailings dam will be the External Tailings Area (ETA).
By 2016, Syncrude's four tailings areas at Aurora North consisted of the out-of-pit Aurora Settling Basin (ASB), in operation since 2000, and three in-pit basins: Aurora East Pit North East (AEPN-E) since 2010, Aurora East Pit North West (AEPN-W) since 2011, and Aurora East Pit South (AEPS) since 2014.
Mildred Lake Settling Basin (MLSB)
The Mildred Lake Settling Basin is located on the north side of the Mildred Lake lease area. It is a tailings pond that serves three purposes. Firstly, the embankment was planned as storage for a substantial volume of sand. Secondly, the basin acts as a storage basin for process water, which is recycled, with a planned ultimate storage capacity of 350×10⁶ m³. Thirdly, the fines that are not captured elsewhere settle and compact in the basin, and are later pumped out for long term storage. This means that the MLSB is a true dam in the sense that it is filled with water in the long term, rather than being quickly filled by solids as in many other tailings dams. The embankment has a circumference of about 18 km, an average height of about 40 m and a maximum height of about 88 m.
Two starter dams were constructed during 1976 to 1978 and were required until sufficient sand was available for building the embankments. The north starter dam had a crest elevation of 312 m. The original ground surface varied from 294 m to 305 m, while up to 1.5 m of original ground was stripped for a trench, above which was a compacted clay core. The crest width was 30 metres.
The main embankment was taken to a final elevation of 352 m for more than half of its length by 1994 and completed in 1995. For construction purposes the embankment was considered to be in a collection of 30 "cells", each with a crest length of about 600 metres. Acceptable side slopes were determined on a cell-by-cell basis, based on the strength of available materials and foundation movement. The slope of the outer part of the embankment is much smaller than that of the inner part, in a ratio of about 4:1. By 1997 Syncrude's large open-pit operations were producing up to 250,000 tons a day of tailings that were collected in this tailings pond built on the upstream construction method.
As this dam frequently appears in lists as the largest dam structure in the world, it is worth assessing the accuracy of the quoted volume of construction material. The quoted embankment length of 18 km is reliable. The average height of the embankment is quoted as 40 m, and a check using four cross sections yields 45 m, which is broadly consistent. The average base width of the embankment is variously reported as 1,800 m, 800 m (measured from Google Earth) and 660 m. So whereas one report gives an embankment volume of 720×10⁶ m³, calculations based on these three base widths give embankment volumes of about 660, 290 and 240×10⁶ m³ respectively. There is therefore considerable uncertainty about the total volume of construction material.
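These figures can be reproduced approximately by treating the embankment cross-section as a triangle, so that volume ≈ ½ × base width × height × length. The sketch below is illustrative only; it assumes the 18 km length and 40 m average height quoted above.

```python
# Rough cross-check of the quoted embankment volumes, approximating the
# cross-section as a triangle: area = base_width * height / 2.
length_m = 18_000       # embankment circumference (m), as quoted
avg_height_m = 40       # average embankment height (m), as quoted

for base_width_m in (1_800, 800, 660):   # the three reported base widths
    volume_m3 = 0.5 * base_width_m * avg_height_m * length_m
    print(f"base {base_width_m} m -> {volume_m3 / 1e6:.0f} million m^3")

# Prints 648, 288 and 238 million m^3, close to the quoted
# 660, 290 and 240 x 10^6 m^3 figures.
```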
South West Sand Storage (SWSS)
The SWSS facility is located in the southwest corner of the Mildred Lake lease area. It was commissioned in 1993. The facility was designed to provide coarse tailings sand storage, returning water and thin fine tailings to other sites within the Mildred Lake Project area. The crest elevation is variously reported as 400 m or 390 m. An upgrade to increase the water storage to a maximum water surface level of 397 m was constructed in 2009 to 2010.
The embankment length is 19.5 km. The maximum embankment height is at least 30 m and the average embankment height is about 18.5 m. The average embankment base width, measured from Google Earth, is about 800 m. So the total embankment volume is about 145×10⁶ m³.
Aurora Settling Basin (ASB)
This is located to the southeast of the Aurora North lease, adjacent to the Muskeg River.
The embankment length is 11.6 km and the average embankment base width, measured from Google Earth, is about 260 m. The surface elevation is 341.4 m.
See also
Syncrude Canada Ltd.
Athabasca Oil Sands
List of largest dams in the world
List of articles about Canadian oil sands
References
External links
ERCB Directive 074 tailings plans 2010
Dams in Alberta
Mining in Alberta
Regional Municipality of Wood Buffalo
Tailings dams
Athabasca oil sands | Syncrude Tailings Dam | Technology,Engineering | 1,514 |
32,505 | https://en.wikipedia.org/wiki/Vapor | In physics, a vapor (American English) or vapour (Commonwealth English; see spelling differences) is a substance in the gas phase at a temperature lower than its critical temperature, which means that the vapor can be condensed to a liquid by increasing the pressure on it without reducing the temperature of the vapor. A vapor is different from an aerosol. An aerosol is a suspension of tiny particles of liquid, solid, or both within a gas.
For example, water has a critical temperature of 374 °C (647 K), which is the highest temperature at which liquid water can exist at any pressure. In the atmosphere at ordinary temperatures gaseous water (known as water vapor) will condense into a liquid if its partial pressure is increased sufficiently.
A vapor may co-exist with a liquid (or a solid). When this is true, the two phases will be in equilibrium, and the gas-partial pressure will be equal to the equilibrium vapor pressure of the liquid (or solid).
Properties
Vapor refers to a gas phase at a temperature where the same substance can also exist in the liquid or solid state, below the critical temperature of the substance. (For example, water has a critical temperature of 374 °C (647 K), which is the highest temperature at which liquid water can exist.) If the vapor is in contact with a liquid or solid phase, the two phases will be in a state of equilibrium. The term gas refers to a compressible fluid phase. Fixed gases are gases for which no liquid or solid can form at the temperature of the gas, such as air at typical ambient temperatures. A liquid or solid does not have to boil to release a vapor.
Vapor is responsible for the familiar processes of cloud formation and condensation. It is commonly employed to carry out the physical processes of distillation and headspace extraction from a liquid sample prior to gas chromatography.
The constituent molecules of a vapor possess vibrational, rotational, and translational motion. These motions are considered in the kinetic theory of gases.
Vapor pressure
The vapor pressure is the equilibrium pressure from a liquid or a solid at a specific temperature. The equilibrium vapor pressure of a liquid or solid is not affected by the amount of contact with the liquid or solid interface.
The normal boiling point of a liquid is the temperature at which the vapor pressure is equal to normal atmospheric pressure.
For two-phase systems (e.g., two liquid phases), the vapor pressures of the individual phases are equal. In the absence of stronger inter-species attractions between like-like or like-unlike molecules, the vapor pressure follows Raoult's law, which states that the partial pressure of each component is the product of the vapor pressure of the pure component and its mole fraction in the mixture. The total vapor pressure is the sum of the component partial pressures.
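As a minimal illustration of Raoult's law, the sketch below computes partial and total vapor pressures for an ideal binary mixture; the pure-component vapor pressures and mole fraction used are placeholder values, not data from this article.

```python
# Raoult's law for an ideal binary mixture:
#   p_i = x_i * p_i_pure,  P_total = p_A + p_B
def raoult_partial_pressures(x_a, p_pure_a, p_pure_b):
    """Return (p_A, p_B, p_total) for mole fraction x_a of component A."""
    p_a = x_a * p_pure_a          # partial pressure of component A
    p_b = (1.0 - x_a) * p_pure_b  # partial pressure of component B
    return p_a, p_b, p_a + p_b

# Example: 40 mol% of a volatile component (pure vapor pressure 95 torr)
# mixed with a less volatile one (28 torr).
p_a, p_b, p_total = raoult_partial_pressures(0.40, 95.0, 28.0)
print(p_a, p_b, p_total)  # 38.0, 16.8 and 54.8 torr
```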
Examples
Perfumes contain chemicals that vaporize at different temperatures and at different rates in scent accords, known as notes.
Atmospheric water vapor is found near the earth's surface, and may condense into small liquid droplets and form meteorological phenomena, such as fog, mist, and haar.
Mercury-vapor lamps and sodium vapor lamps produce light from atoms in excited states.
Flammable liquids do not themselves burn when ignited; it is the vapor cloud above the liquid that will burn, if the vapor's concentration is between the lower flammable limit (LFL) and the upper flammable limit (UFL) of the flammable liquid.
E-cigarettes produce aerosols, not vapors.
Measuring vapor
Since it is in the gas phase, the amount of vapor present is quantified by the partial pressure of the gas. Also, vapors obey the barometric formula in a gravitational field, just as conventional atmospheric gases do.
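For an isothermal atmosphere the barometric formula reads p(h) = p₀ exp(−Mgh/RT). The sketch below evaluates it with standard constants and an assumed temperature; it is a minimal illustration, not a model of the real atmosphere.

```python
import math

M = 0.02896   # molar mass of dry air, kg/mol
g = 9.81      # gravitational acceleration, m/s^2
R = 8.314     # gas constant, J/(mol*K)
T = 288.0     # assumed constant temperature, K

def pressure_ratio(height_m):
    """Fraction of ground-level pressure remaining at a given height."""
    return math.exp(-M * g * height_m / (R * T))

print(pressure_ratio(5500.0))  # ~0.52: about half the pressure near 5.5 km
```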
See also
References
Gases
Pressure
Chemical properties | Vapor | Physics,Chemistry | 771 |
66,240,342 | https://en.wikipedia.org/wiki/Conbercept | Conbercept, sold under the commercial name Lumitin, is a novel vascular endothelial growth factor (VEGF) inhibitor used to treat neovascular age-related macular degeneration (AMD) and diabetic macular edema (DME). The anti-VEGF was approved for the treatment of neovascular AMD by the China State FDA (CFDA) in December 2013. As of December 2020, conbercept is undergoing phase III clinical trials through the U.S. Food and Drug Administration’s PANDA-1 and PANDA-2 development programs.
Conbercept was developed by Chengdu Kanghong Biotech Company in the People’s Republic of China and is marketed under the name Lumitin.
Medical uses
It is used for the treatment of neovascular age-related macular degeneration (nAMD), choroidal neovascularization secondary to pathologic myopia, and diabetic macular edema (DME). The medication is given by intravitreal injection (IVT).
Contraindications
Conbercept is contraindicated in patients with known hypersensitivity to the active ingredient, in patients with ocular or periocular infections, and in patients with active intraocular inflammation.
Adverse effects
Common adverse effects of the eye formulation include eye pain, transient intraocular pressure (IOP) increase and conjunctival hemorrhage.
Mechanism of action
Conbercept is a soluble receptor decoy that binds specifically to VEGF-B, placental growth factor (PlGF), and various isoforms of VEGF-A. Conbercept has a VEGF-R2 kinase insert domain receptor (KDR) Ig-like region 4 (KDRd4) which improves the three-dimensional structure and efficiency of dimer formation, thereby increasing the binding capacity of conbercept to VEGF.
Composition
Conbercept is a recombinant fusion protein composed of VEGFR-1 (second domain) and VEGFR-2 (third and fourth domains) regions fused to the Fc portion of human IgG1 immunoglobulin.
History
Chengdu Kanghong Pharmaceutical Group, a medical company based in Sichuan, started the development of conbercept in 2005. In 2012, the drug was included on the World Health Organization’s Drug Information 67th List of Recommended International Nonproprietary Names, which was the first Chinese innovator biotech drug to be recognized on the list.
In November 2013, the Chinese Food and Drug Administration approved conbercept for the treatment of AMD. By 2014, conbercept was marketed for treatment of wAMD in China. In 2016, Phase III clinical trials of conbercept were authorized by the U.S. Food and Drug Administration.
In 2017, Kanghong Pharmaceutical Group partnered with Syneos Health to process Phase III clinical trials simultaneously in more than 30 countries around the world with an investment of $228 million. In 2020, conbercept was approved for use in Mongolia.
Clinical trials in China
Conbercept is the only anti-VEGF drug confirmed by randomized controlled trials (RCTs) to sustain visual improvements with 3+Q3M regimens (PHOENIX study).
Conbercept significantly improves visual acuity and anatomical outcomes in patients with PCV (AURORA study).
Conbercept provides significant visual acuity improvement in DME patients (SAILING study).
Society and culture
Legal status
In 2013, the CFDA approved conbercept for the treatment of neovascular age-related macular degeneration (nAMD).
In 2017, the CFDA approved it for the treatment of choroidal neovascularization associated with pathologic myopia.
In 2019, the CFDA approved it for the treatment of diabetic macular edema (DME).
Economic
Conbercept has been shown to be a cost-effective wAMD treatment option in China. Compared to two similar anti-VEGF intravitreal drugs, ranibizumab and aflibercept, conbercept has been shown to be the most cost-effective option for treatment of wAMD in China.
In 2017, the national basic medical insurance in China began covering conbercept.
References
External links
Conbercept, Drug Information Portal. U.S. National Library of Medicine.
Ophthalmology drugs
Angiogenesis inhibitors
Engineered proteins | Conbercept | Biology | 937 |
44,570,580 | https://en.wikipedia.org/wiki/Tylopilus%20rufonigricans | Tylopilus rufonigricans is a bolete fungus in the family Boletaceae found in the Pakaraima Mountains of Guyana. It was described as new to science in 1999 by mycologist Terry Henkel. Its fruit bodies have convex to flattened caps measuring in diameter; caps can form a central depression in age. The cap surface is covered with black scales, while the surface between the scales is initially greenish yellow, later dull green and eventually greyish black. The flesh turns reddish when it is cut or injured. Tubes on the cap underside are 5–7.5 mm long, and there are 2–3 pores per mm. The stipe measures long by 10–20 mm thick. The spore print is cinnamon brown; spores are smooth and roughly spindle-shaped (subfusiform) with dimensions of 12–15.2 by 3.6–4.8 μm. T. rufonigricans fruits singly on root mats on trunks of Dicymbe corymbosa. The specific epithet rufonigricans refers to the reddening reaction of the flesh upon injury.
References
External links
rufonigricans
Fungi described in 1999
Fungi of Guyana
Fungus species | Tylopilus rufonigricans | Biology | 248 |
44,301,990 | https://en.wikipedia.org/wiki/MicroSolutions%20Backpack | MicroSolutions Backpack was a line of peripheral devices introduced in 1990 allowing users to attach a peripheral drive, namely hard drives, CD-ROM drives, and DVD±RW drives, to their system. When the original model was released, USB ports did not yet exist, so the drive plugged into a system's printer port. Backpacks could be daisy-chained and still allow for printer usage. Some models also offered audio capability via expansion. Later models introduced faster connectivity to the host system by means of a proprietary PC Card, and later USB. MicroSolutions was located in DeKalb, Illinois, USA.
References
Computer peripherals | MicroSolutions Backpack | Technology | 134 |
38,613,120 | https://en.wikipedia.org/wiki/Lithium%20molybdenum%20purple%20bronze | Lithium molybdenum purple bronze is a chemical compound with formula Li0.9Mo6O17, that is, a mixed oxide of molybdenum and lithium. It can be obtained as flat crystals with a purple-red color and metallic sheen (hence the "purple bronze" name).
This compound is one of several molybdenum bronzes with a general formula in which A is an alkali metal or thallium (Tl). It stands out among them (and also among the sub-class of "purple" molybdenum bronzes) for its peculiar electrical properties, including a marked anisotropy that makes it a "quasi-1D" conductor, and a metal-to-insulator transition as it is cooled below 30 K.
Preparation
The compound was first obtained by Martha Greenblatt and others by a temperature-gradient flux technique. In a typical preparation, a stoichiometric melt of the starting reagents is maintained in a temperature gradient from 490 to 640 °C over 15 cm, in vacuum, for several days. Excess reagents are then dissolved away with a hot potassium carbonate solution, releasing metallic-purple plate-like crystals a few millimetres wide and less than a millimetre thick.
Structure
The crystal structure of lithium molybdenum purple bronze was determined by Onoda and others through single-crystal X-ray diffraction. The crystal system is monoclinic, with approximate unit cell dimensions a = 1.2762 nm, b = 0.5523 nm, and c = 0.9499 nm, with angle β = 90.61°, volume V = 0.6695 nm³ and Z = 2. In typical crystals, a is the shortest dimension (perpendicular to the plates) and b the longest. The density is 4.24 g/cm³. The structure is rather different from that of potassium molybdenum purple bronze, except that both are organized in layers. The difference may be explained by the relative sizes of the Li⁺ and K⁺ ions.
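The quoted cell volume and density are mutually consistent, as the check below shows. It is a minimal sketch assuming only the monoclinic volume formula V = a·b·c·sin β, standard atomic masses, and the stated Z = 2.

```python
import math

a, b, c = 1.2762, 0.5523, 0.9499          # cell edges, nm
beta = math.radians(90.61)
V_nm3 = a * b * c * math.sin(beta)         # monoclinic cell volume
print(f"V = {V_nm3:.4f} nm^3")             # ~0.6695 nm^3, as quoted

# Density from Z = 2 formula units of Li0.9Mo6O17 per cell.
molar_mass = 0.9 * 6.94 + 6 * 95.95 + 17 * 16.00   # g/mol
N_A = 6.022e23                             # Avogadro constant, 1/mol
V_cm3 = V_nm3 * 1e-21                      # 1 nm^3 = 1e-21 cm^3
density = 2 * molar_mass / (N_A * V_cm3)
print(f"density = {density:.2f} g/cm^3")   # ~4.24 g/cm^3, as quoted
```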
The unit cell contains six crystallographically independent molybdenum sites. One-third of the molybdenum atoms are surrounded by four oxygens, two thirds are surrounded by six oxygens. The crystal is a stack of slabs; each slab consists of three layers of distorted octahedra sharing corners. The lithium ions are inserted in the large vacant sites between the slabs. There are zigzag chains of alternating molybdenum and oxygen atoms extending along the b axis.
Properties
Lithium molybdenum purple bronze is quite different from the sodium, potassium and thallium analogs. It has a three-dimensional crystal structure, but a pseudo-one-dimensional (1D) metallic character, eventually becoming a superconductor at about 2 K. Its properties are most spectacular below 5 meV. The Tomonaga-Luttinger liquid theory has been invoked to explain its anomalous behavior.
Electrical conductivity
At room temperature, Greenblatt and others (in 1984) measured the resistivity of lithium purple bronze along the a, b and c axes as 2.47 Ω cm, 0.0095 Ω cm, and on the order of 0.25 Ω cm, respectively. The conductivities would be in the ratio 1:250:10, which would make this compound an almost one-dimensional conductor. However, Da Luz and others (2007) measured 0.079, 0.018, and 0.050 Ω cm, respectively, which corresponds to conductivity ratios 1:6:2.4 for a:b:c; whereas H. Chen and others (2010) measured 0.854, 0.016, and 0.0645 Ω cm, respectively, which correspond to conductivity ratios of 1:53:13.
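The quoted anisotropy ratios follow from the resistivities, since conductivity is the reciprocal of resistivity; the sketch below normalizes each data set to the a-axis value. Note that the 1:6:2.4 ratios attributed to Da Luz do not follow exactly from the resistivities given here, so those figures may rest on additional measurements.

```python
# Resistivities (ohm cm) along the a, b and c axes from the three studies.
measurements = {
    "Greenblatt 1984": (2.47, 0.0095, 0.25),
    "Da Luz 2007":     (0.079, 0.018, 0.050),
    "Chen 2010":       (0.854, 0.016, 0.0645),
}

for label, (rho_a, rho_b, rho_c) in measurements.items():
    # sigma_i / sigma_a = rho_a / rho_i
    print(f"{label}: 1 : {rho_a / rho_b:.0f} : {rho_a / rho_c:.1f}")

# Greenblatt: 1 : 260 : 9.9  (quoted as roughly 1:250:10)
# Chen:       1 : 53 : 13.2  (quoted as 1:53:13)
```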
This anisotropy has been attributed to the crystal structure, specifically to the zigzag chains of molybdenum and oxygen atoms.
Resistivity and temperature
The resistivity along all three axes increases linearly with temperature from about 30 K to 300 K, as in a metal. This is anomalous, since such a law is expected only above the Debye temperature (about 400 K for this compound). The resistivity ratios along the three axes are preserved in that range.
Metal-insulator transition
As the lithium purple bronze is cooled from 30 K to 20 K, it changes abruptly to an insulator. After reaching a minimum at about 24 K, the resistivity increases 10-fold and becomes somewhat more isotropic, with conductivities 1:25:14. The anisotropy is partially restored if a magnetic field is applied perpendicular to the b axis. The transition may be related to the onset of a charge density wave. Santos and others have observed that the thermal expansion coefficient is largest along the a axis, so cooling will bring the conducting chains closer together, leading to a dimensional cross-over. The theory of Luttinger liquids then predicts such behavior. However, as of 2010 there was no consensus explanation for this transition. In 2023 it was suggested that the strange behaviour could be caused by an emergent symmetry (in contrast to symmetry breaking) arising from interference between the conduction electrons and dark excitons.
Superconducting state
Lithium molybdenum purple bronze becomes a superconductor between 1 and 2 K.
Thermal conductivity
Li0.9Mo6O17, due to spin–charge separation, can have a much higher thermal conductivity than predicted by the Wiedemann-Franz law.
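A minimal sketch of such a check: the Lorenz ratio L = κ/(σT) is compared with the Sommerfeld value L₀ ≈ 2.44×10⁻⁸ W Ω K⁻²; a result much greater than one signals a Wiedemann-Franz violation. The κ, σ and T values below are placeholders, not measurements from this article.

```python
L0 = 2.44e-8  # Sommerfeld value of the Lorenz number, W*ohm/K^2

def lorenz_ratio(kappa, sigma, T):
    """Lorenz number: kappa in W/(m K), sigma in S/m, T in K."""
    return kappa / (sigma * T)

# Placeholder values for illustration only.
L = lorenz_ratio(kappa=25.0, sigma=1.0e6, T=20.0)
print(L / L0)  # a value >> 1 indicates heat carried by more than the charge
```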
Magnetoresistance
The magnetoresistance of lithium purple bronze is negative when the magnetic field is applied along the b-axis, but large and positive when the field is applied along the a-axis and the c-axis.
See also
Sodium tungsten bronze, a golden to purple metallic-looking compound.
Magnetochromism
References
Molybdenum compounds
Lithium compounds
Oxides | Lithium molybdenum purple bronze | Chemistry | 1,203 |
27,646,402 | https://en.wikipedia.org/wiki/Microbial%20arene%20oxidation | Microbial arene oxidation (MAO) refers to the process by which microbial enzymes convert aromatic compounds into more oxidized products. The initial intermediates are arene oxides. A number of oxidized products are possible, the most commonly employed for organic synthesis are cis-1,2-dihydroxy-cyclohexa-3,5-dienes ("dihydrodiols").
The oxidation of aromatic compounds to dearomatized products is a step in the catabolism of arenes. Seminal work in this area was reported by Gibson on enzymes in Pseudomonas putida. The following enzymes have been identified that oxidize arenes to dihydrodiols:
Toluene dioxygenases (TDs)
Naphthalene dioxygenases (NDs)
Biphenyl dioxygenases (BPDs)
Benzoic acid dioxygenases (BZDs)
Benzene 1,2-dioxygenase
The substrate specificity of these enzymes is low. Enantiomeric purities in excess of 90% are routine, but vary with substrate. For instance, 1,4-substituted benzenes often give diols of lower enantiomeric purity. However, accessing the "unnatural" enantiomer of the product is often difficult without tailored enzymes.
Mechanism and stereochemistry
Oxidations by bacterial dioxygenases give cis-dihydrodiols. In contrast, mammalian and fungal arene dioxygenases yield trans-dihydrodiols. The cis configuration of the product together with isotopic labeling studies implicate the intermediacy of dioxetanes. This intermediate has not been observed, however.
cis-1,2-Dihydrocatechol is a versatile synthetic intermediate.
Scope
Toluene dioxygenase oxidizes toluene to 1,2-dihydroxyl-6-methylcyclohexa-3,5-diene. Aromatic esters are also good substrates for these enzymes, giving dihydrodiols in moderate yields along with some other oxidation products.
Naphthalene dioxygenase is found in a variety of Pseudomonas organisms. It catalyzes the oxidation of other polycyclic aromatic compounds as well, although yields tend to be low for substrates other than naphthalene.
Biphenyl dioxygenase oxidizes a relatively wide array of aromatic substrates and exhibits low substrate specificity. Biphenyl oxidation can also be accomplished using TDOs or NDOs.
The site selectivity of BZDs differs from that of the other three classes. Oxidation takes place in an ipso-cis fashion, independent of the substitution pattern of the arene.
Undesirable oxidized side products are often observed during microbial arene oxidations, particularly for "unnatural" substrates. Benzylic oxidation has been noted in a number of cases. Sulfides are always oxidized to sulfoxides.
An important limitation of the reaction is that only a single enantiomer of product is available when the wild type enzyme is used. Enzymes that generate "unnatural" enantiomers must be engineered via site-directed mutagenesis or other biochemical techniques. The development of organisms and enzymes that exhibit "unnatural" stereoselectivity is an ongoing research activity.
Because of concerns about the efficiency and selectivity of oxidation of more complex substrates, MAO is usually carried out early in synthetic sequences. However, simple dihydrodiols may be manipulated to give complex products through a variety of methods. In addition, the microbial oxidation process is compatible with a number of functional groups.
Iodo-containing dihydrodiols may be accessed by the oxidation of iodobenzene.
Dihydrodiols have been elaborated to a variety of alkaloid natural products. Two examples are shown below.
Conditions of MAO reactions require handling microbes in an aseptic environment. Often, specialized bacterial strains are needed to effect particular transformations. Dihydrodiols themselves must be stored under basic conditions (pH > 9) to prevent acid-catalyzed dehydration.
References
Organic oxidation reactions | Microbial arene oxidation | Chemistry | 882 |
42,369,267 | https://en.wikipedia.org/wiki/Perstraction | Perstraction is a membrane extraction process, where two liquid phases are contacted across a membrane. The desired species in the feed (solute), selectively crosses the membrane into the extracting solution. Perstraction was originally developed to overcome the downsides of liquid–liquid extraction, for example extractant toxicity and emulsion formation. Perstraction has been applied to many fields including fermentation, waste water treatment and alcohol-free beverage production.
Introduction
Perstraction is a separation technique developed from liquid-liquid extraction. Due to the presence of the membrane a wider selection of extractants can be used, including miscible solutions, for example the recovery of ammonia from waste water using sulphuric acid.
This process is analogous to pervaporation in some ways, but the permeate is in the liquid phase. The perstraction technique eliminates the problems of phase dispersion and separation altogether.
A basic perstraction setup is called single perstraction or membrane perstraction. An advantage is minimized toxic damage to microorganisms or enzymes. Nevertheless, perstraction has drawbacks such as expensive membranes and membrane clogging and fouling.
Applications
Perstraction in butanol fermentation
Perstraction has been combined with the ABE (acetone butanol ethanol) fermentation for butanol production. Butanol is toxic to the fermentation, therefore perstraction can be applied to remove the butanol from the vicinity of the bacteria as soon as it is produced. Liquid-liquid extraction (LLE) was combined with the ABE fermentation for in situ product recovery, but the extractants with the highest affinity for butanol tend to be toxic to the bacteria. The application of LLE would also require the extractant to be sterilised prior to contact with the fermentation broth. Perstraction can overcome these problems due to a membrane separating the fermentation broth from the extractant. As an in situ product recovery technique for the ABE fermentation perstraction is still in its development stages.
Amino acids separation through the charged membrane
A membrane brings many new elements to the separation. Amino acids have been separated by perstraction. Charged membranes were used; they not only separated the extractant from the feed solution but were also selective for amino acids, selecting them by pKa. The selectivity of a membrane is also affected by its thickness, pore diameter and charge potential. The bigger the pore, the better amino acids permeate the membrane. The higher the charge potential, the greater the electrostatic rejection. The thinner the membrane, the less selective it is.
Groundwater cleanup
Pollutants can be removed from groundwater by perstraction. Different techniques have been patented: the oldest was published in 1990 and the most recent in 1998. In the 2000s a few patent applications were filed, but no patents were granted.
Organic compounds have been concentrated from groundwater through a membrane. The concentration factor ranges from 1,000 to 10,000, bringing 0.1 ppb concentrations to between 0.1 and 1.0 ppm. The concentration of a contaminant can also be analyzed in real time. The membrane is a polymer such as polysulfone. The hole diameter is 300 μm and the thickness is 30 μm.
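The quoted figures are mutually consistent, as this trivial check shows (1 ppm = 1,000 ppb):

```python
feed_ppb = 0.1
for factor in (1_000, 10_000):
    print(f"x{factor}: {feed_ppb * factor / 1_000} ppm")
# x1000: 0.1 ppm; x10000: 1.0 ppm
```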
Removal of pharmaceuticals from water
Pharmaceuticals pass through sewage treatment plants, and compounds such as estrogen conjugates may cause problems. The drugs studied were common, present in the aquatic environment, and not adequately removed by sewage treatment plants; seven different drugs were included in the research. Dibutyl sebacate and oleic acid formed the liquid cores of capsules because they do not diffuse away from the capsules and have affinity for the drugs. Capsule external diameters were 740 μm and 680 μm and internal diameters were 570 μm and 500 μm. Agitation was 300 rpm. Equilibrium times were 30, 50 and 90 minutes.
Since dibutyl sebacate and oleic acid have different affinities for the drugs, they were used concurrently. Four drugs were extracted effectively within 40–50 minutes (at least 50% removed). Extraction rates did not change significantly above 150 rpm. Membrane thickness did not affect the result significantly; by contrast, capsule size had a marked effect on mass transfer.
Hydrophobic geldanamycin separated from aqueous media
An antibiotic called geldanamycin was separated from media by capsular perstraction. Geldanamycin is hydrophobic. The outer particle diameter varied from less than 500 to 750 μm. Alginate formed the shell of the capsule and its thickness varied from 30 to 90 μm. Dibutyl sebacate or oleic acid as the liquid core extracted geldanamycin well. The greater the agitation and the thinner the capsule membrane, the faster the transfer rate.
Geldanamycin was back-extracted from the capsules. Dibutyl sebacate capsules were disposable because the liquid core leaked out of the capsules during back-extraction. By contrast, oleic acid remained in the capsules during back-extraction when the extractant was saturated with oleic acid. Nevertheless, the presence of oleic acid in the back-extraction solution demanded additional purification steps (precipitation, centrifugation and filtration). Oleic acid was removed because it prevents crystallization of geldanamycin. Geldanamycin was then crystallized, and the end product was highly purified.
Enzymes can be immobilized on the capsule membrane. In this case, the capsule external diameter was 500 μm and the internal diameter 300 μm. The product of the enzyme-catalyzed reaction can be concentrated in the capsules, and end-product inhibition is low. Enzyme recycling can be performed by back-extracting the product. The technique has been applied to the hydrolysis of penicillin G.
See also
Osmosis - A process by which solvent molecules cross between liquids separated by a membrane
References
Analytical chemistry
Membrane technology | Perstraction | Chemistry | 1,234 |
7,297,180 | https://en.wikipedia.org/wiki/Digital%20Earth | Digital Earth is the name given to a concept by former US vice president Al Gore in 1998, describing a virtual representation of the Earth that is georeferenced and connected to the world's digital knowledge archives.
Concept
Original vision
In a speech prepared for the California Science Center in Los Angeles on January 31, 1998, Gore described a digital future where schoolchildren - indeed all the world's citizens - could interact with a computer-generated three-dimensional spinning virtual globe and access vast amounts of scientific and cultural information to help them understand the Earth and its human activities. The greater part of this knowledge store would be free to all via the Internet, however a commercial marketplace of related products and services was envisioned to co-exist, in part in order to support the expensive infrastructure such a system would require. The origin of the idea can be traced back to Buckminster Fuller's Geoscope, a large spherical display to represent geographic phenomena.
Many aspects of his proposal have been realized - for instance, virtual globe geo-browsers such as NASA World Wind, Google Earth and Microsoft's Bing Maps 3D for commercial, social and scientific applications. But the Gore speech outlined a truly global, collaborative linking of systems that has yet to happen. That vision has been continually interpreted and defined by the growing global community of interest described below. The Digital Earth imagined in the speech has been defined as an "organizing vision" to steer scientists and technologists towards a shared goal, promising substantial advances in many scientific and engineering areas, similar to the Information superhighway.
An emerging view
Two noteworthy excerpts from the Beijing Declaration on Digital Earth, ratified September 12, 2009 at the 6th International Symposium on Digital Earth in Beijing:
"Digital Earth is an integral part of other advanced technologies including: earth observation, geo-information systems, global positioning systems, communication networks, sensor webs, electromagnetic identifiers, virtual reality, grid computation, etc. It is seen as a global strategic contributor to scientific and technological developments, and will be a catalyst in finding solutions to international scientific and societal issues."
"Digital Earth should play a strategic and sustainable role in addressing such challenges to human society as natural resource depletion, food and water insecurity, energy shortages, environmental degradation, natural disasters response, population explosion, and, in particular, global climate change."
Next-generation digital Earth
A group of international geographic and environmental scientists from government, industry, and academia, brought together by the Vespucci Initiative for the Advancement of Geographic Information Science and the Joint Research Centre of the European Commission, published "Next-Generation Digital Earth", a position paper that suggests eight key elements:
Not one Digital Earth, but multiple connected globes/infrastructures addressing the needs of different audiences: citizens, communities, policymakers, scientists, educationalists.
Problem oriented: e.g. environment, health, societal benefit areas, and transparent on the impacts of technologies on the environment
Allowing search through time and space to find similar/analogue situations with real time data from both sensors and humans (different from what existing GIS can do, and different from adding analytical functions to a virtual globe)
Asking questions about change, identification of anomalies in space in both human and environmental domains (flag things that are not consistent with their surroundings in real time)
Enabling access to data, information, services, and models as well as scenarios and forecasts: from simple queries to complex analyses across the environmental and social domains.
Supporting the visualization of abstract concepts and data types (e.g. low income, poor health, and semantics)
Based on open access, and participation across multiple technological platforms, and media (e.g. text, voice and multi-media)
Engaging, interactive, exploratory, and a laboratory for learning and for multidisciplinary education and science.
Key developments
Significant progress towards Digital Earth has been achieved over the last decade as collected in a survey paper by Mahdavi-Amiri et al., including work in these categories:
Spatial Data Infrastructure (SDI)
The number of Spatial Data Infrastructures has grown steadily since the early 1990s, aided in part by interoperability standards maintained by the Open Geospatial Consortium and the International Organization for Standardization (ISO). Significant recent efforts to link and coordinate SDI's include Infrastructure for Spatial Information in Europe (INSPIRE) and the UNSDI Initiative of the UN Geographic Information Working Group (UNIGWG). Between 1998 and 2001, the NASA-chaired Interagency Digital Earth Working Group (IDEW) contributed to this growth with a particular focus on interoperability issues, giving rise to the Web Map Service standard among others.
Geobrowsers
The scientific use of geo-browser virtual globes such as Google Earth, NASA's World Wind, and ESRI's ArcGIS Explorer has grown significantly as their functionality has improved and with the KML format having become the de facto standard for globe visualizations. Numerous examples can be viewed at the Google Earth Outreach Showcase and at the World Wind Java Demo Applications and Applets.
Sensor networks
Geosensors are defined as "...any device receiving and measuring environmental stimuli that can be geographically referenced." Large scale networks of geosensors have been in place for many years, measuring Earth surface, hydrological and atmospheric phenomena. The advent of the Internet led to a large expansion of such networks, and efforts like Global Earth Observation System of Systems (GEOSS) Initiative aim to connect them.
Volunteered Geographic Information (VGI)
The term Volunteered Geographic Information was coined in 2007 by geographer Michael Goodchild, referring to the rapidly growing volume of social and scientific georeferenced user-generated content being made available on the Web by both expert and non-expert individuals and groups. This phenomenon is seen as an emerging Geoweb that provides Application Programming Interfaces (API's) to software developers and increasingly user-friendly web mapping software to both scientists and the public at large.
International community
The International Journal of Digital Earth is a peer-reviewed research journal, launched in 2008, concerned with the science and technology of Digital Earth and its applications in all major disciplines.
The International Society for Digital Earth is a non-political, non-governmental and not-for-profit international organization, principally for promotion of academic exchange, science and technology innovation, education, and international collaboration.
Several International Symposia on Digital Earth (ISDE) have been held.
There have been seven ISDE symposia and several Digital Earth Summits; proceedings for many of them are available. The 7th Symposium was held in Perth, Western Australia in 2011.
The 4th Digital Earth Summit was held in Wellington, New Zealand in September 2012.
Digital Earth Reference Model (DERM)
The term Digital Earth Reference Model (DERM) was coined by Tim Foresman in the context of a vision for an all-encompassing geospatial platform, as an abstraction for information flow in support of Al Gore's vision for a Digital Earth. The Digital Earth Reference Model seeks to facilitate and promote the use of georeferenced information from multiple sources over the Internet.
A digital Earth reference model defines a fixed global reference frame for the Earth using four principles of a digital system, namely:
Discrete partitioning using regular or irregular cell mesh, tiling or Grid;
Data acquisition using signal processing theory (sampling and quantizing) for assigning binary values from continuous analog or other digital sources to the discrete cell partitions;
An ordering or naming of cells that can provide both unique spatial indexing and geographic location address;
A set of mathematical operations built on the indexing for algebraic, geometric, Boolean and image processing transforms, etc.
The Open Geospatial Consortium has a spatial reference system standard based on the DERM called a Discrete Global Grid System (DGGS). According to OGC, "a DGGS is a spatial reference system that uses a hierarchical tessellation of cells to partition and address the globe. DGGS are characterized by the properties of their cell structure, geo-encoding, quantization strategy and associated mathematical functions. The OGC DGGS standard supports the specification of standardized DGGS infrastructures that enable the integrated analysis of very large, multi-source, multi-resolution, multi-dimensional, distributed geospatial data. Interoperability between OGC DGGS implementations is anticipated through extension interface encodings of OGC Web Services." Thus, the DGGS is a discrete, hierarchical, information grid with an addressing (or indexing) scheme to assign unique addresses to each cell across the entire DGGS Domain.
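As an illustration of the DERM/DGGS idea (discrete partitioning, quantization of a continuous location, and a hierarchical cell address that doubles as a spatial index), the sketch below implements a toy quadtree grid on latitude/longitude. It is not the OGC DGGS standard, only a minimal analogue.

```python
def cell_address(lat, lon, depth):
    """Quantize a lat/lon point into a hierarchical quadtree cell address."""
    lat_min, lat_max = -90.0, 90.0
    lon_min, lon_max = -180.0, 180.0
    digits = []
    for _ in range(depth):
        quadrant = 0
        lat_mid = (lat_min + lat_max) / 2
        lon_mid = (lon_min + lon_max) / 2
        if lat >= lat_mid:          # upper half
            lat_min, quadrant = lat_mid, quadrant + 2
        else:
            lat_max = lat_mid
        if lon >= lon_mid:          # right half
            lon_min, quadrant = lon_mid, quadrant + 1
        else:
            lon_max = lon_mid
        digits.append(str(quadrant))
    return "".join(digits)

# Each extra digit refines the cell; a shared prefix implies containment.
print(cell_address(39.9, 116.4, 8))    # a cell containing Beijing
print(cell_address(-41.3, 174.8, 8))   # a cell containing Wellington
```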
Background
United States
Technology developments that support the current Digital Earth technological framework can be traced to U.S. computing advances derived from the Cold War competition, the space race, and commercial innovations. Therefore, many innovations can be tracked to corporations working for the Department of Defense or NASA. However, the philosophical foundations for Digital Earth can be more closely aligned with the increased awareness of global changes and the need to better understand the concepts of sustainability for the planet's survival. These roots can be traced back to visionaries such as Buckminster Fuller who proposed development of a GeoScope half a century ago, analogous to a microscope to examine and improve our understanding of the planet Earth.
From Fall 1998 until Fall 2000, NASA led the U.S. Digital Earth initiative in cooperation with its sister government agencies, including the Federal Geographic Data Committee (FGDC). Attention to consensus development of standards, protocols and tools through cooperative test-bed initiatives was the primary process for advancement of this initiative within the government community.
In 1999, NASA was selected to head a new Interagency Digital Earth Working Group (IDEW), due to its reputation for technology innovations and its focus on the study of planetary change. The new initiative was located in the NASA's Office of Earth Sciences. This titular focus was considered necessary to help align over 17 government agencies and keep sustainability and Earth oriented applications as a guiding principle for the Digital Earth enterprise. Components for development of 3-D Earth graphic-user-interfaces (GUIs) were placed into various technological sectors to stimulate cooperative development support. While initially limited to government personnel, industry and academia were early observers attending IDEW workshops to discuss topics such as, visualization, information fusion, standards and interoperability, advanced computational algorithms, digital libraries and museums. In March 2000, at a special IDEW meeting hosted by Oracle Corporation in Herndon, Virginia, industry representatives demonstrated several promising 3-D visualization prototypes. Within two years, these were captivating international audiences, including Kofi Annan and Colin Powell, in government, business, science, and mass media who began to purchase the early commercial geo-browsers. Just as the spectacular Apollo photography of Earthrise provided an inspiring Earth-centric image for new generations to appreciate the fragility of our biosphere, the 3-D Digital Earths began inspiring growing numbers of people to the possibility of better understanding and possibly saving our planet. Introduction of satellite data into commercially accessible spatial toolboxes significantly advanced the capacity to map, monitor, and manage our planet's resources and provide a unifying perspective on the Digital Earth vision.
After Al Gore lost the 2000 presidential election, the incoming administration considered the programmatic moniker Digital Earth a political liability. Digital Earth was relegated to a minority status within the FGDC, used primarily to define 3-D visualization reference models.
China
In 1999, with the Chinese government's full backing, the inaugural International Symposium on Digital Earth in Beijing provided a venue for the extensive international support for implementing the Gore Digital Earth vision introduced a year earlier. Hundreds of digital earth cities created by governments and universities resulted. In China, Digital Earth became a metaphor for modernization and automation with computers, leading to its incorporation into a five-year modernization plan. Originating from China's satellite remote sensing community, Digital Earth prowess spread to a range of applications including flood predictions, dust cloud modeling, environmental assessments, and city planning. China has been omnipresent at all international Digital Earth conferences since and has recently founded the International Society for Digital Earth, one of the first NGOs created by the Chinese Academy of Sciences. In 2009, the International Symposium on Digital Earth returned to Beijing for its 6th meeting.
United Nations
In 2000, the United Nations Environment Programme (UNEP) advanced the Digital Earth to enhance decision-makers' access to information for then Secretary-General Kofi Annan and the United Nations Security Council. UNEP promoted use of web-based geospatial technologies with the ability to access the world's environmental information, in association with economic and social policy issues. A reorganization of UNEP's data and information resources was initiated in 2001, based on the GSDI/DE architecture for a network of distributed and interoperable databases creating a framework of linked servers. The design concept was based upon using a growing network of internet mapping software and database content with advanced capabilities to link GIS tools and applications. UNEP.net, launched in February 2001, provided UN staff with an unparalleled facility for accessing authoritative environmental data resources and a visible example to others in the UN community. However, a universal user interface for UNEP.net, suitable for members of Security Council, that is non-scientists, did not exist. UNEP began actively testing prototypes for a UNEP geo-browser beginning in mid-2001 with a showcase for the African community displayed at the 5th African GIS Conference in Nairobi, Kenya November 2001. Keyhole Technology, Inc. (later purchased in 2004 by Google and to become Google Earth) was contracted to develop and demonstrate the first full globe 3-D interactive Digital Earth using web-stream data from a distributed database located on servers around the planet. A concerted effort within the UN community, via the Geographic Information Working Group (UNGIWG), followed immediately, including purchase of early Keyhole systems by 2002. UNEP provided further public demonstrations for this early Digital Earth system at the World Summit on Sustainable Development in September, 2002 at Johannesburg, South Africa. In seeking an engineering approach to system-wide development of the Digital Earth model, recommendations were made at the 3rd UNGIWG Meeting, June 2002, Washington, D.C. for creating a document on the Functional User Requirements for geo-browsers. This proposal was communicated to the ISDE Secretariat in Beijing and the organizing committee for the 3rd International Symposium on Digital Earth and agreement was reached by the Chinese Academy of Sciences-sponsored Secretariat to host the first of the two Digital Earth geo-browser meetings.
Japan
Japan, led by Keio University and JAXA, has also played a prominent international role in Digital Earth, helping to create the Digital Asia Network, with a secretariat located in Bangkok, to promote regional cooperation and initiatives. Citizens in Gifu Prefecture upload information from their smartphones to community-scale Digital Earth programs on topics ranging from first sightings of fireflies in spring to the location of blocked handicap access ramps.
Events
See also
Destination Earth (European Union)
Digital twin
Geocode
Geodesic grid
Géoportail
Geoweb
Grid reference
International Cartographic Association (ICA)
International Society for Digital Earth (ISDE)
Spatial index
References
Further reading
External links
Digital Earth technologies
ADEPT - Alexandria Digital Earth Prototype (1999–2004)
US Government Digital Earth Reference Model (DERM)
Global Spatial Data Model (GSDM)
Planetary Skin: A global platform for a new Era of Collaboration
Digital marketing China
PYXIS WorldView Studio: Digital Earth platform for spatial analysis and sharing map data
Geographic data and information | Digital Earth | Technology | 3,166 |
916,555 | https://en.wikipedia.org/wiki/Corkscrew%20%28program%29 | Corkscrew is a computer program, written by Patrick Padgett, that enables the user to tunnel SSH connections through most HTTP and HTTPS proxy servers. Combined with features of SSH such as port forwarding, this can allow many types of services to be run securely over the SSH via HTTP connections.
Supported proxy servers:
Gauntlet
CacheFlow
Internet Junkbuster
Squid
Apache's mod_proxy
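A typical deployment adds a ProxyCommand line to the SSH client configuration, so that connections are relayed through the proxy's HTTP CONNECT method. The host names and port below are placeholders:

```
# ~/.ssh/config -- tunnel SSH through an HTTP proxy via corkscrew
Host example.org
    ProxyCommand corkscrew proxy.example.com 8080 %h %p
```

Corkscrew also accepts an optional fifth argument naming a file with user:password credentials, for proxies that require basic authentication.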
References
External links
Homepage at GitHub
Cryptographic software
Secure Shell | Corkscrew (program) | Mathematics | 98 |
23,041,996 | https://en.wikipedia.org/wiki/Pivhydrazine | {{Drugbox
| verifiedrevid = 451218082
| IUPAC_name = N-benzyl-2,2-dimethyl-propanehydrazide
| image = Pivalylbenzhydrazine.svg
| image_class = skin-invert-image
| width = 150px
| tradename =
| pregnancy_category =
| legal_status = Rx-only
| routes_of_administration = Oral
| bioavailability =
| metabolism =
| elimination_half-life =
| excretion =
| CAS_number_Ref =
| CAS_number = 306-19-4
| UNII_Ref =
| UNII = TK1T520ASG
| ATC_prefix = none
| ATC_suffix =
| PubChem = 9375
| ChEMBL = 2106941
| ChemSpiderID_Ref =
| ChemSpiderID = 9007
| C=12 | H=18 | N=2 | O=1
| StdInChI_Ref =
| StdInChI = 1S/C12H18N2O/c1-12(2,3)11(15)14-13-9-10-7-5-4-6-8-10/h4-8,13H,9H2,1-3H3,(H,14,15)
| StdInChIKey_Ref =
| StdInChIKey = FWWDFDMCZLOXQI-UHFFFAOYSA-N
| synonyms = Angorvid, Betamezid, Neomarsilid, Pivazide, Pivhydrazine
}}Pivhydrazine (trade name Tersavid), also known as pivalylbenzhydrazine and pivazide''', is an irreversible and non-selective monoamine oxidase inhibitor (MAOI) of the hydrazine family. It was formerly used as an antidepressant in the 1960s, but has since been discontinued.
See also
Hydrazine (antidepressant)
References
Hydrazides
Monoamine oxidase inhibitors
Withdrawn drugs | Pivhydrazine | Chemistry | 461 |
13,982,712 | https://en.wikipedia.org/wiki/BE%26K | BE&K, Inc., was a global engineering and construction company based in Birmingham, Alabama, United States.
History
BE&K, Inc. was founded in 1972 and named after its founders: Peter Bolvig, William Edmonds, and Ted Kennedy. BE&K was acquired by KBR, Inc. in 2008.
Operations
BE&K, Inc. operated in commercial and industrial sectors, including healthcare, telecommunications and education markets. BE&K, Inc. had a presence across the United States and in Europe through its subsidiaries. Its subsidiaries and affiliates included Allstates Technical Services, Inc.; Enprima; MEI Consultants, Inc.; NorthStar Communications Group, Inc.; QBEK; Rintekno Group; BE&K's Saginaw Warehouse; SW&B Construction Corporation; and Terranext, LLC.
In 1989, BE&K acquired FN Thompson Company, a Charlotte, North Carolina–based commercial construction firm. Seven years later, in 1996, BE&K acquired Suitt Construction Company, based in Greenville, South Carolina. In 2004, FN Thompson and Suitt Construction were merged into a separate subsidiary, BE&K Building Group, LLC, under the BE&K brand.
In 2007, BE&K was ranked 47th in the Engineering News-Record (ENR) Top 500 Design Firms list, as well as appearing in several other rankings by ENR.
Acquisition
On May 7, 2008, BE&K, Inc. announced that it would be acquired by the Houston, Texas–based construction firm KBR, Inc. for $550 million.
In 2015, KBR sold the BE&K Building Group LLC subsidiary and the BE&K brand, including all associated trademarks and domains, to Pernix Group, Inc. The BE&K brand is now wholly owned by Pernix Group and solely used by BE&K Building Group, LLC. The remainder of BE&K's business and associated subsidiaries have either been fully absorbed by KBR or divested to other purchasers with all operations in Birmingham, Alabama discontinued.
References
Construction | BE&K | Engineering | 430 |
59,627,363 | https://en.wikipedia.org/wiki/Cancer%20selection | Cancer selection can be viewed through the lens of natural selection. The animal host's body is the environment that applies the selective pressures upon cancer cells. The most fit cancer cells have traits that allow them to outcompete other cancer cells to which they are related but from which they are genetically different. This genetic diversity of cells within a tumor gives cancer an evolutionary advantage over the host's ability to inhibit and destroy tumors. Therefore, other selective pressures, such as clinical and pharmaceutical treatments, are needed to help destroy the large number of genetically diverse cancerous cells within a tumor. It is this high genetic diversity between cancer cells within a tumor that makes cancer a formidable foe for the survival of animal hosts. It has also been proposed that cancer selection is a selective force that has driven the evolution of animals. Therefore, cancer and animals have been paired as competitors in co-evolution throughout time.
Natural selection
Evolution, which is driven by natural selection, is the cornerstone for nearly all branches of biology, including cancer biology. In 1859, Charles Darwin's book On the Origin of Species was published, in which Darwin proposed his theory of evolution by means of natural selection. Natural selection is the force that drives changes in the phenotypes observed in populations over time, and is therefore responsible for the diversity amongst all living things. It is through the pressures applied by natural selection upon individuals that evolutionary change occurs over time. Natural selection is simply the set of selective pressures acting upon individuals within a population, arising from changes in their environment, which select the traits best fitted to those changes.
Selection and cancer
These same observations that Darwin used to explain the diversity in phenotypes amongst all living things can also be applied to cancer biology, to explain how selection drives change in the population of cells within a tumor over time. For the purposes of cancer evolution, the body of the organism is the environment, and changes in that environment, whether via natural processes or clinical therapies, apply the selective pressures upon cancer that can drive a selective adaptation in cancerous tumor cells.
Cancers as a product of host evolution
Cancer is a very ancient pathology that emerged with multicellular organisms. Hosts, therefore, have had a very long evolutionary history of co-evolving with cancers. Over evolutionary time hosts develop an increasing number of cancer suppression mechanisms (e.g. cytotoxic lymphocytes, natural killer cells, and tumor suppressor genes such as p53, including increases in the copy number of those genes). Cancers are the outcome of cells that escape these evolved suppression mechanisms.
Diversity is a selective advantage
Cancer is a disease which is highly diverse, not only in its pathology but in its initiation and progression from non-cancerous tissue to malignant tumor tissue. Cancer is considered to be stochastic in nature, in that there are many variables and probabilities that contribute to how a cell or tissue progresses from a non-cancerous state, to a cancerous one, and eventually to metastasis. Cancer differs from many other diseases due to the uniquely long lifespan of the disease, which contributes to the diversity of cancer cells both within a tumor and between related tumors in a host.
Tumor heterogeneity
As time passes cancerous tumors can progress in genetic diversity amongst clonal cells due to the ability to accumulate changes over time, until the tumor reaches homeostasis thus allowing for the spread of the disease throughout the body of a host. Overlap this pathway with all of the other developmental pathways and possible events that can lead to the same outcome of metastasis, and it becomes apparent that cancer has a unique ability to find a way to progress into its cancerous phenotype. Therefore, from the moment of initiation putting cells or tissues down a pathway towards metastasis the majority of tumor cells will accumulate mutations that increases genetic diversity within the tumor (intra-tumor genetic heterogeneity). Not only can tumors be composed of genetically diverse cells, it can also lead to inter-tumor heterogeneity meaning that related tumors within the same host are genetically different. This tumor heterogeneity gives a selective advantage to the best fit clonal and sub-clonal cells of a tumor. Due to the heterogeneity and the unchecked proliferation of tumor cells, cancer is given a selective advantage not only over non-cancerous cells, but also against selective pressures that choose against it, such as pharmaceutical and clinical therapies, and also the host's immune system.
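This clonal dynamic can be illustrated with a toy Moran-style simulation: in a tumor of fixed size, a subclone with even a modest fitness advantage usually sweeps the population. All numbers below are arbitrary and purely illustrative.

```python
import random

random.seed(1)
N = 1_000        # total cells in the tumor (kept fixed)
mutant = 50      # initial size of the fitter subclone
w = 1.2          # relative fitness of the mutant subclone

for _ in range(200_000):
    # A cell divides with probability proportional to fitness...
    p_mutant_divides = w * mutant / (w * mutant + (N - mutant))
    births = int(random.random() < p_mutant_divides)
    # ...and its daughter replaces a uniformly chosen cell.
    deaths = int(random.random() < mutant / N)
    mutant += births - deaths

print(mutant / N)  # usually close to 1.0: the fitter subclone has swept
```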
Resistance
Due to its diverse nature, cancer has been able to evolve very defined and specific mechanisms to resist selective pressures. The goal of selective pressures applied against cancer is to rid the disease of its diversity, forcing it back into an initial, less harmful, more easily treatable, less diverse, non-cancerous neoplastic state in which it is not considered lethal. A neoplastic state, or neoplasm, is simply an abnormal growth of tissue, which can range from a harmless non-cancerous mole to a cancerous tumor. Cancer can circumvent negative selective pressures due to its ability to accumulate mutations that cause genetic diversity in tumor cells as the cells proliferate. Cancer seems to have evolved a propensity for, or at the very least a selection for, fitness. This is demonstrated in the ability of tumors undergoing large numbers of mutations to find a way for their constituent cells to survive and to produce cellular offspring that are better fit for survival. Therefore, cancer initiation and progression must be highly conserved evolutionarily, or a tumor would dissociate merely due to the inordinate volume of mutations that occur within it.
Evolution in animals driven by cancer
Another interesting way to look at cancer evolution is not through the lens of how selective pressures shape the disease throughout its time spent within an organism, but rather to think of cancer as a selective force itself shaping the evolution of the populations of animal hosts. By taking this approach cancer selection would be defined in the same terms that natural selection and artificial selection are defined. This means that like natural and artificial selection, cancer selection would be defined as a selective force that is capable of driving population diversity and over time lead to evolution.
Cancer is selfish
Cancer is often described as selfish, in that it is composed of selfish cell lines that produce progeny with higher fitness and reproductive success than parental cells, allowing them to outcompete other clonal cells. This increase in fitness (cancer progression) is of course detrimental to the host within which the cancer resides. Therefore, one could view cancer and animal hosts as intertwined in the complicated dance known to biologists as co-evolution. This theory proposes that as animals evolve new morphological traits and life-history behaviors they become more susceptible to developing cancer. Cancer therefore gains an evolutionary advantage over animals, because newly evolved animal traits offer it something to select against, or to exploit for its own survival. This places the selective pressure back upon animal species to evolve or forever succumb to cancer's selective pressures. Most recently it has been theorized that all of the morphological and life-history diversity seen today in animals is the result of the uncountable deaths caused by cancer in ancestral animal lineages.
Examples
Cancer is a disease with a long lifespan. Therefore, as animals evolved into bigger and more complex organisms with longer lifespans, their morphologies were strongly constrained by the need to adapt resistance to the negative selective pressures that cancer placed upon them. For cancer cells to thrive they must be able to proliferate unchecked and uncontrolled within the tissues of their animal hosts. Therefore, animals have adapted to cancer selection by evolving tumor suppressor genes. These genes help inhibit the initiation and progression of cancerous cells.
References
Biological interactions
Cancer
Cancer research
Natural selection | Cancer selection | Biology | 1,566 |
12,661,414 | https://en.wikipedia.org/wiki/Field%20emitter%20array | A field emitter array (FEA) is a particular form of large-area field electron source. FEAs are prepared on a silicon substrate by lithographic techniques similar to those used in the fabrication of integrated circuits. Their structure consists of many individual, similar, small-field electron emitters, usually organized in a regular two-dimensional pattern. FEAs need to be distinguished from "film" or "mat" type large-area sources, where a thin film-like layer of material is deposited onto a substrate, using a uniform deposition process, in the hope or expectation that (as a result of statistical irregularities in the process) this film will contain a sufficiently large number of individual emission sites.
Spindt arrays
The original field emitter array was the Spindt array, in which the individual field emitters are small sharp molybdenum cones. Each is deposited inside a cylindrical void in an oxide film, with a counterelectrode deposited on the top of the film. The counterelectrode (called the "gate") contains a separate circular aperture for each conical emitter. The device is named after Charles A. Spindt, who developed this technology at SRI International, publishing the first article describing a single emitter tip microfabricated on a wafer in 1968.
Spindt, Shoulders and Heynick filed a U.S. Patent in 1970 for a vacuum device comprising an array of emitter tips.
Each individual cone is referred to as a Spindt tip. Because Spindt tips have sharp apices, they can generate a high local electric field using a relatively low gate voltage (less than 100 V). Using lithographic manufacturing techniques, individual emitters can be packed extremely close together, resulting in a high average (or "macroscopic") current density of up to 2×10⁷ A/m². Spindt-type emitters have a higher emission intensity and a narrower angular distribution than other FEA technologies.
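For scale, the quoted macroscopic current density converts as follows (a back-of-the-envelope illustration, not a figure from the source):

    2\times10^{7}\ \mathrm{A/m^{2}} = 2\times10^{3}\ \mathrm{A/cm^{2}},
    \qquad
    J \cdot A = 2\times10^{7}\ \mathrm{A/m^{2}} \times 10^{-6}\ \mathrm{m^{2}} = 20\ \mathrm{A}

so a 1 mm² array driven at this density would supply roughly 20 A of emission current.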
Nano-Spindt arrays
Nano-Spindt arrays represent an evolution of the traditional Spindt-type emitter. Each individual tip is several orders of magnitude smaller; as a result, gate voltages can be lower, since the distance from tip to gate is reduced. In addition, the current extracted from each individual tip is lower, which should result in improved reliability.
Carbon Nanotube (CNT) arrays
An alternative form of FEA is fabricated by creating voids in an oxide film (as for a Spindt array) and then using standard methods to grow one or more carbon nanotubes (CNTs) in each void.
It is also possible to grow "free-standing" CNT arrays.
Applications
Essentially very small electron beam generators, FEAs have been applied in many different domains. FEAs have been used to create flat-panel displays, where they are known as field emission displays (or "nano-emissive displays"). They may also be used in microwave generators, and in RF communications, where they could serve as the cathode in traveling-wave tubes (TWTs).
Recently, there has been renewed interest in using field emitter arrays as cold cathodes in X-ray tubes. FEAs offer a number of potential advantages over conventional thermionic cathodes, including low power consumption, instantaneous switching, and independent control of current and voltage.
References
See also
Field emission display
Field electron emission
Vacuum tubes using field electron emitters
Cold cathode
Vacuum tubes | Field emitter array | Physics | 707 |
53,535,975 | https://en.wikipedia.org/wiki/Bloomsbury%20Dispensary%20for%20the%20Relief%20of%20the%20Sick%20Poor | The Bloomsbury Dispensary for the Relief of the Sick Poor was an institution founded in 1801 to provide medical aid and suitable nourishment to the poor people of that part of London.
George Pinckard founded the dispensary on 26 October 1801 and became its first physician, remaining there for thirty years. It was established at 62 Great Russell Street.
References
Charities based in London
19th century in London
Pharmacy
Bloomsbury
1801 establishments in England | Bloomsbury Dispensary for the Relief of the Sick Poor | Chemistry | 93 |
21,011,497 | https://en.wikipedia.org/wiki/Circinus%20X-1 | Circinus X-1 is an X-ray binary star system that includes a neutron star. Observation of Circinus X-1 in July 2007 revealed the presence of X-ray jets normally found in black hole systems; it is the first of the sort to be discovered that displays this similarity to black holes. Circinus X-1 may be among the youngest X-ray binaries observed.
Location, distance
On June 14, 1969, an Aerobee 150 rocket, launched from Natal, Rio Grande do Norte, Brazil, obtained X-ray data during a scan of the Norma-Lupus-Circinus region that detected a well-isolated source at galactic coordinates ℓ = 321.4±0.9°, b = -0.5±2°, within the constellation Circinus, referred to as Circinus XR-1 (Cir XR-1).
The distance of Circinus X-1 was not well established, with a low estimate of 13,400 light years and high estimate of 26,000 light years.
On June 23, 2015, an article published on the website of NASA's Chandra X-ray Observatory revealed that an international team of astronomers had succeeded in determining its distance from Earth with more precision, via triangulation of X-ray light emitted by the star and echoed through interstellar dust clouds, as about 30,700 light-years.
X-ray source and age related to supernova remnant
A 16.6 day X-ray period was found by Kaluzienski et al. The X-ray source is assumed to be a neutron star as part of a low-mass X-ray binary (LMXB), type-I X-ray burster. The X-ray and radio nebulae surrounding Circinus X-1 have properties consistent with a young supernova remnant. This rare case of an X-ray binary apparently associated with a supernova remnant suggests the binary is very young on cosmic time scales, possibly less than 4600 years old. An association of Circinus X-1 with a different nearby supernova remnant, G321.9-0.3, has been ruled out.
Other spectral regions
The binary nature of Cir X-1 has been established. The binary's radio component and a possible visual counterpart were identified by Whelan et al. Its infrared counterpart was located and found to flare with a 16.6-day period by Glass. A (heavily reddened) precise optical counterpart (now known as BR Cir) was identified by Moneti.
References
External links
X-ray binaries
Circinus
Circini, BR
B-type supergiants
Neutron stars
A-type supergiants | Circinus X-1 | Astronomy | 569 |
4,677,186 | https://en.wikipedia.org/wiki/Wien%20approximation | Wien's approximation (also sometimes called Wien's law or the Wien distribution law) is a law of physics used to describe the spectrum of thermal radiation (frequently called the blackbody function). This law was first derived by Wilhelm Wien in 1896. The equation does accurately describe the short-wavelength (high-frequency) spectrum of thermal emission from objects, but it fails to accurately fit the experimental data for long-wavelength (low-frequency) emission.
Details
Wien derived his law from thermodynamic arguments, several years before Planck introduced the quantization of radiation.
Wien's original paper did not contain the Planck constant. In this paper, Wien took the wavelength of black-body radiation and combined it with the Maxwell–Boltzmann energy distribution for atoms. The exponential curve was created by the use of Euler's number e raised to the power of the temperature multiplied by a constant. Fundamental constants were later introduced by Max Planck.
The law may be written as

    I(\nu, T) = \frac{2 h \nu^{3}}{c^{2}} e^{-\frac{h \nu}{k T}}

(note the simple exponential frequency dependence of this approximation) or, by introducing natural Planck units,

    I(\nu, T) = 2 \nu^{3} e^{-\nu / T}

where:
I(ν, T) is the amount of energy per unit surface area per unit time per unit solid angle per unit frequency emitted at a frequency ν,
T is the temperature of the black body,
h is the Planck constant,
c is the speed of light, and
k is the Boltzmann constant.
This equation may also be written as

    I(\lambda, T) = \frac{2 h c^{2}}{\lambda^{5}} e^{-\frac{h c}{\lambda k T}}

where I(λ, T) is the amount of energy per unit surface area per unit time per unit solid angle per unit wavelength emitted at a wavelength λ. Wien acknowledges Friedrich Paschen in his original paper as having supplied him with the same formula based on Paschen's experimental observations.
The peak value of this curve, as determined by setting the derivative of the equation equal to zero and solving, occurs at
a wavelength

    \lambda_{\max} = \frac{h c}{5 k T} \approx \frac{2.88 \times 10^{-3}\ \mathrm{m\,K}}{T}

and frequency

    \nu_{\max} = \frac{3 k T}{h} \approx \left(6.25 \times 10^{10}\ \mathrm{Hz/K}\right) T
Relation to Planck's law
The Wien approximation was originally proposed as a description of the complete spectrum of thermal radiation, although it failed to accurately describe long-wavelength (low-frequency) emission. However, it was soon superseded by Planck's law, which accurately describes the full spectrum, derived by treating the radiation as a photon gas and accordingly applying Bose–Einstein in place of Maxwell–Boltzmann statistics. Planck's law may be given as

    I(\nu, T) = \frac{2 h \nu^{3}}{c^{2}} \, \frac{1}{e^{\frac{h \nu}{k T}} - 1}

The Wien approximation may be derived from Planck's law by assuming h\nu \gg kT. When this is true, then

    \frac{1}{e^{\frac{h \nu}{k T}} - 1} \approx e^{-\frac{h \nu}{k T}}
and so the Wien approximation gets ever closer to Planck's law as the frequency increases.
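The convergence is easy to check numerically. Below is a minimal Python sketch (illustrative, not from the source) comparing the two formulas at a few frequencies for an assumed temperature of 5800 K; the ratio tends to 1 as hν/kT grows:

    import numpy as np

    h = 6.62607015e-34   # Planck constant, J s
    c = 2.99792458e8     # speed of light, m/s
    k = 1.380649e-23     # Boltzmann constant, J/K

    def planck(nu, T):
        # Planck's law: 2 h nu^3 / c^2 / (exp(h nu / k T) - 1)
        return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

    def wien(nu, T):
        # Wien approximation: 2 h nu^3 / c^2 * exp(-h nu / k T)
        return 2 * h * nu**3 / c**2 * np.exp(-h * nu / (k * T))

    T = 5800.0  # assumed temperature, K
    for nu in (1e13, 1e14, 1e15):
        print(f"nu = {nu:.0e} Hz: Wien/Planck = {wien(nu, T) / planck(nu, T):.4f}")
    # The printed ratio rises toward 1 as the frequency (and h*nu/kT) increases.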
Other approximations of thermal radiation
The Rayleigh–Jeans law developed by Lord Rayleigh may be used to accurately describe the long wavelength spectrum of thermal radiation but fails to describe the short wavelength spectrum of thermal emission.
See also
ASTM Subcommittee E20.02 on Radiation Thermometry
Sakuma–Hattori equation
Ultraviolet catastrophe
Wien's displacement law
References
Statistical mechanics
Electromagnetic radiation
1896 in science
1896 in Germany | Wien approximation | Physics | 521 |
5,037,234 | https://en.wikipedia.org/wiki/Hyalin | Hyalin is a protein released from the cortical granules of a fertilized animal egg. The released hyalin modifies the extracellular matrix of the fertilized egg to block other sperm from binding to the egg, and is known as the slow-block to polyspermy. All animals have this slow-block mechanism.
Hyalin is a large, acidic protein which aids in embryonic development. The protein has strong adhesive properties which can help with cell differentiation and as a polyspermy prevention component. It forms the hyaline layer which covers the surface of the egg after insemination.
Structure
Its physical structure has a major and a minor component. One is filamentous, consisting of flexible molecules with a globular domain head at the end. Its conformation is retained mainly by disulfide bonds, as virtually all cysteine residues are found in the disulfide form, but hydrophobic forces and salt linkages also stabilize the molecule. The filament is about 75 nm long, and the head is club-shaped with a diameter of 12 nm. An isoform of the molecule exists with a longer filament of 125 nm. Both forms of these filaments often fold on themselves, making the protein heterogeneous and resulting in poorly resolved stains on a gel. This makes the exact mass uncertain, as the protein is very difficult to purify. Estimates place the mass at about 350 kDa, of which about 2-3% is carbohydrate. Aggregates of hyalin also form by association of the protein heads, and hyalin remains associated with a high-molecular-weight core protein throughout purification.
Hyalin mRNA is about 12 kb in length. It encodes approximately 25% acidic residues and only 3.5% basic residues. Within its sequence is a region containing tandem repeats of about 84 amino acids. This sequence is highly conserved between species and is believed to be the adhesive substrate of hyalin. A recombinant fragment of this sequence was created and its adhesive properties were tested; it was found to be about as adhesive as native hyalin. Antibodies bound to the recombinant hyalin and blocked its adhesion in the same way as with normal hyalin. The tandem-repeat region was then localized to the filamentous part of hyalin when the antibodies bound to it. As many as 21 of these long repeats can be present, accounting for 230 kDa of the total mass and two-thirds of the filamentous region. These repeats show no resemblance to anything within GenBank, making hyalin a unique protein.
Embryonic development
Location in cell
Hyalin is located in the cortical granules within an egg. Here, the protein is in a solated (sol, rather than gel) form. Cortical granules migrate to the inner plasma membrane, where they remain inactive until the cell depolarizes. All protein at this point is of maternal origin. Hyalin is confined within a subregion of the cortical granules, and these vesicles hold enough hyalin to support the cell and form the hyalin layer until the gastrulation stage. Another source of hyalin is the cytoplasm; this is also maternally derived. A hyalin layer coating the embryo forms even after hyalin has been removed from the cortical granules, showing that this secondary reservoir exists.
New hyalin is not expressed until after the gastrula has been formed. This is shown by the accumulation of hyalin mRNA. This mRNA is expressed around the blastopore at the endoderm-ectoderm boundary, which is rich in rough endoplasmic reticulum. New hyalin appeared on the apical surface of the ectoderm cells. It also had to be specifically trafficked as it did in the cortical granules. Hyalin does not penetrate into the endoderm. Some monoclonal antibodies were identified to carry molecules to the apical surface of ectodermal cells. Maternal hyalin persists throughout development and appears in the archenteron of the gastrula. Since the same genomic DNA gene encodes for both maternal and new hyalin, some alternative splicing must occur in order for the antibodies to carry the correct hyalin to the correct area.
Hyaline layer formation
Hyalin's structure is dependent upon calcium ions, which stabilize it against denaturation by the high concentrations of NaCl in seawater. Stabilization occurs at calcium concentrations as low as 1 mM. Calcium also causes hyalin to precipitate and form aggregates with itself and other proteins, although this requires higher calcium concentrations. Another divalent ion, Mg2+, causes further precipitation of hyalin. When acting alone, magnesium cannot cause precipitation, but it increases the effect of calcium-induced precipitation.
As stated before, the hyaline layer coats the external surface of an embryo. Once the egg is fertilized, the cortical granules exocytose their contents into the extracellular matrix. When this happens, hyalin comes in contact with calcium ions and solubilizes. Binding with calcium also induces hyalin-protein interactions, creating aggregates of itself and other proteins. A gel-like layer results, and the hyaline layer is formed around the egg. The hyaline layer grows to be about 2–3 mm thick within fifteen minutes after insemination. This layer forms in the extracellular matrix and functions as an adhesive substance for the blastomeres.
Function
The hyaline layer is responsible for the adhesion and proper orientation of the cells of an embryo. Throughout development, certain cells change their binding affinity towards hyalin. Hyalin helps cells differentiate into the animal and the vegetal halves during oogenesis by utilizing zinc and lithium ions. Zinc enhances the amount of hyalin precipitated, while lithium keeps the hyalin solubilized when around calcium. Zinc, then, causes an animalizing effect since the binding of the blastula cells would be stronger, while the weaker attachment of the cells would form the vegetal half.
Cells can further differentiate if they gain an affinity towards other membranes. Invagination of the blastula occurs when the endoderm loses its affinity towards hyalin, while the ectoderm retains it. This leads to the keystone shape of the gastrula, with the different layers forming into separate biological systems.
Hyalin has a secondary effect of aiding with polyspermy when in the hyaline layer. The fertilization envelope is the hardened mechanical barrier that blocks additional sperm from penetrating the cell. It is created by the products secreted by the cortical granules. Underneath the fertilization envelope is the hyaline layer, which covers up sperm receptors in the egg's plasma membrane. Should the fertilization envelope not form or dissociate, the hyaline layer alone blocks against polyspermy.
External links
Extracellular matrix proteins
Developmental genes and proteins | Hyalin | Biology | 1,489 |
11,711,579 | https://en.wikipedia.org/wiki/Salamandrella%20keyserlingii | Salamandrella keyserlingii, the Siberian salamander, is a species of salamander found in Northeast Asia. It lives in wet woods and riparian groves.
Distribution
It is found primarily in Siberia east of the Sosva River and the Urals, in the East Siberian Mountains, including the Verkhoyansk Range, northeast to the Anadyr Highlands, east to the Kamchatka Peninsula and south into Manchuria, with outlying populations also in northern Kazakhstan and Mongolia, northeastern China, and on the Korean Peninsula. It is believed to be extirpated from South Korea. An isolated population exists on Hokkaidō, Japan, in the Kushiro Shitsugen National Park. A breeding ground of Siberian salamanders in Paegam, South Hamgyong, is designated North Korean natural monument #360.
Description
Adults are from 9.0 to 12.5 cm in length. Their bodies are bluish-brown in color, with a purple stripe along the back. Thin, dark brown stripes occur between and around the eyes, and also sometimes on the tail. Four clawless toes are on each foot. The tail is longer than the body. Males are typically smaller than females.
The species is known for surviving deep freezes (as low as −45 °C). In some cases, individuals have been known to remain frozen in permafrost for years and to walk off upon thawing. They accomplish this by reducing to a fourth of their body weight through water loss and liver shrinkage, and by increasing the concentration of glycerol in their bodies.
Discovery
In 1870, Dybowski gave it the name Salamandrella keyserlingii. Boulenger gave it a new scientific name in 1910, but that name is hardly used.
General Behavior
The Siberian salamander is fairly nocturnal, foraging above ground at night and staying under moist logs or woody debris during the day.
Habitat
Within its extensive range, the habitat of the Siberian salamander is wet coniferous and mixed deciduous forests in the taiga, and riparian groves in the tundra and forest steppe. They can be found near ephemeral or permanent pools, wetlands, and sedge meadows, or near oxbow lakes.
Reproduction
Their breeding season occurs during May or the beginning of June, in pools of water. A single egg sac contains 50-80 eggs on average, with a female typically laying up to 240 eggs in a season. The light-brown eggs hatch three to four weeks after being laid, releasing larval salamanders 11–12 mm in length.
References
Further reading
External links
Distribution map
Malyarchuk B., Derenko M. et al. Phylogeography and molecular adaptation of Siberian salamander Salamandrella keyserlingii based on mitochondrial DNA variation, 2010
keyserlingii
Cryozoa
Amphibians of China
Amphibians of Japan
Amphibians of Korea
Amphibians of Mongolia
Amphibians of Russia
Amphibians described in 1870 | Salamandrella keyserlingii | Chemistry | 620 |
37,653,046 | https://en.wikipedia.org/wiki/HD%20100307 | HD 100307 is a suspected variable star in the constellation of Hydra. Its apparent magnitude is 6.16, but interstellar dust makes it appear 0.346 magnitudes dimmer than it should be. It is located some 340 light-years (104 parsecs) away, based on parallax.
HD 100307 is an M-type red giant. It has evolved off the main sequence, expanding to a radius of 67.6 times that of the Sun. It emits 687 times as much energy as the Sun at a surface temperature of 3,598 K.
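These quoted values are mutually consistent with the Stefan–Boltzmann relation; as a quick check (a worked illustration, assuming a solar effective temperature of about 5,772 K):

    \frac{L}{L_{\odot}} = \left(\frac{R}{R_{\odot}}\right)^{2} \left(\frac{T}{T_{\odot}}\right)^{4}
    = 67.6^{2} \times \left(\frac{3598}{5772}\right)^{4} \approx 690,

in good agreement with the quoted luminosity of 687 times that of the Sun.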
References
Hydra (constellation)
M-type giants
Durchmusterung objects
100307
056293
4445 | HD 100307 | Astronomy | 146 |
2,902,827 | https://en.wikipedia.org/wiki/81%20Aquarii | 81 Aquarii is a star in the constellation of Aquarius. It has an orange hue and is barely visible to the naked eye with an apparent visual magnitude of 6.23. 81 Aquarii is its Flamsteed designation. The star is located at a distance of approximately 451 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +1.6 km/s. It is positioned near the ecliptic and thus is subject to lunar occultations.
This is an aging giant star with a stellar classification of K5 III, indicating it has exhausted the supply of hydrogen at its core then cooled and expanded off the main sequence. The stellar spectrum displays strong lines of cyanogen. It presently has 18 times the radius of the Sun and is radiating 102 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,324 K.
References
K-type giants
Aquarius (constellation)
Aquarii, 081
Durchmusterung objects
217531
113674
8757 | 81 Aquarii | Astronomy | 222 |
41,823,957 | https://en.wikipedia.org/wiki/The%20Copernican%20Question | The Copernican Question: Prognostication, Skepticism, and Celestial Order is a 704-page book written by Robert S. Westman and published by University of California Press (Berkeley, Los Angeles, London) in 2011 and in 2020 (paperback). The book is a broad historical overview of Europe's astronomical and astrological culture leading to Copernicus’s De revolutionibus and follows the scholarly debates that took place roughly over three generations after Copernicus.
Summary
In 1543, Nicolaus Copernicus publicly defended his hypothesis that the earth is a planet and the sun a body resting near the center of a finite universe. This view challenged a long-held, widespread consensus about the order of the planets. But why did Copernicus make this bold proposal? And why did it matter? The Copernican Question revisits this pivotal moment in the history of science and puts political and cultural developments at the center rather than the periphery of the story. When Copernicus first hit on his theory around 1510, European society at all social levels was consumed with chronic warfare, the syphilis pandemic and recurrence of the bubonic plague, and, soon thereafter, Martin Luther’s break with the Catholic church. Apocalyptic prophecies about the imminent end of the world were rife; the relatively new technology of print was churning out reams of alarming astrological prognostications even as astrology itself came under serious attack in July 1496 from the Renaissance Florentine polymath Giovanni Pico della Mirandola (1463-1494). Copernicus knew Pico’s work, possibly as early as the year of its publication in Bologna, the city in which he lived with the astrological prognosticator and astronomer, Domenico Maria di Novara (1454-1504). Against Pico’s multi-pronged critique, Copernicus sought to protect the credibility of astrology by reforming the astronomical foundations on which astrology rested. But, his new hypothesis came at the cost of introducing new uncertainties and engendering enormous resistance from traditionalists in the universities. Westman shows that efforts to answer Pico’s critique became a crucial unifying theme over the three generations of the first phase of what he calls the early modern scientific movement—a “long sixteenth century,” from the 1490s to the 1610s—that laid the foundations for the great transformations in natural philosophy in the century that followed.
Central Themes, Patterns, Theses
1. To avoid projecting current classifications of knowledge onto the past, Westman argues that categories of knowledge and their meanings should be regarded as bound to time and place. In Copernicus’s lifetime (1473-1543) and well into the seventeenth century, astronomy and astrology constituted a compound subject, called “the science of the stars.” Each part of this disciplinary couple could be further subdivided into theoretical and practical parts. Authors who contributed to the literature of the heavens described themselves with various names that might look familiar but which no longer carry currently specialized meanings, such as “mathematician” or “physician and astronomer.” Westman uses “astronomer-astrologer,” first coined by Gérard Simon (Kepler astronome astrologue, Paris, 1979) and also his own term, “heavenly practitioner”—terms of reference intended to be consistent with the historical agents’ own self-designations. Similarly, he contends that historical actors located the topic of planetary order within the domains of theoretical astronomy and theoretical astrology—as opposed to their practical counterparts.
2. Copernicus’s initial turn to the heliocentric planetary arrangement occurred in the context of his encounter with Pico della Mirandola’s wide-ranging attack on the science of the stars and, in particular, Pico’s contention that astrologers did not agree about the order of the planets (Disputationes adversus astrologiam divinatricem [Bologna: Benedictus Hectoris, 1496]).
Copernicus was especially bothered by the uncertain ordering of Venus and Mercury. However, like Johannes Regiomontanus’s Epitome of Ptolemy’s Almagest (Venice, 1496), which was an important model for Copernicus's De revolutionibus (Nuremberg, 1543), astrology is nowhere mentioned in either work.
3. The controversy about the principles of astrological prognostication persisted as a major motive that drove debates about the heavens from the late fifteenth- to the early seventeenth century. Those debates took place within a nexus of political-cultural arrangements defined by the churches (both Catholic and Protestant), the universities and the royal, princely and imperial courts. At mid-sixteenth century Wittenberg, the Lutheran reformer university rector, Philipp Melanchthon (1497-1560), advocated a theology that stressed divine presence in nature and prophetic or prognosticative activity as a natural, inborn human desire to know God’s works. Astronomer-astrologers at Wittenberg, most notably Erasmus Reinhold (1511-1553) and his many students, read De revolutionibus selectively, ignoring the re-ordering of the planets and, instead, extracted from Copernicus’s work those calculational models that could be geometrically transformed into the framework of a stationary earth. Superficially, this “Wittenberg Interpretation” has sometimes been taken to refer to the methodological view known as “instrumentalism”—that scientific theories are just useful instruments of prediction—but Reinhold retained a “realist” view with respect to the solid spheres that carried the planets.
4. In the face of Pico’s critique there were different kinds of efforts to improve astrological prognostication during the sixteenth century and Copernicus’s proposal to reform theoretical astronomy was but one of them.
5. The appearance of unforeseen, singular, celestial novelties between 1572 and 1604 pushed a handful of astronomer-astrologers to consider whether alternative planetary orderings, including those of Copernicus, Tycho Brahe (1546-1601), Nicolaus Reimars Baer (1551-1600) and Paul Wittich (1546-1586) could better explain the unanticipated phenomena.
6. This consideration of alternatives was the first major instance of underdetermination in the history of science (where the same observational evidence equally supports two, logically different hypotheses), although the historical agents were unaware of the epistemological generality of that problem. It resulted in new kinds of controversies and raised unprecedented questions about weighting the criteria for adjudicating among different hypotheses, including ancient authority, scriptural compatibility, simplicity, explanatory breadth, predictive accuracy and physical coherence.
7. The second generation followers of Copernicus (Michael Maestlin [1550-1631], Thomas Digges [1546-1595], Giordano Bruno [1548-1600], Christopher Rothmann [ca. 1550-1600]) did not constitute a socially- and intellectually-unified movement. And the failure of the high-profile third-generation proponents Johannes Kepler (1571-1630) and Galileo Galilei (1564-1638) to forge a cooperative and productive alliance around their defense of the Copernican theory is a particularly notable instance of this larger pattern of disunity.
8. Shared social context underdetermined the adoption of new theoretical claims. Many Copernicans, for example, were attracted to court settings because those spaces were more open to novelty than traditional university settings. But while court patronage allowed for rhetorical and philosophical diversity, it fails to explain why particular figures, like Galileo, adopted specific theoretical claims, such as the Copernican hypothesis.
9. Galileo’s famous telescopic claims can be thought of as introducing the discovery of recurrent novelties into the debate about alternative hypotheses. Unlike novas and comets, which seemed to appear only when God wanted to send a message, a human being with an instrument could make phenomena like the moon’s rough surface, never-before-seen distant stars or Jupiter’s ‘planets’ appear and disappear. Success in convincing others of the reality of these phenomena occurred largely through print rather than by live demonstrations with the instrument.
10. The main social locus of change of belief was not some twentieth-century-like “scientific community,” but the master-disciple relationship that was rooted in the all-male-cultures of the universities and modeled on the paternalistic structures of the family.
11. The Copernican Question proposes a new periodization that argues for an “early modern scientific movement”—chronologically, a “Long Sixteenth Century” that began with the late-fifteenth century conflict about the status of astrological prognostication and ended in the early seventeenth century when the Catholic Church extended its skepticism (and its enforcement machinery) about naturalistic foreknowledge to the reality of the heliocentric planetary ordering. Rather than revolutionary, paradigmatic rupture, this periodization offers a picture of gradual, multi-generational change that broadly conjoins a backward-looking veneration for ancient tradition with a forward-looking, modernizing valuation of change and novelty.
12. Kepler’ Epitome of Copernican Astronomy (1618–21) and Galileo’s Dialogue Concerning the Two Chief World Systems (1632) consolidated a critical mass of novel physical claims and arguments developed between the 1580s and the telescopic discoveries of 1610-13. While Kepler and Galileo disagreed on some important issues (Galileo, for example, never accepted Kepler’s elliptical orbits), together their presentations made possible a multifaceted, robust public debate (1620s-40s) that a new generation of modernizing natural philosophers—including René Descartes (1596-1650), Pierre Gassendi (1592-1655), Marin Mersenne (1588-1648), and Thomas Hobbes (1588-1679)—selectively incorporated into their own original arguments with tradition-bound natural philosophers in the universities. In this period, the Copernican Question became a struggle that overtly involved natural philosophy, a battle framed as one between competing “world systems”.
13. By 1651, the Jesuit astronomer Giovanni Battista Riccioli (1598-1651), writing in the shadow of Galileo’s Trial in 1633, produced a massive work (Almagestum novum) in which he assembled 49 arguments in favor of the daily and annual motions of the Earth as against 77 arguments contrary to the Earth’s motions. While presenting the decision between the two as a contrasting of probable arguments, in the end, he decisively eliminated any uncertainty by appeal to “both sacred authority and Divine Scriptures.”
14. Although the midcentury modernizers were all followers of Copernicus’s system, like the late-sixteenth defenders of Copernicus, they continued to be disunified in the kinds of principles and arguments to which they appealed. For example, a proposal that side-stepped the difficult technical arguments grounding Kepler’s ellipses and Galileo’s falling bodies and which helped to popularize support with new audiences was the argument for a plurality of worlds. Building on Giordano Bruno’s claim that God must have used his omnipotence to create an infinite universe with innumerable worlds and Galileo’s telescopic discovery of a moon with Earthlike characteristics, John Wilkins argued that an infinite, omnipotent god must have used his power to create other living beings to occupy a plurality of other sun-centered worlds. In his Discovery of a World in the Moone (1638), Wilkins broached the probability of an Earthlike moon with lunar inhabitants, the dark areas interpreted as seas, the entire body surrounded by a vaporous atmosphere. The existence of Lunatics was further testimony to the divine wisdom. And the pluralist argument became a significant resource for attracting adherents to a multiplicity of sun-centered systems in an infinite universe.
15. In England, prominent midcentury astrologers like Vincent Wing (1619-1668) and Thomas Streete (1621?-1689) learned their Copernicus through Kepler and Descartes and associated the accuracy of their predictions with Kepler’s Rudolfine Tables (1627).
16. Isaac Newton (1642-1727) and Robert Hooke (1635-1702) were members of a generation that encountered the Copernican Question not directly through Copernicus’s De revolutionibus but as a controversy already matured and refracted through the midcentury literature of the heavens and natural philosophy. Newton himself made his earliest acquaintance with the Copernican ordering and Kepler’s elliptical orbits through the astronomer-astrologers Wing and Streete. Ultimately, however, he rejected the claims of astrology as a form of idolatry, much as did Pico—based upon the projection of human qualities onto the stars and planets—and contrary to Newton’s belief in the power of God to act directly in the world without need of intermediaries. However, like Copernicus, Newton never published his views about astrology.
17. About the problem of closure. Westman argues that "the diversity already evident at the beginning of the [seventeenth] century persisted among Copernicus's midcentury followers. To identify oneself publicly with the Copernican arrangement or to declare its truth did not entail allegiance to the uniform set of commitments in natural philosophy evoked by the nineteenth- and twentieth century term "Copernicanism". The Copernican question achieved closure—an end to questioning and criticism from competing alternatives—in different ways among different audiences. These endings occurred through no single proof and with audiences as variously overlapping as almanac readers, practicing astrologers, planetary table makers, extraterrestrializers, itinerant scientific lecturers and, of course, philosophizing astronomers and high-end, new-style natural philosophers."
17. Newton’s powerful achievement was his construction of a natural philosophy of mathematizable forces in which the sun’s position at or near the center of the planets could be deduced rather than assumed as a premise, as Copernicus had done: “The Copernican system is proved a priori,” Newton wrote, “for if the common center of gravity is calculated for any position of the planets it either falls in the body of the Sun or will always be very close to it.” And, unlike Copernicus, Tycho Brahe or Kepler in the long sixteenth century, he made no effort to fix astrology.
See also
The Scientific Revolution
The History of science in the Renaissance
The Renaissance
History of astronomy
References
External links
Official website
2011 non-fiction books
Books about the history of physics
History of astrology
American non-fiction books
Works about the history of astronomy
Copernican Revolution
English-language non-fiction books
University of California Press books
History books about philosophy
Philosophy of science books | The Copernican Question | Astronomy | 3,125 |
2,991,908 | https://en.wikipedia.org/wiki/Area%20compatibility%20factor | In survival analysis, the area compatibility factor, F, is used in indirect standardisation of population mortality rates.
    F = \frac{\left(\sum_x {}^{s}E^{c}_{x,t}\; {}^{s}m_{x,t}\right) \Big/ \sum_x {}^{s}E^{c}_{x,t}}{\left(\sum_x E^{c}_{x,t}\; {}^{s}m_{x,t}\right) \Big/ \sum_x E^{c}_{x,t}}

where:
{}^{s}E^{c}_{x,t} is the central exposed-to-risk from age x to x + t for the standard population,
E^{c}_{x,t} is the central exposed-to-risk from age x to x + t for the population under study and
{}^{s}m_{x,t} is the mortality rate in the standard population for ages x to x + t.
The expression can be thought of as the crude mortality rate for the standard population divided by what the crude mortality rate is for the region being studied, assuming the mortality rates are the same as for the standard population.
F is then multiplied by the crude mortality rate to arrive at the indirectly standardised mortality rate.
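A small numerical sketch in Python (all numbers are illustrative assumptions, not from the source) shows the computation end to end:

    # Two age bands; exposures and standard mortality rates are made up.
    std_exposed   = [1000.0, 2000.0]  # standard population central exposed-to-risk
    study_exposed = [400.0, 100.0]    # study population central exposed-to-risk
    std_rates     = [0.01, 0.05]      # standard population mortality rates

    def crude_rate(exposed, rates):
        # Expected deaths implied by the rates, divided by total exposure.
        return sum(e * r for e, r in zip(exposed, rates)) / sum(exposed)

    # F: the standard population's crude rate over the crude rate the study
    # population would have if it experienced the standard rates.
    F = crude_rate(std_exposed, std_rates) / crude_rate(study_exposed, std_rates)

    observed_crude = 0.02  # observed crude mortality rate in the study population
    print(F, F * observed_crude)  # F ~ 2.04; indirectly standardised rate ~ 0.041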
References
Actuarial science
Demography
Epidemiology | Area compatibility factor | Mathematics,Environmental_science | 155 |
30,149,926 | https://en.wikipedia.org/wiki/JBoss%20operations%20network | JBoss Operations Network (or JBoss ON or JON) is free software/open-source Java EE-based network management software. JBoss Operations Network is part of the JBoss Enterprise Middleware portfolio of software. JBoss ON is an administration and management platform for the development, testing, deployment, and monitoring of the application lifecycle. Because it is Java-based, the JBoss application server operates cross-platform: usable on any operating system that supports Java. JBoss ON was developed by JBoss, now a division of Red Hat.
Product features
JBoss ON provides performance, configuration, and inventory management in order to deploy, manage, and monitor the JBoss middleware portfolio, applications, and services.
JBoss ON provides management of the following:
Discovery and inventory
Configuration management
Application deployment
Perform and schedule actions on servers, applications and services
Availability management
Performance management
Provisioning (IT)
JBoss ON is the downstream of RHQ (see also section Associated Acronyms).
Licensing & Pricing
The various JBoss application platforms are open source, but Red Hat charges to provide a support subscription for JBoss Enterprise Middleware.
Associated acronyms
Acronyms associated with JBoss ON:
RHQ - upstream open source project of JBoss ON. Current stable version is RHQ 4.13; main difference between RHQ 4 and RHQ 3 is the transition of the UI framework to Google Web Toolkit.
Jopr - previously the JBossAS management bits (upstream) of JBoss ON - now integrated into the RHQ source base (since September 2009). Jopr used to use RHQ as its upstream. There will be no more separate Jopr releases.
JON - JBoss Operations Network (ON)
See also
List of JBoss software
Network monitoring system
Comparison of network monitoring systems
HyPerformix IPS Performance Optimizer
IBM Tivoli Framework
References
Bibliography
External links
JBoss application server website
Securing JBoss
JBoss Wiki
JBoss Community Projects
JBoss Introduction by Javid Jamae
Java enterprise platform
Red Hat software
Cross-platform software
Network management | JBoss operations network | Engineering | 447 |
78,911,217 | https://en.wikipedia.org/wiki/Alexey%20Dobryden | Alexey Afanasyevich Dobryden (Russian: Алексей Афанасьевич Добрыдень; 20 May 1926 - 9 October 1980) was a Soviet metallurgist and party leader.
Biography
Dobryden was born and raised in Ol'khovatka, Olkhovatsky District, Voronezh Oblast by his parents, Afanasy Andreevich Dobryden and Pelageya Korneevna Dobryden née Poltavtseva.
He was called into service for the Soviet Army on 9 December 1943 when he was 17 years old, from December 1945 he served as corporal in the 10th brigade of the NKVD troops.
After he was demobilised from the army on 25 July 1951, he graduated from the working youth school No. 1 at the Sverdlovsk station. He then married Inna Mikhailovna Peshkova (27 August 1928 - 12 April 2015) on 7 November 1954 in Yekaterinburg. They had two children together, Elena Alekseevna Stepanova (1956 - ) and Mikhail Alekseevich Dobryden (1963 - ).
He later studied at the metallurgical faculty of the Ural State Technical University, from which he graduated in 1956 and received a diploma as a metallurgical engineer with a specialty in Foundry.
He died on 9 October 1980 in Yekaterinburg and was buried at the Shirokorechenskoye Cemetery.
References
Soviet politicians
Metallurgists
Sverdlovsk Oblast
Soviet scientists
1926 births
1980 deaths | Alexey Dobryden | Chemistry,Materials_science | 331 |
332,222 | https://en.wikipedia.org/wiki/Hushmail | Hushmail is an encrypted proprietary web-based email service offering PGP-encrypted e-mail and vanity domain service. Hushmail uses OpenPGP standards. If public encryption keys are available to both recipient and sender (either both are Hushmail users or have uploaded PGP keys to the Hush keyserver), Hushmail can convey authenticated, encrypted messages in both directions. For recipients for whom no public key is available, Hushmail will allow a message to be encrypted by a password (with a password hint) and stored for pickup by the recipient, or the message can be sent in cleartext. In July 2016, the company launched an iOS app that offers end-to-end encryption and full integration with the webmail settings. The company is located in Vancouver, British Columbia, Canada.
History
Hushmail was founded by Cliff Baltzley in 1999 after he left Ultimate Privacy.
Accounts
Individuals
There is one type of paid account, Hushmail for Personal Use, which provides 10GB of storage, as well as IMAP and POP3 service.
Businesses
The standard business account provides the same features as the paid individual account, plus other features like vanity domain, email forwarding, catch-all email, user admin, archive, and Business Associate Agreements for healthcare plans. Features like secure forms and electronic signatures are available in specific plans.
Additional security features include hidden IP addresses in e-mail headers, two-step verification and HIPAA-compliant encryption.
Instant messaging
An instant messaging service, Hush Messenger, was offered until July 1, 2011.
Compromises to email privacy
Hushmail received favorable reviews in the press. It was believed that possible threats, such as legal demands to reveal the content of traffic through the system, were less imminent in Canada than in the United States, and that if data were handed over, messages would be available only in encrypted form.
Developments in November 2007 led to doubts amongst security-conscious users about Hushmail's security, specifically concern over a backdoor. The issue originated with the non-Java version of the Hush system, which performed the encrypt/decrypt steps on Hush's servers and then used SSL to transmit the data to the user. The data is available as cleartext during this small window of time, and the passphrase can be captured at this point, facilitating the decryption of all stored messages and of future messages using this passphrase. Hushmail stated that the Java version is also vulnerable, in that they may be compelled to deliver a compromised Java applet to a user.
Hushmail supplied cleartext copies of private email messages associated with several addresses at the request of law enforcement agencies under a Mutual Legal Assistance Treaty with the United States: e.g. in the case of United States v. Stumbo. In addition, the contents of emails between Hushmail addresses were analyzed, and 12 CDs were supplied to U.S. authorities. Hushmail privacy policy states that it logs IP addresses in order "to analyze market trends, gather broad demographic information, and prevent abuse of our services."
Hush Communications, the company that provides Hushmail, states that it will not release any user data without a court order from the Supreme Court of British Columbia, Canada and that other countries seeking access to user data must apply to the government of Canada via an applicable Mutual Legal Assistance Treaty. Hushmail states, "...that means that there is no guarantee that we will not be compelled, under a court order issued by the Supreme Court of British Columbia, Canada, to treat a user named in a court order differently, and compromise that user's privacy" and "[...]if a court order has been issued by the Supreme Court of British Columbia compelling us to reveal the content of your encrypted email, the "attacker" could be Hush Communications, the actual service provider."
See also
Comparison of mail servers
Comparison of webmail providers
References
External links
Cryptographic software
Webmail
Internet privacy software
OpenPGP
Internet properties established in 1999 | Hushmail | Mathematics | 841 |
27,490,882 | https://en.wikipedia.org/wiki/Goodwin%20model%20%28biology%29 | In biology, the Goodwin model describes negative feedback oscillators in cellular systems, for example, circadian rhythms or enzymatic regulation (such as lactose in bacteria). The Goodwin model, though, shows no stable limit cycles.
limit cycles can exist, see references. But not in the original Goodwin model which only has two variables.
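The widely studied three-variable form of the oscillator can be simulated directly. The Python sketch below (parameter values are illustrative assumptions; with equal degradation rates, sustained oscillations require a Hill coefficient n greater than 8) integrates the negative-feedback loop in which x produces y, y produces z, and z represses x:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameters: equal degradation rates and Hill coefficient
    # n = 10, which satisfies the n > 8 condition for a stable limit cycle.
    a, b, c, d, e, f, n = 1.0, 0.1, 1.0, 0.1, 1.0, 0.1, 10

    def goodwin(t, u):
        x, y, z = u
        dx = a / (1.0 + z**n) - b * x   # synthesis of x, repressed by z
        dy = c * x - d * y              # y produced from x
        dz = e * y - f * z              # z produced from y
        return [dx, dy, dz]

    sol = solve_ivp(goodwin, (0.0, 400.0), [0.1, 0.1, 0.1], max_step=0.5)
    print(sol.y[:, -5:])  # late-time values keep cycling instead of settling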
References
Biological models | Goodwin model (biology) | Biology | 75 |
20,240,791 | https://en.wikipedia.org/wiki/Z5%20%28computer%29 | The Z5 was a computer designed by Konrad Zuse and manufactured by Zuse KG following an order by Ernst Leitz GmbH in Wetzlar in 1950. The computer was delivered in July 1953 and was the first commercial built-to-order mainframe in Germany. The computer was purchased to help with the design of optical lens systems.
The Z5 is the successor of the Z4, and is much more compact and powerful. Zuse implemented the machine with relays, since vacuum tubes were too unreliable at the time. The Z5 used the same principles as the Z4, but was six times faster.
It also had punched tape readers, which the Z4 did not have. It had conditional branching and five subroutine loops.
Specifications
Frequency: ca. 40 Hz
Arithmetic unit: Floating point numbers (36 bit length)
Memory: 12 words, 36 bit
Speed: addition 0.1 second, multiplication 0.4 s, division 0.7 s
Power consumption: 5000 watts
Weight: ca.
References
External links
Z5 information (German), (Google translation), (English)
1950s computers
Computer-related introductions in 1953
Konrad Zuse
Computers designed in Germany
Serial computers | Z5 (computer) | Technology | 240 |
74,867,869 | https://en.wikipedia.org/wiki/List%20of%20countries%20in%20the%20Americas%20by%20life%20expectancy | This is a list of American countries by life expectancy.
United Nations (2023)
Estimation of the analytical agency of the UN.
UN: Estimate of life expectancy for various ages in 2023
UN: Change of life expectancy from 2019 to 2023
World Bank Group (2022)
Estimation of the World Bank Group for 2022. The data is filtered according to the list of countries in the Americas. The values in the World Bank Group tables are rounded; all calculations are based on raw data, so rounding produces apparent inconsistencies of 0.01 year in places.
In 2014, some of the world's leading countries had a local peak in life expectancy, so this year is chosen for comparison with 2019 and 2022.
WHO (2019)
Estimation of the World Health Organization for 2019.
Charts
See also
References
Life expectancy
Americas | List of countries in the Americas by life expectancy | Biology | 190 |
48,571,940 | https://en.wikipedia.org/wiki/Leymann%20Inventory%20of%20Psychological%20Terror | The Swedish psychologist and university professor Heinz Leymann developed the LIPT questionnaire. LIPT stands for Leymann Inventory of Psychological Terror.
Structure
The LIPT questionnaire lists 45 mobbing actions at the workplace. A person is regarded as being bullied if one or more of the 45 actions happen at least once per week over a period of at least one year. Alternative mobbing definitions require a shorter period of at least 3–6 months and the frequent occurrence of more than one action.
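Leymann's threshold can be stated operationally. The Python sketch below (a hypothetical data layout, not an official scoring tool) flags a respondent as bullied under the strict definition:

    # weekly_counts: for each of the 45 LIPT items, the number of weeks
    # (out of the last 52) in which the respondent reports the action
    # occurring at least once.
    def bullied_by_leymann(weekly_counts, weeks_required=52):
        # Strict definition: at least one of the 45 actions occurs at
        # least once per week over a period of at least one year.
        return any(w >= weeks_required for w in weekly_counts)

    print(bullied_by_leymann([0] * 44 + [52]))  # True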
The effects of the actions on the mobbing victim are divided into five categories:
Effects on self-expression and communication, e.g., the mobbing victim is constantly interrupted, criticized, or yelled at.
Effects on social contacts, e.g., colleagues and coworkers are forbidden to talk with the victim.
Effects on personal reputation, e.g., unfounded rumors about the mobbing victim are circulated.
Effects on occupational situation and quality of life, e.g., the victim is given meaningless jobs or tasks that affect the self-esteem.
Effects on physical health, e.g., threats of physical violence, damage to the workplace, or outright sexual harassment.
Completeness
Leymann derived the 45 mobbing actions from 300 individual interviews conducted between 1981 and 1984. After 1984, further interviews identified no additional actions, and the interviewing ended.
Legal aspects
See also
References
Leymann, Heinz (1990). Handbok för användning av LIPT-formuläret för kartläggning av risker för psykiskt våld (Manual of the LIPT questionnaire for assessing the risk of psychological violence at work). Stockholm: Violen.
Leymann, Heinz (1990). Mobbing and Psychological Terror at Workplaces. Violence and Victims 5(2), pp. 119-126.
Further reading
Kohut, Margaret R. (2008). The Complete Guide to Understanding, Controlling, and Stopping Bullies & Bullying at Work: A Complete Guide for Managers, Supervisors, and Co-Workers. Atlantic Publishing Group Inc., .
External links
Albrecht Leymann Inventory of Psychological Terror
Organizational behavior
Abuse
Deviance (sociology)
Workplace harassment and bullying
Psychological tests and scales | Leymann Inventory of Psychological Terror | Biology | 451 |
1,375,317 | https://en.wikipedia.org/wiki/American%20Meteor%20Society | The American Meteor Society, Ltd. (AMS) is a non-profit scientific organization established to encourage and support the research activities of both amateur and professional astronomers who are interested in the field of meteor astronomy. Its affiliates observe, monitor, collect data on, study, and report on meteors, meteor showers, meteoric fireballs, and related meteoric phenomena.
The society publishes observations and scientific interpretations quarterly in Meteor Trails, The Journal of American Meteor Society. Once per year they give the American Meteor Society Award to a person who has contributed to research on meteors. They also provide an annual research grant to a student of SUNY-Geneseo who has contributed to meteor research or to the AMS.
History
The society was founded in 1911 by Charles P. Olivier of the Leander McCormick Observatory. The initial enrollment was fifteen members. These were recruited by Olivier by letter. The first paper based on the observations of the members appeared in the Astronomical Journal in 1912, describing the η Aquarid meteor shower. In 1926, Olivier began to publish meteor notes from the society on a nearly monthly basis in Popular Astronomy magazine under the title "Monthly Notes". This continued until his editor, Curvin H. Gingrich, died.
Some time before 1932, Olivier appointed regional directors to facilitate the data collection for the society. A director was appointed to the Pacific Northwest region in 1932. Initially this consisted of Washington state and Oregon, but later came to include the western provinces of Canada plus Idaho and Montana. In 1938, the Canadian provinces were withdrawn from the society, while California was added. This western division was headquartered at the University of Oregon in Eugene.
In 1960, Olivier published the first catalogue of hourly meteor rates based upon the data collected by the society members from 1901 to 1958. The second catalogue was published in 1965, which included data up to 1963.
During the late 1970s, David Meisel became Executive Director of the society. Its headquarters were relocated to Geneseo, New York. The society research was expanded to include radio meteor studies, then spectroscopy of meteors.
See also
International Meteor Organization
List of astronomical societies
References
External links
Amateur astronomy organizations
Astronomy societies
Scientific societies based in the United States
1911 establishments in the United States
Scientific organizations established in 1911
Astronomy in the United States | American Meteor Society | Astronomy | 459 |
8,484,738 | https://en.wikipedia.org/wiki/Colby%27s%20Clubhouse | Colby's Clubhouse is an American Christian live-action children's television show that teaches principles from the Bible through songs and everyday situations. The main character is Colby, an anthropomorphic computer. Colby has the entire Bible programmed into his memory. The show was written and produced by Peter and Hanneke Jacobs and was originally aired on Trinity Broadcasting Network, with Peter playing the part of Colby. It originally aired from 1995 to 2000 with several changes of cast members. The first episode aired on November 19, 1995, and it was last aired on December 31, 2000. The show was primarily shot in Orange County, California. On December 30, 2006, the show was removed from TBN, although it remains on the network's Smile of a Child digital subchannel. Albums and videotapes featuring Colby the Computer were produced. However, many of these were produced before the TBN series and feature different children.
Record Albums (1984–1995)
Prior to the TV series, a series of record albums was released during the 1980s and early 1990s. Two of them were adapted into videos. There was also a VHS release titled "Colby's Place" in 1989. It was not based on any of the albums, but it was possibly a pilot episode for another planned series that never came to fruition (most likely because it was rejected for being too similar to "Kids Incorporated"). The last album, "A Heart to Give", was adapted into the first episode of the final TV series, which was divided into two parts. The following is a list of the albums in their release order.
Colby 1: Make a Joyful Noise (1984)
Colby 2: Colby's Missing Memory (1985 [Video 1986])
Colby 3: Save Colby's Clubhouse (1986)
Colby 4: God Uses Kids (1987)
Colby 5: Putting Feet on Faith (1990 [Video, 1992])
Colby 6: Colby's Bible Camp Catastrophe (1991)
Colby 7: A Heart to Give (1995)
Episodes
Season 1 (1995–96)
Season 2 (1996–97)
Season 3 (1997–2000)
Cast
Peter Jacobs as Colby the Computer
Children
Jessica Adams
Rachel Balich
Saxon Christin
Beau Clark
Kiera Cope
Jonathan Curry
Ryan Devin
Brittany Durlach
Laura Fager
Zane Gerson
Gina Gonzalez
Adam Hill
Danielle Hogg
Paulina Johnson
Kevin Jones
Dawn Jordan
Danielle Kincebach
Casey Lagos
Randy Landingham
Matthew Lomakin
Jake Mann
Brandon Muchow
Tyler Newman
Andrew Pollaro
Jason Rausavljevick
Krysta Rodriguez
Matt Sackett
Randy Stuck
Delana Tillman
Brett Traina
Aaron Vaughan
Stephanie Wall
Lindsey Weeks
Christopher Williams
Peter Woo
See also
Gerbert (TV series)
References
External links
Colby's Clubhouse at CEGAnMo.com
Colby's Clubhouse at Salt Cover.com/br
Colby's Clubhouse at SmileofaChild.tv
Trinity Broadcasting Network original programming
Fictional computers
1995 American television series debuts
1990s American children's comedy television series
2000s American children's comedy television series
1990s American musical comedy television series
2000s American musical comedy television series
2000 American television series endings
Christian children's television series
American children's education television series
American children's musical television series
American television shows featuring puppetry
Television series about children
Television series about robots | Colby's Clubhouse | Technology | 658 |
65,496,203 | https://en.wikipedia.org/wiki/Hemibiotrophs | Hemibiotrophs are the spectrum of plant pathogens, including bacteria, oomycete and a group of plant pathogenic fungi that keep its host alive while establishing itself within the host tissue, taking up the nutrients with brief biotrophic-like phase. It then, in later stages of infection switches to a necrotrophic life-style, where it rampantly kills the host cells, deriving its nutrients from the dead tissues.
This mode of interaction, in which initial biotrophy is followed by a switch to necrotrophy, has been observed in the fungal model Magnaporthe oryzae (rice blast fungus) and other pathogens such as many Colletotrichum spp. (often called anthracnose diseases, e.g. Colletotrichum lindemuthianum), Southern corn leaf blight (Bipolaris maydis) and Zymoseptoria tritici (syn. Mycosphaerella graminicola, leaf blotch of wheat). Collectively, they represent some of the most destructive plant parasites, causing huge economic losses and threatening global food security.
A spectrum of hemibiotrophic plant pathogens, including the bacterium Pseudomonas syringae and the oomycete Phytophthora infestans (potato blight), also exhibit characteristics of both biotrophs and necrotrophs and thus are called hemibiotrophs, depending on the stages of their life cycle.
Life style
In contrast to biotrophs, hemibiotrophs have dual lifestyles. The initial biotrophic lifestyle of hemibiotrophs causes minimal damage to plant tissues while the fungus obtains nutrients from living plant tissue. Hemibiotrophic fungi require living plant tissue to complete their life cycle.
Most fungal hemibiotrophs develop haustoria, whereas some produce intracellular hyphae to acquire nutrients from the host cytoplasm. However, in the hemibiotrophic life-style the pathogen later breaks down host cell walls through secretion of hydrolytic enzymes and feeds on the released nutrients. These hydrolytic enzymes and toxins are synthesized during the later necrotrophic phase. They also produce extracellular hyphae between the host cells to facilitate nutrient assimilation. Plant pathogenic fungi produce and secrete many so‐called effector proteins that interact with the host and play an important role in virulence.
The rice blast fungus Magnaporthe oryzae and Colletotrichum species are generally considered to be hemibiotrophs. Three hemibiotrophic species, Colletotrichum pisicola, C. vignae and C. destructivum, belong to the Colletotrichum destructivum complex. Fusarium oxysporum, the cause of fusarium wilt disease, and Moniliophthora roreri, which causes frosty pod rot disease of cacao, are hemibiotrophs that affect many agricultural and floricultural crops worldwide.
In the early stages of infection, the pathogens proliferate asymptomatically in the host by suppressing programmed cell death (PCD) or thwarting host defense responses, but in the later stages of infection they undergo a physiological transition from asymptomatic biotrophic growth to a highly destructive necrotrophic phase. Hemibiotrophic bacteria are known to secrete a range of so-called effector proteins, including transcription factors and others with enzymatic activities, into host cells via the type III secretion system (T3SS) whereupon they suppress PCD and other host defenses.
Hemibiotrophy genes
Studies indicate that fungal hemibiotrophic C. lindemuthianum species undergo two distinct phases during host invasion. Initially, the biotrophic phase involves generating intracellular hyphae within intact plant cells. Subsequently, the necrotrophic phase occurs where extracellular hyphae penetrate cellular boundaries, traversing plasmodesmata and spreading between host cells.
The suggestion that these fungi undergo a distinct metabolic switch from biotrophic to necrotrophic growth was boosted by the discovery of a gene that functions at the transition between the biotrophic and necrotrophic phases. The gene CLTA1 encodes a GAL4-like transcriptional activator, which is consistent with a role in reprogramming metabolism. It is clear that all pathogens are obliged to alter metabolic fluxes in numerous ways upon penetration to prepare for proliferation. This is a key postulated attribute of the hemibiotrophs and seems to be a priority subject for study.
Life cycle of hemibiotrophs
The hemibiotrophic life cycle comprises an initial biotrophic phase and a later necrotrophic phase. Colletotrichum lindemuthianum is a hemibiotrophic fungus of beans (common bean anthracnose). Conidia on the host surface germinate and differentiate to form a melanized infection structure devoted to mechanical penetration of the epidermal cells. After penetration, the infection cycle is characterized by two successive phases. The first phase, lasting 3 to 4 days, is the biotrophic phase, during which the fungus grows biotrophically inside the infected epidermal cells: the appressorium develops into a primary penetration hypha, which becomes surrounded by the invaginated plant plasma membrane, and the penetrated host cell remains alive with minimal damage. The second phase, which corresponds to the appearance of symptoms, is completed 6 to 8 days after inoculation. During this necrotrophic phase, the fungus develops secondary hyphae that grow both intracellularly and intercellularly and thus acts as a typical necrotrophic pathogen, secreting cell wall-degrading enzymes that break down the host cell wall. After a few days the plant cell membrane disintegrates and the host cell dies; thereafter the fungus grows as a necrotroph.
Another hemibiotroph is Moniliophthora roreri, which causes frosty pod rot on Theobroma sp. (Cacao). It produces meiospores, via meiosis, from the modified basidium. These spores are important as dispersal agents, for infection and survival. Meiospores germinate and produce hyphae made up of haploid cells throughout the biotrophic phase. The necrotrophic phase is thought to start from the formation of dikaryotic hyphae and continues until sporulation on the pod surface.
References
Fungal diseases
Fungi and humans | Hemibiotrophs | Biology | 1,404 |
3,223,425 | https://en.wikipedia.org/wiki/Catchwater | A catchwater device is a large-scale man-made device for catching surface runoff from hills and the sky from precipitation by channeling it to reservoirs for commercial and domestic use later. Freshwater is a scarce natural resource due to pollution, droughts, and overpopulation. Catchwater is a sustainable mechanism to increase freshwater in areas facing droughts or polluted waterways. A catchwater drain decreases the velocity of storm-water runoff to reduce and prevent erosion and other environmental problems.
Types
Catchwater drains
Catchwater drains may take the form of concrete canals, such as in Hong Kong, where there are many. Alternatively, they may take the form of a large concrete sheet covering a hillside and preventing rainfall from entering the rock strata, with a smaller channeling system to transport the water to a storage tank; this latter system is in operation in Gibraltar. In Hong Kong there are approximately 120 km of concrete channels, used as gutters built along hillsides to direct freshwater runoff into reservoirs for local water consumption. These catchwaters can overflow, causing hazards, erosive streams and blockages.
Earthship drains
In an earthship system, water is collected from impervious surfaces and channeled into cisterns. A cistern is a reservoir located underground; the water within it is heated by the sun. The stored water is used domestically, for example for washing dishes and bathing. Once used, the water is cycled through a filtration module to be reused again.
Rain barrels
Rainwater tanks, also known as rain barrels in North America, are used to collect runoff from precipitation and to keep contaminated runoff from entering waterways. Water from rain barrels is used only for non-potable purposes such as gardening and agriculture. Rain barrels are large containers connected to a building's gutter system, which catches runoff from the roof. Many households use rain barrels as a substitute supply, to reduce the amount of water they would otherwise draw for recreational activities.
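As a rough illustration of the sizing involved, the sketch below estimates how much roof runoff a gutter-fed barrel can capture. The roof area, rainfall depth, and runoff coefficient are illustrative assumptions, not figures from this article or any standard.

```python
# A minimal sketch of rain-barrel capture sizing. All figures below are
# illustrative assumptions.

def captured_litres(roof_area_m2: float, rainfall_mm: float,
                    runoff_coefficient: float = 0.9) -> float:
    """Volume of roof runoff reaching the barrel for one rainfall event.

    1 mm of rain falling on 1 m^2 is exactly 1 litre, so the product of
    area and depth gives litres; the runoff coefficient accounts for
    losses (evaporation, splash, first-flush diversion) along the way.
    """
    return roof_area_m2 * rainfall_mm * runoff_coefficient

# Example: a 50 m^2 roof section in a 10 mm shower yields ~450 L,
# more than enough to overflow a typical 200 L barrel.
print(f"{captured_litres(50, 10):.0f} L")
```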
Rain gardens
A rain garden is another man-made device, created by digging a depression in an area and planting it with a variety of vegetation. The vegetation helps catch storm-water runoff and then filters the water to reduce pollutants before it re-enters the hydrologic cycle. Rain gardens decrease the speed of water by capturing it and letting it infiltrate the soil, so that it does not become surface runoff.
Advantages and disadvantages
Sustainability
Some drains are able to self-maintain through geomorphological equilibrium. Catchwater drains are predominantly used for agriculture: the water they carry is used for irrigation, for flood control, and for other functions that direct large amounts of water away from crops during wet seasons. Catchwater drains also allow communities to draw down the water table when they need to, and allow the water table to be restored after periods of heavy use.
Environmental safety
Catchwater drains require extensive landscaping and ongoing management.
Unlike some other types of catchwater drain, rain gardens are not suitable for steep slopes.
Gardens can get congested and become impervious if land around the garden is not managed.
Due to the expense of these systems, they are generally only to be found where there is an extreme shortage of freshwater, because of geographical or political issues.
See also
Blue roof
References
Water supply
Stormwater management | Catchwater | Chemistry,Engineering,Environmental_science | 680 |
74,571,655 | https://en.wikipedia.org/wiki/Architectural%20exaptation | Architectural exaptation is a concept in architecture and urban design that involves repurposing buildings, structures, or architectural elements for new uses that differ significantly from their original intended purpose. This practice extends beyond mere adaptation, as it involves a transformative process where the original functions are replaced or augmented by entirely new ones. It is a concept that embraces flexibility, creativity, and innovation in the use of architectural spaces and structures.
The term "exaptation," originally coined by paleontologists Stephen J. Gould and Elisabeth Vrba, is borrowed from evolutionary biology. It describes how certain features, evolved for a specific function, can be repurposed or co-opted for a different function. In architectural terms, this can be seen when buildings or their parts, designed for a specific purpose, find new life serving a completely different need. This phenomenon is manifested through two primary mechanisms: functional shift and functional co-optation (referred to as exaptation by Telmo Pievani).
Architectural exaptation is an interdisciplinary concept that connects the fields of architecture and archaeology. Architectural exaptation challenges traditional views in design and architecture that emphasize a deterministic approach where form strictly follows function. Instead, it highlights the adaptability and potential for innovation in existing structures. This concept is especially relevant in contemporary discussions about sustainability, as it promotes the reuse and creative repurposing of existing architectural resources.
Examples of architectural exaptation can range from the conversion of industrial buildings into cultural or residential spaces to the creative reuse of minor architectural elements within a larger structure. Iconic examples include the Tate Modern in London, transformed from a power station into a modern art gallery; the High Line in New York City, an elevated railway turned into an urban park; the Ponte Vecchio in Florence; and the Plaza de Toros de las Arenas in Barcelona. Temporary appropriations of public spaces also exemplify this adaptability.
Architectural exaptation is not just about physical transformation; it also encompasses a philosophical shift in how we perceive the built environment. It advocates for a more fluid, imaginative approach to design, where the potential for a building or space is not limited to its original function. This concept encourages architects, urban planners, and designers to think beyond conventional boundaries and explore the multifaceted potential of the built environment.
References
Architectural terminology
Architectural elements | Architectural exaptation | Technology,Engineering | 475 |
49,239,358 | https://en.wikipedia.org/wiki/Kirklington%20Hall%20Research%20Station | Kirklington Hall Research Station was a geophysical research institute of BP in Kirklington, Nottinghamshire. During the 1950s it was the main research site of BP.
Background
Cricketer John Boddam-Whetham was born at the site in 1843. Sir Albert Bennett, 1st Baronet, Liberal MP for Mansfield from 1922 to 1923, lived there from 1920; the Bennett baronetcy was created in 1929. Lady Evelyn Maude Robinson, wife of Sir John Robinson of Worksop, owned the hall from around 1930. Sir John died aged 74 on Saturday 2 December 1944.
The previous owner, Lady Robinson, died aged 73 on Friday 14 December 1945, leaving £138,365 in her will. In June 1945 the hall was put up for auction with 631 acres, 15 bedrooms, 6 bathrooms, and 11 servants' rooms. It was sold for £24,500 in Derby in July 1945.
For two years Nottinghamshire County Council considered buying the property for use as a residential further-education college, but it was a large investment, and the council chose another site in July 1947.
History
BP
As part of the East Midlands Oil Province, oil was found in eastern Nottinghamshire. It was also known as the BP Research Centre or the Geophysical Centre, part of BP's Exploration Division.
The site was acquired in July 1949, owing to its proximity to Eakring. The local church, with Hockerton, held garden parties at the site in the summer.
The research centre was established in 1950. Its first employee was Jack Birks, later managing director of BP. From 1950 it was the main geophysical research site of BP, until BP sold the site in 1957 for £12,000. Research moved to Sunbury-on-Thames, in Surrey, in 1957. Sunbury Research Centre had been built around the same time as the Kirklington site, in the early 1950s.
Private property
It was put up for sale in November 1957. In 1958 there was the possibility of the site being a teacher training college.
From 1958 it was a private school, which had been formed in Southwell in 1945.
It was put up for sale in 1987, with a guide price of £850,000.
Kirklington Hall today is a private school.
Structure
The former site is situated north of the A617.
Function
It conducted geophysical research for exploration for BP. This part of BP is now known as BP Exploration. Work would be conducted on core samples and with seismic methods.
See also
British Geological Survey, also in Nottinghamshire
Sunbury Research Centre, where most of BP's research takes place in the UK today.
:Category:Petroleum geology
:Category:Seismology measurement
References
British Petroleum and Global Oil 1950-1975: The Challenge of Nationalism, James Bamberg, page 33
External links
Our Nottinghamshire
1950 establishments in England
1957 disestablishments in England
BP buildings and structures
Buildings and structures in Nottinghamshire
Energy research institutes
Engineering research institutes
Earth science research institutes
Petroleum industry in the United Kingdom
Petroleum organizations
Research institutes established in 1950
Research institutes in England
Science and technology in Nottinghamshire
Research stations | Kirklington Hall Research Station | Chemistry,Engineering | 615 |
10,601,794 | https://en.wikipedia.org/wiki/Carbon%E2%80%93fluorine%20bond | The carbon–fluorine bond is a polar covalent bond between carbon and fluorine that is a component of all organofluorine compounds. It is one of the strongest single bonds in chemistry (after the B–F single bond, Si–F single bond, and H–F single bond), and relatively short, due to its partial ionic character. The bond also strengthens and shortens as more fluorines are added to the same carbon on a chemical compound. As such, fluoroalkanes like tetrafluoromethane (carbon tetrafluoride) are some of the most unreactive organic compounds.
Electronegativity and bond strength
The high electronegativity of fluorine (4.0 for fluorine vs. 2.5 for carbon) gives the carbon–fluorine bond a significant polarity or dipole moment. The electron density is concentrated around the fluorine, leaving the carbon relatively electron poor. This introduces ionic character to the bond through partial charges (Cδ+—Fδ−). The partial charges on the fluorine and carbon are attractive, contributing to the unusual bond strength of the carbon–fluorine bond. The bond is labeled as "the strongest in organic chemistry," because fluorine forms the strongest single bond to carbon. Carbon–fluorine bonds can have a bond dissociation energy (BDE) of up to 130 kcal/mol. The BDE (strength of the bond) of C–F is higher than other carbon–halogen and carbon–hydrogen bonds. For example, the BDEs of the C–X bond within a CH3–X molecule is 115, 104.9, 83.7, 72.1, and 57.6 kcal/mol for X = fluorine, hydrogen, chlorine, bromine, and iodine, respectively.
Bond length
The carbon–fluorine bond length is typically about 1.35 ångström (1.39 Å in fluoromethane). It is shorter than any other carbon–halogen bond, and shorter than single carbon–nitrogen and carbon–oxygen bonds. The short length of the bond can also be attributed to the ionic character of the bond (the electrostatic attractions between the partial charges on the carbon and the fluorine). The carbon–fluorine bond length varies by several hundredths of an ångstrom depending on the hybridization of the carbon atom and the presence of other substituents on the carbon or even in atoms farther away. These fluctuations can be used as indication of subtle hybridization changes and stereoelectronic interactions. The table below shows how the average bond length varies in different bonding environments (carbon atoms are sp3-hybridized unless otherwise indicated for sp2 or aromatic carbon).
{| class="wikitable"
|-
! Bond !! Mean bond length (Å)
|-
| CCH2F, C2CHF || 1.399
|-
| C3CF || 1.428
|-
| C2CF2, H2CF2, CCHF2 || 1.349
|-
| CCF3 || 1.346
|-
| FCNO2 || 1.320
|-
| FCCF || 1.371
|-
| Csp2F || 1.340
|-
| CarF || 1.363
|-
| FCarCarF || 1.340
|}
The variability in bond lengths and the shortening of bonds to fluorine due to their partial ionic character are also observed for bonds between fluorine and other elements, and have been a source of difficulties with the selection of an appropriate value for the covalent radius of fluorine. Linus Pauling originally suggested 64 pm, but that value was eventually replaced by 72 pm, which is half of the fluorine–fluorine bond length. However, 72 pm is too long to be representative of the lengths of the bonds between fluorine and other elements, so values between 54 pm and 60 pm have been suggested by other authors.
Bond strength effect of geminal bonds
With an increasing number of fluorine atoms on the same (geminal) carbon, the other bonds become stronger and shorter. This can be seen in the changes in bond length and strength (BDE) for the fluoromethane series, as shown in the table below; the partial charges (qC and qF) on the atoms also change within the series. The partial charge on carbon becomes more positive as fluorines are added, increasing the electrostatic interactions, and the ionic character, between the fluorines and the carbon.
{| class="wikitable"
|-
!Compound
!C-F bond length (Å)
!BDE (kcal/mol)
!qC
!qF
|-
|CH3F
|1.385
|109.9 ± 1
|0.01
| −0.23
|-
|CH2F2
|1.357
|119.5
|0.40
| −0.23
|-
|CHF3
|1.332
|127.5
|0.56
| −0.21
|-
|CF4
|1.319
|130.5 ± 3
|0.72
| −0.18
|}
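To illustrate the ionic character quantified in the table, the following sketch converts each partial charge and bond length into a crude two-point-charge estimate of the C–F bond dipole. This is only a back-of-the-envelope model: treating the bond as two point charges is an assumption of the sketch, it ignores the rest of the molecule, and by symmetry the net molecular dipole of CF4 is zero even though each of its bonds is individually polar. The input values come from the table above.

```python
# Crude point-charge estimate of each C–F bond dipole in the fluoromethane
# series, using the bond lengths (Å) and fluorine partial charges (units
# of the elementary charge e) from the table above.

E_CHARGE = 1.602176634e-19   # elementary charge, C
DEBYE = 3.33564e-30          # 1 debye in C·m

series = {                   # compound: (C–F length in Å, |q_F| in e)
    "CH3F":  (1.385, 0.23),
    "CH2F2": (1.357, 0.23),
    "CHF3":  (1.332, 0.21),
    "CF4":   (1.319, 0.18),
}

for name, (length_A, q_f) in series.items():
    # dipole = charge × separation, converted from C·m to debye
    mu_debye = q_f * E_CHARGE * (length_A * 1e-10) / DEBYE
    print(f"{name}: ~{mu_debye:.2f} D per C–F bond")   # CH3F gives ~1.53 D
```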
Gauche effect
When two fluorine atoms are on vicinal (i.e., adjacent) carbons, as in 1,2-difluoroethane (H2FCCFH2), the gauche conformer is more stable than the anti conformer. This is the opposite of what would normally be expected and of what is observed for most 1,2-disubstituted ethanes; the phenomenon is known as the gauche effect. In 1,2-difluoroethane, the gauche conformation is more stable than the anti conformation by 2.4 to 3.4 kJ/mol in the gas phase. This effect is not unique to fluorine, however; the gauche effect is also observed for 1,2-dimethoxyethane. A related effect is the alkene cis effect: for instance, the cis isomer of 1,2-difluoroethylene is more stable than the trans isomer.
There are two main explanations for the gauche effect: hyperconjugation and bent bonds. In the hyperconjugation model, the donation of electron density from the carbon–hydrogen σ bonding orbital to the carbon–fluorine σ* antibonding orbital is considered the source of stabilization in the gauche isomer. Due to the greater electronegativity of fluorine, the carbon–hydrogen σ orbital is a better electron donor than the carbon–fluorine σ orbital, while the carbon–fluorine σ* orbital is a better electron acceptor than the carbon–hydrogen σ* orbital. Only the gauche conformation allows good overlap between the better donor and the better acceptor.
Key in the bent bond explanation of the gauche effect in difluoroethane is the increased p orbital character of both carbon–fluorine bonds due to the large electronegativity of fluorine. As a result, electron density builds up above and below to the left and right of the central carbon–carbon bond. The resulting reduced orbital overlap can be partially compensated when a gauche conformation is assumed, forming a bent bond. Of these two models, hyperconjugation is generally considered the principal cause behind the gauche effect in difluoroethane.
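The size of the preference can be made concrete with a simple Boltzmann estimate. The sketch below assumes a two-state model at room temperature with a statistical weight of 2 for gauche (the +60° and −60° conformers are equivalent) and the 2.4–3.4 kJ/mol gas-phase energy difference quoted above; real conformer populations also depend on entropy and phase, so the numbers are only indicative.

```python
import math

R = 8.314462618   # gas constant, J/(mol·K)
T = 298.15        # temperature, K (room temperature, an assumption)

# Two-state Boltzmann model: gauche lies 2.4–3.4 kJ/mol below anti
# (figures from the text); gauche carries a degeneracy of 2.
for dE in (2400.0, 3400.0):   # J/mol, energy of anti above gauche
    ratio = 2.0 * math.exp(dE / (R * T))   # gauche : anti population ratio
    frac = ratio / (1.0 + ratio)
    print(f"dE = {dE/1000:.1f} kJ/mol -> gauche:anti ~ {ratio:.1f}:1 "
          f"({frac:.0%} gauche)")
# Prints roughly 5:1 to 8:1, i.e. about 84–89% gauche at equilibrium.
```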
Spectroscopy
The carbon–fluorine bond stretching appears in the infrared spectrum between 1000 and 1360 cm−1. The wide range is due to the sensitivity of the stretching frequency to other substituents in the molecule. Monofluorinated compounds have a strong band between 1000 and 1110 cm−1; with more than one fluorine atom, the band splits into two bands, one for the symmetric mode and one for the asymmetric. The carbon–fluorine bands are so strong that they may obscure any carbon–hydrogen bands that might be present.
Organofluorine compounds can also be characterized using NMR spectroscopy, using carbon-13, fluorine-19 (the only natural fluorine isotope), or hydrogen-1 (if present). The chemical shifts in 19F NMR appear over a very wide range, depending on the degree of substitution and functional group. The table below shows the ranges for some of the major classes.
{| class="wikitable"
|-
! Type of Compound !! Chemical Shift Range (ppm) Relative to neat CFCl3
|-
| F–C=O || −70 to −20
|-
| CF3 || +40 to +80
|-
| CF2 || +80 to +140
|-
| CF || +140 to +250
|-
| ArF || +80 to +170
|}
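Because the ranges in the table overlap, a shift alone does not always identify the functional group. The toy lookup below, which uses the table's values and sign convention, returns every class consistent with a given 19F shift; it is a sketch for illustration, not an assignment tool.

```python
# Toy lookup mapping a 19F chemical shift (ppm, relative to neat CFCl3,
# using the sign convention of the table above) to the compound classes
# listed there. Ranges overlap, so every consistent class is returned.

SHIFT_RANGES = {
    "F-C=O": (-70, -20),
    "CF3":   (40, 80),
    "CF2":   (80, 140),
    "CF":    (140, 250),
    "ArF":   (80, 170),
}

def candidate_classes(shift_ppm: float) -> list[str]:
    return [name for name, (low, high) in SHIFT_RANGES.items()
            if low <= shift_ppm <= high]

print(candidate_classes(100))   # ['CF2', 'ArF'] – overlapping ranges
print(candidate_classes(-50))   # ['F-C=O']
```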
Breaking C–F bonds
Breaking C–F bonds is of interest as a way to decompose and destroy organofluorine "forever chemicals" such as PFOA and other perfluorinated compounds (PFCs). Candidate methods include catalysts such as platinum atoms; photocatalysts; UV light combined with iodide or sulfite; and radical-based approaches.
Some metal complexes cleave C–F bonds. These reactions are of interest from the perspectives of organic synthesis and remediation of xenochemicals. C–F bond activation has been classified as follows: "(i) oxidative addition of fluorocarbon, (ii) M–C bond formation with HF elimination, (iii) M–C bond formation with fluorosilane elimination, (iv) hydrodefluorination of fluorocarbon with M–F bond formation, (v) nucleophilic attack on fluorocarbon, and (vi) defluorination of fluorocarbon". An illustrative metal-mediated C–F activation reaction is the defluorination of fluorohexane by a zirconocene dihydride.
See also
Fluorocarbon
Organofluorine chemistry
Carbon–hydrogen bond
Carbon–carbon bond
Carbon–nitrogen bond
Carbon–oxygen bond
References
Fluorine
Organic chemistry
Chemical bonding | Carbon–fluorine bond | Physics,Chemistry,Materials_science | 2,163 |
492,612 | https://en.wikipedia.org/wiki/Bimodule | In abstract algebra, a bimodule is an abelian group that is both a left and a right module, such that the left and right multiplications are compatible. Besides appearing naturally in many parts of mathematics, bimodules play a clarifying role, in the sense that many of the relationships between left and right modules become simpler when they are expressed in terms of bimodules.
Definition
If R and S are two rings, then an R-S-bimodule is an abelian group (M, +) such that:
M is a left R-module and a right S-module.
For all r in R, s in S and m in M, (rm)s = r(ms).
An R-R-bimodule is also known as an R-bimodule.
Examples
For positive integers n and m, the set Mn,m(R) of n × m matrices of real numbers is an R-S-bimodule, where R is the ring Mn(R) of n × n matrices, and S is the ring Mm(R) of m × m matrices. Addition and multiplication are carried out using the usual rules of matrix addition and matrix multiplication; the heights and widths of the matrices have been chosen so that multiplication is defined. Note that Mn,m(R) itself is not a ring (unless n = m), because multiplying an n × m matrix by another n × m matrix is not defined. The crucial bimodule property, that (rx)s = r(xs), is the statement that multiplication of matrices is associative (which, in the case of a matrix ring, corresponds to associativity); a numerical check appears after this list.
Any algebra A over a ring R has the natural structure of an R-bimodule, with left and right multiplication defined by ra = f(r)a and ar = af(r) respectively, where f is the canonical embedding of R into A.
If R is a ring, then R itself can be considered to be an R-R-bimodule by taking the left and right actions to be multiplication – the actions commute by associativity. This can be extended to Rn (the n-fold direct product of R).
Any two-sided ideal of a ring R is an R-R-bimodule, with the ring multiplication both as the left and as the right multiplication.
Any module over a commutative ring R has the natural structure of a bimodule. For example, if M is a left R-module, we can define multiplication on the right to be the same as multiplication on the left. (However, not all R-bimodules arise this way: other compatible right multiplications may exist.)
If M is a left R-module, then M is an R-Z-bimodule, where Z is the ring of integers. Similarly, right R-modules may be interpreted as Z-R-bimodules. Any abelian group may be treated as a Z-Z-bimodule.
If M is a right R-module, then the set EndR(M) of R-module endomorphisms is a ring with the multiplication given by composition. The endomorphism ring acts on M by left multiplication defined by fm = f(m). The bimodule property, that (fm)r = f(mr), restates that f is an R-module homomorphism from M to itself. Therefore any right R-module M is an EndR(M)-R-bimodule. Similarly any left R-module N is an R-EndR(N)op-bimodule.
If R is a subring of S, then S is an R-R-bimodule. It is also an R-S-bimodule and an S-R-bimodule.
If M is an S-R-bimodule and N is an R-T-bimodule, then M ⊗R N is an S-T-bimodule.
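As a numerical sanity check of the matrix example above, the following sketch verifies the compatibility axiom (rx)s = r(xs) for random real matrices. It is a minimal sketch assuming NumPy is available; the sizes are arbitrary, and the check succeeds precisely because matrix multiplication is associative.

```python
# Check (r·x)·s == r·(x·s) for the matrix bimodule Mn,m(R): x is an
# n × m real matrix, r acts on the left as an n × n matrix (in Mn(R)),
# and s acts on the right as an m × m matrix (in Mm(R)).

import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
r = rng.standard_normal((n, n))   # element of Mn(R), left action
x = rng.standard_normal((n, m))   # element of the bimodule Mn,m(R)
s = rng.standard_normal((m, m))   # element of Mm(R), right action

assert np.allclose((r @ x) @ s, r @ (x @ s))
print("(r·x)·s == r·(x·s) for random matrices")
```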
Further notions and facts
If M and N are R-S-bimodules, then a map f : M → N is a bimodule homomorphism if it is both a homomorphism of left R-modules and of right S-modules.
An R-S-bimodule is actually the same thing as a left module over the ring R ⊗Z Sop, where Sop is the opposite ring of S (where the multiplication is defined with the arguments exchanged). Bimodule homomorphisms are the same as homomorphisms of left R ⊗Z Sop-modules. Using these facts, many definitions and statements about modules can be immediately translated into definitions and statements about bimodules. For example, the category of all R-S-bimodules is abelian, and the standard isomorphism theorems are valid for bimodules.
There are however some new effects in the world of bimodules, especially when it comes to the tensor product: if M is an R-S-bimodule and N is an S-T-bimodule, then the tensor product M ⊗S N of M and N (taken over the ring S) is an R-T-bimodule in a natural fashion. This tensor product of bimodules is associative (up to a unique canonical isomorphism), and one can hence construct a category whose objects are the rings and whose morphisms are the bimodules. This is in fact a 2-category, in a canonical way – 2-morphisms between M and N are exactly bimodule homomorphisms, i.e. functions
that satisfy
f(rms) = rf(m)s
for all r in R, m in M, and s in S. One immediately verifies the interchange law for bimodule homomorphisms, i.e.
(g′ ∘ g) ⊗ (f′ ∘ f) = (g′ ⊗ f′) ∘ (g ⊗ f) holds whenever either (and hence the other) side of the equation is defined, and where ∘ is the usual composition of homomorphisms. In this interpretation, the category of R-R-bimodules is exactly the monoidal category of R-R-bimodules, with the usual tensor product over R as the tensor product of the category. In particular, if R is a commutative ring, every left or right R-module is canonically an R-R-bimodule, which gives a monoidal embedding of the category of R-modules into the category of R-R-bimodules. The case that R is a field K is a motivating example of a symmetric monoidal category, in which case this is K-Vect, the category of vector spaces over K, with the usual tensor product giving the monoidal structure, and with unit K. We also see that a monoid in the category of R-R-bimodules is exactly an R-algebra.
Furthermore, if M is an R-S-bimodule and L is a T-S-bimodule, then the set HomS(M, L) of all S-module homomorphisms from M to L becomes a T-R-bimodule in a natural fashion. These statements extend to the derived functors Ext and Tor.
Profunctors can be seen as a categorical generalization of bimodules.
Note that bimodules are not at all related to bialgebras.
See also
Profunctor
References
Module theory | Bimodule | Mathematics | 1,273 |
8,537,444 | https://en.wikipedia.org/wiki/Cosmic%20Calendar | The Cosmic Calendar is a method to visualize the chronology of the universe, scaling its currently understood age of 13.8 billion years to a single year in order to help intuit it for pedagogical purposes in science education or popular science. A similar analogy used to visualize the geologic time scale and the history of life on Earth is the Geologic Calendar.
In this visualization, the Big Bang took place at the beginning of January 1 at midnight, and the current moment maps onto the end of December 31 just before midnight. At this scale, there are 438 years per cosmic second, 1.58 million years per cosmic hour, and 37.8 million years per cosmic day.
The solar system materialized in Cosmic September. The Phanerozoic corresponds only to the latter half of December, with the Cenozoic beginning only on the penultimate day of the Calendar. The Quaternary occupies only about the last 100 minutes of the final Cosmic Day, and the Holocene only the final 23 Cosmic Seconds. On the other hand, the relic radiation is dated to the first fifteen minutes of the very first Cosmic Day; even if the Cosmic Calendar were stretched to 100 years, the relic radiation would still fall just after the start of the second Cosmic Day.
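The scaling is simple arithmetic, as the sketch below shows. It assumes a 365-day cosmic year and the 13.8-billion-year age used in this article; the event ages supplied for illustration (about 4.57 billion years for the Solar System, 66 million years for the start of the Cenozoic, and relic radiation released roughly 380,000 years after the Big Bang) are standard round figures, not values from this article.

```python
import datetime as dt

AGE_YEARS = 13.8e9                    # age of the universe, from the article
DAYS_IN_YEAR = 365
YEARS_PER_DAY = AGE_YEARS / DAYS_IN_YEAR                # ≈ 37.8 million
YEARS_PER_SECOND = AGE_YEARS / (DAYS_IN_YEAR * 86400)   # ≈ 438

def cosmic_moment(years_ago: float) -> str:
    """Map an event 'years_ago' onto the calendar (Big Bang = Jan 1, 00:00)."""
    elapsed_days = (AGE_YEARS - years_ago) / YEARS_PER_DAY
    moment = dt.datetime(2001, 1, 1) + dt.timedelta(days=elapsed_days)
    return moment.strftime("%b %d, %H:%M")    # the year tag is arbitrary

print(f"{YEARS_PER_SECOND:.0f} years per cosmic second")
print("Solar System forms:", cosmic_moment(4.57e9))            # early September
print("Cenozoic begins:   ", cosmic_moment(66e6))              # Dec 30
print("Relic radiation:   ", cosmic_moment(13.8e9 - 380_000))  # Jan 01, 00:14
```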
The concept was popularized by Carl Sagan in his 1977 book The Dragons of Eden and on his 1980 television series Cosmos. Sagan goes on to extend the comparison in terms of surface area, explaining that if the Cosmic Calendar were scaled to the size of a football field, then "all of human history would occupy an area the size of [his] hand". The Cosmic Calendar was reused in the 2014 sequel series, Cosmos: A Spacetime Odyssey.
References
Units of time
Calendar
Time in astronomy
Popular science
Carl Sagan
Scientific visualization
Analogy | Cosmic Calendar | Physics,Astronomy,Mathematics | 367 |
2,507,104 | https://en.wikipedia.org/wiki/Aposematism | Aposematism is the advertising by an animal, whether terrestrial or marine, to potential predators that it is not worth attacking or eating. This unprofitability may consist of any defenses which make the prey difficult to kill and eat, such as toxicity, venom, foul taste or smell, sharp spines, or aggressive nature. These advertising signals may take the form of conspicuous coloration, sounds, odours, or other perceivable characteristics. Aposematic signals are beneficial for both predator and prey, since both avoid potential harm.
The term was coined in 1890 by Edward Bagnall Poulton for Alfred Russel Wallace's concept of warning coloration. Aposematism is exploited in Müllerian mimicry, where species with strong defences evolve to resemble one another. By mimicking similarly coloured species, the warning signal to predators is shared, causing them to learn more quickly at less of a cost.
A genuine aposematic signal that a species actually possesses chemical or physical defences is not the only way to deter predators. In Batesian mimicry, a mimicking species resembles an aposematic model closely enough to share the protection, while many species have bluffing deimatic displays which may startle a predator long enough to enable an otherwise undefended prey to escape.
Etymology
The term aposematism was coined by the English zoologist Edward Bagnall Poulton in his 1890 book The Colours of Animals. He based the term on the Ancient Greek words ἀπό apo 'away' and σῆμα sēma 'sign', referring to signs that warn other animals away.
Defence mechanism
The function of aposematism is to prevent attack, by warning potential predators that the prey animal has defenses such as being unpalatable or poisonous. The easily detected warning is a primary defense mechanism, and the non-visible defenses are secondary. Aposematic signals are primarily visual, using bright colours and high-contrast patterns such as stripes. Warning signals are honest indications of noxious prey, because conspicuousness evolves in tandem with noxiousness. Thus, the brighter and more conspicuous the organism, the more toxic it usually is. This is in contrast to deimatic displays, which attempt to startle a predator with a threatening appearance but which are bluffing, unsupported by any strong defences.
The most common and effective colours are red, yellow, black, and white. These colours provide strong contrast with green foliage, resist changes in shadow and lighting, are highly chromatic, and provide distance dependent camouflage. Some forms of warning coloration provide this distance dependent camouflage by having an effective pattern and color combination that do not allow for easy detection by a predator from a distance, but are warning-like from a close proximity, allowing for an advantageous balance between camouflage and aposematism. Warning coloration evolves in response to background, light conditions, and predator vision. Visible signals may be accompanied by odors, sounds or behavior to provide a multi-modal signal which is more effectively detected by predators.
Unpalatability, broadly understood, can be created in a variety of ways. Some insects such as the ladybird or tiger moth contain bitter-tasting chemicals, while the skunk produces a noxious odor, and the poison glands of the poison dart frog, the sting of a velvet ant or neurotoxin in a black widow spider make them dangerous or painful to attack. Tiger moths advertise their unpalatability by either producing ultrasonic noises which warn bats to avoid them, or by warning postures which expose brightly coloured body parts (see Unkenreflex), or exposing eyespots. Velvet ants (actually parasitic wasps) such as Dasymutilla occidentalis both have bright colours and produce audible noises when grabbed (via stridulation), which serve to reinforce the warning. Among mammals, predators can be dissuaded when a smaller animal is aggressive and able to defend itself, as for example in honey badgers.
Prevalence
In terrestrial ecosystems
Aposematism is widespread in insects, but less so in vertebrates, being mostly confined to a smaller number of reptile, amphibian, and fish species, and some foul-smelling or aggressive mammals. Pitohuis, red and black birds whose feathers and skin apparently derive their toxicity from the poisonous beetles they ingest, could be included. It has been proposed that aposematism played a role in human evolution, body odour carrying a warning to predators of large hominins able to defend themselves with weapons.
Perhaps the most numerous aposematic vertebrates are the poison dart frogs (family: Dendrobatidae). These neotropical anuran amphibians exhibit a wide spectrum of coloration and toxicity. Some species in this poison frog family (particularly Dendrobates, Epipedobates, and Phyllobates) are conspicuously coloured and sequester one of the most toxic alkaloids among all living species. Within the same family, there are also cryptic frogs (such as Colostethus and Mannophryne) that lack these toxic alkaloids. Although these frogs display an extensive array of coloration and toxicity, there is very little genetic difference between the species. Evolution of their conspicuous coloration is correlated to traits such as chemical defense, dietary specialization, acoustic diversification, and increased body mass.
Some plants are thought to employ aposematism to warn herbivores of unpalatable chemicals or physical defences such as prickled leaves or thorns. Many insects, such as cinnabar moth caterpillars, acquire toxic chemicals from their host plants. Among mammals, skunks and zorillas advertise their foul-smelling chemical defences with sharply contrasting black-and-white patterns on their fur, while the similarly-patterned badger and honey badger advertise their sharp claws, powerful jaws, and aggressive natures. Some brightly coloured birds such as passerines with contrasting patterns may also be aposematic, at least in females; but since male birds are often brightly coloured through sexual selection, and their coloration is not correlated with edibility, it is unclear whether aposematism is significant.
The sound-producing rattle of rattlesnakes is an acoustic form of aposematism. Sound production by the caterpillar of the Polyphemus moth, Antheraea polyphemus, may similarly be acoustic aposematism, connected to and preceded by chemical defences. Similar acoustic defences exist in a range of Bombycoidea caterpillars.
In marine ecosystems
The existence of aposematism in marine ecosystems has been debated. Many marine organisms, particularly those on coral reefs, are brightly coloured or patterned, including sponges, corals, molluscs, and fish, with little or no connection to chemical or physical defenses. Caribbean reef sponges are brightly coloured, and many species are full of toxic chemicals, but there is no statistical relationship between the two factors.
Nudibranch molluscs are the most commonly cited examples of aposematism in marine ecosystems, but the evidence for this has been contested, mostly because (1) there are few examples of mimicry among species, (2) many species are nocturnal or cryptic, and (3) bright colours at the red end of the colour spectrum are rapidly attenuated as a function of water depth. For example, the Spanish Dancer nudibranch (genus Hexabranchus), among the largest of tropical marine slugs, potently chemically defended, and brilliantly red and white, is nocturnal and has no known mimics.
Mimicry is to be expected as Batesian mimics with weak defences can gain a measure of protection from their resemblance to aposematic species. Other studies have concluded that nudibranchs such as the slugs of the family Phyllidiidae from Indo-Pacific coral reefs are aposematically coloured. Müllerian mimicry has been implicated in the coloration of some Mediterranean nudibranchs, all of which derive defensive chemicals from their sponge diet.
The crown-of-thorns starfish, like other starfish such as Metrodira subulata, has conspicuous coloration and conspicuous long, sharp spines, as well as cytolytic saponins, chemicals which could function as an effective defence; this evidence is argued to be sufficient for such species to be considered aposematic.
It has been proposed that aposematism and mimicry is less evident in marine invertebrates than terrestrial insects because predation is a more intense selective force for many insects, which disperse as adults rather than as larvae and have much shorter generation times. Further, there is evidence that fish predators such as blueheads may adapt to visual cues more rapidly than do birds, making aposematism less effective. However, there is experimental evidence that pink warty sea cucumbers are aposematic, and that the chromatic and achromatic signals that they provide to predators both independently reduce the rate of attack.
Blue-ringed octopuses are venomous. They spend much of their time hiding in crevices whilst displaying effective camouflage patterns with their dermal chromatophore cells. However, if they are provoked, they quickly change colour, becoming bright yellow with each of the 50-60 rings flashing bright iridescent blue within a third of a second. It is often stated this is an aposematic warning display, but the hypothesis has rarely if ever been tested.
Behaviour
The mechanism of defence relies on the memory of the would-be predator; a bird that has once experienced a foul-tasting grasshopper will endeavor to avoid a repetition of the experience. As a consequence, aposematic species are often gregarious. Before the memory of a bad experience attenuates, the predator may have the experience reinforced through repetition. Aposematic organisms are often slow-moving, as they have little need for speed and agility. Instead, their morphology is frequently tough and resistant to injury, thereby allowing them to escape once the predator is warned off.
Aposematic species do not need to hide or stay still as cryptic organisms do, so aposematic individuals benefit from more freedom in exposed areas and can spend more time foraging, allowing them to find more and better quality food. They may make use of conspicuous mating displays, including vocal signals, which may then develop through sexual selection.
Origins of the theory
Wallace, 1867
In a letter to Alfred Russel Wallace dated 23 February 1867, Charles Darwin wrote, "On Monday evening I called on Bates & put a difficulty before him, which he could not answer, & as on some former similar occasion, his first suggestion was, 'you had better ask Wallace'. My difficulty is, why are caterpillars sometimes so beautifully & artistically coloured?" Darwin was puzzled because his theory of sexual selection (where females choose their mates based on how attractive they are) could not apply to caterpillars since they are immature and hence not sexually active.
Wallace replied the next day with the suggestion that since some caterpillars "...are protected by a disagreeable taste or odour, it would be a positive advantage to them never to be mistaken for any of the palatable catterpillars [sic], because a slight wound such as would be caused by a peck of a bird's bill almost always I believe kills a growing . Any gaudy & conspicuous colour therefore, that would plainly distinguish them from the brown & green eatable , would enable birds to recognise them easily as at a kind not fit for food, & thus they would escape seizure which is as bad as being eaten."
Since Darwin was enthusiastic about the idea, Wallace asked the Entomological Society of London to test the hypothesis. In response, the entomologist John Jenner Weir conducted experiments with caterpillars and birds in his aviary, and in 1869 he provided the first experimental evidence for warning coloration in animals. The evolution of aposematism surprised 19th-century naturalists because the probability of its establishment in a population was presumed to be low, since a conspicuous signal suggested a higher chance of predation.
Poulton, 1890
Wallace coined the term "warning colours" in an article about animal coloration in 1877. In 1890 Edward Bagnall Poulton renamed the concept aposematism in his book The Colours of Animals. He described the derivation of the term as follows:
Evolution
Aposematism is paradoxical in evolutionary terms, as it makes individuals conspicuous to predators, so they may be killed and the trait eliminated before predators learn to avoid it. If warning coloration puts the first few individuals at such a strong disadvantage, it would never last in the species long enough to become beneficial.
Supported explanations
There is evidence for explanations involving dietary conservatism, in which predators avoid new prey because it is an unknown quantity; this is a long-lasting effect. Dietary conservatism has been demonstrated experimentally in some species of birds and fish.
Further, birds recall and avoid objects that are both conspicuous and foul-tasting longer than objects that are equally foul-tasting but cryptically coloured. This suggests that Wallace's original view, that warning coloration helped to teach predators to avoid prey thus coloured, was correct. However, some birds (inexperienced starlings and domestic chicks) also innately avoid conspicuously coloured objects, as demonstrated using mealworms painted yellow and black to resemble wasps, with dull green controls. This implies that warning coloration works at least in part by stimulating the evolution of predators to encode the meaning of the warning signal, rather than by requiring each new generation to learn the signal's meaning. All of these results contradict the idea that novel, brightly coloured individuals would be more likely to be eaten or attacked by predators.
Alternative hypotheses
Other explanations are possible. Predators might innately fear unfamiliar forms (neophobia) long enough for them to become established, but this is likely to be only temporary.
Alternatively, prey animals might be sufficiently gregarious to form clusters tight enough to enhance the warning signal. If the species was already unpalatable, predators might learn to avoid the cluster, protecting gregarious individuals with the new aposematic trait. Gregariousness would assist predators to learn to avoid unpalatable, gregarious prey. Aposematism could also be favoured in dense populations even if these are not gregarious.
Another possibility is that a gene for aposematism might be recessive and located on the X chromosome. If so, predators would learn to associate the colour with unpalatability from males with the trait, while heterozygous females carry the trait until it becomes common and predators understand the signal. Well-fed predators might also ignore aposematic morphs, preferring other prey species.
A further explanation is that females might prefer brighter males, so sexual selection could result in aposematic males having higher reproductive success than non-aposematic males if they can survive long enough to mate. Sexual selection is strong enough to allow seemingly maladaptive traits to persist despite other factors working against the trait.
Once aposematic individuals reach a certain threshold population, for whatever reason, the predator learning process would be spread out over a larger number of individuals and therefore is less likely to wipe out the trait for warning coloration completely. If the population of aposematic individuals all originated from the same few individuals, the predator learning process would result in a stronger warning signal for surviving kin, resulting in higher inclusive fitness for the dead or injured individuals through kin selection.
A theory for the evolution of aposematism posits that it arises by reciprocal selection between predators and prey, where distinctive features in prey, which could be visual or chemical, are selected by non-discriminating predators, and where, concurrently, avoidance of distinctive prey is selected by predators. Concurrent reciprocal selection (CRS) may entail learning by predators or it may give rise to unlearned avoidances by them. Aposematism arising by CRS operates without special conditions of the gregariousness or the relatedness of prey, and it is not contingent upon predator sampling of prey to learn that aposematic cues are associated with unpalatability or other unprofitable features.
Mimicry
Aposematism is a sufficiently successful strategy to have had significant effects on the evolution of both aposematic and non-aposematic species.
Non-aposematic species have often evolved to mimic the conspicuous markings of their aposematic counterparts. For example, the hornet moth is a deceptive mimic of the yellowjacket wasp; it resembles the wasp, but has no sting. A predator which avoids the wasp will to some degree also avoid the moth. This is known as Batesian mimicry, after Henry Walter Bates, a British naturalist who studied Amazonian butterflies in the second half of the 19th century. Batesian mimicry is frequency dependent: it is most effective when the ratio of mimic to model is low; otherwise, predators will encounter the mimic too often.
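The frequency dependence can be illustrated with a toy predator-learning simulation. In the sketch below, a single avoidance score for the shared warning pattern rises when the predator attacks a noxious model, falls when it attacks a palatable mimic, and slowly decays through forgetting; all parameters are invented for illustration and are not drawn from the mimicry literature.

```python
import random

def mimic_attack_rate(mimic_fraction: float, encounters: int = 10_000,
                      seed: int = 1) -> float:
    """Fraction of encountered mimics that a learning predator attacks."""
    rng = random.Random(seed)
    avoidance = 0.0
    mimics_seen = mimics_attacked = 0
    for _ in range(encounters):
        avoidance *= 0.99                  # memory slowly fades
        is_mimic = rng.random() < mimic_fraction
        if is_mimic:
            mimics_seen += 1
        if avoidance < 0.5:                # warning signal not (yet) deterrent
            if is_mimic:
                mimics_attacked += 1
                avoidance = max(0.0, avoidance - 0.2)  # rewarded: unlearns
            else:
                avoidance = min(1.0, avoidance + 0.2)  # noxious: learns
    return mimics_attacked / max(1, mimics_seen)

for f in (0.1, 0.5, 0.9):
    print(f"mimic fraction {f:.0%}: {mimic_attack_rate(f):.0%} of mimics attacked")
```

As the mimic fraction rises, attacks on rewarding mimics erode the learned avoidance faster than encounters with models can restore it, so a larger share of mimics is attacked and the Batesian protection weakens.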
A second form of mimicry occurs when two aposematic organisms share the same anti-predator adaptation and non-deceptively mimic each other, to the benefit of both species, since fewer individuals of either species need to be attacked for predators to learn to avoid both of them. This form of mimicry is known as Müllerian mimicry, after Fritz Müller, a German naturalist who studied the phenomenon in the Amazon in the late 19th century.
Many species of bee and wasp that occur together are Müllerian mimics. Their similar coloration teaches predators that a striped pattern is associated with being stung. Therefore, a predator which has had a negative experience with any such species will likely avoid any that resemble it in the future. Müllerian mimicry is found in vertebrates such as the mimic poison frog (Ranitomeya imitator) which has several morphs throughout its natural geographical range, each of which looks very similar to a different species of poison frog which lives in that area.
See also
Handicap principle
References
Sources
External links
Signalling theory
Animal communication
Antipredator adaptations
Evolution by phenotype
Warning coloration
Ecology
Chemical ecology | Aposematism | Chemistry,Biology | 3,710 |
68,291,117 | https://en.wikipedia.org/wiki/KELT-6 | KELT-6, also known as BD+31 2447, is a star in the constellation Coma Berenices. With an apparent magnitude of 10.34, it is impossible to see with the unaided eye, but can be seen with a powerful telescope. The star is located 791 light years away from the Solar System based on parallax, but is drifting away with a radial velocity of 1.62 km/s.
Properties
KELT-6 is an F-type star that is 13% more massive and 53% larger than the Sun. It radiates 3.25 times the Sun's luminosity from its photosphere at an effective temperature of 6,727 K. KELT-6 has a projected rotational velocity of 4.53 km/s and is slightly older than the Sun, with an age of 4.9 billion years. Unlike most exoplanet host stars, it is metal-poor, with 52.5% of the Sun's abundance of heavy elements.
Planetary system
In 2013, a long-period "hot Jupiter" was discovered orbiting the star using the transit method. A second planet was discovered in 2015 using the radial velocity (Doppler spectroscopy) method.
See also
List of most luminous stars
List of most massive stars
Lists of stars
Lists of stars by constellation
References
F-type subgiants
Coma Berenices
Durchmusterung objects
Planetary systems with two confirmed planets | KELT-6 | Astronomy | 299 |
39,217,162 | https://en.wikipedia.org/wiki/Russula%20parvovirescens | Russula parvovirescens is a basidiomycete mushroom of the genus Russula. Found in the eastern United States, it was described as new to science in 2006.
Description
The green cap is convex with a nearly flat top and central depression, and wide; it has a quilted appearance due to cracks that increase near the margin. The stem is long and thick. The spore print is cream.
Similar species
It is similar in appearance to the more widespread Russula virescens and R. crustosa, but can be distinguished from those species by its smaller stature, and microscopically by the voluminous terminal cells of the cap cuticle.
Distribution and habitat
It appears in the eastern United States from June to September.
See also
List of Russula species
References
External links
Fungi described in 2006
Fungi of the United States
parvovirescens
Fungi without expected TNC conservation status
Fungus species | Russula parvovirescens | Biology | 185 |
51,798,738 | https://en.wikipedia.org/wiki/Phenestrol | Phenestrol, or fenestrol, also known as hexestrol bis[4-[bis(2-chloroethyl)amino]phenylacetate, is a synthetic, nonsteroidal estrogen and cytostatic antineoplastic agent (i.e., chemotherapy drug) and a chlorphenacyl nitrogen mustard ester of hexestrol which was developed in the early 1960s for the treatment of hormone-dependent tumors but was never marketed.
See also
List of hormonal cytostatic antineoplastic agents
List of estrogen esters
References
Abandoned drugs
Antineoplastic drugs
Carboxylate esters
Estrogen esters
Hormonal antineoplastic drugs
Nitrogen mustards
Synthetic estrogens
Chloroethyl compounds | Phenestrol | Chemistry | 169 |
41,673,989 | https://en.wikipedia.org/wiki/S%C5%8Dk%C5%8D%20Sagy%C5%8D%20Ki | The , also known as the , was a fulltrack engineering vehicle of the Imperial Japanese Army (IJA) introduced in 1931. The vehicle was considered by the IJA to be one of its most versatile multi-function support vehicles.
History
During the 1930s, the Imperial Japanese Army required a specialised vehicle in preparation for war against the Soviet Union, which would be capable of destroying Soviet fortified positions along the Manchurian border. During the development and planning, it was decided that its capabilities should include: destruction of pillboxes, trench digging, mine clearing, barbed wire cutting, smoke discharge, mass decontamination, chemical weapons employment, use as a crane vehicle, as a flamethrower tank and as a bridgelayer.
The first prototype was built in 1931. Following testing, the Imperial Japanese Army ordered several vehicles, with the first four assigned to the 1st Mixed Tank Brigade sent to China. During the Battle of Beiping–Tianjin in 1937, the vehicles were used as flamethrower tanks; however, for later battles the vehicles were exclusively used as engineering vehicles. They were eventually sent to the Soviet-Manchurian border within a combat engineer regiment.
During December 1941, approximately 20 SS-Ki vehicles were transferred to the Philippines as part of the engineer unit of the 2nd Tank Division. Eight SS-Ki vehicles were captured there by the United States military in the Battle of Luzon in 1945, which classified the vehicles as flamethrower tanks.
Design
The design used the Type 89 I-Go medium tank chassis and hull, along with a few of its parts, and also incorporated parts from various other mass-production vehicles. The suspension consisted of two blocks of four roadwheels with two return rollers and no independent forward bogie, sprung on semi-elliptical leaf springs. The steering sprocket was placed at the front of the vehicle, while the drive sprocket was placed at the rear.
The turret was removed and replaced with a small commander cupola with fitted observational devices; two large claws used for mine clearing were placed in the front, while a winch designed to pull heavy objects was placed in the rear, and was directly powered by the engine. In addition, the chassis had a "tow coupling". The thickness of the armor was reduced to 6mm on the roof and bottom, 13mm at the sides, and 25mm at the front hull, since the vehicle was not intended for combat at the front lines. The vehicle weighed 13 tons and accommodated five crewmembers.
The SS-Ki was powered by a Mitsubishi I6 diesel engine, which provided 145 horsepower at 1,800 rpm and drove through a mechanical transmission, allowing the vehicle to travel at a top speed of 37 km/h.
Variants
By 1943, 119 vehicles had been built in the following variants:
: Armored engineering vehicle with suspension tracks consisting of four return rollers. These were equipped with folded bridge, crane arm and three flamethrowers with external tanks. Thirteen were produced.
: Armored bridgelaying vehicle with three return rollers and modified drive sprockets. These were equipped with folded bridge, twin mine rake arms, and three flamethrowers with external tanks. Eight were produced.
: Armored trench digger with suspension identical to the Otsu Gata, with additional armor plates. It was equipped with folded bridge, twin mine rake arms, and three flamethrowers with external tanks. One was produced.
: Armored engineering vehicle with suspension identical to the Otsu Gata. These were equipped with folded bridge, single arm mine rake and three flamethrowers with external tanks. Twenty were produced.
: Armored bridgelaying vehicle based on the design of the SS Ki. These were equipped with a catapult bridge, single arm mine rake, and five flamethrowers with internal tanks. Seventy-seven were produced.
Notes
References
Taki's Imperial Japanese Army page: Armored Engineer Vehicle "SS" - Akira Takizawa
Military engineering vehicles
Flame tanks
Military bridging equipment
World War II armoured fighting vehicles of Japan
Mitsubishi
Military vehicles introduced in the 1930s | Sōkō Sagyō Ki | Engineering | 828 |