properties, such as carbon and silicon in the sequence: carbon, nitrogen, oxygen, fluorine, sodium, magnesium and silicon. He called this a Law of Octaves. Three years later, in 1866, he presented his ideas to the Chemical Society. Unfortunately for Newlands, the musical analogy was not well received – the audience suggesting he might as well have ordered the elements alphabetically. Today, Newlands' Octaves are known as the Law of Periodicity, and Mendeleev was thinking along the same lines.

=== Mendeleev's periodic table ===

By 1869 Mendeleev had been trying to find an order for the elements for a decade. One day he struck upon the idea of making up a pack of cards with the elements' names on them and began playing a game he called 'chemical solitaire'. He began laying out the cards, over and over, just to see if he could form a pattern in which everything fitted together. Until then, chemists had tried to group elements in one of two ways: by their atomic weights (Berzelius' and Cannizzaro's atomic weights), or by their chemical properties (Döbereiner's Triads and Newlands' Octaves). Mendeleev's genius was to combine the two methods. However, the odds were stacked against him – little more than half of the elements now known had been discovered: he was playing with an incomplete deck of cards. He stayed up for three days and nights until, finally, on 17 February 1869, he fell asleep and dreamt of all 63 known elements laid out in a grand table. Mendeleev's table reveals the relationships between all the elements in their order: atomic weights increase reading from left to right, while Triads and Octaves are visible reading down the columns. Notice that carbon and silicon are in Group IV and the volatile gases fluorine, chlorine and bromine are in Group VII. Mendeleev was sufficiently confident in
{ "page_id": 29952420, "source": null, "title": "Chemistry: A Volatile History" }
the layout of his table that he was willing to leave gaps for unknown elements to make the pattern fit – believing other elements would later be discovered to fill them. After calcium (Ca, weight 40) he left a gap, predicting a metallic element slightly heavier than calcium; after zinc (Zn, weight 65) he left a gap, predicting a metal with a low melting point and atomic weight 68; immediately after that, he left a further gap, predicting another metal, dark grey in colour. So, for Mendeleev to be vindicated, the gaps needed to be filled – and, fortunately, in 1859 new instrumentation had been developed for discovering elements.

=== Bunsen's burner and Kirchhoff's spectrometer ===

Robert Bunsen knew that when certain elements burned in the flames of his burner they each turned the flame a different colour: copper burned green, strontium red and potassium lilac. Bunsen wondered if every element had a unique colour. He was joined in his research by Gustav Kirchhoff, who applied the dispersion of white light by a prism to invent the spectroscope – a device with a prism at its centre which split the light from Bunsen's flames into distinct bands of its constituent colours, the element's spectral lines. Kirchhoff and Bunsen realised these spectral lines were unique to each element and, using this technique, they discovered two new elements: caesium and rubidium.

=== Paul Emile Lecoq de Boisbaudran discovers gallium ===

In 1875, the Parisian chemist Paul Emile Lecoq de Boisbaudran used a spectroscope to discover a new metallic element: a silvery-white, soft metal with an atomic weight of 68, which he named gallium, after his native France. It also turned out to have a very low melting point, thus matching all the expected
properties of the element Mendeleev had predicted would fill the gap he had left after zinc; indeed, this is exactly where the element was placed in the periodic table. Even though Mendeleev had left the necessary gap for gallium, as well as for other elements, it was becoming clear that an entire group was missing altogether.

=== Pierre Janssen and Norman Lockyer discover helium ===

In 1868, the French astronomer Pierre Janssen travelled to India in time for the total solar eclipse that occurred in August of that year. As well as his telescope, he went equipped with a spectroscope to study the spectral lines of the light emitted from the Sun. Normally, owing to the intensity of sunlight, many weaker spectral lines are not visible next to the extreme brightness of the stronger ones. Janssen hoped he would observe more spectral lines during the eclipse, when the Sun's light was less intense. The eclipse allowed Janssen to observe a spectral line never seen before, which was not associated with any known element. The same spectral line was confirmed by the English astronomer Norman Lockyer, who, thinking the element existed only in the Sun, named it helium, after the Greek sun god. However, it wasn't long before another British scientist discovered helium on Earth.

=== William Ramsay discovers the noble gases ===

By dissolving the radioactive ore cleveite in acid, William Ramsay was able to collect a gas trapped within the rock, which had an atomic weight of 4 and the same spectral lines Lockyer had observed: helium. Prior to this, Ramsay had already isolated a new gas from the atmosphere: argon, with an atomic weight of 40. A problem now arose – Mendeleev had not left any gaps suitable for either of these
two new elements, which led Ramsay to conclude that an entire group was missing from the periodic table – of which only two members, helium and argon, were now known to exist. Ramsay went on to discover all the other stable elements in the group, which he named neon (Greek for new), krypton (Greek for hidden) and xenon (Greek for stranger). All the elements of this new group shared one overwhelming characteristic: their lack of reactivity. It was this particular characteristic that suggested a name for the new group: the noble gases.

=== Mendeleev vindicated ===

Mendeleev's periodic table had brought order to all the elements, allowing him to make predictions that future scientists tested and found to be true. By the time he died he was world-renowned in chemistry. His periodic table was set in stone in St Petersburg, and an element was eventually named after him: mendelevium. The periodic table does not, however, tell us why some elements are highly reactive and others completely inert, or why some are volatile whilst others are less so. It wasn't until the beginning of the 20th century that an entirely different branch of science began to unravel the answers to these questions.

=== Niels Bohr's fixed shell model ===

In 1909, the physicist Ernest Rutherford proposed that the structure of the atom was like that of a solar system: mostly empty space, with electrons orbiting a dense nucleus. Subsequently, the Danish physicist Niels Bohr introduced the idea that electrons occupied "fixed shells" around the nucleus, an idea further developed when it was suggested that each such shell could only accommodate a fixed number of electrons: 2 in the first shell, 8 in the second, 18 in the third, and so on, each shell holding an increasing number of electrons. The chemical behaviour of
all elements is explained by the number of electrons in their outer shells: to increase the energetic stability of their electron configurations, atoms tend to gain or lose electrons in such a way as to achieve a full outer shell. Sodium, with 11 electrons – one of them in its outermost occupied shell – will, in the presence of fluorine, transfer that electron to fluorine's outermost occupied shell, which contains seven electrons. The result is that both sodium and fluorine now have a full outer shell, and sodium fluoride is formed. This theory explained why elements react in the way they do and why some form the compounds they do while others do not. It also explained why elements have the physical properties they do, which in turn explained why the periodic table has the shape it does. However, there was one fundamental question left unanswered: how many elements were there – could there be an infinite number of elements between hydrogen and uranium?

=== Henry Moseley's proton numbers ===

The early-20th-century physicist Henry Moseley speculated that the answer lay in the nucleus, in the number of protons. By directing radiation from a radioactive source at copper, he was able to knock electrons out of their atoms, releasing a burst of energy in the form of an X-ray. When measured, the X-rays always had the same energy, unique to copper. He discovered that each element released X-rays of different energies. Moseley's brilliance was to realise that the X-ray energy is related to the number of protons inside the atom: the atomic number. Because this is a count of protons, the atomic number must be a whole number – there can be no fractional values. Moseley realised it was the atomic number, not the atomic weight, that determines the order of the elements. What's more,
because the atomic number increases in whole numbers from one element to the next, there can be no extra elements between hydrogen (atomic number 1) and uranium (atomic number 92): there can only be 92 elements, with no room for any more. Moseley was just 26 when he completed this research. Aged 27, he was killed in action during the First World War – shot through the head by a sniper.

== Episode 3: The Power of the Elements ==

=== Introduction ===

Just 92 elements combine to form all the compounds on Earth. Iron, when combined with chromium, carbon and nickel, makes stainless steel. Glass is made of silicon and oxygen. Since prehistoric times, people have been engaging in 'bucket chemistry' – adding all sorts of chemicals together just to see what would happen. As a result, many early discoveries in chemistry were accidental.

=== Heinrich Diesbach produces the first synthetic paint ===

In 18th-century Prussia, Heinrich Diesbach was trying to produce a synthetic red paint. He started by heating potash (potassium carbonate), with no idea that his potash had been contaminated with blood. When heated, the proteins in blood are altered, allowing them to combine with the iron in the blood, whilst the carbonate reacts with the haemoglobin to produce a solid. After heating the resulting solid to an ash, filtering and diluting, Diesbach added green vitriol (iron sulphate) to create a complex ion: ferric ferrocyanide. Finally, adding spirit of salt (hydrochloric acid) drew out a brilliant colour: Prussian blue.

=== Justus von Liebig and Friedrich Wöhler encounter isomerism ===

Ever since seeing fireworks as a child, another German chemist, Justus von Liebig, had been obsessed with trying to better understand the elements by creating explosive combinations. Specifically, he was interested in the explosive compound
silver fulminate. In 1825 he read a paper by Friedrich Wöhler describing a compound called silver cyanate, made of equal parts of silver, carbon, nitrogen and oxygen, which Wöhler described as harmless and stable. Von Liebig immediately wrote back a furious letter condemning Wöhler as a hopeless analyst: those elements combined in those proportions were exactly what made the explosive silver fulminate. Instead of backing down, Wöhler challenged von Liebig to make silver cyanate for himself. The results astounded him – the same elements, combined according to von Liebig's method and according to Wöhler's method, made two completely different compounds. Wöhler and von Liebig had inadvertently discovered isomerism: the same numbers of atoms of the same elements combining in different ways to make different compounds. In time, this would explain how just 92 elements could make the vast array of compounds we know today. Chemists began to realise that understanding the arrangement of atoms within compounds was crucial if they wished to design new compounds, and the first step in this direction was taken by studying carbon.

=== Smithson Tennant discovers what diamonds are made of ===

In 1796 Smithson Tennant was experimenting on diamonds when he decided to burn one. Using only sunlight and a magnifying glass, he managed to ignite a diamond sufficiently for it to produce a gas, which he collected and identified as carbon dioxide. Having started with only diamond and oxygen, and produced a gas containing only carbon and oxygen, Tennant had discovered that diamonds are made of carbon. Unaware of atomic theory at the time, scientists were unable to explain how carbon, already known to exist as one of the softest substances in the form of graphite, could also be the sole
constituent element of the hardest known substance: diamond. Exactly 50 years later, a young Scottish chemist discovered there are no prizes in science for coming second.

=== Archibald Scott Couper formulates the theory of chemical bonds ===

In 1856 Archibald Scott Couper went to work for a French chemist, Charles-Adolphe Wurtz. Whilst in Paris he came up with the idea of links between atoms that could explain how individual atoms formed compounds. He called these links bonds. Couper realised that carbon can form four bonds, thereby attaching itself with different strengths to other carbon atoms in a compound. In diamond, all four bonds are connected to other carbon atoms in three dimensions, making it so hard. In graphite, only three bonds are connected to other carbon atoms, in a two-dimensional hexagonal lattice, allowing layers to slide over each other and making graphite soft. The ability of carbon to form four bonds also means it can exist in a huge variety of chemical structures, such as long chains and even rings, making it a rarity amongst the elements. This helped to explain the abundance of carbon in all life forms, from protein and fat to DNA and cellulose, and why carbon exists in more compounds than any other element. All that remained for Couper was to get his paper published...

=== Friedrich Kekulé formulates the same theory of chemical bonds ===

Friedrich Kekulé was a German scientist who spent some time studying in London. It was apparently whilst riding a London bus that he struck upon the idea of atoms 'holding hands' to form long chains. Kekulé rushed to compose a paper formalising his ideas on an equivalent theory of chemical bonds. Meanwhile, in Paris, Wurtz had been slow to publish Couper's paper, and Kekulé, whose work appeared in print first, claimed all
the credit. When Couper discovered Wurtz had delayed in sending his paper to be published, he flew into a rage and was promptly expelled from the laboratory by Wurtz. The crushing disappointment at having lost his chance of scientific recognition led him first to withdraw from science and then to suffer a nervous breakdown; he spent years in and out of an asylum. However, now that scientists were beginning to understand the way carbon combines with itself and other elements, it was possible to create new compounds by design, and industrial chemistry was born.

=== Wallace Carothers invents nylon ===

Two decades after the world's first plastic – Bakelite – had been invented in 1907, Wallace Carothers successfully drew off a fibre from the interface of two liquids, hexane-1,6-diamine and decanedioyl dichloride, which could be spun into a very fine, very strong thread. It was given the name nylon. Shockingly, only three weeks after the patent for nylon had been filed, a depressed Carothers slipped another carbon-based compound, potassium cyanide, into his own drink and killed himself. Evidently, industrial chemistry wasn't without its downsides, and one chemist was arguably responsible for single-handedly polluting the entire Earth with lead.

=== Thomas Midgley Jr. prevents engines from knocking ===

In his capacity as an engineer with General Motors, Thomas Midgley Jr. experimented with a myriad of different compounds, which he added to petrol in an attempt to prevent engines from knocking. Eventually, he discovered one that worked brilliantly: tetraethyllead. By the 1970s the use of leaded petrol was ubiquitous worldwide, but research was emerging about the damage it was doing to humans and the environment. In 1983, a Royal Commission asked the question: "Is there any part of the Earth's surface, or any form of life, that remains
uncontaminated?" Today nearly all petrol is unleaded, although lead lives on in motor vehicles in their batteries.

=== Henri Becquerel discovers radioactivity ===

In 1896 the French scientist Henri Becquerel was working with uranium crystals when he found that UV light made them glow. Leaving the uranium crystals on an unexposed photographic plate overnight, he returned the next morning to discover they had caused the part of the plate they were sitting on to develop. Becquerel correctly reasoned that the only source of energy that could have caused this was the crystals themselves. He had discovered radioactivity, and a young Polish scientist began to investigate.

=== Marie Curie investigates radioactivity ===

Marie Curie began her investigations by testing a uranium ore called pitchblende with an electrometer. She discovered it was four times more radioactive than pure uranium, and wondered if this was due to the presence of an even more radioactive element in the pitchblende. Curie began stockpiling tonnes of pitchblende; then, in the most basic of workshops with primitive equipment, she undertook a multitude of complex and dangerous procedures in an attempt to isolate this new element. In the event, Curie discovered two new elements: polonium, named after her native Poland, and radium. Whilst these were naturally occurring elements, they fuelled a scientific desire to create entirely new, artificial elements.

=== Ernest Rutherford explains radioactivity ===

At the beginning of the 20th century it was widely believed that atoms never change: an atom of one element stayed that way forever. Rutherford had already revealed the structure of the atom to consist mostly of empty space with a dense nucleus of protons at the centre, and Henry Moseley had shown that it is the number of protons that gives an atom its identity as a particular element. An atom of the element
carbon has 6 protons, whilst an atom with 7 protons is one of nitrogen. Rutherford came to the conclusion that the number of protons in a radioactive element could change – through a process of decay in which parts of the nucleus are ejected from the atom. Rutherford named these fragments of ejected nucleus alpha particles. He realised that if an atom is losing protons, its identity is changing at the same time, since an atom's identity is governed by its proton number: radioactive decay causes atoms of one element to transmute into atoms of a different element. He then sought to artificially engineer a specific transmutation. Rutherford fixed a source of alpha particles – each of which contains two protons – at one end of a cylindrical chamber, and at the other end he fixed a screen. Each time an alpha particle reached the screen it produced a flash. He then introduced nitrogen into the chamber and observed additional, different flashes on the screen. Occasionally, an alpha particle would collide with a nitrogen nucleus and be absorbed by it, knocking out a proton in the process. These protons then travelled on through the chamber to the screen to produce the additional flashes. However, the nucleus of nitrogen – having absorbed two protons but lost only one – had gained a proton and become a nucleus of oxygen. Rutherford's work gave hope to scientists trying to create new elements, but one final discovery about the atom was necessary: in 1932 the Cambridge scientist James Chadwick discovered the neutron – an electrically neutral particle which also sits inside the nucleus along with the protons.

=== Enrico Fermi claims to have made elements heavier than uranium ===

Meanwhile, in Italy, Enrico Fermi – nicknamed 'the pope' by his colleagues for his infallibility – realised the potential
of the newly discovered neutron in the search for elements heavier than uranium. Until then, scientists had been bombarding uranium with alpha particles in the hope they would enter the nucleus. Unfortunately, this was very unlikely, because both alpha particles and nuclei are positively charged – the alpha particles could never overcome the electrostatic repulsion of the nucleus. Fermi reasoned that because neutrons carry no electric charge, they would have a much better chance of penetrating the nucleus of a uranium atom, so he set about firing neutrons at uranium. Fermi thought that this, coupled with his knowledge of beta decay – whereby an unstable nucleus attempts stabilisation by converting one neutron to a proton and ejecting a newly formed electron – would result in an element with one more proton than uranium: element 93. Indeed, Fermi discovered elements he did not recognise. He tested for elements below uranium in the periodic table – radon, actinium, polonium, as far back as lead – and it was none of these. So, in 1934, the infallible Fermi declared to the world that he had created elements heavier than uranium.

=== Otto Hahn disproves Fermi's claims ===

In 1938, a team of German scientists led by Otto Hahn decided to investigate Fermi's bold claim. Unfortunately for Fermi, they quickly disproved his assertion: one of the elements produced was barium, which, with 56 protons, was nowhere near the 92 protons the nucleus had started with as uranium. Hahn wrote of his confusion to his colleague Lise Meitner who, as an Austrian Jew, had recently fled Nazi Germany for Sweden.

=== Lise Meitner explains Fermi's work ===

Over Christmas 1938, Meitner considered the problem of the uranium nucleus, which she reasoned, given its relative size, must be quite unstable. She decided to model the nucleus as a drop of
water, ready to divide with the impact of a single neutron. She realised the nucleus had split in half, and that both Fermi and Hahn had witnessed what is now known as nuclear fission. However, in doing the calculations for such an event, Meitner was at first unable to make the equations balance: she calculated that the products of the fission reaction were lighter than the initial uranium by about one-fifth of the mass of a proton. Somehow, a small amount of mass had disappeared. Then, slowly, the solution to this discrepancy occurred to Meitner – Einstein and E = mc² – the missing mass had been converted to energy.

=== The Manhattan Project ===

Meitner's work was published in 1939, but as well as generating interest amongst the scientific community, her revelations were also coming to the attention of governments on the verge of war. Fuelled by fears that Nazi Germany was investigating nuclear weapons of its own, scientists were assembled in America to work on the Manhattan Project, aimed at creating the first atomic bomb. For an explosion to occur, there must be a rapid release of energy – a slow release of energy from uranium nuclei would give a uranium fire, but no explosion. Both sides poured their efforts into creating the necessary conditions for a chain reaction. In 1942 Enrico Fermi, now living in America, successfully induced a chain reaction in uranium, but processing uranium for bombs was both difficult and costly. America had just come up with a different solution to win the atomic race: now, finally, scientists' dream of creating an element beyond the end of the periodic table was about to be realised.

=== Edwin McMillan and Philip H. Abelson create the first synthetic element ===

In California, scientists were trying to create a new element heavier than uranium using
cyclotron machines. This involved using huge magnets to steer atoms round in circles, faster and faster, until they reached a tenth of the speed of light, whereupon they were smashed into a uranium target. Edwin McMillan and Philip H. Abelson blasted uranium with a beam of particles to create the first synthetic element, heavier than uranium: element 93, which they named neptunium. The next synthetic element, plutonium, quickly followed in 1941, and scientists realised it was readily able to undergo fission in a way capable of producing the desired chain reaction. It was soon being made into a bomb. A mere seven years after the discovery of nuclear fission, on 6 August 1945, half a gram of uranium was converted into energy when the world's first atomic bomb was dropped on Hiroshima. As Lise Meitner's calculations had suggested, this conversion released energy equivalent to 13,000 tons of TNT. A plutonium bomb was dropped on Nagasaki three days later.

=== GSI Helmholtz Centre for Heavy Ion Research ===

Using one of the world's largest particle accelerators, scientists working at the Heavy Ion Research facility in Darmstadt, Germany, have so far confirmed the existence of element 112, which they have named copernicium, after the Polish astronomer Nicolaus Copernicus. These physicists have become the new chemists – testing the foundations of the periodic table, and hence our understanding of the universe, in light of new discoveries. In addition to producing new elements, scientists are also attempting to discern their properties. Copernicium has been found to be a volatile metal that would be liquid at room temperature if enough were ever made – exactly what Mendeleev would have predicted for an element that sits directly beneath liquid mercury in the periodic table.

== Broadcast in the United States ==

It aired in the United States under the title
"Unlocking the Universe."

== Region 2 DVD release ==

The full series was released as a region 2 DVD set in 2015 by the Dutch company B-Motion.

== References ==

== External links ==

Chemistry: A Volatile History at BBC Online
Chemistry: A Volatile History at IMDb
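The energy figures quoted in the Manhattan Project passage above can be cross-checked with a one-line application of E = mc² (a rough sketch only: it assumes exactly half a gram of converted mass, as quoted in the text, and the conventional value of 4.184 GJ per ton of TNT):

```python
# E = m * c^2 for half a gram of mass converted entirely to energy.
m = 0.5e-3                 # mass converted to energy, in kg
c = 2.998e8                # speed of light, in m/s
E = m * c**2               # energy released, in joules

TNT_TON = 4.184e9          # conventional energy of one ton of TNT, in joules
tons_tnt = E / TNT_TON
print(f"{E:.3e} J, about {tons_tnt:,.0f} tons of TNT")
```

This gives roughly 4.5 × 10¹³ J, or about 10,700 tons of TNT – the same order of magnitude as the 13,000 tons quoted above.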
Ecosystem Functional Type (EFT) is an ecological concept used to characterize ecosystem functioning. Ecosystem Functional Types are defined as groups of ecosystems, or patches of the land surface, that share similar dynamics of matter and energy exchanges between the biota and the physical environment. The EFT concept is analogous to the Plant Functional Type (PFT) concept, but defined at a higher level of biological organization: just as plant species can be grouped according to common functional characteristics, ecosystems can be grouped according to their common functional behavior. One of the most widely used approaches to implement this concept has been the identification of EFTs from the satellite-derived dynamics of primary production, an essential and integrative descriptor of ecosystem functioning.

== History ==

In 1992, Soriano and Paruelo proposed the concept of Biozones to identify vegetation units that share ecosystem functional characteristics, using time series of satellite images of spectral vegetation indices. Biozones were later renamed EFTs by Paruelo et al. (2001), using an equivalent definition and methodology. One of the first authors to use the term EFT defined it as "aggregated components of ecosystems whose interactions with one another and with the environment produce differences in patterns of ecosystem structure and dynamics". Walker (1997) proposed the use of a similar term, vegetation functional types, for groups of PFTs in sets that constitute the different states of vegetation succession in non-equilibrium ecosystems. The same term was applied by Scholes et al. in a wider sense to areas having similar ecological attributes, such as PFT composition, structure, phenology, biomass or productivity. Several studies have applied hierarchy and patch-dynamics theories to the definition of ecosystem and landscape functional types at different spatial scales, by scaling up emergent structural and functional properties from patches to regions.
Valentini et al. defined land functional units by focusing on
{ "page_id": 47057320, "source": null, "title": "Ecosystem Functional Type" }
patches of the land surface that are able to exchange mass and energy with the atmosphere and show a coordinated and specific response to environmental factors. Paruelo et al. (2001) and Alcaraz-Segura et al. (2006, 2013) refined the EFT concept and proposed a remote-sensing-based methodology to derive them. Since then, several authors have implemented the idea with the same or similar approaches using the NOAA-AVHRR, MODIS and Landsat archives. In brief, all these approaches use the seasonal dynamics of spectral indices related to key functional aspects of ecosystems, such as primary production, water exchange, heat exchange and radiative balance.

== Identification ==

The functional classification of EFTs developed by Paruelo et al. (2001) and Alcaraz-Segura et al. (2006, 2013) uses time series of spectral vegetation indices to capture the dynamics of carbon gains, the most integrative indicator of ecosystem functioning. To build EFTs, these authors derive three descriptors or metrics from the seasonal dynamics (annual curve) of a spectral vegetation index (VI) that capture most of the variance in the time series (Fig. 2):

Annual mean of VI (VI_Mean): an estimator of annual primary production, one of the most essential and integrative descriptors of ecosystem functioning.
Intra-annual coefficient of variation of VI (VI_sCV): a descriptor of seasonality, i.e. the difference in carbon gains between the growing and non-growing seasons.
Date of maximum VI value (VI_DMAX): a phenological indicator of when in the year the growing season takes place.

The range of values of each VI metric is divided into four intervals, giving a potential number of 4 × 4 × 4 = 64 EFTs. Each EFT is assigned a code of two letters and a number (three characters). The first letter of the code (capital) corresponds to the VI_Mean level, ranging from A to D for low to high (increasing) VI_Mean, or productivity. The second letter (lowercase) shows the seasonal CV,
ranging from a to d for high (decreasing) to low VI_sCV, or seasonality. The number refers to DMAX, or phenology, and indicates the season of maximum VI (1–4: spring, summer, autumn and winter).

== Current known applications ==

To characterize the spatial and temporal heterogeneity of ecosystem functioning at the local and regional scales.
To describe the biogeographical patterns of functional diversity at the ecosystem level.
To assess functional diversity at the ecosystem level by determining EFT richness and equity in the landscape.
To evaluate the environmental and human controls of ecosystem functional diversity.
To identify priorities for biodiversity conservation.
To assess the representativeness of protected-area networks in capturing functional diversity at the ecosystem level.
To quantify and monitor the level of provision of intermediate (support) ecosystem services.
To assess the effects of land-use changes on ecosystem functioning.
To improve weather-forecast models by introducing the effects of inter-annual changes in ecosystem biophysical properties into land-surface and general-circulation atmospheric models.

== Advantages ==

Functional classifications provide a useful framework for understanding large-scale ecological changes. Environmental changes are particularly noticeable at the ecosystem level. Ecosystem functional attributes, such as an ecosystem's exchange of energy and matter, respond to environmental changes more quickly than structural or compositional attributes, such as species composition or vegetation physiognomy. Ecosystem functioning can also be more easily monitored than structural attributes by using remote sensing at different spatial scales, over large extents, and with a common protocol in space and time. Finally, functional attributes allow the qualitative and quantitative evaluation of ecosystem services.

== References ==
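The three-character coding scheme described in the Identification section can be illustrated with a short sketch. This is an illustration only: the function name and the convention of supplying each metric as an interval index from 1 to 4 are our assumptions; the published methodology derives those intervals from the VI time series itself.

```python
def eft_code(mean_level: int, scv_level: int, dmax_season: int) -> str:
    """Build a three-character EFT code from interval indices (1-4).

    mean_level:  1 (low VI_Mean)  .. 4 (high)  -> capital letter A..D
    scv_level:   1 (high VI_sCV)  .. 4 (low)   -> lowercase letter a..d
    dmax_season: 1..4 for spring, summer, autumn, winter (season of max VI)
    """
    for v in (mean_level, scv_level, dmax_season):
        if v not in (1, 2, 3, 4):
            raise ValueError("each metric must be an interval index from 1 to 4")
    return "ABCD"[mean_level - 1] + "abcd"[scv_level - 1] + str(dmax_season)

# A highly productive (D), weakly seasonal (d) ecosystem peaking in spring (1):
print(eft_code(4, 4, 1))  # -> Dd1
```

Enumerating all index combinations reproduces the 4 × 4 × 4 = 64 potential EFTs.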
A Knudsen gas is a gas in a state of such low density that the average distance travelled by the gas molecules between collisions (mean free path) is greater than the diameter of the receptacle that contains it. If the mean free path is much greater than the diameter, the flow regime is dominated by collisions between the gas molecules and the walls of the receptacle, rather than intermolecular collisions with each other. It is named after Martin Knudsen. == Knudsen number == For a Knudsen gas, the Knudsen number must be greater than 1. The Knudsen number can be defined as: K n = λ L {\displaystyle {\rm {{Kn}={\frac {\lambda }{L}}}}} where λ {\displaystyle \lambda } is the mean free path [m] L {\displaystyle L} is the diameter of the receptacle [m]. When 10 − 1 < K n < 10 {\displaystyle 10^{-1}<{\rm {{Kn}<10}}} , the flow regime of the gas is transitional flow. In this regime the intermolecular collisions between gas particles are not yet negligible compared to collisions with the wall. However, when K n > 10 {\displaystyle {\rm {{Kn}>10}}} , the flow regime is free molecular flow, and the intermolecular collisions between the particles are negligible compared to the collisions with the wall. == Example == For example, consider a receptacle of air at room temperature and pressure with a mean free path of 68 nm. If the diameter of the receptacle is less than 68 nm, the Knudsen number would be greater than 1, and this sample of air would be considered a Knudsen gas. It would not be a Knudsen gas if the diameter of the receptacle were greater than 68 nm. == See also == Free streaming Kinetic theory == References ==
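The regime classification above is easy to check numerically. A minimal sketch, using the 68 nm mean free path for air at room temperature and pressure quoted in the example (the function names are illustrative):

```python
# Classify the flow regime from the Knudsen number Kn = lambda / L,
# using the thresholds stated above: 0.1 < Kn < 10 is transitional
# flow and Kn > 10 is free molecular flow.

MEAN_FREE_PATH_AIR = 68e-9  # metres; air at room temperature and pressure

def knudsen_number(mean_free_path, diameter):
    return mean_free_path / diameter

def regime(kn):
    if kn > 10:
        return "free molecular flow"
    if kn > 0.1:
        return "transitional flow"
    return "continuum flow"

# A receptacle 10 nm across holds a Knudsen gas (Kn = 6.8):
kn = knudsen_number(MEAN_FREE_PATH_AIR, 10e-9)
print(round(kn, 1), regime(kn))  # -> 6.8 transitional flow
```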
{ "page_id": 1837480, "source": null, "title": "Knudsen gas" }
The Sciencenter's Sagan Planet Walk is a walkable scale model of the Solar System, located in Ithaca, New York. The model scales the entire Solar System—both planet size and distances between them—down to one five billionth of its actual size. The exhibition was originally created in 1997 in memory of Ithaca resident and Cornell Professor Carl Sagan. Consisting of eleven obelisks situated along a 1.18 km (0.73 mi) path through the streets of downtown Ithaca, the original Planet Walk leads from the Sun at Center Ithaca to Pluto at the Ithaca Sciencenter. In 2012, the model was expanded 7,630 kilometers (4,740 mi) to include a representation of Alpha Centauri, the Sun's closest neighboring star, at the ʻImiloa Astronomy Center in the University of Hawaiʻi at Hilo. The addition of the Alpha Centauri Obelisk made it the world's largest exhibition, until the Akaa Solar System Scale Model added Proxima Centauri in 2018, 13,370 kilometres (8,310 mi) from Akaa. In 2014, the inner planets and Sun were removed as part of extensive construction being done to the Ithaca Commons, but have since been replaced. In 2015, a grant was approved to further expand the exhibition by installing an exoplanet Kepler-37d station on the Moon 384,500 kilometers (238,900 mi) away. == Individual models == The scaled size of the Sun is given by a 27.8 cm (10.9 in) diameter circular frame repeated at the top of each 6 ft (1.8 m) tall obelisk. Centered within each sun-sized frame, the proportional size of each planet is represented by a small sphere affixed in a glass window. All of the original planet locations were chosen not only for accuracy to scale but also to highlight local landmarks and public spaces within Ithaca. However, since the original installation of the Planet
{ "page_id": 37685680, "source": null, "title": "Sagan Planet Walk" }
Walk the public library has moved to a new location, leaving Saturn no longer attached to a local landmark. Additionally, as of 2016–2017, the original plexiglass planets at each station have been replaced by simple yellow disks, with each planet represented by a hole of the appropriate relative size. Some attendant moons are now represented by tiny holes in the disks. === The Sun === The Sun Obelisk is located in the center of the Ithaca Commons, a pedestrian shopping area in the heart of downtown Ithaca. The round window representing the size of the Sun at this station is roughly the size of a basketball and, in this obelisk only, contains no glass. All the subsequent obelisks in the Solar System model have Sun-sized glass windows containing their respective planets for the sake of comparison. === The inner planets === The obelisks for the four inner planets are all contained within the commons area stretching north towards Seneca Street. Mercury is situated to scale about 12.7 yards (11.6 meters) away from the Sun, Venus another 10 yards (9.1 meters) away, Earth another 9.3 yards (8.5 meters) away, and Mars another 17.1 yards (15.6 meters) away. This keeps the four inner planets within eyesight of the Sun, yet the representations of each planet appear quite small within their glass windows, and can even be somewhat hard to see. The contrast between the size of the Sun and the size of the inner planets, coupled with the visible distance between them, illustrates the vastness and emptiness of space. === The asteroid belt === Once leaving the inner planets, a visitor to the Planet Walk turns west along Seneca Street to continue towards Jupiter. Between Mars and Jupiter lies the asteroid belt. The obelisk representing the asteroid belt was added several years after the
initial installation. Its display contains the only public, unguarded meteorite in the world. === Jupiter === The Jupiter Obelisk sits at the corner of Seneca Street and Cayuga Street, outside the downtown Dewitt Mall, and not far from the famed Moosewood Restaurant. The model of Jupiter within the glass window is the first planet representation on the walk that is easily visible, demonstrating how much bigger than the inner planets it is. === Saturn === To reach the Saturn Obelisk, visitors turn north and continue along Cayuga Street. At the corner of Cayuga Street and Court Street, Saturn's obelisk sits outside the former location of the Tompkins County Public Library. The rings of Saturn are clearly visible within the circular window. === Uranus === Visitors continue northward along Cayuga Street from Saturn, reaching the Uranus Obelisk just across Cascadilla Creek at the entrance to Thompson Park. === Neptune and the Carl Sagan Bridge === From Uranus, visitors follow Willow Avenue northwest and cross the Carl Sagan Bridge at Adams Street to reach the Neptune Obelisk. The Carl Sagan Bridge, built in 2000, features nine circular windows adorned with the signs of the nine planets. The obelisk for Neptune is located just across the bridge in Conley Park. === Pluto and the Sciencenter === The Planet Walk was conceived and built prior to Pluto losing its planetary status in 2006, and the model includes the Pluto Obelisk, which is located just outside the Sciencenter on First Street. === Alpha Centauri === The model was expanded 7,630 kilometers (4,740 mi) in 2012 to include Alpha Centauri, the star system closest to the Sun. The Alpha Centauri Obelisk is exhibited at the ʻImiloa Astronomy Center in the University of Hawaiʻi at Hilo in Hilo, Hawaii. The volcanic stone Hawaiian figure representing Alpha Centauri
in female form has a 280-millimetre (11 in) semicircle under its chin to represent its scale size. This extension made the Sagan Planet Walk the world's largest exhibition. == Bill Nye == Television host and former student of Carl Sagan, Bill Nye, narrated a podcast tour of the Planet Walk in 2006, which can be accessed for free by calling 703-637-6237 as you walk through the scale-model representation of the Solar System. == Table of scaled sizes and distances == == History and timetable of expansions == 1997 Original Planet Walk created in Ithaca, with ten obelisks for the Sun and nine planets 2000 Carl Sagan Bridge built across Cascadilla Creek on the way to the Neptune Obelisk 2006 Audio Tour narrated by Bill Nye added 2009 Asteroid Belt Obelisk added between the inner and outer planets 2009 Newly designed Passport to the Solar System 2012 Station depicting Alpha Centauri erected in Hawaii Kepler-37d station to be installed on Moon (installation date TBD) == Model gallery == The models of the Solar System, in order: == Models inspired by the Sagan Planet Walk == The Sagan Planet Walk has inspired the creation of other scale-model Solar Systems in the United States. The Delmar Loop Planet Walk in St. Louis, Missouri: "In 2006, I became aware of the Sagan Planet Walk in Ithaca, NY, and as an amateur astronomy buff, I found the idea of a scale model solar system to be a fascinating concept. While researching other such models on the Internet, I developed the idea to build one in St. Louis."—Stephen Walker The Anchorage Light Speed Planet Walk in Alaska: "The idea for this project was ignited by [Eli Menaker's] visit to the Carl Sagan Memorial Planet Walk in Ithaca, New York." == See also == Solar System model
== References == == External links == Sagan Planet Walk at the Ithaca Sciencenter Sciencenter Sagan Planet Walk Station Exploration Carl Sagan's Ithaca Memorial === Other walkable scale model Solar Systems in the United States === Montshire Planet Walk, Norwich, Vermont Solar System Walking Tour, Gainesville, Georgia Solar System Walk, Cleveland, Ohio Scale Model Solar System, Boulder, Colorado Scale Model Solar System, Eugene, Oregon Delmar Loop Planet Walk, St. Louis, Missouri Anchorage Light Speed Planet Walk, Anchorage, Alaska The B&A Trail Planet Walk in Anne Arundel County, Maryland. The Robert Ferguson Observatory Planet Walk in Kenwood, California. The Greater Lansing Planet Walk in Lansing, Michigan. The University of Alaska Planet Walk in Fairbanks, Alaska. The Planet Walk at Crossroads at Big Creek at the Leif Everson Observatory, Sturgeon Bay, Wisconsin.
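The one-five-billionth scale of the exhibition can be reproduced with a one-line calculation. The Sun diameter and Earth-Sun distance used below are standard astronomical values, not figures taken from the exhibition itself:

```python
# Shrink true solar-system dimensions by the Planet Walk's scale
# factor of one five-billionth. The input values are standard
# astronomical figures, not measurements of the exhibition.

SCALE = 1 / 5_000_000_000

def scaled_metres(true_metres):
    return true_metres * SCALE

SUN_DIAMETER = 1.39e9   # metres
EARTH_ORBIT = 1.496e11  # metres (1 astronomical unit)

# The Sun shrinks to the 27.8 cm window described above; Earth lands
# roughly 30 m out, close to the walk's 29.2 m sum of the Mercury,
# Venus and Earth spacings (11.6 + 9.1 + 8.5 m).
print(f"Sun: {scaled_metres(SUN_DIAMETER) * 100:.1f} cm")
print(f"Earth: {scaled_metres(EARTH_ORBIT):.1f} m from the Sun")
```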
The Berlin Graduate School of Natural Sciences and Engineering (BIG-NSE) is part of the Cluster of Excellence "Unifying Concepts in Catalysis" (UniCat) founded in November 2007 by Technische Universität Berlin and five further institutions in the Berlin area within the framework of the German government's Excellence Initiative. The main research interest of the UniCat and BIG-NSE Faculty is catalysis, in a broad sense. The research fields involved cover a broad range of topics, from natural sciences to engineering. The faculty consists of professors and junior researchers from 54 research groups at 6 participating institutions, active in 13 research fields, who are intensively involved in the supervision and mentoring of the BIG-NSE students; among the institutions is the Fritz Haber Institute of the Max Planck Society, the working place of Professor Gerhard Ertl, the winner of the Nobel Prize in Chemistry 2007. == Ph.D. Curriculum == The BIG-NSE offers a structured curriculum for obtaining the degree of "Doctor" within 3 years. The main characteristic of the BIG-NSE is a comprehensive integration and mentoring programme for its students, especially foreign students. It includes: An "Initial Phase", with intensive support, especially for administrative and integration aspects. Preparation of a schedule by the students themselves during the first semester. Continuous supervision by two professors/senior scientists and one mentor. Regular evaluation of the students' work/study achievements. Continuous support for all professional and social aspects. Regular lectures presented by guest scientists from all over the world. Language and soft skill courses. Financial support for scientific and teaching materials. == Entry Requirements == The entry requirements for the BIG-NSE are: A Master's degree or German Diploma in chemistry, biology, physics or engineering.
A Certificate of English Proficiency (TOEFL with a minimum of 550 - paper-based version, or equivalent) for applicants whose native language is neither English nor
{ "page_id": 15862193, "source": null, "title": "BIG-NSE" }
German. Two letters of recommendation. == See also == Cluster of Excellence Unifying Systems in Catalysis (UniSysCat) (follow-up project of Unifying Concepts in Catalysis (UniCat)) Technische Universität Berlin Free University Berlin (Freie Universität Berlin) Humboldt University of Berlin (Humboldt Universität zu Berlin) University of Potsdam (Universität Potsdam) Fritz Haber Institute of the MPG (Fritz-Haber-Institut der Max-Planck-Gesellschaft) Max Planck Institute for Colloids and Interfaces (Max-Planck-Institut für Kolloid- und Grenzflächenforschung)
In probability theory, a Markov model is a stochastic model used to model pseudo-randomly changing systems. It is assumed that future states depend only on the current state, not on the events that occurred before it (that is, it assumes the Markov property). Generally, this assumption enables reasoning and computation with the model that would otherwise be intractable. For this reason, in the fields of predictive modelling and probabilistic forecasting, it is desirable for a given model to exhibit the Markov property. == Introduction == Andrey Andreyevich Markov (14 June 1856 – 20 July 1922) was a Russian mathematician best known for his work on stochastic processes. A primary subject of his research later became known as the Markov chain. There are four common Markov models used in different situations, depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made: == Markov chain == The simplest Markov model is the Markov chain. It models the state of a system with a random variable that changes through time. In this context, the Markov property indicates that the distribution for this variable depends only on the distribution of the previous state. An example use of a Markov chain is Markov chain Monte Carlo, which uses the Markov property to prove that a particular method for performing a random walk will sample from the joint distribution. == Hidden Markov model == A hidden Markov model is a Markov chain for which the state is only partially observable or noisily observable. In other words, observations are related to the state of the system, but they are typically insufficient to precisely determine the state. Several well-known algorithms for hidden Markov models exist. For example, given a sequence of observations, the Viterbi
{ "page_id": 22022581, "source": null, "title": "Markov model" }
algorithm will compute the most likely corresponding sequence of states, the forward algorithm will compute the probability of the sequence of observations, and the Baum–Welch algorithm will estimate the starting probabilities, the transition function, and the observation function of a hidden Markov model. One common use is for speech recognition, where the observed data is the speech audio waveform and the hidden state is the spoken text. In this example, the Viterbi algorithm finds the most likely sequence of spoken words given the speech audio. == Markov decision process == A Markov decision process is a Markov chain in which state transitions depend on the current state and an action vector that is applied to the system. Typically, a Markov decision process is used to compute a policy of actions that will maximize some utility with respect to expected rewards. == Partially observable Markov decision process == A partially observable Markov decision process (POMDP) is a Markov decision process in which the state of the system is only partially observed. POMDPs are known to be NP-complete, but recent approximation techniques have made them useful for a variety of applications, such as controlling simple agents or robots. == Markov random field == A Markov random field, or Markov network, may be considered to be a generalization of a Markov chain in multiple dimensions. In a Markov chain, state depends only on the previous state in time, whereas in a Markov random field, each state depends on its neighbors in any of multiple directions. A Markov random field may be visualized as a field or graph of random variables, where the distribution of each random variable depends on the neighboring variables with which it is connected. More specifically, the joint distribution for any random variable in the graph can be computed as
the product of the "clique potentials" of all the cliques in the graph that contain that random variable. Modeling a problem as a Markov random field is useful because it implies that the joint distributions at each vertex in the graph may be computed in this manner. == Hierarchical Markov models == Hierarchical Markov models can be applied to categorize human behavior at various levels of abstraction. For example, a series of simple observations, such as a person's location in a room, can be interpreted to determine more complex information, such as what task or activity the person is performing. Two kinds of hierarchical Markov models are the hierarchical hidden Markov model and the abstract hidden Markov model. Both have been used for behavior recognition, and certain conditional independence properties between different levels of abstraction in the model allow for faster learning and inference. == Tolerant Markov model == A tolerant Markov model (TMM) is a probabilistic-algorithmic Markov chain model. It assigns probabilities according to a conditioning context that considers the most probable symbol, instead of the symbol that actually occurred, as the last symbol of the sequence. A TMM can model three kinds of deviation: substitutions, additions and deletions. It has been applied successfully and efficiently to the compression of DNA sequences. == Markov-chain forecasting models == Markov chains have been used as forecasting methods for several topics, for example price trends, wind power and solar irradiance. Markov-chain forecasting models use a variety of settings, from discretizing the time series to hidden Markov models combined with wavelets, and the Markov-chain mixture distribution model (MCM). == See also == Markov chain Monte Carlo Markov blanket Andrey Markov Variable-order Markov model == References ==
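As a minimal illustration of the Markov property described above (the next state depends only on the current state), here is a two-state chain with made-up transition probabilities:

```python
import random

# A two-state Markov chain: the next state depends only on the
# current state (the Markov property). Transition probabilities
# are invented for illustration.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state, rng):
    """Sample the next state from the current state's row."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

def simulate(start, n, seed=0):
    rng = random.Random(seed)
    states = [start]
    for _ in range(n):
        states.append(step(states[-1], rng))
    return states

# The long-run fraction of sunny days tends to the stationary value
# 0.4 / (0.2 + 0.4) = 2/3, regardless of the starting state.
chain = simulate("rainy", 10000)
print(chain[:6], chain.count("sunny") / len(chain))
```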
Sub-Doppler cooling is a class of laser cooling techniques that reduce the temperature of atoms and molecules below the Doppler cooling limit. In experimental implementations, Doppler cooling is limited by the natural linewidth of the atomic transition used for cooling. Regardless of the transition used, however, Doppler cooling processes have an intrinsic cooling limit that is characterized by the momentum recoil from the emission of a photon by the particle. This is called the recoil temperature and is usually far below the linewidth-based limit mentioned above. With laser cooling methods that go beyond the two-level approximation of the atom, temperatures below the Doppler limit can be achieved. Optical pumping between the sublevels that make up an atomic state introduces a new mechanism for achieving ultra-low temperatures. The essential feature of sub-Doppler cooling is the non-adiabatic response of moving atoms to the light field. In a spatially dependent light field, the orientation of the atoms is adjusted by optical pumping to fit the local conditions of the light field. Because the atoms do not adjust instantly as they move, their orientation always lags behind the orientation that would exist for stationary atoms; this lag produces a velocity-dependent differential absorption and hence the cooling. With this cooling process, lower temperatures can be obtained. Various methods have been used independently or combined in an experimental sequence to achieve sub-Doppler cooling. One method to produce spatially dependent optical pumping is polarization gradient cooling, where the superposition of two counter-propagating laser beams of orthogonal polarizations leads to a light field with polarization varying on the wavelength scale. A specific mechanism within polarization gradient cooling is Sisyphus cooling, where atoms climb "potential hills" created by the interaction of their internal energy states with spatially varying light fields.
The light field of a three-dimensional optical molasses also has
{ "page_id": 46926261, "source": null, "title": "Sub-Doppler cooling" }
polarization gradient. Other methods of sub-Doppler cooling include evaporative cooling, free-space Raman cooling, Raman sideband cooling, resolved sideband cooling, electromagnetically induced transparency (EIT) cooling, and the use of a dark magneto-optical trap. These techniques can be used depending on the minimum temperature needed and the specifications of the individual setup. For example, an optical molasses time-of-flight technique was used to cool sodium (Doppler limit T D ≈ 240 μ K {\displaystyle T_{D}\approx 240\ \mu K} ) to 43 ± 20 μ K {\textstyle 43\pm 20\ \mu K} . Motivations for sub-Doppler cooling include cooling to the motional ground state, a requirement for maintaining fidelity during many quantum computation operations. == Dark magneto-optical trap == A magneto-optical trap (MOT) is commonly used for cooling and trapping a substance by Doppler cooling. In the process of Doppler cooling, red-detuned light is absorbed by atoms from one direction and re-emitted in a random direction. If the atoms have more than one hyperfine ground level, they can decay into a ground state that is not addressed by the cooling light. Once all the atoms have accumulated in these other ground states, the system cannot cool the atoms further. To solve this problem, a re-pumping beam is shone on the system to return the atoms to the cooling transition and restart the Doppler cooling process. However, the re-pumping increases the amount of fluorescence emitted by the atoms; this light can be absorbed by other atoms and acts as a repulsive force, raising the achievable temperature limit. When there is a dark spot or lines in the profile of the re-pumping light, the atoms in the middle of the atomic gas are not
excited by the re-pumping light, which reduces the repulsive force described above. This helps cool the atoms to a lower temperature than the typical Doppler cooling limit. This arrangement is called a dark magneto-optical trap (DMOT). == Limits == The Doppler cooling limit is set by balancing the cooling force against the heating from random momentum kicks. Naively applying the results from the Fokker-Planck equation to the sub-Doppler processes would lead to an arbitrarily low final temperature as the damping coefficient becomes arbitrarily large, so a few more considerations are needed. For instance, when a photon is scattered, the momentum change of the atom is assumed to be small relative to its overall momentum, but when the atom slows down to around the region of v r = ℏ k M {\textstyle v_{r}={\frac {\hbar k}{M}}} , the momentum change becomes significant. Thus at low velocities, spontaneous emission would leave the atom with a residual momentum around ℏ k {\textstyle \hbar k} , which sets a minimum velocity scale. The velocity distribution around v r {\textstyle v_{r}} cannot be well described by the Fokker-Planck equation, and this sets an intuitive lower limit on the temperature. Furthermore, polarization gradient cooling depends on the ability to localize atoms to a scale of ∼ λ / 2 π {\textstyle \sim \lambda /{2\pi }} , where λ {\textstyle \lambda } is the wavelength of the light. Due to the uncertainty principle, this localization also imposes a minimum momentum spread ∼ ℏ k {\textstyle \sim \hbar k} , which also leads to a limit on how much the atoms can be cooled. These predictions have been tested in analytical and numerical calculations for a one-dimensional polarization-gradient molasses. It was shown that in the limit of large detuning, the velocity distribution depends only on a dimensionless
parameter, the light shift of the ground state divided by the recoil energy. The minimum kinetic energy was found to be on the order of 40 times the recoil energy. == References ==
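The recoil scale discussed above is straightforward to estimate. A sketch for sodium cooled on its 589 nm line; the atomic constants used are standard values, not taken from the text:

```python
import math

# Estimate the single-photon recoil velocity v_r = hbar*k/M and the
# recoil temperature T_r = (hbar*k)**2 / (M*k_B) for sodium cooled
# on its 589 nm resonance line. All constants are standard values.
hbar = 1.0545718e-34  # J s
k_B = 1.380649e-23    # J/K
amu = 1.66053907e-27  # kg

wavelength = 589e-9   # m, sodium D line
mass = 23 * amu       # kg, sodium-23

k = 2 * math.pi / wavelength
v_recoil = hbar * k / mass                  # about 3 cm/s
T_recoil = (hbar * k) ** 2 / (mass * k_B)   # about 2.4 microkelvin

print(f"v_r = {v_recoil * 100:.1f} cm/s, T_r = {T_recoil * 1e6:.1f} uK")
```

The resulting recoil temperature of roughly 2.4 μK lies well below both the 240 μK Doppler limit and the 43 ± 20 μK molasses result quoted above, consistent with the ordering of limits described in the text.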
In physics and astronomy, an N-body simulation is a simulation of a dynamical system of particles, usually under the influence of physical forces, such as gravity (see n-body problem for other applications). N-body simulations are widely used tools in astrophysics, from investigating the dynamics of few-body systems like the Earth-Moon-Sun system to understanding the evolution of the large-scale structure of the universe. In physical cosmology, N-body simulations are used to study processes of non-linear structure formation such as galaxy filaments and galaxy halos from the influence of dark matter. Direct N-body simulations are used to study the dynamical evolution of star clusters. == Nature of the particles == The 'particles' treated by the simulation may or may not correspond to physical objects which are particulate in nature. For example, an N-body simulation of a star cluster might have a particle per star, so each particle has some physical significance. On the other hand, a simulation of a gas cloud cannot afford to have a particle for each atom or molecule of gas as this would require on the order of 10²³ particles for each mole of material (see Avogadro constant), so a single 'particle' would represent some much larger quantity of gas (often implemented using Smoothed Particle Hydrodynamics). This quantity need not have any physical significance, but must be chosen as a compromise between accuracy and manageable computer requirements. == Dark matter simulation == Dark matter plays an important role in the formation of galaxies. The time evolution of the density f (in phase space) of dark matter particles can be described by the collisionless Boltzmann equation d f d t = ∂ f ∂ t + v ⋅ ∇ f − ∂ f ∂ v ⋅ ∇ Φ {\displaystyle {\frac {df}{dt}}={\frac {\partial f}{\partial t}}+\mathbf {v} \cdot \nabla f-{\frac {\partial
{ "page_id": 4917686, "source": null, "title": "N-body simulation" }
f}{\partial \mathbf {v} }}\cdot \nabla \Phi } In the equation, v {\displaystyle \mathbf {v} } is the velocity, and Φ is the gravitational potential given by Poisson's Equation. These two coupled equations are solved in an expanding background Universe, which is governed by the Friedmann equations, after determining the initial conditions of dark matter particles. The conventional method employed for initializing positions and velocities of dark matter particles involves moving particles within a uniform Cartesian lattice or a glass-like particle configuration. This is done by using a linear theory approximation or a low-order perturbation theory. == Direct gravitational N-body simulations == In direct gravitational N-body simulations, the equations of motion of a system of N particles under the influence of their mutual gravitational forces are integrated numerically without any simplifying approximations. These calculations are used in situations where interactions between individual objects, such as stars or planets, are important to the evolution of the system. The first direct gravitational N-body simulations were carried out by Erik Holmberg at the Lund Observatory in 1941, determining the forces between stars in encountering galaxies via the mathematical equivalence between light propagation and gravitational interaction: putting light bulbs at the positions of the stars and measuring the directional light fluxes at the positions of the stars by a photo cell, the equations of motion can be integrated with ⁠ O ( N ) {\displaystyle O(N)} ⁠ effort. The first purely calculational simulations were then done by Sebastian von Hoerner at the Astronomisches Rechen-Institut in Heidelberg, Germany. 
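A direct method evaluates the force on every particle from every other particle, which is what makes the cost grow as N squared. A bare-bones sketch in arbitrary units, with a small softening term to keep the force finite when two particles approach each other (all parameter values are illustrative):

```python
# Direct N-body force evaluation: every pair of particles interacts,
# which is why the work per step grows as N squared. Units, masses
# and the softening length eps are arbitrary illustrative choices.
def accelerations(positions, masses, G=1.0, eps=1e-3):
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # a particle exerts no force on itself
            dx = [positions[j][d] - positions[i][d] for d in range(3)]
            r2 = sum(c * c for c in dx) + eps * eps  # softened distance
            inv_r3 = r2 ** -1.5
            for d in range(3):
                acc[i][d] += G * masses[j] * dx[d] * inv_r3
    return acc

# Two equal masses one unit apart pull on each other symmetrically:
a = accelerations([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], [1.0, 1.0])
print(a)  # equal and opposite accelerations along x
```

Production codes replace the inner double loop with tree or particle-mesh approximations, as described below, precisely because this pairwise loop becomes prohibitive at large N.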
Sverre Aarseth at the University of Cambridge (UK) dedicated his entire scientific life to the development of a series of highly efficient N-body codes for astrophysical applications which use adaptive (hierarchical) time steps, an Ahmad-Cohen neighbour scheme and regularization of close encounters. Regularization is a mathematical trick to
remove the singularity in the Newtonian law of gravitation for two particles which approach each other arbitrarily close. Sverre Aarseth's codes are used to study the dynamics of star clusters, planetary systems and galactic nuclei. == General relativity simulations == Many simulations are large enough that the effects of general relativity in establishing a Friedmann-Lemaitre-Robertson-Walker cosmology are significant. This is incorporated in the simulation as an evolving measure of distance (or scale factor) in a comoving coordinate system, which causes the particles to slow in comoving coordinates (as well as due to the redshifting of their physical energy). However, the contributions of general relativity and the finite speed of gravity can otherwise be ignored, as typical dynamical timescales are long compared to the light crossing time for the simulation, and the space-time curvature induced by the particles and the particle velocities are small. The boundary conditions of these cosmological simulations are usually periodic (or toroidal), so that one edge of the simulation volume matches up with the opposite edge. == Calculation optimizations == N-body simulations are simple in principle, because they involve merely integrating the 6N ordinary differential equations defining the particle motions in Newtonian gravity. In practice, the number N of particles involved is usually very large (typical simulations include many millions, the Millennium simulation included ten billion) and the number of particle-particle interactions needing to be computed increases on the order of N², and so direct integration of the differential equations can be prohibitively computationally expensive. Therefore, a number of refinements are commonly used. Numerical integration is usually performed over small timesteps using a method such as leapfrog integration. However, all numerical integration leads to errors. Smaller steps give lower errors but run more slowly.
Leapfrog integration is roughly second order in the timestep; other integrators such as
Runge–Kutta methods can have 4th order accuracy or much higher. One of the simplest refinements is that each particle carries with it its own timestep variable, so that particles with widely different dynamical times don't all have to be evolved forward at the rate of that with the shortest time. There are two basic approximation schemes to decrease the computational time for such simulations. These can reduce the computational complexity to O(N log N) or better, at the loss of accuracy. === Tree methods === In tree methods, such as a Barnes–Hut simulation, an octree is usually used to divide the volume into cubic cells and only interactions between particles from nearby cells need to be treated individually; particles in distant cells can be treated collectively as a single large particle centered at the distant cell's center of mass (or as a low-order multipole expansion). This can dramatically reduce the number of particle pair interactions that must be computed. To prevent the simulation from becoming swamped by computing particle-particle interactions, the cells must be refined to smaller cells in denser parts of the simulation which contain many particles per cell. For simulations where particles are not evenly distributed, the well-separated pair decomposition methods of Callahan and Kosaraju yield optimal O(n log n) time per iteration with fixed dimension. === Particle mesh method === Another possibility is the particle mesh method in which space is discretised on a mesh and, for the purposes of computing the gravitational potential, particles are assumed to be divided between the surrounding 2x2 vertices of the mesh. The potential energy Φ can be found with the Poisson equation ∇ 2 Φ = 4 π G ρ , {\displaystyle \nabla ^{2}\Phi =4\pi G{\rho },\,} where G is Newton's constant and ρ {\displaystyle \rho } is the density
(number of particles at the mesh points). The fast Fourier transform can solve this efficiently by going to the frequency domain, where the Poisson equation has the simple form

$\hat{\Phi} = -4\pi G\,\frac{\hat{\rho}}{k^{2}},$

where $\vec{k}$ is the comoving wavenumber and the hats denote Fourier transforms. Since $\vec{g} = -\vec{\nabla}\Phi$, the gravitational field can now be found by multiplying by $-i\vec{k}$ and computing the inverse Fourier transform (or computing the inverse transform and then using some other method). Since this method is limited by the mesh size, in practice a smaller mesh or some other technique (such as combining with a tree or simple particle-particle algorithm) is used to compute the small-scale forces. Sometimes an adaptive mesh is used, in which the mesh cells are much smaller in the denser regions of the simulation.

=== Special-case optimizations ===

Several different gravitational perturbation algorithms are used to get fairly accurate estimates of the path of objects in the Solar System. People often decide to put a satellite in a frozen orbit. The path of a satellite closely orbiting the Earth can be accurately modeled starting from the 2-body elliptical orbit around the center of the Earth, and adding small corrections due to the oblateness of the Earth, gravitational attraction of the Sun and Moon, atmospheric drag, etc. It is possible to find a frozen orbit without calculating the actual path of the satellite. The path of a small planet, comet, or long-range spacecraft can often be accurately modeled starting from the 2-body elliptical orbit around the Sun, and adding small corrections from the gravitational attraction
of the larger planets in their known orbits. Some characteristics of the long-term paths of a system of particles can be calculated directly. The actual path of any particular particle does not need to be calculated as an intermediate step. Such characteristics include Lyapunov stability, Lyapunov time, various measurements from ergodic theory, etc.

== Two-particle systems ==

Although there are millions or billions of particles in typical simulations, each typically corresponds to a particle with a very large mass, typically $10^{9}$ solar masses. This can introduce problems with short-range interactions between the particles, such as the formation of two-particle binary systems. As the particles are meant to represent large numbers of dark matter particles or groups of stars, these binaries are unphysical. To prevent this, a softened Newtonian force law is used, which does not diverge as the inverse-square radius at short distances. Most simulations implement this quite naturally by running the simulations on cells of finite size. It is important to implement the discretization procedure in such a way that particles always exert a vanishing force on themselves.

=== Softening ===

Softening is a numerical trick used in N-body techniques to prevent numerical divergences when a particle comes too close to another (and the force goes to infinity). This is obtained by modifying the regularized gravitational potential of each particle as

$\Phi = -\frac{1}{\sqrt{r^{2}+\epsilon^{2}}}$

(rather than $1/r$), where $\epsilon$ is the softening parameter. The value of the softening parameter should be set small enough to keep simulations realistic.

== Results from N-body simulations ==

N-body simulations give findings on the large-scale dark matter distribution and the structure of dark matter halos. According to simulations of cold dark matter, the overall distribution of
dark matter on a large scale is not entirely uniform. Instead, it displays a structure resembling a network, consisting of voids, walls, filaments, and halos. Also, simulations show that the relationship between the concentration of halos and factors such as mass, initial fluctuation spectrum, and cosmological parameters is linked to the actual formation time of the halos. In particular, halos with lower mass tend to form earlier and, as a result, have higher concentrations, due to the higher density of the Universe at the time of their formation. Shapes of halos are found to deviate from being perfectly spherical. Typically, halos are found to be elongated and become increasingly prolate towards their centers. However, interactions between dark matter and baryons would affect the internal structure of dark matter halos. Simulations that model both dark matter and baryons are needed to study small-scale structures.

== Incorporating baryons, leptons and photons into simulations ==

Many simulations simulate only cold dark matter, and thus include only the gravitational force. Incorporating baryons, leptons and photons into the simulations dramatically increases their complexity, and often radical simplifications of the underlying physics must be made. However, this is an extremely important area, and many modern simulations are now trying to understand processes that occur during galaxy formation which could account for galaxy bias.

== Computational complexity ==

Reif and Tate proved that the n-body reachability problem – given n bodies satisfying a fixed electrostatic potential law, determine whether a body reaches a destination ball within a given time bound, where poly(n) bits of accuracy are required and the target time is poly(n) – is in PSPACE. On the other hand, if the question is whether the body eventually reaches the destination ball, the problem is PSPACE-hard. These bounds are based on
similar complexity bounds obtained for ray tracing.

== Example simulations ==

=== Common boilerplate code ===

The simplest implementation of N-body simulations where $n \geq 3$ is a naive propagation of orbiting bodies; naive implying that the only forces acting on the orbiting bodies are the gravitational forces which they exert on each other. In object-oriented programming languages, such as C++, some boilerplate code is useful for establishing the fundamental mathematical structures as well as data containers required for propagation; namely state vectors, and thus vectors, and some fundamental object containing this data, as well as the mass of an orbiting body. This method is applicable to other types of N-body simulations as well; a simulation of point masses with charges would use a similar method, but the force would be due to attraction or repulsion by interaction of electric fields. Regardless, the acceleration of a particle is the result of the summed force vectors divided by the mass of the particle:

$\vec{a} = \frac{1}{m}\sum\vec{F}$

An example of a programmatically stable and scalable method for containing kinematic data for a particle is the use of fixed-length arrays, which in optimised code allow for easy memory allocation and prediction of consumed resources, as seen in the following C++ code. Note that OrbitalEntity contains enough room for a state vector, where:

$e_{0} = x$, the projection of the object's position vector in Cartesian space along $[1\;0\;0]$
$e_{1} = y$, the projection of the object's position vector in Cartesian space along $[0\;1\;0]$
$e_{2} = z$, the projection of the object's position vector in Cartesian space along $[0\;0\;1]$
$e_{3} = \dot{x}$, the projection of the object's velocity vector in Cartesian space along $[1\;0\;0]$
$e_{4} = \dot{y}$, the projection of the object's velocity vector in Cartesian space along $[0\;1\;0]$
$e_{5} = \dot{z}$, the projection of the object's velocity vector in Cartesian space along $[0\;0\;1]$

Additionally, OrbitalEntity contains enough room for a mass value.

=== Initialisation of simulation parameters ===

Commonly, N-body simulations will be systems based on some type of equations of motion; of these, most will be dependent on some initial configuration to "seed" the simulation. In systems such as those dependent on some gravitational or electric potential, the force on a simulation entity is independent of its velocity. Hence, to seed the forces of the simulation, only initial positions are needed, but this alone will not allow propagation; initial velocities are required as well. Consider a planet orbiting a star: it has no motion initially, but is subject to the gravitational attraction of its host star. As time progresses, and time steps are added, it will gather velocity according to its acceleration. For a given instant in time, $t_{n}$, the resultant acceleration of a body due to its neighbouring masses is independent of its velocity; however, for the time step $t_{n+1}$, the resulting change in position is significantly different, due to the propagation's inherent dependency on velocity. In basic propagation mechanisms, such as the symplectic Euler method to be used below, the position of an object at $t_{n+1}$ is only dependent on its velocity at
$t_{n}$, as the shift in position is calculated via

$\vec{r}_{t_{n+1}} = \vec{r}_{t_{n}} + \vec{v}_{t_{n}}\cdot\Delta t$

Without acceleration, $\vec{v}_{t_{n}}$ is static; however, from the perspective of an observer seeing only position, it will take two time steps to see a change in velocity.

A solar-system-like simulation can be accomplished by taking average distances of planet-equivalent point masses from a central star. To keep code simple, a non-rigorous approach based on semi-major axes and mean velocities will be used. Memory space for these bodies must be reserved before the bodies are configured; to allow for scalability, a malloc command may be used, where N_ASTEROIDS is a variable which will remain at 0 temporarily but allows for future inclusion of significant numbers of asteroids, at the user's discretion. A critical step for the configuration of simulations is to establish the time ranges of the simulation, $t_{0}$ to $t_{\text{end}}$, as well as the incremental time step $dt$ which will progress the simulation forward. The positions and velocities established above are interpreted to be correct for $t = t_{0}$. The extent of a simulation would logically be the period where $t_{0} \leq t < t_{\text{end}}$.

=== Propagation ===

An entire simulation can consist of hundreds, thousands, millions, billions, or sometimes trillions of time steps. At the elementary level, each time step (for simulations with particles moving due to forces exerted on them) involves calculating the forces on each body, calculating the accelerations of each body ($\vec{a}$), calculating
the velocities of each body ($\vec{v}_{n} = \vec{v}_{n-1} + \vec{a}_{n}\cdot\Delta t$), and calculating the new position of each body ($\vec{r}_{n+1} = \vec{r}_{n} + \vec{v}_{n}\cdot\Delta t$). The above can be implemented quite simply with a while loop which continues while $t$ exists in the aforementioned range. Focusing on the inner four rocky planets in the simulation, the trajectories resulting from the above propagation are shown below.

== See also ==

Millennium Run – Computer simulation of the universe
Large-scale structure of the cosmos – All of space observable from the Earth at the present
GADGET – Computer software for cosmological simulations
Galaxy formation and evolution – Subfield of cosmology
Natural units – Units of measurement based on universal physical constants
Virgo Consortium
Barnes–Hut simulation – Approximation algorithm for the n-body problem
Bolshoi cosmological simulation – Computer simulation of the universe

== References ==

=== Further reading ===

von Hoerner, Sebastian (1960). "Die numerische Integration des n-Körper-Problemes für Sternhaufen. I". Zeitschrift für Astrophysik (in German). 50: 184. Bibcode:1960ZA.....50..184V.
von Hoerner, Sebastian (1963). "Die numerische Integration des n-Körper-Problemes für Sternhaufen. II". Zeitschrift für Astrophysik (in German). 57: 47. Bibcode:1963ZA.....57...47V.
Aarseth, Sverre J. (2003). Gravitational N-body Simulations: Tools and Algorithms. Cambridge University Press. ISBN 978-0-521-12153-8.
Bertschinger, Edmund (1998). "Simulations of structure formation in the universe". Annual Review of Astronomy and Astrophysics. 36 (1): 599–654. Bibcode:1998ARA&A..36..599B. doi:10.1146/annurev.astro.36.1.599.
Binney, James; Tremaine, Scott (1987). Galactic Dynamics. Princeton University Press. ISBN 978-0-691-08445-9.
Callahan, Paul B.; Kosaraju, Sambasiva Rao (1992). "A decomposition of multidimensional point sets with applications to k-nearest-neighbors and n-body potential fields (preliminary version)".
STOC '92: Proc. ACM Symp. Theory of Computing. ACM.
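The propagation scheme described in the article – per-step force evaluation, symplectic Euler velocity and position updates, and a softened potential to avoid divergences at small separations – can be sketched briefly. The following Python sketch is illustrative only; the constants, the two-body setup, and all names are assumptions, not the article's (omitted) C++ listings:

```python
import math

G = 6.674e-11  # gravitational constant in SI units (illustrative value)

def accelerations(pos, masses, eps):
    """Softened pairwise gravitational accelerations; eps is the softening parameter."""
    acc = [[0.0, 0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue  # a particle exerts a vanishing force on itself
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = d[0] ** 2 + d[1] ** 2 + d[2] ** 2 + eps ** 2  # softened separation
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            for k in range(3):
                acc[i][k] += G * masses[j] * d[k] * inv_r3
    return acc

def step(pos, vel, masses, dt, eps=1e3):
    """One symplectic Euler step: velocities are updated first, then positions."""
    acc = accelerations(pos, masses, eps)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += acc[i][k] * dt
            pos[i][k] += vel[i][k] * dt

# Illustrative two-body setup: a Sun-like star and an Earth-like planet
# on a near-circular orbit, propagated for one year of hourly time steps.
pos = [[0.0, 0.0, 0.0], [1.496e11, 0.0, 0.0]]
vel = [[0.0, 0.0, 0.0], [0.0, 2.978e4, 0.0]]
masses = [1.989e30, 5.972e24]
for _ in range(24 * 365):
    step(pos, vel, masses, dt=3600.0)
separation = math.dist(pos[0], pos[1])  # stays close to the initial radius
```

Because each velocity update uses the freshly computed acceleration before the position update, this is the symplectic Euler scheme named in the article; swapping the two update lines would give the non-symplectic explicit Euler method, whose energy error grows much faster.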
In physics, Hamilton's principle is William Rowan Hamilton's formulation of the principle of stationary action. It states that the dynamics of a physical system are determined by a variational problem for a functional based on a single function, the Lagrangian, which may contain all physical information concerning the system and the forces acting on it. The variational problem is equivalent to, and allows for the derivation of, the differential equations of motion of the physical system. Although formulated originally for classical mechanics, Hamilton's principle also applies to classical fields such as the electromagnetic and gravitational fields, and plays an important role in quantum mechanics, quantum field theory and criticality theories.

== Mathematical formulation ==

Hamilton's principle states that the true evolution q(t) of a system described by N generalized coordinates q = (q1, q2, ..., qN) between two specified states q1 = q(t1) and q2 = q(t2) at two specified times t1 and t2 is a stationary point (a point where the variation is zero) of the action functional

$\mathcal{S}[\mathbf{q}] \ \stackrel{\mathrm{def}}{=}\ \int_{t_{1}}^{t_{2}} L(\mathbf{q}(t), \dot{\mathbf{q}}(t), t)\,dt$

where $L(\mathbf{q}, \dot{\mathbf{q}}, t)$ is the Lagrangian function for the system. In other words, any first-order perturbation of the true evolution results in (at most) second-order changes in $\mathcal{S}$. The action $\mathcal{S}$ is a functional, i.e., something that takes as its input a function and returns a single number, a scalar. In terms of functional analysis, Hamilton's principle states that the true evolution of a
{ "page_id": 4852151, "source": null, "title": "Hamilton's principle" }
physical system is a solution of the functional equation $\frac{\delta\mathcal{S}}{\delta\mathbf{q}(t)} = 0$. That is, the system takes a path in configuration space for which the action is stationary, with fixed boundary conditions at the beginning and the end of the path.

=== Euler–Lagrange equations derived from the action integral ===

Requiring that the true trajectory q(t) be a stationary point of the action functional $\mathcal{S}$ is equivalent to a set of differential equations for q(t) (the Euler–Lagrange equations), which may be derived as follows. Let q(t) represent the true evolution of the system between two specified states q1 = q(t1) and q2 = q(t2) at two specified times t1 and t2, and let ε(t) be a small perturbation that is zero at the endpoints of the trajectory: $\boldsymbol{\varepsilon}(t_{1}) = \boldsymbol{\varepsilon}(t_{2}) \ \stackrel{\mathrm{def}}{=}\ 0$. To first order in the perturbation ε(t), the change in the action functional $\delta\mathcal{S}$ would be

$\delta\mathcal{S} = \int_{t_{1}}^{t_{2}} \left[ L(\mathbf{q}+\boldsymbol{\varepsilon}, \dot{\mathbf{q}}+\dot{\boldsymbol{\varepsilon}}) - L(\mathbf{q}, \dot{\mathbf{q}}) \right] dt = \int_{t_{1}}^{t_{2}} \left( \boldsymbol{\varepsilon}\cdot\frac{\partial L}{\partial\mathbf{q}} + \dot{\boldsymbol{\varepsilon}}\cdot\frac{\partial L}{\partial\dot{\mathbf{q}}} \right) dt$

where we have expanded the Lagrangian L to first order in the perturbation ε(t). Applying integration by parts to the last term results in

$\delta\mathcal{S} = \left[ \boldsymbol{\varepsilon}\cdot\frac{\partial L}{\partial\dot{\mathbf{q}}} \right]_{t_{1}}^{t_{2}} + \int_{t_{1}}^{t_{2}} \left( \boldsymbol{\varepsilon}\cdot\frac{\partial L}{\partial\mathbf{q}} - \boldsymbol{\varepsilon}\cdot\frac{d}{dt}\frac{\partial L}{\partial\dot{\mathbf{q}}} \right) dt$

The boundary conditions $\boldsymbol{\varepsilon}(t_{1}) = \boldsymbol{\varepsilon}(t_{2}) = 0$ cause the first term to vanish:

$\delta\mathcal{S} = \int_{t_{1}}^{t_{2}} \boldsymbol{\varepsilon}\cdot\left( \frac{\partial L}{\partial\mathbf{q}} - \frac{d}{dt}\frac{\partial L}{\partial\dot{\mathbf{q}}} \right) dt$

Hamilton's principle requires that this first-order change $\delta\mathcal{S}$ is zero for all possible perturbations ε(t), i.e., the true path is a stationary point of the action functional $\mathcal{S}$ (either a minimum, maximum or saddle point). This requirement can be satisfied if and only if

$\frac{\partial L}{\partial\mathbf{q}} - \frac{d}{dt}\frac{\partial L}{\partial\dot{\mathbf{q}}} = 0$

These equations are called the Euler–Lagrange equations for the variational problem.

=== Canonical momenta and constants of motion ===

The conjugate momentum pk for a generalized coordinate qk is defined by the equation

$p_{k} \ \stackrel{\mathrm{def}}{=}\ \frac{\partial L}{\partial\dot{q}_{k}}$

An important special case of the Euler–Lagrange equation occurs when L does not contain a generalized coordinate qk explicitly:

$\frac{\partial L}{\partial q_{k}} = 0 \quad\Rightarrow\quad \frac{d}{dt}\frac{\partial L}{\partial\dot{q}_{k}} = 0 \quad\Rightarrow\quad \frac{dp_{k}}{dt} = 0$

that is, the conjugate momentum is a constant of the motion. In such cases, the coordinate qk is called a cyclic coordinate. For example, if we use polar coordinates t, r, θ to describe the planar motion of a particle, and if L does not depend on θ, the conjugate momentum is the conserved angular momentum.

=== Example: Free particle in polar coordinates ===

Trivial examples help to appreciate the use of the action principle via the Euler–Lagrange equations. A free particle (mass m and velocity v) in Euclidean space moves in a straight line. Using the Euler–Lagrange equations, this can be shown in polar coordinates as follows. In the absence of a potential, the Lagrangian is simply equal to the kinetic energy

$L = \frac{1}{2}mv^{2} = \frac{1}{2}m\left(\dot{x}^{2} + \dot{y}^{2}\right)$

in orthonormal (x, y) coordinates, where the dot represents differentiation with respect to the curve parameter (usually the time, t). Therefore, upon application of the Euler–Lagrange equations,

$\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{x}}\right) - \frac{\partial L}{\partial x} = 0 \qquad\Rightarrow\qquad m\ddot{x} = 0$

and likewise for y. Thus the Euler–Lagrange formulation can be used to derive Newton's laws. In polar coordinates (r, φ) the kinetic energy, and hence the Lagrangian, becomes

$L = \frac{1}{2}m\left(\dot{r}^{2} + r^{2}\dot{\varphi}^{2}\right).$

The radial r and φ components of the Euler–Lagrange equations become,
respectively,

$\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{r}}\right) - \frac{\partial L}{\partial r} = 0 \qquad\Rightarrow\qquad \ddot{r} - r\dot{\varphi}^{2} = 0$

$\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{\varphi}}\right) - \frac{\partial L}{\partial\varphi} = 0 \qquad\Rightarrow\qquad \ddot{\varphi} + \frac{2}{r}\dot{r}\dot{\varphi} = 0$

remembering that r is also dependent on time and that the product rule is needed to compute the total time derivative $\frac{d}{dt}\,m r^{2}\dot{\varphi}$. The solution of these two equations is given by

$r = \sqrt{(at+b)^{2} + c^{2}}$

$\varphi = \tan^{-1}\left(\frac{at+b}{c}\right) + d$

for a set of constants a, b, c, d determined by initial conditions. Thus, indeed, the solution is a straight line given in polar coordinates: a is the velocity, c is the distance of the closest approach to the origin, and d is the angle of motion.

== Applied to deformable bodies ==

Hamilton's principle is an important variational principle in elastodynamics. As opposed to a system composed of rigid bodies, deformable bodies have an infinite number of degrees of freedom and occupy continuous regions of space; consequently, the state of the system is described by using continuous functions of space and time. The extended Hamilton principle for such bodies is given by

$\int_{t_{1}}^{t_{2}} \left[ \delta W_{e} + \delta T - \delta U \right] dt = 0$

where T is the kinetic energy, U is the elastic energy, We is the work done by external loads on the body, and t1, t2 the initial and final times. If the system is conservative, the work done by external forces may be derived from a scalar potential V. In this case,

$\delta\int_{t_{1}}^{t_{2}} \left[ T - (U + V) \right] dt = 0.$

This is called Hamilton's principle and it is invariant under coordinate transformations.

== Comparison with Maupertuis' principle ==

Hamilton's principle and Maupertuis' principle are occasionally confused, and both have been called the principle of least action. They differ in three important ways: their definition of the action... Maupertuis' principle uses an integral over the generalized coordinates known as the abbreviated action or reduced action

$\mathcal{S}_{0} \ \stackrel{\mathrm{def}}{=}\ \int \mathbf{p}\cdot d\mathbf{q}$

where p = (p1, p2, ..., pN) are the conjugate momenta defined above. By contrast, Hamilton's principle uses $\mathcal{S}$, the integral of the Lagrangian over time. ...the solution that they determine... Hamilton's principle determines the trajectory q(t) as a function of time, whereas Maupertuis' principle determines only the shape of the trajectory in the generalized coordinates. For example, Maupertuis' principle determines the shape of the ellipse on which a particle moves under the influence of an inverse-square central force such as gravity, but does not describe per se how the particle moves along that trajectory. (However, this time parameterization may be determined from the trajectory itself in subsequent calculations using the conservation of energy.) By contrast, Hamilton's principle directly specifies the
motion along the ellipse as a function of time. ...and the constraints on the variation. Maupertuis' principle requires that the two endpoint states q1 and q2 be given, and that energy be conserved along every trajectory (the same energy for each trajectory). This forces the endpoint times to be varied as well. By contrast, Hamilton's principle does not require the conservation of energy, but does require that the endpoint times t1 and t2 be specified as well as the endpoint states q1 and q2.

== Action principle for fields ==

=== Classical field theory ===

The action principle can be extended to obtain the equations of motion for fields, such as the electromagnetic field or gravity. The Einstein equation utilizes the Einstein–Hilbert action as constrained by a variational principle. The path of a body in a gravitational field (i.e. free fall in spacetime, a so-called geodesic) can be found using the action principle.

=== Quantum mechanics and quantum field theory ===

In quantum mechanics, the system does not follow a single path whose action is stationary, but the behavior of the system depends on all imaginable paths and the value of their action. The action corresponding to the various paths is used to calculate the path integral, which gives the probability amplitudes of the various outcomes. Although equivalent in classical mechanics with Newton's laws, the action principle is better suited for generalizations and plays an important role in modern physics. Indeed, this principle is one of the great generalizations in physical science. In particular, it is fully appreciated and best understood within quantum mechanics. Richard Feynman's path integral formulation of quantum mechanics is based on a stationary-action principle, using path integrals. Maxwell's equations can be derived as conditions of stationary action.

== See also ==

Analytical mechanics
Configuration space
Hamilton–Jacobi equation
Phase space
Geodesics as Hamiltonian flows
Herglotz's variational principle

== References ==

W.R. Hamilton, "On a General Method in Dynamics.", Philosophical Transactions of the Royal Society Part II (1834) pp. 247–308; Part I (1835) pp. 95–144. (From the collection Sir William Rowan Hamilton (1805–1865): Mathematical Papers, edited by David R. Wilkins, School of Mathematics, Trinity College, Dublin 2, Ireland (2000); also reviewed as On a General Method in Dynamics.)
Goldstein H. (1980) Classical Mechanics, 2nd ed., Addison Wesley, pp. 35–69.
Landau LD and Lifshitz EM (1976) Mechanics, 3rd ed., Pergamon Press. ISBN 0-08-021022-8 (hardcover) and ISBN 0-08-029141-4 (softcover), pp. 2–4.
Arnold VI. (1989) Mathematical Methods of Classical Mechanics, 2nd ed., Springer Verlag, pp. 59–61.
Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013.
Bedford A.: Hamilton's Principle in Continuum Mechanics. Pitman, 1985. Springer 2001, ISBN 978-3-030-90305-3, ISBN 978-3-030-90306-0 (eBook), https://doi.org/10.1007/978-3-030-90306-0
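The stationarity of the action on the true path can also be checked numerically. For a free particle, whose Lagrangian is pure kinetic energy, the straight-line path between fixed endpoints gives a smaller action than any nearby perturbed path that vanishes at the endpoints. The Python sketch below discretises the action integral; the grid size, the sinusoidal perturbation, and all names are illustrative assumptions:

```python
import math

def action(xs, dt, m=1.0):
    """Discretised action S = integral of (1/2) m xdot^2 dt for a free particle."""
    return sum(0.5 * m * ((xs[i + 1] - xs[i]) / dt) ** 2 * dt
               for i in range(len(xs) - 1))

N = 1000                  # number of time steps on the interval [0, 1]
dt = 1.0 / N
t = [i * dt for i in range(N + 1)]

straight = t[:]                                            # true path: x(t) = t
wiggly = [ti + 0.1 * math.sin(math.pi * ti) for ti in t]   # same endpoints, perturbed

S_straight = action(straight, dt)   # equals 1/2 for m = 1
S_wiggly = action(wiggly, dt)       # strictly larger than S_straight
```

For the free particle, the stationary point is in fact a minimum, so every admissible perturbation raises the action; in general, Hamilton's principle guarantees only stationarity (a minimum, maximum, or saddle point), as noted above.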
In pathology, grading is a measure of the cell appearance in tumors and other neoplasms. Some pathology grading systems apply only to malignant neoplasms (cancer); others apply also to benign neoplasms. The neoplastic grading is a measure of cell anaplasia (reversion of differentiation) in the sampled tumor and is based on the resemblance of the tumor to the tissue of origin. Grading in cancer is distinguished from staging, which is a measure of the extent to which the cancer has spread. Pathology grading systems classify the abnormality of microscopic cell appearance and deviations in the cells' rate of growth, with the goal of predicting developments at the tissue level (see also the 4 major histological changes in dysplasia). Cancer is a disorder of cell life cycle alteration that leads (non-trivially) to excessive cell proliferation rates, typically longer cell lifespans, and poor differentiation. The grade score (numerical: G1 up to G4) increases with the lack of cellular differentiation: it reflects how much the tumor cells differ from the cells of the normal tissue they have originated from (see 'Categories' below). Tumors may be graded on four-tier, three-tier, or two-tier scales, depending on the institution and the tumor type. The histologic tumor grade score, along with the metastatic (whole-body-level cancer-spread) staging, is used to evaluate each specific cancer patient, develop their individual treatment strategy, and predict their prognosis. A cancer that is very poorly differentiated is called anaplastic.

== Categories ==

Grading systems are also different for many common types of cancer, though following a similar pattern, with grades being increasingly malignant over a range of 1 to 4. If no specific system is used, the following general grades are most commonly used, and recommended by the American Joint Commission on Cancer and other bodies: GX – Grade cannot be assessed; G1 – Well differentiated
{ "page_id": 3279289, "source": null, "title": "Grading (tumors)" }
(Low grade); G2 – Moderately differentiated (Intermediate grade); G3 – Poorly differentiated (High grade); G4 – Undifferentiated (High grade).

=== Specific systems ===

Of the many cancer-specific schemes, the Gleason system, named after Donald Floyd Gleason, used to grade the adenocarcinoma cells in prostate cancer, is the most famous. This system uses a grading score ranging from 2 to 10. Lower Gleason scores describe well-differentiated, less aggressive tumors. Other systems include the Bloom–Richardson grading system for breast cancer and the Fuhrman system for kidney cancer. Invasive-front grading is useful as well in oral squamous cell carcinoma. For soft-tissue sarcoma, two histological grading systems are used: the National Cancer Institute (NCI) system and the French Federation of Cancer Centers Sarcoma Group (FNCLCC) system.

== Examples of grading schemes ==

== See also ==

TNM staging system (Other parameters)
Tumor kinds that have their own grading system: Teratoma, Gleason score

== References ==

== External links ==

CancerWeb
Atlas Interactif de Neuro-Oncologie
Learning classifier systems, or LCS, are a paradigm of rule-based machine learning methods that combine a discovery component (typically a genetic algorithm from evolutionary computation) with a learning component (performing either supervised learning, reinforcement learning, or unsupervised learning). Learning classifier systems seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions (e.g. behavior modeling, classification, data mining, regression, function approximation, or game strategy). This approach allows complex solution spaces to be broken up into smaller, simpler parts. The founding concepts behind learning classifier systems came from attempts to model complex adaptive systems, using rule-based agents to form an artificial cognitive system (i.e. artificial intelligence).

== Methodology ==

The architecture and components of a given learning classifier system can be quite variable. It is useful to think of an LCS as a machine consisting of several interacting components. Components may be added or removed, or existing components modified/exchanged, to suit the demands of a given problem domain (like algorithmic building blocks) or to make the algorithm flexible enough to function in many different problem domains. As a result, the LCS paradigm can be flexibly applied to many problem domains that call for machine learning. The major divisions among LCS implementations are as follows: (1) Michigan-style architecture vs. Pittsburgh-style architecture, (2) reinforcement learning vs. supervised learning, (3) incremental learning vs. batch learning, (4) online learning vs. offline learning, (5) strength-based fitness vs. accuracy-based fitness, and (6) complete action mapping vs. best action mapping. These divisions are not necessarily mutually exclusive.
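In the binary problem domains common in the LCS literature, each rule's condition is often encoded as a ternary string over {0, 1, #}, where '#' is a "don't care" symbol matching either bit. A minimal Python sketch of this matching step (the rule strings and function names are illustrative, not drawn from any particular LCS implementation):

```python
def matches(condition, state):
    """True if a ternary condition string (over '0', '1', '#') matches a binary state."""
    return len(condition) == len(state) and all(
        c == '#' or c == s for c, s in zip(condition, state))

# A tiny rule population of {IF condition THEN action} pairs
population = [('1##0', 1), ('0#11', 0), ('####', 1)]

def match_set(population, state):
    """The match set: every rule whose condition matches the current state."""
    return [(cond, act) for cond, act in population if matches(cond, state)]

M = match_set(population, '1010')   # the first and third rules match
```

Whether such a rule population is evaluated one instance at a time or over a whole dataset depends on the architectural divisions just listed.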
For example, XCS, the best known and best studied LCS algorithm, is Michigan-style, was designed for reinforcement learning but can also perform supervised learning, applies incremental learning that
{ "page_id": 854461, "source": null, "title": "Learning classifier system" }
can be either online or offline, applies accuracy-based fitness, and seeks to generate a complete action mapping. === Elements of a generic LCS algorithm === Keeping in mind that LCS is a paradigm for genetic-based machine learning rather than a specific method, the following outlines key elements of a generic, modern (i.e. post-XCS) LCS algorithm. For simplicity let us focus on Michigan-style architecture with supervised learning. See the illustrations on the right laying out the sequential steps involved in this type of generic LCS. ==== Environment ==== The environment is the source of data upon which an LCS learns. It can be an offline, finite training dataset (characteristic of a data mining, classification, or regression problem), or an online sequential stream of live training instances. Each training instance is assumed to include some number of features (also referred to as attributes, or independent variables) and a single endpoint of interest (also referred to as the class, action, phenotype, prediction, or dependent variable). Part of LCS learning can involve feature selection; therefore, not all of the features in the training data need to be informative. The set of feature values of an instance is commonly referred to as the state. For simplicity, let's assume an example problem domain with Boolean/binary features and a Boolean/binary class. For Michigan-style systems, one instance from the environment is trained on each learning cycle (i.e. incremental learning). Pittsburgh-style systems perform batch learning, where rule sets are evaluated in each iteration over much or all of the training data. ==== Rule/classifier/population ==== A rule is a context-dependent relationship between state values and some prediction. Rules typically take the form of an {IF:THEN} expression (e.g. {IF 'condition' THEN 'action'}, or as a more specific example, {IF 'red' AND 'octagon' THEN 'stop-sign'}). A critical concept in LCS and
rule-based machine learning alike is that an individual rule is not in itself a model, since the rule is only applicable when its condition is satisfied. Think of a rule as a "local model" of the solution space. Rules can be represented in many different ways to handle different data types (e.g. binary, discrete-valued, ordinal, continuous-valued). Given binary data, LCS traditionally applies a ternary rule representation (i.e. rules can include either a 0, 1, or '#' for each feature in the data). The 'don't care' symbol (i.e. '#') serves as a wild card within a rule's condition, allowing rules, and the system as a whole, to generalize relationships between features and the target endpoint to be predicted. Consider the following rule (#1###0 ~ 1) (i.e. condition ~ action). This rule can be interpreted as: IF the second feature = 1 AND the sixth feature = 0 THEN the class prediction = 1. We would say that the second and sixth features were specified in this rule, while the others were generalized. This rule and the corresponding prediction are only applicable to an instance when the condition of the rule is satisfied by the instance. This is more commonly referred to as matching. In Michigan-style LCS, each rule has its own fitness, as well as a number of other rule parameters associated with it that can describe the number of copies of that rule that exist (i.e. the numerosity), the age of the rule, its accuracy, or the accuracy of its reward predictions, and other descriptive or experiential statistics. A rule along with its parameters is often referred to as a classifier. In Michigan-style systems, classifiers are contained within a population [P] that has a user-defined maximum number of classifiers. Unlike most stochastic search algorithms (e.g. evolutionary algorithms), LCS populations start out
empty (i.e. there is no need to randomly initialize a rule population). Classifiers will instead be initially introduced to the population with a covering mechanism. In any LCS, the trained model is a set of rules/classifiers, rather than any single rule/classifier. In Michigan-style LCS, the entire trained (and optionally, compacted) classifier population forms the prediction model. ==== Matching ==== One of the most critical and often time-consuming elements of an LCS is the matching process. The first step in an LCS learning cycle takes a single training instance from the environment and passes it to [P] where matching takes place. In step two, every rule in [P] is now compared to the training instance to see which rules match (i.e. are contextually relevant to the current instance). In step three, any matching rules are moved to a match set [M]. A rule matches a training instance if all feature values specified in the rule condition are equivalent to the corresponding feature value in the training instance. For example, assuming the training instance is (001001 ~ 0), these rules would match: (###0## ~ 0), (00###1 ~ 0), (#01001 ~ 1), but these rules would not (1##### ~ 0), (000##1 ~ 0), (#0#1#0 ~ 1). Notice that in matching, the endpoint/action specified by the rule is not taken into consideration. As a result, the match set may contain classifiers that propose conflicting actions. In the fourth step, since we are performing supervised learning, [M] is divided into a correct set [C] and an incorrect set [I]. A matching rule goes into the correct set if it proposes the correct action (based on the known action of the training instance), otherwise it goes into [I]. In reinforcement learning LCS, an action set [A] would be formed here instead, since the correct action is
not known. ==== Covering ==== At this point in the learning cycle, if no classifiers made it into either [M] or [C] (as would be the case when the population starts off empty), the covering mechanism is applied (fifth step). Covering is a form of online smart population initialization. Covering randomly generates a rule that matches the current training instance (and, in the case of supervised learning, that rule is also generated with the correct action). Assuming the training instance is (001001 ~ 0), covering might generate any of the following rules: (#0#0## ~ 0), (001001 ~ 0), (#010## ~ 0). Covering not only ensures that in each learning cycle there is at least one correct, matching rule in [C], but also that any rule initialized into the population will match at least one training instance. This prevents LCS from exploring the search space of rules that do not match any training instances. ==== Parameter updates/credit assignment/learning ==== In the sixth step, the rule parameters of any rule in [M] are updated to reflect the new experience gained from the current training instance. Depending on the LCS algorithm, a number of updates can take place at this step. For supervised learning, we can simply update the accuracy/error of a rule. Rule accuracy/error is different from model accuracy/error, since it is not calculated over the entire training data, but only over all instances that it matched. Rule accuracy is calculated by dividing the number of times the rule was in a correct set [C] by the number of times it was in a match set [M]. Rule accuracy can be thought of as a 'local accuracy'. Rule fitness is also updated here, and is commonly calculated as a function of rule accuracy. The concept of fitness is taken directly from classic genetic algorithms.
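The matching, covering, and rule-accuracy mechanics described above can be sketched in a few lines of Python. This is only an illustrative sketch: function names and the wildcard probability are assumptions, not part of any specific LCS implementation.

```python
import random

WILDCARD = '#'

def matches(condition, state):
    """A rule matches when every specified (non-'#') position equals the state bit."""
    return all(c == WILDCARD or c == s for c, s in zip(condition, state))

def cover(state, action, p_wildcard=0.5):
    """Covering: generate a random rule guaranteed to match `state`,
    paired (in supervised learning) with the correct action.
    `p_wildcard` is a hypothetical generalization probability."""
    condition = ''.join(WILDCARD if random.random() < p_wildcard else s
                        for s in state)
    return (condition, action)

# Examples from the text, for training instance (001001 ~ 0):
state = "001001"
assert matches("###0##", state) and matches("00###1", state) and matches("#01001", state)
assert not matches("1#####", state) and not matches("000##1", state)

# Rule accuracy is a 'local accuracy': times in [C] divided by times in [M].
match_count, correct_count = 8, 6   # hypothetical experience counts
accuracy = correct_count / match_count  # 0.75
```

Note that any rule produced by `cover` matches the covering instance by construction, which is exactly the property the text describes.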
Be aware that there are many variations on how LCS updates parameters in order to perform credit assignment and learning. ==== Subsumption ==== In the seventh step, a subsumption mechanism is typically applied. Subsumption is an explicit generalization mechanism that merges classifiers that cover redundant parts of the problem space. The subsuming classifier effectively absorbs the subsumed classifier (and has its numerosity increased). This can only happen when the subsuming classifier is more general, just as accurate, and covers all of the problem space of the classifier it subsumes. ==== Rule discovery/genetic algorithm ==== In the eighth step, LCS adopts a highly elitist genetic algorithm (GA) which selects two parent classifiers based on fitness (survival of the fittest). Parents are selected from [C], typically using tournament selection. Some systems have applied roulette wheel selection or deterministic selection, and some have instead selected parent rules from the whole population [P] (panmictic selection) or from [M]. Crossover and mutation operators are now applied to generate two new offspring rules. At this point, both the parent and offspring rules are returned to [P]. The LCS genetic algorithm is highly elitist since, in each learning iteration, the vast majority of the population is preserved. Rule discovery may alternatively be performed by some other method, such as an estimation of distribution algorithm, but a GA is by far the most common approach. Evolutionary algorithms like the GA employ a stochastic search, which makes LCS a stochastic algorithm. LCS seeks to cleverly explore the search space, but does not perform an exhaustive search of rule combinations, and is not guaranteed to converge on an optimal solution. ==== Deletion ==== The last step in a generic LCS learning cycle is to maintain the maximum population size. The deletion mechanism selects classifiers for deletion (commonly using roulette wheel selection).
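The subsumption condition and fitness-inverse deletion can also be sketched in Python. The tuple layout and the generality test (wildcard counts over a ternary condition) are illustrative assumptions; real systems such as XCS add experience and accuracy thresholds.

```python
import random

WILDCARD = '#'

def is_more_general(general, specific):
    """`general` covers all of `specific`'s space: wherever `general`
    specifies a bit, `specific` specifies the same bit, and `general`
    has strictly more wildcards."""
    covers = all(g == WILDCARD or g == s for g, s in zip(general, specific))
    return covers and general.count(WILDCARD) > specific.count(WILDCARD)

def can_subsume(subsumer, subsumed):
    """Hypothetical classifier triples: (condition, action, accuracy).
    Subsumption requires same action, at-least-equal accuracy,
    and a strictly more general condition."""
    cond_a, act_a, acc_a = subsumer
    cond_b, act_b, acc_b = subsumed
    return act_a == act_b and acc_a >= acc_b and is_more_general(cond_a, cond_b)

def select_for_deletion(fitnesses):
    """Roulette-wheel deletion: selection probability inversely
    proportional to fitness (small constant avoids division by zero)."""
    weights = [1.0 / (f + 1e-9) for f in fitnesses]
    return random.choices(range(len(fitnesses)), weights=weights, k=1)[0]

# (#1###0 ~ 1) can subsume the more specific (01#110 ~ 1) if at least as accurate:
assert can_subsume(("#1###0", 1, 0.9), ("01#110", 1, 0.8))
```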
The probability of a classifier being selected for deletion is inversely proportional to its fitness. When a classifier is selected for deletion, its numerosity parameter is reduced by one. When the numerosity of a classifier is reduced to zero, it is removed entirely from the population. ==== Training ==== LCS will cycle through these steps repeatedly for some user-defined number of training iterations, or until some user-defined termination criteria have been met. For online learning, LCS will obtain a completely new training instance each iteration from the environment. For offline learning, LCS will iterate through a finite training dataset. Once it reaches the last instance in the dataset, it will go back to the first instance and cycle through the dataset again. ==== Rule compaction ==== Once training is complete, the rule population will inevitably contain some poor, redundant, and inexperienced rules. It is common to apply a rule compaction, or condensation, heuristic as a post-processing step. The resulting compacted rule population is ready to be applied as a prediction model (e.g. make predictions on testing instances) and/or to be interpreted for knowledge discovery. ==== Prediction ==== Whether or not rule compaction has been applied, the output of an LCS algorithm is a population of classifiers that can be applied to making predictions on previously unseen instances. The prediction mechanism is not part of the supervised LCS learning cycle itself; however, it would play an important role in a reinforcement learning LCS learning cycle. For now we consider how the prediction mechanism can be applied for making predictions to test data. When making predictions, the LCS learning components are deactivated so that the population does not continue to learn from incoming testing data. A test instance is passed to [P] where a match set [M] is formed as
usual. At this point the match set is instead passed to a prediction array. Rules in the match set can predict different actions, therefore a voting scheme is applied. In a simple voting scheme, the action with the strongest supporting 'votes' from matching rules wins, and becomes the selected prediction. Not all rules get an equal vote. Rather, the strength of the vote for a single rule is commonly proportional to its numerosity and fitness. This voting scheme, and the nature of how LCSs store knowledge, suggests that LCS algorithms are implicitly ensemble learners. ==== Interpretation ==== Individual LCS rules are typically human-readable IF:THEN expressions. Rules that constitute the LCS prediction model can be ranked by different rule parameters and manually inspected. Global strategies to guide knowledge discovery using statistical and graphical methods have also been proposed. With respect to other advanced machine learning approaches, such as artificial neural networks, random forests, or genetic programming, learning classifier systems are particularly well suited to problems that require interpretable solutions. == History == === Early years === John Henry Holland was best known for his work popularizing genetic algorithms (GA), through his ground-breaking book "Adaptation in Natural and Artificial Systems" in 1975 and his formalization of Holland's schema theorem. In 1976, Holland conceptualized an extension of the GA concept to what he called a "cognitive system", and provided the first detailed description of what would become known as the first learning classifier system in the paper "Cognitive Systems based on Adaptive Algorithms". This first system, named Cognitive System One (CS-1), was conceived as a modeling tool, designed to model a real system (i.e. environment) with unknown underlying dynamics using a population of human-readable rules. The goal was for a set of rules to perform online machine learning to adapt to
the environment based on infrequent payoff/reward (i.e. reinforcement learning) and apply these rules to generate a behavior that matched the real system. This early, ambitious implementation was later regarded as overly complex, yielding inconsistent results. Beginning in 1980, Kenneth de Jong and his student Stephen Smith took a different approach to rule-based machine learning with LS-1, where learning was viewed as an offline optimization process rather than an online adaptation process. This new approach was more similar to a standard genetic algorithm but evolved independent sets of rules. Since that time, LCS methods inspired by the online learning framework introduced by Holland at the University of Michigan have been referred to as Michigan-style LCS, and those inspired by Smith and De Jong at the University of Pittsburgh have been referred to as Pittsburgh-style LCS. In 1986, Holland developed what would be considered the standard Michigan-style LCS for the next decade. Other important concepts that emerged in the early days of LCS research included (1) the formalization of a bucket brigade algorithm (BBA) for credit assignment/learning, (2) selection of parent rules from a common 'environmental niche' (i.e. 
the match set [M]) rather than from the whole population [P], (3) covering, first introduced as a create operator, (4) the formalization of an action set [A], (5) a simplified algorithm architecture, (6) strength-based fitness, (7) consideration of single-step, or supervised learning, problems and the introduction of the correct set [C], (8) accuracy-based fitness, (9) the combination of fuzzy logic with LCS (which later spawned a lineage of fuzzy LCS algorithms), (10) encouraging long action chains and default hierarchies for improving performance on multi-step problems, (11) examining latent learning (which later inspired a new branch of anticipatory classifier systems (ACS)), and (12) the introduction of the first Q-learning-like credit assignment technique. While not all
of these concepts are applied in modern LCS algorithms, each was a landmark in the development of the LCS paradigm. === The revolution === Interest in learning classifier systems was reinvigorated in the mid-1990s largely due to two events: the development of the Q-learning algorithm for reinforcement learning, and the introduction of significantly simplified Michigan-style LCS architectures by Stewart Wilson. Wilson's Zeroth-level Classifier System (ZCS) focused on increasing algorithmic understandability based on Holland's standard LCS implementation. This was done, in part, by removing rule-bidding and the internal message list (essential to the original BBA credit assignment) and replacing them with a hybrid BBA/Q-learning strategy. ZCS demonstrated that a much simpler LCS architecture could perform as well as the original, more complex implementations. However, ZCS still suffered from performance drawbacks, including the proliferation of over-general classifiers. In 1995, Wilson published his landmark paper, "Classifier fitness based on accuracy", in which he introduced the classifier system XCS. XCS took the simplified architecture of ZCS and added an accuracy-based fitness, a niche GA (acting in the action set [A]), an explicit generalization mechanism called subsumption, and an adaptation of the Q-learning credit assignment. XCS was popularized by its ability to reach optimal performance while evolving accurate and maximally general classifiers, as well as its impressive problem flexibility (able to perform both reinforcement learning and supervised learning). XCS later became the best known and most studied LCS algorithm and defined a new family of accuracy-based LCS. ZCS alternatively became synonymous with strength-based LCS. XCS is also important because it successfully bridged the gap between LCS and the field of reinforcement learning. 
Following the success of XCS, LCS were later described as reinforcement learning systems endowed with a generalization capability. Reinforcement learning typically seeks to learn a value function that maps out a complete representation
of the state/action space. Similarly, the design of XCS drives it to form an all-inclusive and accurate representation of the problem space (i.e. a complete map) rather than focusing on high-payoff niches in the environment (as was the case with strength-based LCS). Conceptually, complete maps don't only capture what you should do, or what is correct, but also what you shouldn't do, or what is incorrect. By contrast, most strength-based LCSs, or exclusively supervised learning LCSs, seek a rule set of efficient generalizations in the form of a best action map (or a partial map). Comparisons between strength- vs. accuracy-based fitness and complete vs. best action maps have since been examined in greater detail. === In the wake of XCS === XCS inspired the development of a whole new generation of LCS algorithms and applications. In 1995, Congdon was the first to apply LCS to real-world epidemiological investigations of disease, followed closely by Holmes who developed BOOLE++, EpiCS, and later EpiXCS for epidemiological classification. These early works inspired later interest in applying LCS algorithms to complex and large-scale data mining tasks epitomized by bioinformatics applications. In 1998, Stolzmann introduced anticipatory classifier systems (ACS), which included rules in the form of 'condition-action-effect' rather than the classic 'condition-action' representation. ACS was designed to predict the perceptual consequences of an action in all possible situations in an environment. In other words, the system evolves a model that specifies not only what to do in a given situation, but also provides information about what will happen after a specific action is executed. This family of LCS algorithms is best suited to multi-step problems, planning, speeding up learning, or disambiguating perceptual aliasing (i.e. where the same observation is obtained in distinct states but requires different actions). Butz later pursued this anticipatory family of LCS
developing a number of improvements to the original method. In 2002, Wilson introduced XCSF, adding a computed action in order to perform function approximation. In 2003, Bernado-Mansilla introduced a sUpervised Classifier System (UCS), which specialized the XCS algorithm to the task of supervised learning, single-step problems, and forming a best action set. UCS removed the reinforcement learning strategy, as well as the explore/exploit learning phases characteristic of many reinforcement learners, in favor of a simple, accuracy-based rule fitness. Bull introduced a simple accuracy-based LCS (YCS) and a simple strength-based LCS, the Minimal Classifier System (MCS), in order to develop a better theoretical understanding of the LCS framework. Bacardit introduced GAssist and BioHEL, Pittsburgh-style LCSs designed for data mining and scalability to large datasets in bioinformatics applications. In 2008, Drugowitsch published the book "Design and Analysis of Learning Classifier Systems", including some theoretical examination of LCS algorithms. Butz introduced the first online learning visualization of rules within a GUI for XCSF (see the image at the top of this page). Urbanowicz extended the UCS framework and introduced ExSTraCS, explicitly designed for supervised learning in noisy problem domains (e.g. epidemiology and bioinformatics). ExSTraCS integrated (1) expert knowledge to drive covering and the genetic algorithm towards important features in the data, (2) a form of long-term memory referred to as attribute tracking, allowing for more efficient learning and the characterization of heterogeneous data patterns, and (3) a flexible rule representation similar to Bacardit's mixed discrete-continuous attribute list representation. Both Bacardit and Urbanowicz explored statistical and visualization strategies to interpret LCS rules and perform knowledge discovery for data mining. 
Browne and Iqbal explored the concept of reusing building blocks in the form of code fragments and were the first to solve the 135-bit multiplexer benchmark problem by first learning useful building blocks from simpler multiplexer problems.
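The multiplexer benchmark mentioned here is simple to state: for an n-bit input where n = k + 2^k, the first k bits form an address that selects one of the remaining 2^k register bits, and the selected bit is the correct class. A minimal sketch (function name is illustrative):

```python
def multiplexer(bits):
    """Evaluate the n-bit multiplexer on a binary string, n = k + 2**k
    (valid lengths include 6, 11, 20, ..., 135)."""
    k = 0
    while k + 2 ** k < len(bits):
        k += 1
    assert k + 2 ** k == len(bits), "length must equal k + 2**k"
    address = int(bits[:k], 2)          # first k bits select a register bit
    return int(bits[k + address])       # the selected register bit is the class

# 6-bit multiplexer: 2 address bits select one of 4 register bits.
assert multiplexer("010100") == 1   # address 01 selects register bit 1 ('1')
assert multiplexer("000000") == 0
```

The epistasis the text refers to is visible here: the meaning of each register bit depends entirely on the address bits, so no feature is predictive on its own.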
ExSTraCS 2.0 was later introduced to improve Michigan-style LCS scalability, successfully solving the 135-bit multiplexer benchmark problem directly for the first time. The n-bit multiplexer problem is highly epistatic and heterogeneous, making it a very challenging machine learning task. == Variants == === Michigan-Style Learning Classifier System === Michigan-style LCSs are characterized by a population of rules where the genetic algorithm operates at the level of individual rules and the solution is represented by the entire rule population. Michigan-style systems also learn incrementally, which allows them to perform both reinforcement learning and supervised learning, as well as both online and offline learning. Michigan-style systems have the advantage of being applicable to a greater number of problem domains, and the unique benefits of incremental learning. === Pittsburgh-Style Learning Classifier System === Pittsburgh-style LCSs are characterized by a population of variable-length rule sets where each rule set is a potential solution. The genetic algorithm typically operates at the level of an entire rule set. Pittsburgh-style systems can also uniquely evolve ordered rule lists, as well as employ a default rule. These systems have the natural advantage of identifying smaller rule sets, making them more interpretable with regard to manual rule inspection. === Hybrid systems === Systems that seek to combine key strengths of both approaches have also been proposed. == Advantages == Adaptive: They can acclimate to a changing environment in the case of online learning. Model-free: They make limited assumptions about the environment, or the patterns of association within the data. They can model complex, epistatic, heterogeneous, or distributed underlying patterns without relying on prior knowledge. They make no assumptions about the number of predictive vs. non-predictive features in the data. 
Ensemble Learner: No single, universal model provides a prediction for a given instance. Instead, a relevant
and often conflicting set of rules contribute a 'vote' which can be interpreted as a fuzzy prediction. Stochastic Learner: Non-deterministic learning is advantageous in large-scale or high-complexity problems where deterministic or exhaustive learning becomes intractable. Implicitly Multi-objective: Rules evolve towards accuracy with implicit and explicit pressures encouraging maximal generality/simplicity. This implicit generalization pressure is unique to LCS. Effectively, more general rules will appear more often in match sets. In turn, they have a more frequent opportunity to be selected as parents, and pass on their more general genomes to offspring rules. Interpretable: In the interest of data mining and knowledge discovery, individual LCS rules are logical and can be made to be human-interpretable IF:THEN statements. Effective strategies have also been introduced to allow for global knowledge discovery, identifying significant features and patterns of association from the rule population as a whole. Flexible application: single- or multi-step problems; supervised, reinforcement, or unsupervised learning; binary and multi-class classification; regression; discrete or continuous features (or some mix of both types); clean or noisy problem domains; balanced or imbalanced datasets; accommodates missing data (i.e. missing feature values in training instances). == Disadvantages == Limited Software Availability: There are a limited number of open source, accessible LCS implementations, and even fewer that are designed to be user friendly or accessible to machine learning practitioners. Interpretation: While LCS algorithms are certainly more interpretable than some advanced machine learners, users must interpret a set of rules (sometimes large sets of rules) to comprehend the LCS model. Methods for rule compaction, and interpretation strategies, remain an area of active research. 
Theory/Convergence Proofs: There is a relatively small body of theoretical work behind LCS algorithms. This is likely due to their relative algorithmic complexity (applying a number of interacting components) as well as their stochastic nature. Overfitting:
Like any machine learner, LCS can suffer from overfitting despite implicit and explicit generalization pressures. Run Parameters: LCSs often have many run parameters to consider/optimize. Typically, most parameters can be left to the community-determined defaults, with the exception of two critical parameters: maximum rule population size and the maximum number of learning iterations. Optimizing these parameters is likely to be very problem dependent. Notoriety: Despite their age, LCS algorithms are still not widely known even in machine learning communities. As a result, LCS algorithms are rarely considered in comparison to other established machine learning approaches. This is likely due to the following factors: (1) LCS is a relatively complicated algorithmic approach, (2) LCS rule-based modeling is a different paradigm of modeling than almost all other machine learning approaches, and (3) LCS software implementations are not as common. Computationally Expensive: While certainly more feasible than some exhaustive approaches, LCS algorithms can be computationally expensive. For simple, linear learning problems there is no need to apply an LCS. LCS algorithms are best suited to complex problem spaces, or problem spaces in which little prior knowledge exists. == Problem domains == Adaptive control, data mining, engineering design, feature selection, function approximation, game-play, image classification, knowledge handling, medical diagnosis, modeling, navigation, optimization, prediction, querying, robotics, routing, rule induction, scheduling, and strategy. == Terminology == The name "Learning Classifier System (LCS)" is a bit misleading, since there are many machine learning algorithms that 'learn to classify' (e.g. decision trees, artificial neural networks) but are not LCSs. 
The term 'rule-based machine learning (RBML)' is useful, as it more clearly captures the essential 'rule-based' component of these systems, but it also generalizes to methods that are not considered to be LCSs (e.g. association rule learning, or artificial immune systems). More general terms such as 'genetics-based machine learning', and even 'genetic
algorithm' have also been applied to refer to what would be more characteristically defined as a learning classifier system. Due to their similarity to genetic algorithms, Pittsburgh-style learning classifier systems are sometimes generically referred to as 'genetic algorithms'. Beyond this, some LCS algorithms, or closely related methods, have been referred to as 'cognitive systems', 'adaptive agents', 'production systems', or generically as a 'classifier system'. This variation in terminology contributes to some confusion in the field. Up until the 2000s nearly all learning classifier system methods were developed with reinforcement learning problems in mind. As a result, the term ‘learning classifier system’ was commonly defined as the combination of ‘trial-and-error’ reinforcement learning with the global search of a genetic algorithm. Interest in supervised learning applications, and even unsupervised learning have since broadened the use and definition of this term. == See also == Rule-based machine learning Production system Expert system Genetic algorithm Association rule learning Artificial immune system Population-based Incremental Learning Machine learning == References == == External links == === Video tutorial === Learning Classifier Systems in a Nutshell - (2016) Go inside a basic LCS algorithm to learn their components and how they work. === Webpages === LCS & GBML Central UWE Learning Classifier Research Group Prediction Dynamics
Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy is a monthly peer-reviewed scientific journal covering spectroscopy. According to the Journal Citation Reports, the journal has a 2011 impact factor of 2.098. Currently, the editors are Malgorzata Baranska, Joel Bowman, Sylvio Canuto, Christian W. Huck, Judy Kim, Huimin Ma, and Siva Umapathy. The journal was established in 1939 as Spectrochimica Acta. In 1967, Spectrochimica Acta was split into two journals, Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy and Spectrochimica Acta Part B: Atomic Spectroscopy. Part A obtained its current title in 1995. == References == == External links == Official website
{ "page_id": 38799805, "source": null, "title": "Spectrochimica Acta Part A" }
NaGISA (Natural Geography in Shore Areas or Natural Geography of In-Shore Areas) is an international collaborative effort aimed at inventorying, cataloguing, and monitoring biodiversity of the in-shore area. So named for the Japanese word "nagisa" ("where the land meets the sea"), it is an apronym. NaGISA is the first project of the larger CoML effort (Census of Marine Life) to have global participation in actual field work. The procedures of this project involve inexpensive collection equipment (for easy universal participation). This equipment is used to photograph sampling sites, to take samples from the sites, and to process these samples. At each site throughout the world, samples are taken from the intertidal zone out to a depth of 10 meters (and optionally out to 20 meters depth). These samples are then processed (the organisms are isolated) and then analyzed and catalogued. The information (regarding the kind and number of organisms analyzed) is sent to the global headquarters of NaGISA, the University of Kyoto in Japan. All of this information is then collated on the Ocean Biogeographic Information System (OBIS website). The end goal of the larger CoML effort is to find what was, what is, and what will be in the world's oceans. For NaGISA the goal is to find this in the world's in-shore areas. == See also == Ecological forecasting == References ==
{ "page_id": 17828289, "source": null, "title": "NaGISA" }
CHLC (or Cooperative Human Linkage Center) was a National Institutes of Health project to map a large number of human genome markers, prior to the completion of the Human Genome Project. The project was stopped in 1999.
{ "page_id": 8391108, "source": null, "title": "Cooperative Human Linkage Center" }
Sergey Gavrilets is a Russian-born American physicist turned theoretical biologist, and currently a Distinguished Professor at the University of Tennessee. He uses mathematical and computational models to study complex biological and social processes. He has made contributions to the study of speciation, sexual selection, fitness landscapes, sexual conflict, social complexity, evolutionary game theory, social norms, homosexuality, and cultural evolution. He is currently Associate Director for Scientific Activities at the National Institute for Mathematical and Biological Synthesis. In 2017, he was elected a Fellow of the American Academy of Arts and Sciences. Gavrilets has contributed to the book Evolution: The Extended Synthesis (edited by Massimo Pigliucci and Gerd B. Müller, 2010). == Publications == Books Gavrilets, S. (2004), Fitness Landscapes and the Origin of Species, Princeton University Press, ISBN 978-0691119830 Rice, W.R.; Gavrilets, S. (eds.) (2014), The Genetics and Biology of Sexual Conflict, Cold Spring Harbor, ISBN 978-1-621820-59-8 == References ==
{ "page_id": 52955595, "source": null, "title": "Sergey Gavrilets" }
In physics, the Bethe ansatz is an ansatz for finding the exact wavefunctions of certain quantum many-body models, most commonly for one-dimensional lattice models. It was first used by Hans Bethe in 1931 to find the exact eigenvalues and eigenvectors of the one-dimensional antiferromagnetic isotropic (XXX) Heisenberg model. Since then the method has been extended to other spin chains and statistical lattice models. "Bethe ansatz problems" were one of the topics featuring in the "To learn" section of Richard Feynman's blackboard at the time of his death. == Discussion == In the framework of many-body quantum mechanics, models solvable by the Bethe ansatz can be contrasted with free fermion models. One can say that the dynamics of a free model is one-body reducible: the many-body wave function for fermions (bosons) is the anti-symmetrized (symmetrized) product of one-body wave functions. Models solvable by the Bethe ansatz are not free: the two-body sector has a non-trivial scattering matrix, which in general depends on the momenta. On the other hand, the dynamics of the models solvable by the Bethe ansatz is two-body reducible: the many-body scattering matrix is a product of two-body scattering matrices. Many-body collisions happen as a sequence of two-body collisions and the many-body wave function can be represented in a form which contains only elements from two-body wave functions. The many-body scattering matrix is equal to the product of pairwise scattering matrices. The generic form of the (coordinate) Bethe ansatz for a many-body wavefunction is

\[ \Psi_{M}(j_{1},\dots,j_{M})=\prod_{M\geq a>b\geq 1}\operatorname{sgn}(j_{a}-j_{b})\sum_{P\in\mathfrak{S}_{M}}(-1)^{[P]}\exp\!\left(i\sum_{a=1}^{M}k_{P_{a}}j_{a}+\frac{i}{2}\sum_{M\geq a>b\geq 1}\operatorname{sgn}(j_{a}-j_{b})\,\phi(k_{P_{a}},k_{P_{b}})\right), \]

where $M$ is the number of particles, $j_{a}\ (a=1,\dots,M)$ are their positions, $\mathfrak{S}_{M}$ is the set of all permutations of the integers $1,\dots,M$, $(-1)^{[P]}=\pm 1$ is the parity of the permutation $P$, $k_{a}$ is the (quasi-)momentum of the $a$-th particle, $\phi$ is the scattering phase shift function and $\operatorname{sgn}$ is the sign function.
{ "page_id": 4327884, "source": null, "title": "Bethe ansatz" }
This form is universal (at least for non-nested systems), with the momentum and scattering functions being model-dependent. The Yang–Baxter equation guarantees consistency of the construction. The Pauli exclusion principle is valid for models solvable by the Bethe ansatz, even for models of interacting bosons. The ground state is a Fermi sphere. Periodic boundary conditions lead to the Bethe ansatz equations or simply Bethe equations. In logarithmic form the Bethe ansatz equations can be generated by the Yang action. The square of the norm of the Bethe wave function is equal to the determinant of the Hessian of the Yang action. A substantial generalization is the quantum inverse scattering method, or algebraic Bethe ansatz, which gives an ansatz for the underlying operator algebra that "has allowed a wide class of nonlinear evolution equations to be solved". The exact solutions of the so-called s-d
{ "page_id": 4327884, "source": null, "title": "Bethe ansatz" }
model (by P. B. Wiegmann in 1980 and independently by N. Andrei, also in 1980) and the Anderson model (by P. B. Wiegmann in 1981, and by N. Kawakami and A. Okiji in 1981) are also both based on the Bethe ansatz. There exist multi-channel generalizations of these two models also amenable to exact solutions (by N. Andrei and C. Destri and by C. J. Bolech and N. Andrei). Recently several models solvable by Bethe ansatz were realized experimentally in solid states and optical lattices. An important role in the theoretical description of these experiments was played by Jean-Sébastien Caux and Alexei Tsvelik. == Terminology == There are many similar methods which come under the name of Bethe ansatz: the algebraic Bethe ansatz (the quantum inverse scattering method is the method of solution by algebraic Bethe ansatz, and the two are practically synonymous); the analytic Bethe ansatz; the coordinate Bethe ansatz (Hans Bethe, 1931); the functional Bethe ansatz; the nested Bethe ansatz; and the thermodynamic Bethe ansatz (C. N. Yang & C. P. Yang, 1969). == Examples == === Heisenberg antiferromagnetic chain === The Heisenberg antiferromagnetic chain is defined by the Hamiltonian (assuming periodic boundary conditions)

\[ H=J\sum_{j=1}^{N}\mathbf{S}_{j}\cdot\mathbf{S}_{j+1},\qquad \mathbf{S}_{j+N}\equiv \mathbf{S}_{j}. \]

This model is solvable using the (coordinate) Bethe ansatz. The scattering phase shift function is

\[ \phi\bigl(k_{a}(\lambda_{a}),k_{b}(\lambda_{b})\bigr)=\theta_{2}(\lambda_{a}-\lambda_{b}),\qquad \theta_{n}(\lambda)\equiv 2\arctan(2\lambda/n), \]

in which the momentum has been conveniently reparametrized as $k(\lambda)=\pi-2\arctan 2\lambda$ in terms of the rapidity $\lambda$.
{ "page_id": 4327884, "source": null, "title": "Bethe ansatz" }
The boundary conditions (periodic here) impose the Bethe equations

\[ \left[\frac{\lambda_{a}+i/2}{\lambda_{a}-i/2}\right]^{N}=\prod_{b\neq a}^{M}\frac{\lambda_{a}-\lambda_{b}+i}{\lambda_{a}-\lambda_{b}-i},\qquad a=1,\dots,M, \]

or more conveniently in logarithmic form

\[ \theta_{1}(\lambda_{a})-\frac{1}{N}\sum_{b=1}^{M}\theta_{2}(\lambda_{a}-\lambda_{b})=2\pi\frac{I_{a}}{N}, \]

where the quantum numbers $I_{j}$ are distinct half-odd integers for $N-M$ even, integers for $N-M$ odd (with $I_{j}$ defined $\bmod N$). == Applicability == The following systems can be solved using the Bethe ansatz: the Anderson impurity model; the Gaudin model; the XXX and XXZ Heisenberg spin chains for arbitrary spin $s$; the Hubbard model; the Kondo model; the Lieb–Liniger model; and the six-vertex and eight-vertex models (through the Heisenberg spin chain). == Chronology == 1928: Werner Heisenberg publishes his model. 1930: Felix Bloch proposes an oversimplified ansatz which miscounts the number of solutions to the Schrödinger equation for the Heisenberg chain. 1931: Hans Bethe proposes the correct ansatz and carefully shows that it yields the correct number of eigenfunctions. 1938: Lamek Hulthén obtains the exact ground-state energy of the Heisenberg model. 1958: Raymond Lee Orbach
{ "page_id": 4327884, "source": null, "title": "Bethe ansatz" }
uses the Bethe ansatz to solve the Heisenberg model with anisotropic interactions. 1962: J. des Cloizeaux and J. J. Pearson obtain the correct spectrum of the Heisenberg antiferromagnet (spinon dispersion relation), showing that it differs from Anderson’s spin-wave theory predictions (the constant prefactor is different). 1963: Elliott H. Lieb and Werner Liniger provide the exact solution of the 1d δ-function interacting Bose gas (now known as the Lieb–Liniger model). Lieb studies the spectrum and defines two basic types of excitations. 1964: Robert B. Griffiths obtains the magnetization curve of the Heisenberg model at zero temperature. 1966: C. N. Yang and C. P. Yang rigorously prove that the ground state of the Heisenberg chain is given by the Bethe ansatz, and study its properties and applications. 1967: C. N. Yang generalizes Lieb and Liniger's solution of the δ-function interacting Bose gas to arbitrary permutation symmetry of the wavefunction, giving birth to the nested Bethe ansatz. 1968: Elliott H. Lieb and F. Y. Wu solve the 1d Hubbard model. 1969: C. N. Yang and C. P. Yang obtain the thermodynamics of the Lieb–Liniger model, providing the basis of the thermodynamic Bethe ansatz (TBA). == References == == External links == Introduction to the Bethe Ansatz
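In practice, the logarithmic Bethe equations for the Heisenberg chain are routinely solved numerically by fixed-point iteration. The following Python sketch is illustrative only (the function names and the damping scheme are my own, not from any source cited here): it iterates λ_a = ½ tan(π I_a/N + (1/2N) Σ_b θ₂(λ_a − λ_b)) for the ground-state quantum numbers (M = N/2 consecutive half-odd integers) and recovers Hulthén's ground-state energy per site, 1/4 − ln 2 ≈ −0.4431, for J = 1.

```python
import numpy as np

def theta(lam, n):
    # phase function theta_n(lambda) = 2 arctan(2 lambda / n)
    return 2.0 * np.arctan(2.0 * lam / n)

def xxx_ground_state_energy_per_site(N, iters=3000, mix=0.5):
    """Solve the logarithmic Bethe equations for the ground state of the
    spin-1/2 XXX antiferromagnet on N sites (M = N/2 down spins), then
    evaluate E/N = 1/4 - (1/N) * sum_a 2/(1 + 4*lam_a^2) with J = 1."""
    M = N // 2
    # ground-state quantum numbers: M consecutive half-odd integers (N - M even)
    I = np.arange(M) - (M - 1) / 2.0
    lam = np.zeros(M)
    for _ in range(iters):
        # sum_b theta_2(lam_a - lam_b); the b = a diagonal contributes 0
        s = theta(lam[:, None] - lam[None, :], 2).sum(axis=1)
        new = 0.5 * np.tan(np.pi * I / N + s / (2.0 * N))
        lam = (1.0 - mix) * lam + mix * new  # damped update for stability
    return 0.25 - np.sum(2.0 / (1.0 + 4.0 * lam**2)) / N

print(xxx_ground_state_energy_per_site(256))  # approaches 1/4 - ln 2 ≈ -0.4431
```

The energy formula used here follows from the one-magnon dispersion ε(k) = −J(1 − cos k) with k(λ) = π − 2 arctan 2λ, so each rapidity contributes −2J/(1 + 4λ²) relative to the ferromagnetic reference state of energy JN/4.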
{ "page_id": 4327884, "source": null, "title": "Bethe ansatz" }
Hypervelocity is very high velocity, greater than approximately 3,000 meters per second (11,000 km/h, 6,700 mph, 10,000 ft/s, or Mach 8.8). In particular, hypervelocity is velocity so high that the strength of materials upon impact is very small compared to inertial stresses. Thus, metals and fluids behave alike under hypervelocity impact. An impact under extreme hypervelocity results in vaporization of the impactor and target. For structural metals, hypervelocity is generally considered to be over 2,500 m/s (5,600 mph, 9,000 km/h, 8,200 ft/s, or Mach 7.3). Meteorite craters are also examples of hypervelocity impacts. == Overview == The term "hypervelocity" refers to velocities in the range from a few kilometers per second to some tens of kilometers per second. This is especially relevant in the field of space exploration and military use of space, where hypervelocity impacts (e.g. by space debris or an attacking projectile) can result in anything from minor component degradation to the complete destruction of a spacecraft or missile. The impactor, as well as the surface it hits, can undergo temporary liquefaction. The impact process can generate plasma discharges, which can interfere with spacecraft electronics. Hypervelocity usually occurs during meteor showers and deep space reentries, as carried out during the Zond, Apollo and Luna programs. Given the intrinsic unpredictability of the timing and trajectories of meteors, space capsules are prime data gathering opportunities for the study of thermal protection materials at hypervelocity (in this context, hypervelocity is defined as greater than escape velocity). Given the rarity of such observation opportunities since the 1970s, the Genesis and Stardust Sample Return Capsule (SRC) reentries as well as the recent Hayabusa SRC reentry have spawned observation campaigns, most notably at NASA's Ames Research Center.
Hypervelocity collisions can be studied by examining the results of naturally occurring collisions (between micrometeorites and spacecraft, or
{ "page_id": 1968588, "source": null, "title": "Hypervelocity" }
between meteorites and planetary bodies), or they may be performed in laboratories. Currently, the primary tool for laboratory experiments is a light-gas gun, but some experiments have used linear motors to accelerate projectiles to hypervelocity. The properties of metals under hypervelocity have been integrated with weapons, such as explosively formed penetrators. The vaporization upon impact and liquefaction of surfaces allow metal projectiles formed under hypervelocity forces to penetrate vehicle armor better than conventional bullets. NASA studies the effects of simulated orbital debris at the White Sands Test Facility Remote Hypervelocity Test Laboratory (RHTL). Objects smaller than a softball cannot be detected on radar. This has prompted spacecraft designers to develop shields to protect spacecraft from unavoidable collisions. At RHTL, micrometeoroid and orbital debris (MMOD) impacts are simulated on spacecraft components and shields, allowing designers to test threats posed by the growing orbital debris environment and evolve shield technology to stay one step ahead. At RHTL, four two-stage light-gas guns propel 0.05 to 22.2 mm (0.0020 to 0.8740 in) diameter projectiles to velocities as fast as 8.5 km/s (5.3 mi/s). == Hypervelocity reentry events == == Other definitions of hypervelocity == According to the United States Army, hypervelocity can also refer to the muzzle velocity of a weapon system, with the exact definition dependent upon the weapon in question. When discussing small arms a muzzle velocity of 5,000 ft/s (1524 m/s) or greater is considered hypervelocity, while for tank cannons the muzzle velocity must meet or exceed 3,350 ft/s (1021.08 m/s) to be considered hypervelocity, and the threshold for artillery cannons is 3,500 ft/s (1066.8 m/s).
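The claim that material strength becomes negligible compared with inertial stresses can be made concrete with a rough order-of-magnitude comparison of the characteristic impact (stagnation) stress ρv² against a typical metal yield strength. The Python sketch below uses illustrative handbook-style values; the density and yield figures are assumptions for the example, not taken from this article.

```python
def inertial_stress(density_kg_m3, velocity_m_s):
    """Characteristic inertial (stagnation) stress rho * v^2 of an impact, in Pa."""
    return density_kg_m3 * velocity_m_s ** 2

# assumed round numbers: aluminium projectile, aerospace-alloy yield strength
rho_al = 2700.0   # kg/m^3
yield_al = 3e8    # ~300 MPa

for v in (1000.0, 3000.0, 8000.0):  # m/s
    ratio = inertial_stress(rho_al, v) / yield_al
    print(f"{v:6.0f} m/s: rho*v^2 / yield strength ~ {ratio:.0f}x")
```

At 3 km/s the inertial stress already exceeds the assumed yield strength by roughly two orders of magnitude, which is why projectile and target flow like fluids in this regime.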
== See also == 2009 satellite collision Hypersonic aircraft Hypersonic flight Hypersonic Hypervelocity star Impact depth#Newton's approximation for the impact depth Kinetic energy penetrator Terminal velocity == References ==
{ "page_id": 1968588, "source": null, "title": "Hypervelocity" }
The Rockingham, or Waterloo, Kiln in Swinton, South Yorkshire, England, is a pottery kiln dating from 1815. It formed part of the production centre for the Rockingham Pottery which, in the early 19th century, produced highly decorative Rococo porcelain. The pottery failed in the mid-19th century, and the kiln is one of the few remaining elements of the Rockingham manufactory. It is a Grade II* listed building and forms part of the Rockingham Works Scheduled monument. The kiln is currently on the Historic England Heritage at Risk Register. == History == The original factory on the Swinton site produced simple earthenware pottery. The first recorded operator was a Joseph Flint, who in the 1740s was renting the site from the Marquess of Rockingham. A partnership with the Leeds Pottery failed and was dissolved by 1806. The subsequent owners, the Brameld family, built the Rockingham Kiln, and other structures on the site, in 1815. The date, the year of the Battle of Waterloo, led to the kiln's alternative name, the Waterloo Kiln. Despite the Bramelds' investigations into the production of high-quality porcelain, the venture continued to be unsuccessful and the firm was extricated from a further bankruptcy in 1826 only by the intervention of William Fitzwilliam, 4th Earl Fitzwilliam, who had inherited the Wentworth Woodhouse estate from his uncle, the second Marquess of Rockingham. The Earl's patronage, permitting the use of the Rockingham name and family crest, together with providing direct financial support, saw the Rockingham Pottery develop into a major producer of elaborate rococo-style porcelain, which enjoyed royal endorsement at home and considerable sales abroad. The factory produced major pieces including a full dessert service for William IV which took eight years to complete. Ruth Harman, in her 2017 revised volume, Yorkshire West Riding: Sheffield and the South, of the Pevsner
{ "page_id": 72223183, "source": null, "title": "Rockingham Kiln" }
Buildings of England series, notes that "perfection was their undoing" and by 1842 the Rockingham firm was again bankrupt and the site was closed. The Pottery Ponds site is administered by Rotherham Museums. As of November 2022, the kiln is on Historic England's Heritage at Risk Register. Recent interest in the Rockingham Works has seen the erection of a commemorative sculpture in Swinton in 2003, and a community heritage project at the site in 2021, directed by the artist Carlos Cortes. == Architecture and description == The Rockingham Kiln is believed to be the only surviving such pottery kiln in Yorkshire, and one of the few remaining in England. The 17 metres (56 ft) high kiln is bottle-shaped and is constructed in English Bond red brick. Harman records that the structure is more accurately described as a "bottle-shaped brick oven [containing] a kiln". The kiln is a Grade II* listed building and forms part of the Rockingham Works Scheduled monument. == Notes == == References == == Sources == Harman, Ruth; Pevsner, Nikolaus (2017). Yorkshire West Riding: Sheffield and the South. The Buildings of England. New Haven, US and London: Yale University Press. ISBN 978-0-300-22468-9.
{ "page_id": 72223183, "source": null, "title": "Rockingham Kiln" }
The plaque reduction neutralization test is used to quantify the titer of neutralizing antibody for a virus. The serum sample or solution of antibody to be tested is diluted and mixed with a viral suspension. The mixture is incubated to allow the antibody to react with the virus, and is then poured over a confluent monolayer of host cells. The surface of the cell layer is covered in a layer of agar or carboxymethyl cellulose to prevent the virus from spreading indiscriminately. The concentration of plaque forming units can be estimated by the number of plaques (regions of infected cells) formed after a few days. Depending on the virus, the plaque forming units are measured by microscopic observation, fluorescent antibodies or specific dyes that react with infected cells. The concentration of serum needed to reduce the number of plaques by 50% compared to the serum-free virus control gives the measure of how much antibody is present or how effective it is. This measurement is denoted as the PRNT50 value. Currently it is considered to be the "gold standard" for detecting and measuring antibodies that can neutralise the viruses that cause many diseases. It has a higher sensitivity than other tests like hemagglutination and many commercial enzyme immunoassays (EIA) without compromising specificity. Moreover, it is more specific than other serological methods for the diagnosis of some arboviruses. However, the test is relatively cumbersome and time intensive (taking a few days) relative to EIA kits that give quick results (usually several minutes to a few hours). An issue with this assay that has recently been identified is that the neutralization ability of the antibodies is dependent on the virion maturation state and the cell-type used in the assay. Therefore, if the wrong cell line is used for the assay it may seem that the antibodies have
{ "page_id": 32377299, "source": null, "title": "Plaque reduction neutralization test" }
neutralization ability when they actually do not, or vice versa they may seem ineffective when they actually possess neutralization ability. == See also == ELISA – Method to detect an antigen using an antibody and enzyme Immune complex – Molecule formed binding antigens to antibodies Viral quantification using the plaque assay == References ==
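The PRNT50 endpoint described above is commonly estimated by interpolating between the two dilutions that bracket 50% neutralization. The Python sketch below is a minimal illustration (the function name and the plate counts are hypothetical, invented for the example); it interpolates linearly on log10 of the reciprocal serum dilution.

```python
import numpy as np

def prnt50(dilutions, plaque_counts, virus_only_count):
    """Estimate the PRNT50 titer: the reciprocal serum dilution at which
    plaque counts fall to 50% of the serum-free control, by linear
    interpolation on log10(dilution). Dilutions are reciprocal (20 = 1:20),
    ordered from most to least concentrated serum."""
    frac = np.asarray(plaque_counts, float) / virus_only_count
    logd = np.log10(np.asarray(dilutions, float))
    # find the interval where the plaque fraction crosses 50% of control
    for i in range(len(frac) - 1):
        if frac[i] <= 0.5 <= frac[i + 1]:
            t = (0.5 - frac[i]) / (frac[i + 1] - frac[i])
            return 10 ** (logd[i] + t * (logd[i + 1] - logd[i]))
    return None  # 50% endpoint not bracketed by the dilution series

# hypothetical plate data: serum-free control shows 100 plaques
titer = prnt50([20, 40, 80, 160], [5, 20, 55, 90], 100)
```

In this invented example the 50% endpoint falls between the 1:40 and 1:80 dilutions, giving a titer of roughly 1:72.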
{ "page_id": 32377299, "source": null, "title": "Plaque reduction neutralization test" }
The molecular formula N4O (molar mass: 72.03 g/mol, exact mass: 72.0072 u) may refer to: Nitrosylazide Oxatetrazole == See also == Dinitrogen tetroxide
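The quoted molar mass follows directly from standard atomic weights; a one-line check (the atomic weight values below are the usual IUPAC figures, rounded to three decimals):

```python
# standard atomic weights, rounded (assumed IUPAC values)
w_N, w_O = 14.007, 15.999

molar_mass_N4O = 4 * w_N + w_O  # four nitrogen atoms, one oxygen atom
print(round(molar_mass_N4O, 2))  # → 72.03
```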
{ "page_id": 52562388, "source": null, "title": "N4O" }
OpenVX is an open, royalty-free standard for cross-platform acceleration of computer vision applications. It is designed by the Khronos Group to facilitate portable, optimized and power-efficient processing of methods for vision algorithms. It is aimed at embedded and real-time applications within computer vision and related scenarios. It uses a connected graph representation of operations. == Overview == OpenVX specifies a higher level of abstraction for programming computer vision use cases than compute frameworks such as OpenCL. The higher level makes programming easier, while the underlying execution can be made efficient on different computing architectures, all behind a consistent and portable vision acceleration API. OpenVX is based on a connected graph of vision nodes that can execute the preferred chain of operations. It uses an opaque memory model, allowing image data to be moved between host (CPU) memory and accelerator memory, such as GPU memory. As a result, the OpenVX implementation can optimize the execution through various techniques, such as acceleration on various processing units or dedicated hardware. This architecture facilitates applications programmed in OpenVX on different systems with different power and performance characteristics, including battery-sensitive, vision-enabled, wearable displays. OpenVX is complementary to the open source vision library OpenCV. OpenVX in some applications offers better optimized graph management than OpenCV. == History == OpenVX 1.0 specification was released in October 2014. OpenVX sample implementation was released in December 2014. OpenVX 1.1 specification was released on May 2, 2016. OpenVX 1.2 was released on May 1, 2017. An updated OpenVX adopters program and the OpenVX 1.2 conformance test suite were released on November 21, 2017. OpenVX 1.2.1 was released on November 27, 2018. OpenVX 1.3 was released on October 22, 2019. == Implementations, frameworks and libraries == AMD MIVisionX - for AMD's CPUs and GPUs.
Cadence - for Cadence Design Systems's Tensilica
{ "page_id": 45418965, "source": null, "title": "OpenVX" }
Vision DSPs. Imagination - for Imagination Technologies's PowerVR GPUs Synopsys - for Synopsys' DesignWare EV Vision Processors Texas Instruments’ OpenVX (TIOVX) - for Texas Instruments’ Jacinto™ ADAS SoCs. NVIDIA VisionWorks - for CUDA-capable Nvidia GPUs and SoCs. OpenVINO - for Intel's CPUs, GPUs, VPUs, and FPGAs. == References == == External links == Official website for OpenVX OpenVX Specification Registry OpenVX Sample Implementation OpenVX Sample Applications OpenVX Tutorial Material
{ "page_id": 45418965, "source": null, "title": "OpenVX" }
In mycology, the terms teleomorph, anamorph, and holomorph apply to portions of the life cycles of fungi in the phyla Ascomycota and Basidiomycota: Teleomorph: the sexual reproductive stage (morph), typically a fruiting body. Anamorph: an asexual reproductive stage (morph), often mold-like. When a single fungus produces multiple morphologically distinct anamorphs, these are called synanamorphs. Holomorph: the whole fungus, including anamorphs and teleomorph. The terms were introduced in 1981 to simplify the discussion of the procedures of the existing dual-naming system, which (1) permitted anamorphs to have their separate names but (2) treated teleomorphic names as having precedence for being used as the holomorphic name. The Melbourne Code removes the provisions and allows all names to compete on equal footing for priority as the correct name of a fungus, and hence does not use the term holomorph any more. == Dual naming of fungi == Fungi are classified primarily based on the structures associated with sexual reproduction, which tend to be evolutionarily conserved. However, many fungi reproduce only asexually, and cannot easily be classified based on sexual characteristics; some produce both asexual and sexual states. These species are often members of the Ascomycota, but a few of them belong to the Basidiomycota. Even among fungi that reproduce both sexually and asexually, often only one method of reproduction can be observed at a specific point in time or under specific conditions. Additionally, fungi typically grow in mixed colonies and sporulate amongst each other. These facts have made it very difficult to link the various states of the same fungus. Fungi that are not known to produce a teleomorph were historically placed into an artificial phylum, the "Deuteromycota," also known as "fungi imperfecti," simply for convenience. Some workers hold that this is an obsolete concept, and that molecular phylogeny allows accurate placement of
{ "page_id": 1247702, "source": null, "title": "Teleomorph, anamorph and holomorph" }
species which are known from only part of their life cycle. Others retain the term "deuteromycetes," but give it a lowercase "d" and no taxonomic rank. Historically, Article 59 of the International Code of Botanical Nomenclature permitted mycologists to give asexually reproducing fungi (anamorphs) separate names from their sexual states (teleomorphs). This practice was discontinued as of 1 January 2013. The dual naming system could be confusing, but it was long regarded as essential for workers in plant pathology, mold identification, medical mycology, and food microbiology, fields in which asexually reproducing fungi are commonly encountered. == From dual system to single nomenclature == The use of separate names for anamorphs of fungi with a pleomorphic life-cycle has been an issue of debate since the phenomenon was recognized in the mid-19th century. This was even before the first international rules for botanical nomenclature were issued in 1867. Special provisions are to be found in the earliest Codes, which were then modified several times, and often substantially. The rules have been updated regularly and become increasingly complex, and by the mid-1970s they were being interpreted in different ways by different mycologists – even ones working on the same genus. Following intensive discussions under the auspices of the International Mycological Association, drastic changes were made at the International Botanical Congress in 1981 to clarify and simplify the procedures – and the new terms anamorph, teleomorph, and holomorph entered general use. An unfortunate effect of the simplification was that many name changes had to be made, including for some well-known and economically important species; at that date, the conservation of species names was not allowed under the Code. Unforeseen in the 1970s, when the 1981 provisions were crafted, was the impact of molecular systematics. A decade later, it was starting to become obvious that fungi with no known sexual
{ "page_id": 1247702, "source": null, "title": "Teleomorph, anamorph and holomorph" }
stage could confidently be placed in genera which were typified by species in which the sexual stage was known. This possibility of abandoning the dual nomenclatural system was debated at subsequent International Mycological Congresses and on other occasions, and the need for change was increasingly recognized. At the International Botanical Congress in Vienna in 2005, some minor modifications were made which allowed anamorph-typified names to be epitypified by material showing the sexual stage when it was discovered, and for that anamorph name to continue to be used. The 1995 edition of the influential Ainsworth and Bisby’s Dictionary of the Fungi sought to replace the term anamorph with mitosporic fungus and teleomorph with meiosporic fungus, based on the idea that the fundamental distinction is whether mitosis or meiosis preceded sporulation. This is a controversial choice because it is not clear that the morphological differences which traditionally define anamorphs and teleomorphs line up completely with sexual practices, or whether those sexual practices are sufficiently well understood in some cases. The Vienna Congress (2005) established a Special Committee to investigate the issue further, but it was unable to reach a consensus. Matters were becoming increasingly desperate as mycologists using molecular phylogenetic approaches started to ignore the provisions, or interpret them in different ways. == One fungus, one name == The International Botanical Congress in Melbourne in July 2011 made a change in the International Code of Nomenclature for algae, fungi, and plants and adopted the principle "one fungus, one name". After 1 January 2013, one fungus can only have one name; the system of permitting separate names to be used for anamorphs then ended. This means that all legitimate names proposed for a species, regardless of what stage they are typified by, can serve as the correct name for that species. Since the
{ "page_id": 1247702, "source": null, "title": "Teleomorph, anamorph and holomorph" }
Brussels Congress in 1910, there has been provision for a separate name (or names) for the asexual (anamorph) state (or states) of fungi with a pleomorphic life cycle from that applicable to the sexual (teleomorph) state and to the whole fungus. The Brussels Rules (Briquet, Règles Int. Nomencl. Bot., ed. 2. 1912) specified that names given to states other than the sexual one (the “perfect state”) “have only a temporary value”, apparently anticipating a time when they would no longer be needed. At the Melbourne Congress, it was decided that this time had come – but not through disuse as may have been envisaged in Brussels. Throughout the various changes since 1912 to the rules on names of fungi with a pleomorphic life cycle, one element has remained constant: the correct name for the taxon in all its morphs (the holomorph) was the earliest applicable to the sexual state (the teleomorph). In Melbourne, this restriction was overturned and it was decided that all legitimate fungal names were to be treated equally for the purposes of establishing priority, regardless of the life history stage of the type. As a consequence the Melbourne Congress also approved additional special provisions for the conservation and rejection of fungal names to mitigate the nomenclatural disruption that would otherwise arise. All names now compete on an equal footing for priority. In order not to render illegitimate the names that had been introduced in the past for separate morphs, it was agreed that these should not be treated as superfluous alternative names in the sense of the Code. It was further decided that no anamorph-typified name should be taken up to displace a widely used teleomorph-typified name without the case's having been considered by the General Committee established by the Congress. Recognizing that there were cases in
{ "page_id": 1247702, "source": null, "title": "Teleomorph, anamorph and holomorph" }
some groups of fungi where there could be many names that might merit formal retention or rejection, a new provision was introduced: Lists of names can be submitted to the General Committee and, after due scrutiny, names accepted on those lists are to be treated as conserved over competing synonyms (and listed as Appendices to the Code). Lichen-forming fungi (but not lichenicolous fungi) had always been excluded from the provisions permitting dual nomenclature. The provisions are adopted in the Melbourne Code of 2012 as a modification to the existing Article 59. In the Shenzhen Code of 2018, a new chapter F "Names of organisms treated as fungi" was added, collecting all fungus-specific provisions including the original Article 59 into this chapter. As of April 2025, the latest revision of this part is the San Juan Chapter F of 2019, published as an addendum of the Shenzhen Code of 2018. The problem of choosing one name among many remains to be examined for many large, agriculturally or medically important genera like Aspergillus and Fusarium. Articles have been published on such specific genera to propose ways to define them under the newer rules. == See also == Fungi imperfecti List of mitosporic Ascomycota == References == This article incorporates CC-BY-3.0 text from the reference == External links == Anamorph-teleomorph database at the Centraalbureau voor Schimmelcultures.
DNA footprinting is a method of in vitro DNA analysis that assists researchers in identifying the DNA sequences bound by proteins such as transcription factors (TFs). This technique can be used to study protein-DNA interactions both outside and within cells. Transcription factors are regulatory proteins that assist with various levels of DNA regulation. These regulatory molecules and associated proteins bind promoters, enhancers, or silencers to drive or repress transcription and are fundamental to understanding the unique regulation of individual genes within the genome. The technique was first developed in 1978 by primary investigators David J. Galas, Ph.D. and Albert Schmitz, Ph.D., who modified the pre-existing Maxam-Gilbert chemical sequencing technique to study the specific binding of the lac repressor protein to DNA. Since the technique's discovery, scientific researchers have extended it to map chromatin and have greatly reduced the technical requirements for performing the footprinting method. The most common method of DNA footprinting is DNase-sequencing, which uses the DNase I endonuclease to cleave DNA for analysis. The process of DNA footprinting begins with the polymerase chain reaction (PCR) to increase the amount of DNA present, ensuring the sample contains a sufficient amount of DNA for analysis. Proteins of interest are then added and bind to the DNA at their respective binding sites. This is followed by cleavage with an enzyme such as DNase I, which cleaves unbound regions of DNA while leaving protein-bound DNA intact. The resulting DNA fragments are separated using polyacrylamide gel electrophoresis (PAGE), which allows researchers to determine the sizes of the cleaved fragments. Protein-bound regions appear as gaps on the gel, areas where there are no bands, representing specific DNA-protein interactions. == History == In January 1978, David J. Galas, Ph.D. and Albert Schmitz, Ph.D.
developed the DNA footprinting technique to study the binding specificity of the lac repressor protein. Galas, the
{ "page_id": 5638621, "source": null, "title": "DNA footprinting" }
primary investigator of the DNA footprinting project, earned his Ph.D. in physics from the University of California, Davis. He later led the Human Genome Project from 1990 to 1993 while holding a position as Director for Health and Environmental Research at the U.S. Department of Energy Office of Science. DNA footprinting was originally a modification of the Maxam-Gilbert chemical sequencing technique, adapted to detect the binding of the lac repressor protein. The method was submitted and published without revision in Nucleic Acids Research. After the submission of their work, Galas and Schmitz's method was cited in a 1980 article by David R. Engelke, Ph.D. and colleagues describing eukaryotic proteins and their binding sites. The DNA footprinting technique was further refined by Thomas D. Tullius, Ph.D. and colleagues in August 1986, who published a paper using more precise DNA cleavage chemistry to strengthen the rigour of their own research and of future research. In January 2008, Alan P. Boyle, Ph.D. and colleagues developed DNase-seq, a genome-wide DNA footprinting method in which nuclei are digested with DNase I, the same enzyme used by Galas and Schmitz, and the resulting fragments are deeply sequenced to map genomic open chromatin. In recent years, many laboratories and researchers have developed computational methods to statistically analyze deeply sequenced DNase-seq data, analyses that originally required an extensive background in bioinformatics. == Methods == The most common method of DNA footprinting is DNase I-sequencing. This technique uses the DNase I endonuclease enzyme to cleave the DNA and assess whether a specific protein binds to a target region within the DNA. DNase I preferentially cuts at accessible sites not bound by proteins. DNA footprinting systematically identifies transcription factor (TF) binding sites in DNA by analyzing the location of DNase cleavage sites. The DNase-seq method
of footprinting involves 4 steps: polymerase chain reaction (PCR) of the DNA, incubation of the DNA with a protein, DNA cleavage, and DNA analysis through polyacrylamide gel electrophoresis (PAGE). === Polymerase Chain Reaction (PCR) === Polymerase chain reaction (PCR) is the first step in DNase-seq DNA footprinting. The purpose of PCR is to amplify DNA fragments to ensure there is sufficient material for downstream analysis. The ideal amplification length is between 200 and 400 base pairs. The amplified template DNA is then divided into two samples. One sample is incubated with the protein of interest, and the other remains as a control, in which the DNA is incubated alone. Dividing the DNA into two separate conditions allows researchers to assess whether a DNA-protein interaction has occurred when the samples undergo polyacrylamide gel electrophoresis. If the protein binds to the DNA, the sample incubated with the protein of interest will display regions protected from DNase I cleavage due to protein binding. The control sample, which lacks protein binding, will undergo random cleavage that creates a distinct fragment pattern observed in PAGE. The PCR process consists of three steps: denaturation of the DNA, annealing of primers, and elongation of the new strands. The first stage requires high temperatures, between 94 and 98 °C, to separate double-stranded DNA into single strands. The DNA mixture is then cooled to roughly 45 °C to allow primers to bind to the two single strands. Finally, the DNA is left to elongate at 76 °C. DNA polymerase, an enzyme that builds a DNA strand complementary to the template strand, is largely responsible for elongation. To produce the maximum amount of DNA, amplification continues for 15-18 rounds, increasing the amount of DNA by approximately 10,000 times. Once the DNA is amplified, it can be labelled
with either a fluorescent tag or radioactive phosphorus. === Labelling === The DNA template is labelled at the 3' or 5' end, depending on the location of the binding site(s). Two labels can be used for footprinting: radioactivity and fluorescence. Radioactivity has traditionally been used to label DNA fragments for footprinting analysis. This process was originally developed in 1977 by Maxam and Gilbert when proposing their chemical sequencing technique. Radioactive labelling is very sensitive and is optimal for visualizing small amounts of DNA. During the radioactive labelling process, DNA is treated with a kinase enzyme that adds a radioactive phosphate group (³²P) to the 3' or 5' end of the DNA backbone. Radioactive labelling is specific, sensitive, and durable, allowing small DNA targets to be analyzed with high precision. Fluorescence is a widely used method of DNA labelling. This method is considered safer because no radiochemicals are involved. DNA fluorescent labelling is specific, versatile, and can be used to label live cells. There are two ways to fluorescently label DNA: chemical synthesis and enzymatic synthesis. In chemical synthesis, a fluorescent dye is attached to nucleotides, which are then added directly to the growing DNA strand. Enzymatic synthesis instead uses fluorescent nucleoside triphosphates in place of standard nucleotides. Both fluorescence and radioactivity are useful for labelling small or fragmented sections of DNA, allowing for more specific footprinting. === Cleavage agent === A variety of cleavage agents are used in genomic footprinting. A desirable cleavage agent is sequence-neutral, easy to use, and easy to control. No current cleavage agent meets all of these criteria, but many enzymatic and chemical agents have been used successfully. There are three main cleavage agents employed in DNA footprinting:
DNase I endonuclease, hydroxyl radicals, and ultraviolet irradiation. DNase I is a large enzyme that functions as a double-strand endonuclease. It binds to the minor groove of DNA and cleaves the phosphodiester backbone. DNase I is considered a good cleavage agent because its large size makes it likely to be blocked from cleaving the DNA at regions bound by a protein of interest. Where DNase I activity is blocked, a "footprint" appears: an area with little to no DNA cleavage due to the bound protein. DNase activity depends on two conditions: the affinity between the ligand and the protein, and the equilibrium between DNase and DNA. In addition, the DNase I reaction is easily stopped by adding ethylenediaminetetraacetic acid (EDTA). DNase I also has a number of limitations. The enzyme does not cut DNA randomly; its activity is affected by DNA structure and sequence, which results in an uneven ladder. This can limit the precision of predicting a protein's binding site on the DNA molecule. The use of hydroxyl radicals as a cleavage agent for DNA footprinting grew out of the Fenton reaction, in which Fe2+ reacts with hydrogen peroxide (H2O2) to generate free hydroxyl radicals. These radicals attack the DNA backbone, breaking the strand. Like DNase I, hydroxyl radicals can reveal protein-DNA interactions; cleavage is inhibited at the specific sites contacted by bound proteins. Because the radicals are so small, the resulting DNA footprint has high resolution. Unlike DNase I, hydroxyl radicals have no sequence preference and produce an evenly distributed ladder. However, hydroxyl radical footprinting is also time-consuming, owing to longer reaction and digestion times. Ultraviolet (UV) irradiation can induce photoreactions in nucleic acids, leading to DNA damage such
as single-strand breaks, crosslinks between DNA strands or with proteins, and interactions with solvents. UV light causes the formation of cyclobutane pyrimidine dimers (CPDs) and covalent links between bases. Protein binding alters the pattern of UV damage, and these alterations form the basis of the footprinting analysis. Once both the protected and unprotected DNA have been treated, a primer extension of the cleaved products is performed; the extension terminates upon reaching a damaged base. During analysis, the protected sample will show an additional band where the DNA was crosslinked with a bound protein. UV irradiation reacts quickly and can capture interactions that are only momentary. Additionally, UV light can penetrate live cell membranes, so the method can be applied to in vivo experiments. In this method, however, the bound protein does not protect the DNA; it alters the photoreactions in its vicinity, which can make interpretation difficult. === Polyacrylamide Gel Electrophoresis === Gel electrophoresis is a laboratory technique used to separate nucleic acids or proteins based on their size and charge. This method involves applying an electric field to a gel matrix, typically made of agarose or polyacrylamide, through which molecules migrate at varying speeds. Smaller molecules move faster through the gel, while larger molecules migrate more slowly. The resulting separation pattern can be visualized using staining methods or by detecting labelled molecules. Polyacrylamide gels are favoured for their high resolution in separating small DNA fragments, making them ideal for analyzing complex mixtures and studying DNA-protein interactions. In DNA footprinting, this method is used to identify DNA-protein binding sites by separating the DNA fragments that result from nuclease digestion or chemical cleavage.
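The logic of reading a footprint from a gel can be sketched in code. The following is a minimal, hypothetical illustration (all fragment sizes are invented; a real analysis would work from densitometry or sequencing read counts, not hand-listed band sizes): the footprint corresponds to the fragment lengths that appear in the control lane (naked DNA, cleaved everywhere) but are missing from the protein-bound lane.

```python
# Hypothetical sketch of footprint identification from gel band sizes.
# Fragment lengths (in base pairs) are invented for illustration.

def find_footprint(control_bands, protected_bands):
    """Return fragment sizes seen in the control lane but absent from
    the protein-bound lane: the candidate 'footprint' region."""
    return sorted(set(control_bands) - set(protected_bands))

# Control lane: naked DNA is cleaved at every accessible position,
# so bands appear at roughly every fragment length.
control = [50, 60, 70, 80, 90, 100, 110, 120]

# Protein-bound lane: a bound protein blocks cleavage over part of
# the molecule, so the corresponding bands (~70-100 bp) are absent.
protected = [50, 60, 110, 120]

footprint = find_footprint(control, protected)
print(footprint)  # -> [70, 80, 90, 100]
```

The set difference captures the essential comparison between the two lanes; in practice, band intensities are noisy, so real pipelines test for statistically significant depletion of cleavage rather than strict presence or absence.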
After DNA cleavage by a specific cleavage agent, the mixture of protected and cleaved fragments is then separated by polyacrylamide gel electrophoresis (PAGE). The separation