| text (string, 2–132k characters) | source (dict) |
|---|---|
composting process and will break down by 90% within six months. Biopolymers that do this can be marked with a 'compostable' symbol, under European Standard EN 13432 (2000). Packaging marked with this symbol can be put into industrial composting processes and will break down within six months or less. An example of a compostable polymer is PLA film under 20μm thick: films which are thicker than that do not qualify as compostable, even though they are "biodegradable". In Europe there is a home composting standard and associated logo that enables consumers to identify and dispose of packaging in their compost heap. == See also == Biomaterials Bioplastic Biopolymers & Cell (journal) Condensation polymers Condensed tannins DNA sequence Food microbiology § Microbial biopolymers Melanin Non food crops Phosphoramidite Polymer chemistry Sequence-controlled polymers Sequencing Small molecules Worm-like chain == References == == External links == NNFCC: The UK's National Centre for Biorenewable Energy, Fuels and Materials Bioplastics Magazine Biopolymer group What's Stopping Bioplastic?
|
{
"page_id": 3974,
"source": null,
"title": "Biopolymer"
}
|
The molecular formula C11H10N2O2 (molar mass: 202.21 g/mol, exact mass: 202.0742 u) may refer to: Tolimidone Vasicinone
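As a sanity check, the quoted molar and exact masses can be recomputed from the formula. This is a quick sketch: the atomic-mass values are standard tabulated figures, but the helper function and names are ours, not from any particular library.

```python
# Recompute the molar mass and monoisotopic (exact) mass of C11H10N2O2.
AVERAGE = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}       # standard atomic weights
EXACT = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}  # principal-isotope masses

def mass(formula_counts, table):
    """Sum the element masses weighted by their counts in the formula."""
    return sum(table[el] * n for el, n in formula_counts.items())

c11h10n2o2 = {"C": 11, "H": 10, "N": 2, "O": 2}
print(round(mass(c11h10n2o2, AVERAGE), 2))  # 202.21 (g/mol)
print(round(mass(c11h10n2o2, EXACT), 4))    # 202.0742 (u)
```

Both results match the figures quoted above, which is how disambiguation pages like this one are typically indexed.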
|
{
"page_id": 44830594,
"source": null,
"title": "C11H10N2O2"
}
|
Radiochemistry is the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (within radiochemistry, a substance whose isotopes are stable is often described as inactive). Much of radiochemistry deals with the use of radioactivity to study ordinary chemical reactions. This is very different from radiation chemistry, where the radiation levels are kept too low to influence the chemistry. Radiochemistry includes the study of both natural and man-made radioisotopes. == Main decay modes == All radioisotopes are unstable isotopes of elements that undergo nuclear decay and emit some form of radiation. The radiation emitted can be of several types, including alpha, beta, and gamma radiation, proton and neutron emission, and neutrino and antiparticle emission decay pathways. 1. α (alpha) radiation—the emission of an alpha particle (which contains 2 protons and 2 neutrons) from an atomic nucleus. When this occurs, the atom's atomic mass will decrease by 4 units and the atomic number will decrease by 2. 2. β (beta) radiation—the transmutation of a neutron into a proton and an electron, after which the electron is emitted from the nucleus into the electron cloud. 3. γ (gamma) radiation—the emission of electromagnetic energy (such as gamma rays) from the nucleus of an atom. This usually accompanies alpha or beta radioactive decay. These three types of radiation can be distinguished by their difference in penetrating power. Alpha particles, equivalent to helium nuclei, can be stopped quite easily by a few centimetres of air or a piece of paper. Beta particles, which are electrons, can be stopped by an aluminium sheet just a few millimetres thick. Gamma rays, massless and chargeless high-energy photons, are the most penetrating of the three.
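The bookkeeping in the three decay modes above can be sketched as simple transformations of the (mass number, atomic number) pair. This is an illustrative sketch, not a nuclear-data library; the function names are ours.

```python
def alpha_decay(A, Z):
    """Alpha emission: the nucleus loses 2 protons and 2 neutrons (a helium nucleus)."""
    return A - 4, Z - 2

def beta_decay(A, Z):
    """Beta-minus emission: a neutron becomes a proton; the mass number is unchanged."""
    return A, Z + 1

def gamma_decay(A, Z):
    """Gamma emission: only electromagnetic energy leaves; A and Z are unchanged."""
    return A, Z

# Example: 238U (A=238, Z=92) alpha-decays to 234Th, which then beta-decays to 234Pa.
print(alpha_decay(238, 92))  # (234, 90)
print(beta_decay(234, 90))   # (234, 91)
```

Chaining these transformations reproduces the familiar natural decay series step by step.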
|
{
"page_id": 2035588,
"source": null,
"title": "Radiochemistry"
}
|
Gamma radiation requires an appreciable amount of heavy metal radiation shielding (usually lead or barium-based) to reduce its intensity. == Activation analysis == By neutron irradiation of objects, it is possible to induce radioactivity; this activation of stable isotopes to create radioisotopes is the basis of neutron activation analysis. A most interesting object which has been studied in this way is the hair of Napoleon's head, which has been examined for its arsenic content. A series of different experimental methods exist; these have been designed to enable the measurement of a range of different elements in different matrices. To reduce the effect of the matrix it is common to use chemical extraction of the wanted element and/or to allow the radioactivity due to the matrix elements to decay before the measurement of the radioactivity. Since the matrix effect can be corrected for by observing the decay spectrum, little or no sample preparation is required for some samples, making neutron activation analysis less susceptible to contamination. The effects of a series of different cooling times can be seen if a hypothetical sample that contains sodium, uranium, and cobalt in a 100:10:1 ratio were subjected to a very short pulse of thermal neutrons. The initial radioactivity would be dominated by the 24Na activity (half-life 15 h), but with increasing time the 239Np activity (half-life 2.4 d, after formation from parent 239U with half-life 24 min) and finally the 60Co activity (half-life 5.3 yr) would predominate. == Biology applications == One biological application is the study of DNA using radioactive phosphorus-32. In these experiments, stable phosphorus is replaced by the chemically identical radioactive P-32, and the resulting radioactivity is used in the analysis of the molecules and their behaviour. Another example is the work that was done on the methylation of elements such as
|
{
"page_id": 2035588,
"source": null,
"title": "Radiochemistry"
}
|
sulfur, selenium, tellurium, and polonium by living organisms. It has been shown that bacteria can convert these elements into volatile compounds; it is thought that methylcobalamin (vitamin B12) alkylates these elements to create the dimethyls. It has been shown that a combination of cobaloxime and inorganic polonium in sterile water forms a volatile polonium compound, while a control experiment that did not contain the cobalt compound did not form the volatile polonium compound. For the sulfur work the isotope 35S was used, while for polonium 207Po was used. In related work, the addition of 57Co to the bacterial culture, followed by isolation of the cobalamin from the bacteria (and measurement of the radioactivity of the isolated cobalamin), showed that the bacteria convert available cobalt into methylcobalamin. In medicine, PET (positron emission tomography) scans are commonly used for diagnostic purposes. A radioactive tracer is injected intravenously into the patient, who is then taken to the PET machine. The radioactive tracer releases radiation outward from the patient, and the detectors in the machine register the radiation from the tracer. PET scan machines use solid-state scintillation detection because of its high detection efficiency: NaI(Tl) crystals absorb the tracer's radiation and produce photons that are converted into an electrical signal for the machine to analyze. == Environmental == Radiochemistry also includes the study of the behaviour of radioisotopes in the environment; for instance, a forest or grass fire can make radioisotopes mobile again. In these experiments, fires were started in the exclusion zone around Chernobyl and the radioactivity in the air downwind was measured. It is important to note that a vast number of processes can release radioactivity into the environment; for example, the action of cosmic rays on the air is responsible for the formation of
|
{
"page_id": 2035588,
"source": null,
"title": "Radiochemistry"
}
|
radioisotopes (such as 14C and 32P), and the decay of 226Ra forms 222Rn, a gas which can diffuse through rocks before entering buildings and dissolve in water, thus entering drinking water. In addition, human activities such as bomb tests, accidents, and normal releases from industry have resulted in the release of radioactivity. === Chemical form of the actinides === The environmental chemistry of some radioactive elements such as plutonium is complicated by the fact that solutions of this element can undergo disproportionation and, as a result, many different oxidation states can coexist at once. Some work has been done on the identification of the oxidation state and coordination number of plutonium and the other actinides under different conditions.[2] This includes work on both solutions of relatively simple complexes and work on colloids. Two of the key matrices are soil/rocks and concrete; in these systems the chemical properties of plutonium have been studied using methods such as EXAFS and XANES.[3][4] === Movement of colloids === While binding of a metal to the surfaces of soil particles can prevent its movement through a layer of soil, it is possible for the particles of soil that bear the radioactive metal to migrate as colloidal particles through the soil. This has been shown to occur using soil particles labeled with 134Cs, which are able to move through cracks in the soil. ==== Normal background ==== Radioactivity has been present everywhere on Earth since its formation. According to the International Atomic Energy Agency, one kilogram of soil typically contains the following amounts of natural radioisotopes: 370 Bq of 40K (typical range 100–700 Bq), 25 Bq of 226Ra (typical range 10–50 Bq), 25 Bq of 238U (typical range 10–50 Bq) and 25 Bq of 232Th (typical range 7–50 Bq). === Action of microorganisms === The
|
{
"page_id": 2035588,
"source": null,
"title": "Radiochemistry"
}
|
action of micro-organisms can fix uranium; Thermoanaerobacter can use chromium(VI), iron(III), cobalt(III), manganese(IV), and uranium(VI) as electron acceptors, while acetate, glucose, hydrogen, lactate, pyruvate, succinate, and xylose can act as electron donors for the metabolism of the bacteria. In this way, the metals can be reduced to form magnetite (Fe3O4), siderite (FeCO3), rhodochrosite (MnCO3), and uraninite (UO2). Other researchers have also worked on the fixing of uranium using bacteria.[5][6][7] Francis R. Livens et al. (working at Manchester) have suggested that the reason why Geobacter sulfurreducens can reduce UO2^2+ (uranyl) cations to uranium dioxide is that the bacteria reduce the uranyl cations to UO2^+, which then undergoes disproportionation to form UO2^2+ and UO2. This reasoning was based (at least in part) on the observation that NpO2^+ is not converted to an insoluble neptunium oxide by the bacteria. == Education == Despite the growing use of nuclear medicine, the potential expansion of nuclear power plants, and worries about protection against nuclear threats and the management of the nuclear waste generated in past decades, the number of students opting to specialize in nuclear and radiochemistry has decreased significantly over the past few decades. Now, with many experts in these fields approaching retirement age, action is needed to avoid a workforce gap in these critical fields, for example by building student interest in these careers, expanding the educational capacity of universities and colleges, and providing more specific on-the-job training. Nuclear and radiochemistry (NRC) is mostly taught at the university level, usually first at the Master's and PhD degree level. In Europe, substantial effort is being made to harmonize and prepare NRC education for the industry's and society's future needs.
This effort is being coordinated in projects funded by the Coordinated Action supported by the European Atomic Energy Community's 7th Framework Program: The CINCH-II
|
{
"page_id": 2035588,
"source": null,
"title": "Radiochemistry"
}
|
project - Cooperation in education and training In Nuclear Chemistry. == References == == External links == ACS radioelectrochemistry
|
{
"page_id": 2035588,
"source": null,
"title": "Radiochemistry"
}
|
Almond paste is made from ground almonds or almond meal and sugar in equal quantities, with small amounts of cooking oil, eggs, heavy cream or corn syrup added as a binder. It is similar to marzipan, but with a coarser texture. Almond paste is used as a filling in pastries, but it can also be found in chocolates. In a type of commercially manufactured almond paste called persipan, ground apricot or peach kernels are used to save money. == Uses == Almond paste is used as a filling in pastries of many different cultures. It is a chief ingredient of the American bear claw pastry. In the Nordic countries almond paste is used extensively, in various pastries and cookies. In Sweden (where it is known as mandelmassa) it is used in biscuits, muffins and buns, as a filling in the traditional Shrove Tuesday pastry semla, and in Easter and Christmas sweets. In Denmark (where it is known as marcipan or mandelmasse), almond paste is used in several pastries, for example as a filling in the traditional Danish pastry kringle. In Finland almond paste is called mantelimassa. In the Netherlands, almond paste is called amandelspijs. In Germany, almond paste is also used in pastries and sweets; in German, it is known as Marzipanrohmasse. In Italy it is known as "pasta di mandorle"; pastry chefs mold the soft paste into creative shapes, which can be used as cake decorations or to make frutta martorana. Almond paste is the main ingredient of the traditional French calisson candy of Aix-en-Provence. In Turkey, almond paste is traditionally made in Edirne. == See also == Rainbow cookie Marzipan == References == Parsi Almond Fish
|
{
"page_id": 3215241,
"source": null,
"title": "Almond paste"
}
|
The Angeli–Rimini reaction is an organic reaction between an aldehyde and N-hydroxybenzenesulfonamide in the presence of base, forming a hydroxamic acid. The other reaction product is a sulfinic acid. The reaction was discovered by the two Italian chemists Angelo Angeli and Enrico Rimini (1874–1917), and was published in 1896. == Chemical test == The reaction is used in a chemical test for the detection of aldehydes in combination with ferric chloride. In this test a few drops of the aldehyde-containing specimen are dissolved in ethanol, the sulfonamide is added together with some sodium hydroxide solution, and then the solution is acidified to Congo red. An added drop of ferric chloride will turn the solution an intense red when aldehyde is present. The sulfonamide can be prepared by reaction of hydroxylamine and benzenesulfonyl chloride in ethanol with potassium metal. == Reaction mechanism == The reaction mechanism is not clear and several potential pathways exist. The N-hydroxybenzenesulfonamide 1, or its deprotonated form 2, acts as a nucleophile in reaction with the aldehyde 3 to give intermediate 4. After intramolecular proton exchange to 5, a sulfinic acid anion is split off and hydroxamic acid 8 results through nitroso compound 6 and intermediate 7. Alternatively, aziridine intermediate 9 directly forms the end product. The formation of the nitrene intermediate 10 is ruled out, given the lack of reactivity of the chemical mixture towards simple alkenes. == Scope == The Angeli–Rimini reaction has recently been applied in solid-phase synthesis with the sulfonamide covalently linked to a polystyrene solid support. == References ==
|
{
"page_id": 6950790,
"source": null,
"title": "Angeli–Rimini reaction"
}
|
Batteries provided the main source of electricity before the development of electric generators and electrical grids around the end of the 19th century. Successive improvements in battery technology facilitated major electrical advances, from early scientific studies to the rise of telegraphs and telephones, eventually leading to portable computers, mobile phones, electric cars, and many other electrical devices. Scientists and engineers developed several commercially important types of battery. "Wet cells" were open containers that held liquid electrolyte and metallic electrodes. When the electrodes were completely consumed, the wet cell was renewed by replacing the electrodes and electrolyte. Open containers are unsuitable for mobile or portable use. Wet cells were used commercially in the telegraph and telephone systems. Early electric cars used semi-sealed wet cells. One important classification for batteries is by their life cycle. "Primary" batteries can produce current as soon as they are assembled, but once the active elements are consumed, they cannot be electrically recharged. The development of the lead-acid battery and subsequent "secondary" or "rechargeable" types allowed energy to be restored to the cell, extending the life of permanently assembled cells. The introduction of nickel- and lithium-based batteries in the latter half of the 20th century made the development of innumerable portable electronic devices feasible, from powerful flashlights to mobile phones. Very large stationary batteries find some applications in grid energy storage, helping to stabilize electric power distribution networks. == Invention == From the mid-18th century on, before there were batteries, experimenters used Leyden jars to store electrical charge. As an early form of capacitor, Leyden jars, unlike electrochemical cells, stored their charge physically and would release it all at once.
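The contrast drawn above, a jar that dumps its charge in one burst versus a cell that sustains a current, can be made concrete with rough figures. The component values below are illustrative assumptions, not historical measurements.

```python
# A Leyden jar behaves as a capacitor: modest stored charge, released in a very short burst.
C = 1e-9    # farads; a Leyden jar stores on the order of a nanofarad (assumed value)
V = 30e3    # volts; electrostatic machines could charge jars to tens of kV (assumed value)
R = 100.0   # ohms; a low-resistance spark/wire discharge path (assumed value)

Q = C * V            # stored charge in coulombs
E = 0.5 * C * V**2   # stored energy in joules
tau = R * C          # RC time constant: the discharge is essentially over in under a microsecond

print(f"charge {Q:.0e} C, energy {E:.2f} J, time constant {tau:.0e} s")
```

With these numbers the jar holds well under a joule and empties in about a tenth of a microsecond, whereas an electrochemical cell delivers current continuously for hours, which is exactly the property the voltaic pile introduced.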
Many experimenters took to hooking several Leyden jars together to create a stronger charge and one of them, the colonial American inventor Benjamin Franklin, may have been
|
{
"page_id": 8720264,
"source": null,
"title": "History of the battery"
}
|
the first to call his grouping an "electrical battery", a play on the military term for weapons functioning together. Based on some findings by Luigi Galvani, Alessandro Volta, a friend and fellow scientist, believed the observed electrical phenomena were caused by two different metals joined by a moist intermediary. He verified this hypothesis through experiments and published the results in 1791. In 1800, Volta invented the first true battery, storing and releasing charge through a chemical reaction instead of physically, which came to be known as the voltaic pile. The voltaic pile consisted of pairs of copper and zinc discs piled on top of each other, separated by a layer of cloth or cardboard soaked in brine (i.e., the electrolyte). Unlike the Leyden jar, the voltaic pile produced a continuous and stable current, and lost little charge over time when not in use, though his early models could not produce a voltage strong enough to generate sparks. He experimented with various metals and found that zinc and silver gave the best results. Volta believed the current was the result of two different materials simply touching each other – an obsolete scientific theory known as contact tension – and not the result of chemical reactions. As a consequence, he regarded the corrosion of the zinc plates as an unrelated flaw that could perhaps be fixed by changing the materials somehow. However, no scientist ever succeeded in preventing this corrosion. In fact, it was observed that the corrosion was faster when a higher current was drawn. This suggested that the corrosion was actually integral to the battery's ability to produce a current. This, in part, led to the rejection of Volta's contact tension theory in favor of the electrochemical theory. Volta's illustrations of his Crown of Cups and voltaic pile have extra
|
{
"page_id": 8720264,
"source": null,
"title": "History of the battery"
}
|
metal disks, now known to be unnecessary, on both the top and bottom. The figure associated with this section, of the zinc-copper voltaic pile, has the modern design, an indication that "contact tension" is not the source of electromotive force for the voltaic pile. Volta's original pile models had some technical flaws, one of them involving the electrolyte leaking and causing short-circuits due to the weight of the discs compressing the brine-soaked cloth. A Scotsman named William Cruickshank solved this problem by laying the elements horizontally in a box instead of piling them in a stack. This was known as the trough battery. Volta himself invented a variant that consisted of a chain of cups filled with a salt solution, linked together by metallic arcs dipped into the liquid. This was known as the Crown of Cups. These arcs were made of two different metals (e.g., zinc and copper) soldered together. This model also proved to be more efficient than his original piles, though it did not prove as popular. Another problem with Volta's batteries was short battery life (an hour's worth at best), which was caused by two phenomena. The first was that the current produced electrolyzed the electrolyte solution, resulting in a film of hydrogen bubbles forming on the copper, which steadily increased the internal resistance of the battery (this effect, called polarization, is counteracted in modern cells by additional measures). The other was a phenomenon called local action, wherein minute short-circuits would form around impurities in the zinc, causing the zinc to degrade. The latter problem was solved in 1835 by the English inventor William Sturgeon, who found that amalgamated zinc, whose surface had been treated with some mercury, did not suffer from local action. Despite its flaws, Volta's batteries provided a steadier current than Leyden jars,
|
{
"page_id": 8720264,
"source": null,
"title": "History of the battery"
}
|
and made possible many new experiments and discoveries, such as the first electrolysis of water by the English surgeon Anthony Carlisle and the English chemist William Nicholson. == First practical batteries == === Daniell cell === An English professor of chemistry named John Frederic Daniell found a way to solve the hydrogen bubble problem in the voltaic pile by using a second electrolyte to consume the hydrogen produced by the first. In 1836, he invented the Daniell cell, which consists of a copper pot filled with a copper sulfate solution, in which is immersed an unglazed earthenware container filled with sulfuric acid and a zinc electrode. The earthenware barrier is porous, which allows ions to pass through but keeps the solutions from mixing. The Daniell cell was a great improvement over the existing technology used in the early days of battery development and was the first practical source of electricity. It provides a longer and more reliable current than the voltaic cell. It is also safer and less corrosive. It has an operating voltage of roughly 1.1 volts. It soon became the industry standard for use, especially with the new telegraph networks. The Daniell cell was also used as the first working standard for the definition of the volt, the unit of electromotive force. === Bird's cell === A version of the Daniell cell was invented in 1837 by the Guy's Hospital physician Golding Bird, who used a plaster of Paris barrier to keep the solutions separate. Bird's experiments with this cell were of some importance to the new discipline of electrometallurgy. === Porous pot cell === The porous pot version of the Daniell cell was invented by John Dancer, a Liverpool instrument maker, in 1838. It consists of a central zinc anode dipped into a porous earthenware pot
|
{
"page_id": 8720264,
"source": null,
"title": "History of the battery"
}
|
containing a zinc sulfate solution. The porous pot is, in turn, immersed in a solution of copper sulfate contained in a copper can, which acts as the cell's cathode. The use of a porous barrier allows ions to pass through but keeps the solutions from mixing. === Gravity cell === In the 1860s, a Frenchman named Callaud invented a variant of the Daniell cell called the gravity cell. This simpler version dispensed with the porous barrier. This reduces the internal resistance of the system and, thus, the battery yields a stronger current. It quickly became the battery of choice for the American and British telegraph networks, and was widely used until the 1950s. The gravity cell consists of a glass jar, in which a copper cathode sits on the bottom and a zinc anode is suspended beneath the rim. Copper sulfate crystals are scattered around the cathode and then the jar is filled with distilled water. As the current is drawn, a layer of zinc sulfate solution forms at the top around the anode. This top layer is kept separate from the bottom copper sulfate layer by its lower density and by the polarity of the cell. The zinc sulfate layer is clear in contrast to the deep blue copper sulfate layer, which allows a technician to measure the battery life with a glance. On the other hand, this setup means the battery can be used only in a stationary appliance, or else the solutions mix or spill. Another disadvantage is that a current has to be continually drawn to keep the two solutions from mixing by diffusion, so it is unsuitable for intermittent use. === Poggendorff cell === The German scientist Johann Christian Poggendorff overcame the problems with separating the electrolyte and the depolariser using a porous earthenware pot
|
{
"page_id": 8720264,
"source": null,
"title": "History of the battery"
}
|
in 1842. In the Poggendorff cell, sometimes called the Grenet cell due to the work of Eugene Grenet around 1859, the electrolyte is dilute sulphuric acid and the depolariser is chromic acid. The two acids are physically mixed together, eliminating the porous pot. The positive electrode (cathode) consists of two carbon plates, with a zinc plate (negative electrode, or anode) positioned between them. Because of the tendency of the acid mixture to react with the zinc, a mechanism is provided to raise the zinc electrode clear of the acids. The cell provides 1.9 volts. It was popular with experimenters for many years due to its relatively high voltage, greater ability to produce a consistent current, and lack of any fumes, but the relative fragility of its thin glass enclosure and the necessity of having to raise the zinc plate when the cell is not in use eventually saw it fall out of favour. The cell was also known as the 'chromic acid cell', but principally as the 'bichromate cell'. This latter name came from the practice of producing the chromic acid by adding sulphuric acid to potassium dichromate, even though the cell itself contains no dichromate. The Fuller cell was developed from the Poggendorff cell. Although the chemistry is principally the same, the two acids are once again separated by a porous container and the zinc is treated with mercury to form an amalgam. === Grove cell === The Welshman William Robert Grove invented the Grove cell in 1839. It consists of a zinc anode dipped in sulfuric acid and a platinum cathode dipped in nitric acid, separated by porous earthenware. The Grove cell provides a high current and nearly twice the voltage of the Daniell cell, which made it the favoured cell of the American telegraph networks for a time. However, it
|
{
"page_id": 8720264,
"source": null,
"title": "History of the battery"
}
|
gives off poisonous nitric oxide fumes when operated. The voltage also drops sharply as the charge diminishes, which became a liability as telegraph networks grew more complex. Platinum was and still is very expensive. === Dun cell === Alfred Dun 1885, nitro-muriatic acid (aqua regis) – iron and carbon: In the new element there can be used advantageously as exciting-liquid in the first case such solutions as have in a concentrated condition great depolarizing-power, which effect the whole depolarization chemically without necessitating the mechanical expedient of increased carbon surface. It is preferred to use iron as the positive electrode, and as exciting-liquid nitro muriatic acid (aqua regis), the mixture consisting of muriatic and nitric acids. The nitro-muriatic acid, as explained above, serves for filling both cells. For the carbon-cells it is used strong or very slightly diluted, but for the other cells very diluted, (about one-twentieth, or at the most one-tenth). The element containing in one cell carbon and concentrated nitro-muriatic acid and in the other cell iron and dilute nitro-muriatic acid remains constant for at least twenty hours when employed for electric incandescent lighting. == Rechargeable batteries and dry cells == === Lead-acid === Up to this point, all existing batteries would be permanently drained when all their chemical reactants were spent. In 1859, Gaston Planté invented the lead–acid battery, the first-ever battery that could be recharged by passing a reverse current through it. A lead-acid cell consists of a lead anode and a lead dioxide cathode immersed in sulfuric acid. Both electrodes react with the acid to produce lead sulfate, but the reaction at the lead anode releases electrons whilst the reaction at the lead dioxide consumes them, thus producing a current. These chemical reactions can be reversed by passing a reverse current through the battery, thereby recharging
|
{
"page_id": 8720264,
"source": null,
"title": "History of the battery"
}
|
it. Planté's first model consisted of two lead sheets separated by rubber strips and rolled into a spiral. His batteries were first used to power the lights in train carriages while stopped at a station. In 1881, Camille Alphonse Faure invented an improved version that consists of a lead grid lattice into which is pressed a lead oxide paste, forming a plate. Multiple plates can be stacked for greater performance. This design is easier to mass-produce. Compared to other batteries, Planté's is rather heavy and bulky for the amount of energy it can hold. However, it can produce remarkably large currents in surges, because it has very low internal resistance, meaning that a single battery can be used to power multiple circuits. The lead-acid battery is still used today in automobiles and other applications where weight is not a big factor. The basic principle has not changed since 1859. In the early 1930s, a gel electrolyte (instead of a liquid) produced by adding silica to a charged cell was used in the LT battery of portable vacuum-tube radios. In the 1970s, "sealed" versions became common (commonly known as a "gel cell" or "SLA"), allowing the battery to be used in different positions without failure or leakage. Today cells are classified as "primary" if they produce a current only until their chemical reactants are exhausted, and "secondary" if the chemical reactions can be reversed by recharging the cell. The lead-acid cell was the first "secondary" cell. === Leclanché cell === In 1866, Georges Leclanché invented a battery that consists of a zinc anode and a manganese dioxide cathode wrapped in a porous material, dipped in a jar of ammonium chloride solution. The manganese dioxide cathode has a little carbon mixed into it as well, which improves conductivity and absorption. It provided
|
{
"page_id": 8720264,
"source": null,
"title": "History of the battery"
}
|
a voltage of 1.4 volts. This cell achieved very quick success in telegraphy, signaling, and electric bell work. The dry cell form was used to power early telephones—usually from an adjacent wooden box affixed to hold the batteries—before telephones could draw power from the telephone line itself. The Leclanché cell cannot provide a sustained current for very long. In lengthy conversations, the battery would run down, rendering the conversation inaudible. This is because certain chemical reactions in the cell increase the internal resistance and, thus, lower the voltage. === Zinc-carbon cell, the first dry cell === Many experimenters tried to immobilize the electrolyte of an electrochemical cell to make it more convenient to use. The Zamboni pile of 1812 is a high-voltage dry battery but capable of delivering only minute currents. Various experiments were made with cellulose, sawdust, spun glass, asbestos fibers, and gelatine. In 1886, Carl Gassner obtained a German patent on a variant of the Leclanché cell, which came to be known as the dry cell because it does not have a free liquid electrolyte. Instead, the ammonium chloride is mixed with plaster of Paris to create a paste, with a small amount of zinc chloride added in to extend the shelf life. The manganese dioxide cathode is dipped in this paste, and both are sealed in a zinc shell, which also acts as the anode. In November 1887, he obtained U.S. patent 373,064 for the same device. Unlike previous wet cells, Gassner's dry cell is more solid, does not require maintenance, does not spill, and can be used in any orientation. It provides a potential of 1.5 volts. The first mass-produced model was the Columbia dry cell, first marketed by the National Carbon Company in 1896. The NCC improved Gassner's model by replacing the plaster of
Paris with coiled cardboard, an innovation that left more space for the cathode and made the battery easier to assemble. It was the first convenient battery for the masses and made portable electrical devices practical, and led directly to the invention of the flashlight. The zinc–carbon battery (as it came to be known) is still manufactured today. In parallel, in 1887 Wilhelm Hellesen developed his own dry cell design. It has been claimed that Hellesen's design preceded that of Gassner. In 1887, a dry-battery was developed by Sakizō Yai (屋井 先蔵) of Japan, then patented in 1892. In 1893, Sakizō Yai's dry-battery was exhibited in World's Columbian Exposition and commanded considerable international attention. === NiCd, the first alkaline battery === In 1899, a Swedish scientist named Waldemar Jungner invented the nickel–cadmium battery, a rechargeable battery that has nickel and cadmium electrodes in a potassium hydroxide solution; the first battery to use an alkaline electrolyte. It was commercialized in Sweden in 1910 and reached the United States in 1946. The first models were robust and had significantly better energy density than lead-acid batteries, but were much more expensive. == 20th century: new technologies and ubiquity == === Nickel-iron === Waldemar Jungner patented a nickel–iron battery in 1899, the same year as his Ni-Cad battery patent, but found it to be inferior to its cadmium counterpart and, as a consequence, never bothered developing it. It produced a lot more hydrogen gas when being charged, meaning it could not be sealed, and the charging process was less efficient (it was, however, cheaper). Seeing a way to make a profit in the already competitive lead-acid battery market, Thomas Edison worked in the 1890s on developing an alkaline based battery that he could get a patent on. Edison thought that if he produced a lightweight
and durable battery electric cars would become the standard, with his firm as its main battery vendor. After many experiments, and probably borrowing from Jungner's design, he patented an alkaline based nickel–iron battery in 1901. However, customers found his first model of the alkaline nickel–iron battery to be prone to leakage leading to short battery life, and it did not outperform the lead-acid cell by much either. Although Edison was able to produce a more reliable and powerful model seven years later, by this time the inexpensive and reliable Model T Ford had made gasoline engine cars the standard. Nevertheless, Edison's battery achieved great success in other applications such as electric and diesel-electric rail vehicles, providing backup power for railroad crossing signals, or to provide power for the lamps used in mines. === Common alkaline batteries === Until the late 1950s, the zinc–carbon battery continued to be a popular primary cell battery, but its relatively low battery life hampered sales. The Canadian engineer Lewis Urry, working for the Union Carbide, first at the National Carbon Co. in Ontario and, by 1955, at the National Carbon Company Parma Research Laboratory in Cleveland, Ohio, was tasked with finding a way to extend the life of zinc-carbon batteries. Building on earlier work by Edison, Urry decided instead that alkaline batteries held more promise. Until then, longer-lasting alkaline batteries were unfeasibly expensive. Urry's battery consists of a manganese dioxide cathode and a powdered zinc anode with an alkaline electrolyte. Using powdered zinc gives the anode a greater surface area. These batteries were put on the market in 1959. === Nickel–hydrogen and nickel–metal hydride === The nickel–hydrogen battery entered the market as an energy-storage subsystem for commercial communication satellites. The first consumer grade nickel–metal hydride batteries (NiMH) for smaller applications appeared on the market in
1989 as a variation of the 1970s nickel–hydrogen battery. NiMH batteries tend to have longer lifespans than NiCd batteries (and their lifespans continue to increase as manufacturers experiment with new alloys) and, since cadmium is toxic, NiMH batteries are less damaging to the environment. === Alkali metal-ion batteries === Lithium is the alkali metal with lowest density and with the greatest electrochemical potential and energy-to-weight ratio. The low atomic weight and small size of its ions also speeds its diffusion, likely making it an ideal battery material. Experimentation with lithium batteries began in 1912 under American physical chemist Gilbert N. Lewis, but commercial lithium batteries did not come to market until the 1970s in the form of the lithium-ion battery. Three volt lithium primary cells such as the CR123A type and three volt button cells are still widely used, especially in cameras and very small devices. Three important developments regarding lithium batteries occurred in the 1980s. In 1980, an American chemist, John B. Goodenough, discovered the LiCoO2 (Lithium cobalt oxide) cathode (positive lead) and a Moroccan research scientist, Rachid Yazami, discovered the graphite anode (negative lead) with the solid electrolyte. In 1981, Japanese chemists Tokio Yamabe and Shizukuni Yata discovered a novel nano-carbonacious-PAS (polyacene) and found that it was very effective for the anode in the conventional liquid electrolyte. This led a research team managed by Akira Yoshino of Asahi Chemical, Japan, to build the first lithium-ion battery prototype in 1985, a rechargeable and more stable version of the lithium battery; Sony commercialized the lithium-ion battery in 1991. In 2019, John Goodenough, Stanley Whittingham, and Akira Yoshino, were awarded the Nobel Prize in Chemistry, for their development of lithium-ion batteries. In 1997, the lithium polymer battery was released by Sony and Asahi Kasei. These batteries hold their electrolyte in a
solid polymer composite instead of in a liquid solvent, and the electrodes and separators are laminated to each other. The latter difference allows the battery to be encased in a flexible wrapping instead of in a rigid metal casing, which means such batteries can be specifically shaped to fit a particular device. This advantage has favored lithium polymer batteries in the design of portable electronic devices such as mobile phones and personal digital assistants, and of radio-controlled aircraft, as such batteries allow for a more flexible and compact design. They generally have a lower energy density than normal lithium-ion batteries. High costs and concerns about mineral extraction associated with lithium chemistry have renewed interest in sodium-ion battery development, with early electric vehicle product launches in 2023. === Solid-state batteries === As of 2024, solid-state batteries represent a significant technological leap forward, offering numerous advantages over traditional lithium-ion batteries. Unlike lithium-ion batteries, which use liquid or gel electrolytes, solid-state batteries utilize solid electrolytes. This key difference enhances safety, as solid electrolytes are less likely to catch fire or leak. Solid-state batteries can also achieve higher energy densities, therefore lasting longer than traditional lithium-based batteries. The automotive industry is keenly interested in this new technology, as it promises safer and more efficient vehicles. Companies like Toyota, Ford, and QuantumScape have invested heavily in the development of solid-state batteries. == See also == == Notes and references == “Advances in solid-state batteries: Materials, interfaces, characterizations, and devices.” MRS Bulletin, 16 Jan. 2024, link.springer.com/article/10.1557/s43577-023-00649-7. Volle, Adam. “Solid-state battery | Definition, History, & Facts.” Britannica, www.britannica.com/technology/solid-state-battery.
|
{
"page_id": 8720264,
"source": null,
"title": "History of the battery"
}
|
The molecular formula C18H36O2 (molar mass: 284.48 g/mol, exact mass: 284.2715 u) may refer to: Ethyl palmitate Stearic acid
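The quoted molar mass can be checked directly from standard atomic weights; a minimal sketch (atomic weights hard-coded from rounded IUPAC standard values):

```python
# Standard atomic weights (g/mol), rounded IUPAC values
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(counts):
    """Sum element counts weighted by standard atomic weight."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in counts.items())

# C18H36O2 (e.g. stearic acid)
mass = molar_mass({"C": 18, "H": 36, "O": 2})
print(round(mass, 2))  # 284.48
```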
|
{
"page_id": 22417292,
"source": null,
"title": "C18H36O2"
}
|
Oleg Sushkov is a professor at the University of New South Wales and a leader in the field of high temperature super-conductors. Educated in Russia in quantum mechanics and nuclear physics, he now teaches in Australia. == Education == 1974 MSc, Novosibirsk State University, Russia 1978 PhD in physics, Budker Institute of Nuclear Physics, Novosibirsk, Russia 1984 Doctor of Science (Habilitation), Budker Institute of Nuclear Physics, Novosibirsk, Russia == Awards and recognition == Australian Research Council Professorial Fellow, 2011 - 2015, University of New South Wales Alexander von Humbold Research Award (Germany), 2006 Lenin Komsomol State Prize in Science (Soviet Union), 1982 == Selected publications == As of January 2022, Google Scholar estimates that Sushkov's h-index is 53. Sushkov, O. P.; Kotov, V. N. (31 August 1998). "Bound States of Magnons in the S=1/2 Quantum Spin Ladder". Physical Review Letters. 81 (9). American Physical Society (APS): 1941–1944. arXiv:cond-mat/9803180. Bibcode:1998PhRvL..81.1941S. doi:10.1103/physrevlett.81.1941. ISSN 0031-9007. S2CID 119477768. Sushkov, O. P.; Oitmaa, J.; Weihong, Zheng (20 February 2001). "Quantum phase transitions in the two-dimensional J1−J2 model". Physical Review B. 63 (10). American Physical Society (APS): 104420. arXiv:cond-mat/0007329. Bibcode:2001PhRvB..63j4420S. doi:10.1103/physrevb.63.104420. ISSN 0163-1829. S2CID 118430095. Sushkov, O. (2001). "Conductance anomalies in a one-dimensional quantum contact". Physical Review B. 64 (15). American Physical Society (APS): 155319. arXiv:cond-mat/0104006. Bibcode:2001PhRvB..64o5319S. doi:10.1103/physrevb.64.155319. ISSN 0163-1829. S2CID 119519796. Kuenzi, S. A.; Sushkov, O. P.; Dzuba, V. A.; Cadogan, J. M. (23 September 2002). "Search for violation of fundamental time-reversal and space-reflection symmetries in solid-state experiments". Physical Review A. 66 (3). American Physical Society (APS): 032111. arXiv:cond-mat/0205113. Bibcode:2002PhRvA..66c2111K. 
doi:10.1103/physreva.66.032111. ISSN 1050-2947. S2CID 119433098. Milstein, A. I.; Sushkov, O. P.; Terekhov, I. S. (31 December 2002). "Radiative Corrections and Parity Nonconservation in Heavy Atoms". Physical Review Letters. 89 (28): 283003. arXiv:hep-ph/0208227. Bibcode:2002PhRvL..89B3003M. doi:10.1103/physrevlett.89.283003. ISSN 0031-9007. PMID 12513140. S2CID 36449469. == References == == External
links == Google Scholar list for Oleg Sushkov
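The h-index cited above is a simple function of an author's citation counts; a sketch of the standard definition (the citation list below is an illustrative made-up example, not Sushkov's actual record):

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have >= 4 citations each
```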
|
{
"page_id": 5115787,
"source": null,
"title": "Oleg Sushkov"
}
|
Gongronella is a genus of fungi belonging to the family Cunninghamellaceae. The genus has cosmopolitan distribution. == Species == Species: Gongronella brasiliensis C.A.F.de Souza, D.X.Lima & A.L.Santiago Gongronella butleri (Lendn.) Peyronel & Dal Vesco Gongronella guangdongensis F.Liu, T.T.Liu & L.Cai == References ==
|
{
"page_id": 67637134,
"source": null,
"title": "Gongronella"
}
|
Holland's schema theorem, also called the fundamental theorem of genetic algorithms, is an inequality that results from coarse-graining an equation for evolutionary dynamics. The Schema Theorem says that short, low-order schemata with above-average fitness increase exponentially in frequency in successive generations. The theorem was proposed by John Holland in the 1970s. It was initially widely taken to be the foundation for explanations of the power of genetic algorithms. However, this interpretation of its implications has been criticized in several publications, where the Schema Theorem is shown to be a special case of the Price equation with the schema indicator function as the macroscopic measurement. A schema is a template that identifies a subset of strings with similarities at certain string positions. Schemata are a special case of cylinder sets, and hence form a topological space. == Description == Consider binary strings of length 6. The schema 1*10*1 describes the set of all strings of length 6 with 1's at positions 1, 3 and 6 and a 0 at position 4. The * is a wildcard symbol, which means that positions 2 and 5 can have a value of either 1 or 0. The order of a schema, o(H), is defined as the number of fixed positions in the template, while the defining length, δ(H), is the distance between the first and last specific positions. The order of 1*10*1 is 4 and its defining length is 5. The fitness of a schema is the average fitness of all strings matching the schema. The fitness of a string is a measure of the value of the encoded problem solution, as computed by a problem-specific evaluation function. Using the established methods and genetic operators of genetic algorithms, the schema theorem
states that short, low-order schemata with above-average fitness increase exponentially in successive generations. Expressed as an equation: E[m(H, t+1)] ≥ (m(H,t) f(H) / a_t) [1 − p]. Here m(H,t) is the number of strings belonging to schema H at generation t, f(H) is the observed average fitness of schema H and a_t is the observed average fitness at generation t. The probability of disruption p is the probability that crossover or mutation will destroy the schema H. Under the assumption that p_m ≪ 1, it can be expressed as: p = (δ(H) / (l − 1)) p_c + o(H) p_m, where o(H) is the order of the schema, l is the length of the code, p_m is the probability of mutation and p_c is the probability of crossover. So a schema with a shorter defining length δ(H) is less likely to be disrupted. An often misunderstood point is why the Schema Theorem is an inequality rather than an equality. The answer is in fact simple: the Theorem neglects the small, yet non-zero, probability that a string belonging to the schema H will be created "from scratch" by mutation of a single string (or recombination of two strings) that did not belong to H
in the previous generation. Moreover, the expression for p is clearly pessimistic: depending on the mating partner, recombination may not disrupt the schema even when a crossover point is selected between the first and the last fixed position of H. == Limitation == The schema theorem holds under the assumption of a genetic algorithm that maintains an infinitely large population, but does not always carry over to (finite) practice: due to sampling error in the initial population, genetic algorithms may converge on schemata that have no selective advantage. This happens in particular in multimodal optimization, where a function can have multiple peaks: the population may drift to prefer one of the peaks, ignoring the others. The reason that the Schema Theorem cannot explain the power of genetic algorithms is that it holds for all problem instances, and cannot distinguish between problems in which genetic algorithms perform poorly, and problems for which genetic algorithms perform well. == References == J. Holland, Adaptation in Natural and Artificial Systems, The MIT Press; Reprint edition 1992 (originally published in 1975). J. Holland, Hidden Order: How Adaptation Builds Complexity, Helix Books; 1996.
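The quantities in the theorem are easy to compute for a concrete schema; a minimal sketch using the 1*10*1 example from the Description section (the crossover and mutation rates chosen here are arbitrary illustrative values):

```python
def order(schema):
    """o(H): number of fixed (non-wildcard) positions."""
    return sum(c != '*' for c in schema)

def defining_length(schema):
    """delta(H): distance between the first and last fixed positions."""
    fixed = [i for i, c in enumerate(schema) if c != '*']
    return fixed[-1] - fixed[0]

def disruption_probability(schema, p_c, p_m):
    """p = delta(H)/(l-1) * p_c + o(H) * p_m, valid for p_m << 1."""
    l = len(schema)
    return defining_length(schema) / (l - 1) * p_c + order(schema) * p_m

H = "1*10*1"
print(order(H))            # 4
print(defining_length(H))  # 5
print(round(disruption_probability(H, 0.7, 0.001), 3))  # 0.704
```

With δ(H) = l − 1, as here, a single crossover almost always falls inside the schema, so p is dominated by the crossover term.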
|
{
"page_id": 4329360,
"source": null,
"title": "Holland's schema theorem"
}
|
Fuel bladders or fuel storage bladders are a type of flexi-bag used as a fuel container. They are collapsible, flexible storage bladders (also known as tanks) that provide transport and storage (temporary or long term) for bulk industrial liquids such as fuels. Standard fuel bladder tank sizes range from 100-US-gallon (380 L) to 200,000-US-gallon (760,000 L) capacities and larger. Custom fuel storage bladders and cells are available, although at sizes exceeding 50,000 US gallons (190,000 L) there is an increased spill risk. To minimize the risk of leakage, and for the sake of containing a catastrophic spill, all fuel bladders should be housed in secondary containment (bunding). The use of fuel bladders without precautionary measures is risky and should not be undertaken. The EPA has set clear guidelines for the use of secondary containment concerning fuel bladders and imposes fines for discharging fuel into the environment. == Primary use == Fuel bladders are used in various fields, such as agribusiness, public works, humanitarian, military and industrial areas. Standard tanks are designed for land-based use and operations, but can be used in marine settings and in aviation given proper support. Fuel bladders are also commonly used in oil spill recovery operations. High-end fuel bladders offer a high degree of protection of the stored liquids, ensuring the contents never come in contact with air. This ensures that there is no risk of evaporation or explosion due to gas formation. In order to prevent liquid contamination, a neutral barrier film is added to the fuel bladder's interior. Fuel bladders are most useful in situations where critical infrastructure has been compromised or does not exist. The benefits of using a flexible storage system like fuel bladders include their ability to be transported and set up quickly. GMA Cover Corp. advertises that
their pillow-shaped bladders are foldable at less than 5% of the total volume. Fuel bladders have high resistance to climatic conditions, which makes them useful in disaster zones and wartime desert operations. == Technical characteristics == High-end fuel bladders are made of elastomer-coated fabrics, usually manufactured by homogeneous vulcanization in one operation. The fabrics ensure the mechanical strength needed for shipment and storage. The coatings and design engineering provide compatibility with the intended contents (fuel). Fuels are often classified as dangerous goods and thus have strict regulations for container construction, field performance, and markings. == Transportation == Fuel bladders can come equipped with all the necessary components for transportation; for example, water transportation or fuel transportation on vehicles. Flexible tanks are equipped with adjustable skirts and adjustable towage straps designed to provide high resistance to transport constraints. Adjustable skirts may be replaced by a strap net in case the users need to transform a traditional storage tank to a transportable flexible tank. Specially developed flexible fuel bladders can also be towed by sled, as in the unique Antarctic expeditions made by American, German and British science teams. == Additional uses == Pillow-shaped tanks can be designed for non-fuel bulk liquid transport, industrial chemicals, potable water, sludge, and fuel storage. They are also used as "water bladders". The synthetic fabrics used are tough and strong to avoid damage and to prevent leakage. Ultra-tough "crash-worthy" fuel bladders, reinforced with such fibers as Kevlar, are popular in the motorsports industry and are considered a critical safety component, mandated by racing's top series such as Formula 1 and NASCAR. 
The development of the flexible crash-worthy fuel bladder has significantly reduced the number of fires, and subsequent explosions, often experienced in violent auto racing crashes. The same crash-worthy technology can
also be found in military/industrial vehicles, combatant craft and aircraft. == See also == Ferry tank Self-sealing fuel tank Flexible tank Fuel container == References == ASTM F3063, Standard Specification for Aircraft Fuel Storage and Delivery https://www.flexitanksystems.com/fuel-bladder-tanks/
|
{
"page_id": 30543761,
"source": null,
"title": "Fuel bladder"
}
|
The Walter Boas Medal is awarded by the Australian Institute of Physics for research in Physics in Australia. It is named in memory of Walter Boas (1904–1982), an eminent scientist and metallurgist who worked on the physics of metals. == Recipients == Source: 1984 James A. Piper, Macquarie University (inaugural winner) 1985 Peter Hannaford, CSIRO Division of Materials Technology 1986 Donald Melrose, Sydney University 1987 Anthony William Thomas, University of Adelaide 1988 Robert Delbourgo, University of Tasmania 1989 Jim Williams, University of Western Australia 1990 Geoff Opat, University of Melbourne 1990 Tony Klein, University of Melbourne 1991 Parameswaran Hariharan, CSIRO Division of Applied Physics 1992 Bruce Harold John McKellar, University of Melbourne 1993 Jim Williams, Australian National University 1994 No medal awarded 1995 David Blair, University of Western Australia 1996 Andris Stelbovics, Murdoch University 1996 Igor Bray, Flinders University 1997 Keith Nugent, University of Melbourne 1997 Stephen W. Wilkins, CSIRO 1998 Robert Clark, University of NSW 1999 No medal awarded 2000 Hans A. Bachor, Australian National University 2001 Anthony G. Williams, University of Adelaide 2002 Peter Robinson, University of Sydney 2003 Gerard J. Milburn, University of Queensland 2004 George Dracoulis, Australian National University 2005 Yuri Kivshar, Australian National University 2006 Michael Edmund Tobar, The University of Western Australia 2007 Derek Leinweber, University of Adelaide 2008 Peter Drummond, Swinburne University of Technology 2009 Victor Flambaum, University of New South Wales 2010 Kostya Ostrikov, CSIRO 2011 Ben Eggleton, University of Sydney 2012 Lloyd Hollenberg, University of Melbourne 2013 Chennupati Jagadish, Australian National University 2014 Stuart Wyithe, University of Melbourne 2015 Min Gu, Swinburne University of Technology 2016 Geraint F. 
Lewis, University of Sydney 2017 David McClelland, Australian National University 2018 Elisabetta Barberio, University of Melbourne 2019 Andrea Morello, University of NSW 2020 Joss Bland-Hawthorn, University of
Sydney 2021 Howard Wiseman, Griffith University 2022 Susan M. Scott, Australian National University 2023 Mahananda Dasgupta and David John Hinde, Australian National University == See also == List of physics awards List of prizes named after people == References ==
|
{
"page_id": 35786644,
"source": null,
"title": "Walter Boas Medal"
}
|
The term kodecyte is used to describe cells with detectable Function-Spacer-Lipid (FSL) constructs, and in concert, the term kodevirion (pronounced co-da-virion) is used to describe virions with detectable FSL constructs. The method for labeling virions with FSL constructs is simple and non-covalent, involving only incubation of the virion with the FSL construct in saline for a few hours; nothing further is required. The FSL construct will spontaneously, stably and quantitatively incorporate into the virion membrane. Virions have been labelled with fluorescent (FSL-FLRO4) and radioactive iodine (FSL-125I) constructs. FSL-FLRO4 was shown to label virions in a dose-dependent manner and could be visualized by flow cytometry either directly, or indirectly if the virion had bound to the cell or fused with the cell membrane. FSLs do not appear to significantly affect the virions' infectivity or their ability to bind target cells, probably because they integrate into the membrane without exposing the virion to chemical agents or covalent modification. == See also == Function-Spacer-Lipid construct (Kode™ Technology) Kodecyte == References ==
|
{
"page_id": 31985560,
"source": null,
"title": "Kodevirion"
}
|
The Oparin/Urey Medal honours important contributions to the field of origins of life. The medal is awarded by the International Society for the Study of the Origin of Life (ISSOL). The award was originally named for Alexander Ivanovich Oparin, one of the pioneers in researching the origins of life. In 1993, the Society decided to alternate the name of the award so as to also honour the memory of Harold C. Urey, one of the first to propose the study of cosmochemistry. == List of winners == The current list of medalists is shown below: == References ==
|
{
"page_id": 10358680,
"source": null,
"title": "Oparin Medal"
}
|
In metallurgy, Keller's reagent is a mixture of nitric acid, hydrochloric acid, and hydrofluoric acid, used to etch aluminum alloys to reveal their grain boundaries and orientations. It is also sometimes called the Dix–Keller reagent, after E. H. Dix, Jr., and Fred Keller of the Aluminum Company of America, who pioneered the use of this technique in the late 1920s and early 1930s. == Safety == Keller's reagent contains oxidizing nitric acid and toxic hydrofluoric acid. The reagent and its fumes may be lethal via contact, inhalation, etc. Hydrogen produced on contact with some metals may pose a fire hazard. == See also == Aqua regia == References ==
|
{
"page_id": 78385053,
"source": null,
"title": "Keller's reagent (metallurgy)"
}
|
The Ziff–Gulari–Barshad (ZGB) model is a simple Monte Carlo model of the catalytic oxidation of carbon monoxide to carbon dioxide on a surface. It captures the essential dynamics correctly: phase transitions between two poisoned states (either CO- or O-poisoned) and a reactive steady state in between. It is named after Robert M. Ziff, Erdogan Gulari, and Yoav Barshad, who published it in 1986. == Model definition == The model consists of three steps: Adsorption of the reacting species CO and O2 The actual reaction step on the surface: CO + O → CO2 Desorption of the products. The simplest implementation considers the catalyst as a simple square two-dimensional lattice, but one can also consider other kinds of underlying lattices. When a gas-phase molecule touches an empty site, adsorption occurs immediately and the chemical reaction is also instantaneous. Furthermore, one assumes that the composition of the gas phase remains constant. While these requirements would still allow a large number of models and corresponding behaviors, the two special assumptions of the ZGB model are: (i) CO molecules are adsorbed "standing" with the O touching the surface, and thus require only one free lattice site; (ii) O2 molecules are adsorbed "flat" and thus require two adjacent free lattice sites for adsorption. === Results and other work === When the ratio between O2 and CO in the gas phase is increased, the model shows two phase transitions: a continuous one between an O-poisoned and a mixed state, and a discontinuous one between the mixed and a CO-poisoned state. The continuous transition belongs to the universality class of directed percolation. The model was modified several times. == References ==
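The adsorption and reaction rules above can be sketched as a single adsorption-attempt update on a periodic square lattice; a minimal, illustrative implementation (a real study would also track coverages and sweep the CO fraction y to locate the two transitions):

```python
import random

EMPTY, CO, O = 0, 1, 2

def neighbors(i, j, L):
    """Four nearest neighbors on an L x L lattice with periodic boundaries."""
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def react(lattice, i, j, L, partner):
    """If site (i, j) has a neighbor holding `partner`, both desorb as CO2."""
    for ni, nj in neighbors(i, j, L):
        if lattice[ni][nj] == partner:
            lattice[i][j] = EMPTY
            lattice[ni][nj] = EMPTY
            return True
    return False

def zgb_attempt(lattice, L, y):
    """One ZGB adsorption attempt; y is the CO fraction in the gas phase."""
    i, j = random.randrange(L), random.randrange(L)
    if lattice[i][j] != EMPTY:
        return                          # adsorption attempt fails
    if random.random() < y:
        lattice[i][j] = CO              # CO needs one empty site
        react(lattice, i, j, L, O)      # instantaneous CO + O -> CO2
    else:
        empty = [(ni, nj) for ni, nj in neighbors(i, j, L) if lattice[ni][nj] == EMPTY]
        if not empty:
            return                      # O2 needs two adjacent empty sites
        ni, nj = random.choice(empty)
        for a, b in ((i, j), (ni, nj)):
            lattice[a][b] = O           # each O atom reacts with an adjacent CO if present
            react(lattice, a, b, L, CO)

L, y = 32, 0.4
lattice = [[EMPTY] * L for _ in range(L)]
for _ in range(100 * L * L):
    zgb_attempt(lattice, L, y)
```

This sketch reacts each adsorbed atom immediately upon placement; published variants differ in such tie-breaking details, which do not change the qualitative phase diagram.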
|
{
"page_id": 32116638,
"source": null,
"title": "Ziff–Gulari–Barshad model"
}
|
Lurex is the registered brand name of the Lurex Company, Ltd. for a type of yarn with a metallic appearance. The yarn is made from synthetic film, onto which a metallic aluminium, silver, or gold layer has been vaporized. "Lurex" may also refer to cloth created with the yarn. "Lurex" is not an English common noun: it is the name of the trademark and of the company, Lurex Company Limited, which launched production of nylon- and polyester-based Lurex yarn in the 1970s. The name was based on the English word lure ("temptation; attractiveness"). == The Lurex Company == Hugo Wolfram, father of mathematician Stephen Wolfram, served as Managing Director of the Lurex Company; he was also the author of three novels. == Lurex in media == Lurex has been a popular material for movie and television costumes. For example, the bodysuit worn by actress Julie Newmar as Catwoman in the Batman TV series of the 1960s is constructed of black Lurex. It was also used in the slasher movie franchise Scream for the killer's Ghostface costume, known as Father Death, worn to conceal the killer's identity during the murder sprees. Originally it was going to be white, but was changed over concerns about comparisons to the Ku Klux Klan. It is referenced in Australian group AC/DC's song 'Rocker': "Lurex socks, blue suede shoes, V8 car, and tattoos". Its presence for 'sparkle' at the 1920s-themed 50th anniversary party for MOMA in New York City in 1979 was noted in a news story on the gala event. Larry Lurex was a name sometimes used by Queen lead singer Freddie Mercury, with a few records being released under that name. == See also == Metallic fiber Lamé (fabric) == References == == External links == Official Lurex website
|
{
"page_id": 3149730,
"source": null,
"title": "Lurex"
}
|
Pump–probe microscopy is a non-linear optical imaging modality used in femtochemistry to study chemical reactions. It generates high-contrast images from endogenous non-fluorescent targets. It has numerous applications, including materials science, medicine, and art restoration. == Advantages == The classic method of nonlinear absorption used by microscopists is conventional two-photon fluorescence, in which two photons from a single source interact to excite an electron. The electron then emits a photon as it transitions back to its ground state. This microscopy method has been revolutionary in the biological sciences because of its inherent three-dimensional optical sectioning capabilities. Two-photon absorption is inherently a nonlinear process: fluorescent output intensity is proportional to the square of the excitation light intensity. This ensures that fluorescence is only generated within the focus of the laser beam, as the intensity outside of this plane is insufficient to drive two-photon excitation. However, this microscope modality is inherently limited by the number of biological molecules that can undergo both two-photon excitation and fluorescence. Pump–probe microscopy circumvents this limitation by directly measuring the excitation light. This expands the range of potential targets to any molecule capable of two-photon absorption, even if it does not fluoresce upon relaxation. The method modulates the amplitude of a pulsed laser beam, referred to as the pump, to bring the target molecule to an excited state. This then affects the properties of a second coherent beam, referred to as the probe, based on the interaction of the two beams with the molecule. These properties are measured by a detector to form an image. == Physics of pump–probe microscopy == Because pump–probe microscopy does not rely on fluorescent targets, the modality takes advantage of multiple different types of multiphoton absorption. 
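The optical sectioning that follows from the quadratic intensity dependence can be made concrete: for a paraxial Gaussian beam of constant power, the one-photon signal integrated over a transverse plane is independent of defocus, while the two-photon signal falls off with the beam area and is therefore confined to the focal region. A sketch under those Gaussian-beam assumptions:

```python
def relative_tpef_per_plane(z, z_rayleigh):
    """Two-photon signal integrated over the plane at defocus z, relative to focus.

    For a Gaussian beam, intensity scales as P / w(z)^2 with
    w(z)^2 = w0^2 * (1 + (z / z_R)^2); integrating intensity^2 over the
    transverse plane gives a signal proportional to 1 / (1 + (z / z_R)^2).
    """
    return 1.0 / (1.0 + (z / z_rayleigh) ** 2)

print(relative_tpef_per_plane(0.0, 1.0))  # 1.0  (focal plane)
print(relative_tpef_per_plane(3.0, 1.0))  # 0.1  (3 Rayleigh ranges away)
```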
=== Two-photon absorption === Two-photon absorption (TPA) is a third-order process in which two photons are nearly
simultaneously absorbed by the same molecule. If a second photon is absorbed by the same electron within the same quantum event, the electron enters an excited state. This is the same phenomenon used in two-photon microscopy (TPM), but there are two key features that distinguish pump–probe microscopy from TPM. First, since the molecule is not necessarily fluorescent, a photodetector measures the probe intensity. Therefore, the signal decreases as two-photon absorption occurs, the reverse of TPM. Second, pump–probe microscopy uses spectrally separated sources for each photon, whereas conventional TPM uses one source of a single wavelength, which is referred to as degenerate two-photon excitation. === Excited-state absorption === Excited-state absorption (ESA) occurs when the pump beam sends an electron into an excited state, then the probe beam sends the electron into a higher excited state. This differs from TPA primarily in the timescale over which it occurs: since an electron can remain in an excited state for a period of nanoseconds, ESA tolerates longer pulse durations and pump–probe delays than TPA. === Stimulated emission === Pump–probe microscopy can also measure stimulated emission. In this case, the pump beam drives the electron to an excited state. Then the electron emits a photon when exposed to the probe beam. This interaction increases the probe signal at the detector site. === Ground-state depletion === Ground-state depletion occurs when the pump beam sends the electron into an excited state. However, unlike in ESA, the probe beam cannot send an electron into a secondary excited state. Instead, it sends remaining electrons from the ground state to the first excited state. Since the pump beam has decreased the number of electrons in the ground state, fewer probe photons are absorbed, and the probe signal increases at the detector site. === Cross-phase modulation === Cross-phase modulation is caused by the
|
{
"page_id": 59248548,
"source": null,
"title": "Pump–probe microscopy"
}
|
Kerr effect, in which the refractive index of the specimen changes in the presence of a large electric field. In this case, the pump beam modulates the phase of the probe, which can then be measured through interferometric techniques. In certain cases, referred to as cross-phase modulation spectral shifting, this phase change induces a change to the pump spectrum that can be detected with a spectral filter. == Optical design == === Excitation === Measuring nonlinear optical interactions requires a high level of instantaneous power and very precise timing. In order to achieve the high number of photons needed to generate these interactions while avoiding damage to delicate specimens, these microscopes require a mode-locked laser. These lasers can achieve very high photon counts on the femtosecond timescale while maintaining a low average power. Most systems use a Ti:sapphire gain medium because of the wide range of wavelengths it can access. Typically, the same source is used to generate the pump and the probe, with an optical parametric oscillator (OPO) used to convert the probe beam to the desired wavelength. The probe wavelength can be tuned over a large range for spectroscopic applications. However, for certain types of two-photon interactions, it is possible to use separate pulsed sources. This is only possible with interactions such as excited-state absorption, in which the electrons remain in the excited state for several picoseconds. It is more common, though, to use a single femtosecond source with two separate beam paths of different lengths to set the timing between the pump and probe beams. The pump beam amplitude is modulated using an acousto-optic or electro-optic modulator at a frequency on the order of 10⁷ Hz. The pump and probe beams are then recombined using a dichroic beamsplitter and scanned using galvanometric mirrors for point-by-point image generation before being focused
|
{
"page_id": 59248548,
"source": null,
"title": "Pump–probe microscopy"
}
|
onto the sample. === Detection === The signal generated by probe modulation is much smaller than the original pump beam, so the two are spectrally separated within the detection path using a dichroic mirror. The probe signal can be collected with many different types of photodetectors, typically a photodiode. The modulated signal is then amplified using a lock-in amplifier tuned to the pump modulation frequency. == Data analysis == As with hyperspectral data analysis, the pump–probe imaging data, known as a delay stack, must be processed to obtain an image with molecular contrast of the underlying molecular species. Processing pump–probe data is challenging for several reasons – for example, the signals are bipolar (positive and negative), multi-exponential, and can be significantly altered by subtle changes in the chemical environment. The main methods for analysis of pump–probe data are multi-exponential fitting, principal component analysis, and phasor analysis. === Multi-exponential fitting === In multi-exponential fitting, the time-resolved curves are fitted with an exponential decay model to determine the decay constants. While this method is straightforward, it has low accuracy. === Principal component analysis === Principal component analysis (PCA) was one of the earliest methods used for pump–probe data analysis, as it is commonly used for hyperspectral data analysis. PCA decomposes the data into orthogonal components. In melanoma studies, the principal components have shown good agreement with the signals obtained from the different forms of melanin. An advantage of PCA is that noise can be reduced by keeping only the principal components that account for the majority of the variance in the data. However, the principal components do not necessarily reflect actual properties of the underlying chemical species, which are typically non-orthogonal. Therefore, a limitation is that the number of unique chemical species cannot be inferred using PCA.
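As a sketch of the multi-exponential fitting described above, the short script below simulates a bipolar bi-exponential delay trace and recovers its decay constants with a least-squares fit. The delay axis, amplitudes, lifetimes, and noise level are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical delay axis (ps) and a simulated bipolar pump–probe trace:
# a positive fast component plus a negative slow one, as in real delay stacks.
t = np.linspace(0, 10, 200)
truth = 0.7 * np.exp(-t / 0.5) - 0.3 * np.exp(-t / 3.0)
trace = truth + np.random.default_rng(0).normal(0.0, 0.01, t.size)

def biexp(t, a1, tau1, a2, tau2):
    """Bi-exponential decay model fitted to a delay trace."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Least-squares fit; the recovered taus are the decay constants of interest.
popt, _ = curve_fit(biexp, t, trace, p0=(1.0, 1.0, -1.0, 2.0))
taus = sorted((popt[1], popt[3]))
```

In a real delay stack, such a fit would be run per pixel, which is one reason the method is slow and sensitive to noise.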
=== Phasor analysis === Phasor
|
{
"page_id": 59248548,
"source": null,
"title": "Pump–probe microscopy"
}
|
analysis is commonly used for fluorescence-lifetime imaging microscopy (FLIM) data analysis and has been adapted for pump–probe imaging data analysis. Signals are decomposed into the real and imaginary parts of their Fourier transform at a given frequency. By plotting the real and imaginary parts against one another, the distribution of different chromophores with distinct lifetimes can be mapped. In melanoma studies, this approach has again been shown to distinguish between the different forms of melanin. One of the main advantages of phasor analysis is that it provides an intuitive, qualitative, graphical view of the content. It has also been combined with PCA for quantitative analysis. == Applications == The development of high-speed and high-sensitivity pump–probe imaging techniques has enabled applications in several fields, such as materials science, biology, and art. === Materials science === Pump–probe imaging is well suited to the study and characterization of nanomaterials, such as graphene, nanocubes, nanowires, and a variety of semiconductors, because of their large susceptibilities but weak fluorescence. In particular, single-walled carbon nanotubes have been extensively studied and imaged with submicrometer resolution, providing details about their carrier dynamics and photophysical and photochemical properties. === Biology === The first application of the pump–probe technique in biology was in vitro imaging of stimulated emission of a dye-labelled cell. Pump–probe imaging is now widely used for melanin imaging to differentiate between the two main forms of melanin – eumelanin (brown/black) and pheomelanin (red/yellow). In melanoma, eumelanin is substantially increased. Therefore, imaging the distribution of eumelanin and pheomelanin can help to distinguish benign lesions from melanoma with high sensitivity. === Art === Artwork consists of many pigments with a wide range of spectral absorption properties, which determine their color.
Due to the broad spectral features of these pigments, the identification of a specific pigment in a mixture is difficult.
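The phasor decomposition described in the data-analysis section can be sketched in a few lines. This is an illustrative toy example (the lifetimes and analysis frequency are invented), showing only that traces with distinct decay constants map to distinct phasor coordinates:

```python
import numpy as np

t = np.linspace(0, 10, 256)   # hypothetical delay axis (ps)
omega = 2 * np.pi / t[-1]     # analysis frequency: first harmonic of the window

def phasor(trace):
    """Real (G) and imaginary (S) Fourier components, normalized by total signal."""
    g = np.sum(trace * np.cos(omega * t)) / np.sum(trace)
    s = np.sum(trace * np.sin(omega * t)) / np.sum(trace)
    return g, s

fast = np.exp(-t / 0.5)   # short-lifetime chromophore
slow = np.exp(-t / 5.0)   # long-lifetime chromophore

g_fast, s_fast = phasor(fast)
g_slow, s_slow = phasor(slow)
# Shorter lifetimes land closer to (1, 0) on the phasor plot,
# so the two species separate into distinct clusters.
```

Plotting (G, S) per pixel would then cluster pixels by chromophore lifetime, which is the qualitative, graphical view mentioned above.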
|
{
"page_id": 59248548,
"source": null,
"title": "Pump–probe microscopy"
}
|
Pump–probe imaging can provide accurate, high-resolution, molecular information and distinguish between pigments that may even have the same visual color. == References ==
|
{
"page_id": 59248548,
"source": null,
"title": "Pump–probe microscopy"
}
|
In organic chemistry, Keller's reagent is a mixture of anhydrous (glacial) acetic acid, concentrated sulfuric acid, and small amounts of ferric chloride, used to detect alkaloids. Keller's reagent can also be used to detect other kinds of alkaloids via reactions in which it produces products with a wide range of colors. Cohn describes its use to detect the principal components of digitalis (note that they may not be alkaloids). The reaction with this reagent is also known as the Keller–Kiliani reaction, after C. C. Keller and H. Kiliani, who both used it to study digitalis in the late 19th century. It can be used for the quantitative analysis of digitoxin. Another method of visualizing the Keller–Kiliani reaction is to treat the test solution with ferric chloride-containing glacial acetic acid, followed by the addition of concentrated sulfuric acid, which sinks to the bottom (as in the brown ring test for nitrates). A brown ring at the interface indicates the presence of cardenolides. == List of color changes with various compounds == Digoxin: olive-brown without red traces Digitoxin: green, then blue Digoxigenin: greenish-yellow Vindolicine: bright blue Uleine: yellow-brown Hunteria eburnea alkaloid J (C39H46N4O2): pale red, later blue violet == See also == Dische test Baljet reaction == References ==
|
{
"page_id": 78385062,
"source": null,
"title": "Keller's reagent (organic)"
}
|
Homeotic genes are genes which regulate the development of anatomical structures in various organisms such as echinoderms, insects, mammals, and plants. Homeotic genes often encode transcription factor proteins, and these proteins affect development by regulating downstream gene networks involved in body patterning. Mutations in homeotic genes cause displaced body parts (homeosis), such as antennae growing at the posterior of the fly instead of at the head. Mutations that lead to development of ectopic structures are usually lethal. == Types == There are several subsets of homeotic genes. They include many of the Hox and ParaHox genes that are important for segmentation. Hox genes are found in bilateral animals, including Drosophila (in which they were first discovered) and humans. Hox genes are a subset of the homeobox genes. The Hox genes are often conserved across species, so some of the Hox genes of Drosophila are homologous to those in humans. In general, Hox genes play a role of regulating expression of genes as well as aiding in development and assignment of specific structures during embryonic growth. This can range from segmentation in Drosophila to central nervous system (CNS) development in vertebrates. Both Hox and ParaHox are grouped as HOX-Like (HOXL) genes, a subset of the ANTP class (named after the Drosophila gene, Antennapedia). They also include the MADS-box-containing genes involved in the ABC model of flower development. Besides flower-producing plants, the MADS-box motif is also present in other organisms such as insects, yeasts, and mammals. They have various functions depending on the organism including flower development, proto-oncogene transcription, and gene regulation in specific cells (such as muscle cells). Despite the terms being commonly interchanged, not all homeotic genes are Hox genes; the MADS-box genes are homeotic but not Hox genes. Thus, the Hox genes are a subset of homeotic genes. ==
|
{
"page_id": 21893031,
"source": null,
"title": "Homeotic gene"
}
|
Drosophila melanogaster == One of the most commonly studied model organisms with regard to homeotic genes is the fruit fly Drosophila melanogaster. Its homeotic Hox genes occur in either the Antennapedia complex (ANT-C) or the Bithorax complex (BX-C) discovered by Edward B. Lewis. Each of the complexes focuses on a different area of development. The Antennapedia complex consists of five genes, including proboscipedia, and is involved in the development of the front of the embryo, forming the segments of the head and thorax. The Bithorax complex consists of three main genes and is involved in the development of the back of the embryo, namely the abdomen and the posterior segments of the thorax. During development (starting at the blastoderm stage of the embryo), these genes are constantly expressed to assign structures and roles to the different segments of the fly's body. For Drosophila, these genes can be analyzed using the FlyBase database. == Research == Much research has been done on homeotic genes in different organisms, ranging from basic understanding of how the molecules work to mutations to how homeotic genes affect the human body. Changing the expression levels of homeotic genes can negatively impact the organism. For example, in one study, a pathogenic phytoplasma caused homeotic genes in a flowering plant to be either significantly upregulated or downregulated. This led to severe phenotypic changes, including dwarfing, defects in the pistils, hypopigmentation, and the development of leaf-like structures on most floral organs. In another study, it was found that the homeotic gene Cdx2 acts as a tumor suppressor. At normal expression levels, the gene prevents tumorigenesis and colorectal cancer when exposed to carcinogens; however, when Cdx2 was not well expressed, carcinogens caused tumor development. These studies, along with many others, show the importance of homeotic genes even after development. ==
|
{
"page_id": 21893031,
"source": null,
"title": "Homeotic gene"
}
|
See also == Homeobox Homeosis == References == == External links == Flybase Homeotic Genes and Body Patterns NOVA-Gene Switches
|
{
"page_id": 21893031,
"source": null,
"title": "Homeotic gene"
}
|
Science Horizons Survival is a ZX Spectrum video game developed by Five Ways Software. It was published by Sinclair Research in association with Macmillan Education in 1984. It is an educational game in which the player takes on the role of one of a series of animals and has to find food to survive while avoiding predators. == Gameplay == The aim was to teach users about food chains; as an insect, life is short, with the constant danger of being eaten by a bird – but as an eagle the player is at the top of the food chain, with mankind or starvation as the only dangers. The simulation allows the player to be one of six animals: a hawk, a robin, a lion, a mouse, a fly or a butterfly. The world appears in scrolling grid form, with ice caps to the north and south. The player moves one square at a time, with visibility depending on the chosen animal, avoiding predators and finding food and water. The game ends when the animal dies, through starvation, dehydration, being killed by a predator, or old age. == Development == Survival was developed as part of a series of educational software aimed at children aged between 5 and 12. This "Science Horizons" series was instigated by Sir Clive Sinclair and ex-Prime Minister Harold Macmillan. == Reception == CRASH magazine described Survival as an interesting and enjoyable program which can be used to reinforce learning, or on a self-discovery basis. One criticism was the difficulty of remembering the control keys. == References == == External links == Survival at World of Spectrum
|
{
"page_id": 1380264,
"source": null,
"title": "Science Horizons Survival"
}
|
The Parks–Bielschowsky three-step test, also known as the Parks three-step test or the Bielschowsky head tilt test, is a method used to isolate the paretic extraocular muscle, particularly the superior oblique muscle and the trochlear nerve (fourth cranial nerve), in acquired vertical double vision. It was originally described by Marshall M. Parks. == Bielschowsky's head tilt test == Step 1: Determine which eye is hypertropic in primary position. If there is right hypertropia in primary position, then the depressors of the right eye (IR/SO) or the elevators of the left eye (SR/IO) are weak. Step 2: Determine whether the hypertropia increases on right or left gaze. The vertical rectus muscles have their greatest vertical action when the eye is abducted. The oblique muscles have their greatest vertical action when the eye is adducted. Step 3: Determine whether the hypertropia increases on right or left head tilt. During right head tilt, the right eye intorts (SO/SR) and the left eye extorts (IO/IR). When a healthy individual tilts their head, the superior oblique and superior rectus muscles of the eye closest to the shoulder keep the eye level. The inferior oblique and inferior rectus muscles keep the other eye level. In patients with superior oblique palsy, the superior rectus muscle's action is not counteracted by the superior oblique muscle. This leads to vertical deviation of the affected eye when the head is tilted towards the affected eye. However, there is no deviation when the head is tilted towards the unaffected eye, because the superior oblique muscle is not stimulated in the affected eye, but rather in the unaffected eye. When there is a discrepancy in ocular deviation based on which way the head is tilted, the patient is diagnosed with unilateral palsy of the superior oblique muscle due to damage to the trochlear
|
{
"page_id": 41815975,
"source": null,
"title": "Parks–Bielschowsky three-step test"
}
|
nerve. People with superior oblique palsy on one side experience double vision, which is improved or even abolished by tilting the head towards the shoulder on the unaffected side. Tilting the head towards the shoulder on the affected side makes the double vision worse by increasing the separation of the two images seen by the patient. Lateralization of the side of the defect based on the Parks–Bielschowsky three-step test: Ipsilesional central gaze hypertropia Vertical diplopia greater in contralesional than ipsilesional gaze Vertical diplopia greater in ipsilesional than contralesional head tilt == History == The physiologic basis of the head tilt test was explained by Alfred Bielschowsky and Hofmann in 1935. However, Nagel described it 30 years before Bielschowsky when he noted that the combined action of the superior rectus and superior oblique muscles of one eye and of the inferior rectus and inferior oblique muscles of the fellow eye causes incycloduction and excycloduction. The procedure followed today was described by Marshall M. Parks. == References == == Further reading == Kushner, BJ (Jan 1989). "Errors in the three-step test in the diagnosis of vertical strabismus". Ophthalmology. 96 (1): 127–32. doi:10.1016/s0161-6420(89)32933-2. PMID 2919044. == External links == Park's three-step test
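The elimination logic of the three-step test can be sketched as code: each step intersects a candidate set of the eight cyclovertical muscles until one remains. This is a simplified illustrative model (the function name and muscle encoding are assumptions, and real clinical testing has exceptions, as the Kushner reference notes):

```python
# Simplified sketch of the Parks three-step test as set intersection.
# Muscles are (eye, name) pairs; eyes are "R"/"L"; SR/IR are the superior and
# inferior recti, SO/IO the superior and inferior obliques.

def three_step(hyper_eye, worse_gaze, worse_tilt):
    other = "L" if hyper_eye == "R" else "R"
    # Step 1: weak depressors of the hypertropic eye (IR, SO)
    # or weak elevators of the fellow eye (SR, IO) -- four candidates.
    step1 = {(hyper_eye, "IR"), (hyper_eye, "SO"), (other, "SR"), (other, "IO")}

    def greatest_action_in(eye, muscle, gaze):
        # Obliques act vertically mainly in adduction, recti in abduction.
        adducted = (eye == "R" and gaze == "L") or (eye == "L" and gaze == "R")
        return adducted if muscle in ("SO", "IO") else not adducted

    # Step 2: keep muscles whose vertical action peaks in the worse gaze.
    step2 = {(e, m) for e, m in step1 if greatest_action_in(e, m, worse_gaze)}

    # Step 3: head tilt stimulates the intorters (SO, SR) of the eye on the
    # tilt side and the extorters (IO, IR) of the fellow eye.
    step3 = {(e, m) for e, m in step2
             if (e == worse_tilt and m in ("SO", "SR"))
             or (e != worse_tilt and m in ("IO", "IR"))}
    return step3

# Classic right superior oblique palsy: right hypertropia, worse in left gaze,
# worse on right head tilt.
print(three_step("R", "L", "R"))  # → {('R', 'SO')}
```

Each step halves the candidate set (4 → 2 → 1), which is why exactly three observations suffice to isolate a single paretic muscle in this idealized model.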
|
{
"page_id": 41815975,
"source": null,
"title": "Parks–Bielschowsky three-step test"
}
|
Vincenzo Barone (born 8 November 1952, Ancona) is an Italian chemist, active in the field of theoretical and computational chemistry. He became full professor of physical chemistry at the University of Naples in 1994, and professor of theoretical and computational chemistry at the Scuola Normale Superiore di Pisa in 2009. He was elected director of the Scuola Normale in 2016 but resigned in 2019 after a clash with the body of professors that would have resulted in a no-confidence vote. He was chairperson of the Italian Chemical Society (SCI) from 2011 to 2013 and is a member of the International Academy of Quantum Molecular Science (IAQMS) and the European Academy of Sciences, as well as a fellow of the Royal Society of Chemistry (RSC). == See also == Martin Suhm Stefan Grimme == References == == Literature == Alex Saragosa: La chimica cambia la sua formula // Il Venerdì di Repubblica, 25 March 2011.
|
{
"page_id": 58658728,
"source": null,
"title": "Vincenzo Barone"
}
|
Orchidology is the scientific study of orchids. It is an organismal-level branch of botany. == See also == List of orchidologists == References == The orchidology of H. G. Jones. Eric A. Christenson, Brittonia, January–March 1994, Volume 46, Issue 1, pages 57–61, doi:10.2307/2807457 == External links == Definition at www.merriam-webster.com
|
{
"page_id": 15601579,
"source": null,
"title": "Orchidology"
}
|
The Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, usually known as the Basel Convention, is an international treaty that was designed to reduce the movements of hazardous waste between nations, and specifically to restrict the transfer of hazardous waste from developed to less developed countries. It does not address the movement of radioactive waste, controlled by the International Atomic Energy Agency. The Basel Convention is also intended to minimize the rate and toxicity of wastes generated, to ensure their environmentally sound management as closely as possible to the source of generation, and to assist developing countries in environmentally sound management of the hazardous and other wastes they generate. The convention was opened for signature on 21 March 1989, and entered into force on 5 May 1992. As of June 2024, there are 191 parties to the convention. In addition, Haiti and the United States have signed the convention but did not ratify it. Following a petition urging action on the issue signed by more than a million people around the world, most of the world's countries, but not the United States, agreed in May 2019 to an amendment of the Basel Convention to include plastic waste as regulated material. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the transportation of plastic waste is prohibited in just about every other country. == History == With the tightening of environmental laws (for example, RCRA) in developed nations in the 1970s, disposal costs for hazardous waste rose dramatically. At the same time, the globalization
|
{
"page_id": 4012,
"source": null,
"title": "Basel Convention"
}
|
of shipping made cross-border movement of waste easier, and many less developed countries were desperate for foreign currency. Consequently, the trade in hazardous waste, particularly to poorer countries, grew rapidly. In 1990, OECD countries exported around 1.8 million tons of hazardous waste. Although most of this waste was shipped to other developed countries, a number of high-profile incidents of hazardous waste-dumping led to calls for regulation. One of the incidents which led to the creation of the Basel Convention was the Khian Sea waste disposal incident, in which a ship carrying incinerator ash from the city of Philadelphia in the United States dumped half of its load on a beach in Haiti before being forced away. It sailed for many months, changing its name several times. Unable to unload the cargo in any port, the crew was believed to have dumped much of it at sea. Another incident was a 1988 case in which five ships transported 8,000 barrels of hazardous waste from Italy to the small Nigerian town of Koko in exchange for $100 monthly rent which was paid to a Nigerian for the use of his farmland. At its meeting that took place from 27 November to 1 December 2006, the parties to the Basel Convention focused on issues of electronic waste and the dismantling of ships. Increased trade in recyclable materials has led to an increase in the market for used products such as computers. This market is valued in billions of dollars. At issue is when used computers stop being a "commodity" and become "waste". As of June 2023, there are 191 parties to the treaty, which includes 188 UN member states, the Cook Islands, the European Union, and the State of Palestine. The five UN member states that are not party to
|
{
"page_id": 4012,
"source": null,
"title": "Basel Convention"
}
|
the treaty are East Timor, Fiji, Haiti, South Sudan, and the United States. == Definition of hazardous waste == Waste falls under the scope of the convention if it is within the category of wastes listed in Annex I of the convention and it exhibits one of the hazardous characteristics contained in Annex III. In other words, it must both be listed and possess a characteristic such as being explosive, flammable, toxic, or corrosive. The other way that a waste may fall under the scope of the convention is if it is defined as or considered to be a hazardous waste under the laws of either the exporting country, the importing country, or any of the countries of transit. The term disposal is defined in Article 2(4) simply by reference to Annex IV, which gives a list of operations understood as disposal or recovery. Examples of disposal are broad, including recovery and recycling. Alternatively, to fall under the scope of the convention, it is sufficient for waste to be included in Annex II, which lists other wastes, such as household wastes and residue from incinerating household waste. Radioactive waste that is covered under other international control systems and wastes from the normal operation of ships are not covered. Annex IX attempts to define wastes which are not considered hazardous wastes and which would be excluded from the scope of the Basel Convention. If these wastes are, however, contaminated with hazardous materials to an extent causing them to exhibit an Annex III characteristic, they are not excluded. == Obligations == In addition to conditions on the import and export of the above wastes, there are stringent requirements for notice, consent and tracking for movement of wastes across national boundaries. The convention places a
|
{
"page_id": 4012,
"source": null,
"title": "Basel Convention"
}
|
general prohibition on the exportation or importation of wastes between parties and non-parties. The exception to this rule is where the waste is subject to another treaty that does not take away from the Basel Convention. The United States is a notable non-party to the convention and has a number of such agreements for allowing the shipping of hazardous wastes to Basel Party countries. The OECD Council also has its own control system that governs the transboundary movement of hazardous materials between OECD member countries. This allows, among other things, the OECD countries to continue trading in wastes with countries like the United States that have not ratified the Basel Convention. Parties to the convention must honor import bans of other parties. Article 4 of the Basel Convention calls for an overall reduction of waste generation. By encouraging countries to keep wastes within their boundaries and as close as possible to its source of generation, the internal pressures should provide incentives for waste reduction and pollution prevention. Parties are generally prohibited from exporting covered wastes to, or importing covered waste from, non-parties to the convention. The convention states that illegal hazardous waste traffic is criminal but contains no enforcement provisions. According to Article 12, parties are directed to adopt a protocol that establishes liability rules and procedures that are appropriate for damage that comes from the movement of hazardous waste across borders. The current consensus is that as space is not classed as a "country" under the specific definition, export of e-waste to non-terrestrial locations would not be covered. == Basel Ban Amendment == After the initial adoption of the convention, some least developed countries and environmental organizations argued that it did not go far enough. Many nations and NGOs argued for a total ban on shipment of all hazardous
|
{
"page_id": 4012,
"source": null,
"title": "Basel Convention"
}
|
waste to developing countries. In particular, the original convention did not prohibit waste exports to any location except Antarctica but merely required a notification and consent system known as "prior informed consent" or PIC. Further, many waste traders sought to exploit the good name of recycling and began to justify all exports as moving to recycling destinations. Many believed a full ban was needed, including on exports for recycling. These concerns led to several regional waste trade bans, including the Bamako Convention. Lobbying at the 1995 Basel conference by developing countries, Greenpeace, and several European countries such as Denmark led to the adoption of an amendment to the convention in 1995 termed the Basel Ban Amendment to the Basel Convention. The amendment was accepted by 86 countries and the European Union but for many years did not enter into force, as that requires ratification by three-fourths of the member states to the convention. On 6 September 2019, Croatia became the 97th country to ratify the amendment, which then entered into force 90 days later, on 5 December 2019. The amendment prohibits the export of hazardous waste from a list of developed (mostly OECD) countries to developing countries. The Basel Ban applies to export for any reason, including recycling. An area of special concern for advocates of the amendment was the sale of ships for salvage, shipbreaking. The Ban Amendment was strenuously opposed by a number of industry groups as well as nations including Australia and Canada. The number of ratifications required for the entry into force of the Ban Amendment was under debate: amendments to the convention enter into force after ratification by "three-fourths of the Parties who accepted them" [Art. 17.5]; the parties of the Basel Convention could not agree whether this would be three-fourths of the parties that were party to
|
{
"page_id": 4012,
"source": null,
"title": "Basel Convention"
}
|
the Basel Convention when the ban was adopted, or three-fourths of the current parties of the convention [see Report of COP 9 of the Basel Convention]. The status of the amendment ratifications can be found on the Basel Secretariat's web page. The European Union fully implemented the Basel Ban in its Waste Shipment Regulation (EWSR), making it legally binding in all EU member states. Norway and Switzerland have similarly fully implemented the Basel Ban in their legislation. In the light of the blockage concerning the entry into force of the Ban Amendment, Switzerland and Indonesia launched a "Country-led Initiative" (CLI) to discuss in an informal manner a way forward to ensure that the transboundary movements of hazardous wastes, especially to developing countries and countries with economies in transition, do not lead to unsound management of hazardous wastes. This discussion aims at identifying and finding solutions to the reasons why hazardous wastes are still brought to countries that are not able to treat them in a safe manner. It is hoped that the CLI will contribute to the realization of the objectives of the Ban Amendment. The Basel Convention's website reports on the progress of this initiative. == Regulation of plastic waste == In the wake of popular outcry, in May 2019 most of the world's countries, but not the United States, agreed to amend the Basel Convention to include plastic waste as a regulated material. The world's oceans are estimated to contain 100 million metric tons of plastic, with up to 90% of this quantity originating from land-based sources. The United States, which produces an annual 42 million metric tons of plastic waste, more than any other country in the world, opposed the amendment, but since it is not a party to the treaty it did
|
{
"page_id": 4012,
"source": null,
"title": "Basel Convention"
}
|
not have an opportunity to vote on it to try to block it. Information about, and visual images of, wildlife such as seabirds ingesting plastic, together with scientific findings that nanoparticles do penetrate the blood–brain barrier, were reported to have fueled public sentiment for coordinated, legally binding international action. Over a million people worldwide signed a petition demanding official action. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the Basel Convention as amended in May 2019 prohibits the transportation of plastic waste to just about every other country. The Basel Convention contains three main entries on plastic wastes, in Annexes II, VIII and IX of the convention. The Plastic Waste Amendments of the convention are now binding on 186 states. In addition to ensuring that the trade in plastic waste is more transparent and better regulated, under the Basel Convention governments must take steps not only to ensure the environmentally sound management of plastic waste, but also to tackle plastic waste at its source. == Basel watchdog == The Basel Action Network (BAN) is a charitable civil society non-governmental organization that works as a consumer watchdog for implementation of the Basel Convention. BAN's principal aim is fighting the exportation of toxic waste, including plastic waste, from industrialized societies to developing countries. BAN is based in Seattle, Washington, United States, with a partner office in the Philippines. BAN works to curb trans-border trade in hazardous electronic waste, land dumping, incineration, and the use of prison labor. == See also == Asbestos and the law Bamako Convention Electronic waste by country Rotterdam
|
{
"page_id": 4012,
"source": null,
"title": "Basel Convention"
}
|
Convention Stockholm Convention == References == This article incorporates text from a free content work. Licensed under Cc BY-SA 3.0 IGO (license statement/permission). Text taken from Drowning in Plastics – Marine Litter and Plastic Waste Vital Graphics, United Nations Environment Programme. == Further reading == Toxic Exports, Jennifer Clapp, Cornell University Press, 2001. Challenging the Chip: Labor Rights and Environmental Justice in the Global Electronics Industry, Ted Smith, David A. Sonnenfeld, and David Naguib Pellow, eds., Temple University Press link, ISBN 1-59213-330-4. "Toxic Trade: International Knowledge Networks & the Development of the Basel Convention," Jason Lloyd, International Public Policy Review, UCL. == External links == Official website Text of the Convention "A Simplified Guide to the Basel Convention" Text of the regulation no.1013/2006 of the European Union on shipments of waste Flow of Waste among Basel Parties Introductory note to the Basel Convention by Dr. Katharina Kummer Peiry, Executive Secretary of the Basel Convention, UNEP on the website of the UN Audiovisual Library of International Law Basel Convention, Treaty available in ECOLEX-the gateway to environmental law (English) Organisations Basel Action Network Africa Institute for the Environmentally Sound Management of Hazardous and other Wastes a.k.a. Basel Convention Regional Centre Pretoria Page on the Basel Convention at Greenpeace Basel Convention Coordinating Centre for Asia and the Pacific
|
{
"page_id": 4012,
"source": null,
"title": "Basel Convention"
}
|
Quantum: Einstein, Bohr, and the Great Debate About the Nature of Reality is a science history book written by Manjit Kumar. It was released on October 16, 2008. == Overview == Kumar describes Einstein, Bohr and the "Great Debate about the Nature of Reality" that played out over a number of years, particularly at the Fifth Solvay International Conference on electrons and photons in 1927, where the physicists met to discuss the then newly formulated quantum theory. It narrates the lives of several eminent physicists and their work, and also gives a view of the environment of science at that time. It tells the life stories of Bohr, Einstein, Planck, Rutherford, Schrödinger, and others. == Reception == Quantum was number 1 on the Hindustan Times top 10 science books you should read in 2012. Quantum was also shortlisted for the BBC Samuel Johnson Prize for Non-Fiction, 2009. == See also == Bohr–Einstein debates EPR paradox == References == == External links == Manjit Kumar's blog about his book Quantum
|
{
"page_id": 24973222,
"source": null,
"title": "Quantum (book)"
}
|
The molecular formula C14H18FNO (molar mass: 235.302 g/mol) may refer to: Fluorexetamine 2F-NENDCK
|
{
"page_id": 76156847,
"source": null,
"title": "C14H18FNO"
}
|
Polyester fiberfill is a synthetic fiber used for stuffing pillows and other soft objects such as stuffed animals. It is also used in audio speakers for its acoustic properties. It is commonly sold under the trademark name Poly-Fil, or un-trademarked as polyfill. == References ==
|
{
"page_id": 55709615,
"source": null,
"title": "Polyester fiberfill"
}
|
The molecular formula C16H19N3O4S (molar mass: 349.40 g/mol) may refer to: Ampicillin, a beta-lactam antibiotic Cefradine, a first-generation cephalosporin antibiotic Resminostat
|
{
"page_id": 24711089,
"source": null,
"title": "C16H19N3O4S"
}
|
In electrochemistry, the Cottrell equation describes the change in electric current with respect to time in a controlled potential experiment, such as chronoamperometry. Specifically, it describes the current response when the potential is a step function in time. It was derived by Frederick Gardner Cottrell in 1903. For a simple redox event, such as the ferrocene/ferrocenium couple, the current measured depends on the rate at which the analyte diffuses to the electrode. That is, the current is said to be "diffusion controlled". The Cottrell equation describes the case for an electrode that is planar but can also be derived for spherical, cylindrical, and rectangular geometries by using the corresponding Laplace operator and boundary conditions in conjunction with Fick's second law of diffusion: i = \frac{nFAc_{j}^{0}\sqrt{D_{j}}}{\sqrt{\pi t}}, where i = current, in units of A; n = number of electrons (to reduce/oxidize one molecule of analyte j, for example); F = Faraday constant, 96485 C/mol; A = area of the (planar) electrode in cm^{2}; c_{j}^{0} = initial concentration of the reducible analyte j in mol/cm^{3}; D_{j} = diffusion coefficient for species j in cm^{2}/s; t = time in s. Deviations from linearity in the plot of i vs. t^{−1/2} sometimes indicate that the redox event is associated with other processes, such as association of a ligand, dissociation of a ligand, or a change in geometry. Deviations from linearity can be expected at very short time scales due to non-ideality in the potential step. At long time scales, buildup of the diffusion layer causes a shift from a linearly dominated to a radially dominated diffusion regime, which causes another deviation from linearity. In practice, the Cottrell equation simplifies to
|
{
"page_id": 11866035,
"source": null,
"title": "Cottrell equation"
}
|
i = kt^{−1/2}, where k is the collection of constants for a given system (n, F, A, c_{j}^{0}, D_{j}). == See also == Voltammetry Electroanalytical methods Limiting current Anson equation == References ==
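The planar-electrode Cottrell expression lends itself to a short numerical sketch. The helper below is an illustration only (the function name and the example parameter values are assumptions, not from the article); it also confirms the t^{−1/2} scaling behind the simplified form i = kt^{−1/2}.

```python
import math

F = 96485.0  # Faraday constant, C/mol

def cottrell_current(n, area_cm2, conc_mol_per_cm3, diff_cm2_per_s, t_s):
    """Cottrell current at a planar electrode: i = n F A c0 sqrt(D) / sqrt(pi t)."""
    return (n * F * area_cm2 * conc_mol_per_cm3
            * math.sqrt(diff_cm2_per_s) / math.sqrt(math.pi * t_s))

# Illustrative (assumed) values: one-electron couple, 0.1 cm^2 electrode,
# 1 mM analyte (1e-6 mol/cm^3), D = 1e-5 cm^2/s.
i1 = cottrell_current(1, 0.1, 1e-6, 1e-5, t_s=1.0)
i4 = cottrell_current(1, 0.1, 1e-6, 1e-5, t_s=4.0)
# Quadrupling t halves the current, consistent with i = k t^(-1/2).
```

Because every parameter other than t enters as a constant prefactor, plotting i against t^{−1/2} for such data gives a straight line through the origin, which is exactly the linearity check discussed above.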
|
{
"page_id": 11866035,
"source": null,
"title": "Cottrell equation"
}
|
The molecular formula C6H11NO4 may refer to: α-Aminoadipic acid N-Methyl-L-glutamic acid SYM-2081 (4-methyl-L-glutamic acid)
|
{
"page_id": 24317876,
"source": null,
"title": "C6H11NO4"
}
|
Boiled fish, or more precisely salt-boiled fish, is fish boiled with salt and thus preserved for later consumption. Although this method is used in other parts of the world, it is of major commercial significance only in Southeast Asia. The shelf life of products so treated can range from as little as one or two days, up to several months. In Indonesia, this fish preservation method is known as pindang. == Preservation method == The technique works to preserve fish through both exposure to high temperatures and salting: the high temperature of boiling water kills microbes that might otherwise decompose the fish flesh, while the application of salt directly promotes preservation. This technique is especially prevalent in the tropics during monsoon season, since the torrential rains hinder the simpler and traditional salting and sun-drying method of preservation. That salting and sun-drying method is considered 'dry preservation', while the pindang method is often called 'wet preservation'. After being covered in coarse salt, the fish are boiled on a low flame until the liquids are evaporated and the salt seasoning is well absorbed into the fish. The wet boiling method requires less salt than dry preservation, and thus the taste is not as salty as that of sun-dried salted fish. Although the basic ingredients often involve only fish, water, and salt, other ingredients, especially spices or herbs that contain tannin, can be added to boost preservation effectiveness. Examples of sources of tannin used include turmeric, tamarind, shallot skin, teak leaves, guava leaves, tea, and soy sauce, as well as other spices common in Southeast Asia. Including tannins gives the food a yellowish to brown color, and fish so treated will last longer than fish preserved via the plain boiled method. == Regional variation == In Indonesia, various boiled fish products are generally known
|
{
"page_id": 65474487,
"source": null,
"title": "Boiled fish"
}
|
as pindang, and the method of preparation is often described as 'Indonesian salt-boiled fish'. == See also == Cured fish Fish processing Fish preservation Salted fish Salted squid Dried shrimp Brining Food portal == References ==
|
{
"page_id": 65474487,
"source": null,
"title": "Boiled fish"
}
|
Metallurgical failure analysis is the process of determining the mechanism that has caused a metal component to fail. It can identify the cause of failure, providing insight into the root cause and potential solutions to prevent similar failures in the future, as well as culpability, which is important in legal cases. Resolving the source of metallurgical failures can be of financial interest to companies. The annual cost of corrosion (a common cause of metallurgical failures) in the United States was estimated by NACE International in 2012 to be $450 billion a year, a 67% increase compared to estimates for 2001. These failures can be analyzed to determine their root cause, which, if corrected, would reduce the cost of failures to companies. Failure can be broadly divided into functional failure and expected performance failure. Functional failure occurs when a component or process fails and its entire parent system stops functioning entirely. This category includes the common idea of a component fracturing rapidly. Expected performance failures are when a component causes the system to perform below a certain performance criterion, such as life expectancy, operating limits, or shape and color. Some performance criteria are documented by the supplier, such as maximum load allowed on a tractor, while others are implied or expected by the customer, such as gas consumption (miles per gallon for automobiles). Often a combination of both environmental conditions and stress will cause failure. Metal components are designed to withstand the environment and stresses that they will be subjected to. The design of a metal component involves not only a specific elemental composition but also specific manufacturing processes such as heat treatments, machining processes, etc. The huge array of different metals that result all have unique physical properties. Specific properties are designed into metal components to make them more robust
|
{
"page_id": 28446647,
"source": null,
"title": "Metallurgical failure analysis"
}
|
to various environmental conditions. These differences in physical properties will exhibit unique failure modes. A metallurgical failure analysis takes into account as much of this information as possible during analysis. The ultimate goal of failure analysis is to provide a determination of the root cause and a solution to any underlying problems to prevent future failures. == Failure investigation == The first step in failure analysis is investigating the failure to collect information. The sequence of steps for information gathering in a failure investigation is: Collection of information about the circumstances surrounding the failure and selection of specimens Preliminary examination of the failed part (visual examination) and comparison with parts that have not failed Macroscopic examination and analysis and photographic documentation of specimens (fracture surfaces, secondary cracks, and other surface phenomena) Microscopic examination and analysis of specimens (fracture surfaces) Selection and preparation of metallographic sections Microscopic examination and analysis of prepared metallographic specimens Nondestructive testing Destructive/mechanical testing Determination of failure mechanism Chemical analysis (bulk, local, surface corrosion products, deposits or coatings) Identification of all possible root causes Testing of the most likely root causes under simulated service conditions Analysis of all the evidence, formulation of conclusions, and writing the report including recommendations === Techniques used === Various techniques are used in the investigative process of metallurgical failure analysis.
Macroscopic examination: camera, stereoscope Microscopic examination: light microscopy, electron microscopy, x-ray microscopy, metallographic etching Mechanical testing: hardness testing, tensile testing, Charpy impact testing Chemical testing: microprobe analysis, energy dispersive spectroscopy Non-destructive testing: Non-destructive testing is a test method that allows certain physical properties of metal to be examined without taking the samples completely out of service. NDT is generally used to detect failures in components before the component fails catastrophically. Destructive testing: Destructive testing involves removing a metal component from service and sectioning the component
|
{
"page_id": 28446647,
"source": null,
"title": "Metallurgical failure analysis"
}
|
for analysis. Destructive testing gives the failure analyst the ability to conduct the analysis in a laboratory setting and perform tests on the material that will ultimately destroy the component. == Metallurgical failure modes == There is no standardized list of metallurgical failure modes and different metallurgists might use a different name for the same failure mode. The failure mode terms listed below are those accepted by ASTM, ASM, and/or NACE as distinct metallurgical failure mechanisms. === Caused by corrosion and stress === Stress corrosion cracking Stress corrosion (NACE term) Corrosion fatigue Caustic cracking (ASTM term) Caustic embrittlement (ASM term) Sulfide stress cracking (ASM, NACE term) Stress-accelerated Corrosion (NACE term) Hydrogen stress cracking (ASM term) Hydrogen-assisted stress corrosion cracking (ASM term) === Caused by stress === Fatigue (ASTM, ASM term) Mechanical overload Creep Rupture Cracking (NACE term) Embrittlement === Caused by corrosion === Erosion corrosion Pitting corrosion Oxygen pitting Hydrogen embrittlement Hydrogen-induced cracking (ASM term) Corrosion embrittlement (ASM term) Hydrogen disintegration (NACE term) Hydrogen-assisted cracking (ASM term) Hydrogen blistering Corrosion == Potential root causes == Potential root causes of metallurgical failures are vast, spanning the lifecycle of a component from design to manufacturing to usage. The most common reasons for failures can be classified into the following categories: === Service or operation conditions === Failures due to service or operation conditions include using a component outside of its intended conditions, such as an impact force or a high load. They can also include failures due to unexpected conditions in usage, such as an unexpected contact point that causes wear and abrasion or an unexpected humidity level or chemical presence that causes corrosion. These factors result in the component failing at an earlier time than expected.
=== Improper maintenance === Improper maintenance would cause potential sources of fracture to go untreated and
|
{
"page_id": 28446647,
"source": null,
"title": "Metallurgical failure analysis"
}
|
lead to premature failure of a component in the future. The reason for improper maintenance could be either intentional, such as skipping a yearly maintenance to avoid the cost, or unintentional, such as using the wrong engine oil. === Improper testing or inspection === Testing and/or inspection are typically included in component manufacturing lines to verify the product meets some set of standards to ensure the desired performance in the field. Improper testing or inspection would circumvent these quality checks and could allow a part with a defect that would normally disqualify the component from field use to be sold to a customer, potentially leading to a failure. === Fabrication or manufacturing errors === Manufacturing or fabrication errors occur during the processing of the material or component. For metal parts, casting defects are common, such as cold shuts, hot tears or slag inclusions. Other examples are surface treatment problems and incorrect processing parameters, such as improper ramming of a sand mold or the wrong temperature during hardening. === Design errors === Design errors arise when the desired use case, such as the stress state in service or potential corrosive agents in the service environment, was not properly accounted for, leading to an ineffective design. Design errors often involve dimensioning and materials selection, but the complete design can also be at fault. == Use of computational methods for failure analysis == Computational methods have been increasing in popularity as a way to test possible root causes, because they do not require sacrificing a component to prove a root cause. Common cases where computational methods are used are failures due to erosion, failures of components under complex stress states, and predictive analyses. Computational fluid dynamics is used to determine the flow pattern and shear stresses on a component that has failed due to erosive wear.
|
{
"page_id": 28446647,
"source": null,
"title": "Metallurgical failure analysis"
}
|
Finite element analysis is used to model components under complex stress states. Finite element analysis as well as phase field models can be used for predicting crack propagation and failure, which are then used to prevent failure by influencing component design. == See also == Forensic engineering Corrosion engineering Failure analysis Fracture Fracture mechanics == References ==
|
{
"page_id": 28446647,
"source": null,
"title": "Metallurgical failure analysis"
}
|
In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The term is closely associated with the work of the mathematician and meteorologist Edward Norton Lorenz. He noted that the butterfly effect is derived from the example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. Lorenz originally used a seagull causing a storm but was persuaded to make it more poetic with the use of a butterfly and tornado by 1972. He discovered the effect when he observed runs of his weather model with initial condition data that were rounded in a seemingly inconsequential manner. He noted that the weather model would fail to reproduce the results of runs with the unrounded initial condition data. A very small change in initial conditions had created a significantly different outcome. The idea that small causes may have large effects in weather was earlier acknowledged by the French mathematician and physicist Henri Poincaré. The American mathematician and philosopher Norbert Wiener also contributed to this theory. Lorenz's work placed the concept of instability of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos. The concept of the butterfly effect has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences. == History == In The Vocation of Man (1800), Johann Gottlieb Fichte says "you could not
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole". Chaos theory and the sensitive dependence on initial conditions were described in numerous forms of literature. This is evidenced by the case of the three-body problem by Poincaré in 1890. He later proposed that such phenomena could be common, for example, in meteorology. In 1898, Jacques Hadamard noted general divergence of trajectories in spaces of negative curvature. Pierre Duhem discussed the possible general significance of this in 1908. In 1950, Alan Turing noted: "The displacement of a single electron by a billionth of a centimetre at one moment might make the difference between a man being killed by an avalanche a year later, or escaping." The idea that the death of one butterfly could eventually have a far-reaching ripple effect on subsequent historical events made its earliest known appearance in "A Sound of Thunder", a 1952 short story by Ray Bradbury in which a time traveller alters the future by inadvertently treading on a butterfly in the past. More precisely, though, almost the exact idea and the exact phrasing, of a tiny insect's wing affecting the entire atmosphere's winds, were published in a children's book which became extremely successful and well-known globally in 1962, the year before Lorenz published: "...whatever we do affects everything and everyone else, if even in the tiniest way. Why, when a housefly flaps his wings, a breeze goes round the world." -- The Princess of Pure Reason In 1961, Lorenz was running a numerical computer model to redo a weather prediction from the middle of the previous run as a shortcut. He entered the initial condition 0.506 from the printout instead of entering the full precision 0.506127 value. The result was a completely different
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
weather scenario. Lorenz wrote: At one point I decided to repeat some of the computations in order to examine what was happening in greater detail. I stopped the computer, typed in a line of numbers that it had printed out a while earlier, and set it running again. I went down the hall for a cup of coffee and returned after about an hour, during which time the computer had simulated about two months of weather. The numbers being printed were nothing like the old ones. I immediately suspected a weak vacuum tube or some other computer trouble, which was not uncommon, but before calling for service I decided to see just where the mistake had occurred, knowing that this could speed up the servicing process. Instead of a sudden break, I found that the new values at first repeated the old ones, but soon afterward differed by one and then several units in the last [decimal] place, and then began to differ in the next to the last place and then in the place before that. In fact, the differences more or less steadily doubled in size every four days or so, until all resemblance with the original output disappeared somewhere in the second month. This was enough to tell me what had happened: the numbers that I had typed in were not the exact original numbers, but were the rounded-off values that had appeared in the original printout. The initial round-off errors were the culprits; they were steadily amplifying until they dominated the solution. In 1963, Lorenz published a theoretical study of this effect in a highly cited, seminal paper called Deterministic Nonperiodic Flow (the calculations were performed on a Royal McBee LGP-30 computer). Elsewhere he stated: One meteorologist remarked that if the theory were correct, one flap
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
of a sea gull's wings would be enough to alter the course of the weather forever. The controversy has not yet been settled, but the most recent evidence seems to favor the sea gulls. Following proposals from colleagues, in later speeches and papers, Lorenz used the more poetic butterfly. According to Lorenz, when he failed to provide a title for a talk he was to present at the 139th meeting of the American Association for the Advancement of Science in 1972, Philip Merilees concocted Does the flap of a butterfly's wings in Brazil set off a tornado in Texas? as a title. Although a butterfly flapping its wings has remained constant in the expression of this concept, the location of the butterfly, the consequences, and the location of the consequences have varied widely. The phrase refers to the effect of a butterfly's wings creating tiny changes in the atmosphere that may ultimately alter the path of a tornado or delay, accelerate, or even prevent the occurrence of a tornado in another location. The butterfly does not power or directly create the tornado, but the term is intended to imply that the flap of the butterfly's wings can cause the tornado: in the sense that the flap of the wings is a part of the initial conditions of an interconnected complex web; one set of conditions leads to a tornado, while the other set of conditions doesn't. The flapping wing creates a small change in the initial condition of the system, which cascades to large-scale alterations of events (compare: domino effect). Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different—but it's also equally possible that the set of conditions without the butterfly flapping its wings is the set that leads to a tornado.
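Lorenz's round-off experiment can be reproduced in miniature with his 1963 system of three ordinary differential equations. The sketch below is illustrative only (the explicit-Euler scheme, step size, parameter values, and size of the perturbation are assumptions, not taken from this article): two trajectories that start one part in a million apart end up effectively unrelated.

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz 1963 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-6)  # perturbed in the sixth decimal place
for _ in range(6000):        # about 30 model time units
    a, b = lorenz_step(a), lorenz_step(b)

# The initial 1e-6 separation has grown by many orders of magnitude;
# the two trajectories are now effectively unrelated, mirroring
# Lorenz's rerun of his weather model from rounded-off printout values.
separation = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
```

Shrinking the perturbation only delays the divergence rather than preventing it, which is the practical content of sensitive dependence on initial conditions.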
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. This problem motivated the development of ensemble forecasting, in which a number of forecasts are made from perturbed initial conditions. Some scientists have since argued that the weather system is not as sensitive to initial conditions as previously believed. David Orrell argues that the major contributor to weather forecast error is model error, with sensitivity to initial conditions playing a relatively small role. Stephen Wolfram also notes that the Lorenz equations are highly simplified and do not contain terms that represent viscous effects; he believes that these terms would tend to damp out small perturbations. Recent studies using generalized Lorenz models that included additional dissipative terms and nonlinearity suggested that a larger heating parameter is required for the onset of chaos. While the "butterfly effect" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969 which took the idea a step further. Lorenz proposed a mathematical model for how tiny motions in the atmosphere scale up to affect larger systems. He found that the systems in that model could only be predicted up to a specific point in the future, and beyond that, reducing the error in the initial conditions would not increase the predictability (as long as the error is not zero). This demonstrated that a deterministic system could be "observationally indistinguishable" from a non-deterministic one in terms of predictability. Recent re-examinations of this paper suggest that it offered a significant challenge to the idea that our universe is deterministic, comparable to the challenges
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
offered by quantum physics. In the book The Essence of Chaos, published in 1993, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." This feature is the same as sensitive dependence of solutions on initial conditions (SDIC). In the same book, Lorenz drew on the activity of skiing and developed an idealized skiing model for revealing the sensitivity of time-varying paths to initial positions. A predictability horizon is determined before the onset of SDIC. == Illustrations == == Theory and mathematical definition == Recurrence, the approximate return of a system toward its initial conditions, together with sensitive dependence on initial conditions, are the two main ingredients for chaotic motion. They have the practical consequence of making complex systems, such as the weather, difficult to predict past a certain time range (approximately a week in the case of weather), since it is impossible to measure the starting atmospheric conditions completely accurately. A dynamical system displays sensitive dependence on initial conditions if points arbitrarily close together separate over time at an exponential rate. The definition is not topological, but essentially metrical. Lorenz defined sensitive dependence as follows: The property characterizing an orbit (i.e., a solution) if most other orbits that pass close to it at some point do not remain close to it as time advances. If M is the state space for the map f^{t}, then f^{t} displays sensitive dependence on initial conditions if for any x in M and any δ > 0, there is a y in M, with distance d(·, ·), such that 0 < d(x, y) < δ
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
and such that d(f^{τ}(x), f^{τ}(y)) > e^{aτ} d(x, y) for some positive parameter a. The definition does not require that all points from a neighborhood separate from the base point x, but it requires one positive Lyapunov exponent. In addition to a positive Lyapunov exponent, boundedness is another major feature within chaotic systems. The simplest mathematical framework exhibiting sensitive dependence on initial conditions is provided by a particular parametrization of the logistic map: x_{n+1} = 4x_{n}(1 − x_{n}), 0 ≤ x_{0} ≤ 1, which, unlike most chaotic maps, has a closed-form solution: x_{n} = sin^{2}(2^{n}θπ), where the initial condition parameter θ is given by θ = (1/π) sin^{−1}(x_{0}^{1/2}). For rational θ, after a finite number of iterations x_{n} maps into a periodic sequence. But almost all θ are irrational, and, for irrational θ, x_{n} never repeats itself – it is non-periodic. This solution equation clearly demonstrates the two key features of chaos – stretching and folding: the factor 2^{n} shows the exponential growth of stretching, which results in sensitive dependence on initial conditions (the butterfly effect), while the squared sine function keeps x_{n} folded within the range [0, 1]. == In physical systems == === In weather === ==== Overview
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
==== The butterfly effect is most familiar in terms of weather; it can easily be demonstrated in standard weather prediction models, for example. The climate scientists James Annan and William Connolley explain that chaos is important in the development of weather prediction methods; models are sensitive to initial conditions. They add the caveat: "Of course the existence of an unknown butterfly flapping its wings has no direct bearing on weather forecasts, since it will take far too long for such a small perturbation to grow to a significant size, and we have many more immediate uncertainties to worry about. So the direct impact of this phenomenon on weather prediction is often somewhat wrong." ==== Differentiating types of butterfly effects ==== The concept of the butterfly effect encompasses several phenomena. The two kinds of butterfly effect, namely the sensitive dependence on initial conditions and the ability of a tiny perturbation to create an organized circulation at large distances, are not exactly the same. In Palmer et al., a new type of butterfly effect is introduced, highlighting the potential impact of small-scale processes on finite predictability within the Lorenz 1969 model. Additionally, the identification of ill-conditioned aspects of the Lorenz 1969 model points to a practical form of finite predictability. These two distinct mechanisms suggesting finite predictability in the Lorenz 1969 model are collectively referred to as the third kind of butterfly effect. The authors have considered Palmer et al.'s suggestions and have aimed to present their perspective without raising specific contentions. The third kind of butterfly effect with finite predictability, as discussed, was primarily proposed based on a convergent geometric series, known as Lorenz's and Lilly's formulas. Ongoing discussions are addressing the validity of these two formulas for estimating predictability limits. A comparison of the two kinds of
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
butterfly effects and the third kind of butterfly effect has been documented. In recent studies, both meteorological and non-meteorological linear models have shown that instability plays a role in producing a butterfly effect, which is characterized by brief but significant exponential growth resulting from a small disturbance. ==== Recent debates on butterfly effects ==== The first kind of butterfly effect (BE1), known as SDIC (sensitive dependence on initial conditions), is widely recognized and demonstrated through idealized chaotic models. However, opinions differ regarding the second kind of butterfly effect, specifically the impact of a butterfly flapping its wings on tornado formation, as indicated in two 2024 articles. In more recent discussions published in Physics Today, it is acknowledged that the second kind of butterfly effect (BE2) has never been rigorously verified using a realistic weather model. While the studies suggest that BE2 is unlikely in the real atmosphere, its invalidity in this context does not negate the applicability of BE1 in other areas, such as pandemics or historical events. For the third kind of butterfly effect, the limited predictability within the Lorenz 1969 model is explained by scale interactions in one article and by system ill-conditioning in another, more recent study. ==== Finite predictability in chaotic systems ==== According to Lighthill (1986), the presence of SDIC (commonly known as the butterfly effect) implies that chaotic systems have a finite predictability limit. A literature review found that Lorenz's perspective on the predictability limit can be condensed into the following statements: (A) The Lorenz 1963 model qualitatively revealed the essence of finite predictability within a chaotic system such as the atmosphere. However, it did not determine a precise limit for the predictability of the atmosphere. (B) In the 1960s, the two-week predictability limit was originally
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
estimated based on a doubling time of five days in real-world models. Since then, this finding has been documented in Charney et al. (1966) and has become the consensus. Recently, a short video has been created to present Lorenz's perspective on the predictability limit. A recent study refers to the two-week predictability limit, initially calculated in the 1960s with the Mintz-Arakawa model's five-day doubling time, as the "Predictability Limit Hypothesis." Inspired by Moore's Law, this term acknowledges the collaborative contributions of Lorenz, Mintz, and Arakawa under Charney's leadership. The hypothesis supports the investigation into extended-range predictions using both partial differential equation (PDE)-based physics methods and artificial intelligence (AI) techniques. ==== Revised perspectives on chaotic and non-chaotic systems ==== By revealing coexisting chaotic and non-chaotic attractors within Lorenz models, Shen and his colleagues proposed a revised view that "weather possesses chaos and order", in contrast to the conventional view that "weather is chaotic". As a result, sensitive dependence on initial conditions (SDIC) does not always appear. Namely, SDIC appears when two orbits (i.e., solutions) end up on the chaotic attractor; it does not appear when two orbits move toward the same point attractor. The motion of a double pendulum provides an analogy: for large angles of swing the motion is often chaotic, while for small angles of swing it is non-chaotic. Multistability is present when a system (e.g., the double pendulum system) contains more than one bounded attractor, with the outcome depending only on initial conditions. Multistability has been illustrated using kayaking (Figure 1 of the cited study), where the appearance of strong currents and a stagnant area suggests instability and local stability, respectively. As a result, when two kayaks move along strong currents, their paths display SDIC.
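The SDIC just described can be demonstrated numerically with the Lorenz 1963 system. The sketch below uses the standard textbook parameter values (sigma = 10, rho = 28, beta = 8/3) and an arbitrary perturbation size; none of these numbers are taken from the studies cited above. Two trajectories started 10^-8 apart end up macroscopically far apart.

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz 1963 system."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def separation(a, b):
    """Euclidean distance between two state vectors."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # same state, perturbed by 1e-8 in x
dt, steps = 0.01, 4000       # integrate both orbits to t = 40
initial = separation(a, b)
for _ in range(steps):
    a = lorenz_step(a, dt)
    b = lorenz_step(b, dt)
print(initial, separation(a, b))  # the gap grows by many orders of magnitude
```

The exponential growth saturates once the two orbits are fully decorrelated on the attractor, which is the mechanism behind the finite predictability limits discussed above.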
On the other hand, when two kayaks move
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
into a stagnant area, they become trapped, showing no typical SDIC (although a chaotic transient may occur). Such features of SDIC or no SDIC suggest two types of solutions and illustrate the nature of multistability. By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and the aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows: "The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time-varying forcing from recurrent seasons." === In quantum mechanics === The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics, including atoms in strong fields and the anisotropic Kepler problem. Some authors have argued that extreme (exponential) dependence on initial conditions is not expected in pure quantum treatments; however, the sensitive dependence on initial conditions demonstrated in classical motion is included in the semiclassical treatments developed by Martin Gutzwiller and John B. Delos and co-workers. Random matrix theory and simulations with quantum computers indicate that some versions of the butterfly effect in quantum mechanics do not exist. Other authors suggest that the butterfly effect can be observed in quantum systems. Zbyszek P. Karkuszewski et al. consider the time evolution of quantum systems which have slightly different Hamiltonians. They investigate the level of sensitivity of quantum systems to small changes in their given Hamiltonians. David Poulin et al. presented a quantum algorithm to measure fidelity decay, which "measures the rate at which identical initial states diverge when subjected to slightly different dynamics". They consider fidelity decay to be "the closest quantum analog to the (purely classical) butterfly effect".
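Fidelity decay is easy to sketch numerically: evolve the same initial state under two Hamiltonians that differ by a small perturbation and track the squared overlap of the two states. The matrix size, perturbation strength, and random-matrix construction below are illustrative assumptions, not details from Poulin et al.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # dimension of the toy Hilbert space (an arbitrary choice)

def random_hermitian(rng, n):
    """A dense random Hermitian matrix, a common stand-in for a generic Hamiltonian."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

H = random_hermitian(rng, n)   # reference Hamiltonian
V = random_hermitian(rng, n)   # direction of the perturbation
eps = 0.05                     # small change in the Hamiltonian

def propagator(h, t):
    """exp(-i h t) via eigendecomposition (units with hbar = 1)."""
    w, u = np.linalg.eigh(h)
    return u @ np.diag(np.exp(-1j * w * t)) @ u.conj().T

psi0 = np.zeros(n, complex)
psi0[0] = 1.0                  # identical initial state for both evolutions

fidelities = []
for t in np.linspace(0.0, 3.0, 7):
    psi1 = propagator(H, t) @ psi0
    psi2 = propagator(H + eps * V, t) @ psi0
    fidelities.append(abs(np.vdot(psi1, psi2)) ** 2)

print(fidelities)  # starts at 1.0 and decays as the two dynamics diverge
```

This mirrors the contrast drawn in the text: the initial state is identical, and it is the Hamiltonian, not the initial condition, that carries the "flap of the wings".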
Whereas the classical butterfly effect considers the effect of a small
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. This quantum butterfly effect has been demonstrated experimentally. Quantum and semiclassical treatments of system sensitivity to initial conditions are known as quantum chaos. == In popular culture == The butterfly effect has appeared across media such as literature (for instance, A Sound of Thunder), films and television (such as The Simpsons), video games (such as Life Is Strange), webcomics (such as Homestuck), AI-driven expansive language models, and more. == See also == == References == == Further reading == James Gleick, Chaos: Making a New Science, New York: Viking, 1987. 368 pp. Devaney, Robert L. (2003). Introduction to Chaotic Dynamical Systems. Westview Press. ISBN 0670811785. Hilborn, Robert C. (2004). "Sea gulls, butterflies, and grasshoppers: A brief history of the butterfly effect in nonlinear dynamics". American Journal of Physics. 72 (4): 425–427. Bibcode:2004AmJPh..72..425H. doi:10.1119/1.1636492. Bradbury, Ray. "A Sound of Thunder." Collier's. 28 June 1952. == External links == Weather and Chaos: The Work of Edward N. Lorenz. A short documentary that explains the "butterfly effect" in the context of Lorenz's work. The Chaos Hypertextbook. An introductory primer on chaos and fractals. Dizikes, Peter (2008-06-08). "The meaning of the butterfly. Why pop culture loves the 'butterfly effect,' and gets it totally wrong". The Boston Globe. Boston, Massachusetts. Retrieved 2022-06-19. New England Complex Systems Institute - Concepts: Butterfly Effect. ChaosBook.org. Advanced graduate textbook on chaos (no fractals). Weisstein, Eric W. "Butterfly Effect". MathWorld.
|
{
"page_id": 4024,
"source": null,
"title": "Butterfly effect"
}
|
Devised by the British engineer Christopher Cockerell, the momentum curtain is a unique and efficient way to reduce friction between a vehicle and its surface of travel, be it water or land, by levitating the vehicle above this surface on a cushion of air. It is this principle of levitation upon which a hovercraft is based, and Cockerell set about applying his momentum curtain theory to hovercraft to reduce the friction they must overcome in travel. Levitating a vehicle above the ground or water to reduce its drag was not a new concept. John Thornycroft discovered in 1877 that trapping air beneath a ship's hull, or pumping air beneath it with bellows, decreased the effects of friction upon the hull, thereby increasing the ship's top attainable speeds. However, technology at the time was insufficient for Thornycroft's ideas to be developed further. Cockerell took the idea of pumped air under a hull (this then becoming a plenum, i.e. the opposite of a vacuum) and improved upon it further. Simply pumping air between a hull and the ground wasted a great deal of energy through leakage of air around the edges of the hull. Cockerell discovered that by generating a wall (curtain) of high-speed, downward-directed air around the edges of a hull, less air leaked out from the sides (due to the momentum of the high-speed air molecules), and thus a greater pressure could be attained beneath the hull. So, with the same input power, a greater amount of lift could be developed, and the hull could be lifted higher above the surface, reducing friction and increasing clearance. This theory was tried, tested and developed throughout the 1950s and 1960s until it was finally realised at full scale in the SR-N1 hovercraft. == References ==
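The plenum principle lends itself to a toy calculation: at hover, the gauge pressure of the air cushion must balance the craft's weight over the hull footprint. The mass and area below are invented for illustration, and the dynamics of the curtain itself (which set how much airflow is needed to maintain that pressure) are ignored.

```python
# Toy plenum calculation: cushion pressure needed to hover.
# All figures are hypothetical, chosen only to illustrate the scale involved.
mass_kg = 4000.0          # assumed craft mass
g = 9.81                  # gravitational acceleration, m/s^2
cushion_area_m2 = 30.0    # assumed hull footprint

# Gauge pressure the air cushion must sustain to balance the weight.
cushion_pressure_pa = mass_kg * g / cushion_area_m2
print(round(cushion_pressure_pa))  # 1308 Pa, a tiny fraction of atmospheric pressure
```

The point of the momentum curtain is that this modest pressure can be held with far less pumped air than an open plenum would leak away at its edges.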
|
{
"page_id": 21958586,
"source": null,
"title": "Momentum curtain"
}
|
The molecular formula C10H20O (molar mass: 156.27 g/mol, exact mass: 156.1514 u) may refer to: Citronellol, also called dihydrogeraniol Decanal 2-Decanone Menthol Rhodinol
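The quoted figures can be reproduced from standard atomic weights (for the molar mass) and principal-isotope exact masses (for the exact mass). The constants below are standard reference values, rounded.

```python
# Recompute the molar mass and monoisotopic exact mass of C10H20O.
atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}        # g/mol
exact_mass = {"C": 12.0, "H": 1.007825, "O": 15.994915}       # u, lightest isotope
formula = {"C": 10, "H": 20, "O": 1}

molar = sum(atomic_weight[el] * n for el, n in formula.items())
mono = sum(exact_mass[el] * n for el, n in formula.items())
print(round(molar, 2), round(mono, 4))  # 156.27 156.1514
```

Both results match the values quoted above for this formula.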
|
{
"page_id": 22417345,
"source": null,
"title": "C10H20O"
}
|
Photodisintegration (also called phototransmutation, or a photonuclear reaction) is a nuclear process in which an atomic nucleus absorbs a high-energy gamma ray, enters an excited state, and immediately decays by emitting a subatomic particle. The incoming gamma ray effectively knocks one or more neutrons, protons, or an alpha particle out of the nucleus. The reactions are called (γ,n), (γ,p), and (γ,α), respectively. Photodisintegration is endothermic (energy absorbing) for atomic nuclei lighter than iron and sometimes exothermic (energy releasing) for atomic nuclei heavier than iron. Photodisintegration is responsible for the nucleosynthesis of at least some heavy, proton-rich elements via the p-process in supernovae of type Ib, Ic, or II. This causes the iron to further fuse into the heavier elements. == Photodisintegration of deuterium == A photon carrying 2.22 MeV or more energy can photodisintegrate an atom of deuterium: γ + ²₁H → ¹₁H + n. James Chadwick and Maurice Goldhaber used this reaction to measure the proton-neutron mass difference. This experiment proved that a neutron is not a bound state of a proton and an electron, as had been proposed by Ernest Rutherford. == Photodisintegration of beryllium == A photon carrying 1.67 MeV or more energy can photodisintegrate an atom of beryllium-9 (100% of natural beryllium, its only stable isotope): γ + ⁹₄Be → 2 ⁴₂He + n. Antimony-124 is assembled with beryllium to make laboratory neutron sources and startup neutron sources. Antimony-124 (half-life 60.20 days) emits β− particles and 1.690 MeV gamma rays (also 0.602 MeV and 9 fainter emissions from 0.645 to 2.090 MeV), yielding stable tellurium-124. Gamma rays from antimony-124 split beryllium-9 into two alpha particles and a neutron with an average kinetic energy of 24 keV (a so-called intermediate neutron in terms of energy): γ + ⁹₄Be → 2 ⁴₂He + n. Other isotopes have higher thresholds for photoneutron production, as high as 18.72 MeV for carbon-12. == Hypernovae == In explosions of very large stars (250 or more solar
|
{
"page_id": 11145154,
"source": null,
"title": "Photodisintegration"
}
|
masses), photodisintegration is a major factor in the supernova event. As the star reaches the end of its life, it reaches temperatures and pressures where photodisintegration's energy-absorbing effects temporarily reduce pressure and temperature within the star's core. This causes the core to start to collapse as energy is taken away by photodisintegration, and the collapsing core leads to the formation of a black hole. A portion of mass escapes in the form of relativistic jets, which could have "sprayed" the first metals into the universe. == Photodisintegration in lightning == Terrestrial lightning produces high-speed electrons that create bursts of gamma rays as bremsstrahlung. The energy of these rays is sometimes sufficient to start photonuclear reactions resulting in emitted neutrons. One such reaction, ¹⁴₇N(γ,n)¹³₇N, is the only natural process other than those induced by cosmic rays in which ¹³₇N is produced on Earth. The unstable isotopes remaining from the reaction may subsequently emit positrons by β+ decay. == Photofission == Photofission is a similar but distinct process, in which a nucleus, after absorbing a gamma ray, undergoes nuclear fission (splits into two fragments of nearly equal mass). == See also == Pair-instability supernova Silicon-burning process == References ==
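The 2.22 MeV deuterium threshold quoted earlier is simply the deuteron binding energy, B = (m_p + m_n − m_d)c², which can be checked from published rest energies. The rounded values below are taken as assumptions from standard tables.

```python
# Threshold for photodisintegration of deuterium (γ + d → p + n)
# equals the deuteron binding energy, computed from rest energies in MeV.
m_p = 938.272    # proton rest energy, MeV
m_n = 939.565    # neutron rest energy, MeV
m_d = 1875.613   # deuteron rest energy, MeV

binding_mev = m_p + m_n - m_d
print(round(binding_mev, 2))  # 2.22 MeV, matching the quoted threshold
```

This is also why Chadwick and Goldhaber could extract the proton-neutron mass difference from the reaction: the photon energy and the proton's kinetic energy fix the remaining unknown in the mass balance.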
|
{
"page_id": 11145154,
"source": null,
"title": "Photodisintegration"
}
|
Phyletic gradualism is a model of evolution which theorizes that most speciation is slow, uniform and gradual. When evolution occurs in this mode, it is usually by the steady transformation of a whole species into a new one (through a process called anagenesis). In this view no clear line of demarcation exists between an ancestral species and a descendant species, unless splitting occurs. The theory is contrasted with punctuated equilibrium. == History == The word phyletic derives from the Greek φυλετικός phūletikos, which conveys the meaning of a line of descent. Phyletic gradualism contrasts with the theory of punctuated equilibrium, which proposes that most evolutionary change is concentrated in rare episodes of rapid evolution, when a single species splits into two distinct species, with each episode followed by a long period of stasis or non-change. These models both contrast with variable-speed evolution ("variable speedism"), which maintains that different species evolve at different rates, and that there is no reason to stress one rate of change over another. The evolutionary biologist Richard Dawkins argues that constant-rate gradualism is not present in the professional literature, so that the term serves only as a straw man for punctuated-equilibrium advocates. In his book The Blind Watchmaker, Dawkins observes that Charles Darwin himself was not a constant-rate gradualist, as suggested by Niles Eldredge and Stephen Jay Gould. In the first edition of On the Origin of Species, Darwin stated that "Species of different genera and classes have not changed at the same rate, or in the same degree. In the oldest tertiary beds a few living shells may still be found in the midst of a multitude of extinct forms... The Silurian Lingula differs but little from the living species of this genus". Lingula is among the few brachiopods surviving today but also known from fossils over 500 million years old. In
|
{
"page_id": 2559939,
"source": null,
"title": "Phyletic gradualism"
}
|
the fifth edition of The Origin of Species, Darwin wrote that "the periods during which species have undergone modification, though long as measured in years, have probably been short in comparison with the periods during which they retain the same form". == See also == Punctuated equilibrium Punctuated gradualism Quantum evolution == References == == External links == Media related to Phyletic gradualism at Wikimedia Commons The distinction between phyletic gradualism and punctuated equilibrium models - by Mark Ridley
|
{
"page_id": 2559939,
"source": null,
"title": "Phyletic gradualism"
}
|
Selective breeding (also called artificial selection) is the process by which humans use animal breeding and plant breeding to selectively develop particular phenotypic traits (characteristics) by choosing which animal or plant males and females will sexually reproduce and have offspring together. Domesticated animals are known as breeds, normally bred by a professional breeder, while domesticated plants are known as varieties, cultigens, cultivars, or breeds. Two purebred animals of different breeds produce a crossbreed, and crossbred plants are called hybrids. Flowers, vegetables and fruit trees may be bred by amateurs as well as by commercial or non-commercial professionals: major crops are usually the province of the professionals. In animal breeding, artificial selection is often combined with techniques such as inbreeding, linebreeding, and outcrossing. In plant breeding, similar methods are used. Charles Darwin discussed how selective breeding had been successful in producing change over time in his 1859 book, On the Origin of Species. Its first chapter discusses selective breeding and the domestication of such animals as pigeons, cats, cattle, and dogs. Darwin used artificial selection as an analogy to propose and explain the theory of natural selection, but distinguished the latter from the former as a separate, non-directed process. The deliberate exploitation of selective breeding to produce desired results has become very common in agriculture and experimental biology. Selective breeding can be unintentional, for example, resulting from the process of human cultivation; and it may also produce unintended results, whether desirable or undesirable. For example, in some grains, an increase in seed size may have resulted from certain ploughing practices rather than from the intentional selection of larger seeds. Most likely, there has been an interdependence between natural and artificial factors that resulted in plant domestication.
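The core loop of selective breeding (choose the individuals with the desired trait, let only them reproduce, repeat) can be sketched as a toy simulation. Everything below, including the trait scale, the selection fraction, and the heritability figure, is an invented illustration, and inheritance is modeled very crudely at the population level rather than parent by parent.

```python
import random

random.seed(1)

def breed(population, keep_fraction=0.2, heritability=0.5, env_sd=1.0):
    """Truncation selection: only the top fraction of trait values reproduce.
    Offspring regress toward the selected parents' mean, scaled by a crude
    heritability factor, plus environmental noise."""
    parents = sorted(population, reverse=True)[: int(len(population) * keep_fraction)]
    mid_parent = sum(parents) / len(parents)
    base = sum(population) / len(population)
    return [base + heritability * (mid_parent - base) + random.gauss(0, env_sd)
            for _ in range(len(population))]

pop = [random.gauss(10.0, 1.0) for _ in range(500)]   # e.g. seed size, arbitrary units
start = sum(pop) / len(pop)
for generation in range(10):
    pop = breed(pop)
end = sum(pop) / len(pop)
print(round(start, 2), round(end, 2))  # the mean trait value drifts upward
```

Even this crude model shows the qualitative point of the section: repeated choice of which individuals reproduce shifts the population trait distribution over generations.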
== History == Selective breeding of both plants and animals has been practiced since prehistory;
|
{
"page_id": 200646,
"source": null,
"title": "Selective breeding"
}
|
key species such as wheat, rice, and dogs have been significantly different from their wild ancestors for millennia, and maize, which required especially large changes from teosinte, its wild form, was selectively bred in Mesoamerica. Selective breeding was practiced by the Romans. Treatises as much as 2,000 years old give advice on selecting animals for different purposes, and these ancient works cite still older authorities, such as Mago the Carthaginian. The notion of selective breeding was later expressed by the Persian Muslim polymath Abu Rayhan Biruni in the 11th century. He noted the idea in his book titled India, which included various examples. The agriculturist selects his corn, letting grow as much as he requires, and tearing out the remainder. The forester leaves those branches which he perceives to be excellent, whilst he cuts away all others. The bees kill those of their kind who only eat, but do not work in their beehive. Selective breeding was established as a scientific practice by Robert Bakewell during the British Agricultural Revolution in the 18th century. Arguably, his most important breeding program was with sheep. Using native stock, he was able to quickly select for large, yet fine-boned sheep, with long, lustrous wool. The Lincoln Longwool was improved by Bakewell, and in turn the Lincoln was used to develop the subsequent breed, named the New (or Dishley) Leicester. It was hornless and had a square, meaty body with straight top lines. These sheep were exported widely, including to Australia and North America, and have contributed to numerous modern breeds, despite the fact that they fell quickly out of favor as market preferences in meat and textiles changed. Bloodlines of these original New Leicesters survive today as the English Leicester (or Leicester Longwool), which is primarily kept for wool production. Bakewell was also
|
{
"page_id": 200646,
"source": null,
"title": "Selective breeding"
}
|
the first to breed cattle to be used primarily for beef. Previously, cattle were first and foremost kept for pulling ploughs as oxen, but he crossed long-horned heifers and a Westmoreland bull to eventually create the Dishley Longhorn. As more and more farmers followed his lead, farm animals increased dramatically in size and quality. In 1700, the average weight of a bull sold for slaughter was 370 pounds (168 kg). By 1786, that weight had more than doubled to 840 pounds (381 kg). However, after his death, the Dishley Longhorn was replaced with short-horn versions. He also bred the Improved Black Cart horse, which later became the Shire horse. Charles Darwin coined the term 'selective breeding'; he was interested in the process as an illustration of his proposed wider process of natural selection. Darwin noted that many domesticated animals and plants had special properties that were developed by intentional animal and plant breeding from individuals that showed desirable characteristics, and discouraging the breeding of individuals with less desirable characteristics. Darwin used the term "artificial selection" twice in the 1859 first edition of his work On the Origin of Species, in Chapter IV: Natural Selection, and in Chapter VI: Difficulties on Theory: Slow though the process of selection may be, if feeble man can do much by his powers of artificial selection, I can see no limit to the amount of change, to the beauty and infinite complexity of the co-adaptations between all organic beings, one with another and with their physical conditions of life, which may be effected in the long course of time by nature's power of selection. We are profoundly ignorant of the causes producing slight and unimportant variations; and we are immediately made conscious of this by reflecting on the differences in the breeds of our domesticated
|
{
"page_id": 200646,
"source": null,
"title": "Selective breeding"
}
|
animals in different countries,—more especially in the less civilized countries where there has been but little artificial selection. == Animal breeding == Animals with homogeneous appearance, behavior, and other characteristics are known as particular breeds or pure breeds, and they are bred through culling animals with particular traits and selecting for further breeding those with other traits. Purebred animals belong to a single, recognizable breed, and purebreds with recorded lineage are called pedigreed. Crossbreeds are a mix of two purebreds, whereas mixed breeds are a mix of several breeds, often unknown. Animal breeding begins with breeding stock, a group of animals used for the purpose of planned breeding. When individuals are looking to breed animals, they look for valuable traits in purebred stock suited to a particular purpose, or may intend to use some type of crossbreeding to produce a new type of stock with different and presumably superior abilities in a given area of endeavor. For example, to breed chickens, a breeder typically intends to receive eggs, meat, and new, young birds for further reproduction. Thus, the breeder has to study different breeds and types of chickens and analyze what can be expected from a certain set of characteristics before he or she starts breeding them. Therefore, when purchasing initial breeding stock, the breeder seeks a group of birds that will most closely fit the purpose intended. Purebred breeding aims to establish and maintain stable traits that animals will pass to the next generation. By "breeding the best to the best," employing a certain degree of inbreeding, considerable culling, and selection for "superior" qualities, one could develop a bloodline superior in certain respects to the original base stock. Such animals can be recorded with a breed registry, the organization that maintains pedigrees and/or stud books. However, single-trait breeding, breeding
|
{
"page_id": 200646,
"source": null,
"title": "Selective breeding"
}
|
for only one trait over all others, can be problematic. In one case mentioned by the animal behaviorist Temple Grandin, roosters bred for fast growth or heavy muscles did not know how to perform typical rooster courtship dances, which alienated the roosters from hens and led the roosters to kill the hens after mating with them. A Soviet attempt to breed lab rats with higher intelligence led to cases of neurosis severe enough to make the animals incapable of any problem solving unless drugs like phenazepam were used. The observable phenomenon of hybrid vigor stands in contrast to the notion of breed purity. On the other hand, indiscriminate breeding of crossbred or hybrid animals may also result in degradation of quality. Studies in evolutionary physiology, behavioral genetics, and other areas of organismal biology have also made use of deliberate selective breeding, though longer generation times and greater difficulty in breeding can make these projects challenging in such vertebrates as house mice. == Plant breeding == The process of plant breeding has been used for thousands of years, and began with the domestication of wild plants into uniform and predictable agricultural cultigens. These high-yielding varieties have been particularly important in agriculture. As crops improved, humans were able to move from a hunter-gatherer style of living to a mix of hunter-gatherer and agricultural practices. Although these higher-yielding plants were derived from an extremely primitive form of plant breeding, cultivating them was an investment that meant the people who planted them could have a more varied diet. This meant that they did not completely stop their hunting and gathering immediately, but instead transitioned over time and ultimately came to favor agriculture. Originally this was due to humans not wanting to risk using all their time and resources for their crops just
|
{
"page_id": 200646,
"source": null,
"title": "Selective breeding"
}
|