Beryllium is a chemical element; it has symbol Be and atomic number 4. It is a steel-gray, hard, strong, lightweight and brittle alkaline earth metal. It is a divalent element that occurs naturally only in combination with other elements to form minerals. Gemstones high in beryllium include beryl (aquamarine, emerald, red beryl) and chrysoberyl. It is a relatively rare element in the universe, usually occurring as a product of the spallation of larger atomic nuclei that have collided with cosmic rays. Within the cores of stars, beryllium is depleted as it is fused into heavier elements. Beryllium constitutes about 0.0004 percent by mass of Earth's crust. The world's annual beryllium production of 220 tons is usually manufactured by extraction from the mineral beryl, a difficult process because beryllium bonds strongly to oxygen. In structural applications, the combination of high flexural rigidity, thermal stability, thermal conductivity and low density (1.85 times that of water) make beryllium a desirable aerospace material for aircraft components, missiles, spacecraft, and satellites. Because of its low density and atomic mass, beryllium is relatively transparent to X-rays and other forms of ionizing radiation; therefore, it is the most common window material for X-ray equipment and components of particle detectors. When added as an alloying element to aluminium, copper (notably the alloy beryllium copper), iron, or nickel, beryllium improves many physical properties. For example, tools and components made of beryllium copper alloys are strong and hard and do not create sparks when they strike a steel surface. In air, the surface of beryllium oxidizes readily at room temperature to form a passivation layer 1–10 nm thick that protects it from further oxidation and corrosion. The metal oxidizes in bulk (beyond the passivation layer) when heated above 500 °C (932 °F), and burns brilliantly when heated to about 2,500
{ "page_id": 3378, "source": null, "title": "Beryllium" }
°C (4,530 °F). The commercial use of beryllium requires the use of appropriate dust control equipment and industrial controls at all times because of the toxicity of inhaled beryllium-containing dusts that can cause a chronic life-threatening allergic disease, berylliosis, in some people. Berylliosis is typically manifested by chronic pulmonary fibrosis and, in severe cases, right-sided heart failure and death. == Characteristics == === Physical properties === Beryllium is a steel-gray, hard metal that is brittle at room temperature and has a close-packed hexagonal crystal structure. It has exceptional stiffness (Young's modulus 287 GPa) and a melting point of 1287 °C. The modulus of elasticity of beryllium is approximately 50% greater than that of steel. The combination of this modulus and a relatively low density results in an unusually fast sound conduction speed in beryllium – about 12.9 km/s at ambient conditions. Among all metals, beryllium dissipates the most heat per unit weight, with both high specific heat (1925 J·kg−1·K−1) and thermal conductivity (216 W·m−1·K−1). Beryllium's conductivity and relatively low coefficient of linear thermal expansion (11.4 × 10−6 K−1) make it uniquely stable under extreme temperature differences. === Nuclear properties === Naturally occurring beryllium, save for slight contamination by the radioisotopes created by cosmic rays, is isotopically pure beryllium-9, which has a nuclear spin of ⁠3/2⁠−. The inelastic scattering cross section of beryllium increases with neutron energy, allowing for significant slowing of higher-energy neutrons. Therefore, it works as a neutron reflector and neutron moderator; the exact strength of neutron slowing strongly depends on the purity and size of the crystallites in the material.
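Returning to the physical properties above, the quoted sound speed follows almost directly from the stiffness and density figures. A quick sanity check in Python uses the thin-rod approximation v = √(E/ρ); the ~12.9 km/s figure is the bulk longitudinal speed, which also involves Poisson's ratio, so this one-line estimate comes out slightly lower:

```python
import math

# Thin-rod (bar) sound speed v = sqrt(E / rho), a rough cross-check of the
# quoted ~12.9 km/s bulk value using only the figures given in the text.
E_young = 287e9   # Young's modulus of beryllium, Pa
rho = 1850.0      # density, kg/m^3 (1.85 times that of water)

v_rod = math.sqrt(E_young / rho)
print(f"thin-rod sound speed: {v_rod / 1000:.1f} km/s")  # ≈ 12.5 km/s
```

The estimate lands within a few percent of the quoted value, which is consistent with the claim that beryllium's sound speed is set by its unusual stiffness-to-density ratio.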
The single primordial beryllium isotope 9Be also undergoes a (n,2n) neutron reaction with neutron energies over about 1.9 MeV, to produce 8Be, which almost immediately breaks into two alpha particles. Thus, for
high-energy neutrons, beryllium is a neutron multiplier, releasing more neutrons than it absorbs. This nuclear reaction is: 9Be + n → 2 4He + 2 n Neutrons are liberated when beryllium nuclei are struck by energetic alpha particles, producing the nuclear reaction 9Be + 4He → 12C + n, where 4He is an alpha particle and 12C is a carbon-12 nucleus. Beryllium also releases neutrons under bombardment by gamma rays. Thus, natural beryllium bombarded either by alphas or gammas from a suitable radioisotope is a key component of most radioisotope-powered nuclear reaction neutron sources for the laboratory production of free neutrons. Small amounts of tritium are liberated when 9Be nuclei absorb low-energy neutrons in the three-step nuclear reaction 9Be + n → 4He + 6He, 6He → 6Li + β−, 6Li + n → 4He + 3H. 6He has a half-life of only 0.8 seconds, β− is an electron, and 6Li has a high neutron absorption cross section. Tritium is a radioisotope of concern in nuclear reactor waste streams. === Optical properties === As a metal, beryllium is transparent or translucent to most wavelengths of X-rays and gamma rays, making it useful for the output windows of X-ray tubes and other such apparatus. === Isotopes and nucleosynthesis === Both stable and unstable isotopes of beryllium are created in stars, but the radioisotopes do not last long. It is believed that the beryllium in the universe was created in the interstellar medium when cosmic rays induced fission in heavier elements found in interstellar gas and dust, a process called cosmic ray spallation. Natural beryllium is solely made up of the stable isotope beryllium-9. Beryllium is the only monoisotopic element with an even atomic number. About one billionth (10−9) of the primordial atoms created in Big Bang nucleosynthesis were 7Be.
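The nuclear reactions listed above can be checked mechanically for conservation of mass number and charge. A small Python sketch follows; the species names and the (A, Z) bookkeeping (with the beta particle counted as A = 0, Z = −1) are just illustrative conventions, not from the text:

```python
# Sanity-check the nucleon (A) and charge (Z) balance of the beryllium
# reactions above. Each species is written as a (mass number, charge) pair.
species = {
    "9Be": (9, 4), "8Be": (8, 4), "n": (1, 0), "4He": (4, 2),
    "12C": (12, 6), "6He": (6, 2), "6Li": (6, 3), "3H": (3, 1),
    "e-": (0, -1),  # beta particle: A = 0, Z = -1 in this bookkeeping
}

def balanced(lhs, rhs):
    """True if total mass number and total charge match on both sides."""
    totals = lambda side: tuple(map(sum, zip(*(species[s] for s in side))))
    return totals(lhs) == totals(rhs)

assert balanced(["9Be", "n"], ["4He", "4He", "n", "n"])   # (n,2n) multiplier
assert balanced(["9Be", "4He"], ["12C", "n"])             # alpha-n neutron source
assert balanced(["9Be", "n"], ["4He", "6He"])             # tritium chain, step 1
assert balanced(["6He"], ["6Li", "e-"])                   # beta decay, step 2
assert balanced(["6Li", "n"], ["4He", "3H"])              # tritium chain, step 3
print("all reactions balance")
```

Each assertion passes, confirming that the reactions as written conserve both nucleon number and charge.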
This is a consequence of the low density of matter when the temperature of the universe cooled enough for small nuclei to be stable. Creating such nuclei requires nuclear collisions that are rare at low density. 7Be is unstable and decays by electron capture into 7Li with a half-life of 53 days, but in the early universe this decay channel was unavailable because atoms were fully ionized. The conversion of 7Be to 7Li was only complete near the time of recombination. The isotope 7Be is also a cosmogenic nuclide and shows an atmospheric abundance inversely proportional to solar activity. The 2s electrons of beryllium may contribute to chemical bonding. Therefore, when 7Be decays by L-electron capture, it does so by taking electrons from its atomic orbitals that may be participating in bonding. This makes its decay rate dependent to a measurable degree upon its chemical surroundings – a rare occurrence in nuclear decay. 8Be is unstable but has a ground state resonance with an important role in the triple-alpha process in helium-fueled stars. As first proposed by British astronomer Sir Fred Hoyle based solely on astrophysical analysis, the energy levels of 8Be and 12C allow carbon nucleosynthesis by increasing the contact time for two of the three alpha particles in the carbon production process. The main carbon-producing reaction in the universe is 4He + 8Be → 12C + γ, where 4He is an alpha particle. Radioactive cosmogenic 10Be is produced in the atmosphere of the Earth by the cosmic ray spallation of oxygen. 10Be accumulates at the soil surface, where its relatively long half-life (1.36 million years) permits a long residence time before decaying to boron-10. Thus, 10Be and its
daughter products are used to examine natural soil erosion, soil formation and the development of lateritic soils, and as a proxy for measurement of the variations in solar activity and the age of ice cores. The production of 10Be is inversely proportional to solar activity, because increased solar wind during periods of high solar activity decreases the flux of galactic cosmic rays that reach the Earth. Nuclear explosions also form 10Be by the reaction of fast neutrons with 13C in the carbon dioxide in air. This is one of the indicators of past activity at nuclear weapon test sites. The exotic isotopes 11Be and 14Be are known to exhibit a nuclear halo; that is, the nuclei of 11Be and 14Be have, respectively, 1 and 4 neutrons orbiting substantially outside the expected nuclear radius. == Occurrence == Beryllium is found in over 100 minerals, but most are uncommon to rare. The more common beryllium-containing minerals include: bertrandite (Be4Si2O7(OH)2), beryl (Al2Be3Si6O18), chrysoberyl (Al2BeO4) and phenakite (Be2SiO4). Precious forms of beryl are aquamarine, red beryl and emerald. The green color in gem-quality forms of beryl comes from varying amounts of chromium (about 2% for emerald). The two main ores of beryllium, beryl and bertrandite, are found in Argentina, Brazil, India, Madagascar, Russia and the United States. Total world reserves of beryllium ore are greater than 400,000 tonnes. The Sun has a concentration of 0.1 parts per billion (ppb) of beryllium. Beryllium has a concentration of 2 to 6 parts per million (ppm) in the Earth's crust and is the 47th most abundant element. It is most concentrated in soils, at 6 ppm. Trace amounts of 9Be are found in the Earth's atmosphere. The concentration of beryllium in sea water is 0.2–0.6 parts per trillion. In stream water,
however, beryllium is more abundant, with a concentration of 0.1 ppb. == Extraction == The extraction of beryllium from its compounds is a difficult process due to its high affinity for oxygen at elevated temperatures, and its ability to reduce water when its oxide film is removed. Currently the United States, China and Kazakhstan are the only three countries involved in the industrial-scale extraction of beryllium. Kazakhstan produces beryllium from a concentrate stockpiled before the breakup of the Soviet Union around 1991. This resource had become nearly depleted by the mid-2010s. Production of beryllium in Russia was halted in 1997, and is planned to be resumed in the 2020s. Beryllium is most commonly extracted from the mineral beryl, which is either sintered using an extraction agent or melted into a soluble mixture. The sintering process involves mixing beryl with sodium fluorosilicate and soda at 770 °C (1,420 °F) to form sodium fluoroberyllate, aluminium oxide and silicon dioxide. Beryllium hydroxide is precipitated from a solution of sodium fluoroberyllate and sodium hydroxide in water. The extraction of beryllium using the melt method involves grinding beryl into a powder and heating it to 1,650 °C (3,000 °F). The melt is quickly cooled with water and then reheated at 250 to 300 °C (482 to 572 °F) in concentrated sulfuric acid, mostly yielding beryllium sulfate and aluminium sulfate. Aqueous ammonia is then used to remove the aluminium and sulfur, leaving beryllium hydroxide. Beryllium hydroxide created using either the sinter or melt method is then converted into beryllium fluoride or beryllium chloride. To form the fluoride, aqueous ammonium hydrogen fluoride is added to beryllium hydroxide to yield a precipitate of ammonium tetrafluoroberyllate, which is heated to 1,000 °C (1,830 °F) to form beryllium fluoride. Heating the fluoride to 900 °C (1,650 °F) with magnesium forms finely divided
beryllium, and additional heating to 1,300 °C (2,370 °F) creates the compact metal. Heating beryllium hydroxide forms beryllium oxide, which becomes beryllium chloride when combined with carbon and chlorine. Electrolysis of molten beryllium chloride is then used to obtain the metal. == Chemical properties == A beryllium atom has the electronic configuration [He] 2s2. The predominant oxidation state of beryllium is +2; the beryllium atom has lost both of its valence electrons. Complexes of beryllium in lower oxidation states are exceedingly rare. For example, a stable complex with a Be-Be bond, which formally features beryllium in the +1 oxidation state, has been described. Beryllium in the 0 oxidation state is also known in a complex with a Mg-Be bond. Beryllium's chemical behavior is largely a result of its small atomic and ionic radii. It thus has very high ionization potentials and strongly polarizes its bonds; rather than forming simple divalent cations in its compounds, it forms two covalent bonds with a tendency to polymerize, as in solid BeCl2. Its chemistry has similarities to that of aluminium, an example of a diagonal relationship. At room temperature, the surface of beryllium forms a 1−10 nm-thick oxide passivation layer that prevents further reactions with air, except for gradual thickening of the oxide up to about 25 nm. When heated above about 500 °C, oxidation into the bulk metal progresses along grain boundaries. Once the metal is ignited in air by heating above the oxide melting point around 2500 °C, beryllium burns brilliantly, forming a mixture of beryllium oxide and beryllium nitride. Beryllium dissolves readily in non-oxidizing acids, such as HCl and dilute H2SO4, but not in nitric acid or water, as these form the oxide. This behavior is similar to that of aluminium. Beryllium also dissolves in and reacts with alkali solutions. Binary compounds of beryllium(II) are polymeric in
the solid state. BeF2 has a silica-like structure with corner-shared BeF4 tetrahedra. BeCl2 and BeBr2 have chain structures with edge-shared tetrahedra. Beryllium oxide, BeO, is a white refractory solid which has a wurtzite crystal structure and a thermal conductivity as high as that of some metals. BeO is amphoteric. Beryllium sulfide, selenide and telluride are known, all having the zincblende structure. Beryllium nitride, Be3N2, is a high-melting-point compound which is readily hydrolyzed. Beryllium azide, BeN6, is known, and beryllium phosphide, Be3P2, has a similar structure to Be3N2. A number of beryllium borides are known, such as Be5B, Be4B, Be2B, BeB2, BeB6 and BeB12. Beryllium carbide, Be2C, is a refractory brick-red compound that reacts with water to give methane. Beryllium silicides have been identified in the form of variously sized nanoclusters, formed through a spontaneous reaction between pure beryllium and silicon. The halides BeX2 (X = F, Cl, Br, and I) have a linear monomeric molecular structure in the gas phase. Beryllium is a strong electron acceptor, leading to Be bonding effects similar to hydrogen bonding. === Aqueous solutions === Solutions of beryllium salts, such as beryllium sulfate and beryllium nitrate, are acidic because of hydrolysis of the [Be(H2O)4]2+ ion. The concentration of the first hydrolysis product, [Be(H2O)3(OH)]+, is less than 1% of the beryllium concentration. The most stable hydrolysis product is the trimeric ion [Be3(OH)3(H2O)6]3+. Beryllium hydroxide, Be(OH)2, is insoluble in water at pH 5 or more. Consequently, beryllium compounds are generally insoluble at biological pH. Because of this, inhalation of beryllium metal dust leads to the development of the fatal condition of berylliosis. Be(OH)2 dissolves in strongly alkaline solutions. Beryllium(II) forms few complexes with monodentate ligands because the water molecules in the aquo ion [Be(H2O)4]2+ are bound very strongly to the beryllium ion.
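The statement above that the first hydrolysis product stays below about 1% of total beryllium can be illustrated with a one-parameter acid-dissociation model, treating the aquo ion as a monoprotic acid. The pKa used below is an assumed illustrative value for [Be(H2O)4]2+, not a figure from the text:

```python
# Fraction of beryllium present as the first hydrolysis product
# [Be(H2O)3(OH)]+ at a given pH, modeling the aquo ion as a monoprotic acid:
#   fraction = Ka / ([H+] + Ka)
# The pKa here is an ASSUMED illustrative value, not taken from the article.
pKa = 5.7
Ka = 10 ** -pKa

def hydrolysis_fraction(pH):
    h = 10 ** -pH
    return Ka / (h + Ka)

for pH in (3.0, 4.0, 5.0):
    print(f"pH {pH}: {100 * hydrolysis_fraction(pH):.2f}% hydrolyzed")
```

At the acidic pH of a typical beryllium salt solution (around pH 3 in this sketch) the fraction stays well under 1%, consistent with the text; it rises steeply as the pH approaches the assumed pKa.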
Notable exceptions are the series of water-soluble complexes
with the fluoride ion: [Be(H2O)4]2+ + n F− ⇌ [Be(H2O)4−nFn](2−n)+ + n H2O Beryllium(II) forms many complexes with bidentate ligands containing oxygen-donor atoms. The species [Be3O(H2PO4)6]2− is notable for having a 3-coordinate oxide ion at its center. Basic beryllium acetate, Be4O(OAc)6, has an oxide ion surrounded by a tetrahedron of beryllium atoms. With organic ligands, such as the malonate ion, the acid deprotonates when forming the complex; the donor atoms are two oxygens. H2A + [Be(H2O)4]2+ ⇌ [BeA(H2O)2] + 2 H+ + 2 H2O H2A + [BeA(H2O)2] ⇌ [BeA2]2− + 2 H+ + 2 H2O The formation of a complex is in competition with the metal-ion hydrolysis reaction, and mixed complexes with both the anion and the hydroxide ion are also formed. For example, derivatives of the cyclic trimer are known, with a bidentate ligand replacing one or more pairs of water molecules. Aliphatic hydroxycarboxylic acids such as glycolic acid form rather weak monodentate complexes in solution, in which the hydroxyl group remains intact. In the solid state, the hydroxyl group may deprotonate: a hexamer, Na4[Be6(OCH2(O)O)6], was isolated long ago. Aromatic hydroxy ligands (i.e. phenols) form relatively strong complexes. For example, log K1 and log K2 values of 12.2 and 9.3 have been reported for complexes with tiron. Beryllium generally has a rather poor affinity for ammine ligands. There are many early reports of complexes with amino acids, but they are not reliable because the concomitant hydrolysis reactions were not understood at the time of publication. Values for log β of ca. 6 to 7 have been reported. The degree of formation is small because of competition with hydrolysis reactions. === Organic chemistry === Organometallic beryllium compounds are known to be highly reactive. Examples of known organoberyllium compounds are dineopentylberyllium, beryllocene (Cp2Be), diallylberyllium (by exchange reaction of diethyl beryllium
with triallyl boron), bis(1,3-trimethylsilylallyl)beryllium, Be(mes)2, and the beryllium(I) complex diberyllocene. Ligands can also be aryls and alkynyls. == History == The mineral beryl, which contains beryllium, has been used at least since the Ptolemaic dynasty of Egypt. In the first century CE, Roman naturalist Pliny the Elder mentioned in his encyclopedia Natural History that beryl and emerald ("smaragdus") were similar. The Papyrus Graecus Holmiensis, written in the third or fourth century CE, contains notes on how to prepare artificial emerald and beryl. Early analyses of emeralds and beryls by Martin Heinrich Klaproth, Torbern Olof Bergman, Franz Karl Achard, and Johann Jakob Bindheim always yielded similar elements, leading to the mistaken conclusion that both substances were aluminium silicates. Mineralogist René Just Haüy discovered that both crystals are geometrically identical, and he asked chemist Louis-Nicolas Vauquelin for a chemical analysis. In a 1798 paper read before the Institut de France, Vauquelin reported that he had found a new "earth" by dissolving aluminium hydroxide from emerald and beryl in an additional alkali. The editors of the journal Annales de chimie et de physique named the new earth "glucine" for the sweet taste of some of its compounds. The name beryllium was first used by Friedrich Wöhler in 1828. Both beryllium and glucinum were used concurrently until 1949, when IUPAC adopted beryllium as the standard name of the element. Friedrich Wöhler and Antoine Bussy independently isolated beryllium in 1828 by the chemical reaction of metallic potassium with beryllium chloride, as follows: BeCl2 + 2 K → 2 KCl + Be Using an alcohol lamp, Wöhler heated alternating layers of beryllium chloride and potassium in a wired-shut platinum crucible. The above reaction immediately took place and caused the crucible to become white hot. Upon cooling and washing the resulting gray-black powder, he saw that it was
made of fine particles with a dark metallic luster. The highly reactive potassium had been produced by the electrolysis of its compounds. He did not succeed in melting the beryllium particles. The direct electrolysis of a molten mixture of beryllium fluoride and sodium fluoride by Paul Lebeau in 1898 resulted in the first pure (99.5 to 99.8%) samples of beryllium. However, industrial production started only after the First World War. The original industrial involvement included subsidiaries and scientists related to the Union Carbide and Carbon Corporation in Cleveland, Ohio, and Siemens & Halske AG in Berlin. In the US, the process was led by Hugh S. Cooper, director of The Kemet Laboratories Company. In Germany, the first commercially successful process for producing beryllium was developed in 1921 by Alfred Stock and Hans Goldschmidt. A sample of beryllium was bombarded with alpha rays from the decay of radium in a 1932 experiment by James Chadwick that demonstrated the existence of the neutron. This same method is used in one class of radioisotope-based laboratory neutron sources that produce about 30 neutrons for every million α particles. Beryllium production saw a rapid increase during World War II due to the rising demand for hard beryllium-copper alloys and phosphors for fluorescent lights. Most early fluorescent lamps used zinc orthosilicate with varying content of beryllium to emit greenish light. Small additions of magnesium tungstate improved the blue part of the spectrum to yield an acceptable white light. Halophosphate-based phosphors replaced beryllium-based phosphors after beryllium was found to be toxic. Electrolysis of a mixture of beryllium fluoride and sodium fluoride was used to isolate beryllium during the 19th century. The metal's high melting point makes this process more energy-consuming than the corresponding processes used for the alkali metals. Early in the 20th century, the production of beryllium by
the thermal decomposition of beryllium iodide was investigated following the success of a similar process for the production of zirconium, but this process proved to be uneconomical for volume production. Pure beryllium metal did not become readily available until 1957, even though it had been used much earlier as an alloying metal to harden and toughen copper. Beryllium could be produced by reducing beryllium compounds such as beryllium chloride with metallic potassium or sodium. Currently, most beryllium is produced by reducing beryllium fluoride with magnesium. The price on the American market for vacuum-cast beryllium ingots was about $338 per pound ($745 per kilogram) in 2001. Between 1998 and 2008, the world's production of beryllium decreased from 343 to about 200 tonnes. It then increased to 230 tonnes by 2018, of which 170 tonnes came from the United States. === Etymology === Beryllium was named for the semiprecious mineral beryl, from which it was first isolated. Martin Klaproth, having independently determined that beryl and emerald share an element, preferred the name "beryllina" because yttria also formed sweet salts. Although Humphry Davy failed to isolate it, he proposed the name glucium for the new metal, derived from the name glucina for the earth it was found in; altered forms of this name, glucinium or glucinum (symbol Gl), continued to be used into the 20th century. == Applications == === Radiation windows === Because of its low atomic number and very low absorption for X-rays, the oldest and still one of the most important applications of beryllium is in radiation windows for X-ray tubes. Extreme demands are placed on the purity and cleanliness of beryllium to avoid artifacts in the X-ray images. Thin beryllium foils are used as radiation windows for X-ray detectors, and their extremely low absorption
minimizes the heating effects caused by the high-intensity, low-energy X-rays typical of synchrotron radiation. Vacuum-tight windows and beam-tubes for radiation experiments on synchrotrons are manufactured exclusively from beryllium. In scientific setups for various X-ray emission studies (e.g., energy-dispersive X-ray spectroscopy) the sample holder is usually made of beryllium because its emitted X-rays have much lower energies (≈100 eV) than X-rays from most studied materials. Its low atomic number also makes beryllium relatively transparent to energetic particles. Therefore, it is used to build the beam pipe around the collision region in particle physics setups, such as all four main detector experiments at the Large Hadron Collider (ALICE, ATLAS, CMS, LHCb), and at the Tevatron and SLAC. The low density of beryllium allows collision products to reach the surrounding detectors without significant interaction, its stiffness allows a powerful vacuum to be produced within the pipe to minimize interaction with gases, its thermal stability allows it to function correctly at temperatures of only a few degrees above absolute zero, and its diamagnetic nature keeps it from interfering with the complex multipole magnet systems used to steer and focus the particle beams. === Mechanical applications === Because of its stiffness, light weight and dimensional stability over a wide temperature range, beryllium metal is used for lightweight structural components in the defense and aerospace industries in high-speed aircraft, guided missiles, spacecraft, and satellites, including the James Webb Space Telescope. Several liquid-fuel rockets have used rocket nozzles made of pure beryllium. Beryllium powder was itself studied as a rocket fuel, but this use has never materialized. A small number of extreme high-end bicycle frames have been built with beryllium. From 1998 to 2000, the McLaren Formula One team used Mercedes-Benz engines with beryllium-aluminium alloy pistons.
The use of beryllium engine components was banned following a protest by Scuderia
Ferrari. Mixing about 2.0% beryllium into copper forms an alloy called beryllium copper that is six times stronger than copper alone. Beryllium alloys are used in many applications because of their combination of elasticity, high electrical conductivity and thermal conductivity, high strength and hardness, nonmagnetic properties, as well as good corrosion and fatigue resistance. These applications include non-sparking tools that are used near flammable gases (beryllium nickel), springs, membranes (beryllium nickel and beryllium iron) used in surgical instruments, and high temperature devices. As little as 50 parts per million of beryllium alloyed with liquid magnesium leads to a significant increase in oxidation resistance and decrease in flammability. The high elastic stiffness of beryllium has led to its extensive use in precision instrumentation, e.g. in inertial guidance systems and in the support mechanisms for optical systems. Beryllium-copper alloys were also applied as a hardening agent in "Jason pistols", which were used to strip the paint from the hulls of ships. In sound amplification systems, the speed at which sound travels directly affects the resonant frequency of the amplifier, thereby influencing the range of audible high-frequency sounds. Beryllium stands out due to its exceptionally high speed of sound propagation compared to other metals. This unique property allows beryllium to achieve higher resonant frequencies, making it an ideal material for use as a diaphragm in high-quality loudspeakers. Beryllium was used for cantilevers in high-performance phonograph cartridge styli, where its extreme stiffness and low density allowed for tracking weights to be reduced to 1 gram while still tracking high frequency passages with minimal distortion. An earlier major application of beryllium was in brakes for military airplanes because of its hardness, high melting point, and exceptional ability to dissipate heat. 
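The acoustic argument above, that resonant frequency scales with the material's sound speed, can be made concrete by comparing √(E/ρ) across common diaphragm metals. Beryllium's figures come from this article; the aluminium and titanium values are typical handbook figures assumed here for illustration:

```python
import math

# Diaphragm breakup/resonance frequency scales with the material sound speed
# sqrt(E / rho). Beryllium values are from the text; the aluminium and
# titanium values are ASSUMED typical handbook figures (E in Pa, rho in kg/m^3).
materials = {
    "beryllium": (287e9, 1850),
    "aluminium": (69e9, 2700),
    "titanium":  (116e9, 4500),
}

speeds = {name: math.sqrt(E / rho) for name, (E, rho) in materials.items()}
for name, v in speeds.items():
    print(f"{name:10s} v = {v / 1000:.1f} km/s")
```

Under these assumptions beryllium's sound speed comes out roughly 2.5 times that of aluminium or titanium, which is why a beryllium diaphragm pushes its first breakup resonance well above where an aluminium or titanium one of the same geometry would sit.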
Environmental considerations have led to substitution by other materials. A metal matrix composite material combining
beryllium with aluminium, developed under the trade name AlBeMet for the high-performance aerospace industry, combines low weight with four times the stiffness of aluminium alone. === Mirrors === Large-area beryllium mirrors, frequently with a honeycomb support structure, are used, for example, in meteorological satellites where low weight and long-term dimensional stability are critical. Smaller beryllium mirrors are used in optical guidance systems and in fire-control systems, e.g. in the German-made Leopard 1 and Leopard 2 main battle tanks. In these systems, very rapid movement of the mirror is required, which again dictates low mass and high rigidity. Usually the beryllium mirror is coated with hard electroless nickel plating, which can be polished to a finer optical finish more easily than beryllium itself. In some applications, the beryllium blank is polished without any coating. This is particularly applicable to cryogenic operation, where thermal expansion mismatch can cause the coating to buckle. The James Webb Space Telescope has 18 hexagonal beryllium sections for its mirrors, each plated with a thin layer of gold. Because the JWST faces a temperature of 33 K, the mirror is made of gold-plated beryllium, which handles extreme cold better than glass: beryllium contracts and deforms less than glass and remains more uniform at such temperatures. For the same reason, the optics of the Spitzer Space Telescope are entirely built of beryllium metal. === Magnetic applications === Beryllium is non-magnetic. Therefore, tools fabricated out of beryllium-based materials are used by naval or military explosive ordnance disposal teams for work on or near naval mines, since these mines commonly have magnetic fuzes. They are also found in maintenance and construction materials near magnetic resonance imaging (MRI) machines because of the high magnetic fields generated. === Nuclear applications === High-purity beryllium can be used in nuclear
reactors as a moderator, reflector, or as cladding on fuel elements. Thin plates or foils of beryllium are sometimes used in nuclear weapon designs as the very outer layer of the plutonium pits in the primary stages of thermonuclear bombs, placed to surround the fissile material. These layers of beryllium are good "pushers" for the implosion of the plutonium-239, and they are good neutron reflectors, just as in beryllium-moderated nuclear reactors. Beryllium is commonly used in some neutron sources in laboratory devices in which relatively few neutrons are needed (rather than having to use a nuclear reactor or a particle accelerator-powered neutron generator). For this purpose, a target of beryllium-9 is bombarded with energetic alpha particles from a radioisotope such as polonium-210, radium-226, plutonium-238, or americium-241. In the nuclear reaction that occurs, a beryllium nucleus is transmuted into carbon-12, and one free neutron is emitted, traveling in about the same direction as the alpha particle was heading. Such alpha decay-driven beryllium neutron sources, named "urchin" neutron initiators, were used in some early atomic bombs. Neutron sources in which beryllium is bombarded with gamma rays from a gamma decay radioisotope are also used to produce laboratory neutrons. Beryllium is used in fuel fabrication for CANDU reactors. The fuel elements have small appendages that are resistance brazed to the fuel cladding using an induction brazing process with Be as the braze filler material. Bearing pads are brazed in place to prevent contact between the fuel bundle and the pressure tube containing it, and inter-element spacer pads are brazed on to prevent element to element contact. Beryllium is used at the Joint European Torus nuclear-fusion research laboratory, and it will be used in the more advanced ITER to condition the components which face the plasma. Beryllium has been proposed as a cladding material
{ "page_id": 3378, "source": null, "title": "Beryllium" }
for nuclear fuel rods, because of its good combination of mechanical, chemical, and nuclear properties. Beryllium fluoride is one of the constituent salts of the eutectic salt mixture FLiBe, which is used as a solvent, moderator and coolant in many hypothetical molten salt reactor designs, including the liquid fluoride thorium reactor (LFTR). === Acoustics === The low weight and high rigidity of beryllium make it useful as a material for high-frequency speaker drivers. Because beryllium is expensive (many times more than titanium), hard to shape due to its brittleness, and toxic if mishandled, beryllium tweeters are limited to high-end home, pro audio, and public address applications. Some high-fidelity products have been fraudulently claimed to be made of the material. Some high-end phonograph cartridges used beryllium cantilevers to improve tracking by reducing mass. === Electronic === Beryllium is a p-type dopant in III-V compound semiconductors. It is widely used in materials such as GaAs, AlGaAs, InGaAs and InAlAs grown by molecular beam epitaxy (MBE). Cross-rolled beryllium sheet is an excellent structural support for printed circuit boards in surface-mount technology. In critical electronic applications, beryllium is both a structural support and heat sink. The application also requires a coefficient of thermal expansion that is well matched to the alumina and polyimide-glass substrates. The beryllium-beryllium oxide composite "E-Materials" have been specially designed for these electronic applications and have the additional advantage that the thermal expansion coefficient can be tailored to match diverse substrate materials. Beryllium oxide is useful for many applications that require the combined properties of an electrical insulator and an excellent heat conductor, with high strength and hardness and a very high melting point. 
Beryllium oxide is frequently used as an insulator base plate in high-power transistors in radio frequency transmitters for telecommunications. Beryllium oxide is being studied for use in
{ "page_id": 3378, "source": null, "title": "Beryllium" }
increasing the thermal conductivity of uranium dioxide nuclear fuel pellets. Beryllium compounds were used in fluorescent lighting tubes, but this use was discontinued because of berylliosis, the disease which developed in the workers who were making the tubes. === Medical applications === Beryllium is a component of several dental alloys. Beryllium is used in X-ray windows because it is transparent to X-rays, allowing for clearer and more efficient imaging. In medical imaging equipment, such as CT scanners and mammography machines, beryllium's strength and light weight enhance durability and performance. Beryllium is used in analytical equipment for tests for blood disorders, HIV, and other diseases. Beryllium alloys are used in surgical instruments, optical mirrors, and laser systems for medical treatments. == Toxicity and safety == === Biological effects === Approximately 35 micrograms of beryllium is found in the average human body, an amount not considered harmful. Beryllium is chemically similar to magnesium and therefore can displace it from enzymes, which causes them to malfunction. Because Be2+ is a highly charged and small ion, it can easily get into many tissues and cells, where it specifically targets cell nuclei, inhibiting many enzymes, including those used for synthesizing DNA. Its toxicity is exacerbated by the fact that the body has no means to control beryllium levels, and once inside the body, beryllium cannot be removed. === Inhalation === Chronic beryllium disease (CBD), or berylliosis, is a pulmonary and systemic granulomatous disease caused by inhalation of dust or fumes contaminated with beryllium; either large amounts over a short time or small amounts over a long time can lead to this ailment. Symptoms of the disease can take up to five years to develop; about a third of patients with it die and the survivors are left disabled. The International Agency for Research on Cancer (IARC) lists beryllium
{ "page_id": 3378, "source": null, "title": "Beryllium" }
and beryllium compounds as Category 1 carcinogens. === Occupational exposure === In the US, the Occupational Safety and Health Administration (OSHA) has designated a permissible exposure limit (PEL) for beryllium and beryllium compounds of 0.2 μg/m3 as an 8-hour time-weighted average (TWA) and 2.0 μg/m3 as a short-term exposure limit over a sampling period of 15 minutes. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) upper-bound threshold of 0.5 μg/m3. The IDLH (immediately dangerous to life and health) value is 4 mg/m3. The toxicity of beryllium is on par with other toxic metalloids/metals, such as arsenic and mercury. Exposure to beryllium in the workplace can lead to a sensitized immune response, and over time development of berylliosis. NIOSH in the United States researches these effects in collaboration with a major manufacturer of beryllium products. NIOSH also conducts genetic research on sensitization and CBD, independently of this collaboration. Acute beryllium disease in the form of chemical pneumonitis was first reported in Europe in 1933 and in the United States in 1943. A survey found that about 5% of workers in plants manufacturing fluorescent lamps in 1949 in the United States had beryllium-related lung diseases. Chronic berylliosis resembles sarcoidosis in many respects, and the differential diagnosis is often difficult. It killed some early workers in nuclear weapons design, such as Herbert L. Anderson. Beryllium may be found in coal slag. When the slag is formulated into an abrasive agent for blasting paint and rust from hard surfaces, the beryllium can become airborne and become a source of exposure. Although the use of beryllium compounds in fluorescent lighting tubes was discontinued in 1949, potential for exposure to beryllium exists in the nuclear and aerospace industries, in the refining of beryllium metal and the melting of
{ "page_id": 3378, "source": null, "title": "Beryllium" }
beryllium-containing alloys, in the manufacturing of electronic devices, and in the handling of other beryllium-containing material. === Detection === Early researchers undertook the highly hazardous practice of identifying beryllium and its various compounds by their sweet taste. A modern test for beryllium in air and on surfaces has been developed and published as an international voluntary consensus standard, ASTM D7202. The procedure uses dilute ammonium bifluoride for dissolution and fluorescence detection with beryllium bound to sulfonated hydroxybenzoquinoline, allowing detection of beryllium at concentrations up to 100 times lower than the recommended workplace limit. Fluorescence increases with increasing beryllium concentration. The procedure has been successfully tested on a variety of surfaces and is effective for the dissolution and detection of refractory beryllium oxide and siliceous beryllium in minute concentrations (ASTM D7458). The NIOSH Manual of Analytical Methods contains methods for measuring occupational exposures to beryllium. == Notes == == References == == Cited sources == Emsley, John (2001). Nature's Building Blocks: An A–Z Guide to the Elements. Oxford: Oxford University Press. ISBN 978-0-19-850340-8. Weeks, Mary Elvira; Leicester, Henry M. (1968). Discovery of the Elements. Easton, PA: Journal of Chemical Education. LCCN 68-15217. == Further reading == Newman LS (2003). "Beryllium". Chemical & Engineering News. 81 (36): 38. doi:10.1021/cen-v081n036.p038. Mroz MM, Balkissoon R, Newman LS. "Beryllium". In: Bingham E, Cohrssen B, Powell C (eds.) Patty's Toxicology, 5th ed. New York: John Wiley & Sons, 2001, 177–220. Walsh KA. Beryllium Chemistry and Processing. Vidal EE et al., eds. Materials Park, OH: ASM International, 2009. Beryllium Lymphocyte Proliferation Testing (BeLPT). DOE Specification 1142–2001. Washington, DC: U.S. Department of Energy, 2001.
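The fluorescence-based detection described above is, at heart, a calibration-curve measurement: fluorescence is recorded for standards of known beryllium concentration, a line is fitted, and unknown samples are read off the inverted fit. A minimal sketch of that workflow follows; all numbers and names here are hypothetical illustration values, not figures from ASTM D7202.

```python
# Sketch of a fluorescence calibration curve, the principle behind
# fluorescence-based beryllium detection. Hypothetical numbers only.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - m * mean_x
    return m, b

# Hypothetical standards: Be concentration (ug/L) vs. fluorescence (a.u.).
standards = [(0.0, 2.0), (0.5, 12.0), (1.0, 22.0), (2.0, 42.0)]
m, b = fit_line([c for c, _ in standards], [f for _, f in standards])

def concentration(fluorescence):
    """Invert the calibration line to estimate Be concentration."""
    return (fluorescence - b) / m

# A sample reading between the standards.
print(concentration(32.0))
```

In practice the standard prescribes specific reagents, quality-control checks, and detection limits; the sketch only illustrates the calibration principle that fluorescence rises with concentration.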
== External links ==
ATSDR Case Studies in Environmental Medicine: Beryllium Toxicity, U.S. Department of Health and Human Services (Archived 4 February 2016 at the Wayback Machine)
It's Elemental
{ "page_id": 3378, "source": null, "title": "Beryllium" }
– Beryllium
MSDS: ESPI Metals
Beryllium at The Periodic Table of Videos (University of Nottingham)
National Institute for Occupational Safety and Health – Beryllium Page
National Supplemental Screening Program (Oak Ridge Associated Universities)
Historic Price of Beryllium in USA
{ "page_id": 3378, "source": null, "title": "Beryllium" }
In alchemy, fixation is a process by which a previously volatile substance is "transformed" into a form (often solid) that is not affected by fire. It separates the substance or object and puts it back in the same or different shape at a subatomic level. Fixation is sometimes listed as one of the processes required for transformation of a substance, or completion of the alchemical magnum opus. == See also == Atomic layer deposition Mond process Thermal decomposition
{ "page_id": 2428216, "source": null, "title": "Fixation (alchemy)" }
In quantum mechanics, the particle in a one-dimensional lattice is a problem that occurs in the model of a periodic crystal lattice. The potential is caused by ions in the periodic structure of the crystal creating an electromagnetic field, so electrons are subject to a regular potential inside the lattice. It is a generalization of the free electron model, which assumes zero potential inside the lattice.

== Problem definition ==

When talking about solid materials, the discussion is mainly around crystals – periodic lattices. Here we will discuss a 1D lattice of positive ions. If the spacing between two ions is a, the potential is a periodic function with period a, with a potential well centred on each ion site. According to Bloch's theorem, the wavefunction solution of the Schrödinger equation with a periodic potential can be written as

$$\psi(x) = e^{ikx} u(x),$$

where u(x) is a periodic function satisfying u(x + a) = u(x). It is the Bloch factor with Floquet exponent $k$, which gives rise to the band structure of the energy spectrum of the Schrödinger equation with a periodic potential such as the Kronig–Penney potential or a cosine function, as shown in 1928 by Strutt. The solutions can be given with the help of the Mathieu functions.

When nearing the edges of the lattice, there are problems with the boundary condition. Therefore, we can represent the ion lattice as a ring, following the Born–von Karman boundary conditions. If L is the length of the lattice, so that L ≫ a, then the number of ions in the lattice is so big that, when considering one ion, its surrounding is almost linear, and the
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
wavefunction of the electron is unchanged. So now, instead of two boundary conditions, we get one circular boundary condition:

$$\psi(0) = \psi(L).$$

If N is the number of ions in the lattice, then we have the relation aN = L. Substituting this into the boundary condition and applying Bloch's theorem results in a quantization of k:

$$\psi(0) = e^{ik\cdot 0} u(0) = e^{ikL} u(L) = \psi(L)$$
$$u(0) = e^{ikL} u(L) = e^{ikL} u(Na) \;\Rightarrow\; e^{ikL} = 1$$
$$\Rightarrow\; kL = 2\pi n \;\Rightarrow\; k = \frac{2\pi}{L} n \qquad \left(n = 0, \pm 1, \dots, \pm \frac{N}{2}\right).$$

== Kronig–Penney model ==

The Kronig–Penney model (named after Ralph Kronig and William Penney) is a simple, idealized quantum-mechanical system that consists of an infinite periodic array of rectangular potential barriers. The potential function is approximated by a rectangular potential. Using Bloch's theorem, we only need to find a solution for a single period, make sure it is continuous and smooth, and make sure the function u(x) is also continuous and smooth.

Considering a single period of the potential, we have two regions, which we solve independently. Let E be an energy value above the well (E > 0).

For $0 < x < (a-b)$:
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
$$-\frac{\hbar^2}{2m}\psi_{xx} = E\psi \;\Rightarrow\; \psi = A e^{i\alpha x} + A' e^{-i\alpha x}, \qquad \alpha^2 = \frac{2mE}{\hbar^2}.$$

For $-b < x < 0$:

$$-\frac{\hbar^2}{2m}\psi_{xx} = (E + V_0)\psi \;\Rightarrow\; \psi = B e^{i\beta x} + B' e^{-i\beta x}, \qquad \beta^2 = \frac{2m(E + V_0)}{\hbar^2}.$$

To find u(x) in each region, we need to manipulate the electron's wavefunction:

$$\psi(0 < x < a-b) = A e^{i\alpha x} + A' e^{-i\alpha x} = e^{ikx}\left(A e^{i(\alpha-k)x} + A' e^{-i(\alpha+k)x}\right)$$
$$\Rightarrow\; u(0 < x < a-b) = A e^{i(\alpha-k)x} + A' e^{-i(\alpha+k)x}.$$

In the same manner,

$$u(-b < x < 0) = B e^{i(\beta-k)x} + B' e^{-i(\beta+k)x}.$$

To complete the solution we need to make sure the wavefunction is continuous and smooth, i.e.
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
$$\psi(0^-) = \psi(0^+), \qquad \psi'(0^-) = \psi'(0^+),$$

and that u(x) and u′(x) are periodic:

$$u(-b) = u(a-b), \qquad u'(-b) = u'(a-b).$$

These conditions yield the following matrix:

$$\begin{pmatrix}1 & 1 & -1 & -1\\ \alpha & -\alpha & -\beta & \beta\\ e^{i(\alpha-k)(a-b)} & e^{-i(\alpha+k)(a-b)} & -e^{-i(\beta-k)b} & -e^{i(\beta+k)b}\\ (\alpha-k)e^{i(\alpha-k)(a-b)} & -(\alpha+k)e^{-i(\alpha+k)(a-b)} & -(\beta-k)e^{-i(\beta-k)b} & (\beta+k)e^{i(\beta+k)b}\end{pmatrix} \begin{pmatrix}A\\A'\\B\\B'\end{pmatrix} = \begin{pmatrix}0\\0\\0\\0\end{pmatrix}.$$

For a non-trivial solution, the determinant of the matrix must be 0. This leads to the following expression:

$$\cos(ka) = \cos(\beta b)\cos[\alpha(a-b)] - \frac{\alpha^2 + \beta^2}{2\alpha\beta}\sin(\beta b)\sin[\alpha(a-b)].$$

To further simplify the expression, we perform the
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
following approximations:

$$b \to 0; \quad V_0 \to \infty; \quad V_0 b = \mathrm{constant}$$
$$\Rightarrow\; \beta^2 b = \mathrm{constant}; \quad \alpha^2 b \to 0$$
$$\Rightarrow\; \beta b \to 0; \quad \sin(\beta b) \to \beta b; \quad \cos(\beta b) \to 1.$$

The expression now becomes

$$\cos(ka) = \cos(\alpha a) + P\,\frac{\sin(\alpha a)}{\alpha a}, \qquad P = \frac{m V_0 b a}{\hbar^2}.$$

For energy values inside the well (E < 0), we get

$$\cos(ka) = \cos(\beta b)\cosh[\alpha(a-b)] - \frac{\beta^2 - \alpha^2}{2\alpha\beta}\sin(\beta b)\sinh[\alpha(a-b)],$$

with $\alpha^2 = \frac{2m|E|}{\hbar^2}$ and $\beta^2 = \frac{2m(V_0 - |E|)}{\hbar^2}$. Following the same approximations as above ($b \to 0;\; V_0 \to \infty;\; V_0 b = \mathrm{constant}$), we arrive at
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
$$\cos(ka) = \cosh(\alpha a) + P\,\frac{\sinh(\alpha a)}{\alpha a},$$

with the same formula for P as in the previous case, $P = \frac{m V_0 b a}{\hbar^2}$.

== Band gaps in the Kronig–Penney model ==

In the previous paragraph, the only variables not determined by the parameters of the physical system are the energy E and the crystal momentum k. By picking a value for E, one can compute the right-hand side, and then compute k by taking the $\arccos$ of both sides. Thus, the expression gives rise to the dispersion relation.

The right-hand side of the last expression above can sometimes be greater than 1 or less than –1, in which case there is no value of k that can make the equation true. Since $\alpha a \propto \sqrt{E}$, this means there are certain values of E for which there are no eigenfunctions of the Schrödinger equation. These values constitute the band gap. Thus, the Kronig–Penney model is one of the simplest periodic potentials to exhibit a band gap.

== Kronig–Penney model: alternative solution ==

An alternative treatment of a similar problem is given here. We have a delta periodic potential

$$V(x) = A \cdot \sum_{n=-\infty}^{\infty} \delta(x - na),$$

where A is some constant and a is the lattice constant (the spacing between each site). Since this potential is periodic, we can expand it as a Fourier series:
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
$$V(x) = \sum_{K} \tilde{V}(K)\, e^{iKx},$$

where

$$\tilde{V}(K) = \frac{1}{a}\int_{-a/2}^{a/2} dx\, V(x)\, e^{-iKx} = \frac{1}{a}\int_{-a/2}^{a/2} dx \sum_{n=-\infty}^{\infty} A\cdot\delta(x - na)\, e^{-iKx} = \frac{A}{a}.$$

The wavefunction, using Bloch's theorem, is $\psi_k(x) = e^{ikx} u_k(x)$, where $u_k(x)$ is a function that is periodic in the lattice, which means that we can expand it as a Fourier series as well:

$$u_k(x) = \sum_{K} \tilde{u}_k(K)\, e^{iKx}.$$

Thus the wavefunction is

$$\psi_k(x) = \sum_{K} \tilde{u}_k(K)\, e^{i(k+K)x}.$$

Putting this into the Schrödinger equation, we get

$$\left[\frac{\hbar^2 (k+K)^2}{2m} - E_k\right] \tilde{u}_k(K) + \sum_{K'} \tilde{V}(K - K')\, \tilde{u}_k(K') = 0,$$

or rather

$$\left[\frac{\hbar^2 (k+K)^2}{2m} - E_k\right] \tilde{u}_k(K) + \frac{A}{a} \sum_{K'} \tilde{u}_k(K') = 0.$$

Now
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
we recognize that

$$u_k(0) = \sum_{K'} \tilde{u}_k(K').$$

Substituting this into the Schrödinger equation gives

$$\left[\frac{\hbar^2 (k+K)^2}{2m} - E_k\right] \tilde{u}_k(K) + \frac{A}{a}\, u_k(0) = 0.$$

Solving this for $\tilde{u}_k(K)$ we get

$$\tilde{u}_k(K) = \frac{\frac{2m}{\hbar^2}\frac{A}{a}}{\frac{2mE_k}{\hbar^2} - (k+K)^2}\, u_k(0).$$

We sum this last equation over all values of K to arrive at

$$\sum_{K} \tilde{u}_k(K) = \sum_{K} \frac{\frac{2m}{\hbar^2}\frac{A}{a}}{\frac{2mE_k}{\hbar^2} - (k+K)^2}\, u_k(0),$$

or

$$u_k(0) = \sum_{K} \frac{\frac{2m}{\hbar^2}\frac{A}{a}}{\frac{2mE_k}{\hbar^2} - (k+K)^2}\, u_k(0).$$

Conveniently, $u_k(0)$ cancels out and we get

$$1 = \sum_{K} \frac{\frac{2m}{\hbar^2}\frac{A}{a}}{\frac{2mE_k}{\hbar^2} - (k+K)^2},$$

or
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
$$\frac{\hbar^2}{2m}\frac{a}{A} = \sum_{K} \frac{1}{\frac{2mE_k}{\hbar^2} - (k+K)^2}.$$

To save ourselves some unnecessary notational effort we define a new variable

$$\alpha^2 := \frac{2mE_k}{\hbar^2},$$

and finally our expression is

$$\frac{\hbar^2}{2m}\frac{a}{A} = \sum_{K} \frac{1}{\alpha^2 - (k+K)^2}.$$

Now, K is a reciprocal lattice vector, which means that a sum over K is actually a sum over integer multiples of $\frac{2\pi}{a}$:

$$\frac{\hbar^2}{2m}\frac{a}{A} = \sum_{n=-\infty}^{\infty} \frac{1}{\alpha^2 - \left(k + \frac{2\pi n}{a}\right)^2}.$$

We can manipulate this expression a little to make it more suggestive (using partial fraction decomposition):
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
$$\begin{aligned}\frac{\hbar^2}{2m}\frac{a}{A} &= \sum_{n=-\infty}^{\infty} \frac{1}{\alpha^2 - \left(k + \frac{2\pi n}{a}\right)^2}\\ &= -\frac{1}{2\alpha}\sum_{n=-\infty}^{\infty}\left[\frac{1}{\left(k + \frac{2\pi n}{a}\right) - \alpha} - \frac{1}{\left(k + \frac{2\pi n}{a}\right) + \alpha}\right]\\ &= -\frac{a}{4\alpha}\sum_{n=-\infty}^{\infty}\left[\frac{1}{\pi n + \frac{ka}{2} - \frac{\alpha a}{2}} - \frac{1}{\pi n + \frac{ka}{2} + \frac{\alpha a}{2}}\right]\\ &= -\frac{a}{4\alpha}\left[\sum_{n=-\infty}^{\infty}\frac{1}{\pi n + \frac{ka}{2} - \frac{\alpha a}{2}} - \sum_{n=-\infty}^{\infty}\frac{1}{\pi n + \frac{ka}{2} + \frac{\alpha a}{2}}\right]\end{aligned}$$

Using the identity for the sum of the cotangent function (Equation 18),

$$\cot(x) = \sum_{n=-\infty}^{\infty}\left[\frac{1}{2\pi n + 2x} - \frac{1}{2\pi n - 2x}\right],$$

and plugging it into our expression, we get

$$\frac{\hbar^2}{2m}\frac{a}{A} = -\frac{a}{4\alpha}\left[\cot\left(\tfrac{ka}{2} - \tfrac{\alpha a}{2}\right) - \cot\left(\tfrac{ka}{2} + \tfrac{\alpha a}{2}\right)\right].$$

Using the sum formula for cot and then the product formula for sin (which is part of the formula for the sum of cot), we arrive at

$$\cos(ka) = \cos(\alpha a) + \frac{mA}{\hbar^2 \alpha}\sin(\alpha a).$$

This equation shows the relation between the energy (through α) and the wave-vector k. Since the left-hand side can only range from −1 to 1, there are limits on the values that α (and thus the energy) can take; at some ranges of values of the
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
energy, there is no solution according to this equation, and thus the system will not have those energies: energy gaps. These are the so-called band gaps, which can be shown to exist in any shape of periodic potential (not just delta or square barriers).

For a different and detailed calculation of the gap formula (i.e. for the gap between bands) and the level splitting of eigenvalues of the one-dimensional Schrödinger equation, see Müller-Kirsten. Corresponding results for the cosine potential (Mathieu equation) are also given in detail in this reference.

== Finite lattice ==

In some cases, the Schrödinger equation can be solved analytically on a one-dimensional lattice of finite length using the theory of periodic differential equations. The length of the lattice is assumed to be $L = Na$, where $a$ is the potential period and the number of periods $N$ is a positive integer. The two ends of the lattice are at $\tau$ and $L + \tau$, where $\tau$ determines the point of termination. The wavefunction vanishes outside the interval $[\tau, L+\tau]$.

The eigenstates of the finite system can be found in terms of the Bloch states of an infinite system with the same periodic potential. If there is a band gap between two consecutive energy bands of the infinite system, there is a sharp distinction between two types of states in the finite lattice. For each energy band of the infinite system, there are $N - 1$ bulk states whose energies depend on the length $N$ but not on the termination $\tau$. These states are standing waves constructed as a superposition of two
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
Bloch states with momenta $k$ and $-k$, where $k$ is chosen so that the wavefunction vanishes at the boundaries. The energies of these states match the energy bands of the infinite system.

For each band gap, there is one additional state. The energies of these states depend on the point of termination $\tau$ but not on the length $N$. The energy of such a state can lie either at the band edge or within the band gap. If the energy is within the band gap, the state is a surface state localized at one end of the lattice, but if the energy is at the band edge, the state is delocalized across the lattice.

== See also ==
Empty lattice approximation
Nearly free electron model
Crystal structure

== References ==

== External links ==
"The Kronig–Penney Model" by Michael Croucher, an interactive calculation of 1d periodic potential band structure using Mathematica, from The Wolfram Demonstrations Project.
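The band-gap argument in the Kronig–Penney section above can be explored numerically: pick an energy, evaluate the right-hand side of cos(ka) = cos(αa) + P sin(αa)/(αa), and check whether it lies in [−1, 1]. The following is a minimal sketch, in units chosen so that 2m/ħ² = 1 (hence α = √E); the barrier strength P = 3 and the scan range are arbitrary illustration choices, not values from the article.

```python
import math

def kp_rhs(E, P, a=1.0):
    """RHS of cos(k a) = cos(alpha a) + P sin(alpha a) / (alpha a),
    in units where 2m/hbar^2 = 1, so alpha = sqrt(E)."""
    x = math.sqrt(E) * a
    return math.cos(x) + P * math.sin(x) / x

def allowed(E, P, a=1.0):
    """E lies in an allowed band iff |RHS| <= 1, i.e. a real k exists."""
    return abs(kp_rhs(E, P, a)) <= 1.0

def band_edges(P, e_max=60.0, n=100000):
    """Scan energies and record where the allowed/forbidden status flips."""
    edges = []
    prev = allowed(1e-9, P)  # as E -> 0+, RHS -> 1 + P: forbidden
    for i in range(1, n + 1):
        E = e_max * i / n
        now = allowed(E, P)
        if now != prev:
            edges.append(round(E, 3))
            prev = now
    return edges

# Alternating band edges (allowed-band start, gap start, ...) for P = 3.
print(band_edges(3.0))
```

Energies below the first edge, and between alternate pairs of edges, give |cos(ka)| > 1 and so admit no real Bloch wavevector: these are the band gaps. As E grows, the P sin(αa)/(αa) term dies off, the gaps narrow, and the spectrum approaches the free-particle continuum.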
{ "page_id": 462138, "source": null, "title": "Particle in a one-dimensional lattice" }
Dibyendu Nandi is an Indian space scientist known for his research related to the solar cycle, solar dynamo activity and their influence on space weather. Nandi is the head of the Center of Excellence in Space Sciences, India or CESSI at IISER Kolkata. He is associated with Montana State University, the Center for Astrophysics | Harvard & Smithsonian and IISER Kolkata where he carried out most of his research work. == Education == Nandi did his early schooling at the Cossipore English School and St. James School, both in Kolkata. He then graduated in Physics from St. Xavier's College, Kolkata in 1995 and joined IISc from where he received his M.S. and PhD degrees in 1997 and 2003, respectively. == Career == Following his PhD, Dibyendu worked as a postdoctoral fellow, research scientist and assistant research professor at the Solar Physics Group at Montana State University, Bozeman, US. He returned to India in 2008 and joined the Indian Institute of Science Education and Research Kolkata as an assistant professor in the physics department. He is currently professor and head of the Center of Excellence in Space Sciences India. He has held a visiting faculty position at the Institute of Mathematics and Statistics at St Andrews University (UK), a visiting astrophysicist position at the Center for Astrophysics | Harvard & Smithsonian (USA) and a Wenner Gren Visiting Professorship at the Nordic Institute of Theoretical Physics (NORDITA) in Stockholm (Sweden). He established the Center of Excellence in Space Science India which is a multi-institutional center engaged in astronomy and space science research and technology development at IISER Kolkata. He is also currently the chairperson of the Public Outreach and Education Committee of the Astronomical Society of India, vice president of the International Astronomical Union's Commission E4 and coordinator of the Education Cluster
{ "page_id": 36113723, "source": null, "title": "Dibyendu Nandi" }
of the International Space Weather Action teams of the Committee on Space Research.

== Awards and recognition ==

Dibyendu Nandi was the 2012 recipient of the Karen Harvey Prize of the American Astronomical Society, the first time a space scientist working in the Asia-Pacific region received this honour. He also received the Modali Award of the Astronomical Society of India and the Young Career Award of the Asia-Pacific Solar Physics Meeting. A list of his honours follows.

National scholarship of the Government of India, based on the B.S. exams, in 1995.
"Brueckner Studentship" by the Solar Physics Division of the American Astronomical Society in 2000, for research on the role of meridional flows in the Sun's interior in setting the period of the sunspot cycle.
"Martin Forster Gold Medal" for the best thesis of 2002–2003, by the Division of Physical and Mathematical Sciences of IISc, Bangalore, in 2004.
United Kingdom British Council's "Researcher Exchange Programme Award" in 2007.
American Astronomical Society Solar Physics Division's "Parker Lectureship" at the AAS-SPD Annual Meeting in 2008.
"Ramanujan Fellowship" by the Department of Science and Technology, Government of India, in 2009.
Solar cycle (dynamo) simulation selected as an exhibit for EPO purposes at NASA's Scientific Visualization Studio and featured in SDO pre-launch outreach videos in 2010.
News articles and interviews related to research on the unusual lull in solar activity, published in multiple media outlets including Reuters, ABC, CBC, Sydney Morning Herald, Dawn, Times of India, Telegraph, Hindu, Deccan Herald, Hindustan Times, Indian Express, etc.; also covered in Scientific American, Sky and Telescope, and Discovery; interviews aired on CNN-IBN, Lok Sabha TV, and All India Radio (Kolkata) in 2011.
"Karen Harvey Prize" of the American Astronomical Society's Solar Physics Division in 2012.
Wenner-Gren Professorship, Nordic Institute of
{ "page_id": 36113723, "source": null, "title": "Dibyendu Nandi" }
Theoretical Physics, Sweden, in 2018.
Laxminarayana & Nagalaxmi Modali Award, Astronomical Society of India, in 2018.
Asia-Pacific Young Career Award in Solar Physics in 2019.

== References ==
"My Life as a Turkey" is a television episode that premiered in 2011 in the UK on BBC (season 30 of the series Natural World, August 1) and in the US on PBS (season 30 of the series Nature, November 16). It won an Emmy Award for Outstanding Nature Programming. It was based on the book Illumination in the Flatwoods by Joe Hutto, who also co-wrote and hosted the TV program. == Synopsis == My Life as a Turkey describes how Hutto raised a brood of wild turkeys. Hutto narrates over a recreation of his time living with turkeys, with Jeff Palmer playing Hutto. They imprinted on him as they came out of the egg. He then led them on walks through the Florida woods. He describes how he learned their language and was impressed by their instincts and native intelligence. Eventually, after about a year, they became independent of him. The film shows footage of turkeys at all these ages, and is a re-enactment of the material described in Hutto's book. == Development == To get the footage of Palmer living with wild turkeys, PBS recreated Hutto's entire experiment over the course of a year, with actor Jeff Palmer imprinting on and living with the turkeys. Hutto stated in an interview that they were extremely lucky to have turkeys with personalities similar to the ones in his book. == Reception & media coverage == In relation to the film, Joe Hutto has been profiled in newspapers, with a focus on the program and book. The book was mentioned in The New Yorker. == Meaningful Quotes == “Their language and their understanding of the ecology shows a remarkable intelligence. But their ability to understand the world goes much further than just communication. I came to realize that these
{ "page_id": 41553212, "source": null, "title": "My Life as a Turkey" }
young turkeys in many ways were more conscious than I was. I actually felt a sort of embarrassment when I was in their presence - they were so in the moment - and ultimately their experience of that manifested in a kind of joy that I don’t experience and I was very envious of that. I was learning new things about turkeys every day. But this was not just about how they live their lives - these animals were showing me how to live my life also. We do not have a privileged access to reality. So many of us live either in the past or in the future - and betray the moment. And in some sense we forget to live our lives - and the wild turkeys were always reminding me to live my life. I think as humans we have this peculiar predisposition to be always thinking ahead - living a little bit in the future - anticipating the next minute, the next hour, the next day - and we betray the moment. Wild turkeys don’t do that. They are convinced that everything that they need, all their needs, will be met only in the present moment and in this space. The world is not better half a mile through the woods, it’s not better an hour from now, and it’s not better tomorrow - that this is as good as it gets. So they constantly reminded me to do better, and to not live in this abstraction of the future, which by definition will never exist. And so we sort of betray our lives in the moment and the wild turkeys reminded me to be present, to be here.” "I learned many things - but maybe the most important was that we are essentially unaware of the overwhelming
complexity that exists all around us. And I’ll never see the world in the same way again." == Awards == Emmy - Outstanding Nature Programming (2012) Jackson Hole Wildlife Film Festival - Best Writing (2011) == References == == External links == "My Life as a Turkey" at IMDb My Life as a Turkey (viewable at Daily Motion) My Life as a Turkey (viewable online at PBS) My Life as a Turkey (clips online at BBC)
The median tongue bud (also tuberculum impar) marks the beginning of the development of the tongue. It appears as a midline swelling from the first pharyngeal arch late in the fourth week of embryogenesis. In the fifth week, a pair of lateral lingual swellings (or distal tongue buds) develop above and in line with the median tongue bud. These swellings grow downwards towards each other, quickly overgrowing the median tongue bud. The line of fusion of the distal tongue buds is marked by the median sulcus. == References == This article incorporates text in the public domain from page 1102 of the 20th edition of Gray's Anatomy (1918) == External links == hednk-024—Embryo Images at University of North Carolina
{ "page_id": 7998783, "source": null, "title": "Median tongue bud" }
Osteoimmunology (όστέον, osteon, Greek for "bone"; immunitas, Latin for "immunity"; and λόγος, logos, Greek for "study") is a field that emerged about 40 years ago to study the interface between the skeletal system and the immune system, comprising the "osteo-immune system". Osteoimmunology also studies the shared components and mechanisms between the two systems in vertebrates, including ligands, receptors, signaling molecules and transcription factors. Over the past decade, osteoimmunology has been investigated clinically for the treatment of bone metastases, rheumatoid arthritis (RA), osteoporosis, osteopetrosis, and periodontitis. Studies in osteoimmunology reveal relationships between molecular communication among blood cells and structural pathologies in the body. == System similarities == The RANKL-RANK-OPG axis (OPG stands for osteoprotegerin) is an example of an important signaling system functioning both in bone and immune cell communication. RANKL is expressed on osteoblasts and activated T cells, whereas RANK is expressed on osteoclasts and dendritic cells (DCs), both of which can be derived from myeloid progenitor cells. Surface RANKL on osteoblasts, as well as secreted RANKL, provides necessary signals for osteoclast precursors to differentiate into osteoclasts. RANKL expression on activated T cells leads to DC activation through binding to RANK expressed on DCs. OPG, produced by DCs, is a soluble decoy receptor for RANKL that competitively inhibits RANKL binding to RANK. == Crosstalk == The bone marrow cavity is important for the proper development of the immune system, and houses important stem cells for maintenance of the immune system. Within this space, as well as outside of it, cytokines produced by immune cells also have important effects on regulating bone homeostasis. 
Some important cytokines that are produced by the immune system, including RANKL, M-CSF, TNFa, ILs, and IFNs, affect the differentiation and activity of osteoclasts and bone resorption. Such inflammatory osteoclastogenesis and osteoclast activation can be seen in ex
{ "page_id": 12062017, "source": null, "title": "Osteoimmunology" }
vivo primary cultures of cells from the inflamed synovial fluid of patients with disease flare of the autoimmune disease rheumatoid arthritis. == Clinical osteoimmunology == Clinical osteoimmunology is a field that studies the treatment and prevention of bone-related diseases caused by disorders of the immune system. Aberrant and/or prolonged activation of the immune system leads to derangement of bone modeling and remodeling. Common diseases caused by disorders of the osteoimmune system are osteoporosis and the bone destruction that accompanies RA, which is characterized by high infiltration of CD4+ T cells into rheumatoid joints. Two mechanisms are involved. The first is a direct effect on osteoclastogenesis by rheumatoid synovial cells in the joints: since the synovium contains osteoclast precursors and osteoclast-supporting cells, synovial macrophages differentiate into osteoclasts with the help of RANKL released from the osteoclast-supporting cells. The second is an indirect effect on osteoclast differentiation and activity through the secretion of inflammatory cytokines such as IL-1, IL-6 and TNFa in the RA synovium, which increase RANKL signaling and ultimately cause bone destruction. A clinical approach to preventing bone-related diseases caused by RA is OPG and RANKL treatment in arthritis. There is some evidence that infections (e.g. respiratory virus infection) can reduce the numbers of osteoblasts in bone, the key cells involved in bone formation. == See also == Bone metabolism Osteoimmunology and Osseointegration HSC Osteoarthritis == References ==
Cyanin may refer to: Cyanine, a non-systematic name of a synthetic dye family belonging to the polymethine group. Cyanin (anthocyanin) (Cyanidin-3,5-O-diglucoside), a diglucoside of the anthocyanidin cyanidin.
{ "page_id": 24120642, "source": null, "title": "Cyanin" }
1-Arseno-3-phosphoglycerate is a compound produced by the enzyme glyceraldehyde 3-phosphate dehydrogenase, present in high concentrations in many organisms, from glyceraldehyde 3-phosphate and arsenate in the glycolysis pathway. The compound is unstable and hydrolyzes spontaneously to 3-phosphoglycerate, bypassing the energy-producing step of glycolysis. == Effects on glycolysis == 1-Arseno-3-phosphoglycerate is formed in the glycolytic pathway by the bonding of arsenate to glyceraldehyde 3-phosphate, catalyzed by glyceraldehyde phosphate dehydrogenase (GAPDH). Because the intermediate 1-arseno-3-phosphoglycerate hydrolyzes without yielding ATP, the net production of ATP is zero, as opposed to the conventional pathway, which produces a net two ATP molecules. {\displaystyle {\ce {Glyceraldehyde-3-phosphate + AsO4^3- + NAD+ ->[GAPDH] NADH + H+ + 1-Arseno-3-phosphoglycerate}}} == References ==
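The ATP bookkeeping described above can be sketched numerically. This is an illustrative tally only: the step names and per-step ATP counts follow standard textbook glycolysis stoichiometry, and the function and dictionary names are made up for this sketch.

```python
# Illustrative substrate-level ATP tally for glycolysis, per glucose.
# The investment phase consumes 2 ATP; the payoff phase runs twice,
# since each glucose yields two triose phosphates.
INVESTMENT = {"hexokinase": -1, "phosphofructokinase": -1}
PAYOFF = {"phosphoglycerate kinase": +1, "pyruvate kinase": +1}

def net_atp(arsenate_present: bool) -> int:
    """Net substrate-level ATP per glucose. With arsenate present, the
    unstable 1-arseno-3-phosphoglycerate hydrolyzes spontaneously, so
    the ATP-yielding phosphoglycerate kinase step is bypassed."""
    payoff = dict(PAYOFF)
    if arsenate_present:
        payoff["phosphoglycerate kinase"] = 0  # bypassed step yields no ATP
    return sum(INVESTMENT.values()) + 2 * sum(payoff.values())

print(net_atp(arsenate_present=False))  # 2 (conventional pathway)
print(net_atp(arsenate_present=True))   # 0 (arsenate bypass)
```

The tally makes the point in the text concrete: losing the single ATP-yielding step per triose (twice per glucose) exactly cancels the two-ATP investment.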
{ "page_id": 39783756, "source": null, "title": "1-Arseno-3-phosphoglycerate" }
In statistical learning theory, a learnable function class is a set of functions for which an algorithm can be devised to asymptotically minimize the expected risk, uniformly over all probability distributions. The concept of learnable classes is closely related to regularization in machine learning, and provides large-sample justifications for certain learning algorithms. == Definition == === Background === Let {\displaystyle \Omega ={\mathcal {X}}\times {\mathcal {Y}}=\{(x,y)\}} be the sample space, where {\displaystyle y} are the labels and {\displaystyle x} are the covariates (predictors). {\displaystyle {\mathcal {F}}=\{f:{\mathcal {X}}\mapsto {\mathcal {Y}}\}} is a collection of mappings (functions) under consideration to link {\displaystyle x} to {\displaystyle y} . {\displaystyle L:{\mathcal {Y}}\times {\mathcal {Y}}\mapsto \mathbb {R} } is a pre-given loss function (usually non-negative). Given a probability distribution {\displaystyle P(x,y)} on {\displaystyle \Omega } , define the expected risk {\displaystyle I_{P}(f)} to be: {\displaystyle I_{P}(f)=\int L(f(x),y)\,dP(x,y)} The general goal in statistical learning is to find the function in {\displaystyle {\mathcal {F}}} that minimizes the expected risk, that is, to find solutions to the following problem: {\displaystyle {\hat {f}}=\arg \min _{f\in {\mathcal {F}}}I_{P}(f)} But in practice the distribution {\displaystyle P} is unknown, and any learning task can only be based on finite samples. Thus we seek instead to find an algorithm that asymptotically minimizes the empirical risk, i.e., to find a
{ "page_id": 48827727, "source": null, "title": "Learnable function class" }
sequence of functions {\displaystyle \{{\hat {f}}_{n}\}_{n=1}^{\infty }} that satisfies, for every {\displaystyle \epsilon >0} , {\displaystyle \lim _{n\rightarrow \infty }\mathbb {P} (I_{P}({\hat {f}}_{n})-\inf _{f\in {\mathcal {F}}}I_{P}(f)>\epsilon )=0} One usual algorithm to find such a sequence is empirical risk minimization. === Learnable function class === We can strengthen the condition given in the above equation by requiring that the convergence is uniform over all probability distributions. That is: {\displaystyle \lim _{n\rightarrow \infty }\sup _{P}\mathbb {P} (I_{P}({\hat {f}}_{n})-\inf _{f\in {\mathcal {F}}}I_{P}(f)>\epsilon )=0\qquad (1)} The intuition behind the stricter requirement is as follows: the rate at which the sequence {\displaystyle \{{\hat {f}}_{n}\}} converges to the minimizer of the expected risk can be very different for different {\displaystyle P(x,y)} . Because in the real world the true distribution {\displaystyle P} is always unknown, we want to select a sequence that performs well in all cases. However, by the no free lunch theorem, such a sequence satisfying (1) does not exist if {\displaystyle {\mathcal {F}}} is too complex. This means we need to be careful not to allow too "many" functions in {\displaystyle {\mathcal {F}}} if we want (1) to be a meaningful requirement. Specifically, function classes that ensure the existence of a sequence {\displaystyle \{{\hat {f}}_{n}\}} satisfying (1) are known as learnable classes. It is worth noting that, at least for supervised classification and regression problems, if a function class is learnable, then empirical risk minimization automatically satisfies (1). Thus in these settings we not only know that the problem posed by (1) is solvable, we also immediately have an algorithm that
gives the solution. == Interpretations == If the true relationship between {\displaystyle y} and {\displaystyle x} is {\displaystyle y\sim f^{*}(x)} , then by selecting the appropriate loss function, {\displaystyle f^{*}} can always be expressed as the minimizer of the expected loss across all possible functions. That is, {\displaystyle f^{*}=\arg \min _{f\in {\mathcal {F}}^{*}}I_{P}(f)} Here we let {\displaystyle {\mathcal {F}}^{*}} be the collection of all possible functions mapping {\displaystyle {\mathcal {X}}} onto {\displaystyle {\mathcal {Y}}} . {\displaystyle f^{*}} can be interpreted as the actual data-generating mechanism. However, the no free lunch theorem tells us that in practice, with finite samples, we cannot hope to search for the expected risk minimizer over {\displaystyle {\mathcal {F}}^{*}} . Thus we often consider a subset {\displaystyle {\mathcal {F}}\subseteq {\mathcal {F}}^{*}} to carry out the search on. By doing so, we risk that {\displaystyle f^{*}} might not be an element of {\displaystyle {\mathcal {F}}} . This tradeoff can be mathematically expressed as {\displaystyle I_{P}({\hat {f}}_{n})-I_{P}(f^{*})=\underbrace {\left[I_{P}({\hat {f}}_{n})-\inf _{f\in {\mathcal {F}}}I_{P}(f)\right]} _{(a)}+\underbrace {\left[\inf _{f\in {\mathcal {F}}}I_{P}(f)-I_{P}(f^{*})\right]} _{(b)}\qquad (2)} In the above decomposition, part {\displaystyle (b)} does not depend on the data and is non-stochastic. It describes how far away our assumptions ( {\displaystyle {\mathcal {F}}} ) are from the truth ( {\displaystyle {\mathcal {F}}^{*}} ). {\displaystyle (b)} will be strictly greater than 0 if we make assumptions that are too strong ( {\displaystyle {\mathcal {F}}} too small). On the other hand, failing to put enough restrictions on {\displaystyle {\mathcal {F}}} will cause it to be not learnable, and part {\displaystyle (a)} will not stochastically
converge to 0. This is the well-known overfitting problem in the statistics and machine learning literature. == Example: Tikhonov regularization == A good example where learnable classes are used is the so-called Tikhonov regularization in a reproducing kernel Hilbert space (RKHS). Specifically, let {\displaystyle {\mathcal {F^{*}}}} be an RKHS, and {\displaystyle ||\cdot ||_{2}} be the norm on {\displaystyle {\mathcal {F^{*}}}} given by its inner product. It can be shown that {\displaystyle {\mathcal {F}}=\{f:||f||_{2}\leq \gamma \}} is a learnable class for any finite, positive {\displaystyle \gamma } . The empirical minimization algorithm in the dual form of this problem is {\displaystyle \arg \min _{f\in {\mathcal {F}}^{*}}\left\{\sum _{i=1}^{n}L(f(x_{i}),y_{i})+\lambda ||f||_{2}\right\}} This form was first introduced by Tikhonov to solve ill-posed problems. Many statistical learning algorithms can be expressed in this form (for example, the well-known ridge regression). The tradeoff between {\displaystyle (a)} and {\displaystyle (b)} in (2) is geometrically more intuitive with Tikhonov regularization in an RKHS. We can consider a sequence of classes {\displaystyle \{{\mathcal {F}}_{\gamma }\}} , which are essentially balls in {\displaystyle {\mathcal {F^{*}}}} centered at 0. As {\displaystyle \gamma } gets larger, {\displaystyle {\mathcal {F}}_{\gamma }} gets closer to the entire space, and {\displaystyle (b)} is likely to become smaller. However, we will also suffer slower convergence rates in {\displaystyle (a)} . The way to choose an optimal {\displaystyle \gamma } in
finite sample settings is usually through cross-validation. == Relationship to empirical process theory == Part {\displaystyle (a)} in (2) is closely linked to empirical process theory in statistics, where the empirical risks {\displaystyle \{\sum _{i=1}^{n}L(y_{i},f(x_{i})),f\in {\mathcal {F}}\}} are known as empirical processes. In this field, the function classes {\displaystyle {\mathcal {F}}} that satisfy the stochastic convergence are known as uniform Glivenko–Cantelli classes. It has been shown that, under certain regularity conditions, learnable classes and uniform Glivenko–Cantelli classes are equivalent. The interplay between {\displaystyle (a)} and {\displaystyle (b)} in the statistics literature is often known as the bias–variance tradeoff. However, note that an example of stochastic convex optimization for the General Setting of Learning has been given in which learnability is not equivalent to uniform convergence. == References ==
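The penalized (dual) Tikhonov form above can be illustrated with a deliberately minimal one-dimensional case: squared loss and the class of linear functions f(x) = wx, for which the penalized empirical risk minimizer has an elementary closed form. This is a sketch under those assumptions, not the general RKHS algorithm; the data values below are made up.

```python
# 1-D Tikhonov-regularized empirical risk minimization:
#   minimize over w:  sum_i (w*x_i - y_i)^2 + lam * w^2
# Setting the derivative to zero gives
#   w = (sum_i x_i*y_i) / (sum_i x_i^2 + lam).

def ridge_1d(xs, ys, lam):
    """Closed-form minimizer of the penalized empirical risk."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

# A larger lam corresponds to a smaller ball F_gamma: the fitted w shrinks
# toward 0, trading approximation error (b) for estimation stability (a).
for lam in (0.0, 1.0, 10.0):
    w = ridge_1d(xs, ys, lam)
    risk = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    print(f"lam={lam:5.1f}  w={w:.3f}  empirical risk={risk:.3f}")
```

Here |w| plays the role of the RKHS norm ||f||_2 from the article: increasing the penalty λ is equivalent to shrinking the radius γ of the learnable class being searched.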
Herb-induced liver injury (HILI) is a form of drug-induced liver injury caused by herbal medicines, typically herbal supplements or herb-based ethnomedicines. == Prevalence == Herbs are a common component of ethnomedicines and their potential hepatotoxicity is a concern for people taking such medicines or other herbal supplements. Use of such products is widespread within Ayurvedic medicine. Although injury from ayurvedic medicines has commonly been blamed on adulteration of drugs, a number of preparations can be harmful through direct effects purely because of their herbal ingredients. == Implicated products == Tinospora cordifolia supplement usage is of concern as the constituent chemicals cause hepatitis. Advocates of ayurveda have attempted to blame adulteration for these effects, but investigation has shown the toxicity to result directly from the medicine itself. Supplement use can lead to death or the need for a liver transplant. Herbal supplements of Cullen corylifolium are hepatotoxic, as constituent chemicals cause cholestatic hepatitis. People with liver problems or certain other comorbidities are at risk of death from supplement use. Turmeric is promoted with numerous claims of health benefits, with little or no supporting evidence. Although turmeric has low bioavailability, some supplements boost potency via a variety of preparation techniques. Turmeric supplements are hepatotoxic, and have caused a recorded rise in the incidence of liver injury. In Italy, the government has banned any claims of turmeric health benefits and mandated warnings for turmeric-based supplements. Withania somnifera (ashwagandha) is promoted within ayurveda for numerous claimed benefits which medical science has been unable to confirm. Use of ashwagandha supplements is responsible for a global rise in herb-induced liver injury. == See also == Alternative medicine == References ==
{ "page_id": 79891792, "source": null, "title": "Herb-induced liver injury" }
In organic chemistry, the Kumada coupling is a type of cross coupling reaction, useful for generating carbon–carbon bonds by the reaction of a Grignard reagent and an organic halide. The procedure uses transition metal catalysts, typically nickel or palladium, to couple a combination of two alkyl, aryl or vinyl groups. The groups of Robert Corriu and Makoto Kumada reported the reaction independently in 1972. The reaction is notable for being among the first reported catalytic cross-coupling methods. Despite the subsequent development of alternative reactions (Suzuki, Sonogashira, Stille, Hiyama, Negishi), the Kumada coupling continues to be employed in many synthetic applications, including the industrial-scale production of aliskiren, a hypertension medication, and polythiophenes, useful in organic electronic devices. == History == The first investigations into the catalytic coupling of Grignard reagents with organic halides date back to the 1941 study of cobalt catalysts by Morris S. Kharasch and E. K. Fields. In 1971, Tamura and Kochi elaborated on this work in a series of publications demonstrating the viability of catalysts based on silver, copper and iron. However, these early approaches produced poor yields due to substantial formation of homocoupling products, where two identical species are coupled. These efforts culminated in 1972, when the Corriu and Kumada groups concurrently reported the use of nickel-containing catalysts. With the introduction of palladium catalysts in 1975 by the Murahashi group, the scope of the reaction was further broadened. Subsequently, many additional coupling techniques have been developed, culminating in the 2010 Nobel Prize in Chemistry, which recognized Ei-ichi Negishi, Akira Suzuki and Richard F. Heck for their contributions to the field. 
== Mechanism == === Palladium catalysis === According to the widely accepted mechanism, the palladium-catalyzed Kumada coupling is understood to be analogous to palladium's role in other cross coupling reactions. The proposed catalytic cycle involves both palladium(0)
{ "page_id": 10685778, "source": null, "title": "Kumada coupling" }
and palladium(II) oxidation states. Initially, the electron-rich Pd(0) catalyst (1) inserts into the R–X bond of the organic halide. This oxidative addition forms an organo-Pd(II) complex (2). Subsequent transmetalation with the Grignard reagent forms a hetero-organometallic complex (3). Before the next step, isomerization is necessary to bring the organic ligands next to each other into mutually cis positions. Finally, reductive elimination of (4) forms a carbon–carbon bond and releases the cross-coupled product while regenerating the Pd(0) catalyst (1). For palladium catalysts, the oxidative addition, which is frequently rate-determining, occurs more slowly than in nickel catalyst systems. === Nickel catalysis === Current understanding of the mechanism for the nickel-catalyzed coupling is limited. Indeed, the reaction mechanism is believed to proceed differently under different reaction conditions and when using different nickel ligands. In general the mechanism can still be described as analogous to the palladium scheme. Under certain reaction conditions, however, the mechanism fails to explain all observations. Examination by Vicic and coworkers using a tridentate terpyridine ligand identified intermediates of a Ni(II)-Ni(I)-Ni(III) catalytic cycle, suggesting a more complicated scheme. Additionally, with the addition of butadiene, the reaction is believed to involve a Ni(IV) intermediate. == Scope == === Organic halides and pseudohalides === The Kumada coupling has been successfully demonstrated for a variety of aryl or vinyl halides. In place of the halide reagent, pseudohalides can also be used, and the coupling has been shown to be quite effective using tosylate and triflate species in a variety of conditions. Despite broad success with aryl and vinyl couplings, the use of alkyl halides is less general due to several complicating factors. 
Having no π-electrons, alkyl halides require different oxidative addition mechanisms than aryl or vinyl groups, and these processes are currently poorly understood. Additionally, the presence of β-hydrogens makes alkyl halides susceptible to competitive elimination
processes. These issues have been circumvented by the presence of an activating group, such as the carbonyl in α-bromoketones, that drives the reaction forward. However, Kumada couplings have also been performed with non-activated alkyl chains, often through the use of additional catalysts or reagents. For instance, with the addition of 1,3-butadienes, Kambe and coworkers demonstrated nickel-catalyzed alkyl–alkyl couplings that would otherwise be unreactive. Though poorly understood, the mechanism of this reaction is proposed to involve the formation of an octadienyl nickel complex. This catalyst is proposed to undergo transmetalation with a Grignard reagent first, prior to the reductive elimination of the halide, reducing the risk of β-hydride elimination. However, the presence of a Ni(IV) intermediate is contrary to mechanisms proposed for aryl or vinyl halide couplings. === Grignard reagent === Couplings involving aryl and vinyl Grignard reagents were reported in the original publications by Kumada and Corriu. Alkyl Grignard reagents can also be used without difficulty, as they do not suffer from β-hydride elimination processes. Although the Grignard reagent inherently has poor functional group tolerance, low-temperature syntheses have been developed with highly functionalized aryl groups. === Catalysts === Kumada couplings can be performed with a variety of nickel(II) or palladium(II) catalysts. The structures of the catalytic precursors can be generally formulated as ML2X2, where L is a phosphine ligand. Common choices for L2 include bidentate diphosphine ligands such as dppe and dppp, among others. Work by Alois Fürstner and coworkers on iron-based catalysts has shown reasonable yields. The catalytic species in these reactions is proposed to be an "inorganic Grignard reagent" consisting of Fe(MgX)2. === Reaction conditions === The reaction typically is carried out in tetrahydrofuran or diethyl ether as solvent. 
Such ethereal solvents are convenient because these are typical solvents for generating the Grignard reagent. Due to the
high reactivity of the Grignard reagent, Kumada couplings have limited functional group tolerance, which can be problematic in large syntheses. In particular, Grignard reagents are sensitive to protonolysis from even mildly acidic groups such as alcohols. They also add to carbonyls and other oxidative groups. As in many coupling reactions, the transition metal palladium catalyst is often air-sensitive, requiring an inert argon or nitrogen reaction environment. A sample synthetic preparation is available at the Organic Syntheses website. == Selectivity == === Stereoselectivity === Couplings of both cis- and trans-olefin halides with alkyl Grignard reagents proceed with overall retention of geometric configuration. This observation is independent of other factors, including the choice of catalyst ligands and vinylic substituents. Conversely, a Kumada coupling using vinylic Grignard reagents proceeds without stereospecificity to form a mixture of cis- and trans-alkenes. The degree of isomerization is dependent on a variety of factors, including reagent ratios and the identity of the halide group. According to Kumada, this loss of stereochemistry is attributable to side reactions between two equivalents of the allylic Grignard reagent. === Enantioselectivity === Asymmetric Kumada couplings can be effected through the use of chiral ligands. Using planar chiral ferrocene ligands, enantiomeric excesses (ee) upward of 95% have been observed in aryl couplings. More recently, Gregory Fu and co-workers have demonstrated enantioconvergent couplings of α-bromoketones using catalysts based on bis-oxazoline ligands, wherein the chiral catalyst converts a racemic mixture of starting material to one enantiomer of product with up to 95% ee. The latter reaction is also significant for involving a traditionally inaccessible alkyl halide coupling. === Chemoselectivity === Grignard reagents do not typically couple with chlorinated arenes. 
This low reactivity is the basis for chemoselectivity for nickel insertion into the C–Br bond of bromochlorobenzene using a NiCl2-based catalyst. == Applications == === Synthesis of aliskiren
=== The Kumada coupling is suitable for large-scale, industrial processes, such as drug synthesis. The reaction is used to construct the carbon skeleton of aliskiren (trade name Tekturna), a treatment for hypertension. === Synthesis of polythiophenes === The Kumada coupling also shows promise in the synthesis of conjugated polymers such as polyalkylthiophenes (PATs), which have a variety of potential applications in organic solar cells and light-emitting diodes. In 1992, McCullough and Lowe developed the first synthesis of regioregular polyalkylthiophenes by utilizing a Kumada coupling scheme that requires subzero temperatures. Since this initial preparation, the synthesis has been improved to obtain higher yields and operate at room temperature. == See also == Heck reaction Hiyama coupling Suzuki reaction Negishi coupling Petasis reaction Stille reaction Sonogashira coupling Murahashi coupling == Citations ==
Brian David Josephson (born 4 January 1940) is a Welsh condensed matter physicist and a professor emeritus of physics at the University of Cambridge. Best known for his pioneering work on superconductivity and quantum tunnelling, he shared the 1973 Nobel Prize in Physics with Leo Esaki and Ivar Giaever for his discovery of the Josephson effect, made in 1962 when he was a 22-year-old PhD student at Cambridge. Josephson has spent his academic career as a member of the Theory of Condensed Matter group at Cambridge's Cavendish Laboratory. He has been a fellow of Trinity College, Cambridge since 1962, and served as professor of physics from 1974 until 2007. In the early 1970s, Josephson took up transcendental meditation and turned his attention to issues outside the boundaries of mainstream science. He set up the Mind–Matter Unification Project at Cavendish to explore the idea of intelligence in nature, the relationship between quantum mechanics and consciousness, and the synthesis of science and Eastern mysticism, broadly known as quantum mysticism. He has expressed support for topics such as parapsychology, water memory and cold fusion, which has made him a focus of criticism from fellow scientists. == Early life and career == === Education === Josephson was born in Cardiff, Wales, on 4 January 1940 to Jewish parents, Mimi (née Weisbard, 1911–1998) and Abraham Josephson. He attended Cardiff High School, where he credits some of the school masters for having helped him, particularly the physics master, Emrys Jones, who introduced him to theoretical physics. In 1957, he went up to Cambridge, where he initially read mathematics at Trinity College, Cambridge. After completing Maths Part II in two years, and finding it somewhat sterile, he decided to switch to physics. Josephson was known at Cambridge as a brilliant but shy student. Physicist John Waldram
{ "page_id": 396631, "source": null, "title": "Brian Josephson" }
recalled overhearing Nicholas Kurti, an examiner from Oxford, discuss Josephson's exam results with David Shoenberg, reader in physics at Cambridge, and asking: "Who is this chap Josephson? He seems to be going through the theory like a knife through butter." While still an undergraduate, he published a paper on the Mössbauer effect, pointing out a crucial issue other researchers had overlooked. According to one eminent physicist speaking to Physics World, Josephson wrote several papers important enough to assure him a place in the history of physics even without his discovery of the Josephson effect. He graduated in 1960 and became a research student in the university's Mond Laboratory on the old Cavendish site, where he was supervised by Brian Pippard. American physicist Philip Anderson, also a future Nobel Prize laureate, spent a year in Cambridge in 1961–1962, and recalled that having Josephson in a class was "a disconcerting experience for a lecturer, I can assure you, because everything had to be right or he would come up and explain it to me after class." It was during this period, as a PhD student in 1962, that he carried out the research that led to his discovery of the Josephson effect; the Cavendish Laboratory unveiled a plaque on the Mond Building dedicated to the discovery in November 2012. He was elected a fellow of Trinity College in 1962, and obtained his PhD in 1964 for a thesis entitled Non-linear conduction in superconductors. === Discovery of the Josephson effect === Josephson was 22 years old when he did the work on quantum tunnelling that won him the Nobel Prize. He discovered that a supercurrent could tunnel through a thin barrier, predicting, according to physicist Andrew Whitaker, that "at a junction of two superconductors, a current will flow even if there is no
drop in voltage; that when there is a voltage drop, the current should oscillate at a frequency related to the drop in voltage; and that there is a dependence on any magnetic field." This became known as the Josephson effect and the junction as a Josephson junction. His calculations were published in Physics Letters (chosen by Pippard because it was a new journal) in a paper entitled "Possible new effects in superconductive tunnelling," received on 8 June 1962 and published on 1 July. They were confirmed experimentally by Philip Anderson and John Rowell of Bell Labs in Princeton; this appeared in their paper, "Probable Observation of the Josephson Superconducting Tunneling Effect," submitted to Physical Review Letters in January 1963. Before Anderson and Rowell confirmed the calculations, the American physicist John Bardeen, who had shared the 1956 Nobel Prize in Physics (and who shared it again in 1972), objected to Josephson's work. He submitted an article to Physical Review Letters on 25 July 1962, arguing that "there can be no such superfluid flow." The disagreement led to a confrontation in September that year at Queen Mary College, London, at the Eighth International Conference on Low Temperature Physics. When Bardeen (then one of the most eminent physicists in the world) began speaking, Josephson (still a student) stood up and interrupted him. The men exchanged views, reportedly in a civil and soft-spoken manner. See also: John Bardeen § Josephson effect controversy. Whitaker writes that the discovery of the Josephson effect led to "much important physics," including the invention of SQUIDs (superconducting quantum interference devices), which are used in geology to make highly sensitive measurements, as well as in medicine and computing. IBM used Josephson's work in 1980 to build a prototype of a computer that would be up to 100 times faster than
the IBM 3033 mainframe. === Nobel Prize === Josephson was awarded several important prizes for his discovery, including the 1969 Research Corporation Award for outstanding contributions to science, and the Hughes Medal and Holweck Prize in 1972. In 1973 he won the Nobel Prize in Physics, sharing the $122,000 award with two other scientists who had also worked on quantum tunnelling. Josephson was awarded half the prize "for his theoretical predictions of the properties of a supercurrent through a tunnel barrier, in particular those phenomena which are generally known as the Josephson effects". The other half of the award was shared equally by Japanese physicist Leo Esaki of the Thomas Watson Research Center in Yorktown, New York, and Norwegian-American physicist Ivar Giaever of General Electric in Schenectady, New York. === Positions held === Josephson spent a postdoctoral year in the United States (1965–1966) as research assistant professor at the University of Illinois at Urbana–Champaign. After returning to Cambridge, he was made assistant director of research at the Cavendish Laboratory in 1967, where he remained a member of the Theory of Condensed Matter group, a theoretical physics group, for the rest of his career. He was elected a Fellow of the Royal Society (FRS) in 1970, and the same year was awarded a National Science Foundation fellowship by Cornell University, where he spent one year. In 1972 he became a reader in physics at Cambridge and in 1974 a full professor, a position he held until he retired in 2007. A practitioner of Transcendental Meditation (TM) since the early seventies, Josephson became a visiting faculty member in 1975 of the Maharishi European Research University in the Netherlands, part of the TM movement. He also held visiting professorships at Wayne State University in 1983, the Indian Institute of Science, Bangalore in 1984,
and the University of Missouri-Rolla in 1987. == Parapsychology == === Early interest and Transcendental Meditation === Josephson became interested in philosophy of mind in the late sixties and, in particular, in the mind–body problem, and is one of the few scientists to argue that parapsychological phenomena (telepathy, psychokinesis and other paranormal themes) may be real. In 1971, he began practising Transcendental Meditation (TM), which had been taken up by several celebrities, including the Beatles. Winning the Nobel Prize in 1973 gave him the freedom to work in less orthodox areas, and he became increasingly involved – including during science conferences, to the irritation of fellow scientists – in talking about meditation, telepathy and higher states of consciousness. In 1974, he angered scientists during a colloquium of molecular and cellular biologists in Versailles by inviting them to read the Bhagavad Gita (5th – 2nd century BCE) and the work of Maharishi Mahesh Yogi, the founder of the TM movement, and by arguing about special states of consciousness achieved through meditation. "Nothing forces us," one scientist shouted at him, "to listen to your wild speculations." Biophysicist Henri Atlan wrote that the session ended in uproar. In May that year, Josephson addressed a symposium held to welcome the Maharishi to Cambridge. The following month, at the first Canadian conference on psychokinesis, he was one of 21 scientists who tested claims by Matthew Manning, a Cambridgeshire teenager who said he had psychokinetic abilities; Josephson apparently told a reporter that he believed Manning's powers were a new kind of energy. He later withdrew or corrected the statement. Josephson said that Trinity College's tradition of interest in the paranormal meant that he did not dismiss these ideas out of hand. Several presidents of the Society for Psychical Research had been fellows of Trinity, and the
Perrott-Warrick Fund, set up in Trinity in 1937 to fund parapsychology research, is still administered by the college. He continued to explore the idea that there is intelligence in nature, particularly after reading Fritjof Capra's The Tao of Physics (1975), and in 1979 took up a more advanced form of TM, known as the TM-Sidhi program. According to Anderson, the TM movement produced a poster showing Josephson levitating several inches above the floor. Josephson argued that meditation could lead to mystical and scientific insights, and that, as a result of it, he had come to believe in a creator. === Fundamental Fysiks Group === Josephson became involved in the mid-1970s with a group of physicists associated with the Lawrence Berkeley Laboratory at the University of California, Berkeley, who were investigating paranormal claims. They had organized themselves loosely into the Fundamental Fysiks Group, and had effectively become the Stanford Research Institute's (SRI) "house theorists," according to historian of science David Kaiser. Core members in the group were Elizabeth Rauscher, George Weissmann, John Clauser, Jack Sarfatti, Saul-Paul Sirag, Nick Herbert, Fred Alan Wolf, Fritjof Capra, Henry Stapp, Philippe Eberhard and Gary Zukav. There was significant government interest at the time in quantum mechanics – the American government was financing research at SRI into telepathy – and physicists able to understand it found themselves in demand. The Fundamental Fysiks Group used ideas from quantum physics, particularly Bell's theorem and quantum entanglement, to explore issues such as action at a distance, clairvoyance, precognition, remote viewing and psychokinesis. In 1976, Josephson travelled to California at the invitation of one of the Fundamental Fysiks Group members, Jack Sarfatti, who introduced him to others including laser physicists Russell Targ and Harold Puthoff, and quantum physicist Henry Stapp. The San Francisco Chronicle covered Josephson's visit. 
Josephson co-organized a
symposium on consciousness at Cambridge in 1978, publishing the proceedings as Consciousness and the Physical World (1980), with neuroscientist V. S. Ramachandran. A conference on "Science and Consciousness" followed a year later in Cordoba, Spain, attended by physicists and Jungian psychoanalysts, and addressed by Josephson, Fritjof Capra and David Bohm (1917–1992). By 1996, he had set up the Mind–Matter Unification Project at the Cavendish Laboratory to explore intelligent processes in nature. In 2002, he told Physics World: "Future science will consider quantum mechanics as the phenomenology of particular kinds of organised complex system. Quantum entanglement would be one manifestation of such organisation, paranormal phenomena another." === Reception and views on the scientific community === Josephson delivered the Pollock Memorial Lecture in 2006, the Hermann Staudinger Lecture in 2009 and the Sir Nevill Mott Lecture in 2010. Matthew Reisz wrote in Times Higher Education in 2010 that Josephson has long been one of physics' "more colourful figures." His support for unorthodox causes has attracted criticism from fellow scientists since the 1970s, including from Philip Anderson. Josephson regards the criticism as prejudice, and believes that it has served to deprive him of an academic support network. He has repeatedly criticized "science by consensus," arguing that the scientific community is too quick to reject certain kinds of ideas. "Anything goes among the physics community – cosmic wormholes, time travel," he argues, "just so long as it keeps its distance from anything mystical or New Age-ish." Referring to this position as "pathological disbelief," he holds it responsible for the rejection by academic journals of papers on the paranormal. 
He has compared parapsychology to the theory of continental drift, proposed in 1912 by Alfred Wegener (1880–1930) to explain observations that were otherwise inexplicable, which was resisted and ridiculed until evidence led to its acceptance after
Wegener's death. Science writer Martin Gardner criticized Josephson in 1980 for complaining to The New York Review of Books, along with three other physicists, about an article by J. A. Wheeler that ridiculed parapsychology. Several physicists complained in 2001 when, in a Royal Mail booklet celebrating the Nobel Prize's centenary, Josephson wrote that Britain was at the forefront of research into telepathy. Physicist David Deutsch said the Royal Mail had "let itself be hoodwinked" into supporting nonsense, although another physicist, Robert Matthews, suggested that Deutsch was skating on thin ice given the latter's own work on parallel universes and time travel. In 2004, Josephson criticized an experiment by the Committee for Skeptical Inquiry to test claims by Russian schoolgirl Natasha Demkina that she could see inside people's bodies using a special kind of vision. The experiment involved her being asked to match six people to their confirmed medical conditions (plus one with none); to pass the test she had to make five correct matches, but made only four. Josephson argued that this was statistically significant, and that the experiment had set her up to fail. One of the researchers, Richard Wiseman, professor of psychology at the University of Hertfordshire, responded by highlighting that the conditions of the experiment had been agreed to before it started, and the potential significance of her claims warranted a higher than normal bar. Keith Rennolis, professor of applied statistics at the University of Greenwich, supported Josephson's position, asserting that the experiment was "woefully inadequate" to determine any effect. Josephson's reputation for promoting unorthodox causes was cemented by his support for the ideas of water memory and cold fusion, both of which are rejected by mainstream scientists. Water memory is purported to provide a possible explanation for homeopathy; it is dismissed by a majority of scientists
as pseudoscience, although Josephson has expressed support for it since attending a conference at which French immunologist Jacques Benveniste first proposed it. Cold fusion is the hypothesis that nuclear reactions can occur at room temperature. When Martin Fleischmann, the British chemist who pioneered research into it, died in 2012, Josephson wrote a supportive obituary in the Guardian, and had a letter published in Nature complaining that its obituary had failed to give Fleischmann due credit. Antony Valentini of Imperial College London withdrew Josephson's invitation to a 2010 conference on the de Broglie-Bohm theory because of his work on the paranormal, although it was reinstated after complaints. Josephson's defense of paranormal claims and of cold fusion has led to him being described as an exemplar of the hypothetical Nobel disease. == Awards == == Selected works == == See also == Josephson voltage standard Josephson vortex Long Josephson junction Pi Josephson junction Phi Josephson junction List of Jewish Nobel laureates List of Nobel laureates in Physics List of physicists Scientific phenomena named after people == References == == Further reading == Brian Josephson's home page, University of Cambridge. Brian Josephson, academia.edu. "bdj50: Conference in Cambridge to mark the 50th Anniversary of the Publication of Brian Josephson's Seminal Work", Department of Physics, University of Cambridge. Anderson, Philip. "How Josephson Discovered His Effect", Physics Today, November 1970. Anderson's account of Josephson's discovery; he taught the graduate course in solid-state/many-body theory in which Josephson was a student. Barone, A. and Paterno, G. Physics and Applications of the Josephson Effect, Wiley, 1982. Bertlmann, R. A. and Zeilinger, A. (eds.), Quantum (Un)speakables: From Bell to Quantum Information, Springer, 2002. Buckel, Werner and Kleiner, Reinhold. Superconductivity: Fundamentals and Applications, VCH, 1991. 
Jibu, Mari and Yasue, Kunio. Quantum Brain Dynamics and Consciousness: An Introduction, John
Benjamins Publishing, 1995. Josephson, Brian; Rubik, Beverly A.; Fontana, David; Lorimer, David. "Defining consciousness", Nature, 358(618), 20 August 1992. Rosen, Joe. "Josephson, Brian David," Encyclopedia of Physics, Infobase Publishing, 2009, pp. 165–166. Stapp, Henry. "Quantum Approaches to Consciousness," in Philip David Zelazo, Morris Moscovitch and Evan Thompson (eds.), The Cambridge Handbook of Consciousness, 2007. Stenger, Victor J. The Unconscious Quantum: Metaphysics in Modern Physics and Cosmology, Prometheus Books, 1995. == External links == Brian Josephson on Nobelprize.org including the Nobel Lecture, 12 December 1973 The Discovery of Tunnelling Supercurrents
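As a closing numerical illustration of the effect discussed above: the article quotes the prediction that, when there is a voltage drop across a Josephson junction, the supercurrent oscillates at a frequency related to that voltage. The relationship is linear, f = 2eV/h, which is the basis of the Josephson voltage standard listed in the See-also section. The sketch below uses the exact SI values of e and h; the one-microvolt input is just an illustrative example:

```python
# Josephson frequency: f = 2eV/h, the AC oscillation frequency of the
# supercurrent across a junction held at a DC voltage V.
E_CHARGE = 1.602176634e-19  # elementary charge e in coulombs (exact, 2019 SI)
PLANCK = 6.62607015e-34     # Planck constant h in joule-seconds (exact, 2019 SI)

def josephson_frequency(voltage_volts: float) -> float:
    """Oscillation frequency (Hz) of the supercurrent at a given junction voltage."""
    return 2 * E_CHARGE * voltage_volts / PLANCK

# Even a microvolt across the junction gives a microwave-range frequency:
f = josephson_frequency(1e-6)
print(f"{f / 1e6:.1f} MHz")  # about 483.6 MHz per microvolt
```

The proportionality constant 2e/h (about 483.6 GHz per millivolt) depends only on fundamental constants, which is why the effect can serve as a voltage standard.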
Paracatenula is a genus of millimeter-sized free-living marine gutless catenulid flatworms. Paracatenula spp. are found worldwide in warm temperate to tropical subtidal sediments. They are part of the interstitial meiofauna of sandy sediments. Adult Paracatenula lack a mouth and a gut and are associated with intracellular symbiotic alphaproteobacteria of the genus Candidatus Riegeria. The symbionts are housed in bacteriocytes in a specialized organ, the trophosome (Greek τροφος trophos ‘food’). Ca. Riegeria can make up half of the worms' biomass. The beneficial symbiosis with the carbon dioxide-fixing and sulfur-oxidizing endosymbionts allows the marine flatworm to live in nutrient-poor environments. The symbionts not only provide the nutrition but also maintain the primary energy reserves in the symbiosis. == Diversity == Five species of Paracatenula have been described—P. erato, P. kalliope, P. polyhymnia, P. urania and P. galateia, named after muses and nymphs of Greek mythology. Several more species have been morphologically and molecularly identified, but are not formally described. The best studied species are P. galateia from the Belize barrier reef and a yet undescribed species P. sp. santandrea from the Italian island of Elba. == Distribution == Paracatenula are globally distributed in warm temperate to tropical regions and have been collected from Belize (Caribbean Sea), Egypt (Red Sea), Australia (Pacific Ocean) and Italy (Mediterranean Sea). They occur in the oxic-anoxic interface of subtidal sands and have been found in water depths up to 40 m. == Anatomy == Paracatenula can reach a length of up to 15 mm and a width of 0.4 mm. Several larger species of Paracatenula, such as P. galateia, are flattened like a leaf, while all smaller species are round. All Paracatenula species examined so far were found to harbor bacterial symbionts in specialized symbiont-housing cells that form the nutritive organ - the
{ "page_id": 43978074, "source": null, "title": "Paracatenula" }
trophosome. The frontal part of the worms—the rostrum—is transparent and bacteria-free, and houses the brain, while the trophosome region appears white due to light-refracting inclusions in the bacterial symbionts. Some species of Paracatenula such as P. galateia possess a statocyst with a single statolith. == Life cycle and reproduction == Although Paracatenula produce sperm and eggs that can be very informative for differentiating between species, sexual reproduction has not been observed. Instead, the worms reproduce by asexual fission or fragmentation, a process called paratomy. Paracatenula worms have high regenerative capabilities and can regenerate a lost head, including the brain, within 10–14 days. The bacteriocytes of dividing worms are split during the fission process and the population of symbiotic bacteria is distributed to the two daughter individuals. == Host–symbiont relationship == Paracatenula host their symbionts within bacteriocytes in the trophosome. These bacteria, named Ca. Riegeria, belong to the lineage of Alphaproteobacteria, forming a monophyletic group within the order Rhodospirillales and the family Rhodospirillaceae. The co-speciation between host and bacteria suggests a strict vertical transmission of the bacteria in which the endosymbionts are directly transferred from parents to their offspring. The symbiosis is shown to be beneficial for both partners. The lack of both a gut lumen and a mouth indicates that the host derives most of its nutrition from its symbionts, which have the potential for carbon dioxide fixation and sulfur oxidation. In return, the host provides its symbionts with a stable supply of electron donors and acceptors such as sulfide and oxygen in a dynamic and heterogeneous environment. Furthermore, symbionts living intracellularly in the worms are protected from predation as well as competition for nutrients by other bacteria. == Symbiont physiology == Despite having a reduced genome with roughly 1400 genes, Ca. 
Riegeria symbionts have maintained a broad physiological repertoire, which
stands in contrast to all other reduced symbionts vertically transmitted for hundreds of millions of years. Ca. R. santandreae symbionts fix carbon dioxide, store carbon in multiple storage compounds and produce all necessary building blocks for cellular life, including sugars, nucleotides, amino acids, vitamins and co-factors. == Host provisioning == Paracatenula lack a mouth and gut, and are nutritionally dependent on their symbionts. In all other chemosynthetic symbioses the host acquires its nutrition by digesting its symbionts. In contrast, in Paracatenula, the symbionts cater to their host by secreting outer-membrane vesicles (OMVs), and symbiont digestion is rare. With their massive storage capabilities and their way of providing nutrition via OMVs, the symbionts have been suggested to form a ‘bacterial liver’ and a peculiar ‘battery’ in the integrated Paracatenula symbiosis. == References ==
Display behaviour is a set of ritualized behaviours that enable an animal to communicate to other animals (typically of the same species) about specific stimuli. Such ritualized behaviours can be visual, but many animals depend on a mixture of visual, audio, tactile and chemical signals. Evolution has tailored these stereotyped behaviours to allow animals to communicate both conspecifically and interspecifically, which allows for a broader connection in different niches in an ecosystem. It is connected to sexual selection and survival of the species in various ways. Typically, display behaviour is used for courtship between two animals and to signal to the female that a viable male is ready to mate. In other instances, species may make territorial displays in order to preserve a foraging or hunting territory for their family or group. A third form is exhibited by tournament species, in which males will fight in order to gain the 'right' to breed. Animals from a broad range of evolutionary hierarchies make use of display behaviours, from invertebrates such as the simple jumping spider to more complex vertebrates like the harbour seal. == In animals == === Invertebrates === ==== Insects ==== Communication is important for animals throughout the animal kingdom. For example, since female praying mantids are sexually cannibalistic, the male typically uses a cryptic form of display. This is a series of creeping movements executed by the male as it approaches the female, with freezing whenever the female looks towards the male. However, according to laboratory studies conducted by Loxton in 1979, one type of mantis, Ephestiasula arnoena, shows both male and female counterparts performing overt and ritualized behaviour before mating. Both displayed a semaphore behaviour, meaning waving their front legs in a boxing fashion before the slow approach of the male from behind. This semaphore display
{ "page_id": 5770589, "source": null, "title": "Display (zoology)" }
communicates that both are ready for copulation. Flies belonging to the genus Megaselia also show such behaviour. Contrary to the typically female-selected mating that occurs for most organisms, these flies have females that show the display behaviour and males that choose the mate. Females have a bright orange colouring that attracts the male and also perform a series of fluttering wing movements that make the insect appear to "dance" and make the openings on their abdomens swell in order to attract a male. There is experimental evidence that implies the female may also release pheromones that attract the male; this is an instance of chemical display behaviour that plays a large role in animal communication. Auditory courtship behavior is seen in fruit flies like Anastrepha suspensa, which perform calling and pre-copulatory songs before mating. Both of these sounds are created by rapid flapping of the male's wings. ==== Arachnids ==== Many arachnids show ritualized displays. For example, the arachnid family Salticidae consists of jumping spiders with keen vision, which results in very clear display behaviours for courting in particular. Salticids are very similar in appearance to ants that live in the same area and therefore use their appearance to avoid predators. Since this similarity in appearance is so obvious, salticid spiders can use display behaviours to communicate both with members of their own species and with the ants that they mimic. === Vertebrates === ==== Birds ==== Birds commonly use displays for courtship and communication. Manakin birds (in the family Pipridae) in the Amazon undergo large demonstrations of display behaviour in order to court females in the population. Since males provide no other immediate benefit to females, they must undergo ritualized behaviours in order to show their fitness to possible mates; the female then uses
the information she gathers from this interaction to make a decision on who she will mate with. This display behaviour consists of various flight patterns, wing and colour displays, and particular vocalizations. ==== Mammals ==== Along with invertebrates and birds, mammals such as the harbour seal also show display behaviour. Since the harbour seal resides in an aquatic environment, the display behaviours expressed are slightly different from those seen in terrestrial mammal species. Male harbour seals perform specific vocalization and diving behaviours to demonstrate their fitness to possible mates. As seals are distributed over such a large area, these display behaviours can vary slightly by geography as males try to appeal to the largest number of females possible over a large geographical range. Dive displays, head flicks, and various vocalizations all work together in a display behaviour that signifies to the females in a colony that the males are ready to mate. == Factors influencing displays == Display is a set of conspicuous behaviours that allows for the attraction of mates but can also result in the attraction of predators. As a result, animals have certain environmental and social cues that they can use to decide when is the most beneficial time to show such behaviours; they use these triggers to minimize cost (predator avoidance) and maximize gain (mate attraction). The first factor is temporal. Depending on the time of the season, animals (tropical frogs, in one study) show strong seasonal trends in display behaviour favouring times closer to the beginning of the mating season. This is plausible as it allows the most time for the attraction of a mate, and the decline in calling towards the end of the season is also expected because most organisms will have a mate by then and not have any need to continue
such display behaviour. Depending upon the species and evolutionary histories, environmental factors such as temperature, elevation, and precipitation can affect the presence of these behaviours. Along with environmental cues, social cues can also play a role in the demonstration of display behaviour. For example, aggressive display behaviour in the crayfish Orconectes virilis tends to be triggered by impositions of other crayfish on previously established territory. Such displays consist of a preliminary raising of the claws four to five times, and if this is not sufficient to warn the other not to encroach on the territory, then tactile engagement will occur. In this case, display behaviour is a preliminary step to the engagement of aggressive tactile behaviour, whereas many cases of display behaviour result in the engagement of mating rituals. == In humans == Human men advertise their suitability as mates by signalling their status in the social hierarchy, often by acquiring wealth or fame. The Papuan big men of New Guinea staged elaborate feasts to show the extent of their influence and power. The potlatches of the Pacific Northwest were held for much the same effect. == Tournament species == Tournament species in zoology are those species in which members of one sex (usually males) compete in order to mate. In tournament species, the reproductive success of the small group of competition winners is predominantly higher than that of the large group of losers. Tournament species are characterized by fierce same-sex fighting. Significantly larger or better-armed individuals in these species have an advantage, but only in the competing sex. Thus, most tournament species have high sexual dimorphism. Examples of tournament species include grouse, peafowl, lions, mountain gorillas and elephant seals. In some species, members of the competing sex come together in special display areas called leks. In other species, competition
is more direct, in the form of fighting between males. In a small number of species, females compete for males; these include species of jacana, species of phalarope, and the spotted hyena. In all these cases, the female of the species shows traits that help in same-sex battles: larger bodies, aggressiveness, territorialism. Even maintenance of a multiple-male "harem" is sometimes seen in these animals. Most species fall on a continuum between tournament species and pair-bonding species. == See also == Aposematism Lek mating Mating Sexual selection Stotting Threat display == References ==
Environmental analysis is the use of examination and statistical methods to study the chemical and biological factors that determine the quality of an environment. The purpose of this is commonly to monitor and study levels of pollutants in the atmosphere, rivers and other specific settings. It is also used to monitor levels of natural and chemical components. Other environmental analysis techniques include biological surveys or biosurveys, soil analysis or soil tests, vegetation surveys, tree identification, and remote sensing, which uses satellite imagery to assess the environment on different spatial scales. == Analysis techniques == Chemical analysis typically involves sampling some part of the environment and using lab equipment to determine how much of a target compound is present. Chemical analysis may be used to assess pollution levels for remediation, or to make sure groundwater is safe for drinking. Biological surveys typically include a measurement of the abundance of a certain species within a certain area to gather information about the ecosystem. Analysis like this could be used in efforts to understand species abundance, or to look at how external effects from the environment are affecting an ecosystem. Soil tests may involve chemical analysis, but most often they involve removing a section of soil to understand what each layer is composed of. Soil samples might be needed to determine whether a site is suitable for building, to produce a model of an area, or to estimate possible crop production given nutrient levels. Vegetation surveys are quite similar to biosurveys: they measure the abundance of plant species and trees within a specific area to understand more about the ecosystem. These are sometimes done to understand ecological effects from outside factors, or to determine overall
{ "page_id": 24972641, "source": null, "title": "Environmental analysis" }
ecosystem health. Remote sensing can be used for environmental analysis by taking imagery shot by satellites in multiple wavelengths to assess areas at different scales for a given objective. Remote sensing can be used to identify land use, to determine damage from forest fires, for weather systems and meteorology, and for studies of atmospheric composition. Recent advances in the field of remote sensing have also led to the development of autonomous devices that use sensors to analyse physical and chemical parameters of the environment. == References ==
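The multi-wavelength satellite analysis described above can be illustrated with one widely used computation, the Normalized Difference Vegetation Index (NDVI), which contrasts near-infrared and red reflectance to estimate vegetation cover. The sketch below is a minimal example with NumPy; the function name and the 2×2 reflectance values are invented for illustration:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index per pixel: (NIR - Red) / (NIR + Red).

    Values near +1 indicate dense vegetation; values near 0 or below
    indicate bare soil, water, or built surfaces.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    # Guard against division by zero on pixels where both bands are 0.
    denom = np.where(nir + red == 0, 1.0, nir + red)
    return (nir - red) / denom

# Hypothetical 2x2 reflectance tiles: top row vegetated, bottom row bare/water.
nir_band = np.array([[0.50, 0.45], [0.10, 0.05]])
red_band = np.array([[0.08, 0.10], [0.09, 0.06]])
print(ndvi(nir_band, red_band))
```

Applied to a full multispectral scene rather than a toy array, the same per-pixel arithmetic yields a vegetation map, which is one way satellite imagery supports land-use and forest-damage assessments.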
Caesium oxide (IUPAC name), or cesium oxide, describes inorganic compounds composed of caesium and oxygen. Several binary (containing only Cs and O) oxides of caesium are known. Caesium oxide may refer to: Caesium suboxides (Cs7O, Cs4O, and Cs11O3) Caesium monoxide (Cs2O, the most common oxide) Caesium peroxide (Cs2O2) Caesium sesquioxide (Cs2O3) Caesium superoxide (CsO2) Caesium ozonide (CsO3) == References ==
{ "page_id": 68881763, "source": null, "title": "Caesium oxide" }
Ceration is a chemical process, a common practice in alchemy. It is performed by continuously adding a liquid by imbibition to a hard, dry substance while it is heated. Typically, this treatment makes the substance softer, more like molten wax (cera in Latin). Pseudo-Geber's Summa Perfectionis explains that ceration is "the mollification of an hard thing, not fusible unto liquefaction", and stresses the importance of correct humidity in the process. Antoine-Joseph Pernety's 1787 mytho-Hermetic dictionary defines it somewhat differently as the time when matter passes from black to gray, and then to white. Continuous cooking effects this change. Ceration may be synonymous with similar terms for alchemical burning processes. Incineration, for example, is listed by Manly P. Hall. == See also == Calcination Incineration == References ==
{ "page_id": 2428263, "source": null, "title": "Ceration" }
Marker assisted selection or marker aided selection (MAS) is an indirect selection process where a trait of interest is selected based on a marker (morphological, biochemical or DNA/RNA variation) linked to that trait (e.g. productivity, disease resistance, abiotic stress tolerance, and quality), rather than on the trait itself. This process has been extensively researched and proposed for plant and animal breeding. For example, using MAS to select individuals with disease resistance involves identifying a marker allele that is linked with disease resistance rather than the level of disease resistance. The assumption is that the marker associates at high frequency with the gene or quantitative trait locus (QTL) of interest, due to genetic linkage (close proximity, on the chromosome, of the marker locus and the disease resistance-determining locus). MAS can be useful for selecting traits that are difficult or expensive to measure, exhibit low heritability and/or are expressed late in development. At certain points in the breeding process the specimens are examined to ensure that they express the desired trait. == Marker types == The majority of MAS work in the present era uses DNA-based markers. However, the first markers that allowed indirect selection of a trait of interest were morphological markers. In 1923, Karl Sax first reported association of a simply inherited genetic marker with a quantitative trait in plants when he observed segregation of seed size associated with segregation for a seed coat color marker in beans (Phaseolus vulgaris L.). In 1935, J. Rasmusson demonstrated linkage of flowering time (a quantitative trait) in peas with a simply inherited gene for flower color. Markers may be: Morphological — These were the first marker loci available that have an obvious impact on the morphology of plants. These markers are often detectable by eye, by simple visual inspection. Examples
{ "page_id": 11406702, "source": null, "title": "Marker-assisted selection" }
of this type of marker include the presence or absence of an awn, leaf sheath coloration, height, grain color, aroma of rice etc. In well-characterized crops like maize, tomato, pea, barley or wheat, tens or hundreds of genes that determine morphological traits have been mapped to specific chromosome locations. Biochemical — A protein that can be extracted and observed; for example, isozymes and storage proteins. Cytological — Cytological markers are chromosomal features that can be identified through microscopy. These generally take the form of chromosome bands, regions of chromatin that become impregnated with specific dyes used in cytology. The presence or absence of a chromosome band can be correlated with a particular trait, indicating that the locus responsible for the trait is located within or near (tightly linked) to the banded region. Morphological and cytological markers formed the backbone of early genetic studies in crops such as wheat and maize. DNA-based — Including microsatellites (also known as short tandem repeats, STRs, or simple sequence repeats, SSRs), restriction fragment length polymorphism (RFLP), random amplification of polymorphic DNA (RAPD), amplified fragment length polymorphism (AFLP), and single nucleotide polymorphisms (SNPs). == Positive and negative selectable markers == The following terms are generally less relevant to discussions of MAS in plant and animal breeding, but are highly relevant in molecular biology research: Positive selectable markers are selectable markers that confer selective advantage to the host organism. An example would be antibiotic resistance, which allows the host organism to survive antibiotic selection. Negative selectable markers are selectable markers that eliminate or inhibit growth of the host organism upon selection. An example would be thymidine kinase, which makes the host sensitive to ganciclovir selection. 
A distinction can be made between selectable markers (which eliminate certain genotypes from the population) and screenable markers (which cause certain genotypes
{ "page_id": 11406702, "source": null, "title": "Marker-assisted selection" }
to be readily identifiable, at which point the experimenter must "score" or evaluate the population and act to retain the preferred genotypes). Most MAS uses screenable markers rather than selectable markers. == Gene vs marker == The gene of interest directly causes production of protein(s) or RNA that produce a desired trait or phenotype, whereas markers (a DNA sequence or the morphological or biochemical markers produced due to that DNA) are genetically linked to the gene of interest. The gene of interest and the marker tend to move together during segregation of gametes due to their proximity on the same chromosome and concomitant reduction in recombination (chromosome crossover events) between the marker and gene of interest. For some traits, the gene of interest has been discovered and the presence of desirable alleles can be directly assayed with a high level of confidence. However, if the gene of interest is not known, markers linked to the gene of interest can still be used to select for individuals with desirable alleles of the gene of interest. When markers are used there may be some inaccurate results due to inaccurate tests for the marker. There also can be false positive results when markers are used, due to recombination between the marker of interest and gene (or QTL). A perfect marker would elicit no false positive results. The term 'perfect marker' is sometimes used when tests are performed to detect a SNP or other DNA polymorphism in the gene of interest, if that SNP or other polymorphism is the direct cause of the trait of interest. The term 'marker' is still appropriate to use when directly assaying the gene of interest, because the test of genotype is an indirect test of the trait or phenotype of interest. == Important properties of ideal markers for
{ "page_id": 11406702, "source": null, "title": "Marker-assisted selection" }
MAS == An ideal marker: Has easy recognition of phenotypes - ideally all possible phenotypes (homo- and heterozygotes) from all possible alleles Demonstrates measurable differences in expression between trait types or gene of interest alleles, early in the development of the organism Testing for the marker does not have variable success depending on the allele at the marker locus or the allele at the target locus (the gene of interest that determines the trait of interest). Low or null interaction among the markers allowing the use of many at the same time in a segregating population Abundant in number Polymorphic == Drawbacks of morphological markers == Morphological markers are associated with several general deficits that reduce their usefulness including: the delay of marker expression until late into the development of the organism allowing dominance to mask the underlying genetics pleiotropy, which does not allow easy and parsimonious inferences to be drawn from one gene to one trait confounding effects of genes unrelated to the gene or trait of interest but which also affect the morphological marker (epistasis) frequent confounding effects of environmental factors which affect the morphological characteristics of the organism To avoid problems specific to morphological markers, DNA-based markers have been developed. They are highly polymorphic, exhibit simple inheritance (often codominant), are abundant throughout the genome, are easy and fast to detect, exhibit minimum pleiotropic effects, and detection is not dependent on the developmental stage of the organism. Numerous markers have been mapped to different chromosomes in several crops including rice, wheat, maize, soybean and several others, and in livestock such as cattle, pigs and chickens. Those markers have been used in diversity analysis, parentage detection, DNA fingerprinting, and prediction of hybrid performance. 
Molecular markers are useful in indirect selection processes, enabling manual selection of individuals for further propagation.
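As a schematic sketch of the indirect selection just described (individual names, alleles, and genotype calls are all invented for illustration), marker-assisted screening reduces to retaining the individuals that carry the marker allele linked to the desired trait:

```python
# Hypothetical genotype calls at a marker locus linked to a disease-
# resistance gene; "R" is the marker allele associated with resistance.
genotypes = {
    "plant_01": ("R", "R"),
    "plant_02": ("R", "s"),
    "plant_03": ("s", "s"),
    "plant_04": ("R", "s"),
}

def carries_marker_allele(genotype, allele="R"):
    """Return True if the individual carries at least one copy of the allele."""
    return allele in genotype

selected = sorted(name for name, g in genotypes.items()
                  if carries_marker_allele(g))
print(selected)  # individuals retained for the next breeding cycle
```

Because the marker is only linked to, not identical with, the causal gene, a small fraction of the individuals selected this way may still be recombinants lacking the desired allele.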
{ "page_id": 11406702, "source": null, "title": "Marker-assisted selection" }
== Selection for major genes linked to markers == 'Major genes' that are responsible for economically important characteristics are frequent in the plant kingdom. Such characteristics include disease resistance, male sterility, self-incompatibility, and others related to shape, color, and architecture of whole plants, and are often mono- or oligogenic in nature. The marker loci that are tightly linked to major genes can be used for selection and are sometimes more efficient than direct selection for the target gene. Such advantages in efficiency may be due, for example, to higher expression of the marker mRNA in cases where the marker is itself a gene. Alternatively, in cases where the target gene of interest differs between two alleles by a difficult-to-detect single nucleotide polymorphism, an external marker (be it another gene or a polymorphism that is easier to detect, such as a short tandem repeat) may present as the most realistic option. == Situations that are favorable for molecular marker selection == There are several indications for the use of molecular markers in the selection of a genetic trait. Situations such as: The selected character is expressed late in plant development, like fruit and flower features or adult characters with a juvenile period (so that it is not necessary to wait for the organism to become fully developed before arrangements can be made for propagation) The expression of the target gene is recessive (so that individuals which are heterozygous positive for the recessive allele can be crossed to produce some homozygous offspring with the desired trait) There are special conditions for expression of the target gene(s), as in the case of breeding for disease and pest resistance (where inoculation with the disease or subjection to pests would otherwise be required). Sometimes inoculation methods are unreliable and sometimes field inoculation
{ "page_id": 11406702, "source": null, "title": "Marker-assisted selection" }
with the pathogen is not even allowed for safety reasons. Moreover, sometimes expression is dependent on environmental conditions. The phenotype is affected by two or more unlinked genes (epistasis). For example, selection for multiple genes which provide resistance against diseases or insect pests for gene pyramiding. The cost of genotyping (for example, the molecular marker assays needed here) is decreasing, thus increasing the attractiveness of MAS as the development of the technology continues. (Additionally, the cost of phenotyping performed by a human is a labor burden, which is higher in developed countries and increasing in developing countries.) == Steps for MAS == Generally, the first step is to map the gene or quantitative trait locus (QTL) of interest using different techniques and then use this information for marker assisted selection. The markers to be used should be close to the gene of interest (<5 recombination units, or cM) in order to ensure that only a minor fraction of the selected individuals will be recombinants. Generally, two markers rather than a single marker are used, in order to reduce the chance of an error due to homologous recombination. For example, if two flanking markers are used at the same time with an interval between them of approximately 20 cM, there is a high probability (~99%) of recovery of the target gene. == QTL mapping techniques == In plants QTL mapping is generally achieved using bi-parental cross populations; a cross between two parents which have a contrasting phenotype for the trait of interest are developed. Commonly used populations are near isogenic lines (NILs), recombinant inbred lines (RILs), doubled haploids (DH), back cross and F2. Linkage between the phenotype and markers which have already been mapped is tested in these populations in order to determine the position of the QTL. Such
{ "page_id": 11406702, "source": null, "title": "Marker-assisted selection" }
techniques are based on linkage and are therefore referred to as "linkage mapping". == Single step MAS and QTL mapping == In contrast to two-step QTL mapping and MAS, a single-step method for breeding typical plant populations has been developed. In such an approach, in the first few breeding cycles, markers linked to the trait of interest are identified by QTL mapping and later the same information is used in the same population. In this approach, a pedigree structure is created from families that are created by crossing a number of parents (in three-way or four-way crosses). Both phenotyping and genotyping are done using molecular markers mapped to the possible location of the QTL of interest. This will identify markers and their favorable alleles. Once these favorable marker alleles are identified, the frequency of such alleles will be increased and the response to marker assisted selection is estimated. Marker allele(s) with a desirable effect will be used further in the next selection cycle or other experiments. == High-throughput genotyping techniques == Recently, high-throughput genotyping techniques have been developed that allow marker-aided screening of many genotypes. This will help breeders in shifting from traditional breeding to marker-aided selection. One example of such automation is the use of DNA isolation robots, capillary electrophoresis and pipetting robots. One recent example of a capillary system is the Applied Biosystems 3130 Genetic Analyzer. This is the latest generation of 4-capillary electrophoresis instruments for low- to medium-throughput laboratories. High-throughput MAS is needed for crop breeding because current techniques are not cost effective. Arrays have been developed for rice by Masouleh et al 2009; wheat by Berard et al 2009, Bernardo et al 2015, and Rasheed et al 2016; legumes by Varshney et al 2016; and various other crops, but all of these also have problems with customization, cost, flexibility, and equipment costs. == Use
{ "page_id": 11406702, "source": null, "title": "Marker-assisted selection" }
of MAS for backcross breeding == A minimum of five or six backcross generations is required to transfer a gene of interest from a donor (which may not be adapted) to a recipient (the recurrent, adapted cultivar). The recovery of the recurrent genotype can be accelerated with the use of molecular markers. If the F1 is heterozygous for the marker locus, individuals with the recurrent parent allele(s) at the marker locus in first or subsequent backcross generations will also carry a chromosome tagged by the marker. == Marker assisted gene pyramiding == Gene pyramiding has been proposed and applied to enhance resistance to disease and insects by selecting for two or more genes at a time. For example, in rice such pyramids have been developed against bacterial blight and blast. The advantage of using markers in this case is that it allows selection for QTL-allele-linked markers that have the same phenotypic effect. MAS has also proved useful for livestock improvement. A coordinated effort to implement wheat (Durum (Triticum turgidum) and common wheat (Triticum aestivum)) marker assisted selection in the U.S. as well as a resource for marker assisted selection exists at the Wheat CAP (Coordinated Agricultural Project) website. == See also == Association mapping Family based QTL mapping Genomics of domestication History of plant breeding Molecular breeding Nested association mapping QTL mapping Selection methods in plant breeding based on mode of reproduction Smart breeding == References == == Further reading == A review of the application of MAS in crop improvement: Collard, Bertrand C. Y.; Mackill, David J. (12 February 2008). "Marker-assisted selection: an approach for precision plant breeding in the twenty-first century". Philosophical Transactions of the Royal Society B: Biological Sciences. 363 (1491): 557–572. doi:10.1098/rstb.2007.2170. ISSN 0962-8436. PMC 2610170. PMID 17715053. Gupta, P. K.; Langridge, Peter; Mir, R. R. (11 December 2009). "Marker-assisted
{ "page_id": 11406702, "source": null, "title": "Marker-assisted selection" }
wheat breeding: present status and future possibilities". Molecular Breeding. 26 (2): 145–161. doi:10.1007/s11032-009-9359-7. ISSN 1380-3743. S2CID 9989382. Moose, Stephen P.; Mumm, Rita H. (1 July 2008). "Molecular Plant Breeding as the Foundation for 21st Century Crop Improvement". Plant Physiology. 147 (3). Oxford University Press (OUP): 969–977. doi:10.1104/pp.108.118232. ISSN 1532-2548. PMC 2442525. PMID 18612074. American Society of Plant Biologists. Plant Breeding and Genomics
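The flanking-marker figure quoted in the steps for MAS (roughly 99% recovery of the target gene with two markers spanning a 20 cM interval) can be reproduced with a back-of-the-envelope sketch, under the simplifying assumptions that map distance in centimorgans approximates the recombination fraction and that there is no crossover interference:

```python
def retention_prob_single(marker_dist_cm):
    """Chance the target allele accompanies a single linked marker.
    Simplification: recombination fraction = map distance / 100."""
    r = marker_dist_cm / 100.0
    return 1.0 - r

def retention_prob_flanking(left_cm, right_cm):
    """Chance the target allele accompanies two flanking markers.
    The allele is lost only via a double recombinant (a crossover on
    BOTH sides), so the failure probability is r1 * r2."""
    r1, r2 = left_cm / 100.0, right_cm / 100.0
    return 1.0 - r1 * r2

print(retention_prob_single(5))         # one marker 5 cM away -> 0.95
print(retention_prob_flanking(10, 10))  # 20 cM flanking interval -> 0.99
```

This is why two flanking markers are preferred over one: a 5% single-marker error rate drops to about 1% when an error requires recombination on both sides of the gene.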
{ "page_id": 11406702, "source": null, "title": "Marker-assisted selection" }
The perineal raphe is a visible line or ridge of tissue on the body that extends from the anus through the perineum to the scrotum (male) or the vulva (female). It is found in both males and females, arises from the fusion of the urogenital folds, and is visible running medially, in an anteroposterior direction, to the anus, where it resolves in a small knot of skin of varying size. In males, this structure continues through the midline of the scrotum (scrotal raphe) and upwards through the posterior midline aspect of the penis (penile raphe). It also exists deeper through the scrotum, where it is called the scrotal septum. It is the result of a fetal developmental phenomenon whereby the scrotum and penis close toward the midline and fuse. == See also == Embryonic and prenatal development of the male reproductive system in humans Frenulum of penis Linea nigra Raphe == Images == == References == This article incorporates text in the public domain from page 1237 of the 20th edition of Gray's Anatomy (1918)
{ "page_id": 6098286, "source": null, "title": "Perineal raphe" }
A contig (from contiguous) is a set of overlapping DNA segments that together represent a consensus region of DNA. In bottom-up sequencing projects, a contig refers to overlapping sequence data (reads); in top-down sequencing projects, contig refers to the overlapping clones that form a physical map of the genome that is used to guide sequencing and assembly. Contigs can thus refer both to overlapping DNA sequences and to overlapping physical segments (fragments) contained in clones depending on the context. == Original definition of contig == In 1980, Staden wrote: In order to make it easier to talk about our data gained by the shotgun method of sequencing we have invented the word "contig". A contig is a set of gel readings that are related to one another by overlap of their sequences. All gel readings belong to one and only one contig, and each contig contains at least one gel reading. The gel readings in a contig can be summed to form a contiguous consensus sequence and the length of this sequence is the length of the contig. == Sequence contigs == A sequence contig is a continuous (not contiguous) sequence resulting from the reassembly of the small DNA fragments generated by bottom-up sequencing strategies. This meaning of contig is consistent with the original definition by Rodger Staden (1979). The bottom-up DNA sequencing strategy involves shearing genomic DNA into many small fragments ("bottom"), sequencing these fragments, reassembling them back into contigs and eventually the entire genome ("up"). Because current technology allows for the direct sequencing of only relatively short DNA fragments (300–1000 nucleotides), genomic DNA must be fragmented into small pieces prior to sequencing. In bottom-up sequencing projects, amplified DNA is sheared randomly into fragments appropriately sized for sequencing. The subsequent sequence reads, which are the data that contain the
{ "page_id": 331121, "source": null, "title": "Contig" }
sequences of the small fragments, are put into a database. The assembly software then searches this database for pairs of overlapping reads. Assembling the reads from such a pair (including, of course, only one copy of the identical sequence) produces a longer contiguous read (contig) of sequenced DNA. By repeating this process many times, at first with the initial short pairs of reads but then using increasingly longer pairs that are the result of previous assembly, the DNA sequence of an entire chromosome can be determined. Today, it is common to use paired-end sequencing technology where both ends of consistently sized longer DNA fragments are sequenced. Here, a contig still refers to any contiguous stretch of sequence data created by read overlap. Because the fragments are of known length, the distance between the two end reads from each fragment is known. This gives additional information about the orientation of contigs constructed from these reads and allows for their assembly into scaffolds in a process called scaffolding. Scaffolds consist of overlapping contigs separated by gaps of known length. The new constraints placed on the orientation of the contigs allows for the placement of highly repeated sequences in the genome. If one end read has a repetitive sequence, as long as its mate pair is located within a contig, its placement is known. The remaining gaps between the contigs in the scaffolds can then be sequenced by a variety of methods, including PCR amplification followed by sequencing (for smaller gaps) and BAC cloning methods followed by sequencing for larger gaps. == BAC contigs == Contig can also refer to the overlapping clones that form a physical map of a chromosome when the top-down or hierarchical sequencing strategy is used. In this sequencing method, a low-resolution map is made prior to sequencing in
{ "page_id": 331121, "source": null, "title": "Contig" }
order to provide a framework to guide the later assembly of the sequence reads of the genome. This map identifies the relative positions and overlap of the clones used for sequencing. Sets of overlapping clones that form a contiguous stretch of DNA are called contigs; the minimum number of clones that form a contig that covers the entire chromosome comprise the tiling path that is used for sequencing. Once a tiling path has been selected, its component BACs are sheared into smaller fragments and sequenced. Contigs therefore provide the framework for hierarchical sequencing. The assembly of a contig map involves several steps. First, DNA is sheared into larger (50–200kb) pieces, which are cloned into BACs or PACs to form a BAC library. Since these clones should cover the entire genome/chromosome, it is theoretically possible to assemble a contig of BACs that covers the entire chromosome. Reality, however, is not always ideal. Gaps often remain, and a scaffold—consisting of contigs and gaps—that covers the map region is often the first result. The gaps between contigs can be closed by various methods outlined below. === Construction of BAC contigs === BAC contigs are constructed by aligning BAC regions of known overlap via a variety of methods. One common strategy is to use sequence-tagged site (STS) content mapping to detect unique DNA sites in common between BACs. The degree of overlap is roughly estimated by the number of STS markers in common between two clones, with more markers in common signifying a greater overlap. Because this strategy provides only a very rough estimate of overlap, restriction digest fragment analysis, which provides a more precise measurement of clone overlap, is often used. In this strategy, clones are treated with one or two restriction enzymes and the resulting fragments separated by gel electrophoresis. If two
{ "page_id": 331121, "source": null, "title": "Contig" }
clones overlap, they will likely have restriction sites in common, and will thus share several fragments. Because the number of fragments in common and the length of these fragments is known (the length is judged by comparison to a size standard), the degree of overlap can be deduced to a high degree of precision. === Gaps between contigs === Gaps often remain after initial BAC contig construction. These gaps occur if the Bacterial Artificial Chromosome (BAC) library screened has low complexity, meaning it does not contain a high number of STS or restriction sites, or if certain regions were less stable in cloning hosts and thus underrepresented in the library. If gaps between contigs remain after STS landmark mapping and restriction fingerprinting have been performed, the sequencing of contig ends can be used to close these gaps. This end-sequencing strategy essentially creates a novel STS with which to screen the other contigs. Alternatively, the end sequence of a contig can be used as a primer to primer walk across the gap. == See also == Staden Package == References == == External links == Definition of the term and historical perspective Staden package of sequence assembly: Definitions and background information
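The read-overlap assembly described in this article can be sketched as a toy greedy merger. The read strings below are invented, and real assemblers must additionally handle sequencing errors, reverse complements, and repeats; this only illustrates the core idea of joining reads whose suffix and prefix match:

```python
def overlap_len(a, b, min_len=3):
    """Length of the longest suffix of a that is also a prefix of b
    (at least min_len), or 0 if none."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def merge_reads(reads, min_len=3):
    """Repeatedly merge the best-overlapping pair of reads until no
    overlap of at least min_len remains; each surviving string is a contig."""
    reads = list(reads)
    while len(reads) > 1:
        best_k, best_i, best_j = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap_len(a, b, min_len)
                    if k > best_k:
                        best_k, best_i, best_j = k, i, j
        if best_k == 0:
            break  # remaining reads do not overlap: several contigs
        combined = reads[best_i] + reads[best_j][best_k:]
        reads = [r for n, r in enumerate(reads) if n not in (best_i, best_j)]
        reads.append(combined)
    return reads

# Three invented "gel readings"; the third bridges the first two.
print(merge_reads(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG"]))
# -> ['ATTAGACCTGCCGGAA']
```

In Staden's terms, the single returned string is the contiguous consensus sequence of a contig built from three overlapping gel readings.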
{ "page_id": 331121, "source": null, "title": "Contig" }
Thomas “Rock” Mackie is a medical physicist. He grew up in Saskatoon and received his undergraduate degree in Physics from the University of Saskatchewan in 1980. He went on to earn his doctorate in Physics at the University of Alberta in 1984. His expertise is in radiation therapy treatment planning and intensity modulated radiation therapy. He is a primary inventor and algorithm designer of the helical tomotherapy concept. Mackie is a professor in the departments of Medical Physics, Human Oncology, Biomedical Engineering and Engineering Physics at the University of Wisconsin–Madison. He has over 150 peer-reviewed publications, over 15 patents, and has been the supervisor for dozens of Ph.D. students. Mackie is a Fellow of the American Association of Physicists in Medicine and a member at large of that organization’s Science Council. He is also the Vice-Chair of the University of Wisconsin–Madison Calibration Laboratory. Mackie serves as President of the John R. Cameron Medical Physics Foundation, a non-profit organization that supports the UW Medical Physics Department, medical physics in the developing world and high school science scholarships in high schools in the Greater Madison region. Mackie is a member of the board of the Wisconsin Biomedical and Medical Device Association. Mackie was a founder of Geometrics Corporation (now owned by Philips Medical Systems), which developed the Pinnacle treatment planning system and still operates its research and development facility in Madison, Wisconsin. He is also a founder and chairman of the board of TomoTherapy, Incorporated, an international company listed on the NASDAQ stock exchange under the symbol TOMO, employing over 700 people based out of Madison, WI, USA. In 2002, Mackie was one of six Wisconsin regional winners of the Ernst & Young Entrepreneur of the Year awards. Mackie is a member and Vice-Chair of the International Commission on Radiation Units and
{ "page_id": 11210100, "source": null, "title": "Thomas Rockwell Mackie" }
Measurements (ICRU). == References == "Madison campus offers a road map for UWM and Milwaukee". Milwaukee Journal Sentinel (Milwaukee, Wisconsin). September 17, 2005. Retrieved 2008-07-30. Dar, Stephanie (April 3, 2008). "TomoTherapy founder shares his steps to business' success. Thomas Rockwell Mackie told faculty members the benefits of entrepreneurship". The Daily Cardinal. Archived from the original on 2008-04-29. Retrieved 2008-07-30. == External links == Mackie's page at University of Wisconsin–Madison Medical Physics
{ "page_id": 11210100, "source": null, "title": "Thomas Rockwell Mackie" }
This timeline describes the major developments, both experimental and theoretical, of: Einstein’s special theory of relativity (SR), its predecessors like the theories of luminiferous aether, its early competitors, i.e.: Ritz’s ballistic theory of light, the models of electromagnetic mass created by Abraham (1902), Lorentz (1904), Bucherer (1904) and Langevin (1904). This list also mentions the origins of standard notation (like c) and terminology (like theory of relativity). == Criteria for inclusion == Theories other than SR are not described here exhaustively, but only to the extent that is directly relevant to SR – i.e. at points when they: anticipated some elements of SR, like Fresnel’s hypothesis of partial aether drag, led to new experiments testing SR, like Stokes’s model of complete aether drag, were disproved or questioned, e.g. by the experiments of Oliver Lodge. For a more detailed timeline of aether theories – e.g. their emergence with the wave theory of light – see a separate article. Also, not all experiments are listed here – repetitions, even with much higher precision than the original, are mentioned only if they influence or challenge the opinions at their time. This was the case with: Michelson and Morley (1886) repeating the experiment of Fizeau (1851), contradicting Michelson’s interpretation of his 1881 experiment; Michelson–Morley (1887), more conclusive than the original experiment by Michelson (1881) and difficult to reconcile with their experiment of 1886, or other first-order measurements; Kaufmann’s 1906 repetition of his 1902 experiment, because he claimed to contradict the model of Einstein and Lorentz, considered consistent with the data from 1902; Miller (1933) or Marinov (1974), with results different than Michelson–Morley. For lists of repetitions, see the articles of particular experiments. The measurements of speed of light are also mentioned only to the minimum extent, i.e. when they proved for the first time
{ "page_id": 69602684, "source": null, "title": "Timeline of special relativity and the speed of light" }
that c is finite and invariant. Innovations like the use of Foucault's rotating mirror or the Fizeau wheel are not listed here – see the article about speed of light. This timeline also ignores, for reasons of volume and clarity: the long story of spacetime and the concept of time as the fourth dimension; e.g. the ideas of Lagrange and Wells; mathematical innovations that influenced the formalism of SR, e.g. the introduction of fibre bundles; indirect evidence for SR, through the evidence for relativistic theories like general relativity or relativistic quantum mechanics; publication of countless textbooks and popular science books or articles, even very influential classics like Mr Tompkins by George Gamow; the cultural impact of SR, e.g. publication of documentaries or commemorations of SR during the World Year of Physics 2005; new, untested theories modifying SR like Doubly special relativity or Variable speed of light. == Before the 19th century == 1632 – Galileo Galilei writes about the relativity of motion and that some forms of motion are undetectable; this would later be called the relativity principle, essential for special relativity as one of its postulates. 1674 – Robert Hooke makes his observations of the Gamma Draconis star, or γ Draconis for short. He detects a variation in its position in the sky, which would later be identified as stellar aberration. 1676 – Ole Rømer gives the first piece of evidence that the speed of light is finite, through his observation of the moons of Jupiter; the discovery divides scientists of his time. 1690 – Christiaan Huygens gives the first estimate of the speed of light in air or vacuum, based on Rømer’s work. The result is equivalent to about 2×10^8 m/s in modern units, correct only to the order of magnitude. 1727 – James Bradley correctly identifies the
{ "page_id": 69602684, "source": null, "title": "Timeline of special relativity and the speed of light" }
peculiar behaviour of γ Draconis as stellar aberration. Bradley uses this fact to estimate the speed of light in air or vacuum, and his result is more accurate than Huygens’s: about 3.0×10^8 m/s in modern units. For the first time, the measurement is correct to the first two significant figures. == 19th century == === Before 1880s === 1810 – François Arago observes that the speed of light of stars – measured with stellar aberration – may be independent of the relative motion of stars and the Earth; or at least, no differences are observable with the naked eye. 1818 – Augustin-Jean Fresnel proposes his model of partial aether dragging to explain Arago’s finding. 1845 – George Gabriel Stokes creates his own model of complete aether dragging. 1851 – The Fizeau experiment with light in flowing water confirms Fresnel’s model. 1861 – James Clerk Maxwell publishes his equations of the electromagnetic field, which had a great impact on the later works on aether and special relativity. 1868 – Martinus Hoek modifies the experiment of Fizeau, with the same conclusions. 1871 – George Biddell Airy observes the stellar aberration in a telescope filled with water, confirming Fresnel’s model and contradicting Stokes’s. === 1880s === 1881 – Albert Michelson performs his original interferometric experiment. It detects no aether wind, contradicting Fresnel’s model in favour of Stokes’s. 1885 – Ludwig Lange introduces the idea of inertial frame of reference. It is essential to relativity as an element of the modern formulation of the relativity principle. 1886 – Albert Michelson and Edward Morley repeat the Fizeau experiment with higher precision, confirming its result and contradicting the earlier conclusions of Michelson. 1887 – Woldemar Voigt publishes his coordinate transformations preserving the wave equation. They are very similar – but not equivalent – to the later
Lorentz transformations.
* 1887 – The Michelson–Morley experiment fails to detect the aether wind, disproving some aether theories and leading to new ones.
* 1889 – George FitzGerald conjectures length contraction to explain the Michelson–Morley experiment.

=== 1890s ===

* 1892 – Hendrik Lorentz – independently of FitzGerald – proposes the same explanation, with a formula that only approximates the special-relativistic length contraction to first order.
* 1893 – Oliver Lodge performs an interferometric experiment questioning the aether drag hypothesis.
* 1894 – Paul Drude introduces the symbol c for the speed of light in vacuum.
* 1895 – Hendrik Lorentz corrects his 1892 model, proposing a contraction by the Lorentz factor (γ).
* 1895 – Albert Einstein probably makes his thought experiment about chasing a light beam, later relevant to his work on special relativity.
* 1897 – Oliver Lodge publishes another experimental result questioning aether drag.
* 1897 – Joseph Larmor publishes his coordinate transformations extending the length contraction formula. These transformations imply a form of time dilation and were an approximation of the full Lorentz transformations.
* 1898 – Henri Poincaré states that simultaneity is relative.
* 1899 – Hendrik Antoon Lorentz publishes an early version of his coordinate transformations, including the local time.

== 20th century ==

=== 1900s ===

* 1902 – Lord Rayleigh writes that Lorentz's hypothesis of length contraction predicts a form of birefringence and tries to observe it. The null result questions Lorentz's model, but it would later be explained by a combination of length contraction and time dilation.
* 1902 – Max Abraham develops his classical model of the electron. It anticipates some elements of special relativity, such as the non-linear dependence of momentum on velocity – or, in other, more debatable terms, the relativistic mass. However, Abraham's formula differed from those of SR and of Lorentz's theory.
* 1902 – Walter Kaufmann publishes his measurements of
how the electron's momentum – or, in later terms, its relativistic mass – depends on its speed. The results seem to confirm Abraham's model.
* 1903 – Olinto De Pretto presents his aether theory with some form of mass–energy equivalence. It was described by a formula resembling Einstein's E = mc², but with different meanings of the terms.
* 1903 – Frederick Thomas Trouton and H. R. Noble publish the results of their experiment with capacitors, showing no aether drift.
* 1904 – DeWitt Bristol Brace conducts an improved version of Rayleigh's 1902 experiment, again with a null result.
* 1904 – Hendrik Lorentz explains the experimental results of Rayleigh, Brace, Trouton and Noble using his refined coordinate transformations; he also proves that Maxwell's equations are invariant under them. Lorentz also presents his own classical model of the electron, including the length contraction absent from Abraham's work – but still consistent with Kaufmann's data so far.
* 1904 – Alfred Bucherer and Paul Langevin independently publish a model of the electron with mass increasing with speed, in a way different from both Abraham's and Lorentz's theories. This hypothesis was also consistent with Kaufmann's results at that stage.
* 1904 – Henri Poincaré presents the principle of relativity for electromagnetism.
* 1905 – Poincaré introduces the name Lorentz transformations and is the first to present them in the full form that would later appear in Einstein's special relativity proper. Poincaré is also the first to describe the relativistic velocity-addition formula – implicitly in his publication and explicitly in a letter to Lorentz.
* 1905 – Albert Einstein publishes his special theory of relativity, including the mass–energy equivalence that would later be written as E = mc².
* 1906 – Alfred Bucherer introduces the name theory of relativity, based on Max Planck's term relative theory.
* 1906 – Walter Kaufmann