945,488
https://en.wikipedia.org/wiki/Basic%20oxygen%20steelmaking
Basic oxygen steelmaking (BOS, BOP, BOF, or OSM), also known as Linz-Donawitz steelmaking or the oxygen converter process, is a method of primary steelmaking in which carbon-rich molten pig iron is made into steel. Blowing oxygen through molten pig iron lowers the carbon content of the alloy and changes it into low-carbon steel. The process is known as basic because fluxes of calcium oxide or dolomite, which are chemical bases, are added to promote the removal of impurities and protect the lining of the converter.

The process was invented in 1948 by the Swiss engineer Robert Durrer and commercialized in 1952–1953 by the Austrian steelmaking companies VÖEST and ÖAMG. The LD converter, named after the Austrian towns Linz and Donawitz (a district of Leoben), is a refined version of the Bessemer converter, in which the blowing of air is replaced with the blowing of oxygen. It reduced the capital cost of plants and the smelting time, and increased labor productivity. Between 1920 and 2000, labor requirements in the industry decreased by a factor of 1,000, from more than 3 man-hours per metric ton to just 0.003. By 2000 the basic oxygen furnace accounted for 60% of global steel output. Modern furnaces take a charge of iron of up to 400 tons and convert it into steel in less than 40 minutes, compared with 10–12 hours in an open hearth furnace.

History
The basic oxygen process developed outside the traditional "big steel" environment. It was developed and refined by a single man, the Swiss engineer Robert Durrer, and commercialized by two small steel companies in Allied-occupied Austria, which had not yet recovered from the destruction of World War II. In 1856, Henry Bessemer had patented a steelmaking process involving oxygen blowing for decarbonizing molten iron (UK Patent No. 2207). For nearly 100 years commercial quantities of oxygen were not available or were too expensive, and steelmaking used air blowing. During World War II, German (Karl Valerian Schwarz), Belgian (John Miles) and Swiss (Durrer and Heinrich Heilbrugge) engineers proposed their versions of oxygen-blown steelmaking, but only Durrer and Heilbrugge brought it to mass-scale production. In 1943, Durrer, formerly a professor at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin), returned to Switzerland and accepted a seat on the board of Roll AG, the country's largest steel mill. In 1947 he purchased the first small 2.5-ton experimental converter from the US, and on April 3, 1948, the new converter produced its first steel. The new process could conveniently process large amounts of scrap metal with only a small proportion of primary metal necessary. In the summer of 1948, Roll AG and two Austrian state-owned companies, VÖEST and ÖAMG, agreed to commercialize the Durrer process. By June 1949, VÖEST had developed an adaptation of Durrer's process, known as the LD (Linz-Donawitz) process. In December 1949, VÖEST and ÖAMG committed to building their first 30-ton oxygen converters. They were put into operation in November 1952 (VÖEST in Linz) and May 1953 (ÖAMG, Donawitz) and temporarily put the Austrian plants at the leading edge of world steelmaking, causing a surge in steel-related research. Thirty-four thousand businesspeople and engineers had visited the VÖEST converter by 1963. The LD process reduced processing time and capital costs per ton of steel, contributing to the competitive advantage of Austrian steel. VÖEST eventually acquired the rights to market the new technology.
Errors by VÖEST and ÖAMG management in licensing their technology made it impossible to control its adoption in Japan, and by the end of the 1950s the Austrians had lost their competitive edge. In the original LD process, oxygen was blown over the top of the molten iron through the water-cooled nozzle of a vertical lance. In the 1960s, steelmakers introduced bottom-blown converters and developed inert gas blowing for stirring the molten metal and removing phosphorus impurities.

In the Soviet Union, some experimental production of steel using the process was done in 1934, but industrial use was hampered by the lack of efficient technology to produce liquid oxygen. In 1939, the Russian physicist Pyotr Kapitsa perfected the design of the centrifugal turboexpander, and it was put to use in 1942–1944. Most turboexpanders in industrial use since then have been based on Kapitsa's design, and centrifugal turboexpanders have taken over almost 100% of industrial gas liquefaction, in particular the production of liquid oxygen for steelmaking.

The big American steelmakers were late adopters of the new technology. The first oxygen converters in the US were launched at the end of 1954 by McLouth Steel in Trenton, Michigan, which accounted for less than 1% of the national steel market. U.S. Steel and Bethlehem Steel introduced the oxygen process in 1964. By 1970, half of the world's and 80% of Japan's steel output was produced in oxygen converters. In the last quarter of the 20th century, basic oxygen converters were gradually, though only partially, displaced by the electric arc furnace, which uses scrap steel and iron. In Japan the share of the LD process decreased from 80% in 1970 to 70% in 2000; the worldwide share of the basic oxygen process stabilized at 60%.

Process
Basic oxygen steelmaking is a primary steelmaking process for converting molten pig iron into steel by blowing oxygen through a lance over the molten metal inside the converter. Exothermic heat is generated by the oxidation reactions during blowing. The process is as follows:

Molten pig iron (sometimes referred to as "hot metal") from a blast furnace is poured into a large refractory-lined container called a ladle. The metal in the ladle is sent directly for basic oxygen steelmaking or to a pretreatment stage where sulfur, silicon, and phosphorus are removed before charging the hot metal into the converter. In external desulfurizing pretreatment, a lance is lowered into the molten iron in the ladle, several hundred kilograms of powdered magnesium are added, and the sulfur impurities are reduced to magnesium sulfide in a violent exothermic reaction. The sulfide is then raked off. Similar pretreatments are possible for external desiliconisation and external dephosphorisation using mill scale (iron oxide) and lime as fluxes. The decision to pretreat depends on the quality of the hot metal and the required final quality of the steel.

Filling the furnace with the ingredients is called charging. The BOS process is autogenous, i.e. the required thermal energy is produced during the oxidation process, so maintaining the proper charge balance (the ratio of hot metal to cold scrap) is important. The BOS vessel can be tilted up to 360° and is tilted towards the deslagging side for charging scrap and hot metal. The BOS vessel is charged with steel or iron scrap (25–30%) if required, and molten iron from the ladle is added as required for the charge balance.
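Because the process is autogenous, the oxygen blown scales with the impurities to be burned out. As a rough, back-of-the-envelope sketch (the heat size is an assumption, the carbon figures follow the typical chemistries quoted below, and the stoichiometry considers decarburization only, via C + ½O₂ → CO):

```python
# Back-of-the-envelope estimate of oxygen demand for decarburization.
# Assumed figures for a hypothetical heat:
charge_mass_t = 300.0      # total metallic charge, metric tons
c_initial = 0.040          # 4.0% carbon in the hot metal
c_final = 0.004            # 0.4% carbon in the blown steel

MOLAR_MASS_C = 12.011      # g/mol
MOLAR_VOLUME = 22.414      # L/mol of ideal gas at 0 deg C, 1 atm

carbon_removed_g = charge_mass_t * 1e6 * (c_initial - c_final)
mol_c = carbon_removed_g / MOLAR_MASS_C
mol_o2 = mol_c / 2.0                     # C + 1/2 O2 -> CO
o2_nm3 = mol_o2 * MOLAR_VOLUME / 1000.0  # normal cubic meters

print(f"Carbon removed: {carbon_removed_g / 1e6:.1f} t")
print(f"Oxygen for decarburization: {o2_nm3:,.0f} Nm^3 "
      f"({o2_nm3 / charge_mass_t:.0f} Nm^3 per ton of charge)")
```

For a 300-ton charge this gives roughly 10,000 Nm³ of oxygen, about 34 Nm³ per ton, from decarburization alone; oxidation of silicon, manganese, phosphorus, and some iron raises the real demand further, consistent with the roughly 20-minute blowing cycle described below.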
A typical chemistry of hot metal charged into the BOS vessel is 4% C, 0.2–0.8% Si, 0.08–0.18% P, and 0.01–0.04% S, all of which can be oxidised by the supplied oxygen except sulfur (which requires reducing conditions). The vessel is then set upright, a water-cooled, copper-tipped lance with 3–7 nozzles is lowered into it to within a few feet of the surface of the bath, and high-purity oxygen is introduced at supersonic speed. The lance "blows" 99% pure oxygen over the hot metal, igniting the carbon dissolved in the steel to form carbon monoxide and carbon dioxide and causing the temperature to rise to about 1700 °C. This melts the scrap, lowers the carbon content of the molten iron and helps remove unwanted chemical elements. It is this use of pure oxygen (instead of air) that improves upon the Bessemer process: the nitrogen (an undesirable element) and other gases in air do not react with the charge, and in an air-blown converter they decrease the efficiency of the furnace.

Fluxes (calcium oxide or dolomite) are fed into the vessel to form slag and to maintain the basicity of the slag (the ratio of calcium oxide to silicon oxide) at a level that minimises refractory wear and absorbs impurities during the steelmaking process. During "blowing", churning of metal and fluxes in the vessel forms an emulsion that facilitates the refining process. Near the end of the blowing cycle, which takes about 20 minutes, the temperature is measured and samples are taken. A typical chemistry of the blown metal is 0.3–0.9% C, 0.05–0.1% Mn, 0.001–0.003% Si, 0.01–0.03% S and 0.005–0.03% P.

The BOS vessel is then tilted towards the slagging side and the steel is poured through a tap hole into a steel ladle with a basic refractory lining; this is called tapping the steel. The steel is further refined in the ladle furnace by adding alloying materials that impart the special properties required by the customer. Sometimes argon or nitrogen is bubbled into the ladle to make the alloys mix correctly. After the steel is poured off from the BOS vessel, the slag is poured into slag pots through the BOS vessel mouth and dumped.

Variants
Earlier converters, with a false bottom that can be detached and repaired, are still in use. Modern converters have a fixed bottom with plugs for argon purging. The energy optimization furnace (EOF) is a BOF variant coupled to a scrap preheater, located above the furnace roof, in which the sensible heat of the off-gas is used to preheat the scrap. The lance used for blowing has also evolved: slagless lances, with a long tapering copper tip, are employed to avoid jamming of the lance during blowing, and post-combustion lance tips burn the CO generated during blowing into CO2, providing additional heat. For slag-free tapping, darts, refractory balls, and slag detectors are employed. Modern converters are fully automated, with automatic blowing patterns and sophisticated control systems.

See also: AJAX furnace, a transitional oxygen-based open hearth technology.
Basic oxygen steelmaking
[ "Chemistry", "Materials_science" ]
2,427
[ "Metallurgical processes", "Steelmaking", "Metallurgy" ]
945,530
https://en.wikipedia.org/wiki/Pulse-forming%20network
A pulse-forming network (PFN) is an electric circuit that accumulates electrical energy over a comparatively long time, and then releases the stored energy in the form of a relatively square pulse of comparatively brief duration for various pulsed power applications. In a PFN, energy-storage components such as capacitors, inductors or transmission lines are charged by means of a high-voltage power source, then rapidly discharged into a load through a high-voltage switch, such as a spark gap or hydrogen thyratron. Repetition rates range from single pulses to about 10⁴ per second. PFNs are used to produce uniform electrical pulses of short duration to power devices such as klystron or magnetron tube oscillators in radar sets, pulsed lasers, particle accelerators, flashtubes, and high-voltage utility test equipment. Much high-energy research equipment is operated in a pulsed mode, both to keep heat dissipation down and because high-energy physics often occurs at short time scales, so large PFNs are widely used in high-energy research. They have been used to produce nanosecond-length pulses with voltages of up to 10⁶–10⁷ volts and currents up to 10⁶ amperes, with peak power in the terawatt range, similar to lightning bolts.

Implementation
A PFN consists of a series of high-voltage energy-storage capacitors and inductors. These components are interconnected as a "ladder network" that behaves similarly to a length of transmission line; for this reason, a PFN is sometimes called an "artificial, or synthetic, transmission line". Electrical energy is initially stored within the charged capacitors of the PFN by a high-voltage DC power supply. When the PFN is discharged, the capacitors discharge in sequence, producing an approximately rectangular pulse. The pulse is conducted to the load through a transmission line. The PFN must be impedance-matched to the load to prevent energy reflecting back toward the PFN.

Transmission-line PFNs
A length of transmission line can itself be used as a pulse-forming network. This can give substantially flat-topped pulses, at the inconvenience of using a large length of cable. In a simple charged transmission-line pulse generator, a length of transmission line such as a coaxial cable is connected through a switch to a matched load R_L at one end, and at the other end to a DC voltage source V through a resistor R_S, which is large compared to the characteristic impedance Z_0 of the line. When the power supply is connected, it slowly charges up the capacitance of the line through R_S. When the switch is closed, a voltage equal to V/2 is applied to the load, the charge stored in the line begins to discharge through the load with a current of V/2Z_0, and a voltage step travels up the line toward the source. The source end of the line is approximately an open circuit due to the high R_S, so the step is reflected uninverted and travels back down the line toward the load. The result is that a pulse of voltage is applied to the load with a duration equal to 2D/c, where D is the length of the line and c is the propagation velocity of the pulse in the line. The propagation velocity in typical transmission lines is generally more than 50% of the speed of light; for example, in most types of coaxial cable it is approximately 2/3 the speed of light, or 20 cm/ns. High-power PFNs generally use specialized transmission lines consisting of pipes filled with oil or deionized water as a dielectric to handle the high power stress.
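A minimal numerical sketch of the charged-line generator just described (the cable length, charge voltage, and impedance are assumed example values, not figures from the text): it computes the pulse duration 2D/c and the load voltage V/2, and checks that the energy stored in the line's capacitance equals the energy delivered in the pulse.

```python
# Charged transmission-line pulse generator: illustrative numbers.
C_LIGHT = 299_792_458.0          # speed of light, m/s

D = 10.0                         # assumed cable length, m
velocity_factor = 2.0 / 3.0      # typical for coaxial cable
Z0 = 50.0                        # characteristic impedance, ohms
V = 20_000.0                     # assumed DC charge voltage, volts

v = velocity_factor * C_LIGHT    # propagation velocity in the line
t_pulse = 2.0 * D / v            # pulse duration = two transit times
v_load = V / 2.0                 # matched load sees half the charge voltage
i_load = V / (2.0 * Z0)          # load current during the pulse

# Energy check: the capacitance per unit length of a line is C' = 1/(Z0*v),
# so the stored energy 0.5*C'*D*V^2 should equal the pulse energy.
c_per_m = 1.0 / (Z0 * v)
e_stored = 0.5 * c_per_m * D * V**2
e_pulse = (v_load * i_load) * t_pulse

print(f"pulse duration: {t_pulse * 1e9:.1f} ns")
print(f"load pulse: {v_load:.0f} V at {i_load:.0f} A")
print(f"stored {e_stored:.3f} J vs delivered {e_pulse:.3f} J")
```

The two energies agree, which is the sense in which the charged line "forms" the pulse: all of the stored energy comes out in one flat-topped burst of duration 2D/c.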
A disadvantage of the simple charged-line pulse generator is that, because the transmission line must be matched to the load resistance R_L to prevent reflections, the voltage stored on the line is divided equally between the load resistance and the characteristic impedance of the line, so the voltage pulse applied to the load is only one-half the power-supply voltage.

Blumlein transmission line
A transmission-line circuit which circumvents this problem, producing an output pulse equal to the power-supply voltage V, was invented in 1937 by the British engineer Alan Blumlein and is widely used today in PFNs. In the Blumlein generator, the load is connected in series between two equal-length transmission lines, which are charged by a DC power supply at one end (the far line is charged through the impedance of the load). To trigger the pulse, a switch short-circuits the line at the power-supply end, causing a negative voltage step to travel toward the load. Since the characteristic impedance Z_0 of the line is made equal to half the load impedance R_L, the voltage step is half-reflected and half-transmitted, resulting in two symmetrical opposite-polarity voltage steps which propagate away from the load, creating between them a voltage drop of V/2 − (−V/2) = V across the load. The voltage steps reflect from the ends and return, ending the pulse. As in other charged-line generators, the pulse duration is equal to 2D/c, where D is the length of the individual transmission lines. A second advantage of the Blumlein geometry is that the switching device can be grounded, rather than located on the high-voltage side of the transmission line as in the simple charged line, where it complicates the triggering electronics.

Uses of PFNs
Upon command, a high-voltage switch transfers the energy stored within the PFN into the load. When the switch "fires" (closes), the network of capacitors and inductors within the PFN creates an approximately square output pulse of short duration and high power. Sometimes a specially designed pulse transformer is connected between the PFN and the load; this improves the impedance match between the two, and so improves the power-transfer efficiency. A pulse transformer is typically required when driving higher-impedance devices such as klystrons or magnetrons from a PFN. Because the PFN is charged over a relatively long time and then discharged over a very short time, the output pulse may have a peak power of megawatts or even terawatts. The combination of a high-voltage source, PFN, HV switch, and pulse transformer (when required) is sometimes called a "power modulator" or "pulser".

See also: Pulse (signal processing), Pulse generator, Pulsed power, Thyratron, Thyristor, Triggered spark gaps, Marx generator, Crossatron, Pulsed laser, Radar.
Pulse-forming network
[ "Physics" ]
1,512
[ "Power (physics)", "Pulsed power", "Physical quantities" ]
945,656
https://en.wikipedia.org/wiki/Free-electron%20laser
A free-electron laser (FEL) is a fourth-generation light source producing extremely brilliant and short pulses of radiation. An FEL functions much like a laser but employs relativistic electrons as a gain medium instead of using stimulated emission from atomic or molecular excitations. In an FEL, a bunch of electrons passes through a magnetic structure called an undulator or wiggler to generate radiation, which re-interacts with the electrons to make them emit coherently, exponentially increasing its intensity. As electron kinetic energy and undulator parameters can be adapted as desired, free-electron lasers are tunable and can be built for a wider frequency range than any other type of laser, currently ranging in wavelength from microwaves, through terahertz radiation and infrared, to the visible spectrum, ultraviolet, and X-rays. The first free-electron laser was developed by John Madey in 1971 at Stanford University, using technology developed by Hans Motz and his coworkers, who built an undulator at Stanford in 1953 using the wiggler magnetic configuration. Madey used a 43 MeV electron beam and a 5 m long wiggler to amplify a signal.

Beam creation
To create an FEL, an electron gun is used: a beam of electrons is generated by a short laser pulse illuminating a photocathode located inside a microwave cavity and accelerated to almost the speed of light in a device called a photoinjector. The beam is further accelerated to a design energy by a particle accelerator, usually a linear particle accelerator. Then the beam passes through a periodic arrangement of magnets with alternating poles across the beam path, which creates a side-to-side magnetic field. The direction of the beam is called the longitudinal direction, while the direction across the beam path is called transverse. This array of magnets is called an undulator or a wiggler because the Lorentz force of the field forces the electrons in the beam to wiggle transversely, traveling along a sinusoidal path about the axis of the undulator.

The transverse acceleration of the electrons across this path results in the release of photons, which are monochromatic but still incoherent, because the electromagnetic waves from randomly distributed electrons interfere constructively and destructively in time; the resulting radiation power scales only linearly with the number of electrons. Mirrors at each end of the undulator create an optical cavity, causing the radiation to form standing waves; alternatively, an external excitation laser is provided. The radiation becomes sufficiently strong that the transverse electric field of the radiation beam interacts with the transverse electron current created by the sinusoidal wiggling motion, causing some electrons to gain and others to lose energy to the optical field via the ponderomotive force. This energy modulation evolves into electron density (current) modulations with a period of one optical wavelength: the electrons are longitudinally clumped into microbunches, separated by one optical wavelength along the axis. Whereas an undulator alone would cause the electrons to radiate independently (incoherently), the radiation emitted by the bunched electrons is in phase, and the fields add together coherently. The radiation intensity grows, causing additional microbunching of the electrons, which continue to radiate in phase with each other.
This process continues until the electrons are completely microbunched and the radiation reaches a saturated power several orders of magnitude higher than that of the undulator radiation. The wavelength of the radiation emitted can be readily tuned by adjusting the energy of the electron beam or the magnetic-field strength of the undulators.

FELs are relativistic machines. The wavelength of the emitted radiation, \(\lambda_r\), is given by

\[ \lambda_r = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right), \]

or, when the wiggler strength parameter \(K\), discussed below, is small,

\[ \lambda_r \approx \frac{\lambda_u}{2\gamma^2}, \]

where \(\lambda_u\) is the undulator wavelength (the spatial period of the magnetic field) and \(\gamma\) is the relativistic Lorentz factor; the proportionality constant depends on the undulator geometry and is of the order of 1. This formula can be understood as a combination of two relativistic effects. Imagine you are sitting on an electron passing through the undulator. Due to Lorentz contraction the undulator is shortened by a factor \(\gamma\), and the electron experiences a much shorter undulator wavelength \(\lambda_u/\gamma\). However, the radiation emitted at this wavelength is observed in the laboratory frame of reference, and the relativistic Doppler effect brings the second factor of \(\gamma\) to the above formula. In an X-ray FEL the typical undulator wavelength of 1 cm is transformed to X-ray wavelengths on the order of 1 nm by \(\gamma \approx 2000\), i.e. the electrons have to travel with a speed of 0.9999998c.

Wiggler strength parameter K
The dimensionless parameter \(K\) defines the wiggler strength as the relationship between the length of a period and the radius of bend:

\[ K = \frac{\gamma \lambda_u}{2\pi \rho} = \frac{e B \lambda_u}{2\pi m_e c}, \]

where \(\rho\) is the bending radius, \(B\) is the applied magnetic field, \(m_e\) is the electron mass, and \(e\) is the elementary charge. Expressed in practical units, the dimensionless undulator parameter is \(K \approx 0.934 \cdot B\,[\mathrm{T}] \cdot \lambda_u\,[\mathrm{cm}]\).
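A small numerical sketch of the two formulas above (the beam energy and field are assumed example values, chosen to match the 1 cm undulator and γ ≈ 2000 example in the text):

```python
import math

# FEL resonance condition: lambda_r = (lambda_u / (2*gamma**2)) * (1 + K**2 / 2)
M_E_C2_MEV = 0.511            # electron rest energy, MeV

def fel_wavelength(beam_energy_mev, lambda_u_m, k_param):
    """Radiated wavelength from the planar-undulator resonance formula."""
    gamma = beam_energy_mev / M_E_C2_MEV
    return lambda_u_m / (2.0 * gamma**2) * (1.0 + k_param**2 / 2.0)

def undulator_k(b_field_tesla, lambda_u_m):
    """Dimensionless undulator parameter, K ~ 0.934 * B[T] * lambda_u[cm]."""
    return 0.934 * b_field_tesla * (lambda_u_m * 100.0)

# Assumed example: ~1 GeV beam (gamma ~ 2000), 1 cm period, 0.5 T peak field.
lambda_u = 0.01
K = undulator_k(0.5, lambda_u)                    # ~0.47
lam = fel_wavelength(1022.0, lambda_u, K)
print(f"K = {K:.2f}, radiated wavelength = {lam * 1e9:.2f} nm")  # ~1.4 nm
```

Consistent with the text, a centimetre-scale undulator period is compressed to nanometre X-rays for γ near 2000, and tuning either the beam energy (through γ) or the field (through K) shifts the output wavelength.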
Quantum effects
In most cases, the theory of classical electromagnetism adequately accounts for the behavior of free-electron lasers. For sufficiently short wavelengths, quantum effects of electron recoil and shot noise may have to be considered.

Construction
Free-electron lasers require the use of an electron accelerator with its associated shielding, as accelerated electrons can be a radiation hazard if not properly contained. These accelerators are typically powered by klystrons, which require a high-voltage supply. The electron beam must be maintained in a vacuum, which requires the use of numerous vacuum pumps along the beam path. While this equipment is bulky and expensive, free-electron lasers can achieve very high peak powers, and the tunability of FELs makes them highly desirable in many disciplines, including chemistry, structure determination of molecules in biology, medical diagnosis, and nondestructive testing.

Infrared and terahertz FELs
The Fritz Haber Institute in Berlin completed a mid-infrared and terahertz FEL in 2013. At Helmholtz-Zentrum Dresden-Rossendorf, two terahertz and mid-infrared FEL-based sources are in operation. FELBE is a cavity-equipped FEL offering continuous pulsing at a repetition rate of 13 MHz, 1 kHz pulsing by applying a pulse picker, and macrobunch operation with bunch lengths > 100 µs and macrobunch repetition rates ≤ 25 Hz. Pulse duration and pulse energy vary with wavelength and lie in the ranges 1–25 ps and 100 nJ to a few µJ, respectively. The TELBE facility is based on a superradiant undulator offering THz pulses ranging from 0.1 THz to 2.5 THz at repetition rates up to 500 kHz.

X-ray FELs
The lack of mirror materials that can reflect extreme ultraviolet and X-rays means that X-ray free-electron lasers (XFELs) need to work without a resonant cavity. Consequently, in an XFEL the beam is produced by a single pass of radiation through the undulator. This requires that there be enough amplification over a single pass to produce an appropriate beam, so XFELs use long undulator sections that are tens or hundreds of meters long. This allows XFELs to produce the brightest X-ray pulses of any human-made X-ray source. The intensity of the pulses from an X-ray FEL rests on the principle of self-amplified spontaneous emission (SASE), which leads to microbunching. Initially all electrons are distributed evenly and emit only incoherent spontaneous radiation. Through the interaction of this radiation and the electrons' oscillations, they drift into microbunches separated by a distance equal to one radiation wavelength. This interaction drives all electrons to begin emitting coherent radiation: the emitted waves can reinforce one another perfectly, with wave crests and wave troughs optimally superimposed on one another. This results in an exponential increase of emitted radiation power, leading to high beam intensities and laser-like properties. Examples of facilities operating on the SASE FEL principle include the Free-electron LASer in Hamburg (FLASH), the Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory, the European X-ray Free-Electron Laser (EuXFEL) in Hamburg, the SPring-8 Compact SASE Source (SCSS) in Japan, SwissFEL at the Paul Scherrer Institute in Switzerland, SACLA at the RIKEN Harima Institute in Japan, and PAL-XFEL (Pohang Accelerator Laboratory X-ray Free-Electron Laser) in Korea. In 2022, an upgrade to Stanford University's Linac Coherent Light Source (LCLS-II) used temperatures around −271 °C to produce 10⁶ pulses per second of near-light-speed electrons, using superconducting niobium cavities.

Seeding and Self-seeding
One problem with SASE FELs is the lack of temporal coherence due to a noisy startup process. To avoid this, one can "seed" an FEL with a laser tuned to the resonance of the FEL. Such a temporally coherent seed can be produced by more conventional means, such as by high harmonic generation (HHG) using an optical laser pulse. This results in coherent amplification of the input signal; in effect, the output laser quality is characterized by the seed. While HHG seeds are available at wavelengths down to the extreme ultraviolet, seeding is not feasible at X-ray wavelengths due to the lack of conventional X-ray lasers. In late 2010, in Italy, the seeded-FEL source FERMI@Elettra started commissioning at the Trieste Synchrotron Laboratory. FERMI@Elettra is a single-pass FEL user facility covering the wavelength range from 100 nm (12 eV) to 10 nm (124 eV), located next to the third-generation synchrotron radiation facility ELETTRA in Trieste, Italy. In 2001, at Brookhaven National Laboratory, a seeding technique called "High-Gain Harmonic Generation" that works down to X-ray wavelengths was developed. The technique, which can be multiple-staged in an FEL to achieve increasingly shorter wavelengths, utilizes a longitudinal shift of the radiation relative to the electron bunch to avoid the reduced beam quality caused by a previous stage. This longitudinal staging along the beam is called "Fresh-Bunch" staging; it was demonstrated at X-ray wavelengths at the Trieste Synchrotron Laboratory. A similar staging approach, named "Fresh-Slice", was demonstrated at the Paul Scherrer Institut, also at X-ray wavelengths.
In the Fresh-Slice approach, the short X-ray pulse produced at the first stage is moved to a fresh part of the electron bunch by a transverse tilt of the bunch. In 2012, scientists working on the LCLS found an alternative solution to the seeding limitation for X-ray wavelengths by self-seeding the laser with its own beam after filtering it through a diamond monochromator. The resulting intensity and monochromaticity of the beam were unprecedented and allowed new experiments to be conducted involving manipulating atoms and imaging molecules. Other labs around the world are incorporating the technique into their equipment.

Research
Biomedical
Basic research
Researchers have explored X-ray free-electron lasers as an alternative to the synchrotron light sources that have been the workhorses of protein crystallography and cell biology. Exceptionally bright and fast X-rays can image proteins using X-ray crystallography. This technique allows first-time imaging of proteins that do not stack in a way that allows imaging by conventional techniques, about 25% of the total number of proteins. Resolutions of 0.8 nm have been achieved with pulse durations of 30 femtoseconds; to get a clear view, a resolution of 0.1–0.3 nm is required. The short pulse durations allow images of X-ray diffraction patterns to be recorded before the molecules are destroyed. The bright, fast X-rays were produced at the Linac Coherent Light Source at SLAC, which as of 2014 was the world's most powerful X-ray FEL.

Due to the increased repetition rates of next-generation X-ray FEL sources, such as the European XFEL, the number of diffraction patterns is expected to increase substantially. The increase in the number of diffraction patterns will place a large strain on existing analysis methods, so several methods have been researched to sort the huge amount of data that typical X-ray FEL experiments will generate. While the various methods have been shown to be effective, it is clear that several challenges have to be overcome on the way to single-particle X-ray FEL imaging at full repetition rates before the next resolution revolution can be achieved.

New biomarkers for metabolic diseases: by taking advantage of the selectivity and sensitivity of combined infrared ion spectroscopy and mass spectrometry, scientists can provide a structural fingerprint of small molecules in biological samples such as blood or urine. This new and unique methodology is generating exciting new possibilities to better understand metabolic diseases and to develop novel diagnostic and therapeutic strategies.

Surgery
Research by Glenn Edwards and colleagues at Vanderbilt University's FEL Center in 1994 found that soft tissues, including skin, cornea, and brain tissue, could be cut or ablated using infrared FEL wavelengths around 6.45 micrometres with minimal collateral damage to adjacent tissue. This led to surgeries on humans, the first ever using a free-electron laser. Starting in 1999, Copeland and Konrad performed three surgeries in which they resected meningioma brain tumors. Beginning in 2000, Joos and Mawn performed five surgeries that cut a window in the sheath of the optic nerve, to test the efficacy for optic nerve sheath fenestration. These eight surgeries produced results consistent with the standard of care, with the added benefit of minimal collateral damage. A review of FELs for medical uses is given in the 1st edition of Tunable Laser Applications.
Fat removal
Several small, clinical lasers tunable in the 6 to 7 micrometre range, with pulse structure and energy chosen to give minimal collateral damage in soft tissue, have been created. At Vanderbilt, there exists a Raman-shifted system pumped by an Alexandrite laser. Rox Anderson proposed the medical application of the free-electron laser in melting fats without harming the overlying skin. At infrared wavelengths, water in tissue was heated by the laser, but at wavelengths corresponding to 915, 1210 and 1720 nm, subsurface lipids were differentially heated more strongly than water. The possible applications of this selective photothermolysis (heating tissues using light) include the selective destruction of sebum lipids to treat acne, as well as targeting other lipids associated with cellulite and body fat, and the fatty plaques that form in arteries, which could help treat atherosclerosis and heart disease.

Military
FEL technology is being evaluated by the US Navy as a candidate for an anti-aircraft and anti-missile directed-energy weapon. The Thomas Jefferson National Accelerator Facility's FEL has demonstrated over 14 kW power output. Compact multi-megawatt-class FEL weapons are undergoing research. On June 9, 2009, the Office of Naval Research announced it had awarded Raytheon a contract to develop a 100 kW experimental FEL. On March 18, 2010, Boeing Directed Energy Systems announced the completion of an initial design for U.S. Naval use. A prototype FEL system was demonstrated, with a full-power prototype scheduled by 2018.

FEL prize winners
The FEL prize is given to a person who has contributed significantly to the advancement of the field of free-electron lasers. It also gives the international FEL community the opportunity to recognize its members for their outstanding achievements. The prize winners are announced at the FEL conference, which currently takes place every two years.

1988 John Madey
1989 William Colson
1990 Todd Smith and Luis Elias
1991 Phillip Sprangle and Nikolai Vinokurov
1992 Robert Phillips
1993 Roger Warren
1994 Alberto Renieri and Giuseppe Dattoli
1995 Richard Pantell and George Bekefi
1996 Charles Brau
1997 Kwang-Je Kim
1998 John Walsh
1999 Claudio Pellegrini
2000 Stephen V. Benson, Eisuke J. Minehara, and George R. Neil
2001 Michel Billardon, Marie-Emmanuelle Couprie, and Jean-Michel Ortega
2002 H. Alan Schwettman and Alexander F.G. van der Meer
2003 Li-Hua Yu
2004 Vladimir Litvinenko and Hiroyuki Hama
2005 Avraham (Avi) Gover
2006 Evgueni Saldin and Jörg Rossbach
2007 Ilan Ben-Zvi and James Rosenzweig
2008 Samuel Krinsky
2009 David Dowell and Paul Emma
2010 Sven Reiche
2011 Tsumoru Shintake
2012 John Galayda
2013 Luca Giannessi and Young Uk Jeong
2014 Zhirong Huang and William Fawley
2015 Mikhail Yurkov and Evgeny Schneidmiller
2017 Bruce Carlsten, Dinh Nguyen, and Richard Sheffield
2019 Enrico Allaria, Gennady Stupakov, and Alex Lumpkin
2022 Brian McNeil and Ying Wu
2024 Toru Hara, Hitoshi Tanaka, and Takashi Tanaka

Young Scientist FEL Award
The Young Scientist FEL Award (or "Young Investigator FEL Prize") honors outstanding contributions to FEL science and technology from a person who is less than 37 years of age at the time of the FEL conference.

2008 Michael Röhrs
2009 Pavel Evtushenko
2010 Guillaume Lambert
2011 Marie Labat
2012 Daniel F. Ratner
2013 Dao Xiang
2014 Erik Hemsing
2015 Agostino Marinelli and Haixiao Deng
2017 Eugenio Ferrari and Eléonore Roussel
2019 Joe Duris and Chao Feng
2022 Zhen Zhang, Jiawei Yan, and Svitozar Serkez
2024 Philipp Dijkstal

See also: Bremsstrahlung, Cyclotron radiation, Electron wake, European X-ray free-electron laser, Gyrotron, International Linear Collider, Laser acronyms, Smith–Purcell effect, Synchrotron radiation.
Free-electron laser
[ "Physics", "Chemistry", "Biology" ]
4,026
[ "Electron", "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Electron beam", "Electromagnetic spectrum", "Medical equipment", "Experimental physics", "Accelerator physics", "Terahertz technology", "Medical technology" ]
945,780
https://en.wikipedia.org/wiki/Bellcrank
A bellcrank is a type of crank that changes motion through an angle. The angle can range from 0 to 360 degrees, but 90-degree and 180-degree bellcranks are most common. The name comes from its first use: changing the vertical pull on a rope to a horizontal pull on the striker of a bell to sound it.

Design
A typical 90-degree bellcrank consists of an L-shaped crank pivoted where the two arms of the L meet. Moving rods or cables are attached to the outer ends of the L. When one is pulled, the L rotates around the pivot point, pulling on the other rod. A typical 180-degree bellcrank consists of a straight bar that pivots at or near its center. When one rod is pulled or pushed, the bar rotates around the pivot point, pulling or pushing on the other rod. Changing the length of the bellcrank's arms changes the mechanical advantage of the system. Many applications do not change the direction of motion but instead amplify a force "in line", which a bellcrank can do in a limited space. There is a tradeoff between range of motion, linearity of motion, and size: the greater the angle traversed by the crank, the more the motion ratio changes and the more non-linear the motion becomes (a numerical sketch of this tradeoff appears at the end of this article).

Applications
Aircraft
Bellcranks are often used in aircraft flight control systems to connect the pilot's controls to the control surfaces. For example, on light aircraft, the rudder often has a bellcrank (also called a control horn) whose pivot point is the rudder hinge. A cable connects one of the pilot's rudder pedals to one side of the bellcrank. When the pilot pushes the rudder pedal, the cable pulls the bellcrank, causing the rudder to rotate. The opposite rudder pedal is connected to the other end of the bellcrank to rotate the rudder in the opposite direction.

Architectural
Bellcrank mechanisms were installed at the top of entryway stairs in multi-unit Victorian and Edwardian homes (up to about 1930), particularly in the San Francisco Bay Area, to allow residents to open and close the doors remotely so they would not need to walk down the stairs to welcome guests.

Automotive
Bellcranks are also seen in automotive applications, such as in the linkage connecting the throttle pedal to the carburetor or connecting the brake pedal to the master cylinder. In vehicle suspensions, bellcranks are used in pullrod and pushrod suspensions in cars or in the Christie suspension in tanks. More vertical suspension designs such as MacPherson struts may not be feasible in some vehicle designs due to space, aerodynamic, or other design constraints; bellcranks translate the vertical motion of the wheel into horizontal motion, allowing the suspension to be mounted transversely or longitudinally within the vehicle.

Bicycles
Bellcranks are used in some internally geared hub assemblies to select gears. The motion from a Bowden cable is translated by a bellcrank to a push rod, which selects which portion of the epicyclic gears is driven by the bicycle's rear sprocket.
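To make the Design section's tradeoff concrete, here is a minimal sketch (arm lengths, mounting offset, and angles are assumed example values, and the rods are idealized as long links acting along fixed directions): it computes the instantaneous motion ratio of a 90-degree bellcrank as the crank rotates, showing how the ratio drifts as the traversed angle grows.

```python
import math

# The instantaneous leverage of a crank arm drops as the arm rotates away
# from perpendicular to its rod: travel along the rod per radian = r*cos(theta).
# Assumed example: input arm 100 mm, output arm 50 mm, with the output arm
# mounted 20 degrees away from perpendicular at rest (a packaging compromise).
r_in, r_out = 100.0, 50.0
offset = math.radians(20.0)

print("crank angle -> motion ratio (input travel / output travel)")
for deg in (0, 10, 20, 30, 45, 60):
    theta = math.radians(deg)
    ratio = (r_in * math.cos(theta)) / (r_out * math.cos(theta - offset))
    print(f"{deg:3d} deg: {ratio:.2f}")
```

Under these assumptions the ratio wanders around the ideal r_in/r_out = 2 and drifts increasingly as travel grows, which is the linearity-versus-range tradeoff noted in the Design section.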
Bellcrank
[ "Physics", "Engineering" ]
679
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
946,319
https://en.wikipedia.org/wiki/Orpiment
Orpiment, also known as "yellow arsenic blende", is a deep-colored, orange-yellow arsenic sulfide mineral with formula As2S3. It is found in volcanic fumaroles, low-temperature hydrothermal veins, and hot springs, and may be formed through sublimation. Orpiment takes its name from the Latin auripigmentum (aurum, "gold" + pigmentum, "pigment"), due to its deep-yellow color. Orpiment once was widely used in artworks, medicine, and other applications; because of its toxicity and instability, its usage has declined.

Etymology
The Latin auripigmentum (aurum, "gold" + pigmentum, "pigment") referred both to the mineral's deep-yellow color and to the historical belief that it contained gold. The Latin term was used by Pliny in the first century AD. The Greek for orpiment was arsenikon, deriving from the Greek word arsenikos, meaning "male", from the belief that metals were of different sexes; this term was used by Theophrastus in the fourth century BC. The Chinese term for orpiment is Ci-Huang (in Pinyin), meaning "female yellow". The Persian for orpiment is zarnikh, deriving from zar, the Persian word for gold.

Physical and optical properties
Orpiment is a common monoclinic arsenic sulfide mineral. It has a Mohs hardness of 1.5 to 2 and a specific gravity of 3.49. Optically, it is biaxial (−) with refractive indices of nα = 2.4, nβ = 2.81, and nγ = 3.02.

Visual characteristics
Orpiment is a lemon-yellow to golden- or brownish-yellow crystal commonly found in foliated columnar or fibrous aggregates; it may alternatively be botryoidal or reniform, granular or powdery, and, rarely, occurs as prismatic crystals. Used as a pigment, orpiment's color is often described as lemon- or canary-yellow, and occasionally as golden- or brownish-yellow. In the Munsell color system, "orpiment" is designated "brilliant yellow", Munsell notation 4.4Y 8.7/8.9.

Orpiment and realgar
Orpiment and realgar are closely related minerals and are often categorized in the same group. Both are arsenic sulfides, belong to the monoclinic crystal system, are found in the same deposits, and can form in the same geologic environments. As a result, orpiment and realgar share similar physical properties and histories of use by humans. In Chinese, the names for orpiment and realgar are Ci-Huang and Xiong-Huang, respectively meaning "female yellow" and "male yellow"; the names symbolize their close natural conjunction, both physically, in terms of their occurrence and properties, and culturally in Chinese traditions. Orpiment and realgar can be distinguished by their visual characteristics: while orpiment typically has a vibrant golden-yellow color, realgar normally has an orange or reddish hue.

Permanence and conservation
Yellow orpiment (As2S3) degrades into arsenic oxides. Because of their solubility in water, arsenic oxides readily migrate into the surrounding environment. In painted works using orpiment, migrating, degraded arsenic oxides are often detectable throughout the multi-layered paint system. This widespread arsenic migration has consequences for the conservation of orpiment as a pigment in works of art. Orpiment is also sensitive to light exposure, decaying into a friable white arsenic trioxide over time. Similarly, on ancient, orpiment-coated manuscript paper in Nepal, orpiment used to deter insects has often turned white over time.
Because of orpiment's solubility and instability as a pigment, preventing its degradation may need to be prioritized in art conservation. Proper conservation methods should minimize exposure to strong light, emphasize humidity control, and avoid the use of water-based cleaning agents.

Use by artists
Orpiment has historically been used in artworks in many locales in the Eastern Hemisphere. It was one of the few clear, bright-yellow pigments available to artists until the 19th century.

Historical and regional use of orpiment
In Egypt, lumps of orpiment pigment have been found in a fourteenth-century BC tomb. In China, orpiment is known to have been used to color Chinese lacquer, despite no written sources mentioning this. Orpiment has also been identified on Central Asian wall paintings from the sixth to the thirteenth centuries. In a traditional Thai painting technique, still in use today, yellow ink for writing and drawing on black paper manuscripts is made using orpiment. Medieval European artists imported orpiment from Asia Minor. Orpiment has been identified on Norwegian wooden altar frontals, polychrome sculptures, and folk art objects, including a crucifix. It was also used in twelfth- to sixteenth-century Eastern Orthodox icons from Bulgaria, Russia, and the former Yugoslavia. In Venice, records show that orpiment was purchased for a Romanian prince in 1600. European use of orpiment was uncommon until the nineteenth century, during which it saw use as a pigment in Impressionist paintings.

Orpiment as a pigment
In the medieval Norwegian church of Tingelstad, orpiment was used in painting the altar frontal. Orpiment was commonly combined with indigo dye to make a dark, rich green. In the Wilton Diptych (c. 1395–99), this green pigment was used in egg tempera on the left panel for the green cloak of Edmund the Martyr. Renaissance artists such as Raphael also used orpiment as a yellow pigment: in Raphael's Sistine Madonna of 1513–14, orpiment is used to achieve yellow on the clothing of the figures and in the background. Tintoretto's Portrait of Vincenzo Morosini, from about 1575–80, uses the pigment in its details, to replicate the gold embroidery on Morosini's stole and to highlight the fur of the spotted ferret on his chest.

Limitations
Orpiment's extreme toxicity and its incompatibility with other common pigments, including lead- and copper-based substances such as verdigris and azurite, meant that its use as a pigment ended when cadmium yellows, chromium yellows and organic aniline dye-based colors were introduced during the 19th century.

Other historical uses
Orpiment was traded in the Roman Empire and was used as a medicine in China, even though it is very toxic. It has been used as fly poison and to tip arrows with poison. Because of its striking color, it was of interest to alchemists, both in China and Europe, searching for a way to make gold. It also has been found in the wall decorations of Tutankhamun's tomb and on ancient Egyptian scrolls, and on the walls of the Taj Mahal. For centuries, orpiment was ground down and used as a pigment in painting and for sealing wax, and was even used in ancient China as a correction fluid. Orpiment is mentioned in the 17th century by Robert Hooke in Micrographia in connection with the manufacture of small shot.
Scientists like Richard Adolf Zsigmondy and Hermann Ambronn puzzled jointly over the amorphous form of As2S3, "orpiment glass", as early as 1904.

Industry uses
Orpiment is used in the production of infrared-transmitting glass, oil cloth, linoleum, semiconductors, photoconductors, pigments, and fireworks. Mixed with two parts of slaked lime (calcium hydroxide), orpiment is still commonly used in rural India as a depilatory, and it is used in the tanning industry to remove hair from hides. Orpiment has also been used as bookends; in 2023, the UK Office for Product Safety and Standards recalled 40 pieces sold by TK Maxx between June and October 2022, due to the mineral's toxicity.

See also: List of inorganic pigments.
Orpiment
[ "Chemistry" ]
1,806
[ "Inorganic pigments", "Alchemical substances", "Inorganic compounds" ]
946,426
https://en.wikipedia.org/wiki/Amplitude-shift%20keying
Amplitude-shift keying (ASK) is a form of amplitude modulation that represents digital data as variations in the amplitude of a carrier wave. In an ASK system, a symbol, representing one or more bits, is sent by transmitting a fixed-amplitude carrier wave at a fixed frequency for a specific time duration. For example, if each symbol represents a single bit, then the carrier signal could be transmitted at nominal amplitude when the input value is 1, but transmitted at reduced amplitude or not at all when the input value is 0.

Method
Any digital modulation scheme uses a finite number of distinct signals to represent digital data. ASK uses a finite number of amplitudes, each assigned a unique pattern of binary digits. Usually, each amplitude encodes an equal number of bits. Each pattern of bits forms the symbol that is represented by the particular amplitude. The demodulator, which is designed specifically for the symbol-set used by the modulator, determines the amplitude of the received signal and maps it back to the symbol it represents, thus recovering the original data. The frequency and phase of the carrier are kept constant.

Like AM, ASK is linear and sensitive to atmospheric noise, distortions, propagation conditions on different routes in the PSTN, etc. Both ASK modulation and demodulation processes are relatively inexpensive. The ASK technique is also commonly used to transmit digital data over optical fiber. For LED transmitters, binary 1 is represented by a short pulse of light and binary 0 by the absence of light. Laser transmitters normally have a fixed "bias" current that causes the device to emit a low light level; this low level represents binary 0, while a higher-amplitude lightwave represents binary 1.

The simplest and most common form of ASK operates as a switch, using the presence of a carrier wave to indicate a binary one and its absence to indicate a binary zero. This type of modulation is called on-off keying (OOK), and is used at radio frequencies to transmit Morse code (referred to as continuous wave operation). More sophisticated encoding schemes have been developed which represent data in groups using additional amplitude levels. For instance, a four-level encoding scheme can represent two bits with each shift in amplitude; an eight-level scheme can represent three bits; and so on. These forms of amplitude-shift keying require a high signal-to-noise ratio for their recovery, as by their nature much of the signal is transmitted at reduced power.

An ASK system can be divided into three blocks: the first represents the transmitter, the second is a linear model of the effects of the channel, and the third shows the structure of the receiver. The following notation is used:

h_t(t) is the impulse response of the transmit (carrier-shaping) filter
h_c(t) is the impulse response of the channel
n(t) is the noise introduced by the channel
h_r(t) is the filter at the receiver
L is the number of levels that are used for transmission
T_s is the time between the generation of two symbols

Different symbols are represented with different voltages. If the maximum allowed value for the voltage is A, then all the possible values are in the range [−A, A] and they are given by

\[ v_i = \frac{2A}{L-1}\, i - A, \qquad i = 0, 1, \dots, L-1; \]

the difference between one voltage and the next is

\[ \Delta = \frac{2A}{L-1}. \]
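A short sketch of the level construction just described (the parameter values are arbitrary examples; the last lines use the error-probability expression derived in the section below):

```python
import math

# L-level ASK: evenly spaced voltages in [-A, A].
A = 1.0          # maximum allowed voltage (example value)
L = 4            # number of levels (example value)

levels = [2 * A * i / (L - 1) - A for i in range(L)]
delta = 2 * A / (L - 1)
print("levels:", levels)          # [-1.0, -0.333..., 0.333..., 1.0]
print("spacing Delta:", delta)    # 2A/(L-1) = 0.666...

# Symbol error probability with no intersymbol interference, using the
# formula derived below: Pe = (1 - 1/L) * erfc(A*g0 / (sqrt(2)*(L-1)*sigma)).
g0 = 1.0         # overall gain g(0) of the transmit/channel/receive cascade
sigma = 0.15     # assumed noise standard deviation after the receive filter
pe = (1 - 1 / L) * math.erfc(A * g0 / (math.sqrt(2) * (L - 1) * sigma))
print(f"symbol error probability: {pe:.2e}")
```

For these example numbers the four levels sit at ±1 and ±1/3, and the error probability falls rapidly as A grows or the noise shrinks, matching the qualitative conclusions drawn below.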
In this model, the symbols v[n] are generated randomly by the source S; an impulse generator then creates impulses with area v[n], which are sent to the filter h_t for transmission through the channel. In other words, for each symbol a different carrier wave is sent with the corresponding amplitude. Out of the transmitter, the signal s(t) can be expressed in the form

\[ s(t) = \sum_{n=-\infty}^{+\infty} v[n] \, h_t(t - n T_s). \]

In the receiver, after filtering through h_r(t), the signal is

\[ z(t) = n_r(t) + \sum_{n=-\infty}^{+\infty} v[n] \, g(t - n T_s), \]

where we use the notation

\[ n_r(t) = n(t) * h_r(t), \qquad g(t) = h_t(t) * h_c(t) * h_r(t), \]

where * indicates the convolution between two signals. After the A/D conversion, the signal z[k] = z(kT_s) can be expressed in the form

\[ z[k] = n_r[k] + v[k]\, g(0) + \sum_{n \neq k} v[n]\, g[k-n]. \]

In this relationship, the second term represents the symbol to be extracted. The others are unwanted: the first is the effect of noise, the third is due to intersymbol interference. If the filters are chosen so that g(t) satisfies the Nyquist ISI criterion, then there will be no intersymbol interference and the value of the sum will be zero, so

\[ z[k] = n_r[k] + v[k]\, g(0), \]

and the transmission will be affected only by noise.

Probability of error
The probability density function of having an error of a given size can be modelled by a Gaussian function; the mean value will be the particular sent value, and the variance will be given by

\[ \sigma_N^2 = \int_{-\infty}^{+\infty} \Phi_N(f)\, |H_r(f)|^2 \, df, \]

where \(\Phi_N(f)\) is the spectral density of the noise within the band and \(H_r(f)\) is the continuous Fourier transform of the impulse response of the filter h_r(t). The probability of making an error is given by

\[ P_e = P_{e|v_0}\, P(v_0) + P_{e|v_1}\, P(v_1) + \dots + P_{e|v_{L-1}}\, P(v_{L-1}), \]

where, for example, \(P_{e|v_0}\) is the conditional probability of making an error given that the symbol v_0 has been sent, and \(P(v_0)\) is the probability of sending the symbol v_0. If the probability of sending any symbol is the same, then

\[ P(v_i) = \frac{1}{L}. \]

If we represent all the probability density functions on the same plot against the possible values of the voltage to be transmitted, the probability of making an error after a single symbol has been sent is the area of the Gaussian function falling under the probability density functions of the other symbols. The two outermost symbols can only be mistaken in one direction, while each inner symbol can be mistaken in both; so if we call \(P^+\) the area under one tail of the Gaussian, the sum of all these areas is \(2LP^+ - 2P^+\), and the total probability of making an error can be expressed in the form

\[ P_e = 2\left(1 - \frac{1}{L}\right) P^+. \]

We now have to calculate the value of \(P^+\). In order to do that, we can move the origin of the reference wherever we want: the area below the function will not change. It does not matter which Gaussian function we are considering; the area we want to calculate is the same, and it is given by the following integral:

\[ P^+ = \frac{1}{\sqrt{2\pi}\,\sigma_N} \int_{\frac{A g(0)}{L-1}}^{+\infty} e^{-\frac{x^2}{2\sigma_N^2}}\, dx = \frac{1}{2} \operatorname{erfc}\!\left(\frac{A\, g(0)}{\sqrt{2}\,(L-1)\,\sigma_N}\right), \]

where erfc is the complementary error function. Putting all these results together, the probability of making an error is

\[ P_e = \left(1 - \frac{1}{L}\right) \operatorname{erfc}\!\left(\frac{A\, g(0)}{\sqrt{2}\,(L-1)\,\sigma_N}\right). \]

From this formula we can easily see that the probability of making an error decreases if the maximum amplitude of the transmitted signal, or the amplification of the system, becomes greater; on the other hand, it increases if the number of levels or the power of the noise becomes greater. This relationship is valid when there is no intersymbol interference, i.e. when g(t) is a Nyquist function.

See also: Frequency-shift keying (FSK).
Amplitude-shift keying
[ "Mathematics", "Engineering" ]
1,365
[ "Applied mathematics", "Reliability engineering", "Applied probability", "Fault tolerance" ]
946,666
https://en.wikipedia.org/wiki/Rotary%20evaporator
A rotary evaporator (rotovap) is a device used in chemical laboratories for the efficient and gentle removal of solvents from samples by evaporation. When referenced in the chemistry research literature, use of this technique and equipment may be described with the phrase "rotary evaporator", though it is often signaled by other language instead (e.g., "the sample was evaporated under reduced pressure"). Rotary evaporators are also used in molecular cooking for the preparation of distillates and extracts.

A simple rotary evaporator system was invented by Lyman C. Craig and first commercialized by the Swiss company Büchi in 1957. The device separates substances with different boiling points, and greatly simplifies work in chemistry laboratories. In research the most common size accommodates round-bottom flasks of a few liters, whereas large-scale (e.g., 20–50 L) versions are used in pilot plants in commercial chemical operations.

Design
The main components of a rotary evaporator are:

A motor unit that rotates the evaporation flask or vial containing the user's sample.
A vapor duct that is the axis for sample rotation and a vacuum-tight conduit for the vapor being drawn off the sample.
A vacuum system, to substantially reduce the pressure within the evaporator system.
A heated fluid bath (generally water) to heat the sample.
A condenser with either a coil passing coolant, or a "cold finger" into which coolant mixtures such as dry ice and acetone are placed.
A condensate-collecting flask at the bottom of the condenser, to catch the distilling solvent after it re-condenses.
A mechanical or motorized mechanism to quickly lift the evaporation flask from the heating bath.

The vacuum system used with rotary evaporators can be as simple as a water aspirator with a trap immersed in a cold bath (for non-toxic solvents), or as complex as a regulated mechanical vacuum pump with a refrigerated trap. Glassware used in the vapor stream and condenser can be simple or complex, depending upon the goals of the evaporation and any propensities of the dissolved compounds (e.g., to foam or "bump"). Commercial instruments are available that include the basic features, and various traps are manufactured to insert between the evaporation flask and the vapor duct. Modern equipment often adds features such as digital control of vacuum, digital display of temperature and rotational speed, and vapor temperature sensing.

Theory
Vacuum evaporators as a class function because lowering the pressure above a bulk liquid lowers the boiling points of the component liquids in it. Generally, the component liquids of interest in applications of rotary evaporation are research solvents that one desires to remove from a sample after an extraction, such as following a natural product isolation or a step in an organic synthesis. Liquid solvents can be removed without excessive heating of what are often complex and sensitive solvent-solute combinations. Rotary evaporation is most often and conveniently applied to separate "low boiling" solvents such as n-hexane or ethyl acetate from compounds which are solid at room temperature and pressure. However, careful application also allows removal of a solvent from a sample containing a liquid compound, if there is minimal co-evaporation (azeotropic behavior) and a sufficient difference in boiling points at the chosen temperature and reduced pressure.
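The effect of reduced pressure on boiling point can be estimated with the Clausius–Clapeyron relation. A minimal sketch (assuming a constant enthalpy of vaporization, which is only an approximation over wide pressure ranges):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def boiling_point_at(p_torr, t_boil_1atm_k, dh_vap_j_mol):
    """Estimate the boiling point at reduced pressure via Clausius-Clapeyron,
    assuming the enthalpy of vaporization is constant over the range."""
    inv_t = 1.0 / t_boil_1atm_k - R * math.log(p_torr / 760.0) / dh_vap_j_mol
    return 1.0 / inv_t

# Water: normal boiling point 373.15 K, enthalpy of vaporization ~40.7 kJ/mol.
for p in (760, 100, 40, 20):
    t = boiling_point_at(p, 373.15, 40_700.0)
    print(f"{p:4d} torr: water boils at about {t - 273.15:.0f} deg C")
```

This reproduces the familiar rule that a water-aspirator vacuum of a few tens of torr brings water's boiling point down near room temperature; the same estimate applied to high-boiling solvents such as DMF or DMSO shows why only a few torr suffice to bring them below 50 °C, as noted below.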
Solvents with higher boiling points such as water (100 °C at standard atmospheric pressure, 760 torr or about 1 bar), dimethylformamide (DMF, 153 °C at the same), or dimethyl sulfoxide (DMSO, 189 °C at the same), can also be evaporated if the unit's vacuum system is capable of sufficiently low pressure. (For instance, both DMF and DMSO will boil below 50 °C if the vacuum is reduced from 760 torr to 5 torr [from about 1 bar to 6.6 mbar].) However, more recent developments are often applied in these cases (e.g., evaporation while centrifuging or vortexing at high speeds). Rotary evaporation for high-boiling hydrogen bond-forming solvents such as water is often a last recourse, as other evaporation methods or freeze-drying (lyophilization) are available. This is partly because in such solvents the tendency to "bump" is accentuated. The modern centrifugal evaporation technologies are particularly useful when one has many samples to process in parallel, as in the medium- to high-throughput synthesis now expanding in industry and academia. Evaporation under vacuum can also, in principle, be performed using standard organic distillation glassware (i.e., without rotation of the sample). The key advantages of a rotary evaporator are that the centrifugal force and the frictional force between the wall of the rotating flask and the liquid sample result in the formation of a thin film of warm solvent spread over a large surface, and that the forces created by the rotation suppress bumping. The combination of these characteristics and the conveniences built into modern rotary evaporators allow for quick, gentle evaporation of solvents from most samples, even in the hands of relatively inexperienced users. Solvent remaining after rotary evaporation can be removed by exposing the sample to even deeper vacuum, on a more tightly sealed vacuum system, at ambient or higher temperature (e.g., on a Schlenk line or in a vacuum oven). A key disadvantage of rotary evaporation, besides its single-sample nature, is the potential of some sample types to bump, e.g. ethanol and water, which can result in loss of a portion of the material intended to be retained. Even professionals experience periodic mishaps during evaporation, especially bumping, though experienced users become aware of the propensity of some mixtures to bump or foam, and apply precautions that help to avoid most such events. In particular, bumping can often be prevented by taking homogeneous phases into the evaporation, by carefully regulating the strength of the vacuum (or the bath temperature) to provide for an even rate of evaporation, or, in rare cases, through use of added agents such as boiling chips (to make the nucleation step of evaporation more uniform). Rotary evaporators can also be equipped with further special traps and condenser arrays that are best suited to particular difficult sample types, including those with the tendency to foam or bump. Safety Possible hazards include implosions resulting from use of glassware that contains flaws, such as star-cracks. Explosions may occur from concentrating unstable impurities during evaporation, for example when rotavapping an ethereal solution containing peroxides. This can also occur when taking certain unstable compounds, such as organic azides and acetylides, nitro-containing compounds, molecules with strain energy, etc., to dryness. Users of rotary evaporation equipment must take precautions to avoid contact with rotating parts, particularly entanglement of loose clothing, hair, or necklaces. 
Under these circumstances, the winding action of the rotating parts can draw the user into the apparatus, resulting in breakage of glassware, burns, and chemical exposure. Extra caution must also be applied to operations with air-reactive materials, especially when under vacuum. A leak can draw air into the apparatus and a violent reaction can occur. See also Vapor pressure Centrifugal evaporator References Evaporators Distillation Laboratory equipment
Rotary evaporator
[ "Chemistry", "Engineering" ]
1,614
[ "Chemical equipment", "Distillation", "Evaporators", "Separation processes" ]
946,929
https://en.wikipedia.org/wiki/Bionics
Bionics or biologically inspired engineering is the application of biological methods and systems found in nature to the study and design of engineering systems and modern technology. The word bionic, coined by Jack E. Steele in August 1958, is a portmanteau from biology and electronics which was popularized by the 1970s U.S. television series The Six Million Dollar Man and The Bionic Woman, both based on the novel Cyborg by Martin Caidin. All three stories feature humans given various superhuman powers by their electromechanical implants. According to proponents of bionic technology, the transfer of technology between lifeforms and manufactured objects is desirable because evolutionary pressure typically forces living organisms (fauna and flora) to become optimized and efficient. For example, dirt- and water-repellent paint (coating) was inspired by the hydrophobic properties of the lotus flower plant (the lotus effect). The term "biomimetic" is preferred for references to chemical reactions, such as reactions that, in nature, involve biological macromolecules (e.g., enzymes or nucleic acids) whose chemistry can be replicated in vitro using much smaller molecules. Examples of bionics in engineering include the hulls of boats imitating the thick skin of dolphins or sonar, radar, and medical ultrasound imaging imitating animal echolocation. In the field of computer science, the study of bionics has produced artificial neurons, artificial neural networks, and swarm intelligence. Bionics also influenced evolutionary computation but took the idea further by simulating evolution in silico and producing optimized solutions that had never appeared in nature. A 2006 research article estimated that "at present there is only a 12% overlap between biology and technology in terms of the mechanisms used". History The name "biomimetics" was coined by Otto Schmitt in the 1950s. The term "bionics" was later introduced by Jack E. Steele in August 1958 while working at the Aeronautics Division at Wright-Patterson Air Force Base in Dayton, Ohio. However, terms like biomimicry or biomimetics are preferred in order to avoid confusion with the medical term "bionics." Coincidentally, Martin Caidin used the word for his 1972 novel Cyborg, which was adapted into the television film and subsequent series The Six Million Dollar Man. Caidin was a long-time aviation industry writer before turning to fiction full-time. Methods The study of bionics often emphasizes implementing a function found in nature rather than imitating biological structures. For example, in computer science, cybernetics models the feedback and control mechanisms that are inherent in intelligent behavior, while artificial intelligence models the intelligent function regardless of the particular way it can be achieved. The conscious copying of examples and mechanisms from natural organisms and ecologies is a form of applied case-based reasoning, treating nature itself as a database of solutions that already work. Proponents argue that the selective pressure placed on all natural life forms minimizes and removes failures. Although almost all engineering could be said to be a form of biomimicry, the modern origins of this field are usually attributed to Buckminster Fuller, and its later codification as a field of study to Janine Benyus. There are generally three biological levels in the fauna or flora after which technology can be modeled: mimicking natural methods of manufacture; imitating mechanisms found in nature (e.g. velcro);
and studying organizational principles from the social behavior of organisms, such as the flocking behavior of birds, the optimization of ant and bee foraging, and the swarm intelligence (SI)-based behavior of a school of fish (a minimal sketch of such flocking rules appears below). Examples In robotics, bionics and biomimetics are used to apply the way animals move to the design of robots. BionicKangaroo was based on the movements and physiology of kangaroos. Velcro is the most famous example of biomimetics. In 1948, the Swiss engineer George de Mestral was cleaning his dog of burrs picked up on a walk when he realized how the hooks of the burrs clung to the fur. The horn-shaped, saw-tooth design for lumberjack blades used at the turn of the 19th century to cut down trees when it was still done by hand was modeled after observations of a wood-burrowing beetle. The blades were significantly more efficient and thus revolutionized the timber industry. Cat's eye reflectors were invented by Percy Shaw in 1935 after studying the mechanism of cat eyes. He had found that cats had a system of reflecting cells, known as the tapetum lucidum, which was capable of reflecting the tiniest bit of light. Leonardo da Vinci's flying machines and ships are early examples of drawing from nature in engineering. Resilin is a replacement for rubber that has been created by studying the material as found in arthropods. Julian Vincent drew from the study of pinecones when, in 2004, he developed "smart" clothing that adapts to changing temperatures. "I wanted a nonliving system which would respond to changes in moisture by changing shape," he said. "There are several such systems in plants, but most are very small—the pinecone is the largest and therefore the easiest to work on." Pinecones respond to higher humidity by opening their scales (to disperse their seeds). The "smart" fabric does the same thing, opening up when the wearer is warm and sweating and shutting tight when cold. "Morphing aircraft wings" that change shape according to the speed and duration of flight were designed in 2004 by biomimetic scientists from Penn State University. The morphing wings were inspired by different bird species that have differently shaped wings according to the speed at which they fly. In order to change the shape and underlying structure of the aircraft wings, the researchers needed to make the overlying skin also able to change, which their design does by covering the wings with fish-inspired scales that can slide over each other. In some respects this is a refinement of the swing-wing design. Some paints and roof tiles have been engineered to be self-cleaning by copying the mechanism from the Nelumbo lotus. Cholesteric liquid crystals (CLCs) are the thin-film material often used to fabricate fish tank thermometers or mood rings that change color with temperature changes. They change color because their molecules are arranged in a helical or chiral arrangement, and with temperature the pitch of that helical structure changes, reflecting different wavelengths of light. Chiral Photonics, Inc. has abstracted the self-assembled structure of the organic CLCs to produce analogous optical devices using tiny lengths of inorganic, twisted glass fiber. 
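The flocking principle mentioned under Methods lends itself to a compact demonstration. The following is a minimal Python sketch of Reynolds-style flocking rules (cohesion, alignment, separation), not taken from the article; the neighbourhood radius and rule weights are illustrative assumptions:

import numpy as np

def boids_step(pos, vel, dt=0.1, radius=1.0, w_coh=0.01, w_ali=0.05, w_sep=0.10):
    """One update of the three classic flocking rules for N birds in 2D.

    pos, vel: (N, 2) arrays. The radius and weights are arbitrary choices.
    """
    new_vel = vel.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (dist > 0) & (dist < radius)          # neighbours of bird i
        if not nbr.any():
            continue
        coh = pos[nbr].mean(axis=0) - pos[i]        # cohesion: steer toward local centre
        ali = vel[nbr].mean(axis=0) - vel[i]        # alignment: match neighbours' velocity
        sep = (pos[i] - pos[nbr]).sum(axis=0)       # separation: move away from close birds
        new_vel[i] = vel[i] + w_coh * coh + w_ali * ali + w_sep * sep
    return pos + dt * new_vel, new_vel

# Example: 50 birds with random positions and velocities.
rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 5, (50, 2)), rng.normal(0, 0.5, (50, 2))
for _ in range(100):
    pos, vel = boids_step(pos, vel)

Iterating this update is enough to see the group pull into aligned clusters, which is the point of the swarm-intelligence examples above: global order emerging from purely local rules.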
Nanostructures and physical mechanisms that produce the shining color of butterfly wings were reproduced in silico by Greg Parker, professor of Electronics and Computer Science at the University of Southampton, and research student Luca Plattner in the field of photonics, which is electronics using photons as the information carrier instead of electrons. The wing structure of the blue morpho butterfly was studied and the way it reflects light was mimicked to create an RFID tag that can be read through water and on metal. The wing structure of butterflies has also inspired the creation of new nanosensors to detect explosives. Neuromorphic chips and silicon retinae have wiring that is modeled after real neural networks. Techno Ecosystems or 'Eco Cyborg' systems involve the coupling of natural ecological processes to technological ones which mimic ecological functions. This results in the creation of a self-regulating hybrid system. Research into this field was initiated by Howard T. Odum, who perceived the structure and energy dynamics of ecosystems as being analogous to energy flow between components of an electrical circuit. Medical adhesives involving glue and tiny nano-hairs are being developed based on the physical structures found in the feet of geckos. Computer viruses also show similarities with biological viruses, attacking stored program information in order to reproduce and disseminate themselves. The cooling system of the Eastgate Centre building in Harare was modeled after a termite mound to achieve very efficient passive cooling. The adhesive which allows mussels to stick to rocks, piers, and boat hulls inspired a bioadhesive gel for blood vessels. The field of bionics has inspired new aircraft designs which offer greater agility along with other advantages. This has been described by Geoff Spedding, Måns Rosén, and Anders Hedenström in an article in the Journal of Experimental Biology. Similar statements were also made by John Videler and Eize Stamhuis in their book Avian Flight, and in the article they present in Science about LEVs (leading-edge vortices). This research in bionics may also be used to create more efficient helicopters or miniature UAVs, as stated by Bret Tobalske in an article in Science about hummingbirds. UC Berkeley as well as ESA have been working in a similar direction and created the Robofly (a miniature UAV) and the Entomopter (a UAV which can walk, crawl and fly). A bio-inspired mechanical device can generate plasma in water via cavitation using a morphologically accurate snapping shrimp claw. This was described in detail by Xin Tang and David Staack in an article published in Science Advances. Specific uses of the term In medicine Bionics refers to the flow of concepts from biology to engineering and vice versa. Hence, there are two slightly different points of view regarding the meaning of the word. In medicine, bionics means the replacement or enhancement of organs or other body parts by mechanical versions. Bionic implants differ from mere prostheses by mimicking the original function very closely, or even surpassing it. The German equivalent of bionics, Bionik, always adheres to the broader meaning, in that it tries to develop engineering solutions from biological models. This approach is motivated by the fact that biological solutions will usually be optimized by evolutionary forces. 
While the technologies that make bionic implants possible are developing gradually, a few successful bionic devices already exist, a well-known one being the Australian-invented multi-channel cochlear implant (bionic ear), a device for deaf people. Since the bionic ear, many bionic devices have emerged, and work is progressing on bionic solutions for other sensory disorders (e.g. vision and balance). Bionic research has recently provided treatments for medical problems such as neurological and psychiatric conditions, for example Parkinson's disease and epilepsy. In 1997, Colombian researcher Alvaro Rios Poveda developed an upper limb and hand prosthesis with sensory feedback. This technology allows amputee patients to handle prosthetic hand systems in a more natural way. By 2004, fully functional artificial hearts had been developed. Significant progress is expected with the advent of nanotechnology. A well-known example of a proposed nanodevice is a respirocyte, an artificial red cell designed (though not yet built) by Robert Freitas. During his eight years in the Department of Bioengineering at the University of Pennsylvania, Kwabena Boahen developed a silicon retina that was able to process images in the same manner as a living retina. He confirmed the results by comparing the electrical signals from his silicon retina to the electrical signals produced by a salamander eye while the two retinas were looking at the same image. On July 21, 2015, the BBC's medical correspondent Fergus Walsh reported, "surgeons in Manchester have performed the first bionic eye implant in a patient with the most common cause of sight loss in the developed world. Ray Flynn, 80, has dry age-related macular degeneration which has led to the total loss of his central vision. He is using a retinal implant that converts video images from a miniature video camera worn on his glasses. He can now make out the direction of white lines on a computer screen using the retinal implant." The implant, known as the Argus II and manufactured in the US by the company Second Sight Medical Products, had been used previously in patients who were blind as the result of the rare inherited degenerative eye disease retinitis pigmentosa. In 2016, Tilly Lockey (born October 7, 2005) was fitted with a pair of bionic "Hero Arms" manufactured by Open Bionics, a UK bionics enterprise. The Hero Arm is a lightweight myoelectric prosthesis for below-elbow amputee adults and children aged eight and above. Tilly Lockey, who at 15 months had both her arms amputated after being diagnosed with meningococcal sepsis strain B, describes the Hero Arms as "really realistic, to the point where it was quite creepy how realistic they were." On February 17, 2020, Darren Fuller, a military veteran, became the first person to receive a bionic arm under a public healthcare system. Fuller lost the lower section of his right arm in 2008, in an incident involving mortar ammunition while serving in Afghanistan. Other uses Business biomimetics is the latest development in the application of biomimetics. Specifically, it applies principles and practice from biological systems to business strategy, process, organization design, and strategic thinking. It has been successfully used by a range of industries in FMCG, defense, central government, packaging, and business services. Based on the work of Phil Richardson at the University of Bath, the approach was launched at the House of Lords in May 2009. 
Generally, bionics is used as a creativity technique that studies biological prototypes to get ideas for engineering solutions. In chemistry, a biomimetic synthesis is a chemical synthesis inspired by biochemical processes. Another, more recent meaning of the term bionics refers to merging organism and machine. This approach results in a hybrid system combining biological and engineering parts, which can also be referred to as a cybernetic organism (cyborg). A practical realization of this was demonstrated in Kevin Warwick's implant experiments, in which ultrasound input was brought in via his own nervous system. See also Biomechatronics Biomedical engineering Biomimetics The Bionic Woman Bionic Woman (2007 TV series) Bionic architecture Biophysics Biotechnology Cyborg Cyborg (novel) History of technology Implant Index of environmental articles Neuroprosthetics Prosthesis The Six Million Dollar Man Wyss Institute for Biologically Inspired Engineering Terminator Transhumanism References Sources Biomimicry: Innovation Inspired by Nature. 1997. Janine Benyus. Biomimicry for Optimization, Control, and Automation, Springer-Verlag, London, 2005, Kevin M. Passino "Ideas Stolen Right from Nature" (Wired) Bionics and Engineering: The Relevance of Biology to Engineering, presented at Society of Women Engineers Convention, Seattle, WA, 1983, Jill E. Steele Bionics: Nature as a Model. 1993. PRO FUTURA Verlag GmbH, München, Umweltstiftung WWF Deutschland Lipov A.N. "At the origins of modern bionics. Bio-morphological formation in an artificial environment" Polygnosis. No. 1–2. 2010. Ch. 1–2. pp. 126–136. Lipov A.N. "At the origins of modern bionics. Bio-morphological formation in an artificial environment." Polygnosis. No. 3. 2010. Part 3. pp. 80–91. External links Bionics Queensland Centre for Nature Inspired Engineering at UCL (University College London) Biological Robotics at the University of Tulsa Wyss Institute for Biologically Inspired Engineering The Biomimicry Institute Center for Biologically Inspired Design Biologically Inspired Design group at the Design and Intelligence Lab, Georgia Tech Center for Biologically Inspired Materials & Material Systems Biologically Inspired Product Development at the University of Maryland The Biologically Inspired Materials Institute Center for Biologically Inspired Robotics Research at Case Western Reserve University Biologically Inspired Materials Institute Bio Inspired Engineering at the Applied University Kufstein, Austria Laboratory for Nature Inspired Engineering at The Pennsylvania State University 1950s neologisms Biological engineering Biotechnology Bioinspiration
Bionics
[ "Engineering", "Biology" ]
3,294
[ "Biological engineering", "Bionics", "Biotechnology", "nan", "Bioinspiration" ]
947,234
https://en.wikipedia.org/wiki/Dunkl%20operator
In mathematics, particularly the study of Lie groups, a Dunkl operator is a certain kind of mathematical operator, involving differential operators but also reflections in an underlying space. Formally, let G be a Coxeter group with reduced root system R and let k_v be an arbitrary "multiplicity" function on R (so k_u = k_v whenever the reflections σ_u and σ_v corresponding to the roots u and v are conjugate in G). Then, for a choice of positive subsystem R_+ of R, the Dunkl operator is defined by:

T_i f(x) = \frac{\partial f}{\partial x_i}(x) + \sum_{v \in R_+} k_v \, \frac{f(x) - f(\sigma_v x)}{\langle x, v \rangle} \, v_i

where v_i is the i-th component of v, 1 ≤ i ≤ N, x is in R^N, and f is a smooth function on R^N. Dunkl operators were introduced by Charles Dunkl in 1989. One of Dunkl's major results was that Dunkl operators "commute," that is, they satisfy

T_i T_j = T_j T_i

just as partial derivatives do. Thus Dunkl operators represent a meaningful generalization of partial derivatives. References Lie groups
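As a concrete illustration of the definition above (an addition, but standard in the literature), take the rank-one case: N = 1, with G = Z/2Z acting on the real line by the reflection x ↦ −x and a single multiplicity parameter k. The Dunkl operator then reduces to the differential-difference operator

T f(x) = f'(x) + k \, \frac{f(x) - f(-x)}{x}

On monomials, T x^{2n} = 2n\,x^{2n-1} and T x^{2n+1} = (2n + 1 + 2k)\,x^{2n}, so setting k = 0 recovers the ordinary derivative.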
Dunkl operator
[ "Mathematics" ]
184
[ "Lie groups", "Mathematical structures", "Algebraic structures" ]
947,383
https://en.wikipedia.org/wiki/Tensile%20structure
In structural engineering, a tensile structure is a construction of elements carrying only tension and no compression or bending. The term tensile should not be confused with tensegrity, which is a structural form with both tension and compression elements. Tensile structures are the most common type of thin-shell structures. Most tensile structures are supported by some form of compression or bending elements, such as masts (as in The O2, formerly the Millennium Dome), compression rings or beams. A tensile membrane structure is most often used as a roof, as such structures can economically and attractively span large distances. Tensile membrane structures may also be used as complete buildings, with a few common applications being sports facilities, warehousing and storage buildings, and exhibition venues. History This form of construction only became more rigorously analyzed and widespread in large structures in the latter part of the twentieth century. Tensile structures have long been used in tents, where the guy ropes and tent poles provide pre-tension to the fabric and allow it to withstand loads. Russian engineer Vladimir Shukhov was one of the first to develop practical calculations of stresses and deformations of tensile structures, shells and membranes. Shukhov designed eight tensile structure and thin-shell structure exhibition pavilions for the Nizhny Novgorod Fair of 1896, covering an area of 27,000 square meters. A more recent large-scale use of a membrane-covered tensile structure is the Sidney Myer Music Bowl, constructed in 1958. Antoni Gaudí used the concept in reverse to create a compression-only structure for the Colònia Güell Church. He created a hanging tensile model of the church to calculate the compression forces and to experimentally determine the column and vault geometries. The concept was later championed by German architect and engineer Frei Otto, whose first use of the idea was in the construction of the West German pavilion at Expo 67 in Montreal. Otto next used the idea for the roof of the Olympic Stadium for the 1972 Summer Olympics in Munich. Since the 1960s, tensile structures have been promoted by designers and engineers such as Ove Arup, Buro Happold, Frei Otto, Mahmoud Bodo Rasch, Eero Saarinen, Horst Berger, Matthew Nowicki, Jörg Schlaich, and David Geiger. Steady technological progress has increased the popularity of fabric-roofed structures. The low weight of the materials makes construction easier and cheaper than standard designs, especially when vast open spaces have to be covered. Types of structure with significant tension members
Linear structures: suspension bridges, stressed ribbon bridges, draped cables, cable-stayed beams or trusses, cable trusses, and straight tensioned cables.
Three-dimensional structures: bicycle wheels (usable as a roof in a horizontal orientation), 3D cable trusses, and tensegrity structures.
Surface-stressed structures: prestressed membranes, pneumatically stressed membranes, gridshells, fabric structures, and cable and membrane structures.
Membrane materials Common materials for doubly curved fabric structures are PTFE-coated fiberglass and PVC-coated polyester. These are woven materials with different strengths in different directions. The warp fibers (those fibers which are originally straight, equivalent to the starting fibers on a loom) can carry greater load than the weft or fill fibers, which are woven between the warp fibers. 
Other structures make use of ETFE film, either as a single layer or in cushion form (which can be inflated, to provide good insulation properties or for aesthetic effect, as on the Allianz Arena in Munich). ETFE cushions can also be etched with patterns in order to let different levels of light through when inflated to different levels. In daylight, fabric membrane translucency offers soft diffused naturally lit spaces, while at night, artificial lighting can be used to create an ambient exterior luminescence. Such film structures are most often supported by a structural frame, as they cannot derive their strength from double curvature. Cables Cables can be of mild steel, high strength steel (drawn carbon steel), stainless steel, polyester or aramid fibres. Structural cables are made of a series of small strands twisted or bound together to form a much larger cable. Steel cables are either spiral strand, where circular rods are twisted together and "glued" using a polymer, or locked coil strand, where individual interlocking steel strands form the cable (often with a spiral strand core). Spiral strand is slightly weaker than locked coil strand. Steel spiral strand cables have a Young's modulus, E, of 150±10 kN/mm² (or 150±10 GPa) and come in sizes from 3 to 90 mm diameter. Spiral strand suffers from construction stretch, where the strands compact when the cable is loaded. This is normally removed by pre-stretching the cable and cycling the load up and down to 45% of the ultimate tensile load. Locked coil strand typically has a Young's modulus of 160±10 kN/mm² and comes in sizes from 20 mm to 160 mm diameter. The properties of the individual strands of different materials are shown in the table below, where UTS is ultimate tensile strength, or the breaking load. Structural forms Air-supported structures are a form of tensile structure where the fabric envelope is supported by pressurised air only. The majority of fabric structures derive their strength from their doubly curved shape. By forcing the fabric to take on double curvature, the fabric gains sufficient stiffness to withstand the loads it is subjected to (for example wind and snow loads). In order to induce an adequately doubly curved form it is most often necessary to pretension or prestress the fabric or its supporting structure. Form-finding The behaviour of structures which depend upon prestress to attain their strength is non-linear, so anything other than a very simple cable was, until the 1990s, very difficult to design. The most common way to design doubly curved fabric structures was to construct scale models of the final buildings in order to understand their behaviour and to conduct form-finding exercises. Such scale models often employed stocking material or tights, or soap film, as they behave in a very similar way to structural fabrics (they cannot carry shear). Soap films have uniform stress in every direction and require a closed boundary to form. They naturally form a minimal surface, the form with minimal area and embodying minimal energy. They are, however, very difficult to measure. For a large film, its weight can seriously affect its form. 
For a membrane with curvature in two directions, the basic equation of equilibrium is:

\frac{t_1}{R_1} + \frac{t_2}{R_2} = w

where: R_1 and R_2 are the principal radii of curvature for soap films, or the directions of the warp and weft for fabrics; t_1 and t_2 are the tensions in the relevant directions; and w is the load per square metre. Lines of principal curvature have no twist and intersect other lines of principal curvature at right angles. A geodesic or geodetic line is usually the shortest line between two points on the surface. These lines are typically used when defining the cutting pattern seam-lines. This is due to their relative straightness after the planar cloths have been generated, resulting in lower cloth wastage and closer alignment with the fabric weave. In a pre-stressed but unloaded surface w = 0, so

\frac{t_1}{R_1} = -\frac{t_2}{R_2}

In a soap film surface tensions are uniform in both directions, so R_1 = −R_2. It is now possible to use powerful non-linear numerical analysis programs (or finite element analysis) to form-find and design fabric and cable structures. The programs must allow for large deflections. The final shape, or form, of a fabric structure depends upon: the shape, or pattern, of the fabric; the geometry of the supporting structure (such as masts, cables, ringbeams etc.); and the pretension applied to the fabric or its supporting structure. It is important that the final form does not allow ponding of water, as this can deform the membrane and lead to local failure or progressive failure of the entire structure. Snow loading can be a serious problem for membrane structures, as the snow often will not flow off the structure as water will. For example, this has in the past caused the (temporary) collapse of the Hubert H. Humphrey Metrodome, an air-inflated structure in Minneapolis, Minnesota. Some structures prone to ponding use heating to melt snow which settles on them. There are many different doubly curved forms, many of which have special mathematical properties. The most basic doubly curved form is the saddle shape, which can be a hyperbolic paraboloid (not all saddle shapes are hyperbolic paraboloids). This is a doubly ruled surface and is often used in lightweight shell structures (see hyperboloid structures). True ruled surfaces are rarely found in tensile structures. Other forms are anticlastic saddles, various radial, conical tent forms and any combination of them. Pretension Pretension is tension artificially induced in the structural elements in addition to any self-weight or imposed loads they may carry. It is used to ensure that the normally very flexible structural elements remain stiff under all possible loads. A day-to-day example of pretension is a shelving unit supported by wires running from floor to ceiling. The wires hold the shelves in place because they are tensioned; if the wires were slack the system would not work. Pretension can be applied to a membrane by stretching it from its edges or by pretensioning cables which support it and hence changing its shape. The level of pretension applied determines the shape of a membrane structure. Alternative form-finding approach The alternative approximated approach to the form-finding problem solution is based on the total energy balance of a grid-nodal system. Due to its physical meaning this approach is called the stretched grid method (SGM).
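To make the numerical form-finding idea concrete, here is a minimal Python sketch of the force density method, a standard linear form-finding technique related to, though distinct from, the non-linear and stretched-grid approaches mentioned above. The tiny cable net, the force densities and the load are invented purely for illustration:

import numpy as np

# A toy cable net: four fixed anchor nodes (0-3) and one free node (4),
# connected by four cable segments.
edges = [(0, 4), (1, 4), (2, 4), (3, 4)]
q = np.array([1.0, 1.0, 1.0, 2.0])        # assumed force densities (tension/length) per edge
xyz = np.array([[0.0, 0.0, 0.0],          # fixed node coordinates
                [1.0, 0.0, 0.0],
                [1.0, 1.0, 0.0],
                [0.0, 1.0, 1.0],
                [0.0, 0.0, 0.0]])         # free node, position to be solved for
fixed, free = [0, 1, 2, 3], [4]
p = np.array([[0.0, 0.0, -0.5]])          # external load on the free node (e.g. self-weight)

# Branch-node matrix C: +1 at each edge's start node, -1 at its end node.
C = np.zeros((len(edges), len(xyz)))
for k, (i, j) in enumerate(edges):
    C[k, i], C[k, j] = 1.0, -1.0
Q = np.diag(q)
Cf, Cx = C[:, fixed], C[:, free]

# With fixed force densities the equilibrium equations become linear:
# (Cx^T Q Cx) x_free = p - Cx^T Q Cf x_fixed
A = Cx.T @ Q @ Cx
b = p - Cx.T @ Q @ Cf @ xyz[fixed]
xyz[free] = np.linalg.solve(A, b)
print(xyz[free])   # equilibrium position of the free node, here [0.4, 0.6, 0.3]

The linearity is the attraction of this family of methods: prescribing force densities instead of element forces sidesteps the geometric non-linearity described above, at the cost of only indirectly controlling the final tensions.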
Simple mathematics of cables Transversely and uniformly loaded cable A uniformly loaded cable spanning between two supports forms a curve intermediate between a catenary curve and a parabola. The simplifying assumption can be made that it approximates a circular arc (of radius R). By equilibrium, the horizontal and vertical reactions are:

H = \frac{wS^2}{8d}, \qquad V = \frac{wS}{2}

where w is the load per unit length of span, S is the span and d is the sag of the cable. By geometry, the length of the cable is:

L = 2R \arcsin\!\left(\frac{S}{2R}\right)

The tension in the cable is:

T = \sqrt{H^2 + V^2}

By substitution:

T = \sqrt{\left(\frac{wS^2}{8d}\right)^2 + \left(\frac{wS}{2}\right)^2}

The tension is also equal to:

T = wR

The extension of the cable upon being loaded is (from Hooke's law, where the axial stiffness, k, is equal to EA/L_0):

e = \frac{T L_0}{EA}

where E is the Young's modulus of the cable and A is its cross-sectional area. If an initial pretension, T_0, is added to the cable, the extension becomes:

e = \frac{(T - T_0)\,L_0}{EA}

Combining the above equations (using T = wR to eliminate R) gives an implicit equation for the tension:

L_0 + \frac{(T - T_0)\,L_0}{EA} = \frac{2T}{w} \arcsin\!\left(\frac{wS}{2T}\right)

By plotting the left hand side of this equation against T, and plotting the right hand side on the same axes, also against T, the intersection will give the actual equilibrium tension in the cable for a given loading w and a given pretension T_0. Cable with central point load A similar solution to that above can be derived for a central point load W producing a central deflection d over a span S. By equilibrium, the vertical reaction at each support is W/2, and resolving forces at the load point gives:

W = 2T\sin\theta

where θ is the angle of the cable to the horizontal. By geometry, the loaded length of the cable and the angle are:

L = \sqrt{S^2 + 4d^2}, \qquad \sin\theta = \frac{2d}{\sqrt{S^2 + 4d^2}}

This gives the following relationship:

T = \frac{W\sqrt{S^2 + 4d^2}}{4d}

As before, combining this with the extension relation and plotting the left hand side and right hand side of the resulting equation against the tension, T, will give the equilibrium tension for a given pretension, T_0, and load, W. Tensioned cable oscillations The fundamental natural frequency, f_1, of a tensioned cable is given by:

f_1 = \frac{1}{2L}\sqrt{\frac{T}{\mu}}

where T is the tension in newtons, μ = m/L is the mass per unit length in kilograms per metre (m being the cable mass in kilograms) and L is the span length in metres. Notable structures
Shukhov Rotunda, Russia, 1896
Canada Place, Vancouver, British Columbia, for Expo '86
Yoyogi National Gymnasium by Kenzo Tange, Yoyogi Park, Tokyo, Japan
Ingalls Rink, Yale University, by Eero Saarinen
Khan Shatyr Entertainment Center, Astana, Kazakhstan
Tropicana Field, St. Petersburg, Florida
Olympiapark, Munich, by Frei Otto
Sidney Myer Music Bowl, Melbourne
The O2 (formerly the Millennium Dome), London, by Buro Happold and Richard Rogers Partnership
Denver International Airport, Denver
Dorton Arena, Raleigh
Georgia Dome, Atlanta, Georgia, by Heery and Weidlinger Associates (demolished in 2017)
Grantley Adams International Airport, Christ Church, Barbados
Pengrowth Saddledome, Calgary, by Graham McCourt Architects and Jan Bobrowski and Partners
Scandinavium, Gothenburg, Sweden
Hong Kong Museum of Coastal Defence
Modernization of the Central Railway Station, Sofia, Bulgaria
Redbird Arena, Illinois State University, Normal, Illinois
Retractable Umbrellas, Al-Masjid an-Nabawi, Medina, Saudi Arabia
Killesberg Tower, Stuttgart
Classification numbers The Construction Specifications Institute (CSI) and Construction Specifications Canada (CSC), MasterFormat 2018 Edition, Division 05 and 13:
05 16 00 – Structural Cabling
05 19 00 – Tension Rod and Cable Truss Assemblies
13 31 00 – Fabric Structures
13 31 23 – Tensioned Fabric Structures
13 31 33 – Framed Fabric Structures
CSI/CSC MasterFormat 1995 Edition:
13120 – Cable-Supported Structures
13120 – Fabric Structures
See also Buckminster Fuller Gaussian curvature Geodesic dome Geodesics Hyperboloid structure Kārlis Johansons Kenneth Snelson Suspended structure Suspension bridge Tensairity Tensegrity Wire rope References Further reading "The Nijni-Novgorod exhibition: Water tower, room under construction, springing of 91 feet span", "The Engineer", № 19.3.1897, P.292-294, London, 1897. 
Horst Berger, Light Structures, Structures of Light: The Art and Engineering of Tensile Architecture (Birkhäuser Verlag, 1996)
Alan Holgate, The Art of Structural Engineering: The Work of Jörg Schlaich and his Team (Books Britain, 1996)
Elizabeth Cooper English: "Arkhitektura i mnimosti": The origins of Soviet avant-garde rationalist architecture in the Russian mystical-philosophical and mathematical intellectual tradition, a dissertation in architecture, 264 p., University of Pennsylvania, 2000.
"Vladimir G. Suchov 1853–1939. Die Kunst der sparsamen Konstruktion.", Rainer Graefe, Jos Tomlow und andere, 192 S., Deutsche Verlags-Anstalt, Stuttgart, 1990.
Conrad Roland: Frei Otto – Spannweiten. Ideen und Versuche zum Leichtbau. Ein Werkstattbericht von Conrad Roland. Ullstein, Berlin, Frankfurt/Main und Wien, 1965.
Frei Otto, Bodo Rasch: Finding Form – Towards an Architecture of the Minimal, Edition Axel Menges, 1996.
Nerdinger, Winfried: Frei Otto. Das Gesamtwerk: Leicht Bauen Natürlich Gestalten, 2005.
Roofs Russian inventions Structural system Tensile architecture Tensile membrane structures
Tensile structure
[ "Technology", "Engineering" ]
3,001
[ "Structural engineering", "Building engineering", "Structural system", "Roofs", "Tensile architecture" ]
947,692
https://en.wikipedia.org/wiki/Dredging
Dredging is the excavation of material from a water environment. Possible reasons for dredging include improving existing water features; reshaping land and water features to alter drainage, navigability, and commercial use; constructing dams, dikes, and other controls for streams and shorelines; and recovering valuable mineral deposits or marine life having commercial value. In all but a few situations the excavation is undertaken by a specialist floating plant, known as a dredger. Usually the main objective of dredging is to recover material of value, or to create a greater depth of water. Dredging systems can either be shore-based, brought to a location based on barges, or built into purpose-built vessels. Dredging can have environmental impacts: it can disturb marine sediments, creating dredge plumes which can lead to both short- and long-term water pollution, damage or destroy seabed ecosystems, and release legacy human-sourced toxins captured in the sediment. These environmental impacts can reduce marine wildlife populations, contaminate sources of drinking water, and interrupt economic activities such as fishing. Description Dredging is excavation carried out underwater or partially underwater, in shallow waters or ocean waters. It keeps waterways and ports navigable, and assists coastal protection, land reclamation and coastal redevelopment, by gathering up bottom sediments and transporting them elsewhere. Dredging can be done to recover materials of commercial value; these may be high value minerals or sediments such as sand and gravel that are used by the construction industry. Dredging is a four-part process: loosening the material, bringing the material to the surface (together constituting extraction), transportation and disposal. The extract can be disposed of locally or transported by barge or in a liquid suspension in pipelines. Disposal can be to infill sites, or the material can be used constructively to replenish eroded sand that has been lost to coastal erosion, or to create sea-walls, building land or whole new landforms such as viable islands in coral atolls. History Ancient authors refer to harbour dredging. The seven arms of the Nile were channelled and wharfs built at the time of the pyramids (4000 BC), there was extensive harbour building in the eastern Mediterranean from 1000 BC, and the disturbed sediment layers give evidence of dredging. At Marseille, dredging phases are recorded from the third century BC onwards, the most extensive during the first century AD. The remains of three dredging boats have been unearthed; they were abandoned at the bottom of the harbour during the first and second centuries AD. During the Islamic Golden Age, the Banu Musa brothers, working at the Bayt al-Hikmah (House of Wisdom) in Baghdad, described in their Book of Ingenious Devices an original invention, a grab machine that does not appear in any earlier Greek works. The grab they described was used to extract objects from underwater, and to recover objects from the beds of streams. During the Renaissance, Leonardo da Vinci drew a design for a drag dredger. Dredging machines have been used during the construction of the Suez Canal from the late 1800s through present-day expansions and maintenance. The completion of the Panama Canal in 1914, the most expensive U.S. engineering project at the time, relied extensively on dredging. 
Purposes Capital dredging: dredging carried out to create a new harbour, berth or waterway, or to deepen existing facilities in order to allow larger ships access. Because capital works usually involve hard material or high-volume works, the work is usually done using a cutter-suction dredge or large trailing suction hopper dredge; but for rock works, drilling and blasting along with mechanical excavation may be used. Land reclamation: dredging to mine sand, clay or rock from the seabed and using it to construct new land elsewhere. This is typically performed by a cutter-suction dredge or trailing suction hopper dredge. The material may also be used for flood or erosion control. Maintenance: dredging to deepen or maintain navigable waterways or channels which are threatened to become silted with the passage of time, due to sedimented sand and mud, possibly making them too shallow for navigation. This is often carried out with a trailing suction hopper dredge. Most dredging is for this purpose, and it may also be done to maintain the holding capacity of reservoirs or lakes. Harvesting materials: dredging sediment for materials like gold, diamonds or other valuable trace substances. Hobbyists examine their dredged matter to pick out items of potential value, similar to the hobby of metal detecting. Fishing dredging is a technique for catching certain species of edible clams and crabs. In Louisiana and other American states with salt water estuaries that can sustain bottom oyster beds, oysters are raised and harvested. A heavy rectangular metal scoop is towed astern of a moving boat with a chain bridle attached to a cable. This drags along the bottom scooping up oysters. It is periodically winched aboard and the catch is sorted and bagged for shipment. Preparatory: dredging work and excavation for future bridges, piers, docks or wharves, often to build their foundations. Winning construction materials: dredging sand and gravel from offshore licensed areas for use in the construction industry, principally for use in concrete. This very specialist industry is focused in NW Europe; it uses specialized trailing suction hopper dredgers, self-discharging the dry cargo ashore. Land-based old river beds can be processed in this manner too. Contaminant remediation: to reclaim areas affected by chemical spills, storm water surges (with urban runoff), and other soil contaminations, including silt from sewage sludge and from decayed matter, like wilted plants. Disposal becomes a proportionally large factor in these operations. Flood prevention: dredging increases the channel depth and therefore increases a channel's capacity for carrying water. Other Beach nourishment: this is mining sand offshore and placing it on a beach to replace sand eroded by storms or wave action. This enhances the recreational and protective functions of the beach, which are also eroded by human activity. This is typically performed by a cutter-suction dredge or trailing suction hopper dredge. Peat extraction: dredging poles or dredge hauls were used on the back of small boats to manually dredge the beds of peat-moor waterways. The extracted peat was used as a fuel. This tradition is now more or less obsolete. The tools are now significantly changed. Removing rubbish and debris: often done in combination with maintenance dredging, this process removes non-natural matter from the bottoms of rivers, canals and harbours. 
Law enforcement agencies sometimes need to use a 'drag' to recover evidence or corpses from beneath the water. Anti-eutrophication: a kind of contaminant remediation, dredging is an expensive option for the remediation of eutrophied (or de-oxygenated) water bodies; one of the causes, as mentioned above, is sewage sludge. However, as artificially elevated phosphorus levels in the sediment aggravate the eutrophication process, controlled sediment removal is occasionally the only option for the reclamation of still waters. Seabed mining: a possible future use, recovering natural metal ore nodules from the sea's deepest troughs. Types Suction dredgers These operate by sucking through a long tube, like some vacuum cleaners but on a larger scale. A plain suction dredger has no tool at the end of the suction pipe to disturb the material. Trailing suction A trailing suction hopper dredger (TSHD) trails its suction pipe when working. The pipe, which is fitted with a dredge drag head, loads the dredge spoil into one or more hoppers in the vessel. When the hoppers are full, the TSHD sails to a disposal area and either dumps the material through doors in the hull or pumps the material out of the hoppers. Some dredges also self-offload using drag buckets and conveyors. The largest trailing suction hopper dredgers in the world were Jan De Nul's Cristobal Colon (launched 4 July 2008) and her sister ship Leiv Eriksson (launched 4 September 2009). Main design specifications for the Cristobal Colon and the Leiv Eriksson are: a 46,000 cubic metre hopper and a design dredging depth of 155 m. Next largest is HAM 318 (Van Oord) with its 37,293 cubic metre hopper and a maximum dredging depth of 101 m. Cutter-suction A cutter-suction dredger's (CSD) suction tube has a cutting mechanism at the suction inlet. The cutting mechanism loosens the bed material and transports it to the suction mouth. The dredged material is usually sucked up by a wear-resistant centrifugal pump and discharged either through a pipe line or to a barge. Cutter-suction dredgers are most often used in geological areas consisting of hard surface materials (for example gravel deposits or surface bedrock) where a standard suction dredger would be ineffective. They can, if sufficiently powerful, be used instead of underwater blasting. As of 2021, the most powerful cutter-suction dredger in the world is DEME's Spartacus, which entered service that year. Auger suction The auger dredge system functions like a cutter-suction dredger, but the cutting tool is a rotating Archimedean screw set at right angles to the suction pipe. Mud Cat invented the auger dredge in the 1970s. Jet-lift These use the Venturi effect of a concentrated high-speed stream of water to pull the nearby water, together with bed material, into a pipe. Air-lift An airlift is a type of small suction dredge. It is sometimes used like other dredges. At other times, an airlift is handheld underwater by a diver. It works by blowing air into the pipe; that air, being lighter than water, rises inside the pipe, dragging water with it. Mechanical dredgers Some bucket dredgers and grab dredgers are powerful enough to rip out coral to make a shipping channel through coral reefs. Bucket dredgers A bucket dredger is equipped with a bucket dredge, which is a device that picks up sediment by mechanical means, often with many circulating buckets attached to a wheel or chain. 
Grab dredgers A grab dredger picks up seabed material with a clam shell bucket, which hangs from an onboard crane or a crane barge, or is carried by a hydraulic arm, or is mounted as on a dragline. This technique is often used in excavation of bay mud. Most of these dredges are crane barges with spuds, steel piles that can be lowered and raised to position the dredge. Backhoe/dipper dredgers A backhoe/dipper dredger has a backhoe like that on some excavators. A crude but usable backhoe dredger can be made by mounting a land-type backhoe excavator on a pontoon. The six largest backhoe dredgers in the world are currently the Vitruvius, the Mimar Sinan, the Postnik Yakovlev (Jan De Nul), the Samson (DEME), the Simson and the Goliath (Van Oord). They feature barge-mounted excavators. Small backhoe dredgers can be track-mounted and work from the bank of ditches. A backhoe dredger is equipped with a half-open shell. The shell is filled moving towards the machine. Usually dredged material is loaded in barges. This machine is mainly used in harbours and other shallow water. Excavator dredge attachments The excavator dredge attachment uses the characteristics of cutter-suction dredgers, consisting of cutter heads and a suction pump for transferring material. These hydraulic attachments mount onto the boom arm of an excavator, allowing an operator to maneuver the attachment along the shoreline and in shallow water for dredging. Bed leveler This is a bar or blade which is pulled over the seabed behind any suitable ship or boat. It has an effect similar to that of a bulldozer on land. The chain-operated steam dredger Bertha, built in 1844 to a design by Brunel, was of this type and was the oldest operational steam vessel in Britain. Krabbelaar This is an early type of dredger which was formerly used in shallow water in the Netherlands. It was a flat-bottomed boat with spikes sticking out of its bottom. As the tidal current pulled the boat, the spikes scraped seabed material loose, and the current washed the material away, ideally to deeper water. Krabbelaar is the Dutch word for "scratcher". Water injection A water injection dredger uses a small jet to inject water under low pressure (to prevent the sediment from exploding into the surrounding waters) into the seabed to bring the sediment into suspension; it then becomes a turbidity current, which flows away down slope, is moved by a second burst of water from the WID or is carried away in natural currents. Water injection results in a lot of sediment in the water, which makes measurement with most hydrographic equipment (for instance, singlebeam echosounders) difficult. Pneumatic These dredgers use a chamber with inlets, out of which the water is pumped with the inlets closed. It is usually suspended from a crane on land or from a small pontoon or barge. Its effectiveness depends on the water pressure at depth. Snagboat A snagboat is designed to remove big debris, such as dead trees and parts of trees, from North American waterways. Amphibious Some of these are any of the above types of dredger, which can operate normally, or by extending legs, also known as spuds, so that the dredger stands on the seabed with its hull out of the water. Some forms can go on land. Some of these are land-type backhoe excavators whose wheels are on long hinged legs so that they can drive into shallow water and keep their cabs out of the water. Some of these may not have a floatable hull and, if so, cannot work in deep water. 
Oliver Evans (1755–1819) in 1804 invented the Oruktor Amphibolos, an amphibious dredger which was America's first steam-powered road vehicle. Submersible These are usually used to recover useful materials from the seabed. Many of them travel on continuous track. A unique variant is intended to walk on legs on the seabed. Fishing Fishing dredges are used to collect various species of clams, scallops, oysters or mussels from the seabed. Some dredges are also designed to catch crabs, sea urchins, sea cucumbers, and conch. These dredges have the form of a scoop made of chain mesh, and are towed by a fishing boat. Clam-specific dredges can utilize hydraulic injection to reach deeper into the sand. Dredging can be destructive to the seabed, and some scallop dredging has been replaced by collecting via scuba diving. Notable individual dredgers As of June 2018, the largest dredger in Asia was a dredger constructed in China. An even larger dredger, operated by the U.S. Army Corps of Engineers, was retired in 1980. A clamshell dredger that maintains levees in San Francisco Bay has operated continuously since being built in 1936. Dredge monitoring software Dredgers are often equipped with dredge monitoring software to help the dredge operator position the dredger and monitor the current dredge level. The monitoring software often uses Real Time Kinematic satellite navigation to accurately record where the machine has been operating and the depth to which it has dredged. Transportation and disposal of materials In a "hopper dredger", the dredged materials end up in a large onboard hold called a "hopper." A suction hopper dredger is usually used for maintenance dredging. A hopper dredge usually has doors in its bottom to empty the dredged materials, but some dredges empty their hoppers by splitting the two halves of their hulls on large hydraulic hinges. Either way, as the vessel dredges, excess water in the dredged materials is spilled off as the heavier solids settle to the bottom of the hopper. This excess water is returned to the sea to reduce weight and increase the amount of solid material (or slurry) that can be carried in one load. When the hopper is filled with slurry, the dredger stops dredging and goes to a dump site and empties its hopper. Some hopper dredges are designed so they can also be emptied from above using pumps if dump sites are unavailable or if the dredge material is contaminated. Sometimes the slurry of dredgings and water is pumped straight into pipes which deposit it on nearby land. These pipes are commonly known as dredge hoses. There are a few different types of dredge hose, which differ in terms of working pressure, floatability, armoring and so on. Suction hoses, discharge armored hoses and self-floating hoses are some of the popular types engineered for transporting and discharging dredge materials. Some operations have the pipes or hoses customised to their exact dredging needs. Other times, the slurry is pumped into barges (also called scows), which deposit it elsewhere while the dredge continues its work. A number of vessels, notably in the UK and NW Europe, de-water the hopper to dry the cargo, to enable it to be discharged onto a quayside 'dry'. This is achieved principally using self-discharging bucket wheels, drag scrapers or excavators feeding conveyor systems. 
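The economics of hopper loading described above come down to how much of the mixture is actually solids. As an illustration (not from the article), here is a minimal Python sketch of the standard mixture-density relation, with typical densities for water and quartz sand assumed:

def solids_fraction(rho_mix, rho_water=1000.0, rho_solid=2650.0):
    """Volumetric solids concentration of a dredge slurry.

    Uses the standard mixture relation
    Cv = (rho_mix - rho_water) / (rho_solid - rho_water),
    with densities in kg/m3; 2650 kg/m3 (quartz sand) is an assumed default.
    """
    return (rho_mix - rho_water) / (rho_solid - rho_water)

# A hopper load with a measured mixture density of 1300 kg/m3
# is about 18% sand by volume; the rest is water to be decanted.
print(solids_fraction(1300.0))  # ~0.18

This is why spilling off excess water pays: raising the mixture density carried per trip directly raises the payload fraction.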
When contaminated (toxic) sediments are to be removed, or large-volume inland disposal sites are unavailable, dredge slurries are reduced to dry solids via a process known as dewatering, to minimize the dredge plume. Current dewatering techniques employ centrifuges, geotube containers, large textile-based filters or polymer flocculant/coagulant-based apparatus. In many projects, slurry dewatering is performed in large inland settling pits, although this is becoming less and less common as mechanical dewatering techniques continue to improve. Similarly, many groups (most notably in East Asia) are performing research towards utilizing dewatered sediments for the production of concretes and construction blocks, although the high organic content (in many cases) of this material is a hindrance toward such ends. The proper management of contaminated sediments is a modern-day issue of significant concern. Because of a variety of maintenance activities, thousands of tonnes of contaminated sediment are dredged worldwide from commercial ports and other aquatic areas at high levels of industrialization. Dredged material can be reused after appropriate decontamination. A variety of processes has been proposed and tested at different scales of application (technologies for environmental remediation). Once decontaminated, the material could well suit the building industry, or could be used for beach nourishment. Environmental impacts Dredging can disturb aquatic ecosystems, often with adverse impacts. In addition, dredge spoils may contain toxic chemicals that may have an adverse effect on the disposal area; furthermore, the process of dredging often dislodges chemicals residing in benthic substrates and injects them into the water column, where they become toxic dredge plumes. Dredging can have numerous significant impacts on the environment, including the following: Release of toxic chemicals (including heavy metals and PCBs) from bottom sediments into the water column. Short-term increases in turbidity, which can affect aquatic species' metabolism and interfere with spawning. Suction dredging activity is allowed only during non-spawning time frames set by fish and game authorities (in-water work periods). Secondary impacts to marsh productivity from sedimentation and general changes in wetland chemistry after dredging. Tertiary impacts to avifauna which may prey upon contaminated aquatic organisms. Secondary impacts to aquatic and benthic organisms' metabolism and mortality. Possible contamination of dredge spoil sites. Changes to the topography by creating "spoil islands" from the accumulated spoil. Release of the toxic compound tributyltin, a biocide formerly used in anti-fouling paint (banned in 2008), into the water. The nature of dredging operations and their possible environmental impacts means that the activity is often closely regulated, requiring comprehensive regional environmental impact assessments alongside continuous monitoring. For example, in the U.S., the Clean Water Act requires that any discharge of dredged or fill materials into "waters of the United States," including wetlands, is forbidden unless authorized by a permit issued by the Army Corps of Engineers. Due to potential environmental impacts, dredging is often restricted to licensed areas, with vessel activity monitored closely using automatic GPS systems. 
Major dredging companies According to a Rabobank outlook report in 2013, the largest dredging companies in the world, in order of size based on dredging sales in 2012, were:
China Harbour Engineering (China)
Jan De Nul (Belgium)
DEME (Belgium)
Royal Boskalis Westminster (Netherlands)
Van Oord Dredging and Marine Contractors (Netherlands)
National Marine Dredging Company (United Arab Emirates)
Great Lakes Dredge & Dock Company (United States)
Notable dredging companies in North America: Manson Construction Co. (United States). Notable dredging companies in South Asia: Dredging Corporation of India, Adani Ports & SEZ (India), and Maldives Transport and Contracting Company (Maldives). See also Chemistry of wetland dredging Dredge ball joint, a connection between two pipes used to transport a mixture of water and sand from a dredger to the discharging area Dredge drag head Dumping of dredged sediment in Long Island Sound Navigability Peace in Africa (ship), a diamond-mining dredger Queen of the Netherlands (ship), a big dredger River engineering, for inland dredging and other river management systems WT Preston, a snagboat Foreign Dredge Act of 1906, a US law banning foreign-built and foreign-owned dredges from operating in US waters Gold dredge References External links Directory of Dredgers (a close to exhaustive private collection of dredger photographs) Dredging and Spoil Disposal Policy (from the Australian Government) The Art of Dredging (Knowledge sharing) International Association of Dredging Companies Knowledge centre World of Boats (EISCA) Collection ~ Isambard Kingdom Brunel's Bertha Engineering vehicles Coastal construction Nautical terminology Coastal engineering
Dredging
[ "Engineering" ]
4,708
[ "Coastal engineering", "Construction", "Coastal construction", "Civil engineering", "Engineering vehicles" ]
948,014
https://en.wikipedia.org/wiki/Order%20and%20disorder
In physics, the terms order and disorder designate the presence or absence of some symmetry or correlation in a many-particle system. In condensed matter physics, systems typically are ordered at low temperatures; upon heating, they undergo one or several phase transitions into less ordered states. Examples of such order-disorder transitions are: the melting of ice: solid–liquid transition, loss of crystalline order; the demagnetization of iron by heating above the Curie temperature: ferromagnetic–paramagnetic transition, loss of magnetic order. The degree of freedom that is ordered or disordered can be translational (crystalline ordering), rotational (ferroelectric ordering), or a spin state (magnetic ordering). The order can consist either in a full crystalline space group symmetry, or in a correlation. Depending on how the correlations decay with distance, one speaks of long-range order or short-range order. If a disordered state is not in thermodynamic equilibrium, one speaks of quenched disorder. For instance, a glass is obtained by quenching (supercooling) a liquid. By extension, other quenched states are called spin glass or orientational glass. In some contexts, the opposite of quenched disorder is annealed disorder. Characterizing order Lattice periodicity and X-ray crystallinity The strictest form of order in a solid is lattice periodicity: a certain pattern (the arrangement of atoms in a unit cell) is repeated again and again to form a translationally invariant tiling of space. This is the defining property of a crystal. Possible symmetries have been classified in 14 Bravais lattices and 230 space groups. Lattice periodicity implies long-range order: if only one unit cell is known, then by virtue of the translational symmetry it is possible to accurately predict all atomic positions at arbitrary distances. During much of the 20th century, the converse was also taken for granted – until the discovery of quasicrystals in 1982 showed that there are perfectly deterministic tilings that do not possess lattice periodicity. Besides structural order, one may consider charge ordering, spin ordering, magnetic ordering, and compositional ordering. Magnetic ordering is observable in neutron diffraction. It is a thermodynamic entropy concept often displayed by a second-order phase transition. Generally speaking, high thermal energy is associated with disorder and low thermal energy with ordering, although there have been violations of this. Ordering peaks become apparent in diffraction experiments at low energy. Long-range order Long-range order characterizes physical systems in which remote portions of the same sample exhibit correlated behavior. This can be expressed as a correlation function, namely the spin-spin correlation function $G(x, x') = \langle s(x)\, s(x') \rangle$, where s is the spin quantum number and x is the distance function within the particular system. This function is equal to unity when $x = x'$ and decreases as the distance $|x - x'|$ increases. Typically, it decays exponentially to zero at large distances, and the system is considered to be disordered. But if the correlation function decays to a constant value at large $|x - x'|$, then the system is said to possess long-range order. If it decays to zero as a power of the distance, then it is called quasi-long-range order (for details see Chapter 11 in the textbook cited below. See also Berezinskii–Kosterlitz–Thouless transition). Note that what constitutes a large value of $|x - x'|$ is understood in the sense of asymptotics. 
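The three asymptotic behaviors just described can be summarized in one place. A short worked sketch (the correlation length $\xi$, constant $C$, and exponent $\eta$ are generic symbols introduced here for illustration, not taken from the text above):

$$G(x, x') \;\xrightarrow{\;|x - x'| \to \infty\;}\; \begin{cases} e^{-|x-x'|/\xi} & \text{short-range order (disordered)} \\ C > 0 & \text{long-range order} \\ |x-x'|^{-\eta} & \text{quasi-long-range order} \end{cases}$$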
Quenched disorder In statistical physics, a system is said to present quenched disorder when some parameters defining its behavior are random variables which do not evolve with time. These parameters are said to be quenched or frozen. Spin glasses are a typical example. Quenched disorder is contrasted with annealed disorder, in which the parameters are allowed to evolve themselves. Mathematically, quenched disorder is more difficult to analyze than its annealed counterpart, as averages over thermal noise and quenched disorder play distinct roles. Few techniques to approach each are known, most of which rely on approximations. Common techniques used to analyze systems with quenched disorder include the replica trick, based on analytic continuation, and the cavity method, where a system's response to the perturbation due to an added constituent is analyzed. While these methods yield results agreeing with experiments in many systems, the procedures have not been formally mathematically justified. Recently, rigorous methods have shown that in the Sherrington-Kirkpatrick model, an archetypal spin glass model, the replica-based solution is exact. The generating functional formalism, which relies on the computation of path integrals, is a fully exact method but is more difficult to apply than the replica or cavity procedures in practice. Annealed disorder A system is said to present annealed disorder when some parameters entering its definition are random variables, but whose evolution is related to that of the degrees of freedom defining the system. It is defined in opposition to quenched disorder, where the random variables may not change their values. Systems with annealed disorder are usually considered to be easier to deal with mathematically, since the average over the disorder and the thermal average may be treated on the same footing. See also In high energy physics, the formation of the chiral condensate in quantum chromodynamics is an ordering transition; it is discussed in terms of superselection. Entropy Topological order Impurity superstructure (physics) Further reading H Kleinert: Gauge Fields in Condensed Matter (, 2 volumes) Singapore: World Scientific (1989). References Statistical mechanics Crystallography
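As a concrete illustration of the replica trick mentioned above: the quenched average of the free energy requires $\overline{\ln Z}$ rather than $\ln \overline{Z}$, and the standard identity (a sketch, with the overbar denoting the disorder average) reads

$$\overline{\ln Z} \;=\; \lim_{n \to 0} \frac{\overline{Z^n} - 1}{n},$$

so one computes the moments $\overline{Z^n}$ for integer $n$ ($n$ coupled "replicas" of the system) and then analytically continues to $n \to 0$.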
Order and disorder
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,135
[ "Crystallography", "Statistical mechanics", "Condensed matter physics", "Materials science" ]
948,580
https://en.wikipedia.org/wiki/Magnetic%20hysteresis
Magnetic hysteresis occurs when an external magnetic field is applied to a ferromagnet such as iron and the atomic dipoles align themselves with it. Even when the field is removed, part of the alignment will be retained: the material has become magnetized. Once magnetized, the magnet will stay magnetized indefinitely. To demagnetize it requires heat or a magnetic field in the opposite direction. This is the effect that provides the element of memory in a hard disk drive. The relationship between field strength $H$ and magnetization $M$ is not linear in such materials. If a magnet is demagnetized ($H = M = 0$) and the relationship between $H$ and $M$ is plotted for increasing levels of field strength, $M$ follows the initial magnetization curve. This curve increases rapidly at first and then approaches an asymptote called magnetic saturation. If the magnetic field is now reduced monotonically, $M$ follows a different curve. At zero field strength, the magnetization is offset from the origin by an amount called the remanence. If the relationship is plotted for all strengths of applied magnetic field, the result is a hysteresis loop called the main loop. The width of the middle section along the $H$ axis is twice the coercivity of the material. A closer look at a magnetization curve generally reveals a series of small, random jumps in magnetization called Barkhausen jumps. This effect is due to crystallographic defects such as dislocations. Magnetic hysteresis loops are not exclusive to materials with ferromagnetic ordering. Other magnetic orderings, such as spin glass ordering, also exhibit this phenomenon. Physical origin The phenomenon of hysteresis in ferromagnetic materials is the result of two effects: rotation of magnetization and changes in size or number of magnetic domains. In general, the magnetization varies (in direction but not magnitude) across a magnet, but in sufficiently small magnets, it doesn't. In these single-domain magnets, the magnetization responds to a magnetic field by rotating. Single-domain magnets are used wherever a strong, stable magnetization is needed (for example, magnetic recording). Larger magnets are divided into regions called domains. Within each domain, the magnetization does not vary; but between domains are relatively thin domain walls in which the direction of magnetization rotates from the direction of one domain to another. If the magnetic field changes, the walls move, changing the relative sizes of the domains. Because the domains are not magnetized in the same direction, the magnetic moment per unit volume is smaller than it would be in a single-domain magnet; but domain walls involve rotation of only a small part of the magnetization, so it is much easier to change the magnetic moment. The magnetization can also change by addition or subtraction of domains (called nucleation and denucleation). Measurement Magnetic hysteresis can be characterized in various ways. In general, the magnetic material is placed in a varying applied $H$ field, as induced by an electromagnet, and the resulting magnetic flux density ($B$ field) is measured, generally by the inductive electromotive force introduced on a pickup coil near the sample. This produces the characteristic $B$–$H$ curve; because the hysteresis indicates a memory effect of the magnetic material, the shape of the $B$–$H$ curve depends on the history of changes in $H$. Alternatively, the hysteresis can be plotted as magnetization $M$ in place of $B$, giving an $M$–$H$ curve. These two curves are directly related since $B = \mu_0 (H + M)$. 
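As a rough numerical illustration of the main-loop quantities above (saturation, remanence, coercivity), here is a minimal sketch using tanh-shaped branches; the functional form and the parameters Ms, Hc, and w are made up purely for illustration and are not a physical model of any material:

```python
import numpy as np

# Illustrative toy loop: two saturation-shaped branches shifted by +/- Hc.
Ms, Hc, w = 1.0, 0.3, 0.2                   # saturation, coercivity, branch width

H = np.linspace(-2.0, 2.0, 401)
M_descending = Ms * np.tanh((H + Hc) / w)   # traced while H is being reduced
M_ascending  = Ms * np.tanh((H - Hc) / w)   # traced while H is being raised

remanence = Ms * np.tanh(Hc / w)            # M at H = 0 on the descending branch
print(f"remanence ~ {remanence:.3f}, loop width along H = {2 * Hc}")
```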
The measurement may be closed-circuit or open-circuit, according to how the magnetic material is placed in a magnetic circuit. In open-circuit measurement techniques (such as a vibrating-sample magnetometer), the sample is suspended in free space between the two poles of an electromagnet. Because of this, a demagnetizing field develops, and the field internal to the magnetic material is different from the applied field $H$. The normal $B$–$H$ curve can be obtained after the demagnetizing effect is corrected. In closed-circuit measurements (such as with a hysteresisgraph), the flat faces of the sample are pressed directly against the poles of the electromagnet. Since the pole faces are highly permeable, this removes the demagnetizing field, and so the internal field is equal to the applied field. With hard magnetic materials (such as sintered neodymium magnets), the detailed microscopic process of magnetization reversal depends on whether the magnet is in an open-circuit or closed-circuit configuration, since the magnetic medium around the magnet influences the interactions between domains in a way that cannot be fully captured by a simple demagnetization factor. Models The best-known empirical models of hysteresis are the Preisach and Jiles-Atherton models. These models allow accurate modeling of the hysteresis loop and are widely used in industry. However, these models lose the connection with thermodynamics, and energy consistency is not ensured. A more recent model, with a more consistent thermodynamic foundation, is the vectorial incremental nonconservative consistent hysteresis (VINCH) model of Lavet et al. (2011). It is inspired by the kinematic hardening laws and by the thermodynamics of irreversible processes. In particular, in addition to providing accurate modeling, the stored magnetic energy and the dissipated energy are known at all times. The obtained incremental formulation is variationally consistent, i.e., all internal variables follow from the minimization of a thermodynamic potential. That allows a vectorial model to be obtained easily, whereas the Preisach and Jiles-Atherton models are fundamentally scalar. The Stoner–Wohlfarth model is a physical model explaining hysteresis in terms of anisotropic response ("easy" / "hard" axes of each crystalline grain). Micromagnetics simulations attempt to capture and explain in detail the space and time aspects of interacting magnetic domains, often based on the Landau-Lifshitz-Gilbert equation. Toy models such as the Ising model can help explain qualitative and thermodynamic aspects of hysteresis (such as the Curie point phase transition to paramagnetic behaviour), though they are not used to describe real magnets. Applications There is a great variety of applications of the theory of hysteresis in magnetic materials. Many of these make use of their ability to retain a memory, for example magnetic tape, hard disks, and credit cards. In these applications, hard magnets (high coercivity) such as iron oxide are desirable so the memory is not easily erased. Soft magnets (low coercivity) are used as cores in transformers and electromagnets. The response of the magnetic moment to a magnetic field boosts the response of the coil wrapped around it. Low coercivity reduces the energy loss associated with hysteresis. Magnetic hysteresis material (soft nickel-iron rods) has been used in damping the angular motion of satellites in low Earth orbit since the dawn of the space age. See also Degaussing References Magnetostatics Magnetic hysteresis Electromagnetism Physical quantities
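As a supplementary note to the open-circuit correction discussed above, here is a minimal sketch, assuming a uniformly magnetized ellipsoidal sample with a scalar demagnetizing factor N (the numbers are illustrative; real corrections use the sample's actual geometry):

```python
import numpy as np

def internal_field(H_applied, M, N=1.0 / 3.0):
    """H_int = H_app - N * M; N = 1/3 corresponds to a sphere (SI units)."""
    return np.asarray(H_applied) - N * np.asarray(M)

# Example: shift one measured (H, M) point back to the internal field (A/m)
print(internal_field(50_000.0, 90_000.0))   # -> 20000.0 for a spherical sample
```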
Magnetic hysteresis
[ "Physics", "Materials_science", "Mathematics" ]
1,488
[ "Physical phenomena", "Electromagnetism", "Physical quantities", "Quantity", "Physical properties", "Fundamental interactions", "Hysteresis", "Magnetic hysteresis" ]
948,622
https://en.wikipedia.org/wiki/57-cell
In mathematics, the 57-cell (pentacontaheptachoron) is a self-dual abstract regular 4-polytope (four-dimensional polytope). Its 57 cells are hemi-dodecahedra. It also has 57 vertices, 171 edges and 171 two-dimensional faces. The symmetry order is 3420, from the product of the number of cells (57) and the symmetry of each cell (60). The symmetry abstract structure is the projective special linear group of the 2-dimensional vector space over the finite field of 19 elements, L2(19). It has Schläfli type {5,3,5} with 5 hemi-dodecahedral cells around each edge. It was discovered by Coxeter (1982). Perkel graph The vertices and edges form the Perkel graph, the unique distance-regular graph with intersection array {6,5,2;1,1,3}, discovered by Perkel (1979). See also 11-cell – abstract regular polytope with hemi-icosahedral cells. 120-cell – regular 4-polytope with dodecahedral cells Order-5 dodecahedral honeycomb - regular hyperbolic honeycomb with same Schläfli type, {5,3,5}. (The 57-cell can be considered as being derived from it by identification of appropriate elements.) References The Classification of Rank 4 Locally Projective Polytopes and Their Quotients, 2003, Michael I Hartley External links Siggraph 2007: 11-cell and 57-cell by Carlo Sequin Perkel graph Regular 4-polytopes
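The symmetry order quoted above can be checked directly from the order formula for the projective special linear groups (a short worked calculation):

$$|\mathrm{PSL}(2, q)| = \frac{q(q^2 - 1)}{\gcd(2,\, q - 1)} \qquad\Rightarrow\qquad |\mathrm{L}_2(19)| = \frac{19 \cdot (19^2 - 1)}{2} = \frac{19 \cdot 360}{2} = 3420 = 57 \times 60.$$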
57-cell
[ "Mathematics" ]
333
[ "Geometry", "Geometry stubs" ]
2,094,963
https://en.wikipedia.org/wiki/Air%20hammer%20%28fabrication%29
An air hammer, also known as an air chisel, is a pneumatic hand tool used to carve in stone, and to break or cut metal objects apart. It is designed to accept different tools depending on the required function. Tools The following are various tools that can be used in an air hammer: Universal joint and tie-rod tool Used to separate universal joints and tie-rod ends. Ball joint separator Used to separate ball joints. Shock absorber chisel Used to break loose shock absorber nuts. Exhaust pipe cutter Used to cut through exhaust pipe for disassembly. Tapered punch A general tool that can be used to free frozen nuts, insert pins, and align holes. Rubber bushing splitter Used to remove rubber bushings. Free-standing style Free-standing air hammers are an adaptation of the hand-held version. An air hammer can stretch or shrink (shape) a variety of metals, from thin aircraft aluminums all the way down to 10-gauge steel. They are also used for smoothing metal that has already been roughed, shaped or formed. History In the 1920s, two pneumatic devices were invented that would permanently change the way metal and stone were hammered. The pneumatic rivet gun was originally developed to set hot rivets on girder bridges and high steel buildings. This tool was later scaled down for sheet metal, as the 1930s saw the advent of monocoque aluminum aircraft. The other new device, hitting at two or three times the speed of the rivet gun, was the stone carver's hammer – a great blessing for smooth and rapid dressing of granite and marble. In 1930 F.J. Hauschild adapted the original stone carver's hammer into a portable hand-held steel tube frame for the purpose of straightening auto bodies. For the next 25 years his "Ram's Head Body and Fender Machine" improved and increased production for auto body workmen all over the U.S. Copying Hauschild’s patented design, a pneumatic tool company in Chicago marketed a number of "destined-to-be-classic" pneumatic planishing hammers, both hand-held for auto body work, and also free-standing ones, with a variety of throat depths for industry and manufacturing. By World War II, rivet guns were used widely in U.S. aircraft factories both for riveting aluminum sheets, and for flow forming, the process of working aluminum sheet into and over wooden forms by the application of the pneumatic rivet gun. Post-war industry brought many new applications for the "air hammer" technology. Among these were: sand rammers and tampers for sand casting metal plating rack scalers weld chippers destruction guns for cleaning up concrete needle scalers pavement breakers metal chisels. Each of these tools has a different purpose despite nearly identical appearance in many cases. References External links Air Power Hammer shrinking 14-gauge steel demonstrated Hand tools Pneumatic tools
Air hammer (fabrication)
[ "Engineering" ]
613
[ "Human–machine interaction", "Hand tools" ]
2,095,183
https://en.wikipedia.org/wiki/Trapped-ion%20quantum%20computer
A trapped-ion quantum computer is one proposed approach to a large-scale quantum computer. Ions, or charged atomic particles, can be confined and suspended in free space using electromagnetic fields. Qubits are stored in stable electronic states of each ion, and quantum information can be transferred through the collective quantized motion of the ions in a shared trap (interacting through the Coulomb force). Lasers are applied to induce coupling between the qubit states (for single qubit operations) or coupling between the internal qubit states and the external motional states (for entanglement between qubits). The fundamental operations of a quantum computer have been demonstrated experimentally with the currently highest accuracy in trapped-ion systems. Promising schemes in development to scale the system to arbitrarily large numbers of qubits include transporting ions to spatially distinct locations in an array of ion traps, building large entangled states via photonically connected networks of remotely entangled ion chains, and combinations of these two ideas. This makes the trapped-ion quantum computer system one of the most promising architectures for a scalable, universal quantum computer. As of December 2023, the largest number of particles to be controllably entangled is 32 trapped ions. History The first implementation scheme for a controlled-NOT quantum gate was proposed by Ignacio Cirac and Peter Zoller in 1995, specifically for the trapped-ion system. The same year, a key step in the controlled-NOT gate was experimentally realized at the NIST Ion Storage Group, and research in quantum computing began to take off worldwide. In 2021, researchers from the University of Innsbruck presented a quantum computing demonstrator that fits inside two 19-inch server racks, the world's first compact trapped-ion quantum computer to meet quality standards. Paul trap The electrodynamic quadrupole ion trap currently used in trapped-ion quantum computing research was invented in the 1950s by Wolfgang Paul (who received the Nobel Prize for his work in 1989). Charged particles cannot be trapped in 3D by just electrostatic forces because of Earnshaw's theorem. Instead, an electric field oscillating at radio frequency (RF) is applied, forming a potential with the shape of a saddle spinning at the RF frequency. If the RF field has the right parameters (oscillation frequency and field strength), the charged particle becomes effectively trapped at the saddle point by a restoring force, with the motion described by a set of Mathieu equations. This saddle point is the point of minimized energy magnitude for the ions in the potential field. The Paul trap is often described as a harmonic potential well that traps ions in two dimensions (assume $x$ and $y$ without loss of generality) and does not trap ions in the $z$ direction. When multiple ions are at the saddle point and the system is at equilibrium, the ions are only free to move in $z$. Therefore, the ions will repel each other and create a vertical configuration in $z$, the simplest case being a linear strand of only a few ions. Coulomb interactions of increasing complexity will create a more intricate ion configuration if many ions are initialized in the same trap. Furthermore, the additional vibrations of the added ions greatly complicate the quantum system, which makes initialization and computation more difficult. Once trapped, the ions should be cooled such that $\eta^2 (2n + 1) \ll 1$, where $\eta$ is the Lamb-Dicke parameter and $n$ the motional quantum number (see Lamb Dicke regime). 
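For reference, the ion motion mentioned above obeys Mathieu equations; a standard dimensionless form is sketched below (the stability parameters $a_u$ and $q_u$ encode trap geometry, drive voltage, and drive frequency, and conventions vary across the literature):

$$\frac{d^2 u}{d\xi^2} + \left(a_u - 2 q_u \cos 2\xi\right) u = 0, \qquad u \in \{x, y\}, \qquad \xi = \frac{\Omega_{\mathrm{RF}}\, t}{2},$$

with stable (bounded) trajectories obtained only in certain regions of the $(a_u, q_u)$ plane.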
Such cooling can be achieved by a combination of Doppler cooling and resolved sideband cooling. At this very low temperature, vibrational energy in the ion trap is quantized into phonons by the energy eigenstates of the ion strand, which are called the center-of-mass vibrational modes. A single phonon's energy is given by the relation $E = \hbar\omega$, where $\omega$ is the frequency of the vibrational mode. These quantum states occur when the trapped ions vibrate together and are completely isolated from the external environment. If the ions are not properly isolated, noise can result from ions interacting with external electromagnetic fields, which creates random movement and destroys the quantized energy states. Requirements for quantum computation The full requirements for a functional quantum computer are not entirely known, but there are many generally accepted requirements. David DiVincenzo outlined several of these criteria for quantum computing. Qubits Any two-level quantum system can form a qubit, and there are two predominant ways to form a qubit using the electronic states of an ion: Two ground state hyperfine levels (these are called "hyperfine qubits") A ground state level and an excited level (these are called the "optical qubits") Hyperfine qubits are extremely long-lived (decay time of the order of thousands to millions of years) and phase/frequency stable (traditionally used for atomic frequency standards). Optical qubits are also relatively long-lived (with a decay time of the order of a second), compared to the logic gate operation time (which is of the order of microseconds). The use of each type of qubit poses its own distinct challenges in the laboratory. Initialization Ionic qubit states can be prepared in a specific qubit state using a process called optical pumping. In this process, a laser couples the ion to some excited states which eventually decay to one state which is not coupled to the laser. Once the ion reaches that state, it has no excited levels to couple to in the presence of that laser and, therefore, remains in that state. If the ion decays to one of the other states, the laser will continue to excite the ion until it decays to the state that does not interact with the laser. This initialization process is standard in many physics experiments and can be performed with extremely high fidelity (>99.9%). The system's initial state for quantum computation can therefore be described by the ions in their hyperfine and motional ground states, resulting in an initial center-of-mass phonon state of $|0\rangle$ (zero phonons). Measurement Measuring the state of the qubit stored in an ion is quite simple. Typically, a laser is applied to the ion that couples to only one of the qubit states. When the ion collapses into this state during the measurement process, the laser will excite it, resulting in a photon being released when the ion decays from the excited state. After decay, the ion is continually excited by the laser and repeatedly emits photons. These photons can be collected by a photomultiplier tube (PMT) or a charge-coupled device (CCD) camera. If the ion collapses into the other qubit state, then it does not interact with the laser and no photon is emitted. By counting the number of collected photons, the state of the ion may be determined with a very high accuracy (>99.99%). Arbitrary single qubit rotation One of the requirements of universal quantum computing is to coherently change the state of a single qubit. For example, this can transform a qubit starting out in $|0\rangle$ into any arbitrary superposition of $|0\rangle$ and $|1\rangle$ defined by the user. 
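As a concrete sketch of such a rotation at the gate level (the matrix convention below is one common choice, not the unique one; in the laboratory, $\theta$ is set by the pulse duration and $\phi$ by the optical phase):

```python
import numpy as np

# Single-qubit rotation R(theta, phi) = exp(-i*(theta/2)*(cos(phi)X + sin(phi)Y))
def rotation(theta, phi):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * np.exp(-1j * phi) * s],
                     [-1j * np.exp(1j * phi) * s, c]])

ket0 = np.array([1.0, 0.0])
state = rotation(np.pi / 2, 0.0) @ ket0   # a pi/2 pulse applied to |0>
print(np.abs(state) ** 2)                 # -> [0.5 0.5], an equal superposition
```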
In a trapped-ion system, such rotations are often implemented using magnetic dipole transitions or stimulated Raman transitions for hyperfine qubits and electric quadrupole transitions for optical qubits. The term "rotation" alludes to the Bloch sphere representation of a qubit pure state. Gate fidelity can be greater than 99%. The rotation operators $R_x(\theta)$ and $R_y(\theta)$ can be applied to individual ions by manipulating the frequency of an external electromagnetic field and exposing the ions to the field for specific amounts of time. These controls create a Hamiltonian of the form $H_I = \frac{\hbar\Omega}{2} \left( S_+ e^{i\phi} + S_- e^{-i\phi} \right)$. Here, $S_+$ and $S_-$ are the raising and lowering operators of spin (see Ladder operator). These rotations are the universal building blocks for single-qubit gates in quantum computing. To obtain the Hamiltonian for the ion-laser interaction, apply the Jaynes–Cummings model. Once the Hamiltonian is found, the formula for the unitary operation performed on the qubit can be derived using the principles of quantum time evolution. Although this model utilizes the rotating wave approximation, it proves to be effective for the purposes of trapped-ion quantum computing. Two qubit entangling gates Besides the controlled-NOT gate proposed by Cirac and Zoller in 1995, many equivalent, but more robust, schemes have been proposed and implemented experimentally since. Recent theoretical work by J. J. García-Ripoll, Cirac, and Zoller has shown that there are no fundamental limitations to the speed of entangling gates, but gates in this impulsive regime (faster than 1 microsecond) have not yet been demonstrated experimentally. The fidelity of these implementations has been greater than 99%. Scalable trap designs Quantum computers must be capable of initializing, storing, and manipulating many qubits at once in order to solve difficult computational problems. However, as previously discussed, only a finite number of qubits can be stored in each trap while still maintaining their computational abilities. It is therefore necessary to design interconnected ion traps that are capable of transferring information from one trap to another. Ions can be separated from the same interaction region to individual storage regions and brought back together without losing the quantum information stored in their internal states. Ions can also be made to turn corners at a "T" junction, allowing a two-dimensional trap array design. Semiconductor fabrication techniques have also been employed to manufacture the new generation of traps, making the 'ion trap on a chip' a reality. An example is the quantum charge-coupled device (QCCD) designed by D. Kielpinski, Christopher Monroe and David J. Wineland. QCCDs resemble mazes of electrodes with designated areas for storing and manipulating qubits. The variable electric potential created by the electrodes can both trap ions in specific regions and move them through the transport channels, which negates the necessity of containing all ions in a single trap. Ions in the QCCD's memory region are isolated from any operations and therefore the information contained in their states is kept for later use. Gates, including those that entangle two ion states, are applied to qubits in the interaction region by the method already described in this article. Decoherence in scalable traps When an ion is being transported between regions in an interconnected trap and is subjected to a nonuniform magnetic field, decoherence can occur in the form of the equation below (see Zeeman effect). This effectively changes the relative phase of the quantum state. 
$$\frac{1}{\sqrt{2}} \left( |{\uparrow}\rangle + |{\downarrow}\rangle \right) \;\rightarrow\; \frac{1}{\sqrt{2}} \left( |{\uparrow}\rangle + e^{i\alpha} |{\downarrow}\rangle \right)$$ The up and down arrows correspond to a general superposition qubit state, in this case the ground and excited states of the ion. Additional relative phases could arise from physical movements of the trap or the presence of unintended electric fields. If the user could determine the parameter α, accounting for this decoherence would be relatively simple, as known quantum information processes exist for correcting a relative phase. However, since α from the interaction with the magnetic field is path-dependent, the problem is highly complex. Considering the multiple ways that decoherence of a relative phase can be introduced in an ion trap, reimagining the ion state in a new basis that minimizes decoherence could be a way to eliminate the issue. One way to combat decoherence is to represent the quantum state in a new basis called the decoherence-free subspace (DFS), with basis states $|{\uparrow\downarrow}\rangle$ and $|{\downarrow\uparrow}\rangle$. The DFS is actually the subspace of two-ion states such that, if both ions acquire the same relative phase, the total quantum state in the DFS will be unaffected. Challenges Trapped-ion quantum computers theoretically meet all of DiVincenzo's criteria for quantum computing, but implementation of the system can be quite difficult. The main challenges facing trapped-ion quantum computing are the initialization of the ion's motional states, and the relatively brief lifetimes of the phonon states. Decoherence also proves to be challenging to eliminate, and is caused when the qubits interact with the external environment undesirably. CNOT gate implementation The controlled NOT gate is a crucial component for quantum computing, as any quantum gate can be created by a combination of CNOT gates and single-qubit rotations. It is therefore important that a trapped-ion quantum computer can perform this operation by meeting the following three requirements. First, the trapped-ion quantum computer must be able to perform arbitrary rotations on qubits, which are already discussed in the "arbitrary single-qubit rotation" section. The next component of a CNOT gate is the controlled phase-flip gate, or controlled-Z gate (see quantum logic gate). In a trapped-ion quantum computer, the state of the center-of-mass phonon functions as the control qubit, and the internal atomic spin state of the ion is the working qubit. The phase of the working qubit will therefore be flipped if the phonon qubit is in the state $|1\rangle$. Lastly, a SWAP gate must be implemented, acting on both the ion state and the phonon state. Two alternate schemes to represent the CNOT gates are presented in Michael Nielsen and Isaac Chuang's Quantum Computation and Quantum Information and Cirac and Zoller's Quantum Computation with Cold Trapped Ions. References Additional resources Trapped ion computer on arxiv.org Quantum information science Quantum optics
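As a brief worked check of the decoherence-free encoding discussed above (assuming both ions acquire the same phase $e^{i\alpha}$ on their $|{\downarrow}\rangle$ component): an encoded superposition transforms as

$$a\,|{\uparrow\downarrow}\rangle + b\,|{\downarrow\uparrow}\rangle \;\rightarrow\; e^{i\alpha} \left( a\,|{\uparrow\downarrow}\rangle + b\,|{\downarrow\uparrow}\rangle \right),$$

since each basis state contains exactly one $|{\downarrow}\rangle$; the collective dephasing contributes only an unobservable global phase.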
Trapped-ion quantum computer
[ "Physics" ]
2,711
[ "Quantum optics", "Quantum mechanics" ]
2,098,622
https://en.wikipedia.org/wiki/Internal%20conversion%20coefficient
In nuclear physics, the internal conversion coefficient describes the rate of internal conversion. The internal conversion coefficient may be empirically determined by the following formula: $$\alpha = \frac{\text{number of de-excitations via electron emission}}{\text{number of de-excitations via gamma-ray emission}}$$ There is no valid formulation for an equivalent concept for E0 (electric monopole) nuclear transitions. There are theoretical calculations that can be used to derive internal conversion coefficients. Their accuracy is not generally under dispute, but since the quantum mechanical models they depend on take into account only electromagnetic interactions between the nucleus and electrons, there may be unforeseen effects. Internal conversion coefficients can be looked up from tables, but this is time-consuming. Computer programs have been developed (see the BrIcc Program) which present internal conversion coefficients quickly and easily. Theoretical calculations of interest are the Rösel, the Hager-Seltzer, and the Band, the last superseded by the Band-Raman calculation called BrIcc. The Hager-Seltzer calculations omit the M and higher-energy shells on the grounds (usually valid) that those orbitals have little electron density at the nucleus and can be neglected. To first approximation this assumption is valid, upon comparing several internal conversion coefficients for different isotopes for transitions of about 100 keV. The Band and Band-Raman calculations assume that the M shell may contribute to internal conversion to a non-negligible extent, and incorporate a general term (called "N+") which takes into account the small effect of any higher shells there may be, while the Rösel calculation works like the Band, but does not assume that all shells contribute and so generally terminates at the N shell. Additionally, the Band-Raman calculation can either consider ("frozen orbitals") or neglect ("no hole") the effect of the electron vacancy; the frozen-orbitals approximation is considered generally superior. References F. Rösel, H.M. Fries, K. Alder, H.C. Pauli: At. Data Nucl. Data Tables 21 (1978) 91. R.S. Hager and E.C. Seltzer, Nucl. Data Tables A4 (1968) 1. I.M. Band, M.B. Trzhaskovskaya: Tables of the gamma–ray internal conversion coefficients for the K, L, M shells, 10<Z<104 (Leningrad: Nuclear Physics Institute, 1978). T. Kibédi, T.W. Burrows, M.B. Trzhaskovskaya, P.M. Davidson, C.W. Nestor, Jr. Evaluation of theoretical conversion coefficients using BrIcc, Nucl. Instr. and Meth. A 589 (2008) 202-229. http://www-nds.iaea.org/nsdd/presentations%202011/Wednesday/BrIcc_NSDD2011.pdf or see http://bricc.anu.edu.au/bricc-datatables.php External links Nuclear Structure and Decay Data - IAEA with query on Conversion Coefficients Nuclear physics
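A small usage sketch of the coefficient defined above (the function and numbers are illustrative, not taken from any table): since $\alpha = N_e / N_\gamma$, the electron and gamma branching fractions of a transition follow directly from $\alpha$.

```python
def branching_fractions(alpha):
    """Return (electron fraction, gamma fraction) given conversion coefficient alpha."""
    gamma_fraction = 1.0 / (1.0 + alpha)      # N_gamma / (N_gamma + N_e)
    return 1.0 - gamma_fraction, gamma_fraction

# e.g. a hypothetical transition with alpha = 0.25
e_frac, g_frac = branching_fractions(0.25)
print(f"electrons: {e_frac:.0%}, gammas: {g_frac:.0%}")   # electrons: 20%, gammas: 80%
```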
Internal conversion coefficient
[ "Physics" ]
624
[ "Nuclear physics" ]
2,098,714
https://en.wikipedia.org/wiki/Polar%20mesospheric%20summer%20echoes
Polar mesospheric summer echoes (PMSE) is the phenomenon of anomalous radar echoes found between 80 and 90 km in altitude from May through early August in the Arctic, and from November through to February in the Antarctic. These strong radar echoes are associated with the extremely cold temperatures that occur near the polar summer mesopause. Rocket and radar measurements indicate that a partial reflection from a multitude of ion layers and constructive interference causes at least some of the PMSE. Generally, PMSE exhibits dramatic variations in height and intensity as well as large variations in Doppler shift. PMSE exhibit strong signal power enhancements of scattering cross section at VHF radar frequencies in the range 50 MHz to 250 MHz, at times even to over 1 GHz, that occur in summer at high latitudes. The peak PMSE height is slightly below the summer mesopause temperature minimum at 88 km, and above the noctilucent cloud (NLC) and/or polar mesospheric cloud (PMC) layer at 83–84 km. The usual instrument for observing PMSE is a VHF Mesosphere-Stratosphere-Troposphere (MST) radar, although LIDARs and sounding rockets have also been used. PMSE is believed to be caused by structural irregularities in the ionospheric electron density at lower altitudes. The exact cause of PMSE is not yet known, although theorists have proposed steep electron density gradients, heavy positive ions, dressed aerosols, gravity waves and turbulence as possible explanations. PMSE occurs in both the Arctic and Antarctic regions, and is sometimes accompanied by noctilucent clouds. A much less frequent phenomenon, related to PMSE, is known as Mesospheric Summer Echoes (MSE). MSE can be observed at middle latitudes, e.g., along the Baltic coast. Many years of MSE observations using VHF radars in Northern Germany show that MSE occurs less frequently because the formation mechanism requires the transport of very cold polar air by equatorward mesospheric winds. See also Ionogram References External links Polar mesosphere summer echoes (PMSE): review of observations and current understanding PMSE at EISCAT 224 MHz observations First observations of Polar Mesosphere Summer Echoes (PMSE) above Davis Antarctic PMSE Radio frequency propagation
Polar mesospheric summer echoes
[ "Physics" ]
477
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
2,099,759
https://en.wikipedia.org/wiki/Pseudopotential
In physics, a pseudopotential or effective potential is used as an approximation for the simplified description of complex systems. Applications include atomic physics and neutron scattering. The pseudopotential approximation was first introduced by Hans Hellmann in 1934. Atomic physics The pseudopotential is an attempt to replace the complicated effects of the motion of the core (i.e. non-valence) electrons of an atom and its nucleus with an effective potential, or pseudopotential, so that the Schrödinger equation contains a modified effective potential term instead of the Coulombic potential term for core electrons normally found in the Schrödinger equation. The pseudopotential is an effective potential constructed to replace the atomic all-electron potential (full-potential) such that core states are eliminated and the valence electrons are described by pseudo-wavefunctions with significantly fewer nodes. This allows the pseudo-wavefunctions to be described with far fewer Fourier modes, thus making plane-wave basis sets practical to use. In this approach usually only the chemically active valence electrons are dealt with explicitly, while the core electrons are 'frozen', being considered together with the nuclei as rigid non-polarizable ion cores. It is possible to self-consistently update the pseudopotential with the chemical environment that it is embedded in, having the effect of relaxing the frozen core approximation, although this is rarely done. In codes using local basis functions, like Gaussian, effective core potentials that freeze only the core electrons are often used. First-principles pseudopotentials are derived from an atomic reference state, requiring that the pseudo- and all-electron valence eigenstates have the same energies and amplitude (and thus density) outside a chosen core cut-off radius $r_c$. Pseudopotentials with larger cut-off radius are said to be softer, that is, more rapidly convergent, but at the same time less transferable, that is, less accurate at reproducing realistic features in different environments. Motivation: Reduction of basis set size Reduction of number of electrons Inclusion of relativistic and other effects Approximations: One-electron picture. The small-core approximation assumes that there is no significant overlap between core and valence wave-function. Nonlinear core corrections or "semicore" electron inclusion deal with situations where overlap is non-negligible. Early applications of pseudopotentials to atoms and solids based on attempts to fit atomic spectra achieved only limited success. Solid-state pseudopotentials achieved their present popularity largely because of the successful fits by Walter Harrison to the nearly free electron Fermi surface of aluminum (1958) and by James C. Phillips to the covalent energy gaps of silicon and germanium (1958). Phillips and coworkers (notably Marvin L. Cohen and coworkers) later extended this work to many other semiconductors, in what they called "semiempirical pseudopotentials". Norm-conserving pseudopotential Norm-conserving and ultrasoft are the two most common forms of pseudopotential used in modern plane-wave electronic structure codes. They allow a basis set with a significantly lower cut-off (the frequency of the highest Fourier mode) to be used to describe the electron wavefunctions and so allow proper numerical convergence with reasonable computing resources. An alternative would be to augment the basis set around nuclei with atomic-like functions, as is done in LAPW. 
Norm-conserving pseudopotentials were first proposed by Hamann, Schlüter, and Chiang (HSC) in 1979. The original HSC norm-conserving pseudopotential takes the following form: $$\hat{V} = \sum_{l} |l\rangle\, V_l(r)\, \langle l|,$$ where $|l\rangle\langle l|$ projects a one-particle wavefunction, such as one Kohn-Sham orbital, onto the angular momentum component labeled by $l$, and $V_l(r)$ is the pseudopotential that acts on the projected component. Different angular momentum states then feel different potentials; thus the HSC norm-conserving pseudopotential is non-local, in contrast to a local pseudopotential, which acts on all one-particle wave-functions in the same way. Norm-conserving pseudopotentials are constructed to enforce two conditions. 1. Inside the cut-off radius $r_c$, the norm of each pseudo-wavefunction must be identical to that of its corresponding all-electron wavefunction: $$\langle \tilde{\psi}_i | \tilde{\psi}_j \rangle_{r < r_c} = \langle \psi_i | \psi_j \rangle_{r < r_c},$$ where $\psi_i$ and $\tilde{\psi}_i$ are the all-electron and pseudo reference states for the pseudopotential on the atom. 2. All-electron and pseudo wavefunctions are identical outside the cut-off radius $r_c$. Ultrasoft pseudopotentials Ultrasoft pseudopotentials relax the norm-conserving constraint to reduce the necessary basis-set size further at the expense of introducing a generalized eigenvalue problem. With a non-zero difference in norms we can now define $$q_{ij} = \langle \psi_i | \psi_j \rangle_{r < r_c} - \langle \tilde{\psi}_i | \tilde{\psi}_j \rangle_{r < r_c},$$ and so a normalised eigenstate of the pseudo Hamiltonian now obeys the generalized equation $$\hat{H} |\tilde{\psi}\rangle = \varepsilon \hat{S} |\tilde{\psi}\rangle,$$ where the operator $\hat{S}$ is defined as $$\hat{S} = \hat{1} + \sum_{ij} q_{ij} |\tilde{p}_i\rangle \langle \tilde{p}_j|,$$ where the $|\tilde{p}_i\rangle$ are projectors that form a dual basis with the pseudo reference states inside the cut-off radius, and are zero outside: $\langle \tilde{p}_i | \tilde{\psi}_j \rangle_{r < r_c} = \delta_{ij}$. A related technique is the projector augmented wave (PAW) method. Fermi pseudopotential Enrico Fermi introduced a pseudopotential, $V(\mathbf{r})$, to describe the scattering of a free neutron by a nucleus. The scattering is assumed to be s-wave scattering, and therefore spherically symmetric. Therefore, the potential is given as a function of radius $r$: $$V(\mathbf{r}) = \frac{2\pi\hbar^2}{m}\, b\, \delta(\mathbf{r} - \mathbf{R}),$$ where $\hbar$ is the Planck constant divided by $2\pi$, $m$ is the mass, $\delta$ is the Dirac delta function, $b$ is the bound coherent neutron scattering length, and $\mathbf{R}$ the center of mass of the nucleus. The Fourier transform of this $\delta$-function leads to the constant neutron form factor. Phillips pseudopotential James Charles Phillips developed a simplified pseudopotential while at Bell Labs useful for describing silicon and germanium. See also Density functional theory Projector augmented wave method Marvin L. Cohen Alex Zunger References Pseudopotential libraries Pseudopotential Library : A community website for pseudopotentials/effective core potentials developed for high accuracy correlated many-body methods such as quantum Monte Carlo and quantum chemistry NNIN Virtual Vault for Pseudopotentials : This webpage maintained by the NNIN/C provides a searchable database of pseudopotentials for density functional codes as well as links to pseudopotential generators, converters, and other online databases. Vanderbilt Ultra-Soft Pseudopotential Site : Website of David Vanderbilt with links to codes that implement ultrasoft pseudopotentials and libraries of generated pseudopotentials. GBRV pseudopotential site : This site hosts the GBRV pseudopotential library PseudoDojo : This site collates tested pseudo potentials sorted by type, accuracy, and efficiency, shows information on convergence of various tested properties and provides download options. SSSP : Standard Solid State Pseudopotentials Further reading Computational physics Electronic structure methods Quantum mechanical potentials
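As a small numerical sketch of the norm-conservation condition above (the radial functions here are mock stand-ins chosen for illustration, not real all-electron or pseudo wavefunctions):

```python
import numpy as np

# u(r) = r * R(r): radial wavefunctions on a grid; the norm inside r_c is
# the integral of u(r)^2 dr from 0 to r_c.
r = np.linspace(1e-4, 5.0, 5001)
r_c = 2.0
u_ae = r * np.exp(-r) * (1.0 - np.exp(-3.0 * r))   # mock all-electron u(r)
u_ps = r * np.exp(-r) * np.tanh(1.5 * r)           # mock nodeless pseudo u(r)

inside = r < r_c
q_ae = np.trapz(u_ae[inside] ** 2, r[inside])
q_ps = np.trapz(u_ps[inside] ** 2, r[inside])
print(f"norm inside r_c: all-electron {q_ae:.4f} vs pseudo {q_ps:.4f}")
# A norm-conserving construction tunes the pseudo-wavefunction until these match.
```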
Pseudopotential
[ "Physics", "Chemistry" ]
1,400
[ "Quantum chemistry", "Quantum mechanics", "Computational physics", "Quantum mechanical potentials", "Electronic structure methods", "Computational chemistry" ]
2,100,199
https://en.wikipedia.org/wiki/Peptidyl%20transferase%20center
The peptidyl transferase center (EC 2.3.2.12) is an aminoacyltransferase ribozyme (RNA enzyme) located in the large subunit of the ribosome. It forms peptide bonds between adjacent amino acids during the translation process of protein biosynthesis. It is also responsible for peptidyl-tRNA hydrolysis, allowing the release of the synthesized peptide chain at the end of translation. Peptidyl transferase activity is not mediated by any ribosomal proteins, but entirely by ribosomal RNA (rRNA). The peptidyl transferase center is a significant piece of evidence supporting the RNA World hypothesis. In prokaryotes, the 50S (23S component) ribosomal subunit contains the peptidyl transferase center and acts as a ribozyme. The peptidyl transferase center on the 50S subunit lies at the lower tips (acceptor ends) of the A- and P-site tRNAs. In eukaryotes, the 60S (28S component) ribosomal subunit contains the peptidyl transferase center and acts as the ribozyme. Peptidyl transferases are not limited to translation, but there are relatively few enzymes with this function. Mechanism The substrates for the peptidyl transferase reaction are two tRNA molecules: one in the peptidyl site, bearing the growing peptide chain, and the other in the aminoacyl site, bearing the amino acid that will be added to the chain. The peptidyl chain and the incoming amino acid are attached to their respective tRNAs via ester bonds to the oxygen atom at the 3' ends of these tRNAs. The 3' ends of all tRNAs share a universally conserved CCA sequence. The alignment between the CCA ends of the ribosome-bound peptidyl tRNA and aminoacyl tRNA in the peptidyl transferase center contributes to peptide bond formation by providing the proper orientation for the reaction to occur. This reaction occurs via nucleophilic displacement. The amino group of the aminoacyl tRNA attacks the terminal carbonyl group of the peptidyl tRNA. The reaction proceeds through a tetrahedral intermediate and the loss of the P-site tRNA as a leaving group. In peptidyl-tRNA hydrolysis, the same mechanism is used, but with a water molecule as the nucleophile. Antibiotic inhibitors The following protein synthesis inhibitors target the peptidyl transferase center: Chloramphenicol binds to residues A2451 and A2452 in the 23S rRNA of the ribosome and inhibits peptide bond formation. Pleuromutilins also bind to the peptidyl transferase center. Macrolide antibiotics are thought to inhibit peptidyl transferase, in addition to inhibiting ribosomal translocation. See also Enzyme Ribozyme Transferase Translation References External links EC 2.3.2 Ribozymes Transferases
Peptidyl transferase center
[ "Chemistry" ]
624
[ "Catalysis", "Ribozymes" ]
27,832,980
https://en.wikipedia.org/wiki/Rigid%20transformation
In mathematics, a rigid transformation (also called Euclidean transformation or Euclidean isometry) is a geometric transformation of a Euclidean space that preserves the Euclidean distance between every pair of points. The rigid transformations include rotations, translations, reflections, or any sequence of these. Reflections are sometimes excluded from the definition of a rigid transformation by requiring that the transformation also preserve the handedness of objects in the Euclidean space. (A reflection would not preserve handedness; for instance, it would transform a left hand into a right hand.) To avoid ambiguity, a transformation that preserves handedness is known as a rigid motion, a Euclidean motion, or a proper rigid transformation. In dimension two, a rigid motion is either a translation or a rotation. In dimension three, every rigid motion can be decomposed as the composition of a rotation and a translation, and is thus sometimes called a rototranslation. In dimension three, all rigid motions are also screw motions (this is Chasles' theorem). In dimension at most three, any improper rigid transformation can be decomposed into an improper rotation followed by a translation, or into a sequence of reflections. Any object will keep the same shape and size after a proper rigid transformation. All rigid transformations are examples of affine transformations. The set of all (proper and improper) rigid transformations is a mathematical group called the Euclidean group, denoted $E(n)$ for $n$-dimensional Euclidean spaces. The set of rigid motions is called the special Euclidean group, and denoted $SE(n)$. In kinematics, rigid motions in a 3-dimensional Euclidean space are used to represent displacements of rigid bodies. According to Chasles' theorem, every rigid transformation can be expressed as a screw motion. Formal definition A rigid transformation is formally defined as a transformation that, when acting on any vector $\mathbf{v}$, produces a transformed vector $T(\mathbf{v})$ of the form $$T(\mathbf{v}) = R\,\mathbf{v} + \mathbf{t},$$ where $R^{\mathsf{T}} R = I$ (i.e., $R$ is an orthogonal transformation), and $\mathbf{t}$ is a vector giving the translation of the origin. A proper rigid transformation has, in addition, $$\det(R) = 1,$$ which means that $R$ does not produce a reflection, and hence it represents a rotation (an orientation-preserving orthogonal transformation). Indeed, when an orthogonal transformation matrix produces a reflection, its determinant is −1. Distance formula A measure of distance between points, or metric, is needed in order to confirm that a transformation is rigid. The Euclidean distance formula for $\mathbb{R}^n$ is the generalization of the Pythagorean theorem. The formula gives the distance squared between two points $X$ and $Y$ as the sum of the squares of the distances along the coordinate axes, that is $$d(X, Y)^2 = (X_1 - Y_1)^2 + (X_2 - Y_2)^2 + \cdots + (X_n - Y_n)^2 = (X - Y) \cdot (X - Y),$$ where $X = (X_1, X_2, \ldots, X_n)$ and $Y = (Y_1, Y_2, \ldots, Y_n)$, and the dot denotes the scalar product. Using this distance formula, a rigid transformation $g$ has the property $$d(g(X), g(Y))^2 = d(X, Y)^2.$$ Translations and linear transformations A translation of a vector space adds a vector $\mathbf{d}$ to every vector in the space, which means it is the transformation $$g(\mathbf{v}) = \mathbf{v} + \mathbf{d}.$$ It is easy to show that this is a rigid transformation by showing that the distance between translated vectors equals the distance between the original vectors: $$d(\mathbf{v} + \mathbf{d}, \mathbf{w} + \mathbf{d})^2 = |\mathbf{v} + \mathbf{d} - \mathbf{w} - \mathbf{d}|^2 = |\mathbf{v} - \mathbf{w}|^2 = d(\mathbf{v}, \mathbf{w})^2.$$ A linear transformation $L$ of a vector space preserves linear combinations, $$L(a\mathbf{v} + b\mathbf{w}) = a L(\mathbf{v}) + b L(\mathbf{w}).$$ A linear transformation can be represented by a matrix, which means $$L : \mathbf{v} \mapsto [L]\mathbf{v},$$ where $[L]$ is an $n \times n$ matrix. 
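The distance-preservation property is easy to verify numerically; here is a minimal sketch (the random rotation is built from a QR factorization, a standard construction, and all numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # orthogonal: Q.T @ Q = I
t = rng.normal(size=3)                         # translation vector

x, y = rng.normal(size=3), rng.normal(size=3)
d_before = np.linalg.norm(x - y)
d_after = np.linalg.norm((Q @ x + t) - (Q @ y + t))
print(np.isclose(d_before, d_after))   # True: distances are preserved
print(np.linalg.det(Q))                # +1 -> proper rotation, -1 -> improper
```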
A linear transformation $L$ is a rigid transformation if it satisfies the condition $$d(L(\mathbf{v}), L(\mathbf{w}))^2 = d(\mathbf{v}, \mathbf{w})^2,$$ that is $$(L(\mathbf{v}) - L(\mathbf{w})) \cdot (L(\mathbf{v}) - L(\mathbf{w})) = (\mathbf{v} - \mathbf{w}) \cdot (\mathbf{v} - \mathbf{w}).$$ Now use the fact that the scalar product of two vectors $\mathbf{v} \cdot \mathbf{w}$ can be written as the matrix operation $\mathbf{v}^{\mathsf{T}} \mathbf{w}$, where the T denotes the matrix transpose; we have $$(\mathbf{v} - \mathbf{w})^{\mathsf{T}} [L]^{\mathsf{T}} [L] (\mathbf{v} - \mathbf{w}) = (\mathbf{v} - \mathbf{w})^{\mathsf{T}} (\mathbf{v} - \mathbf{w}).$$ Thus, the linear transformation $L$ is rigid if its matrix satisfies the condition $$[L]^{\mathsf{T}} [L] = I,$$ where $I$ is the identity matrix. Matrices that satisfy this condition are called orthogonal matrices. This condition actually requires the columns of these matrices to be orthogonal unit vectors. Matrices that satisfy this condition form a mathematical group under the operation of matrix multiplication called the orthogonal group of n×n matrices and denoted $O(n)$. Compute the determinant of the condition for an orthogonal matrix to obtain $$\det([L]^{\mathsf{T}} [L]) = \det([L])^2 = \det(I) = 1,$$ which shows that the matrix $[L]$ can have a determinant of either +1 or −1. Orthogonal matrices with determinant −1 are reflections, and those with determinant +1 are rotations. Notice that the set of orthogonal matrices can be viewed as consisting of two manifolds in $\mathbb{R}^{n \times n}$ separated by the set of singular matrices. The set of rotation matrices is called the special orthogonal group, and denoted $SO(n)$. It is an example of a Lie group because it has the structure of a manifold. See also Deformation (mechanics) Motion (geometry) Rigid body dynamics References Functions and mappings Kinematics Euclidean symmetries
Rigid transformation
[ "Physics", "Mathematics", "Technology" ]
884
[ "Machines", "Kinematics", "Euclidean symmetries", "Physical phenomena", "Functions and mappings", "Mathematical analysis", "Mathematical objects", "Classical mechanics", "Physical systems", "Motion (physics)", "Mechanics", "Mathematical relations", "Symmetry" ]
35,851,124
https://en.wikipedia.org/wiki/Carbonyl%20%CE%B1-substitution%20reaction
Carbonyl α-substitution reactions occur at the position next to the carbonyl group, the α-position, and involve the substitution of an α-hydrogen by an electrophile through either an enol or enolate ion intermediate. Reaction mechanism Because their double bonds are electron rich, enols behave as nucleophiles and react with electrophiles in much the same way that alkenes do. But because of resonance electron donation of a lone pair of electrons on the neighboring oxygen, enols are more electron-rich and correspondingly more reactive than alkenes. Notice in the following electrostatic potential map of ethenol (H2C=CHOH) how there is a substantial amount of electron density on the α-carbon. When an alkene reacts with an electrophile, such as HCl, initial addition of H+ gives an intermediate cation and subsequent reaction with Cl− yields an addition product. When an enol reacts with an electrophile, however, only the initial addition step is the same. Instead of reacting with Cl− to give an addition product, the intermediate cation loses the O–H proton to give an α-substituted carbonyl compound. α-Halogenation of aldehydes and ketones A particularly common α-substitution reaction in the laboratory is the halogenation of aldehydes and ketones at their α positions by reaction with Cl2, Br2 or I2 in acidic solution. Bromine in acetic acid solvent is often used. Remarkably, ketone halogenation also occurs in biological systems, particularly in marine algae, where bromoacetone and other related halogenated compounds have been found. The halogenation is a typical α-substitution reaction that proceeds by acid-catalyzed formation of an enol intermediate. Acidity of α-hydrogen atoms: enolate ion formation A hydrogen on the α position of a carbonyl compound is weakly acidic and can be removed by a strong base to yield an enolate ion. In comparing acetone (pKa = 19.3) with ethane (pKa = 60), for instance, the presence of a neighboring carbonyl group increases the acidity of the ketone over the alkane by a factor of 10^40. Abstraction of a proton from a carbonyl compound occurs when the α C–H bond is oriented roughly parallel to the p orbitals of the carbonyl group. The α carbon atom of the enolate ion is sp2-hybridized and has a p orbital that overlaps the neighboring carbonyl p orbitals. Thus, the negative charge is shared by the electronegative oxygen atom, and the enolate ion is stabilized by resonance. Carbonyl compounds are more acidic than alkanes for the same reason that carboxylic acids are more acidic than alcohols. In both cases, the anions are stabilized by resonance. Enolate ions differ from carboxylate ions, however, in that their two resonance forms are not equivalent – the form with the negative charge on oxygen is lower in energy than the form with the charge on carbon. Nevertheless, the principle behind resonance stabilization is the same in both cases. Because carbonyl compounds are only weakly acidic, a strong base is needed for enolate ion formation. If an alkoxide such as sodium ethoxide is used as base, deprotonation takes place only to the extent of about 0.1% because acetone is a weaker acid than ethanol (pKa = 16). If, however, a more powerful base such as sodium hydride (NaH) or lithium diisopropylamide (LDA) is used, a carbonyl compound can be completely converted into its enolate ion. 
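The factor quoted above follows directly from the pKa values (a short worked check):

$$\frac{K_a(\text{acetone})}{K_a(\text{ethane})} = 10^{\,\mathrm{p}K_a(\text{ethane}) \,-\, \mathrm{p}K_a(\text{acetone})} = 10^{\,60 - 19.3} \approx 10^{40}.$$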
Lithium diisopropylamide (LDA), which is easily prepared by reaction of the strong base butyllithium with diisopropylamine, is widely used in the laboratory as a base for preparing enolate ions from carbonyl compounds. Many types of carbonyl compounds, including aldehydes, ketones, esters, thioesters, carboxylic acids, and amides, can be converted into enolate ions by reaction with LDA. Note that nitriles, too, are acidic and can be converted into enolate-like anions (referred to as nitrile anions). When a hydrogen atom is flanked by two carbonyl groups, its acidity is enhanced even more. This enhanced acidity of β-dicarbonyl compounds is due to the stabilization of the resultant enolate ions by delocalization of the negative charge over both carbonyl groups. Reactivity of enolate ions Enolate ions are more useful than enols for two reasons. First, pure enols can't normally be isolated but are instead generated only as short-lived intermediates in low concentration. By contrast, stable solutions of pure enolate ions are easily prepared from most carbonyl compounds by reaction with a strong base. Second, enolate ions are more reactive than enols and undergo many reactions that enols don't. Whereas enols are neutral, enolate ions are negatively charged, making them much better nucleophiles. As a result, enolate ions are more common than enols in both laboratory and biological chemistry. Because they are resonance hybrids of two nonequivalent forms, enolate ions can be looked at either as vinylic alkoxides (C=C–O−) or as α-keto carbanions (−C–C=O). Thus, enolate ions can react with electrophiles either on oxygen or on carbon. Reaction on oxygen yields an enol derivative, while reaction on carbon yields an α-substituted carbonyl compound. Both kinds of reactivity are known, but reaction on carbon is more common. Alkylation of enolate ions Perhaps the single most important reaction of enolate ions is their alkylation by treatment with an alkyl halide or tosylate, thereby forming a new C–C bond and joining two smaller pieces into one larger molecule. Alkylation occurs when the nucleophilic enolate ion reacts with the electrophilic alkyl halide in an SN2 reaction and displaces the leaving group by backside attack. Alkylation reactions are subject to the same constraints that affect all SN2 reactions. Thus, the leaving group X in the alkylating agent R–X can be chloride, bromide, iodide, or tosylate. The alkyl group R should be primary or methyl, and preferably should be allylic or benzylic. Secondary halides react poorly, and tertiary halides don't react at all because a competing E2 elimination of HX occurs instead. Vinylic and aryl halides are also unreactive because backside approach is sterically prevented. References Substitution reactions Organic reactions
Carbonyl α-substitution reaction
[ "Chemistry" ]
1,415
[ "Organic reactions" ]
35,853,778
https://en.wikipedia.org/wiki/Tryptophan-rich%20sensory%20protein
Tryptophan-rich sensory proteins (TspO) are a family of proteins that are involved in transmembrane signalling. In either prokaryotes or mitochondria they are localized to the outer membrane, and have been shown to bind and transport dicarboxylic tetrapyrrole intermediates of the haem biosynthetic pathway. They are associated with the major outer membrane porins (in prokaryotes) and with the voltage-dependent anion channel (in mitochondria). TspO of Rhodobacter sphaeroides is involved in signal transduction, functioning as a negative regulator of the expression of some photosynthesis genes (PpsR/AppA repressor/antirepressor regulon). This down-regulation is believed to be in response to oxygen levels. TspO works through (or modulates) the PpsR/AppA system and acts upstream of the site of action of these regulatory proteins. It has been suggested that the TspO regulatory pathway works by regulating the efflux of certain tetrapyrrole intermediates of the haem/bacteriochlorophyll biosynthetic pathways in response to the availability of molecular oxygen, thereby causing the accumulation of a biosynthetic intermediate that serves as a corepressor for the regulated genes. A homologue of the TspO protein in Sinorhizobium meliloti is involved in regulating expression of the ndi locus in response to stress conditions. In animals, the peripheral benzodiazepine receptor is a mitochondrial protein (located in the outer mitochondrial membrane) characterised by its ability to bind with nanomolar affinity to a variety of benzodiazepine-like drugs, as well as to dicarboxylic tetrapyrrole intermediates of the haem biosynthetic pathway. Depending upon the tissue, it was shown to be involved in steroidogenesis, haem biosynthesis, apoptosis, cell growth and differentiation, mitochondrial respiratory control, and immune and stress response, but the precise function of the PBR remains unclear. The role of PBR in the regulation of cholesterol transport from the outer to the inner mitochondrial membrane, the rate-determining step in steroid biosynthesis, has been studied in detail. PBR is required for the binding, uptake and release, upon ligand activation, of the substrate cholesterol. PBR forms a multimeric complex with the voltage-dependent anion channel (VDAC) and adenine nucleotide carrier. Molecular modeling of PBR suggested that it might function as a channel for cholesterol. Indeed, cholesterol uptake and transport by bacterial cells was induced upon PBR expression. Mutagenesis studies identified a cholesterol recognition/interaction motif (CRAC) in the cytoplasmic C terminus of PBR. In complementation experiments, rat PBR (pk18) protein functionally substitutes for its homologue TspO in R. sphaeroides, negatively affecting transcription of specific photosynthesis genes. This suggests that PBR may function as an oxygen sensor, transducing an oxygen-triggered signal leading to an adaptive cellular response. These observations suggest that fundamental aspects of this receptor and the downstream signal transduction pathway are conserved in bacteria and higher eukaryotic mitochondria. The alpha-3 subdivision of the purple bacteria is considered to be a likely source of the endosymbiont that ultimately gave rise to the mitochondrion. Therefore, it is possible that the mammalian PBR remains both evolutionarily and functionally related to the TspO of R. sphaeroides. References Protein families Signal transduction Transmembrane proteins
Tryptophan-rich sensory protein
[ "Chemistry", "Biology" ]
778
[ "Protein classification", "Signal transduction", "Biochemistry", "Protein families", "Neurochemistry" ]
35,854,438
https://en.wikipedia.org/wiki/Goldschmidt%20tolerance%20factor
Goldschmidt's tolerance factor (from the German word Toleranzfaktor) is an indicator for the stability and distortion of crystal structures. It was originally only used to describe the perovskite ABO3 structure, but now tolerance factors are also used for ilmenite. Alternatively, the tolerance factor can be used to calculate the compatibility of an ion with a crystal structure. The first description of the tolerance factor for perovskite was made by Victor Moritz Goldschmidt in 1926. Mathematical expression The Goldschmidt tolerance factor (t) is a dimensionless number that is calculated from the ratio of the ionic radii: t = (rA + rO) / [√2 (rB + rO)], where rA, rB, and rO are the ionic radii of the A cation, the B cation, and the anion, respectively. In an ideal cubic perovskite structure, the lattice parameter (i.e., length) of the unit cell (a) can be calculated using the following equation: a = √2 (rA + rO) = 2 (rB + rO). Perovskite structure The perovskite structure has the following tolerance factors (t): t > 1 (large A ion or small B ion) favours a hexagonal or tetragonal structure; 0.9 < t ≤ 1 corresponds to the ideal cubic perovskite structure; 0.71 < t ≤ 0.9 gives an orthorhombic or rhombohedral distortion; and t < 0.71 favours other structures, such as ilmenite. See also Goldschmidt classification Victor Goldschmidt References Crystallography Mineralogy
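As a worked illustration of the formula just given, the sketch below evaluates t for SrTiO3. It is a minimal example: the function name is ours, and the Shannon-type ionic radii are approximate literature values quoted from memory.

```python
import math

def goldschmidt_t(r_a, r_b, r_o):
    """Goldschmidt tolerance factor: t = (r_A + r_O) / (sqrt(2) * (r_B + r_O))."""
    return (r_a + r_o) / (math.sqrt(2) * (r_b + r_o))

# SrTiO3 with approximate ionic radii in angstroms:
# Sr2+ ~ 1.44, Ti4+ ~ 0.605, O2- ~ 1.40 (illustrative values).
t = goldschmidt_t(1.44, 0.605, 1.40)
print(f"t(SrTiO3) = {t:.2f}")  # ~1.00, consistent with a cubic perovskite
```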
Goldschmidt tolerance factor
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
210
[ "Crystallography", "Condensed matter physics", "Materials science" ]
35,855,057
https://en.wikipedia.org/wiki/Trigonal%20prismatic%20molecular%20geometry
In chemistry, the trigonal prismatic molecular geometry describes the shape of compounds where six atoms, groups of atoms, or ligands are arranged around a central atom, defining the vertices of a triangular prism. The structure commonly occurs for d0, d1 and d2 transition metal complexes with covalently-bound ligands and small charge separation. In d0 complexes it may be ascribed to sd5 hybridization, but in d1 and d2 complexes the dz2 orbital is occupied by a nonbonding electron (or electron pair). Furthermore, when unoccupied, this orbital participates in bonding and causes a C3v distortion, as in W(CH3)6. Examples Hexamethyltungsten (W(CH3)6) was the first example of a molecular trigonal prismatic complex: its six carbon atoms are arranged at the vertices of a triangular prism with the tungsten at the centre. Some other transition metals have trigonal prismatic hexamethyl complexes, including both neutral molecules such as Mo(CH3)6 and Re(CH3)6 and related anionic hexamethyl complexes. The complex Mo(S−CH=CH−S)3 is also trigonal prismatic, with each S−CH=CH−S group acting as a bidentate ligand with two sulfur atoms binding the metal atom. Here the coordination geometry of the six sulfur atoms around the molybdenum is similar to that in the extended structure of molybdenum disulfide (MoS2). References Stereochemistry Molecular geometry
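The ideal geometry is easy to reproduce numerically. The following sketch places six ligands at the vertices of a triangular prism centred on the metal and computes two representative ligand-metal-ligand angles; the circumradius and height used are illustrative, not fitted to any particular complex.

```python
import numpy as np

def prism_vertices(r=1.0, h=0.8):
    """Six vertices of a triangular prism centred at the origin:
    two eclipsed equilateral triangles of circumradius r at heights +h/2 and -h/2."""
    angles = np.radians([90.0, 210.0, 330.0])
    top = np.array([[r * np.cos(a), r * np.sin(a), h / 2] for a in angles])
    bottom = top.copy()
    bottom[:, 2] = -h / 2
    return np.vstack([top, bottom])

def l_m_l_angle(u, v):
    """Ligand-metal-ligand angle in degrees, with the metal at the origin."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

V = prism_vertices()
print(f"within one triangular face: {l_m_l_angle(V[0], V[1]):.1f} deg")
print(f"between eclipsed vertices:  {l_m_l_angle(V[0], V[3]):.1f} deg")
```

Unlike an octahedron, where every cis angle is 90°, the two kinds of angles here differ, which is one way the two six-coordinate geometries are distinguished crystallographically.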
Trigonal prismatic molecular geometry
[ "Physics", "Chemistry" ]
328
[ "Molecular geometry", "Molecules", "Stereochemistry", "Space", "Stereochemistry stubs", "nan", "Spacetime", "Matter" ]
35,857,112
https://en.wikipedia.org/wiki/Regularization%20perspectives%20on%20support%20vector%20machines
Within mathematical analysis, regularization perspectives on support-vector machines provide a way of interpreting support-vector machines (SVMs) in the context of other regularization-based machine-learning algorithms. SVM algorithms categorize binary data, with the goal of fitting the training set data in a way that minimizes the average of the hinge-loss function and the L2 norm of the learned weights. This strategy avoids overfitting via Tikhonov regularization in the L2-norm sense and also corresponds to minimizing the bias and variance of our estimator of the weights. Estimators with lower mean squared error predict better, or generalize better, when given unseen data. Specifically, Tikhonov regularization algorithms produce a decision boundary that minimizes the average training-set error and constrain the decision boundary not to be excessively complicated or overfit the training data, via an L2-norm term on the weights. The training and test-set errors can be measured without bias and in a fair way using accuracy, precision, AUC-ROC, precision-recall, and other metrics. Regularization perspectives on support-vector machines interpret SVM as a special case of Tikhonov regularization, specifically Tikhonov regularization with the hinge loss for a loss function. This provides a theoretical framework with which to analyze SVM algorithms and compare them to other algorithms with the same goal: to generalize without overfitting. SVM was first proposed in 1995 by Corinna Cortes and Vladimir Vapnik, and framed geometrically as a method for finding hyperplanes that can separate multidimensional data into two categories. This traditional geometric interpretation of SVMs provides useful intuition about how SVMs work, but is difficult to relate to other machine-learning techniques for avoiding overfitting, like regularization, early stopping, sparsity and Bayesian inference. However, once it was discovered that SVM is also a special case of Tikhonov regularization, regularization perspectives on SVM provided the theory necessary to fit SVM within a broader class of algorithms. This has enabled detailed comparisons between SVM and other forms of Tikhonov regularization, and theoretical grounding for why it is beneficial to use SVM's loss function, the hinge loss. Theoretical background In the statistical learning theory framework, an algorithm is a strategy for choosing a function f given a training set S = {(x_1, y_1), ..., (x_n, y_n)} of inputs x_i and their labels y_i (the labels are usually ±1). Regularization strategies avoid overfitting by choosing a function that fits the data, but is not too complex. Specifically: f = argmin_{f in H} [ (1/n) Σ_i V(y_i, f(x_i)) + λ ||f||_H^2 ], where H is a hypothesis space of functions, V is the loss function, ||·||_H is a norm on the hypothesis space of functions, and λ is the regularization parameter. When H is a reproducing kernel Hilbert space, there exists a kernel function K that can be written as an n × n symmetric positive-definite matrix K. By the representer theorem, f(x) = Σ_{i=1}^{n} c_i K(x_i, x). Special properties of the hinge loss The simplest and most intuitive loss function for categorization is the misclassification loss, or 0–1 loss, which is 0 if f(x_i) = y_i and 1 if f(x_i) ≠ y_i, i.e. the Heaviside step function on −y_i f(x_i). However, this loss function is not convex, which makes the regularization problem very difficult to minimize computationally. Therefore, we look for convex substitutes for the 0–1 loss. The hinge loss, V(y_i, f(x_i)) = (1 − y f(x))_+, where (s)_+ = max(s, 0), provides such a convex relaxation. 
In fact, the hinge loss is the tightest convex upper bound to the 0–1 misclassification loss function, and with infinite data returns the Bayes-optimal solution: f_b(x) = 1 if P(y = 1 | x) > P(y = −1 | x), and f_b(x) = −1 otherwise. Derivation The Tikhonov regularization problem can be shown to be equivalent to traditional formulations of SVM by expressing it in terms of the hinge loss. With the hinge loss V(y_i, f(x_i)) = (1 − y f(x))_+, where (s)_+ = max(s, 0), the regularization problem becomes min_{f in H} [ (1/n) Σ_i (1 − y_i f(x_i))_+ + λ ||f||_H^2 ]. Multiplying by 1/(2λ) yields min_{f in H} [ C Σ_i (1 − y_i f(x_i))_+ + (1/2) ||f||_H^2 ] with C = 1/(2λn), which is equivalent to the standard SVM minimization problem. Notes and references Support vector machines Mathematical analysis
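The regularized hinge-loss objective written above can be minimized directly. The sketch below is a minimal subgradient-descent illustration for a linear model with no bias term; it is not the algorithm of any particular SVM library, and the data, step size, and parameter names are invented for illustration.

```python
import numpy as np

def train_svm_tikhonov(X, y, lam=0.1, lr=0.01, epochs=200):
    """Minimize (1/n) * sum_i max(0, 1 - y_i * <w, x_i>) + lam * ||w||^2
    by subgradient descent (labels y_i in {-1, +1})."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1  # points where the hinge is "active"
        # Subgradient: -(1/n) * sum of y_i * x_i over active points, plus 2*lam*w.
        grad = -(y[active, None] * X[active]).sum(axis=0) / n + 2 * lam * w
        w -= lr * grad
    return w

# Toy linearly separable data centred at +/-2.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
w = train_svm_tikhonov(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

Shrinking lam weakens the norm penalty and, via C = 1/(2*lam*n), corresponds to a larger C in the standard SVM formulation derived above.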
Regularization perspectives on support vector machines
[ "Mathematics" ]
808
[ "Mathematical analysis" ]
35,857,512
https://en.wikipedia.org/wiki/Protein%20topology
Protein topology is a property of a protein molecule that does not change under deformation (without cutting or breaking a bond). Frameworks Two main topology frameworks have been developed and applied to protein molecules. Knot theory Knot theory categorises chain entanglements. The usage of knot theory is limited to a small percentage of proteins, as most of them are unknotted. Circuit topology Circuit topology categorises intra-chain contacts based on their arrangements. Circuit topology is a determinant of protein folding kinetics and stability. Other uses In the biology literature, the term topology is also used to refer to the mutual orientation of regular secondary structures, such as alpha-helices and beta strands, in protein structure. For example, two adjacent interacting alpha-helices or beta-strands can go in the same or in opposite directions. Topology diagrams of different proteins with known three-dimensional structure are provided by PDBsum (an example). See also Circuit topology Membrane topology Protein folding References External links Pro-origami: Protein structure cartoons TOPS services at Glasgow University PTGL TOPDRAW Protein structure Molecular topology
Protein topology
[ "Chemistry", "Mathematics" ]
224
[ "Protein stubs", "Biochemistry stubs", "Molecular topology", "Topology", "Structural biology", "Protein structure" ]
24,535,618
https://en.wikipedia.org/wiki/Majumdar%E2%80%93Ghosh%20model
The Majumdar–Ghosh model is a one-dimensional quantum Heisenberg spin model in which the nearest-neighbour antiferromagnetic exchange interaction is twice as strong as the next-nearest-neighbour interaction. It is a special case of the more general J1–J2 model, with J2 = J1/2. The model is named after Indian physicists Chanchal Kumar Majumdar and Dipan Ghosh. The Majumdar–Ghosh model is notable because its ground states (lowest energy quantum states) can be found exactly and written in a simple form, making it a useful starting point for understanding more complex spin models and phases. Definition The Majumdar–Ghosh model is defined by the following Hamiltonian: H = J Σ_j S_j·S_{j+1} + (J/2) Σ_j S_j·S_{j+2}, where the S vector is a quantum spin operator with quantum number S = 1/2. Other conventions for the coefficients may be taken in the literature, but the most important fact is that the ratio of first-neighbor to second-neighbor couplings is 2 to 1. As a result of this ratio, it is possible to express the Hamiltonian (shifted by an overall constant) equivalently in the form H = (J/4) Σ_j (S_{j−1} + S_j + S_{j+1})². The summed quantity is none other than the quadratic Casimir operator for the representation of the spin algebra on the three consecutive sites j−1, j, j+1, which in turn can be decomposed into a direct sum of spin-1/2 and spin-3/2 representations. It has the eigenvalue 3/4 for the spin-1/2 subspace and 15/4 for the spin-3/2 subspace. Ground states It has been shown that the Majumdar–Ghosh model has two minimum energy states, or ground states, namely the states in which neighboring pairs of spins form singlet configurations. The wavefunction for each ground state is a product of these singlet pairs. This explains why there must be at least two ground states with the same energy, since one may be obtained from the other by merely shifting, or translating, the system by one lattice spacing. Furthermore, it has been shown that these (and linear combinations of them) are the unique ground states. Generalizations The Majumdar–Ghosh model is one of a small handful of realistic quantum spin models that may be solved exactly. Moreover, its ground states are simple examples of what are known as valence-bond solids (VBS). Thus the Majumdar–Ghosh model is related to another famous spin model, the AKLT model, whose ground state is the unique one-dimensional spin-one (S = 1) valence-bond solid. The Majumdar–Ghosh model is also a useful example of the Lieb–Schultz–Mattis theorem, which roughly states that an infinite, one-dimensional, half-odd-integer spin system must either have no energy spacing (or gap) between its ground and excited states or else have more than one ground state. The Majumdar–Ghosh model has a gap and falls under the second case. The isotropy of the model is actually not important to the fact that it has an exactly dimerised ground state. For example, suitably anisotropic generalizations of the model have the same aforementioned exactly dimerised ground state for all real values of the anisotropy parameter. See also Heisenberg model (quantum) Heisenberg model (classical) J1 J2 model Bethe ansatz Ising model t-J model References C K Majumdar and D Ghosh, On Next‐Nearest‐Neighbor Interaction in Linear Chain. J. Math. Phys. 10, 1388 (1969); C K Majumdar, Antiferromagnetic model with known ground state. J. Phys. C: Solid State Phys. 3 911–915 (1970) Assa Auerbach, Interacting Electrons and Quantum Magnetism, Springer-Verlag New York (1992) p. 83 Spin models Statistical mechanics Quantum magnetism Lattice models
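Because the ground states are known exactly, the model is a convenient test case for numerics. The sketch below is an illustrative exact-diagonalization script (not code from any reference) that builds the Hamiltonian above for a periodic chain of N = 6 spins and checks the doubly degenerate ground state at energy −3NJ/8.

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def site_op(op, j, n):
    """Embed a single-site operator at site j of an n-site chain."""
    mats = [I2] * n
    mats[j] = op
    return reduce(np.kron, mats)

def spin_dot(i, j, n):
    """S_i . S_j on the full 2^n-dimensional Hilbert space."""
    return sum(site_op(s, i, n) @ site_op(s, j, n) for s in (sx, sy, sz))

N, J = 6, 1.0
H = sum(J * spin_dot(j, (j + 1) % N, N) + (J / 2) * spin_dot(j, (j + 2) % N, N)
        for j in range(N))

evals = np.linalg.eigvalsh(H)
print(evals[:3])  # first two values degenerate at -3*N*J/8 = -2.25
```

The two degenerate levels are the two dimer coverings of the ring, consistent with the discussion of the ground states above.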
Majumdar–Ghosh model
[ "Physics", "Materials_science" ]
774
[ "Spin models", "Quantum mechanics", "Lattice models", "Computational physics", "Quantum magnetism", "Condensed matter physics", "Statistical mechanics" ]
24,536,782
https://en.wikipedia.org/wiki/Supergeometry
Supergeometry is differential geometry of modules over graded commutative algebras, supermanifolds and graded manifolds. Supergeometry is part and parcel of many classical and quantum field theories involving odd fields, e.g., SUSY field theory, BRST theory, or supergravity. Supergeometry is formulated in terms of Z2-graded modules and sheaves over Z2-graded commutative algebras (supercommutative algebras). In particular, superconnections are defined as Koszul connections on these modules and sheaves. However, supergeometry is not a particular case of noncommutative geometry because of a different definition of a graded derivation. Graded manifolds and supermanifolds are also phrased in terms of sheaves of graded commutative algebras. Graded manifolds are characterized by sheaves on smooth manifolds, while supermanifolds are constructed by gluing of sheaves of supervector spaces. There are different types of supermanifolds: several flavours of smooth supermanifolds as well as DeWitt supermanifolds. In particular, supervector bundles and principal superbundles are considered in the category of smooth supermanifolds. Definitions of principal superbundles and principal superconnections straightforwardly follow those of smooth principal bundles and principal connections. Principal graded bundles are also considered in the category of graded manifolds. There is a different class of Quillen–Ne'eman superbundles and superconnections. These superconnections have been applied to computing the Chern character in K-theory, noncommutative geometry, and BRST formalism. See also Supermanifold Graded manifold Supersymmetry Connection (algebraic framework) Supermetric References External links G. Sardanashvily, Lectures on supergeometry. Supersymmetry
Supergeometry
[ "Physics" ]
403
[ "Unsolved problems in physics", "Supersymmetry", "Symmetry", "Physics beyond the Standard Model" ]
24,536,794
https://en.wikipedia.org/wiki/Gell-Mann%E2%80%93Okubo%20mass%20formula
In physics, the Gell-Mann–Okubo mass formula provides a sum rule for the masses of hadrons within a specific multiplet, determined by their isospin (I) and strangeness (or alternatively, hypercharge Y): M = a0 + a1 Y + a2 [I(I + 1) − Y²/4], where a0, a1, and a2 are free parameters. The rule was first formulated by Murray Gell-Mann in 1961 and independently proposed by Susumu Okubo in 1962. Isospin and hypercharge are generated by SU(3), which can be represented by eight hermitian and traceless matrices corresponding to the "components" of isospin and hypercharge. Six of the matrices correspond to flavor change, and the final two correspond to the third component of isospin projection, and hypercharge. Theory The mass formula was obtained by considering the representations of the Lie algebra su(3). In particular, the meson octet corresponds to the root system of the adjoint representation. However, the simplest, lowest-dimensional representation of su(3) is the fundamental representation, which is three-dimensional, and is now understood to describe the approximate flavor symmetry of the three quarks u, d, and s. Thus, the discovery of not only an su(3) symmetry, but also of this workable formula for the mass spectrum was one of the earliest indicators for the existence of quarks. The formula is underlain by the octet enhancement hypothesis, which ascribes dominance of SU(3) breaking to the hypercharge generator of SU(3) and, in modern terms, the relatively higher mass of the strange quark. This formula is phenomenological, describing an approximate relation between meson and baryon masses, and has been superseded as theoretical work in quantum chromodynamics advances, notably chiral perturbation theory. Baryons Using the values of the relevant I and S for baryons, the Gell-Mann–Okubo formula can be rewritten for the baryon octet as (N + Ξ)/2 = (3Λ + Σ)/4, where N, Λ, Σ, and Ξ represent the average mass of the corresponding baryons. Using the current masses of the baryons, this yields (N + Ξ)/2 ≈ 1129 MeV and (3Λ + Σ)/4 ≈ 1135 MeV, meaning that the Gell-Mann–Okubo formula reproduces the mass of octet baryons within ~0.5% of measured values. For the baryon decuplet, the Gell-Mann–Okubo formula can be rewritten as the "equal-spacing" rule Σ* − Δ = Ξ* − Σ* = Ω − Ξ*, where Δ, Σ*, Ξ*, and Ω represent the average mass of the corresponding baryons. The baryon decuplet formula famously allowed Gell-Mann to predict the mass of the then-undiscovered Ω−. Mesons The same mass relation can be found for the meson octet: 4K = π + 3η. Using the current masses of the mesons, this yields 4K ≈ 1984 MeV while π + 3η ≈ 1782 MeV, a discrepancy of roughly 11%. Because of this large discrepancy, several people attempted to find a way to understand the failure of the GMO formula in mesons, when it worked so well in baryons. In particular, people noticed that using the square of the average masses yielded much better results: 4K² = π² + 3η². This now yields 4K² ≈ 0.98 GeV² and π² + 3η² ≈ 0.92 GeV², which agree to within a few percent. For a while, the GMO formula involving the square of masses was simply an empirical relationship; but later a justification for using the square of masses was found in the context of chiral perturbation theory, just for pseudoscalar mesons, since these are the pseudo-Goldstone bosons of dynamically broken chiral symmetry, and, as such, obey Dashen's mass formula. Other mesons, such as vector ones, need no squaring for the GMO formula to work. See also Gell-Mann–Nishijima formula Eightfold Way Quark model SU(3) References Further reading The following book contains most (if not all) historical papers on the Eightfold Way and related topics, including the Gell-Mann–Okubo mass formula. Hadrons Quantum chromodynamics
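A quick numerical check of the octet relation takes a few lines. The masses below are rounded PDG-style values in MeV, quoted for illustration only.

```python
# Approximate baryon masses in MeV (rounded).
m_N, m_Lambda, m_Sigma, m_Xi = 939.0, 1115.7, 1193.0, 1318.0

lhs = (m_N + m_Xi) / 2              # (N + Xi) / 2
rhs = (3 * m_Lambda + m_Sigma) / 4  # (3*Lambda + Sigma) / 4

print(f"lhs = {lhs:.1f} MeV, rhs = {rhs:.1f} MeV")
print(f"relative difference = {abs(lhs - rhs) / rhs:.2%}")  # roughly half a percent
```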
Gell-Mann–Okubo mass formula
[ "Physics" ]
831
[ "Hadrons", "Subatomic particles", "Matter" ]
40,002,050
https://en.wikipedia.org/wiki/Ligation%20%28molecular%20biology%29
Ligation is the joining of two nucleotides, or two nucleic acid fragments, into a single polymeric chain through the action of an enzyme known as a ligase. The reaction involves the formation of a phosphodiester bond between the 3'-hydroxyl terminus of one nucleotide and the 5'-phosphoryl terminus of another nucleotide, which results in the two nucleotides being linked consecutively on a single strand. Ligation works in fundamentally the same way for both DNA and RNA. A cofactor is generally involved in the reaction, usually ATP or NAD+. Eukaryotic ligases belong to the ATP type, while the NAD+ type are found in bacteria (e.g. E. coli). Ligation occurs naturally as part of numerous cellular processes, including DNA replication, transcription, splicing, and recombination, and is also an essential laboratory procedure in molecular cloning, whereby DNA fragments are joined to create recombinant DNA molecules (such as when a foreign DNA fragment is inserted into a plasmid). The discovery of DNA ligase dates back to 1967 and was an important event in the field of molecular biology. Ligation in the laboratory is normally performed using T4 DNA ligase, which is broadly used in vitro due to its capability of joining sticky-ended fragments as well as blunt-ended fragments. However, procedures for ligation without the use of standard DNA ligase are also popular. Human DNA ligase abnormalities have been linked to pathological disorders characterized by immunodeficiency, radiation sensitivity, and developmental problems. Ligation reaction The mechanism of the ligation reaction was first elucidated in the laboratory of I. Robert Lehman. Two fragments of DNA may be joined by DNA ligase, which catalyzes the formation of a phosphodiester bond between the 3'-hydroxyl group (-OH) at one end of a strand of DNA and the 5'-phosphate group (-PO4) of another. In animals and bacteriophages, ATP is used as the energy source for the ligation, while in bacteria, NAD+ is used. The DNA ligase first reacts with ATP or NAD+, forming a ligase-AMP intermediate with the AMP linked to the ε-amino group of lysine in the active site of the ligase via a phosphoramide bond. This adenylyl group is then transferred to the phosphate group at the 5' end of a DNA chain, forming a DNA-adenylate complex. Finally, a phosphodiester bond between the two DNA ends is formed via the nucleophilic attack of the 3'-hydroxyl at the end of a DNA strand on the activated 5′-phosphoryl group of another. A nick in the DNA (i.e. a break in one strand of a double-stranded DNA) can be repaired very efficiently by the ligase. However, a complication presents itself when ligating two separate DNA ends, as the two ends need to come together before the ligation reaction can proceed. In the ligation of DNA with sticky or cohesive ends conducted in a laboratory, the protruding strands of DNA may be annealed together already, so ligation is a relatively efficient process, equivalent to repairing two nicks in the DNA. In the ligation of blunt ends, however, which lack protruding strands for the DNA to anneal together, the process depends on random collision for the ends to align before they can be ligated, and is consequently much less efficient. The DNA ligase from E. coli cannot ligate blunt-ended DNA except under conditions of molecular crowding, and it is therefore not normally used for ligation in the laboratory. 
Instead, the DNA ligase from phage T4 is used, as it can ligate blunt-ended DNA as well as single-stranded DNA. Factors affecting ligation In the laboratory, factors that affect an enzyme-mediated chemical reaction would naturally affect a ligation reaction; these include the concentration of enzyme and the reactants, the temperature of reaction and the length of time of incubation. Ligation is complicated by the fact that the reaction can involve both inter- and intra-molecular reactions, but the desired ligation products in many ligation reactions (e.g. ligating a DNA fragment into a vector) need first to be inter-molecular, i.e. between two different DNA molecules, followed by an intra-molecular reaction to seal and circularize the molecule. For efficient ligation, an additional annealing step is also necessary. The three steps to form a new phosphodiester bond during ligation are: enzyme adenylylation, adenylyl transfer to DNA, and nick sealing. Mg2+ is a cofactor for catalysis; therefore, at a high concentration of Mg2+ the ligation efficiency is high. If the concentration of Mg2+ is limited, nick sealing is the rate-limiting step of the process, and the adenylylated DNA intermediate stays in solution. Such adenylylation of the enzyme prevents it from rebinding the adenylylated DNA intermediate (something of an Achilles' heel of LIG1), and these abortive intermediates represent a risk if they are not repaired. DNA concentration The concentration of DNA can affect the rate of ligation, and whether the ligation is an inter-molecular or intra-molecular reaction. Ligation involves joining up the ends of a DNA with other ends; however, each DNA fragment has two ends, and if the ends are compatible, a DNA molecule can circularize by joining its own ends. At a high DNA concentration, there is a greater chance of one end of a DNA molecule meeting the end of another DNA, thereby forming an intermolecular ligation. At a lower DNA concentration, the chance that one end of a DNA molecule would meet the other end of the same molecule increases; therefore, an intramolecular reaction that circularizes the DNA is more likely. The transformation efficiency of linear DNA is also much lower than that of circular DNA, and for the DNA to circularize, the DNA concentration should not be too high. As a general rule, the total DNA concentration should be less than 10 μg/ml. The relative concentration of the DNA fragments, their length, as well as buffer conditions are also factors that can affect whether intermolecular or intramolecular reactions are favored. The concentration of DNA can be artificially increased by adding condensing agents such as cobalt hexammine and biogenic polyamines such as spermidine, or by using crowding agents such as polyethylene glycol (PEG), which also increase the effective concentration of enzymes. Note however that additives such as cobalt hexammine can produce an exclusively intermolecular reaction, resulting in linear concatemers rather than the circular DNA more suitable for transformation of plasmid DNA, and are therefore undesirable for plasmid ligation. If it is necessary to use additives in plasmid ligation, the use of PEG is preferable as it can promote intramolecular as well as intermolecular ligation. Ligase concentration As is usual for an enzyme, the higher the ligase concentration, the faster the rate of ligation. Blunt-end ligation is much less efficient than sticky-end ligation, so a higher concentration of ligase is used in blunt-end ligations. 
High DNA ligase concentration may be used in conjunction with PEG for a faster ligation, and these are the components often found in commercial kits designed for rapid ligation. Temperature Two issues are involved when considering the temperature of a ligation reaction: first, the optimum temperature for DNA ligase activity, which is 37°C, and second, the melting temperature (Tm) of the DNA ends to be ligated. The melting temperature is dependent on the length and base composition of the DNA overhang—the greater the number of G and C, the higher the Tm, since three hydrogen bonds are formed in a G-C base pair compared to two in an A-T base pair—with some contribution from the stacking of the bases between fragments. For the ligation reaction to proceed efficiently, the ends should be stably annealed, and in ligation experiments the Tm of the DNA ends is generally much lower than 37°C. The optimal temperature for ligating cohesive ends is therefore a compromise between the best temperature for DNA ligase activity and the Tm at which the ends can associate. However, different restriction enzymes generate different ends, and the base composition of the ends produced by these enzymes may also differ; the melting temperature, and therefore the optimal temperature, can vary widely depending on the restriction enzymes used, and the optimum temperature for ligation may be between 4 and 15°C depending on the ends. Ligations also often involve ligating ends generated from different restriction enzymes in the same reaction mixture, so it may not be practical to select the optimal temperature for a particular ligation reaction, and most protocols simply choose 12–16°C, room temperature, or 4°C. When conducting a ligation at 4°C, it is necessary to increase the time of the ligation reaction, for example by leaving the ligation mixture overnight or longer in the fridge. Buffer composition The ionic strength of the buffer used can affect the ligation. The kinds of cations present can also influence the ligation reaction; for example, an excess amount of Na+ can cause the DNA to become more rigid and increase the likelihood of intermolecular ligation. At a high concentration of monovalent cations (>200 mM), ligation can also be almost completely inhibited. The standard buffer used for ligation is designed to minimize ionic effects. Sticky-end ligation Restriction enzymes can generate a wide variety of ends in the DNA they digest, but in cloning experiments most commonly-used restriction enzymes generate a 4-base single-stranded overhang called the sticky or cohesive end (exceptions include NdeI, which generates a 2-base overhang, and those that generate blunt ends). These sticky ends can anneal to other compatible ends and become ligated in a sticky-end (or cohesive-end) ligation. EcoRI for example generates an AATT end, and since A and T have a lower melting temperature than C and G, its melting temperature Tm is low, at around 6°C. For most restriction enzymes, the overhangs generated have a Tm that is around 15°C. For practical purposes, sticky-end ligations are performed at 12–16°C, or at room temperature, or alternatively at 4°C for a longer period. For the insertion of a DNA fragment into a plasmid vector, it is preferable to use two different restriction enzymes to digest the DNA so that different ends are generated. The two different ends can prevent the religation of the vector without any insert, and they also allow the fragment to be inserted in a directional manner. 
When it is not possible to use two different sites, the vector DNA may need to be dephosphorylated to avoid a high background of recircularized vector DNA with no insert. Without a phosphate group at the ends, the vector cannot ligate to itself, but it can be ligated to an insert with a phosphate group. Dephosphorylation is commonly done using calf-intestinal alkaline phosphatase (CIAP), which removes the phosphate group from the 5′ end of digested DNA, but note that CIAP is not easy to inactivate and can interfere with ligation unless an additional step is taken to remove it, thereby resulting in failure of ligation. CIAP should not be used in excessive amounts and should only be used when necessary. Shrimp alkaline phosphatase (SAP) and Antarctic phosphatase (AP) are suitable alternatives, as they can be easily inactivated. Blunt-end ligation Blunt-end ligation does not involve base-pairing of protruding ends, so any blunt end may be ligated to another blunt end. Blunt ends may be generated by restriction enzymes such as SmaI and EcoRV. A major advantage of blunt-end cloning is that the desired insert does not require any restriction sites in its sequence, as blunt ends are usually generated in a PCR, and the PCR-generated blunt-ended DNA fragment may then be ligated into a blunt-ended vector generated from a restriction digest. Blunt-end ligation, however, is much less efficient than sticky-end ligation; typically the reaction is 100 times slower. Since blunt ends lack protruding strands, the ligation reaction depends on random collisions between the ends and is consequently much less efficient. To compensate for the lower efficiency, the concentration of ligase used is higher than in sticky-end ligation (10x or more). The concentration of DNA used in blunt-end ligation is also higher, to increase the likelihood of collisions between ends, and a longer incubation time may also be used. If both ends to be ligated into a vector are blunt, then the vector needs to be dephosphorylated to minimize self-ligation. This may be done using CIAP, but caution in its use is necessary, as noted previously. Since the vector has been dephosphorylated, and ligation requires the presence of a 5'-phosphate, the insert must be phosphorylated. A blunt-ended PCR product normally lacks a 5'-phosphate, so it needs to be phosphorylated by treatment with T4 polynucleotide kinase. Blunt-end ligation is also reversibly inhibited by a high concentration of ATP. PCR usually generates blunt-ended products, but note that PCR using Taq polymerase can add an extra adenine (A) to the 3' end of the PCR product. This property may be exploited in TA cloning, where the ends of the PCR product can anneal to the T end of a vector. TA ligation is therefore a form of sticky-end ligation. Blunt-ended vectors may be turned into vectors for TA ligation by adding a single 3' overhang with dideoxythymidine triphosphate (ddTTP) using terminal transferase. General guidelines For the cloning of an insert into a circular plasmid: The total DNA concentration used should be less than 10 μg/ml, as the plasmid needs to recircularize. The molar ratio of insert to vector is usually around 3:1; a very high ratio may produce multiple inserts. The ratio may be adjusted depending on the size of the insert, and other ratios may be used, such as 1:1 (see the worked calculation below). 
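The molar-ratio guideline above converts to masses as follows: molar amount scales as mass divided by length, so for a chosen insert:vector ratio the required insert mass is vector mass × (insert length / vector length) × ratio. A minimal sketch of this common bench calculation, with invented example numbers:

```python
def insert_mass_ng(vector_ng, vector_bp, insert_bp, molar_ratio=3.0):
    """Insert mass (ng) giving the desired insert:vector molar ratio.

    Moles are proportional to mass / length, hence:
    insert_ng = vector_ng * (insert_bp / vector_bp) * molar_ratio
    """
    return vector_ng * (insert_bp / vector_bp) * molar_ratio

# Example: 50 ng of a 3,000 bp vector with a 900 bp insert at a 3:1 ratio.
print(insert_mass_ng(50, 3000, 900))  # 45.0 ng of insert
```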
Trouble-shooting Sometimes ligation fails to produce the desired ligated products, and some of the possible reasons may be: Damaged DNA – Over-exposure to UV radiation during preparation of the DNA for ligation can damage the DNA and significantly reduce transformation efficiency. A higher-wavelength UV radiation (365 nm), which causes less damage to DNA, should be used if it is necessary to work on the DNA on a UV transilluminator for an extended period of time. Addition of cytidine or guanosine to the electrophoresis buffer at 1 mM concentration may also protect the DNA from damage. Incorrect usage of CIAP or its inefficient inactivation or removal. Excessive amount of DNA used. Incomplete DNA digest – Vector DNA that is incompletely digested will give rise to a high background, and this may be checked by doing a ligation without insert as a control. An insert that is not completely digested will also not ligate properly and circularize. When digesting a PCR product, make sure that sufficient extra bases have been added to the 5'-ends of the oligonucleotides used for PCR, as many restriction enzymes require a minimum number of extra base pairs for an efficient digest. The information on the minimum base pairs required is available from restriction enzyme suppliers, such as in the catalog of New England Biolabs. Incomplete ligation – Blunt-ended DNA (e.g. from SmaI) and some sticky-ended DNA (e.g. from NdeI) that have low melting temperatures require more ligase and a longer incubation time. Protein expressed from the ligated gene insert is toxic to cells. Homologous sequence in the insert to sequence in the plasmid DNA, resulting in deletion. High concentration of EDTA or salts that act as inhibitors. Other methods of DNA ligation A number of commercially available DNA cloning kits use other methods of ligation that do not require the use of the usual DNA ligases. These methods allow cloning to be done much more rapidly, as well as allowing for simpler transfer of the cloned DNA insert to different vectors. These methods however require the use of specially designed vectors and components, and may lack flexibility. Topoisomerase-mediated ligation Topoisomerase can be used instead of ligase for ligation, and the cloning may be done more rapidly without the need for restriction digest of the vector or insert. In this TOPO cloning method a linearized vector is activated by attaching topoisomerase I to its ends, and this "TOPO-activated" vector may then accept a PCR product by ligating to both of the 5' ends of the PCR product; the topoisomerase is released and a circular vector is formed in the process. Homologous recombination Another method of cloning without the use of ligase is by DNA recombination, for example as used in the Gateway cloning system. The gene, once cloned into the cloning vector (called an entry clone in this method), may be conveniently introduced into a variety of expression vectors by recombination. Examples and application of DNA ligases Different types of ligases are found in the organisms studied. For instance, nicotinamide adenine dinucleotide (NAD+)-dependent ligase was found and isolated from the bacterium E. coli in the middle of the 20th century. Since then, this model has been widely used to study that DNA ligase family. Moreover, it is found in all bacteria. Examples of genes present in E. coli are LigA, which has essential functions affecting bacterial growth, and LigB. In mammals, including humans, three genes, namely Lig1, Lig3, and Lig4, have been identified. 
All eukaryotes contain multiple types of DNA ligases encoded by Lig genes. The smallest known eukaryotic ligase is the Chlorella virus DNA ligase (ChVLig), which contains only 298 amino acids. When ChVLig is the only source of ligase in the cell, it can continue to support mitotic growth and nonhomologous end joining in budding yeast. DNA ligase I (Lig1) is responsible for Okazaki fragment ligation and consists of 919 amino acids. In the complex process of DNA replication, DNA ligase I is recruited to the replication machinery by protein interactions. Lig1 plays a role in cell division in plants and yeasts. Knockout of the Lig1 gene is lethal in yeasts and some plant sprouts. Nevertheless, studies of mouse embryogenesis have shown that the embryo can develop without DNA ligase I until the middle of the growth process. Enzymatic ligation has been used in various studies related to DNA nanostructures and has led to increases in efficiency and stability. One of the methods is the sealing of nicks by the formation of covalent phosphodiester bonds. Reconstruction of those structures is performed with the assistance of ligation. For instance, T4 DNA ligase serves as a catalyst for sealing a nick between the 3' and 5' ends of DNA to form a strong phosphodiester bond. Ligated structures have higher thermal stability values. T4 DNA ligase has many valuable properties, such as the already mentioned catalytic activity, but it is also responsible for sealing of gaps between DNA strands, nick-closing activity, repair of DNA damage, etc. In nanostructure architecture and molecular biology research, ssDNA is an important application model. T4 DNA ligase is used to cyclize short ssDNA fragments, but the process is complicated by the formation of secondary structures. On the other hand, Taq DNA ligase is a thermostable enzyme which can be applied at higher temperatures (45, 55 and 65 °C, respectively). Since secondary structures are less stable in this temperature range, the cyclization efficiency of oligonucleotides is enhanced. The kinetic, biological, and other parameters of nanostructures are influenced by the presence of secondary structures in DNA rings. However, Taq DNA ligation occurs only when two complementary DNA strands are perfectly paired and have no gaps in between. Analyses of ligase activities, mutations, and deficiencies are widely used in drug design and biological research to investigate diseases, the development of pathologies, and related rare acquired or inherited syndromes (e.g. DNA ligase IV syndrome). The ligation procedure is prevalent in molecular biology cloning techniques, and it has been applied to define and characterize specific nucleotide sequences in the genome using Ligase Chain Reaction (LCR) or Polymerase Chain Reaction (PCR)-based amplification of ligated probes. Analysis Ligation may also serve as a DNA analysis method. Some techniques employ rolling circle amplification. The most notable of these is described by Smolina et al., 2007 and Smolina et al., 2008 using fluorescence in situ hybridization and peptide nucleic acids. They developed and employed this technique for analyses of bacterial chromosomes. See also Ligation-independent cloning Nuclease References Molecular biology techniques Cloning
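Relating back to the overhang melting temperatures discussed under "Temperature" and "Sticky-end ligation" above, a rough rule-of-thumb estimate for short sequences is the Wallace rule, Tm ≈ 2 °C per A/T plus 4 °C per G/C. The Wallace rule is not mentioned in the article text; it is brought in here only as a quick way to see why GC-rich overhangs anneal at higher temperatures.

```python
def wallace_tm(seq):
    """Rough melting temperature (deg C) of a short DNA sequence:
    Tm ~ 2*(A+T) + 4*(G+C) (Wallace rule, valid only for short oligos)."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("AATT"))  # EcoRI-type overhang: 8 deg C by this rule
print(wallace_tm("GGCC"))  # GC-rich overhang: 16 deg C
```

The estimate for AATT is in the same low range as the ~6 °C quoted above for the EcoRI overhang, which is why sticky-end ligations are run well below 37 °C.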
Ligation (molecular biology)
[ "Chemistry", "Engineering", "Biology" ]
4,549
[ "Cloning", "Molecular biology techniques", "Genetic engineering", "Molecular biology" ]
40,003,849
https://en.wikipedia.org/wiki/Bronchial%20blocker
A bronchial blocker (also called an endobronchial blocker) is a device which can be inserted down a tracheal tube after tracheal intubation so as to block off the right or left main bronchus of the lungs, in order to achieve controlled one-sided ventilation of the lungs in thoracic surgery. The lung tissue distal to the obstruction will collapse, thus improving the surgeon's view of and access to the relevant structures within the thoracic cavity. Bronchial blockers are used to achieve lung separation and one-lung ventilation as an alternative to double-lumen endotracheal tubes (DLTs), and are the method of choice in paediatric patients, for whom even the smallest DLTs might be too big. Types Univent tube Made by Fuji Systems, Tokyo, Japan; a tracheal tube with a second lumen that contains a coaxial, balloon-tipped catheter which can be advanced under fiberoptic bronchoscopy and blocked in either bronchus. Arndt endobronchial blocker Produced by Cook Critical Care, Bloomington, USA; a catheter with a balloon tip and an inner lumen which contains a flexible wire that is coupled to a fiberoptic bronchoscope to guide the device into the desired bronchus. Cohen endobronchial blocker By Cook Critical Care; a catheter shaft with a distal soft nylon flexible tip and balloon which can be deflected by 90° to guide the device into either bronchus. Coopdech bronchial blocker By Smith Medical, Rosmalen, NL; has a preformed angulation at the distal tip to aid placement in the desired bronchus. EZ-blocker By Teleflex Inc., USA; a Y-shaped bronchial blocker with two distal extensions to be placed in both main-stem bronchi. See also Combitube Endotracheal tube Airtraq Laryngeal tube References Medical equipment Anesthetic equipment
Bronchial blocker
[ "Biology" ]
432
[ "Medical equipment", "Medical technology" ]
40,005,913
https://en.wikipedia.org/wiki/Pandoravirus
Pandoravirus is a proposed genus of giant virus, first discovered in 2013. It is the third largest in physical size of any known viral genus, behind Pithovirus and Megaklothovirus. Pandoraviruses have double-stranded DNA genomes, with the largest genome size (2.5 million base pairs) of any known viral genus. Discovery The discovery of pandoraviruses by a team of French scientists, led by husband and wife Jean-Michel Claverie and Chantal Abergel, was announced in a report in the journal Science in July 2013. Other scientists had previously observed pandoravirus particles, but owing to their enormous size they were not expected to be viruses. Patrick Scheid, a parasitologist from the Central Institute of the Bundeswehr Medical Service in Koblenz, Germany, found one in 2008, in an amoeba living in the contact lens of a woman with keratitis. Its development within the amoebal host was documented extensively. Unlike in other cases with such giant viruses, the large particles within Acanthamoeba were not mistaken for bacteria; the authors initially termed them "endocytobionts". Mimivirus, a nucleocytoplasmic large DNA virus with a genome size of about 1.1 megabases, was described in 1992 but not recognized as a virus until 2003. Megavirus, discovered in seawater off the coast of Chile in 2011, has a genome size of approximately 1.2 megabases. The prior discovery of these viruses prompted a search for other types of large amoeba-infecting viruses, which led to the finding of two species: Pandoravirus salinus, found in seawater taken from the coast of Chile, with a genome size of ~2.5 megabases, and Pandoravirus dulcis, found in a shallow freshwater pond at La Trobe University, Melbourne, Australia, with a 1.9-megabase genome. Description Pandoraviruses are oval in shape and are about 1 micrometer (1000 nanometers) in length; other viruses typically range from 25 to 100 nanometers. In addition to being physically large, pandoraviruses have a large genome made up of about 2,500 genes, compared to only around 10 genes on average in other viruses: the Influenza A virus, for example, contains 7 genes and HIV contains only 9 genes. Gene content varies among species of Pandoravirus, with Pandoravirus salinus containing about 2,500 genes and Pandoravirus dulcis about 1,500 genes. Pandoraviruses were originally mistaken for bacteria; however, they lack some of the characteristics of bacteria, such as the ability to make their own proteins. The dissimilarity of the remaining genes to any cellular genes led researchers to speculate that this virus represents a previously unknown branch of the tree of life. However, other experts have called this proposal premature because there is very little evidence supporting the idea. Replication Pandoraviruses have double-stranded DNA. Like most giant viruses, pandoraviruses have a viral replication cycle. They lack the ability to make their own proteins, rely on the host cells for ATP (energy) and replication, and also do not contain ribosomes or produce energy to divide. Under the microscope, scientists observed the virus enter the amoeba through fusion with membrane vacuoles and integrate its DNA into the host cells. The host cell replicates the viral particles and eventually splits open, releasing the viral particles. The process of replication lasts 10–15 hours. Viral replication and assembly happen simultaneously; in other words, viral DNA is replicated within the cytoplasm of the host cell and assembled into new viral particles, followed by lysis of the host cell. 
Prevalence in the environment Pandoraviruses do not seem to be harmful to humans. They are mostly found in marine environments, infecting amoebae. One reason for their only relatively recent discovery is that they exist in environments that are not well studied. Pandoraviruses, like other marine viruses, infect plankton, organisms that live in the water column and form the basis of the food chain for other marine species. More study and research needs to be done in order to confirm the prevalence of pandoraviruses in different environments. Currently, not much is known about their role in marine ecosystems. However, viruses are not mere pathogens for their hosts, but are also key players in aquatic ecosystems and the biosphere. Almost all genomes of cellular organisms contain viral sequences, elements of which are also essential in gene regulation. Viral infection and lysis can influence community structure, as well as the transfer of matter and energy in aquatic ecosystems. They can also dramatically alter host physiology through viral gene expression and drive evolutionary innovation through virus-mediated horizontal gene transfer. Phylogenetic affinities Approximately 93% of Pandoravirus genes are not known from any other microbes, suggesting that they belong to an as-yet-undescribed "fourth domain" aside from Bacteria, Archaea, and Eukaryotes. Viruses are not widely considered to belong within these three domains, although they have been proposed as one in the past by some biologists. Comparison with other giant viruses Other giant viruses, such as the Mimivirus, Pithovirus, and Megavirus, have much smaller genomes. For example, Mimivirus, considered one of the largest giant viruses, has a genome size of 1.1 million base pairs compared to 2.5 million base pairs for pandoraviruses. Another feature that differs in pandoraviruses compared to other giant viruses is the replication cycle. Pandoraviruses infect amoebas, which are single-celled eukaryotes. Pandoravirus enters amoebas through phagocytic vacuoles, then fuses with the membrane vacuole of the amoeba. This leads to viral particles being released into the cytoplasm of the amoeba. See also DNA virus Introduction to viruses Largest organisms List of viruses Microbiology Virology Virus classification References External links Viralzone: Pandoravirus 2013 in science Nucleocytoplasmic large DNA viruses Unaccepted virus taxa
Pandoravirus
[ "Biology" ]
1,250
[ "Biological hypotheses", "Unaccepted virus taxa", "Controversial taxa" ]
40,006,957
https://en.wikipedia.org/wiki/Isovalent%20hybridization
In chemistry, isovalent or second-order hybridization is an extension of orbital hybridization, the mixing of atomic orbitals into hybrid orbitals which can form chemical bonds, to include fractional numbers of atomic orbitals of each type (s, p, d). It allows for a quantitative depiction of bond formation when the molecular geometry deviates from ideal bond angles. Only bonding with 4 equivalent substituents results in exactly sp3 hybridization. For molecules with different substituents, we can use isovalent hybridization to rationalize the differences in bond angles between different atoms. In the molecule methyl fluoride for example, the HCF bond angle (108.73°) is less than the HCH bond angle (110.2°). This difference can be attributed to more p character in the C−F bonding orbital and more s character in the C−H bonding orbitals. The hybridisation of bond orbitals is determined by Bent's rule: "Atomic s character concentrates in orbitals directed toward electropositive substituents". The bond length between similar atoms also shortens with increasing s character. For example, the C−H bond length is 110.2 pm in ethane, 108.5 pm in ethylene and 106.1 pm in acetylene, with carbon hybridizations sp3 (25% s), sp2 (33% s) and sp (50% s) respectively. To determine the degree of hybridization of each bond one can utilize a hybridization parameter (λ). For hybrids of s and p orbitals, this is the coefficient multiplying the p orbital when the hybrid orbital is written in the form s + λp (up to normalization). The square of the hybridization parameter equals the hybridization index (n) of an orbital: n = λ². The fractional s character of orbital i is 1/(1 + λi²), and the s character of all the hybrid orbitals must sum to one, so that Σi 1/(1 + λi²) = 1. The fractional p character of orbital i is λi²/(1 + λi²), and the p character of all the hybrid orbitals sums to the number of p orbitals involved in the formation of hybrids: Σi λi²/(1 + λi²) = np. These hybridization parameters can then be related to physical properties like bond angles. Using the two bonding atomic orbitals i and j, we are able to find the magnitude of the interorbital angle. The orthogonality condition implies the relation known as Coulson's theorem: 1 + λiλj cos θij = 0. For two identical ligands (λi = λj = λ) the following equation can be utilized: 1 + λ² cos θ = 0. The hybridization index cannot be measured directly in any way. However, one can find it indirectly by measuring specific physical properties. Because nuclear spins are coupled through bonding electrons, and the electron penetration to the nucleus is dependent on the s character of the hybrid orbital used in bonding, J-coupling constants determined through NMR spectroscopy are a convenient experimental parameter that can be used to estimate the hybridization index of orbitals on carbon. The relationships for one-bond 13C-1H and 13C-13C coupling are approximately 1J13C-1H ≈ 500 Hz × χS(C) and 1J13C-13C ≈ 550 Hz × χS(C1)χS(C2), where 1JX-Y is the one-bond NMR spin-spin coupling constant between nuclei X and Y and χS(α) is the s character of orbital α on carbon, expressed as a fraction of unity. As an application, the 13C-1H coupling constants show that for the cycloalkanes, the amount of s character in the carbon hybrid orbital employed in the C-H bond decreases as the ring size increases. The values of 1J13C-1H for cyclopropane, cyclobutane and cyclopentane are 161, 134, and 128 Hz, respectively. This is a consequence of the fact that the C-C bonds in small, strained rings (cyclopropane and cyclobutane) employ excess p character to accommodate their molecular geometries (these bonds are famously known as 'banana bonds'). 
In order to conserve the total number of s and p orbitals used in hybridization for each carbon, the hybrid orbital used to form the C-H bonds must in turn compensate by taking on more s character. Experimentally, this is also demonstrated by the significantly higher acidity of cyclopropane (pKa ~ 46) compared to, for instance, cyclohexane (pKa ~ 52). References Chemical bonding Quantum chemistry
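The empirical coupling relation and Coulson's theorem quoted above can be chained into a small estimate: from a measured 1J(13C-1H), get the fractional s character, convert it to λ², and solve 1 + λ²cosθ = 0 for the angle between two identical C−H hybrids. A sketch under the stated J ≈ 500·χS relation; the function names are ours and the output is only an estimate.

```python
import math

def s_character_from_j_ch(j_hz):
    """Fractional s character from 1J(13C-1H) ~ 500 Hz * chi_S (empirical)."""
    return j_hz / 500.0

def interorbital_angle_deg(chi_s):
    """Angle between two identical hybrids from Coulson's theorem.

    chi_S = 1 / (1 + lambda^2)    =>  lambda^2 = 1/chi_S - 1
    1 + lambda^2 * cos(theta) = 0 =>  cos(theta) = -1 / lambda^2
    """
    lam2 = 1.0 / chi_s - 1.0
    return math.degrees(math.acos(-1.0 / lam2))

for name, j in [("ethane", 125), ("cyclopropane", 161)]:
    chi = s_character_from_j_ch(j)
    print(f"{name}: chi_S = {chi:.2f}, H-C-H angle ~ {interorbital_angle_deg(chi):.1f} deg")
```

For ethane this recovers χS = 0.25 and the tetrahedral 109.5°, while the larger coupling in cyclopropane translates into more s character in the C−H hybrids, as the text describes.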
Isovalent hybridization
[ "Physics", "Chemistry", "Materials_science" ]
874
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Condensed matter physics", " molecular", "nan", "Atomic", "Chemical bonding", " and optical physics" ]
3,900,510
https://en.wikipedia.org/wiki/Bosenova
A bosenova or bose supernova is a very small, supernova-like explosion which can be induced in a Bose–Einstein condensate (BEC) by changing the external magnetic field so that the "self-scattering" interaction transitions from repulsive to attractive due to a Feshbach resonance, causing the BEC to "collapse and bounce" or "rebound". Although the total energy of the explosion is very small, the "collapse and bounce" scenario qualitatively resembles a condensed-matter version of a core-collapse supernova, hence the term bosenova. The name is not a play on words on the Brazilian music style bossa nova, but a blend of "Bose–Einstein" and "supernova". Experiment In the particular experiment in which a bosenova was first detected, transitioning the self-interaction from repulsive to attractive caused the BEC to implode and shrink to a size smaller than the optical detector's minimum resolution limit, and then suddenly "explode". In this explosion, about half of the atoms in the condensate superficially seemed to have "disappeared" from the experiment altogether, i.e., they were not detected in either the cold particle remnants or the expanding gas cloud produced. Under current BEC theory, which only very crudely accounts for the interactions between the particles composing the BEC, the bosenova phenomenon remains unexplained, because the energy available to the individual atoms of the condensate near absolute zero appears to be insufficient to cause the observed implosion. However, subsequent mean-field theories have been proposed to explain bosenovas as a collective phenomenon. The bosenova behaviour of a BEC may provide insights into the behavior of a neutron star, as well as into the possible properties of still-hypothetical boson stars and into the quantum theory of "collective phenomena" in general. References Further reading Bose–Einstein condensates
Bosenova
[ "Physics", "Chemistry", "Materials_science" ]
394
[ "Bose–Einstein condensates", "Phases of matter", "Condensed matter physics", "Matter" ]
3,901,316
https://en.wikipedia.org/wiki/Paclitaxel%20total%20synthesis
Paclitaxel total synthesis in organic chemistry is a major ongoing research effort in the total synthesis of paclitaxel (Taxol). This diterpenoid is an important drug in the treatment of cancer, but also expensive, because the compound is harvested from a scarce resource, namely the Pacific yew (Taxus brevifolia). Not only is the synthetic reproduction of the compound itself of great commercial and scientific importance, but it also opens the way to paclitaxel derivatives not found in nature but with greater potential. The paclitaxel molecule consists of a tetracyclic core called baccatin III and an amide tail. The core rings are conveniently called (from left to right) ring A (a cyclohexene), ring B (a cyclooctane), ring C (a cyclohexane) and ring D (an oxetane). The paclitaxel drug development process took over 40 years. The anti-tumor activity of a bark extract of the Pacific yew tree was discovered in 1963 as a follow-up to a US government plant screening program already in existence 20 years before that. The active substance responsible for the anti-tumor activity was discovered in 1969, and structure elucidation was completed in 1971. Robert A. Holton of Florida State University succeeded in the total synthesis of paclitaxel in 1994, a project that he had started in 1982. In 1988 Jean-Noël Denis had also developed a semisynthetic route to paclitaxel starting from 10-deacetylbaccatin III. This compound is a biosynthetic precursor and is found in larger quantities than paclitaxel itself in Taxus baccata (the European yew). In 1990 Bristol-Myers Squibb bought a licence to the patent for this process, which in the years to follow earned Florida State University and Holton (with a 40% take) over 200 million US dollars. Total synthesis The total synthesis of taxol is called one of the most hotly contested of the 1990s, with around 30 competing research groups by 1992. The number of research groups actually having reported a total synthesis currently stands at 11, with the Holton group (article first accepted for publication) and the Nicolaou group (article first published) first and second in what is called a photo finish. Some of the efforts are truly synthetic, but in others a precursor molecule found in nature is included. The key data are collected below. What all strategies have in common is synthesis of the baccatin molecule followed by last-stage addition of the tail, a process (except for one) based on the Ojima lactam. Holton Taxol total synthesis - year: 1994 - precursor: Patchoulol strategy: linear synthesis AB then C then D - references: see related article Nicolaou Taxol total synthesis - year: 1994 - precursor: Mucic acid strategy: convergent synthesis A and C merge to ABC then D - references: see related article Danishefsky Taxol total synthesis - year: 1996 - precursor: Wieland-Miescher ketone strategy: convergent synthesis C merges with D then with A merges to ABCD - references: see related article Wender Taxol total synthesis - year: 1997 - precursor: Pinene strategy: linear synthesis AB then C then D Kuwajima Taxol total synthesis (I. Kuwajima) - year: 1998 - precursor: synthetic building blocks strategy: linear synthesis A then B then C then D Mukaiyama Taxol total synthesis - year: 1998 - precursor: L-serine strategy: linear synthesis B, then C, then A then D - references: see related article. 
Takahashi Taxol total synthesis - year: 2006 - precursor: geraniol - strategy: convergent synthesis, A and C merge to ABC then D
Sato-Chida Taxol total synthesis - year: 2015 - formal synthesis to a Takahashi intermediate
Nakada Taxol total synthesis - year: 2015 - formal synthesis to a Takahashi intermediate
Baran Taxol total synthesis - year: 2020 - total synthesis via a two-phase divergent synthetic approach
Li Taxol total synthesis - year: 2021 - total synthesis via B ring closure by forming the C1–C2 bond
Ongoing research efforts are directed at the synthesis of taxadiene and taxadienone intermediates. The synthesis of the related taxanes decinnamoyltaxinine E and taxabaccatin III has also been reported. Semisynthesis The commercial semisynthesis (by Bristol-Myers Squibb) of paclitaxel starting from 10-deacetylbaccatin III (isolated from the European yew) is based on tail addition of the so-called Ojima lactam to its free hydroxyl group. Another commercial semisynthesis (by the company Natural Pharmaceuticals) relies on a group of paclitaxel derivatives isolated from primary ornamental taxanes. These derivatives have the same skeleton as paclitaxel except for the organic residue R of the terminal tail amide group, which can be phenyl, propyl or pentyl (among others), whereas in paclitaxel it is an explicit phenyl group. The semisynthesis consists of conversion of the amide group to an amine with Schwartz's reagent through an imine, followed by acidic workup and a benzoylation. In the production process, Michigan-grown yews, which mature in 8 years, are periodically topped and dried. This material is shipped to Mexico for a first extraction step (10% paclitaxel content) and then to Canada for further purification to 95% purity. The semisynthesis to final product takes place in China. Biosynthesis The biosynthetic pathway to paclitaxel has been investigated and consists of approximately 20 enzymatic steps. The complete scheme is still unavailable. The segments that are known are very different from the synthetic pathways tried thus far (Scheme 1). The starting compound is geranylgeranyl diphosphate 2, which is a dimer of geraniol 1. This compound already contains all the required 20 carbon atoms for the paclitaxel skeleton. Further ring closure through intermediate 3 (taxadiene) leads to taxusin 4. The two main reasons why this type of synthesis is not feasible in the laboratory are that nature does a much better job controlling stereochemistry and a much better job activating a hydrocarbon skeleton with oxygen substituents, for which cytochrome P450 is responsible in some of the oxygenations. Intermediate 5 is called 10-deacetylbaccatin III. A biochemical kilogram-scale production of taxadiene was reported using genetically engineered E. coli in 2011. References and notes External links Paclitaxel Total Syntheses @ SynArchive.com Taxolog for Taxol research, founded by Holton The complete Taxol story from Chemical & Engineering News: Article Extensive Florida State University article Story of taxol total synthesis Total synthesis Taxanes
Paclitaxel total synthesis
[ "Chemistry" ]
1,447
[ "Total synthesis", "Chemical synthesis" ]
3,901,787
https://en.wikipedia.org/wiki/Anomeric%20effect
In organic chemistry, the anomeric effect or Edward-Lemieux effect (after J. T. Edward and Raymond Lemieux) is a stereoelectronic effect that describes the tendency of heteroatomic substituents adjacent to a heteroatom within a cyclohexane ring to prefer the axial orientation instead of the less-hindered equatorial orientation that would be expected from steric considerations. This effect was originally observed in pyranose rings by J. T. Edward in 1955 when studying carbohydrate chemistry. The term anomeric effect was introduced in 1958. The name comes from the term used to designate the lowest-numbered ring carbon of a pyranose, the anomeric carbon. Isomers that differ only in the configuration at the anomeric carbon are called anomers. The anomers of D-glucopyranose are diastereomers, with the beta anomer having a hydroxyl (−OH) group pointing up equatorially, and the alpha anomer having that (−OH) group pointing down axially. The anomeric effect can also be generalized to any cyclohexyl or linear system with the general formula C−Y−C−X, where Y is a heteroatom with one or more lone pairs, and X is an electronegative atom or group. The magnitude of the anomeric effect is estimated at 4–8 kJ/mol in the case of sugars, but is different for every molecule. For example, a methoxy group (−OCH3) on a cyclohexane ring prefers the equatorial position. However, in the corresponding tetrahydropyran ring, the methoxy group prefers the axial position. This is because in the cyclohexane ring, Y = carbon, which is not a heteroatom, so the anomeric effect is not observed and sterics dominate the observed substituent position. In the tetrahydropyran ring, Y = oxygen, which is a heteroatom, so the anomeric effect contributes and stabilizes the observed substituent position. In both cases, X = methoxy group. The anomeric effect is most often observed when Y = oxygen, but can also be seen with other lone-pair-bearing heteroatoms in the ring, such as nitrogen, sulfur, and phosphorus. The exact method by which the anomeric effect causes stabilization is a point of controversy, and several hypotheses have been proposed to explain it. Physical explanation and controversy The physical reason for the anomeric effect is not completely understood. Several, in part conflicting, explanations have been offered and the topic is still not settled. Hyperconjugation Cyclic molecules A widely accepted explanation is that there is a stabilizing interaction (hyperconjugation) between the unshared electron pair on the endocyclic heteroatom (within the sugar ring) and the σ* orbital of the axial (exocyclic) C–X bond. This causes the molecule to align the donating lone pair of electrons antiperiplanar (180°) to the exocyclic C–X σ bond, lowering the overall energy of the system and causing more stability. Some authors also question the validity of this hyperconjugation model based on results from the quantum theory of atoms in molecules. While most studies on the anomeric effect have been theoretical in nature, the n–σ* (hyperconjugation) hypothesis has also been extensively criticized on the basis that the electron density redistribution in acetals proposed by this hypothesis is not congruent with the known experimental chemistry of acetals and, in particular, the chemistry of monosaccharides. Acyclic molecules Hyperconjugation is also found in acyclic molecules containing heteroatoms, another form of the anomeric effect.
If a molecule has an atom with a lone pair of electrons and the adjacent atom is able to accept electrons into its σ* orbital, hyperconjugation occurs, stabilizing the molecule. This forms a "no bond" resonance form. For this orbital overlap to occur, the trans,trans conformation would be preferred for most heteroatoms; however, for the stabilization to occur in dimethoxymethane, the gauche,gauche conformation is about 3–5 kcal/mol lower in energy (more stable) than the trans,trans conformation. This is about two times as big as the effect in sugars, because there are two rotatable bonds (hence the molecule is trans around both bonds or gauche around both) that are affected. Dipole minimization Another accepted explanation for the anomeric effect is that the equatorial configuration has the dipoles involving both heteroatoms partially aligned, and therefore repelling each other. By contrast, the axial configuration has these dipoles roughly opposing, thus representing a more stable and lower-energy state. Both hyperconjugation and dipole minimization contribute to the preferred (Z)-conformation of esters over the (E)-conformation. In the (Z)-conformation the lone pair of electrons on the alpha oxygen can donate into the neighboring σ* C–O orbital. In addition, the dipole is minimized in the (Z)-conformation and maximized in the (E)-conformation. n-n repulsions and C-H hydrogen bonding If the lone pairs of electrons on the oxygens at the anomeric center of 2-methoxypyran are shown, then a brief examination of the conformations of the anomers reveals that the β-anomer always has at least one pair of eclipsing (coplanar 1,3-interacting) lone pairs; this n-n repulsion is a high-energy situation. On the other hand, the α-anomer has conformations in which there are no n-n repulsions, and that is true in the exo-anomeric conformation. The energetically unfavourable n-n repulsion present in the β-anomer, coupled with the energetically favourable hydrogen bond between the axial H-5 and a lone pair of electrons on the axial α-anomeric substituent (C-H/n hydrogen bond), have been suggested [references 7 and 8] to account for most of the energetic difference between the anomers, the anomeric effect. The molecular mechanics program StruMM3D, which is not specially parameterized for the anomeric effect, estimates that the dipolar contributions to the anomeric effect (primarily the n-n repulsion and C-H hydrogen bonding discussed above) are about 1.5 kcal/mol. Influences While the anomeric effect is a general explanation for this type of stabilization for a molecule, the type and amount of stabilization can be affected by the substituents being examined as well as the solvent being studied. Substituent effect In a closed system, there is a difference observed in the anomeric effect for different substituents on a cyclohexane or tetrahydropyran ring (Y = oxygen). When X = OH, the generic anomeric effect can be seen, as previously explained. When X = CN, the same results are seen, where the equatorial position is preferred on the cyclohexane ring, but the axial position is preferred on the tetrahydropyran ring. This is consistent with anomeric-effect stabilization. When X = F, the anomeric effect is in fact observed for both rings. However, when X = NH2, no anomeric-effect stabilization is observed and both systems prefer the equatorial position. This is attributed to both sterics and an effect called the reverse anomeric effect (see below).
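To put the 4–8 kJ/mol magnitude quoted earlier in perspective, a short calculation can convert that free-energy preference into an axial:equatorial population ratio via the Boltzmann relation. This is only an illustrative sketch, not data from the article: the 298 K temperature and the simple two-state equilibrium are assumptions made here.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # assumed temperature, K

def axial_fraction(delta_g_kj):
    """Equilibrium fraction of the axial conformer, given a free-energy
    preference delta_g_kj (kJ/mol) of axial over equatorial (two-state model)."""
    K = math.exp(delta_g_kj * 1000 / (R * T))  # axial/equatorial ratio
    return K / (1 + K)

for dg in (4.0, 6.0, 8.0):
    print(f"ΔG = {dg} kJ/mol -> axial fraction ≈ {axial_fraction(dg):.1%}")
```

Under these assumptions, a 4 kJ/mol preference already corresponds to roughly 83% axial conformer, and 8 kJ/mol to over 96%, which is why anomeric-effect preferences of this size are readily observable.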
Solvent effect One common criticism of the hyperconjugation theory is that it fails to explain why the anomeric effect is not observed when substituted tetrahydropyran molecules are placed in polar solvents, where the equatorial position is once again preferred. It has been shown, however, that hyperconjugation does depend on the solvent in the system. Each of the substituted systems described above was tested in the gas phase (i.e. with no solvent) and in aqueous solution (i.e. a polar solvent). When X = F, the anomeric effect was observed in both media, and the axial position was always preferred. This is attributed to hyperconjugation. When X = OH or CN, the anomeric effect was seen in the gas phase, where the axial position was preferred. However, in aqueous solution, both substituents preferred the equatorial position. This is attributed to the fact that there are more electrostatic repulsions between the axially positioned substituent and the polar solvent, causing the equatorial position to be preferred. When X = NH2, again, no anomeric effect was observed and the equatorial position was always preferred. Overcoming the anomeric effect While the anomeric effect can stabilize molecules, its stabilization has a finite magnitude, and this value can be overcome by other, more destabilizing effects in some cases. In the example of spiroketals, one orientation is stabilized by the hyperconjugative anomeric effect twice, greatly stabilizing that orientation of the molecule. The alternative orientation shows this hyperconjugative anomeric stabilization only once, making it the less preferred structure. However, when substituents are added onto the spiroketal backbone, the preferred structure can change. When a large substituent R added to the spiroketal backbone is forced into the axial position, the resulting strain greatly destabilizes the molecule; in the alternative orientation, R sits in the equatorial position and no longer destabilizes the molecule. Therefore, without substituents the equilibrium favors the doubly anomerically stabilized orientation, while the addition of a large, destabilizing substituent shifts the equilibrium toward the orientation that places that substituent equatorially. Exo anomeric effect An extension of the anomeric effect, the exo anomeric effect is the preference of substituents coming off a ring to adopt the gauche conformation, while sterics would suggest an antiperiplanar conformation would be preferred. An example of this is 2-methoxytetrahydropyran. As the anomeric effect predicts, the methoxy substituent shows an increased preference for the axial conformation. However, there is actually more than one possible axial conformation, due to rotation about the C–O bond between the methoxy substituent and the ring. Applying the principles of the exo anomeric effect predicts that the gauche conformer is preferred, and this prediction is supported by experimental evidence. Furthermore, this preference for the gauche position is still seen in the equatorial conformation. Reverse anomeric effect This term refers to the apparent preference of positively charged nitrogen substituents for the equatorial conformation beyond what normal steric interactions would predict in rings containing an electronegative atom, such as oxygen.
Substituents containing carbons with partial positive charges are not seen to exhibit the same effect. Theoretical explanations for the reverse anomeric effect include an electrostatic explanation and the delocalization of the sp3 electrons of the anomeric carbon and oxygen lone pair. There is some debate as to whether or not this is a real phenomenon. The nitrogen-containing substituents for which it has been reported are quite bulky, making it hard to separate the normal effects of steric bulk from the reverse anomeric effect, if it does exist. For example, a pyridinium substituent at the anomeric position strongly prefers the equatorial position, as steric factors would predict, but actually shows a stronger preference for this conformation than predicted, suggesting that the reverse anomeric effect is contributing. Metallo-anomeric effect Late transition metals from groups 10, 11, and 12, when placed at the anomeric carbon, show strong axial preferences. This phenomenon, termed the metallo-anomeric effect, originates from stabilizing hyperconjugative interactions between oxygen or other heteroatoms with lone pairs and C–M anti-bonding orbitals that act as good acceptors. The generalized metallo-anomeric effect refers to thermodynamic stabilization of synclinal conformers of compounds with the general formula M-CH2-OR. Axial/equatorial preferences can be influenced by the ligands attached to the metal and by its electronic configuration. In general terms, moving from a lighter to a heavier element in the group, the magnitude of the metallo-anomeric effect increases. Furthermore, higher oxidation states favor axial/synclinal conformers. Synthetic applications The anomeric effect is taken into consideration synthetically. Owing to its discovery in sugars, sugar and carbohydrate chemistry is one of the more common synthetic uses of the anomeric effect. For instance, the Koenigs-Knorr glycosidation installs an α-OR or β-OR group with high diastereoselectivity, an outcome influenced by the anomeric effect. Sophorolipid lactone, (+)-Lepicidin A, and (−)-Lithospermoside are a few of the products synthesized via the Koenigs-Knorr glycosidation overcoming the anomeric effect. See also Alkane stereochemistry Anomer Carbohydrate conformation Conformational isomerism Cyclohexane conformation Gauche effect Intramolecular forces Monosaccharide Raymond Lemieux Steric effects References External links Carbohydrate chemistry Physical organic chemistry Carbohydrates Acetals
Anomeric effect
[ "Chemistry" ]
2,974
[ "Biomolecules by chemical classification", "Carbohydrates", "Acetals", "Functional groups", "Organic compounds", "Carbohydrate chemistry", "Physical organic chemistry", "nan", "Chemical synthesis", "Glycobiology" ]
3,901,932
https://en.wikipedia.org/wiki/Thorium%20fuel%20cycle
The thorium fuel cycle is a nuclear fuel cycle that uses an isotope of thorium, ²³²Th, as the fertile material. In the reactor, ²³²Th is transmuted into the fissile artificial uranium isotope ²³³U, which is the nuclear fuel. Unlike natural uranium, natural thorium contains only trace amounts of fissile material (such as ²³¹Pa), which are insufficient to initiate a nuclear chain reaction. Additional fissile material or another neutron source is necessary to initiate the fuel cycle. In a thorium-fuelled reactor, ²³²Th absorbs neutrons to produce ²³³U. This parallels the process in uranium breeder reactors whereby fertile ²³⁸U absorbs neutrons to form fissile ²³⁹Pu. Depending on the design of the reactor and fuel cycle, the generated ²³³U either fissions in situ or is chemically separated from the used nuclear fuel and formed into new nuclear fuel. The thorium fuel cycle has several potential advantages over a uranium fuel cycle, including thorium's greater abundance, superior physical and nuclear properties, reduced plutonium and actinide production, and better resistance to nuclear weapons proliferation when used in a traditional light water reactor, though not in a molten salt reactor. History Concerns about the limits of worldwide uranium resources motivated initial interest in the thorium fuel cycle. It was envisioned that as uranium reserves were depleted, thorium would supplement uranium as a fertile material. However, for most countries uranium was relatively abundant and research in thorium fuel cycles waned. A notable exception was India's three-stage nuclear power programme. In the twenty-first century, thorium's claimed potential for improving proliferation resistance and waste characteristics led to renewed interest in the thorium fuel cycle. While thorium is more abundant in the continental crust than uranium and easily extracted from monazite as a side product of rare earth element mining, it is much less abundant in seawater than uranium. At Oak Ridge National Laboratory in the 1960s, the Molten-Salt Reactor Experiment used ²³³U as the fissile fuel in an experiment to demonstrate a part of the Molten Salt Breeder Reactor, which was designed to operate on the thorium fuel cycle. Molten salt reactor (MSR) experiments assessed thorium's feasibility, using thorium(IV) fluoride dissolved in a molten salt fluid that eliminated the need to fabricate fuel elements. The MSR program was defunded in 1976 after its patron Alvin Weinberg was fired. In 1993, Carlo Rubbia proposed the concept of an energy amplifier or "accelerator driven system" (ADS), which he saw as a novel and safe way to produce nuclear energy that exploited existing accelerator technologies. Rubbia's proposal offered the potential to incinerate high-activity nuclear waste and produce energy from natural thorium and depleted uranium. Kirk Sorensen, former NASA scientist and Chief Technologist at Flibe Energy, has been a long-time promoter of the thorium fuel cycle and particularly liquid fluoride thorium reactors (LFTRs). He first researched thorium reactors while working at NASA, while evaluating power plant designs suitable for lunar colonies. In 2006 Sorensen started "energyfromthorium.com" to promote and make information available about this technology. A 2011 MIT study concluded that although there is little in the way of barriers to a thorium fuel cycle, with current or near-term light-water reactor designs there is also little incentive for any significant market penetration to occur.
As such, they conclude there is little chance of thorium cycles replacing conventional uranium cycles in the current nuclear power market, despite the potential benefits. Nuclear reactions with thorium In the thorium cycle, fuel is formed when ²³²Th captures a neutron (whether in a fast reactor or thermal reactor) to become ²³³Th. This normally emits an electron and an anti-neutrino (ν̄) by β⁻ decay to become ²³³Pa. This then emits another electron and anti-neutrino by a second β⁻ decay to become ²³³U, the fuel (a numerical sketch of this breeding chain is given below):

n + ²³²Th → ²³³Th →(β⁻) ²³³Pa →(β⁻) ²³³U

Fission product waste Nuclear fission produces radioactive fission products which can have half-lives from days to greater than 200,000 years. According to some toxicity studies, the thorium cycle can fully recycle actinide wastes and only emit fission product wastes, and after a few hundred years, the waste from a thorium reactor can be less toxic than the uranium ore that would have been used to produce low enriched uranium fuel for a light water reactor of the same power. Other studies assume some actinide losses and find that actinide wastes dominate thorium cycle waste radioactivity at some future periods. Some fission products have been proposed for nuclear transmutation, which would further reduce the amount of nuclear waste and the duration during which it would have to be stored (whether in a deep geological repository or elsewhere). However, while the principal feasibility of some of those reactions has been demonstrated at laboratory scale, there is, as of 2024, no large-scale deliberate transmutation of fission products anywhere in the world, and the upcoming MYRRHA research project into transmutation is mostly focused on transuranic waste. Furthermore, the cross section of some fission products is relatively low, and others - such as caesium - are present as a mixture of stable, short-lived and long-lived isotopes in nuclear waste, making transmutation dependent on expensive isotope separation. Actinide waste In a reactor, when a neutron hits a fissile atom (such as certain isotopes of uranium), it either splits the nucleus or is captured and transmutes the atom. In the case of ²³³U, the transmutations tend to produce useful nuclear fuels rather than transuranic waste. When ²³³U absorbs a neutron, it either fissions or becomes ²³⁴U. The chance of fissioning on absorption of a thermal neutron is about 92%; the capture-to-fission ratio of ²³³U, therefore, is about 1:12 – which is better than the corresponding capture-to-fission ratios of ²³⁵U (about 1:6), or of ²³⁹Pu or ²⁴¹Pu (both about 1:3). The result is less transuranic waste than in a reactor using the uranium-plutonium fuel cycle. ²³⁴U, like most actinides with an even number of neutrons, is not fissile, but neutron capture produces fissile ²³⁵U. If the fissile isotope fails to fission on neutron capture, it produces ²³⁶U, then ²³⁷Np and ²³⁸Pu, and eventually fissile ²³⁹Pu and heavier isotopes of plutonium. The ²³⁷Np can be removed and stored as waste or retained and transmuted to plutonium, where more of it fissions, while the remainder becomes ²⁴²Pu, then americium and curium, which in turn can be removed as waste or returned to reactors for further transmutation and fission.
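The breeding chain shown above can be made quantitative with a small numerical model. The sketch below integrates the ²³³Th → ²³³Pa → ²³³U chain under a constant, arbitrary neutron-capture rate. The half-lives used (about 22 minutes for ²³³Th, and the roughly 27-day ²³³Pa half-life quoted later in this article) are standard values taken as inputs here; the production rate and irradiation time are purely illustrative assumptions, not reactor data.

```python
import math

# Assumed half-lives: Th-233 ≈ 22.3 min, Pa-233 ≈ 27.0 days.
LAM_TH = math.log(2) / (22.3 * 60)          # decay constant of Th-233, 1/s
LAM_PA = math.log(2) / (27.0 * 24 * 3600)   # decay constant of Pa-233, 1/s

production = 1.0   # arbitrary rate of n + Th-232 -> Th-233 captures (atoms/s)
dt = 60.0          # one-minute time step, small versus both half-lives
th233 = pa233 = u233 = 0.0

for _ in range(200 * 24 * 60):              # integrate 200 days of irradiation
    th233 += (production - LAM_TH * th233) * dt
    pa233 += (LAM_TH * th233 - LAM_PA * pa233) * dt
    u233  += LAM_PA * pa233 * dt            # burnup of the bred U-233 is ignored

print(f"Th-233 inventory: {th233:12.1f} (secular equilibrium {production / LAM_TH:12.1f})")
print(f"Pa-233 inventory: {pa233:12.1f} (secular equilibrium {production / LAM_PA:12.1f})")
print(f"U-233 bred:       {u233:12.1f}")
```

The run illustrates the protactinium bottleneck discussed under Disadvantages below: the short-lived ²³³Th settles at a small equilibrium inventory within hours, while the 27-day ²³³Pa builds up a standing inventory several thousand times larger before the bred ²³³U accumulates.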
However, the ²³¹Pa (with a half-life of about 32,700 years) formed via (n,2n) reactions with ²³²Th (yielding ²³¹Th that decays to ²³¹Pa), while not a transuranic waste, is a major contributor to the long-term radiotoxicity of spent nuclear fuel. While ²³¹Pa can in principle be converted back to fuel by further neutron absorption (eventually yielding fissile ²³²U), its neutron absorption cross section is relatively low, making this rather difficult and possibly uneconomic. Uranium-232 contamination is also formed in this process, via (n,2n) reactions between fast neutrons and ²³²Th, ²³³Pa, and ²³³U: ²³²Th(n,2n)²³¹Th, which decays to ²³¹Pa and after a neutron capture to ²³²Pa; ²³³Pa(n,2n)²³²Pa; and ²³³U(n,2n)²³²U; the ²³²Pa then decays to ²³²U. Unlike most even-numbered heavy isotopes, ²³²U is also a fissile fuel, fissioning just over half the time when it absorbs a thermal neutron. ²³²U has a relatively short half-life (about 69 years), and some decay products emit high-energy gamma radiation, such as ²¹²Bi and particularly ²⁰⁸Tl. The full decay chain, with half-lives and the principal hard gamma emitter, is: ²³²U (69 y, α) → ²²⁸Th (1.9 y, α) → ²²⁴Ra (3.6 d, α) → ²²⁰Rn (56 s, α) → ²¹⁶Po (0.15 s, α) → ²¹²Pb (10.6 h, β⁻) → ²¹²Bi (61 min, β⁻ or α) → ²¹²Po (α) or ²⁰⁸Tl (β⁻, with a 2.6 MeV gamma) → stable ²⁰⁸Pb. ²³²U decays to ²²⁸Th, where it joins the decay chain of ²³²Th. Thorium-cycle fuels produce hard gamma emissions, which damage electronics, limiting their use in bombs. ²³²U cannot be chemically separated from ²³³U in used nuclear fuel; however, chemical separation of thorium from uranium removes the decay product ²²⁸Th and the radiation from the rest of the decay chain, which gradually builds up again as ²²⁸Th reaccumulates. The contamination could also be avoided by using a molten-salt breeder reactor and separating the ²³³Pa before it decays into ²³³U. The hard gamma emissions also create a radiological hazard which requires remote handling during reprocessing. Nuclear fuel As a fertile material, thorium is similar to ²³⁸U, the major part of natural and depleted uranium. The thermal neutron absorption cross section (σa) and resonance integral (average of neutron cross sections over intermediate neutron energies) for ²³²Th are about three and one third times those of the respective values for ²³⁸U. Advantages The primary physical advantage of thorium fuel is that it uniquely makes possible a breeder reactor that runs with slow neutrons, otherwise known as a thermal breeder reactor. These reactors are often considered simpler than the more traditional fast-neutron breeders. Although the thermal neutron fission cross section (σf) of the resulting ²³³U is comparable to those of ²³⁵U and ²³⁹Pu, it has a much lower capture cross section (σγ) than the latter two fissile isotopes, providing fewer non-fissile neutron absorptions and improved neutron economy. The ratio of neutrons released per neutron absorbed (η) in ²³³U is greater than two over a wide range of energies, including the thermal spectrum. A breeding reactor in the uranium–plutonium cycle needs to use fast neutrons, because in the thermal spectrum one neutron absorbed by ²³⁹Pu on average leads to less than two neutrons. Thorium is estimated to be about three to four times more abundant than uranium in Earth's crust, although present knowledge of reserves is limited. Current demand for thorium has been satisfied as a by-product of rare-earth extraction from monazite sands. Notably, there is very little thorium dissolved in seawater, so seawater extraction is not viable, as it is with uranium. Using breeder reactors, known thorium and uranium resources can both generate world-scale energy for thousands of years. Thorium-based fuels also display favorable physical and chemical properties that improve reactor and repository performance. Compared to the predominant reactor fuel, uranium dioxide (UO₂), thorium dioxide (ThO₂) has a higher melting point, higher thermal conductivity, and lower coefficient of thermal expansion. Thorium dioxide also exhibits greater chemical stability and, unlike uranium dioxide, does not further oxidize. Because the ²³³U produced in thorium fuels is significantly contaminated with ²³²U in proposed power reactor designs, thorium-based used nuclear fuel possesses inherent proliferation resistance.
²³²U cannot be chemically separated from ²³³U and has several decay products that emit high-energy gamma radiation. These high-energy photons are a radiological hazard that necessitates the use of remote handling of separated uranium and aids in the passive detection of such materials. The long-term (on the order of roughly 10³ to 10⁶ years) radiological hazard of conventional uranium-based used nuclear fuel is dominated by plutonium and other minor actinides, after which long-lived fission products become significant contributors again. A single neutron capture in ²³⁸U is sufficient to produce transuranic elements, whereas five captures are generally necessary to do so from ²³²Th. 98–99% of thorium-cycle fuel nuclei would fission at either ²³³U or ²³⁵U, so fewer long-lived transuranics are produced. Because of this, thorium is a potentially attractive alternative to uranium in mixed oxide (MOX) fuels to minimize the generation of transuranics and maximize the destruction of plutonium. Disadvantages There are several challenges to the application of thorium as a nuclear fuel, particularly for solid fuel reactors: In contrast to uranium, naturally occurring thorium is effectively mononuclidic and contains no fissile isotopes; fissile material, generally ²³³U, ²³⁵U, or plutonium, must be added to achieve criticality. This, along with the high sintering temperature necessary to make thorium-dioxide fuel, complicates fuel fabrication. Oak Ridge National Laboratory experimented with thorium tetrafluoride as fuel in a molten salt reactor from 1964 to 1969, which was expected to be easier to process and separate from contaminants that slow or stop the chain reaction. In an open fuel cycle (i.e. utilizing ²³³U in situ), higher burnup is necessary to achieve a favorable neutron economy. Although thorium dioxide performed well at burnups of 170,000 MWd/t and 150,000 MWd/t at Fort St. Vrain Generating Station and AVR respectively, challenges complicate achieving this in light water reactors (LWR), which compose the vast majority of existing power reactors. Although in a once-through thorium fuel cycle thorium-based fuels produce far fewer long-lived transuranics than uranium-based fuels, some long-lived actinide products constitute a long-term radiological impact, especially ²³¹Pa and ²³³U. On a closed cycle, ²³¹Pa and ²³³U can be reprocessed. ²³¹Pa is also considered an excellent burnable poison absorber in light water reactors. Another challenge associated with the thorium fuel cycle is the comparatively long interval over which ²³²Th breeds to ²³³U. The half-life of ²³³Pa is about 27 days, which is an order of magnitude longer than the half-life of ²³⁹Np. As a result, substantial ²³³Pa develops in thorium-based fuels. ²³³Pa is a significant neutron absorber and, although it eventually breeds into fissile ²³⁵U, this requires two more neutron absorptions, which degrades neutron economy and increases the likelihood of transuranic production. Alternatively, if solid thorium is used in a closed fuel cycle in which ²³³U is recycled, remote handling is necessary for fuel fabrication because of the high radiation levels resulting from the decay products of ²³²U. This is also true of recycled thorium because of the presence of ²²⁸Th, which is part of the ²³²U decay sequence. Further, unlike proven uranium fuel recycling technology (e.g. PUREX), recycling technology for thorium (e.g. THOREX) is only under development. Although the presence of ²³²U complicates matters, there are public documents showing that ²³³U has been used once in a nuclear weapon test.
The United States tested a composite ²³³U-plutonium bomb core in the MET (Military Effects Test) blast during Operation Teapot in 1955, though with much lower yield than expected. Advocates for liquid-core and molten salt reactors such as LFTRs claim that these technologies negate thorium's disadvantages present in solid-fuelled reactors. As only two liquid-core fluoride salt reactors have been built (the ORNL ARE and MSRE) and neither used thorium, it is hard to validate the exact benefits. Thorium-fueled reactors Thorium-based fuels have been used in several different reactor types, including light water reactors, heavy water reactors, high temperature gas reactors, sodium-cooled fast reactors, and molten salt reactors. List of thorium-fueled reactors From IAEA TECDOC-1450 "Thorium Fuel Cycle – Potential Benefits and Challenges", Table 1: Thorium utilization in different experimental and power reactors. Additionally from Energy Information Administration, "Spent Nuclear Fuel Discharges from U.S. Reactors", Table B4: Dresden 1 Assembly Class. See also Thorium Thorium-232 Occurrence of thorium Thorium-based nuclear power List of countries by thorium resources List of countries by uranium reserves Advanced heavy-water reactor Alvin Radkowsky CANDU reactor Fuji MSR Peak uranium Radioactive waste Thorium Energy Alliance Weinberg Foundation World energy resources and consumption References Further reading Kasten, P. R. (1998). "Review of the Radkowsky Thorium reactor concept", Science & Global Security, 7(3), 237–269. Duncan Clark (9 September 2011), "Thorium advocates launch pressure group. Huge optimism for thorium nuclear energy at the launch of the Weinberg Foundation", The Guardian. B. D. Kuz'minov, V. N. Manokhin (1998), "Status of nuclear data for the thorium fuel cycle", IAEA translation from the Russian journal Yadernye Konstanty (Nuclear Constants), Issue No. 3–4, 1997. Thorium and uranium fuel cycles comparison by the UK National Nuclear Laboratory. Fact sheet on thorium at the World Nuclear Association. Annotated bibliography for the thorium fuel cycle from the Alsos Digital Library for Nuclear Issues. External links International Thorium Energy Committee Nuclear chemistry Nuclear fuels Nuclear reprocessing Nuclear technology Actinides Thorium
Thorium fuel cycle
[ "Physics", "Chemistry" ]
3,365
[ "Nuclear chemistry", "Nuclear technology", "nan", "Nuclear physics" ]
3,902,028
https://en.wikipedia.org/wiki/Coacervate
Coacervate is an aqueous phase rich in macromolecules such as synthetic polymers, proteins or nucleic acids. It forms through liquid-liquid phase separation (LLPS), leading to a dense phase in thermodynamic equilibrium with a dilute phase. The dispersed droplets of the dense phase are also called coacervates, micro-coacervates or coacervate droplets. These structures draw a lot of interest because they form spontaneously from aqueous mixtures and provide stable compartmentalization without the need for a membrane, making them protocell candidates. The term coacervate was coined in 1929 by Dutch chemist Hendrik G. Bungenberg de Jong and Hugo R. Kruyt while studying lyophilic colloidal dispersions. The name is a reference to the clustering of colloidal particles, like bees in a swarm. The concept was later borrowed by Russian biologist Alexander I. Oparin to describe the proteinoid microspheres proposed to be primitive cells (protocells) on early Earth. Coacervate-like protocells are at the core of the Oparin-Haldane hypothesis. A reawakening of coacervate research was seen in the 2000s, starting with the recognition in 2004 by scientists at the University of California, Santa Barbara (UCSB) that some marine invertebrates (such as the sandcastle worm) exploit complex coacervation to produce water-resistant biological adhesives. A few years later, in 2009, the role of liquid-liquid phase separation was further recognized to be involved in the formation of certain membraneless organelles by the biophysicists Clifford Brangwynne and Tony Hyman. Liquid organelles share features with coacervate droplets and fueled the study of coacervates for biomimicry. Thermodynamics Coacervates are a type of lyophilic colloid; that is, the dense phase retains some of the original solvent – generally water – and does not collapse into solid aggregates, rather keeping a liquid property. Coacervates can be characterized as complex or simple based on the driving force for the LLPS: associative or segregative. Associative LLPS is dominated by attractive interactions between macromolecules (such as the electrostatic force between oppositely charged polymers), and segregative LLPS is driven by the minimization of repulsive interactions (such as the hydrophobic effect on proteins containing a disordered region). The thermodynamics of segregative LLPS can be described by a Flory-Huggins polymer mixing model, in which the free energy of mixing per lattice site is ΔmixG/(kBT) = (φ/N) ln φ + (1 − φ) ln(1 − φ) + χφ(1 − φ), with φ the polymer volume fraction, N the chain length and χ the interaction parameter. In ideal polymer solutions, the free energy of mixing (ΔmixG) is negative because the mixing entropy (ΔmixS, combinatorial in the Flory-Huggins approach) is positive and the interaction enthalpies are all taken as equivalent (ΔmixH or χ = 0). In non-ideal solutions, ΔmixH can be different from zero, and the process endothermic enough to overcome the entropic term and favor the de-mixed state. Low molecular-weight solutes will hardly reach such non-ideality, whereas for polymeric solutes, with an increasing number of interaction sites N and therefore a decreasing entropic contribution, simple coacervation is much more likely. The phase diagram of the mixture can be predicted by experimentally determining the two-phase boundary, or binodal curve. In a simplistic theoretical approach, the binodals are the compositions at which the free energy of de-mixing is minimal (∂ΔmixG/∂φ = 0), across different temperatures (or another interaction parameter). Alternatively, the spinodal curve is defined by the compositions at which the curvature of the free energy of de-mixing vanishes (∂²ΔmixG/∂φ² = 0).
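As a concrete illustration of these conditions, the short sketch below evaluates the Flory-Huggins free energy written above and locates the spinodal compositions numerically. The chain length N = 100 and interaction parameter χ = 0.8 are illustrative values chosen here (χ above the critical value for this N), not parameters of any particular coacervate system.

```python
import numpy as np

N, CHI = 100, 0.8   # assumed chain length and interaction parameter

def f_mix(phi):
    """Flory-Huggins free energy of mixing per lattice site, in units of kT."""
    return phi / N * np.log(phi) + (1 - phi) * np.log(1 - phi) + CHI * phi * (1 - phi)

def d2f(phi):
    """Second derivative of f_mix; the spinodal is where this vanishes."""
    return 1 / (N * phi) + 1 / (1 - phi) - 2 * CHI

# Scan compositions and find sign changes of the curvature (spinodal points).
phi = np.linspace(1e-4, 1 - 1e-4, 200_000)
spinodal = phi[np.nonzero(np.diff(np.sign(d2f(phi))))[0]]

chi_crit = 0.5 * (1 + 1 / np.sqrt(N)) ** 2   # critical point of this model
print(f"critical chi = {chi_crit:.3f}; spinodal compositions ≈ {spinodal}")
```

Between the two spinodal compositions the mixture is unstable and de-mixes spontaneously; between a spinodal and the corresponding binodal it is metastable and phase-separates by nucleation and growth, matching the mechanisms described next.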
The conditions of the mixture in comparison to the two curves define the phase separation mechanism: nucleation-growth of coacervate droplets (when the binodal region is crossed slowly) and spinodal decomposition. Associative LLPS is more complex to describe, as both solute polymers are present in the dilute and the dense phase. Electrostatic-based complex coacervates are the most common, and in that case the solutes are two polyelectrolytes of opposite charge. The Voorn-Overbeek approach applies the Debye-Hückel approximation to the enthalpic term in the Flory-Huggins model, and considers two polyelectrolytes of the same length and at the same concentration. Complex coacervates are a subset of aqueous two-phase systems (ATPS), which also include segregatively separated systems in which both phases are enriched in one type of polymer. Coacervates in biology Membraneless organelles (MLOs), also known as biomolecular condensates, are a form of cell compartmentalization. Unlike classic membrane-bound organelles (e.g. mitochondrion, nucleus or lysosome), MLOs are not separated from their surroundings by a lipid bilayer. MLOs are mostly composed of proteins and nucleic acids, held together by weak intermolecular forces. MLOs are present in the cytoplasm (e.g. stress granules, processing bodies) and in the nucleus (e.g. nucleolus, nuclear speckles). They have been shown to serve various functions: they can store and protect cellular material during stress conditions, they participate in gene expression and they are involved in the control of signal transduction. It is now widely believed that MLOs form through LLPS. This was first proposed after observing that Cajal bodies and P granules show liquid-like properties, and was later confirmed by showing that liquid condensates can be reconstituted from purified protein and RNA in vitro. However, whether MLOs should be referred to as liquids remains disputable. Even if initially they are liquid-like, over time some of them mature into solids (gel-like or even crystalline, depending on the extent of spatial ordering within the condensate). Many proteins participating in the formation of MLOs contain so-called intrinsically disordered regions (IDRs), parts of the polypeptide chain that can adopt multiple secondary structures and form random coils in solution. IDRs can provide the interactions responsible for LLPS, but over time conformational changes (sometimes promoted by mutations or post-translational modifications) may lead to the formation of higher-ordered structures and solidification of MLOs. Some MLOs serve their biological role as solid particles (e.g. the Balbiani body, stabilised by β-sheet structure), but in many cases transformation from liquid to solid results in the formation of pathological aggregates. Examples of both liquid-liquid phase separating and aggregation-prone proteins include FUS, TDP-43 and hnRNPA1. Aggregates of these proteins are associated with neurodegenerative diseases (e.g. amyotrophic lateral sclerosis or frontotemporal dementia). History At the start of the 20th century, scientists had become interested in the stability of colloids, both the dispersions of solid particles and the solutions of polymeric molecules. It was known that salts and temperature could often be used to cause flocculation of a colloid. The German chemist F. W. Tiebackx reported in 1911 that flocculation could also be induced in certain polymer solutions by mixing them together.
In particular, he reported the observation of opalescence (a turbid mixture) when equal volumes of acidified 0.5% "washed" gelatine solution and 2% gum arabic solution were mixed. Tiebackx did not further analyse the nature of the flocs, but it is likely that this was an example of complex coacervation. Dutch chemist H. G. Bungenberg-de Jong reported in his PhD thesis (Utrecht, 1921) two types of flocculation in agar solutions: one that leads to a suspensoid state, and one that leads to an emulsoid state. He observed the emulsoid state under the microscope and described small particles that merged into larger particles (Thesis, p. 82), most likely a description of coalescing coacervate droplets. Several years later, in 1929, Bungenberg-de Jong published a seminal paper with his PhD advisor, H. R. Kruyt, entitled "Coacervation. Partial miscibility in colloid systems". In their paper, they give many more examples of colloid systems that flocculate into an emulsoid state, either by varying the temperature, by adding salts or co-solvents, or by mixing together two oppositely charged polymer colloids, and they illustrate their observations with the first microscope pictures of coacervate droplets. They term this phenomenon coacervation, derived from the prefix co- and the Latin word acervus (heap), which relates to the dense liquid droplets. Coacervation is thus loosely translated as 'to come together in a heap'. Since then, Bungenberg-de Jong and his research group in Leiden published a range of papers on coacervates, including results on self-coacervation, salt effects, interfacial tension, multiphase coacervates and surfactant-based coacervates. In the meantime, Russian chemist Alexander Oparin published a pioneering work in which he laid out his protocell theory on the origin of life. In his initial protocell model, Oparin took inspiration from Graham's description of colloids from 1861 as substances that usually give cloudy solutions and cannot pass through membranes. Oparin linked these properties to the protoplasm, and reasoned that precipitates of colloids form as clots or lumps of mucus or jelly, some of which have structural features that resemble the protoplasm. According to Oparin, protocells could therefore have formed by precipitation of colloids. In his later work, Oparin became more specific about his protocell model. He described the work of Bungenberg-de Jong on coacervates in his book from 1938, and postulated that the first protocells were coacervates. Other researchers followed, and in the 1930s and 1940s various examples of coacervation were reported, by Bungenberg-de Jong, Oparin, Koets, Bank, Langmuir and others. In the 1950s and 1960s, focus shifted to a theoretical description of the phenomenon of (complex) coacervation. Voorn and Overbeek developed the first mean-field theory to describe coacervation. They estimated the total free energy of mixing as a sum of mixing entropy terms and mean-field electrostatic interactions in a Debye-Hückel approximation. Veis and Aranyi suggested extending this model with an electrostatic aggregation step in which charge-paired symmetrical soluble aggregates are formed, followed by phase separation into liquid droplets. In the decades after that, until about 2000, scientific interest in coacervates had faded. Oparin's theory on the role of coacervates in the origin of life had been replaced by interest in the RNA world hypothesis.
Renewed interest in coacervates originated as scientists recognized the relevance and versatility of the interactions that underlie complex coacervation in the natural fabrication of biological materials and in their self-assembly. Since 2009, coacervates have become linked to membraneless organelles, and there has been a renewed interest in coacervates as protocells. Coacervates hypothesis for the origin of life Russian biochemist Aleksander Oparin and British biologist J. B. S. Haldane independently hypothesized in the 1920s that the first cells in early Earth's oceans could be, in essence, coacervate droplets. Haldane used the term primordial soup to refer to the dilute mixture of organic molecules that could have built up as a result of reactions between inorganic building blocks such as ammonia, carbon dioxide and water, in the presence of UV light as an energy source. Oparin proposed that simple building blocks with increasing complexity could organize locally, or self-assemble, to form protocells with living properties. He performed experiments based on Bungenberg de Jong's colloidal aggregates (coacervates) to encapsulate proteinoids and enzymes within protocells. Work by chemists Sidney Fox, Kaoru Harada, Stanley Miller and Harold Urey further strengthened the theory that inorganic building blocks could increase in complexity and give rise to cell-like structures. The Oparin-Haldane hypothesis established the foundations of research on the chemistry of abiogenesis, but the lipid-world and RNA-world scenarios have gained more attention since the 1980s with the work of Morowitz, Luisi and Szostak. Recently, however, there has been a rising interest in coacervates as protocells, resonating with current findings that reactions too slow or unlikely in aqueous solutions can be significantly favored in such membraneless compartments. See also Protocell Artificial cell References Colloidal chemistry Polymer chemistry Origin of life
Coacervate
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
2,791
[ "Colloidal chemistry", "Origin of life", "Materials science", "Surface science", "Colloids", "Polymer chemistry", "Biological hypotheses" ]
3,904,336
https://en.wikipedia.org/wiki/Heat%20kernel
In the mathematical study of heat conduction and diffusion, a heat kernel is the fundamental solution to the heat equation on a specified domain with appropriate boundary conditions. It is also one of the main tools in the study of the spectrum of the Laplace operator, and is thus of some auxiliary importance throughout mathematical physics. The heat kernel represents the evolution of temperature in a region whose boundary is held fixed at a particular temperature (typically zero), such that an initial unit of heat energy is placed at a point at time $t = 0$. Definition The most well-known heat kernel is the heat kernel of $d$-dimensional Euclidean space $\mathbb{R}^d$, which has the form of a time-varying Gaussian function,
$$K(t,x,y) = \frac{1}{(4\pi t)^{d/2}} \exp\left(-\frac{|x-y|^2}{4t}\right),$$
which is defined for all $x, y \in \mathbb{R}^d$ and $t > 0$. This solves the heat equation
$$\frac{\partial K}{\partial t}(t,x,y) = \Delta_x K(t,x,y), \qquad \lim_{t \to 0} K(t,x,y) = \delta(x-y),$$
where $\delta$ is a Dirac delta distribution and the limit is taken in the sense of distributions, that is, for every function $\phi$ in the space of smooth functions with compact support, we have
$$\lim_{t \to 0} \int_{\mathbb{R}^d} K(t,x,y)\,\phi(y)\,dy = \phi(x).$$
On a more general domain $\Omega$ in $\mathbb{R}^d$, such an explicit formula is not generally possible. The next simplest cases of a disc or square involve, respectively, Bessel functions and Jacobi theta functions. Nevertheless, the heat kernel still exists and is smooth for $t > 0$ on arbitrary domains and indeed on any Riemannian manifold with boundary, provided the boundary is sufficiently regular. More precisely, in these more general domains, the heat kernel is the solution of the initial boundary value problem
$$\frac{\partial K}{\partial t} = \Delta K \ \text{for } t > 0, \qquad K(t,x,y) = 0 \ \text{for } x \in \partial\Omega, \qquad \lim_{t \to 0} K(t,x,y) = \delta(x-y).$$
Spectral theory It is not difficult to derive a formal expression for the heat kernel on an arbitrary domain. Consider the Dirichlet problem in a connected domain (or manifold with boundary) $\Omega$. Let $\lambda_n$ be the eigenvalues for the Dirichlet problem of the Laplacian
$$\Delta \phi + \lambda \phi = 0 \ \text{in } \Omega, \qquad \phi = 0 \ \text{on } \partial\Omega.$$
Let $\phi_n$ denote the associated eigenfunctions, normalized to be orthonormal in $L^2(\Omega)$. The inverse Dirichlet Laplacian is a compact and selfadjoint operator, and so the spectral theorem implies that the eigenvalues satisfy
$$0 < \lambda_1 \le \lambda_2 \le \lambda_3 \le \cdots, \qquad \lambda_n \to \infty.$$
The heat kernel has the following expression:
$$K(t,x,y) = \sum_{n=1}^{\infty} e^{-\lambda_n t}\, \phi_n(x)\, \phi_n(y).$$
Formally differentiating the series under the sign of the summation shows that this should satisfy the heat equation. However, convergence and regularity of the series are quite delicate. The heat kernel is also sometimes identified with the associated integral transform, defined for compactly supported smooth $f$ by
$$(e^{t\Delta} f)(x) = \int_{\Omega} K(t,x,y)\, f(y)\, dy.$$
The spectral mapping theorem gives a representation of $e^{t\Delta}$ in the form
$$e^{t\Delta} f = \sum_{n=1}^{\infty} e^{-\lambda_n t}\, \langle f, \phi_n \rangle\, \phi_n.$$
There are several geometric results on heat kernels on manifolds; say, short-time asymptotics, long-time asymptotics, and upper/lower bounds of Gaussian type. See also Heat kernel signature Minakshisundaram–Pleijel zeta function Mehler kernel Notes References Heat conduction Spectral theory Parabolic partial differential equations
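To make the spectral expansion above concrete, the sketch below assembles the Dirichlet heat kernel on the interval (0, π), where the eigenfunctions and eigenvalues are known in closed form (φ_n(x) = √(2/π) sin(nx), λ_n = n²), and checks that for small t, away from the boundary, it approaches the free-space Gaussian kernel. The truncation order and the evaluation point are arbitrary choices made for this illustration.

```python
import numpy as np

def heat_kernel(t, x, y, n_max=2000):
    """Truncated spectral expansion of the Dirichlet heat kernel on (0, pi):
    K(t,x,y) = sum_n exp(-n^2 t) * (2/pi) * sin(n x) * sin(n y)."""
    n = np.arange(1, n_max + 1)
    return (2 / np.pi) * np.sum(np.exp(-n**2 * t) * np.sin(n * x) * np.sin(n * y))

# For small t, with x and y far from the boundary relative to the diffusion
# length sqrt(4t), the kernel should match the free-space Gaussian in d = 1.
t, x, y = 1e-3, 1.3, 1.4
spectral = heat_kernel(t, x, y)
gaussian = np.exp(-(x - y)**2 / (4 * t)) / np.sqrt(4 * np.pi * t)
print(f"spectral sum: {spectral:.6f}   free-space Gaussian: {gaussian:.6f}")
```

At t = 10⁻³ the two values agree to many digits, since the diffusion length √(4t) ≈ 0.06 is far smaller than the distance from the points to the boundary; as t grows, the boundary condition makes the Dirichlet kernel fall below the Gaussian.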
Heat kernel
[ "Physics", "Chemistry" ]
533
[ "Heat conduction", "Thermodynamics" ]
3,904,342
https://en.wikipedia.org/wiki/Zeotropic%20mixture
A zeotropic mixture, or non-azeotropic mixture, is a mixture with liquid components that have different boiling points. For example, nitrogen, methane, ethane, propane, and isobutane constitute a zeotropic mixture. Individual substances within the mixture do not evaporate or condense at the same temperature, as a single substance would. In other words, the mixture has a temperature glide, as the phase change occurs in a temperature range of about four to seven degrees Celsius, rather than at a constant temperature. On temperature-composition graphs, this temperature glide can be seen as the temperature difference between the bubble point and dew point. For zeotropic mixtures, the temperatures on the bubble (boiling) curve lie between the individual components' boiling temperatures. When a zeotropic mixture is boiled or condensed, the composition of the liquid and the vapor changes according to the mixture's temperature-composition diagram. Zeotropic mixtures have different characteristics in nucleate and convective boiling, as well as in the organic Rankine cycle. Because zeotropic mixtures have different properties than pure fluids or azeotropic mixtures, zeotropic mixtures have many unique applications in industry, namely in distillation, refrigeration, and cleaning processes. Dew and bubble points In mixtures of substances, the bubble point is the saturated liquid temperature, whereas the saturated vapor temperature is called the dew point. Because the bubble and dew lines of a zeotropic mixture's temperature-composition diagram do not intersect, a zeotropic mixture in its liquid phase has a different fraction of a component than the gas phase of the mixture. On a temperature-composition diagram, after a mixture in its liquid phase is heated to the temperature on the bubble (boiling) curve, the fraction of a component in the mixture changes along an isothermal line connecting the dew curve to the boiling curve as the mixture boils. At any given temperature, the composition of the liquid is the composition at the bubble point, whereas the composition of the vapor is the composition at the dew point. Unlike for azeotropic mixtures, there is no azeotropic point at any temperature on the diagram where the bubble and dew lines would intersect. Thus, the composition of the mixture will always change between the bubble- and dew-point component fractions upon boiling from a liquid to a gas, until the mass fraction of a component reaches 1 (i.e. the zeotropic mixture is completely separated into its pure components). As shown in Figure 1, the mole fraction of component 1 decreases from 0.4 to around 0.15 as the liquid mixture boils to the gas phase. A worked bubble- and dew-point calculation for a binary mixture is sketched at the end of this section. Temperature glides Different zeotropic mixtures have different temperature glides. For example, the zeotropic mixture R152a/R245fa has a higher temperature glide than R21/R245fa. A larger gap between the boiling points creates a larger temperature glide between the boiling curve and dew curve at a given mass fraction. However, with any zeotropic mixture, the temperature glide decreases when the mass fraction of a component approaches 1 or 0 (i.e. when the mixture is almost separated into its pure components), because the boiling and dew curves get closer near these mass fractions. A larger difference in boiling points between the substances also affects the dew and bubble curves of the graph. A larger difference in boiling points creates a larger shift in mass fractions when the mixture boils at a given temperature.
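The following sketch estimates the bubble point, dew point, and temperature glide of an equimolar benzene/toluene mixture at atmospheric pressure, using Raoult's law with Antoine vapor-pressure correlations. The Antoine constants are quoted from memory and the ideal-solution assumption is a simplification, so treat the numbers as illustrative rather than design data.

```python
import math

# Antoine equation: log10(P_mmHg) = A - B / (C + T_celsius).
# Constants for benzene and toluene quoted from memory -- verify before reuse.
ANTOINE = {
    "benzene": (6.90565, 1211.033, 220.790),
    "toluene": (6.95464, 1344.800, 219.480),
}

def psat(component, t):
    a, b, c = ANTOINE[component]
    return 10 ** (a - b / (c + t))

def solve(f, lo, hi, tol=1e-6):
    """Simple bisection root finder for f on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

P = 760.0       # total pressure, mmHg
x1 = y1 = 0.5   # benzene fraction in the liquid (bubble) / vapor (dew)

# Bubble point: sum of partial pressures equals total pressure.
bubble = solve(lambda t: x1 * psat("benzene", t) + (1 - x1) * psat("toluene", t) - P, 60, 120)
# Dew point: liquid mole fractions computed from Raoult's law must sum to 1.
dew = solve(lambda t: y1 * P / psat("benzene", t) + (1 - y1) * P / psat("toluene", t) - 1, 60, 120)
print(f"bubble ≈ {bubble:.1f} °C, dew ≈ {dew:.1f} °C, glide ≈ {dew - bubble:.1f} °C")
```

For this pair the glide comes out to roughly 7 °C, consistent with the several-degree glides described above, and it shrinks toward zero as x1 approaches 0 or 1, just as the temperature-glide discussion predicts.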
Zeotropic vs. azeotropic mixtures Azeotropic and zeotropic mixtures have different dew and bubble curve characteristics in a temperature-composition graph. Namely, azeotropic mixtures have dew and bubble curves that intersect, but zeotropic mixtures do not. In other words, zeotropic mixtures have no azeotropic points. An azeotropic mixture that is near its azeotropic point has negligible zeotropic behavior and is near-azeotropic rather than zeotropic. Zeotropic mixtures differ from azeotropic mixtures in that the vapor and liquid phases of an azeotropic mixture have the same fraction of constituents. This is due to the constant boiling point of the azeotropic mixture. Boiling When superheating a substance, nucleate pool boiling and convective flow boiling occur when the temperature of the surface used to heat the liquid exceeds the liquid's boiling point by the wall superheat. Nucleate pool boiling The characteristics of pool boiling are different for zeotropic mixtures than for pure fluids. For example, the minimum superheating needed to achieve this boiling is greater for zeotropic mixtures than for pure liquids, because of the different proportions of individual substances in the liquid versus gas phases of the zeotropic mixture. Zeotropic mixtures and pure liquids also have different critical heat fluxes. In addition, the heat transfer coefficients of zeotropic mixtures are less than the ideal values predicted from the coefficients of the pure liquids. This decrease in heat transfer occurs because the heat transfer coefficients of zeotropic mixtures do not increase proportionately with the mass fractions of the mixture's components. Convective flow boiling Zeotropic mixtures have different characteristics in convective boiling than pure substances or azeotropic mixtures. Overall, zeotropic mixtures transfer heat more efficiently at the bottom of the fluid, whereas pure and azeotropic substances transfer heat better at the top. During convective flow boiling, the thickness of the liquid film is less at the top of the film than at the bottom because of gravity. In the case of pure liquids and azeotropic mixtures, this decrease in thickness causes a decrease in the resistance to heat transfer. Thus, more heat is transferred and the heat transfer coefficient is higher at the top of the film. The opposite occurs for zeotropic mixtures. The decrease in film thickness near the top causes the component in the mixture with the higher boiling point to decrease in mass fraction. Thus, the resistance to mass transfer increases near the top of the liquid. Less heat is transferred, and the heat transfer coefficient is lower than at the bottom of the liquid film. Because the bottom of the liquid transfers heat better, boiling the zeotropic mixture requires a lower wall temperature near the bottom than at the top. Heat transfer coefficient From low cryogenic to room temperatures, the heat transfer coefficients of zeotropic mixtures are sensitive to the mixture's composition, the diameter of the boiling tube, heat and mass fluxes, and the roughness of the surface. In addition, diluting the zeotropic mixture reduces the heat transfer coefficient. Decreasing the pressure when boiling the mixture only increases the coefficient slightly. Using grooved rather than smooth boiling tubes increases the heat transfer coefficient. Distillation The ideal case of distillation uses zeotropic mixtures.
Zeotropic fluid and gaseous mixtures can be separated by distillation because of the difference in boiling points between the component mixtures. This process involves the use of vertically arranged distillation columns (see Figure 2). Distillation columns When separating zeotropic mixtures with three or more liquid components, each distillation column removes only the lowest-boiling-point component and the highest-boiling-point component. In other words, each column separates two components purely. If three substances are separated with a single column, the substance with the intermediate boiling point will not be purely separated, and a second column would be needed. To separate mixtures consisting of multiple substances, a sequence of distillation columns must be used. This multi-step distillation process is also called rectification. In each distillation column, pure components form at the top (rectifying section) and bottom (stripping section) of the column when the starting liquid (called the feed composition) is released in the middle of the column. This is shown in Figure 2. At a certain temperature, the component with the lowest boiling point (called the distillate or overhead fraction) vaporizes and collects at the top of the column, whereas the component with the highest boiling point (called the bottoms or bottom fraction) collects at the bottom of the column. In a zeotropic mixture, where more than one component exists, individual components move relative to each other as vapor flows up and liquid falls down. The separation of mixtures can be seen in a concentration profile, in which the position of a vapor in the distillation column is plotted against the concentration of the vapor. The component with the highest boiling point has a maximum concentration at the bottom of the column, whereas the component with the lowest boiling point has a maximum concentration at the top of the column. The component with the intermediate boiling point has a maximum concentration in the middle of the distillation column. Because of how these mixtures separate, mixtures with three or more substances require more than one distillation column to separate the components. Distillation configurations Many configurations can be used to separate mixtures into the same products, though some schemes are more efficient, and different column sequencings are used to achieve different needs. For example, a zeotropic mixture ABC can first be separated into A and BC before BC is separated into B and C. On the other hand, mixture ABC can first be separated into AB and C, and AB can then be separated into A and B. These two configurations are sharp-split configurations, in which the intermediate-boiling substance does not contaminate each separation step. On the other hand, the mixture ABC could first be separated into AB and BC, and then split into A, B, and C in the same column. This is a non-sharp-split configuration, in which the substance with the intermediate boiling point is present in different mixtures after a separation step.
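The two sharp-split options for the three-component mixture ABC above are a special case of a standard combinatorial result: the number of distinct sharp-split column sequences for an n-component mixture is the Catalan number C(n−1). This count is a well-known sequencing result rather than something stated in this article, and the sketch below simply evaluates it.

```python
from math import comb

def sharp_split_sequences(n_components: int) -> int:
    """Number of distinct sharp-split column sequences for an n-component
    zeotropic mixture; equals the Catalan number C_(n-1)."""
    m = n_components - 1
    return comb(2 * m, m) // (m + 1)

for n in range(2, 8):
    print(f"{n} components -> {sharp_split_sequences(n)} sharp-split sequences")
```

The rapid growth of this count (2 sequences for three components, 5 for four, 14 for five) is one reason the column sequencing addressed in the next section matters for energy and capital costs.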
Efficiency optimization When designing distillation processes for separating zeotropic mixtures, the sequencing of the distillation columns is vital to saving energy and costs. In addition, other methods can be used to lower the energy or equipment costs required to distill zeotropic mixtures, including combining distillation columns, using side columns, combining main columns with side columns, and re-using waste heat in the system. Combining two distillation columns reduces the energy consumed to roughly that of a single column rather than of both columns operating separately. Using side columns saves energy by preventing different columns from carrying out the same separation of mixtures. Combining main and side columns saves equipment costs by reducing the number of heat exchangers in the system. Re-using waste heat requires that the amount and temperature level of the waste heat match those of the heat needed; thus, using waste heat requires changing the pressure inside the evaporators and condensers of the distillation system in order to control the temperatures needed. Controlling the temperature levels in a part of a system is possible with pinch technology. These energy-saving techniques have wide application in the industrial distillation of zeotropic mixtures: side columns have been used to refine crude oil, and the combination of main and side columns is increasingly used. Examples of zeotropic mixtures Examples of the distillation of zeotropic mixtures can be found in industry. Refining crude oil is an example of multi-component distillation that has been used for more than 75 years; crude oil is separated into five components with main and side columns in a sharp-split configuration. Ethylene is likewise separated from methane and ethane for industrial purposes using multi-component distillation. Separating aromatic substances requires extractive distillation, for example when distilling a zeotropic mixture of benzene, toluene, and p-xylene. Refrigeration Zeotropic mixtures used in refrigeration are assigned a number in the 400 series as part of the refrigerant nomenclature, identifying their components and proportions, whereas azeotropic mixtures are assigned a number in the 500 series. According to ASHRAE, refrigerant names start with 'R' followed by a series of numbers—the 400 series if the mixture is zeotropic, the 500 series if it is azeotropic—followed by uppercase letters that denote the composition. Research has proposed zeotropic mixtures as substitutes for halogenated refrigerants because of the harmful effects that hydrochlorofluorocarbons (HCFCs) and chlorofluorocarbons (CFCs) have on the ozone layer and global warming. Researchers have focused on new mixtures that have the same properties as past refrigerants in order to phase out harmful halogenated substances, in accordance with the Montreal Protocol and Kyoto Protocol. For example, researchers found that the zeotropic mixture R-404A can replace R-12, a CFC, in household refrigerators. However, there are some technical difficulties in using zeotropic mixtures, including leakage and the high temperature glide associated with substances of different boiling points, though the temperature glide can be matched to the temperature difference between the two refrigerants when exchanging heat, increasing efficiency. Replacing pure refrigerants with mixtures calls for more research on the environmental impact as well as the flammability and safety of refrigerant mixtures.
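The ASHRAE numbering rule just described is easy to encode; a deliberately simplified sketch, limited to the 400/500-series distinction mentioned above:

def mixture_type(designation):
    # ASHRAE R-numbers: 400-series blends are zeotropic, 500-series azeotropic.
    # Trailing uppercase letters (e.g. the "A" in R-404A) denote the composition.
    number = int("".join(ch for ch in designation.lstrip("R-") if ch.isdigit()))
    if 400 <= number < 500:
        return "zeotropic blend"
    if 500 <= number < 600:
        return "azeotropic blend"
    return "not a blend designation"

print(mixture_type("R-404A"))  # zeotropic blend
print(mixture_type("R-507"))   # azeotropic blend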
Organic Rankine cycle In the organic Rankine cycle (ORC), zeotropic mixtures are more thermally efficient than pure working fluids. Due to their higher boiling points, zeotropic working fluids have higher net outputs of energy at the low temperatures of the Rankine cycle than pure substances. Zeotropic working fluids condense across a range of temperatures, allowing external heat exchangers to recover the heat of condensation as a heat source for the Rankine cycle. The changing temperature of the zeotropic working fluid can be matched to that of the fluid being heated or cooled to recover waste heat, because the mixture's evaporation occurs across a temperature glide (see pinch analysis). R21/R245fa and R152a/R245fa are two examples of zeotropic working fluids that can absorb more heat than pure R245fa due to their increased boiling points. The power output increases with the proportion of R152a in R152a/R245fa, and R21/R245fa uses less heat and energy than R245fa. Overall, the zeotropic mixture R21/R245fa has better thermodynamic properties than pure R245fa or R152a/R245fa as a working fluid in the ORC. Cleaning processes Zeotropic mixtures can be used as solvents in cleaning processes in manufacturing. Cleaning processes that use zeotropic mixtures include cosolvent processes and bisolvent processes. Cosolvent and bisolvent processes In a cosolvent system, two miscible fluids with different boiling points are mixed to create a zeotropic mixture. The first fluid is a solvating agent that dissolves soil in the cleaning process. This fluid is an organic solvent with a low boiling point and a flash point greater than the system's operating temperature. After the solvent mixes with the soil, the second fluid, a hydrofluoroether (HFE) rinsing agent, rinses off the solvating agent. The solvating agent itself can be flammable, because its mixture with the HFE is nonflammable. In bisolvent cleaning processes, the rinsing agent is kept separate from the solvating agent, which makes the solvating and rinsing agents more effective because they are not diluted. Cosolvent systems are used for heavy oils, waxes, greases and fingerprints, and can remove heavier soils than processes that use pure or azeotropic solvents. Cosolvent systems are flexible in that different proportions of the substances in the zeotropic mixture can be used to satisfy different cleaning purposes. For example, increasing the proportion of solvating agent to rinsing agent in the mixture increases the solvency, so such mixtures are used for removing heavier soils. The operating temperature of the system depends on the boiling point of the mixture, which in turn depends on the proportions of these agents in the zeotropic mixture. Because the components of zeotropic mixtures have different boiling points, the cleaning sump and the rinse sump contain different ratios of the solvating and rinsing agents; the lower-boiling solvating agent is not found in the rinse sump because of the large difference in boiling points between the agents. Examples of zeotropic solvents Mixtures containing HFC-43-10mee can replace CFC-113 and perfluorocarbons (PFCs) as solvents in cleaning systems because HFC-43-10mee does not harm the ozone layer, unlike CFC-113 and PFCs. Various mixtures of HFC-43-10mee are commercially available for a variety of cleaning purposes. Examples of zeotropic solvents in cleaning processes include: Zeotropic mixtures of HFC-43-10mee and hexamethyldisiloxane, which can dissolve silicones, are highly compatible with polycarbonates and polyurethane, and can be used to remove silicone lubricant from medical devices. Zeotropic mixtures of HFC-43-10mee and isopropanol, which can remove ions and water from materials without porous surfaces; this zeotropic mixture helps with absorption drying. Zeotropic mixtures of HFC-43-10mee, fluorosurfactant, and antistatic additives, which are energy-efficient and environmentally safe drying fluids that provide spot-free drying. See also List of refrigerants Azeotrope References Chemical engineering thermodynamics
Zeotropic mixture
[ "Chemistry", "Engineering" ]
3,748
[ "Chemical engineering", "Chemical engineering thermodynamics" ]
3,904,828
https://en.wikipedia.org/wiki/Chemical%20fouling%20inhibitors
Chemical fouling inhibitors are products that are mixtures of fouling and corrosion inhibitors used in boiler feedwater treatment. Several of these products use aliphatic polyamines to coat the surface of pipes. Helamin Helamin is a boiler feedwater treatment based on amines and polyamines. Helamin is a registered trademark of Helamin Technology Holding SA, Switzerland. Patents have been obtained for Helamin products, and as of 2016 the following patents exist: EP1045045, JP4663046, HK1032080, BR9903614. Chemically, most of the Helamin types are stated by the manufacturer to be a "mixture of polyamines and polycarboxylates in aqueous solution", but some also utilize volatile amines, ammonia, polyelectrolytes, organic polymers, and scavengers of dissolved oxygen. In contrast to conventional water-treatment methods, its action is based on preventive protection of the surfaces. Helamin forms a film (i.e., it is one of numerous available "filming amines") that prevents corrosion and fouling on the water-side walls in steam boilers and piping systems, owing to the affinity of Helamin for metal and oxide surfaces. Crystals that form in the presence of Helamin remain isolated and do not tend to agglomerate, so deposit consolidation is inhibited. Already existing oxide surface deposits are gradually removed, and a fine, liquid sludge, which is easier to remove from the boiler walls, develops in the boiler. Helamin does not significantly decompose even at the high temperatures and pressures employed in modern sub-critical water power-plant boilers. Helamin treatment can be successfully employed in steam generators, warm and hot water piping systems, and superheaters, as well as cooling circuits, to mitigate some of the difficult problems of corrosion and fouling. However, the cation conductivity of the water tends to increase with the use of Helamin. Fineamin Fineamin is an anticorrosion water-treatment technology based on filming polyamines and dispersive polymers. The amine-based products are manufactured in Switzerland by h2o facilities SA, Geneva, which is ISO 9001:2015 and ISO 14001:2015 certified. Chemically, the Fineamin products are described by the manufacturer as a "mixture of polyamines and polycarboxylates in aqueous solution", but some also contain volatile and neutralizing amines, organic polymers and/or organic oxygen scavengers (DEHA). Fineamin reacts by forming a protective, homogeneous film on all metal surfaces, improving the existing magnetite layer and acting as a barrier against water carryover and residual oxygen. It prevents contact of the electrolyte with the metal surface without reducing heat transfer, while any crystals that do form in its presence remain isolated, and any tendency toward accumulation is inhibited. Existing corrosion products and deposits are dispersed and gently removed. Fineamin treatment is used against corrosion and fouling in steam boilers, warm and hot water piping systems, and superheaters, as well as cooling circuits. Fineamin is marketed as an environmentally friendly technology and does not significantly decompose even at the high temperatures and pressures required by modern power-plant boilers. It can be used in steam-water circuits with pressures up to 220 bar and temperatures up to 540 °C owing to its very low degradation rate. Fineamin generates ammonia and acetate in almost insignificant quantities – as low as 1 ppb for 1 ppm of dosed product. However, the cation conductivity of the water tends to increase with the use of Fineamin.
The treatment also has an alkalizing effect on the boiler feedwater and steam (the pH is maintained at optimal values). Fineamin was developed in accordance with TÜV requirements and holds the following certifications: Readily biodegradable, with a biodegradability rate of 90%, per Eurofins Ecotoxicologie France (test report no. 20FER6-1175, 2020/12/09); Safe for district heating – independently tested by the Hygiene-Institut des Ruhrgebiets (DIN EN 1717 and DIN 1988-100) and approved as fit for use in district heating systems and domestic hot water production; Acceptable for treating boilers, steam lines and/or cooling systems in the food industry (G6 – this product is acceptable for treating boilers or steam lines (G6) where the steam produced may contact edible products) – NSF registration no. 165458. See also Boiler feedwater Dispersants Passivation (chemistry) References Fouling Steam boilers Corrosion prevention Water treatment
Chemical fouling inhibitors
[ "Chemistry", "Materials_science", "Engineering", "Environmental_science" ]
988
[ "Corrosion prevention", "Water treatment", "Corrosion", "Water pollution", "Environmental engineering", "Water technology", "Materials degradation", "Fouling" ]
3,905,860
https://en.wikipedia.org/wiki/Environmental%20stress%20fracture
In materials science, environmental stress fracture or environment-assisted fracture is the generic name given to the premature failure, under the combined influence of tensile stresses and harmful environments, of materials such as metals and alloys, composites, plastics and ceramics. Metals and alloys exhibit phenomena such as stress corrosion cracking, hydrogen embrittlement, liquid metal embrittlement and corrosion fatigue, all of which come under this category. Environments such as moist air, sea water and corrosive liquids and gases cause environmental stress fracture. Metal matrix composites are also susceptible to many of these processes. Plastics and plastic-based composites may suffer swelling, debonding and loss of strength when exposed to organic fluids and other corrosive environments, such as acids and alkalis. Under the combined influence of stress and environment, many structural materials, particularly high-specific-strength ones, become brittle and lose their resistance to fracture. While their fracture toughness remains unaltered, their threshold stress intensity factor for crack propagation may be considerably lowered. Consequently, they become prone to premature fracture through sub-critical crack growth. This article gives a brief overview of the various degradation processes mentioned above. Stress corrosion cracking Stress corrosion cracking is a phenomenon in which the synergistic action of corrosion and tensile stress leads to brittle fracture of normally ductile materials at generally lower stress levels. During stress corrosion cracking, the material is relatively unattacked by the corrosive agent (no general corrosion, only localized corrosion), but fine cracks form within it. This process has serious implications for the use of the material, because the applicable safe stress levels are drastically reduced in the corrosive medium. Season cracking and caustic embrittlement are two stress corrosion cracking processes which affected the serviceability of brass cartridge cases and riveted steel boilers, respectively. Hydrogen embrittlement Small quantities of hydrogen present inside certain metallic materials make them brittle and susceptible to sub-critical crack growth under stress. Hydrogen embrittlement may occur as a side effect of electroplating processes. Delayed failure, the fracture of a component under stress after an elapsed time, is a characteristic feature of hydrogen embrittlement (2). Hydrogen may enter the material during plating, pickling, phosphating, melting, casting or welding. Corrosion during service in moist environments generates hydrogen, part of which may enter the metal as atomic hydrogen (H•) and cause embrittlement. The presence of a tensile stress, either inherent or externally applied, is necessary for metals to be damaged. As in the case of stress corrosion cracking, hydrogen embrittlement may also lead to a decrease in the threshold stress intensity factor for crack propagation or an increase in the sub-critical crack growth velocity of the material. The most visible effect of hydrogen in materials is a drastic reduction in ductility during tensile tests. It may increase, decrease or leave unaffected the yield strength of the material. Hydrogen may also cause serrated yielding in certain metals such as niobium, nickel and some steels (3).
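The effect of a lowered threshold can be made concrete with a short sketch comparing the applied stress intensity factor against an environmental threshold K_ISCC. The expression K_I = Y·σ·√(π·a) is the standard form for a crack of length a under stress σ; the numerical values and the geometry factor Y below are illustrative assumptions, not data from this article:

from math import pi, sqrt

def stress_intensity(stress_mpa, crack_m, y=1.0):
    # K_I = Y * sigma * sqrt(pi * a); result in MPa*sqrt(m)
    return y * stress_mpa * sqrt(pi * crack_m)

K_IC = 60.0    # fracture toughness, MPa*sqrt(m) (illustrative value)
K_ISCC = 15.0  # lowered threshold in the aggressive environment (illustrative)

k = stress_intensity(stress_mpa=200.0, crack_m=0.002)  # 2 mm crack at 200 MPa
print(f"K_I = {k:.1f} MPa*sqrt(m)")
if k >= K_IC:
    print("fast fracture")
elif k >= K_ISCC:
    print("below K_IC yet above K_ISCC: sub-critical crack growth in the environment")
else:
    print("no crack growth expected")

With these numbers K_I is about 15.9 MPa·√m: safely below the unaltered toughness, yet above the environmental threshold, which is precisely the regime of premature, sub-critical failure described above.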
Case studies One of the worst disasters caused by stress corrosion cracking was the collapse of the Silver Bridge, West Virginia, in 1967, when a single brittle crack formed by rusting grew to criticality. The crack was in one of the eyebar links of one of the suspension chains, and the whole joint failed quickly by overload. The failure escalated and the whole bridge collapsed in less than a minute, killing 46 drivers and passengers who were on the bridge at the time. See also References Mars G. Fontana, Corrosion Engineering, 3rd Edition, McGraw-Hill, Singapore, 1987 A. R. Troiano, Trans. American Society for Metals, 52 (1960), 54 T. K. G. Namboodhiri, Trans. Indian Institute of Metals, 37 (1984), 764 A. S. Tetelman, Fundamental Aspects of Stress Corrosion Cracking, eds. R. W. Staehle, A. J. Forty and D. Van Rooyan, National Association of Corrosion Engineers, Houston, Texas, (1967), 446 N. J. Petch and P. Stables, Nature, 169 (1952), 842 R. A. Oriani, Berichte der Bunsen-Gesellschaft für physikalische Chemie, 76 (1972), 705 C. D. Beachem, Metall. Trans., 3 (1972), 437 D. G. Westlake, Trans. ASM, 62 (1969), 1000 Corrosion Fracture mechanics
Environmental stress fracture
[ "Chemistry", "Materials_science", "Engineering" ]
933
[ "Structural engineering", "Fracture mechanics", "Metallurgy", "Materials science", "Corrosion", "Electrochemistry", "Materials degradation" ]
3,906,633
https://en.wikipedia.org/wiki/Slip%20%28materials%20science%29
In materials science, slip is the large displacement of one part of a crystal relative to another part along crystallographic planes and directions. Slip occurs by the passage of dislocations on close-packed planes, which are the planes containing the greatest number of atoms per unit area, and in close-packed directions (the directions with the most atoms per unit length). Close-packed planes are known as slip or glide planes. A slip system describes the set of symmetrically identical slip planes and the associated family of slip directions for which dislocation motion can easily occur and lead to plastic deformation. The magnitude and direction of slip are represented by the Burgers vector, b. An external force makes parts of the crystal lattice glide along each other, changing the material's geometry. A critical resolved shear stress is required to initiate slip. Slip systems Face-centered cubic crystals Slip in face-centered cubic (fcc) crystals occurs along the close-packed plane. Specifically, the slip plane is of type {111} and the slip direction is of type ⟨110⟩; a specific example is the (111) plane with the [1̄10] direction. Given the permutations of the slip plane types and direction types, fcc crystals have 12 slip systems. In the fcc lattice, the norm of the Burgers vector, b, can be calculated as |b| = (a/2)·√(h² + k² + l²), where a is the lattice constant of the unit cell and ⟨hkl⟩ is the slip direction. Body-centered cubic crystals Slip in body-centered cubic (bcc) crystals also occurs along the plane of the shortest Burgers vector; however, unlike fcc, there are no truly close-packed planes in the bcc crystal structure. Thus, slip in bcc requires thermal activation. Some bcc materials (e.g. α-Fe) can contain up to 48 slip systems: there are six slip planes of type {110}, each with two ⟨111⟩ directions (12 systems), and 24 {123} and 12 {112} planes, each with one ⟨111⟩ direction (36 systems, for a total of 48). Although the number of possible slip systems is much higher in bcc crystals than in fcc crystals, the ductility is not necessarily higher, because of increased lattice friction stresses. While the {123} and {112} planes are not exactly identical in activation energy to {110}, they are so close in energy that for all intents and purposes they can be treated as identical. A specific example is the (110) plane with the [1̄11] direction. Hexagonal close-packed crystals Slip in hexagonal close-packed (hcp) metals is much more limited than in bcc and fcc crystal structures. Usually, hcp crystal structures allow slip on the densely packed basal {0001} planes along the ⟨112̄0⟩ directions. The activation of other slip planes depends on various parameters, e.g. the c/a ratio. Since there are only 2 independent slip systems on the basal planes, additional slip or twin systems need to be activated for arbitrary plastic deformation. This typically requires a much higher resolved shear stress and can result in the brittle behavior of some hcp polycrystals. However, other hcp materials such as pure titanium show large amounts of ductility. Cadmium, zinc, magnesium, titanium, and beryllium have a slip plane at {0001} and a slip direction of ⟨112̄0⟩. This creates a total of three slip systems, depending on orientation. Other combinations are also possible.
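Which slip system activates first under load is governed by the critical resolved shear stress mentioned above, via the Schmid factor cos φ · cos λ. A minimal sketch for the twelve fcc {111}⟨110⟩ systems; the loading axis chosen here is an arbitrary illustrative assumption:

import itertools, math

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# the 4 distinct {111} plane normals, and all 12 signed <110>-type directions
planes = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
dirs = [d for d in itertools.product((-1, 0, 1), repeat=3) if sorted(map(abs, d)) == [0, 1, 1]]

load = unit((1, 2, 3))  # arbitrary uniaxial loading axis (illustrative)
best = 0.0
for n in planes:
    for d in dirs:
        if dot(n, d) != 0:  # the slip direction must lie in the slip plane
            continue
        m = abs(dot(load, unit(n))) * abs(dot(load, unit(d)))  # Schmid factor
        best = max(best, m)
print(f"largest Schmid factor: {best:.3f}")  # this system reaches the critical resolved shear stress first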
There are two types of dislocations in crystals that can induce slip: edge dislocations and screw dislocations. Edge dislocations have the direction of the Burgers vector perpendicular to the dislocation line, while screw dislocations have the direction of the Burgers vector parallel to the dislocation line. The type of dislocation generated largely depends on the direction of the applied stress, the temperature, and other factors. Screw dislocations can easily cross-slip from one plane to another if the other slip plane contains the direction of the Burgers vector. Slip band Formation of slip bands indicates concentrated unidirectional slip on certain planes, which causes a stress concentration. Typically, slip bands induce surface steps (i.e. roughness due to persistent slip bands during fatigue) and a stress concentration that can act as a crack nucleation site. Slip bands extend until impinged by a boundary, and the stress generated from the dislocation pile-up against that boundary will either stop or transmit the operating slip. Slip bands formed under cyclic loading are referred to as persistent slip bands (PSBs), while those formed under monotonic loading are referred to as dislocation planar arrays (or simply slip bands). Slip bands can be viewed simply as boundary sliding due to dislocation glide, without the complexity of the strong plastic-deformation localisation of PSBs, which is manifested by tongue- and ribbon-like extrusions. Whereas PSBs are normally studied with the (effective) Burgers vector aligned with the extrusion plane, because a PSB extends across the grain and intensifies during fatigue, a monotonic slip band has one Burgers vector for propagation and another for plane extrusion, both controlled by the conditions at the tip. Identification of slip activity The main methods used to identify the active slip system involve either slip trace analysis of single crystals or polycrystals, diffraction techniques such as neutron diffraction and high angular resolution electron backscatter diffraction elastic strain analysis, or transmission electron microscopy diffraction imaging of dislocations. In slip trace analysis, only the slip plane is measured, and the slip direction is inferred. In zirconium, for example, this enables the identification of slip activity on a basal, prism, or 1st/2nd-order pyramidal plane. In the case of a 1st-order pyramidal plane trace, the slip could be in either ⟨a⟩ or ⟨c + a⟩ directions; slip trace analysis cannot discriminate between these. Diffraction-based studies measure the residual dislocation content instead of the slipped dislocations, which is only a good approximation for systems that accumulate networks of geometrically necessary dislocations, such as face-centred cubic polycrystals. In low-symmetry crystals such as hexagonal zirconium, there can be regions of predominantly single slip where geometrically necessary dislocations may not necessarily accumulate. Residual dislocation content also does not distinguish between glissile and sessile dislocations: glissile dislocations contribute to slip and hardening, but sessile dislocations contribute only to latent hardening. Moreover, diffraction methods cannot generally resolve the slip plane of a residual dislocation. For example, in Zr, the screw components of ⟨a⟩ dislocations could slip on prismatic, basal, or 1st-order pyramidal planes; similarly, ⟨c + a⟩ screw dislocations could slip on either 1st- or 2nd-order pyramidal planes. See also Miller indices Persistent slip bands References External links An online tutorial on slip, explained on DoITPoMS Materials science
Slip (materials science)
[ "Physics", "Materials_science", "Engineering" ]
1,498
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
3,908,752
https://en.wikipedia.org/wiki/5-Methyluridine%20triphosphate
5-Methyluridine triphosphate, or m5UTP, is one of five nucleoside triphosphates. It is the ribonucleoside triphosphate corresponding to thymidine, but the nomenclature "5-methyluridine" is used because, by convention, the term "thymidine triphosphate" refers to the deoxyribonucleoside. References Nucleotides Phosphate esters
5-Methyluridine triphosphate
[ "Chemistry", "Biology" ]
91
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
3,908,779
https://en.wikipedia.org/wiki/SOX%20gene%20family
SOX genes (SRY-related HMG-box genes) encode a family of transcription factors that bind to the minor groove of DNA and belong to a super-family of genes characterized by a homologous sequence called the HMG box (for high mobility group). This HMG box is a DNA-binding domain that is highly conserved throughout eukaryotic species. Homologues have been identified in insects, nematodes, amphibians, reptiles, birds and a range of mammals. However, HMG boxes can be very diverse in nature, with only a few amino acids being conserved between species. Sox genes are defined as containing the HMG box of SRY, a gene involved in sex determination that resides on the Y chromosome. There are 20 SOX genes present in humans and mice, and 8 present in Drosophila. Almost all Sox genes show at least 50% amino acid similarity with the HMG box in Sry. The family is divided into subgroups according to homology within the HMG domain and other structural motifs, as well as according to functional assays. The developmentally important Sox family has no single function, and many members possess the ability to regulate several different aspects of development. While many Sox genes are involved in sex determination, some are also important in processes such as neuronal development. For example, Sox2 and Sox3 are involved in the transition of epithelial granule cells in the cerebellum to their migratory state. Sox2 is also a transcription factor involved in the maintenance of pluripotency in both early embryos and embryonic stem (ES) cells. Granule cells then differentiate into granule neurons, with Sox11 being involved in this process. It is thought that some Sox genes may be useful in the early diagnosis of childhood brain tumours due to this sequential expression in the cerebellum, making them a target for significant research. Sox proteins bind to the sequence WWCAAW and similar sequences (W = A or T). They have weak binding specificity and unusually low affinity for DNA. Sox genes are related to the Tcf/Lef1 group of genes, which also contain a sequence-specific high mobility group and have a similar sequence specificity (roughly TWWCAAAG). Groups Sox genes are classified into groups. Sox genes from different groups share little similarity outside the DNA-binding domain. In mouse and human the members of the groups are: SoxA: SRY SoxB1: SOX1, SOX2, SOX3 SoxB2: SOX14, SOX21 SoxC: SOX4, SOX11, SOX12 SoxD: SOX5, SOX6, SOX13 SoxE: SOX8, SOX9, SOX10 SoxF: SOX7, SOX17, SOX18 SoxG: SOX15 SoxH: SOX30 See also Body plan Evolutionary developmental biology FOX proteins Hox gene Pax genes References External links NCBI CDD: cd01388 (SOX-TCF_HMG-box); human proteins Gene families Transcription factors
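As a supplement to the entry above, the WWCAAW consensus it quotes (W = A or T) maps directly onto a simple pattern scan; a minimal sketch in which the example sequence is made up for illustration:

import re

# W (IUPAC code) = A or T; the Sox consensus WWCAAW becomes the regex below
MOTIF = re.compile(r"[AT][AT]CAA[AT]")

seq = "GGATTCAATCGTTGAACAAAGCC"  # made-up example sequence
for m in MOTIF.finditer(seq):
    print(m.start(), m.group())  # prints each match position and the matched site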
SOX gene family
[ "Chemistry", "Biology" ]
624
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
37,214,007
https://en.wikipedia.org/wiki/Neoboletus%20pseudosulphureus
Neoboletus pseudosulphureus is a species of bolete fungus in the family Boletaceae. It is found in Europe, Central America, North America, and India, where it grows in deciduous and mixed forests. Initially uniformly yellow in color, all external surfaces of the fruit body undergo a variety of discolorations as it matures. Habitat and distribution The fungus is known from Europe, eastern North America, and Costa Rica, where it fruits on the ground in deciduous and mixed forests, usually in a mycorrhizal association with oak but occasionally with pine. It was reported from Himachal Pradesh, India, for the first time in 1993. Taxonomy The fungus was first described scientifically by the German mycologist Franz Joseph Kallenbach in 1923, from collections made in Germany. A year later, Kallenbach published a more thorough description. Some authors have historically considered Boletus junquilleus—a species described by Lucien Quélet in 1897—to be a synonym, first Gilbert and Leclair in 1942, and then Rolf Singer in 1947. The confusion between the two arises over the amount of red pigmentation in the pores near the stem and on the base of the stem. Reid has suggested that the differences are due only to climatic conditions, with the red colors appearing in conditions of lower temperature. The species was transferred to the genus Neoboletus in 2015. Description The mushroom has a cushion-shaped to convex cap measuring wide. The cap color is bright yellow when young, fading to dull yellow or tan when mature, and usually develops orange or reddish discolorations. The pore surface is initially bright yellow before turning greenish yellow to brownish yellow. The stem is long by thick, and somewhat thicker near the base. Although it is usually not reticulate, the upper part of the stem may have reticulations. All parts of the mushroom stain blue to bluish black when injured. The stark color changes that occur over the lifespan of the fruit body led one author to suggest that "the mushroom's personal grooming skills go to hell in a handbasket". A variety, N. pseudosulphureus var. pallidus, found in Nova Scotia, is pale yellow with a lighter-colored olive spore print compared to the nominate variety. Similar species Orton compared the similar Neoboletus junquilleus, concluding that it could be distinguished from N. pseudosulphureus by the following features: red-orange pores near the stem (compared to completely yellow); red color in the stem base (compared to yellow or brownish); and a red-punctate stem (compared to yellow-punctate). Occurrence in the UK N. pseudosulphureus is an extremely rare species in the UK, one of five boletes assessed as "Endangered" by the JNCC, which estimates there are only 130 mature fruiting individuals in the UK. References External links pseudosulphureus Fungi described in 1923 Fungi of India Fungi of Europe Fungi of Central America Fungi of North America Fungus species
Neoboletus pseudosulphureus
[ "Biology" ]
628
[ "Fungi", "Fungus species" ]
37,214,939
https://en.wikipedia.org/wiki/History%20of%20genetic%20engineering
Genetic engineering is the science of manipulating the genetic material of an organism. The concept of genetic engineering was first proposed by Nikolay Timofeev-Ressovsky in 1934. The first artificial genetic modification accomplished using biotechnology was transgenesis, the process of transferring genes from one organism to another, first accomplished by Herbert Boyer and Stanley Cohen in 1973. It was the result of a series of advances in techniques that allowed the direct modification of the genome. Important advances included the discovery of restriction enzymes and DNA ligases, the ability to design plasmids, and technologies like the polymerase chain reaction and sequencing. Transformation of the DNA into a host organism was accomplished with the invention of biolistics, Agrobacterium-mediated recombination and microinjection. The first genetically modified animal was a mouse created in 1974 by Rudolf Jaenisch. In 1976 the technology was commercialised, with the advent of genetically modified bacteria that produced somatostatin, followed by insulin in 1978. In 1983 an antibiotic-resistance gene was inserted into tobacco, leading to the first genetically engineered plant. Advances followed that allowed scientists to manipulate and add genes to a variety of different organisms and induce a range of different effects. Plants were first commercialized with virus-resistant tobacco released in China in 1992. The first genetically modified food was the Flavr Savr tomato, marketed in 1994. By 2010, 29 countries had planted commercialized biotech crops. In 2000 a paper published in Science introduced golden rice, the first food developed with increased nutrient value. Agriculture Genetic engineering is the direct manipulation of an organism's genome using certain biotechnology techniques that have only existed since the 1970s. Human-directed genetic manipulation, however, began much earlier, with the domestication of plants and animals through artificial selection. The dog is believed to be the first animal domesticated, possibly arising from a common ancestor of the grey wolf, with archeological evidence dating to about 12,000 BC. Other carnivores domesticated in prehistoric times include the cat, which cohabited with humans 9,500 years ago. Archeological evidence suggests sheep, cattle, pigs and goats were domesticated between 9,000 BC and 8,000 BC in the Fertile Crescent. The first evidence of plant domestication comes from emmer and einkorn wheat found in pre-Pottery Neolithic A villages in Southwest Asia dated to about 10,500 to 10,100 BC. The Fertile Crescent of Western Asia, Egypt, and India were sites of the earliest planned sowing and harvesting of plants that had previously been gathered in the wild. Independent development of agriculture occurred in northern and southern China, Africa's Sahel, New Guinea and several regions of the Americas. The eight Neolithic founder crops (emmer wheat, einkorn wheat, barley, peas, lentils, bitter vetch, chick peas and flax) had all appeared by about 7,000 BC. Horticulture first appears in the Levant during the Chalcolithic period, about 6,800 to 6,300 BC. Because soft tissues rarely survive, archeological evidence for early vegetables is scarce; the earliest vegetable remains have been found in Egyptian caves that date back to the 2nd millennium BC. Selective breeding of domesticated plants was once the main way early farmers shaped organisms to suit their needs.
Charles Darwin described three types of selection: methodical selection, wherein humans deliberately select for particular characteristics; unconscious selection, wherein a characteristic is selected simply because it is desirable; and natural selection, wherein a trait that helps an organism survive better is passed on. Early breeding relied on unconscious and natural selection; when methodical selection was first introduced is unknown. Common characteristics that were bred into domesticated plants include grains that did not shatter (allowing easier harvesting), uniform ripening, shorter lifespans that translate to faster growing, loss of toxic compounds, and productivity. Some plants, like the banana, could be propagated by vegetative cloning. The offspring often did not contain seeds and were therefore sterile; however, they were usually juicier and larger. Propagation through cloning allows such mutant varieties to be cultivated despite their lack of seeds. Hybridization was another way that rapid changes in a plant's makeup were introduced. It often increased vigor and combined desirable traits. Hybridization most likely first occurred when humans began growing similar yet slightly different plants in close proximity. Triticum aestivum, the wheat used in baking bread, is an allopolyploid; its creation is the result of two separate hybridization events. Grafting can transfer chloroplasts, mitochondrial DNA and even the entire cell nucleus containing the genome, potentially making a new species, which makes grafting a form of natural genetic engineering. X-rays were first used to deliberately mutate plants in 1927, and between 1927 and 2007 more than 2,540 genetically mutated plant varieties were produced using x-rays. Genetics Various genetic discoveries have been essential to the development of genetic engineering. Genetic inheritance was first discovered by Gregor Mendel in 1865, following experiments crossing peas. Although largely ignored for 34 years, his work provided the first evidence of hereditary segregation and independent assortment. In 1889 Hugo de Vries came up with the name "(pan)gene" after postulating that particles are responsible for the inheritance of characteristics, and the term "genetics" was coined by William Bateson in 1905. In 1928 Frederick Griffith proved the existence of a "transforming principle" involved in inheritance, which Avery, MacLeod and McCarty later (1944) identified as DNA. Edward Lawrie Tatum and George Wells Beadle developed the central dogma that genes code for proteins in 1941. The double-helix structure of DNA was identified by James Watson and Francis Crick in 1953. As well as the discovery of how DNA works, tools had to be developed that allowed it to be manipulated. In 1970 Hamilton Smith's lab discovered restriction enzymes, which allowed DNA to be cut at specific places and separated out on an electrophoresis gel. This enabled scientists to isolate genes from an organism's genome. DNA ligases, which join broken DNA together, had been discovered earlier, in 1967, and by combining the two enzymes it was possible to "cut and paste" DNA sequences to create recombinant DNA. Plasmids, discovered in 1952, became important tools for transferring information between cells and replicating DNA sequences. Frederick Sanger developed a method for sequencing DNA in 1977, greatly increasing the genetic information available to researchers.
The polymerase chain reaction (PCR), developed by Kary Mullis in 1983, allowed small sections of DNA to be amplified and aided the identification and isolation of genetic material. As well as manipulating DNA, techniques had to be developed for its insertion (known as transformation) into an organism's genome. Griffith's experiment had already shown that some bacteria had the ability to naturally take up and express foreign DNA. Artificial competence was induced in Escherichia coli in 1970, when Morton Mandel and Akiko Higa showed that it could take up bacteriophage λ after treatment with calcium chloride solution (CaCl2). Two years later, Stanley Cohen showed that CaCl2 treatment was also effective for the uptake of plasmid DNA. Transformation using electroporation was developed in the late 1980s, increasing the efficiency and the range of bacteria that could be transformed. In 1907 a bacterium that caused plant tumors, Agrobacterium tumefaciens, was discovered, and in the early 1970s the tumor-inducing agent was found to be a DNA plasmid called the Ti plasmid. By removing the genes in the plasmid that caused the tumor and adding novel genes, researchers were able to infect plants with A. tumefaciens and let the bacteria insert their chosen DNA into the genomes of the plants. Early genetically modified organisms In 1972 Paul Berg used restriction enzymes and DNA ligases to create the first recombinant DNA molecules. He combined DNA from the monkey virus SV40 with that of the lambda virus. Herbert Boyer and Stanley Norman Cohen took Berg's work a step further and introduced recombinant DNA into a bacterial cell. Cohen was researching plasmids, while Boyer's work involved restriction enzymes. They recognised the complementary nature of their work and teamed up in 1972. Together they found a restriction enzyme that cut the pSC101 plasmid at a single point and were able to insert and ligate into the gap a gene that conferred resistance to the antibiotic kanamycin. Cohen had previously devised a method whereby bacteria could be induced to take up a plasmid, and using this they were able to create a bacterium that survived in the presence of kanamycin. This represented the first genetically modified organism. They repeated the experiments, showing that other genes could be expressed in bacteria, including one from the frog Xenopus laevis, the first cross-kingdom transformation. In 1974 Rudolf Jaenisch created a transgenic mouse by introducing foreign DNA into its embryo, making it the world's first transgenic animal. Jaenisch was studying mammalian cells infected with simian virus 40 (SV40) when he happened to read a paper from Beatrice Mintz describing the generation of chimera mice. He took his SV40 samples to Mintz's lab and injected them into early mouse embryos, expecting tumours to develop. The mice appeared normal, but after using radioactive probes he discovered that the virus had integrated itself into the mouse genome. However, the mice did not pass the transgene to their offspring. In 1981 the laboratories of Frank Ruddle, Frank Constantini and Elizabeth Lacy injected purified DNA into a single-cell mouse embryo and showed transmission of the genetic material to subsequent generations. The first genetically engineered plant was tobacco, reported in 1983. It was developed by Michael W. Bevan, Richard B. Flavell and Mary-Dell Chilton by creating a chimeric gene that joined an antibiotic-resistance gene to the Ti plasmid from Agrobacterium.
The tobacco was infected with Agrobacterium transformed with this plasmid, resulting in the chimeric gene being inserted into the plant. Through tissue culture techniques, a single tobacco cell was selected that contained the gene, and a new plant was grown from it. Regulation The development of genetic engineering technology led to concerns in the scientific community about potential risks. The development of a regulatory framework concerning genetic engineering began in 1975, at Asilomar, California. The Asilomar meeting recommended a set of guidelines regarding the cautious use of recombinant technology and any products resulting from that technology. The Asilomar recommendations were voluntary, but in 1976 the US National Institutes of Health (NIH) formed a recombinant DNA advisory committee. This was followed by other regulatory offices (the United States Department of Agriculture (USDA), Environmental Protection Agency (EPA) and Food and Drug Administration (FDA)), effectively making all recombinant DNA research tightly regulated in the US. In 1982 the Organisation for Economic Co-operation and Development (OECD) released a report into the potential hazards of releasing genetically modified organisms into the environment, as the first transgenic plants were being developed. As the technology improved and genetically modified organisms moved from model organisms to potential commercial products, the US established a committee at the Office of Science and Technology Policy (OSTP) to develop mechanisms to regulate the developing technology. In 1986 the OSTP assigned regulatory approval of genetically modified plants in the US to the USDA, FDA and EPA. In the late 1980s and early 1990s, guidance on assessing the safety of genetically engineered plants and food emerged from organizations including the FAO and WHO. The European Union first introduced laws requiring GMOs to be labelled in 1997. In 2013 Connecticut became the first state to enact a labeling law in the US, although it would not take effect until other states followed suit. Research and medicine The ability to insert, alter or remove genes in model organisms allowed scientists to study the genetic elements of human diseases. Genetically modified mice were created in 1984 that carried cloned oncogenes that predisposed them to developing cancer. The technology has also been used to generate mice with genes knocked out. The first recorded knockout mouse was created by Mario R. Capecchi, Martin Evans and Oliver Smithies in 1989. In 1992 oncomice with tumor suppressor genes knocked out were generated. Creating knockout rats is much harder and only became possible in 2003. After the discovery of microRNA in 1993, RNA interference (RNAi) has been used to silence an organism's genes. By modifying an organism to express microRNA targeted to its endogenous genes, researchers have been able to knock out or partially reduce gene function in a range of species. The ability to partially reduce gene function has allowed the study of genes that are lethal when completely knocked out. Other advantages of using RNAi include the availability of inducible and tissue-specific knockout. In 2007 microRNA targeted to insect and nematode genes was expressed in plants, leading to suppression when the pests fed on the transgenic plant, potentially creating a new way to control pests. Targeting endogenous microRNA expression has allowed further fine-tuning of gene expression, supplementing the more traditional gene knockout approach.
Genetic engineering has been used to produce proteins derived from humans and other sources in organisms that normally cannot synthesize these proteins. Human insulin-synthesising bacteria were developed in 1979 and were first used as a treatment in 1982. In 1988 the first human antibodies were produced in plants. In 2000 vitamin A-enriched golden rice became the first food with increased nutrient value. Further advances As not all plant cells were susceptible to infection by A. tumefaciens, other methods were developed, including electroporation, micro-injection and particle bombardment with a gene gun (invented in 1987). In the 1980s techniques were developed to introduce isolated chloroplasts back into a plant cell that had had its cell wall removed. With the introduction of the gene gun in 1987 it became possible to integrate foreign genes into a chloroplast. Genetic transformation has become very efficient in some model organisms. In 1998 genetically modified seeds were produced in Arabidopsis thaliana by simply dipping the flowers in an Agrobacterium solution. The range of plants that can be transformed has increased as tissue culture techniques have been developed for different species. The first transgenic livestock were produced in 1985, by micro-injecting foreign DNA into rabbit, sheep and pig eggs. The first animals to synthesise transgenic proteins in their milk were mice, engineered to produce human tissue plasminogen activator. This technology was then applied to sheep, pigs, cows and other livestock. In 2010 scientists at the J. Craig Venter Institute announced that they had created the first synthetic bacterial genome. The researchers added the new genome to bacterial cells and selected for cells that contained the new genome. To do this, the cells undergo a process called resolution, where during bacterial cell division one new cell receives the original DNA genome of the bacteria, whilst the other receives the new synthetic genome. When this cell replicates, it uses the synthetic genome as its template. The resulting bacterium the researchers developed, named Synthia, was the world's first synthetic life form. In 2014 a bacterium was developed that replicated a plasmid containing an unnatural base pair. This required altering the bacterium so it could import the unnatural nucleotides and then efficiently replicate them. The plasmid retained the unnatural base pairs when it doubled an estimated 99.4% of the time. This was the first organism engineered to use an expanded genetic alphabet. In 2015 CRISPR and TALENs were used to modify plant genomes. Chinese labs used them to create a fungus-resistant wheat and boost rice yields, while a U.K. group used them to tweak a barley gene that could help produce drought-resistant varieties. When used to precisely remove material from DNA without adding genes from other species, the result is not subject to the lengthy and expensive regulatory process associated with GMOs. While CRISPR may use foreign DNA to aid the editing process, the second generation of edited plants contains none of that DNA. Researchers celebrated the acceleration because it may allow them to "keep up" with rapidly evolving pathogens. The U.S. Department of Agriculture stated that some examples of gene-edited corn, potatoes and soybeans are not subject to existing regulations. As of 2016, other review bodies had yet to make statements.
Commercialisation In 1976 Genentech, the first genetic engineering company, was founded by Herbert Boyer and Robert Swanson; a year later the company produced a human protein (somatostatin) in E. coli. Genentech announced the production of genetically engineered human insulin in 1978. In 1980 the U.S. Supreme Court, in the Diamond v. Chakrabarty case, ruled that genetically altered life could be patented. The insulin produced by bacteria, branded Humulin, was approved for release by the Food and Drug Administration in 1982. In 1983 a biotech company, Advanced Genetic Sciences (AGS), applied for U.S. government authorization to perform field tests with the ice-minus strain of P. syringae to protect crops from frost, but environmental groups and protestors delayed the field tests for four years with legal challenges. In 1987 the ice-minus strain of P. syringae became the first genetically modified organism (GMO) to be released into the environment, when a strawberry field and a potato field in California were sprayed with it. Both test fields were attacked by activist groups the night before the tests occurred: "The world's first trial site attracted the world's first field trasher". The first genetically modified crop plant, an antibiotic-resistant tobacco plant, was produced in 1982. The first field trials of genetically engineered plants occurred in France and the US in 1986, when tobacco plants engineered to be resistant to herbicides were tested. In 1987 Plant Genetic Systems, founded by Marc Van Montagu and Jeff Schell, became the first company to genetically engineer insect-resistant plants, by incorporating genes that produced insecticidal proteins from Bacillus thuringiensis (Bt) into tobacco. Genetically modified microbial enzymes were the first application of genetically modified organisms in food production and were approved in 1988 by the US Food and Drug Administration. In the early 1990s, recombinant chymosin was approved for use in several countries. Cheese had typically been made using the enzyme complex rennet, extracted from cows' stomach lining; scientists modified bacteria to produce chymosin, which was also able to clot milk, resulting in cheese curds. The People's Republic of China was the first country to commercialize transgenic plants, introducing a virus-resistant tobacco in 1992. In 1994 Calgene attained approval to commercially release the Flavr Savr tomato, a tomato engineered to have a longer shelf life. Also in 1994, the European Union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialized in Europe. In 1995 Bt potato was approved safe by the Environmental Protection Agency, after having been approved by the FDA, making it the first pesticide-producing crop to be approved in the US. By 1996 a total of 35 approvals had been granted to commercially grow 8 transgenic crops and one flower crop (carnation), with 8 different traits, in 6 countries plus the EU. By 2010, 29 countries had planted commercialized biotech crops, and a further 31 countries had granted regulatory approval for transgenic crops to be imported. In 2013 Robert Fraley (Monsanto's executive vice president and chief technology officer), Marc Van Montagu and Mary-Dell Chilton were awarded the World Food Prize for improving the "quality, quantity or availability" of food in the world.
The first genetically modified animal to be commercialised was the GloFish, a zebrafish with an added fluorescent gene that allows it to glow under ultraviolet light. The first genetically modified animal to be approved for food use was the AquAdvantage salmon, in 2015. The salmon were transformed with a growth-hormone-regulating gene from a Pacific Chinook salmon and a promoter from an ocean pout, enabling them to grow year-round instead of only during spring and summer. Opposition Opposition to and support for the use of genetic engineering have existed since the technology was developed. After Arpad Pusztai went public in 1998 with research he was conducting, public opposition to genetically modified food increased. Opposition continued following controversial and publicly debated papers, published in 1999 and 2013, that claimed negative environmental and health impacts from genetically modified crops. References Sources Engineering Genetic engineering
History of genetic engineering
[ "Chemistry", "Engineering", "Biology" ]
4,190
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
37,219,172
https://en.wikipedia.org/wiki/JPOS
jPOS is a free and open-source Java library/framework that provides a high-performance bridge between card messages generated at point-of-sale or ATM terminals and internal systems along the entire financial messaging network. jPOS is an enabling technology that can be used to handle all card processing, from messaging to processing through reporting. It can be used to implement financial interchanges based on the ISO 8583 standard and related protocols, and it currently supports the 1987, 1993 and 2003 versions of the standard as well as multiple ANS X9.24 standards. As such, it serves as the messaging foundation for systems that exchange electronic transactions made by cardholders using payment cards. Whether an organization is tracking millions of transactions daily or tens of thousands, jPOS can be implemented to create a clean, efficient financial solution for documenting the data associated with all transactions. References Ohloh Free software programmed in Java (programming language) Java platform Java (programming language) libraries
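For orientation, the ISO 8583 messages jPOS deals with consist of a message type indicator (MTI), a bitmap marking which data fields are present, and the field values themselves. The following toy sketch of that structure is written in Python for illustration only — it is not jPOS's actual API (jPOS itself is a Java library), and it omits real-world details such as length prefixes for variable-length fields:

def primary_bitmap(field_numbers):
    # 64-bit primary bitmap: bit i (counted from the left, 1-based) is set
    # when data field i is present in the message
    bits = 0
    for f in field_numbers:
        if not 1 <= f <= 64:
            raise ValueError("primary bitmap covers fields 1-64 only")
        bits |= 1 << (64 - f)
    return format(bits, "016X")

# a hypothetical authorization request: PAN (field 2), processing code (3),
# transaction amount (4); all values are made up for illustration
fields = {2: "4111111111111111", 3: "000000", 4: "000000012345"}
message = "0100" + primary_bitmap(fields) + "".join(fields[k] for k in sorted(fields))
print(message)  # MTI "0100" + bitmap "7000000000000000" + concatenated field values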
JPOS
[ "Technology" ]
188
[ "Computing platforms", "Java platform" ]
37,220,648
https://en.wikipedia.org/wiki/Link%20Motion%20Inc
Link Motion Inc, formerly NetQin and NQ Mobile, is a multinational technology company that develops, licenses, supports and sells software and services focused on the smart-ride business. Link Motion sells carputers to car businesses and consumer ride-sharing services, as well as legacy mobile security, productivity and other related applications. Link Motion maintains dual headquarters in Dallas, Texas, United States and Beijing, China. In February 2019 a court receiver, lawyer Robert Seiden, was appointed over Link Motion by Judge Victor Marrero of the US federal district court for the Southern District of New York. The Receiver removed Wenyong “Vincent” Shi as chairman and chief executive officer and replaced him by appointing Mr. Lilin “Francis” Guo. History 2005–2011: Founding and company beginnings Link Motion was founded as NQ Mobile in 2005 by Dr. Henry Lin, formerly the youngest associate professor at the Beijing University of Posts and Telecommunications, and Dr. Vincent Shi. The company began by offering mobile security services and later offered productivity products to families and enterprise customers. Its services were compatible with a wide range of handset models and almost all then-available operating systems for smartphones, including Java, Symbian, iOS, Android, Windows Phone and BlackBerry OS. NQ Mobile also collaborated closely with other mobile-ecosystem participants, including chipmakers, handset manufacturers, wireless carriers, third-party payment channels, retailers and other distribution channels, in order to broaden the reach of its services. NQ Mobile's initial focus was the Chinese marketplace. The company cooperated with China Mobile, China Unicom and China Telecom, the three largest mobile companies in China. NQ Mobile also cooperated with Nokia and Sony to pre-install NQ products on their mobile phones, and it worked closely with Symbian, Windows Mobile and Android, developing mobile security applications based on those operating systems. In addition, Samsung, Motorola, Dopod, Lenovo, Tencent, and Baidu have all been the company's partners. In August 2011, Chris Stier was appointed managing director for the Americas and became responsible for NQ Mobile's business development throughout the Americas, overseeing sales and marketing operations as well as establishing strategic partnerships with key industry players in the region. In October 2011, Geoff Casely was appointed managing director for the Europe, Middle East, and Africa (EMEA) region, based in London, and became responsible for NQ Mobile's business development in EMEA and building strategic partner relationships. 2012–2013: International expansion Omar Khan joined the company in January 2012 as co-CEO to direct the company alongside the then chairman and chief executive officer Dr. Henry Lin, and the company changed its corporate name from NetQin Mobile Inc. to NQ Mobile Inc. Mr. Khan focused on the global expansion of NQ Mobile into markets such as North America, Latin America, Europe, Japan, Korea and India. Dr. Lin continued to focus on core markets such as China and Taiwan, among other developing countries. During the first half of 2012, NQ Mobile expanded its international management with the additions of Gavin Kim as chief product officer, Kim Titus as senior director of communication, Conrad Edwards as chief experience officer, and Victoria Repice as senior director of product management.
NQ Mobile expanded its mobile internet services in November 2012 with the acquisition of Feiliu. Feiliu was founded in 2009 and was subsequently rebranded to FL Mobile. It was a leading mobile interest-based community platform with coverage in China that engaged users in real-time mobile online activities. FL Mobile provided application recommendation services, interest-based exchanges, and mobile games to its user communities. According to data published by third-party marketing research company Sino MR, FL Mobile was the top iOS mobile game publisher and operator in the Chinese market in December 2012. FL Mobile had 87.3 million registered users and 16.1 million monthly active users by the end of June 2013. EnfoDesk Analysys International (EnfoDesk), a major market tracking company, reported that FL Mobile became the number one publisher on the iOS platform and increased its market share to 36.6 percent in the first half of 2013. The first-place ranking included the top spot for both revenues and number of mobile users. The report also claimed FL Mobile ranked third across all platforms for both revenues and mobile users and maintained an 18.8 percent share of total revenues in the first half of 2013. NQ Mobile also expanded into enterprise security products and services starting in May 2012, when it acquired 55% of NationSky, and acquired the remaining 45% in July 2013. Founded in 2005, NationSky is a leader in providing mobile services to more than 1,250 enterprises in China. By working with carriers and smartphone platform providers, NationSky delivers device-agnostic managed mobile services, the self-developed mobile device management (MDM) software NQSky, and other mobile SaaS services. Headquartered in Beijing, NationSky also has offices in Shanghai and Shenzhen. In June 2013, NQ Mobile hired Matt Mathison for the senior management position of vice president, Capital Markets. In August 2013, NQ Mobile opened a second global headquarters in Dallas, Texas. The company also further expanded its products and service offerings with the acquisitions of Shanghai Yinlong Information and Technology Co., Ltd. ("Yinlong") to develop content-based music information retrieval (MIR) technology across multiple platforms, NQ Mobile (Shenzhen) Co., Ltd. ("NQ Shenzhen") to offer online security education and value-added services, Best Partners Ltd. ("Best Partner") for mobile advertising, Beijing Tianya Co., Ltd. ("Tianya") for mobile healthcare applications development and search engine marketing in the healthcare industry in China, Chengdu Ruifeng Technology Co., Ltd. ("Ruifeng") to provide enterprise mobility system development and iOS training programs, and Tianjin Huayong Wireless Technology Co., Ltd. ("Huayong") for research and development and marketing of live wallpapers for smartphones using the Android operating system, and expanded its market with NQ Mobile KK ("NQ Japan") in Japan. 2014–2015: Consolidation and divestments In 2014 NQ Mobile continued expanding through acquisitions, with Beijing Trustek Technology Co., Ltd. ("Trustek") to provide enterprise mobility services, including system management, application development, business intelligence and maintenance services, Yipai Tianxia Network Technology Co., Ltd. ("Yipai") to provide mobile intelligent interactive advertising services through the integration of media channels such as outdoor, newspapers and magazines, and Beijing Showself Technology Co., Ltd. 
("Showself") to provide entertainment and dating platforms on mobile internet, and established Beijing NQ Mobile Co., Ltd. ("NQ Yizhuang") to engage in software design and development for computer and mobile devices and other technology consulting services. The company also took a controlling stake in Link Motion. In May 2015, Mr. Zemin Xu took over as CEO and the company held a press conference in Beijing to announce their new business strategy and reorganized along two lines, a technical division representing mobile security, mobile enterprise and mobile health care, and an entertainment division covering mobile advertising, mobile entertainment and mobile games. During the conference NQ Mobile also announced its new Showself Entertainment brand which includes Showself, Showself Live Wallpaper, Showself Music Radar and Showself Launcher. In June 2015, Mr. Roland Wu was appointed as chief financial officer. In August 2015 the company along with the other existing shareholders of FL Mobile Inc. agreed to sell to Beijing Jinxing Rongda Investment Management Co. Ltd., a subsidiary of Tsinghua Holdings Co., Ltd, the entire stake in FL Mobile Inc. that they currently hold for no less than RMB 4 billion (or approximately no less than US$626 million) and also the sale of all of NQ Mobile's interest in Beijing NationSky Network Technology Co., Ltd., to Mr. Hou Shuli, a founder and senior management member of Beijing NationSky, for an aggregate consideration of US$80 million. The company completed the divestment of NationSky for $80 million at the end of 2015. 2016–Present: Business transformation Throughout 2016 NQ Mobile continued to consolidate and began shifting its core business to smart cars while working on the divestments of FL Mobile and other businesses. On March 30, 2017, the company announced a new agreement to sell FL Mobile for RMB 4 billion along with Beijing Showself for RMB 1.23 million to Tongfang Investment Fund Series SPC, an affiliate of Tsinghua Tongfang. The divestment of FL Mobile and Beijing Showself was completed in December 2017. In January 2018, NQ Mobile announced that its board of directors approved a rebranding effort around its new focus as a vehicular automation and mobility as a service company by change its name from “NQ Mobile Inc.” to “Link Motion Inc.” and its ticker from “NQ” to “LKM.” In February 2018, the company hired MZ Group for investor relations and financial communications across all key markets and changed its name to Link Motion Inc. and their ticker to LKM. In March 2018, Link Motion Inc. appointed Mr. Duo Tang to executive vice president and the head of the company's smart ride business. In February 2019, the federal court in New York appointed Robert W. Seiden, a lawyer and former prosecutor, as Receiver over Link Motion to preserve the assets of the company. Seiden was also appointed receiver over LKM and its subsidiaries in Hong Kong by the High Court of the Hong Kong Special Administrative Region Court of the First Instance, along with Lauren Lau of KLC. The Receiver removed Wenyong “Vincent” Shi as chairman and chief executive officer of Link Motion and replaced him by appointing Mr. Lilin “Francis” Guo. Products Mobile value added services Freemium products including NQ Mobile Security Applications, Vault and Family Guardian. 
Advertising Revenue sources include third-party application referrals from mobile applications, banner ads, and intelligent interactive advertising services through user modeling and image recognition technology to search for advertisers’ products and services that are of potential interest. Enterprise mobility Trustek offers mobility strategy consulting, architecture design, hardware and software procurement and deployment, mobile device and application management, training, maintenance and other ongoing support services to enterprise customers. Timeline of key events 2005–2011 In October 2005, the company launched its first mobile security product, NetQin 1.0. In November 2009, the 2009 China Frost & Sullivan Award for Mobile Security Market Leadership of the year was presented to NetQin Tech. Co., Ltd. (NetQin) for its leading market share in the China mobile security market, continued commitment and excellence in R&D, and outstanding contribution to the industry. In May 2011, the company announced that its initial public offering of 7,750,000 American depositary shares ("ADSs"), each representing five Class A common shares of the company, was priced at $11.50 per ADS, with a total offering size of US$89.125 million, assuming no exercise of the over-allotment option. On May 5, 2011, NQ Mobile started trading on the New York Stock Exchange (NYSE) under the symbol “NQ”. In July 2011, NQ Mobile reached 100 million registered users, nearly 100% growth since June 2010, and signed a framework agreement with Telefónica, S.A. (Telefónica) to provide mobile Internet services to the subscribers of Telefónica. Under the agreement, NQ Mobile's mobile internet services would be integrated into the app stores of Telefónica and its subsidiaries and into mobile devices distributed by them. In September 2011, NQ Mobile and Brightstar Corp. signed a global go-to-market agreement to promote adoption of NQ Mobile security products. The company also opened the NQ Mobile Security Research Center based in Raleigh, N.C., led by Dr. Xuxian Jiang, who was appointed chief scientist. 2012–2015 In January 2012, NetQin launched its new "NQ Mobile" brand, under which it now conducts all of its international business, and announced plans to change the company's corporate name from NetQin Mobile Inc. to NQ Mobile Inc. The company also signed an agreement to pre-install NQ Mobile Security on Motorola Android smartphones in China and released a new version of its antivirus software, Mobile Security V6.0 for Android. In February 2012, NQ Mobile integrated the BlueVia payment API from Telefónica, providing a mobile payment option to Telefónica's subscribers. In April 2012, NQ Mobile announced that The Cellular Connection (TCC) would offer NQ Mobile Security at more than 800 Verizon Wireless Premium Retail locations across the U.S., with the rollout beginning at TCC's nearly 300 corporate stores. In May 2012, NQ Mobile visited the NYSE to celebrate the company's one-year anniversary of listing on the NYSE. In honor of the occasion, Omar Khan and Yu Lin, co-CEOs of NQ Mobile, rang the closing bell. The company also acquired 55% of Beijing NationSky Network Technology, Inc. ("NationSky"), a provider of mobile services to enterprises in China, and signed a collaboration agreement with A Wireless to offer NQ Mobile Guard in more than 125 Verizon Wireless Premium Retail locations in the US. In August 2012, NQ Mobile and MediaTek Inc. 
reached an agreement regarding NQ Mobile's acquisition of an approximately one-third interest in Hesine Technologies International Worldwide Inc. ("Hesine"), a wholly owned subsidiary of MediaTek and a premier mobile messaging provider. NQ Mobile's co-founder, chairman and co-CEO, Henry Lin, joined the board of directors of Hesine. The company also announced the launch of NQ Mobile Vault for iPhone. In September 2012, NQ Mobile announced the launch of NQ Family Guardian. In November 2012, the company acquired Beijing Feiliu Jiutian Technology Co. ("Feiliu"), later rebranded as FL Mobile. The company also announced that epay Australia, a division of Euronet Worldwide, Inc. (NASDAQ: EEFT), would offer NQ Mobile Guard in major retail locations across Australia, including Harvey Norman and Allphones, and that UK retailer Phones 4u would offer NQ Mobile Security at over 600 retail locations across the UK. In December 2012, NQ Mobile announced the launch of a proprietary security check service for HTC's App Store in mainland China. In July 2013, NQ Mobile agreed to purchase the remaining 45 percent stake in its subsidiary, NationSky. In September 2013, NQ Mobile announced the release of "Music Radar," a content-based music information retrieval (MIR) application from one of its subsidiaries, Yinlong, making the app available in China for both Android and iOS platforms. The app was later renamed Doreso. In October 2013, NQ's stock “fell a shattering 47 percent”, followed by lawsuits. The short-seller research firm Muddy Waters LLC alleged that "at least 72 percent of the company’s revenue in China is fictitious and that its actual market share in China is 1.5 percent instead of 55 percent that it had claimed". An investigation conducted by an independent special committee of its board of directors, carried out by its independent counsel Shearman & Sterling LLP with Deloitte & Touche Financial Advisory Services Limited acting as forensic accountants, found the company's disclosures were verifiable. However, in April 2015, the co-CEO of NQ Mobile, Omar Khan, stepped down after the stock had fallen nearly 84 percent. Reception NQ Mobile Security and NQ Family Guardian were both selected as top 25 apps at the Mobile Apps Showdown for CES 2013 in December 2012. NQ Mobile was granted the 2011 Technology Pioneer Award by the World Economic Forum for its technology leadership and innovation in mobile security. “The company’s heavy investment in R&D has resulted in 23 patented and patent-pending technologies, giving the company a leading edge in the burgeoning mobile security market.” Time Magazine named the company one of the “10 Start-Ups That Will Change Your Life” in September 2010. NQ Mobile Security was selected as a top 20 app at the Global Mobile Internet Conference Silicon Valley (GMIC SV) in October 2012. In addition, NQ Mobile Vault for Android was selected as a top 100 app. Deloitte Technology Fast 50 (2010) Reviews and analysis of products NQ Mobile Security received 4 out of 5 stars when reviewed by PC Advisor. NQ Mobile Vault received 4 out of 5 stars from both CNET and PC Magazine. Muddy Waters Research accused NQ Mobile of fraud in a 2013 report, alleging inflated revenues and misrepresented operations. In April 2015, an analysis of the NQ Vault product indicated that it only encrypted the first 128 bytes of the data, leaving the rest unencrypted. NQ Mobile responded by saying that the encryption level was "appropriate". 
Partnerships In August 2011, NQ Mobile and MediaTek reached an agreement on mobile security cooperation whereby MediaTek would make NQ Mobile's mobile security service available on its smartphone chipsets. The company also signed an agreement with Taiwan Mobile to provide mobile anti-virus services to Taiwan Mobile subscribers in Taiwan. In June 2012, NQ Mobile announced an alliance with TDMobility, the joint U.S. venture between Brightstar Corp and Tech Data Corporation. The collaboration would enable TDMobility to bring NQ Enterprise Shield to Tech Data's network of 65,000 Value Added Resellers across the US, serving small, medium, and large businesses. The company also announced the official global launch of NQ Enterprise Shield, and scientists from NQ Mobile's Mobile Security Research Center, in collaboration with North Carolina State University, disclosed a new way to detect mobile threats without relying on known malware samples and their signatures. In October 2012, NQ Mobile announced that its applications, including NQ Mobile Guard, NQ Mobile Vault for Android and NQ Family Guardian, would be offered by GoWireless at more than 350, and by A Wireless at more than 80, Verizon Wireless Premium Retail locations across the United States. References Companies formerly listed on the New York Stock Exchange Mobile security Mobile technology companies Mobile device management Online advertising services and affiliate networks Cloud computing providers Software companies based in Beijing Chinese brands Software companies established in 2005 Chinese companies established in 2005 2011 initial public offerings
Link Motion Inc
[ "Technology", "Engineering" ]
3,877
[ "Mobile security", "Cybersecurity engineering", "Mobile technology companies" ]
37,231,994
https://en.wikipedia.org/wiki/3D%20Wayfinder
3D Wayfinder is an indoor wayfinding software and service used to help visitors navigate large public buildings (shopping centers, airports, train stations, hospitals, universities, etc.). 3D Wayfinder uses 3D floor plans of a building and renders them in real-time. It displays interactive information layers. The software can be used on interactive kiosks or as a mobile application. 3D Wayfinder enables users to visualize the shortest path from the user's position to the searched location. At the same time, users can zoom, rotate, and change floors on the building's 3D map. All the existing objects, like escalators, doors, elevators, trees, stairs, etc., can be included on the 3D floor plan. Displayed floor plans can be switched, and usually the roof of the building is also included on the 3D model to provide a better understanding of the building and map direction. 3D Wayfinder engines 3D Wayfinder uses JavaScript and FRAK, a WebGL-based engine developed by 3D Technologies R&D. The 3D Wayfinder application is simple to embed on the user's website. It works with most common web browsers (Mozilla Firefox, Internet Explorer, Safari, etc.) on Windows, Linux and Mac. Supported platforms Kiosk solution works on Windows 7, Linux, and Macintosh machines; Mobile application runs on Android and iOS devices; Web version runs on all modern browsers that have HTML5 support. Features 3D, semi-3D or 2D map; Search engine for finding locations; Pinch zoom and rotate the 3D map and move in all directions; “You are here” spot – indicates user's current location; Graphical route animation; Realtime 3D walkthrough; Route and coupons printout; Multilingual content – Arabic, Chinese, English, French, Hindi, Russian, Spanish etc.; Content Management System for remote management; Statistics of usage – tenant popularity, language popularity, popular search keywords, most popular advertisements; Custom user interface; Possible to integrate with different physical devices – proximity sensors, speakers, printers and others; Multiple integrations are possible – social networks, timetables, clock, weather, transportation, websites, campaigns and others. Advertising 3D Wayfinder is also a channel for communicating with customers. It is possible to add three types of advertisements: Banner ads (images, videos, animation etc.); Highlighted directory items with additional ad text (similar to Google Sponsored Links); Small pop-up banners in the 3D plan. The 3D Wayfinder application is used by many shopping malls around Europe to guide visitors and keep tenant information up to date. External links References Signage 3D graphics software Building information modeling Kiosks Maps
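Shortest-path routing of the kind described above is conventionally computed on a graph of walkable positions extracted from the floor plan. The sketch below is a minimal Python illustration using Dijkstra's algorithm over a hypothetical two-floor node graph; the node names, distances, and graph representation are invented for the example and are not part of the 3D Wayfinder API:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a dict {node: [(neighbor, distance), ...]}."""
    queue = [(0.0, start, [start])]  # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical walkable-node graph for a two-floor mall (distances in meters);
# the escalator edge connects the two floors.
mall = {
    "entrance":     [("atrium", 20.0)],
    "atrium":       [("entrance", 20.0), ("shop_12", 15.0), ("escalator_0", 10.0)],
    "escalator_0":  [("atrium", 10.0), ("escalator_1", 8.0)],
    "escalator_1":  [("escalator_0", 8.0), ("food_court", 25.0)],
    "shop_12":      [("atrium", 15.0)],
    "food_court":   [("escalator_1", 25.0)],
}

cost, route = shortest_path(mall, "entrance", "food_court")
print(f"{cost:.0f} m via {' -> '.join(route)}")
```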
3D Wayfinder
[ "Engineering" ]
553
[ "Building engineering", "Building information modeling" ]
37,232,595
https://en.wikipedia.org/wiki/Architectural%20ironmongery
Architectural ironmongery or architectural hardware is a term used for the manufacture and wholesale distribution of items made from iron, steel, brass, aluminium or other metals, including plastics, for use in all types of buildings. Architectural ironmongery includes door handles, closers, locks, cylinder pulls and hinges (door furniture), window fittings, cupboard fittings, iron railings, handrails, balustrades, switches and sockets. The term is sometimes used to distinguish between these items and retail of consumer goods sold in ironmongers' shops or hardware stores. History Use of ironware in buildings has a long tradition, with local blacksmiths producing items for use in houses, churches and other buildings. During the Industrial Revolution, mass production of ironmongery became more widespread, though businesses often remained regionally focused. For example, in the UK, Laidlaw was founded in Manchester in 1876; Derby-based Bennetts Ironmongery can trace its history back to 1734; William Tonks & Sons was established in Leeds in 1789; and Quiggins served the Victorian era Liverpool market. The West Midlands region saw several well-known businesses established: Parker Winder & Achurch started in Birmingham in 1836, J Legge in Willenhall in 1881, and William Newton in Wolverhampton in 1750 (relocating to Birmingham in the 1820s). After the Second World War, the industry began to consolidate. For example, the Newton and Tonks businesses merged in 1970, acquired Legge in 1988 and Laidlaw in 1993, and were then taken over by Ingersoll Rand in 1997, and are today part of Ingersoll Rand Security Technologies. The Guild of Architectural Ironmongers was established in 1961 to promote standards in the business of architectural ironmongery. It manages an industry accreditation scheme, GuildMark, and runs an education programme, including a three-year diploma course and a Registered Architectural Ironmonger (RegAI) scheme. References External links Guild of Architectural Ironmongers Equipment Industrial history Building materials Metalworking Architectural elements Ironmongery
Architectural ironmongery
[ "Physics", "Technology", "Engineering" ]
422
[ "Building engineering", "Architecture", "Construction", "Materials", "Architectural elements", "Components", "Matter", "Building materials" ]
37,234,692
https://en.wikipedia.org/wiki/Powder-douce
Powder-douce (also poudre-douce, literally "sweet powder") is a spice mix used in Medieval and Renaissance cookery. Like modern spice mixes such as Italian seasoning or garam masala, there was not a set ingredient list, and it varied from cook to cook. The author of the 14th-century manuscript Le Ménagier de Paris suggested a mix of grains of paradise, ginger, cinnamon, nutmeg, sugar, and galangal. The 16th-century Catalan cookbook Llibre del Coch gives two recipes for polvora de duch: The first is made with ginger, cinnamon, cloves and sugar, all finely chopped and sifted with a cedaç (a fine sieve made of horsehair), while the second adds galangal and long pepper. There is a related mixed spice called powder-forte, literally "strong powder". References Medieval cuisine Herb and spice mixtures Douce
Powder-douce
[ "Physics" ]
200
[ "Materials", "Powders", "Matter" ]
30,655,042
https://en.wikipedia.org/wiki/Tin-silver-copper
Tin-silver-copper (Sn-Ag-Cu, also known as SAC) is a lead-free (Pb-free) alloy commonly used for electronic solder. It is the main choice for lead-free surface-mount technology (SMT) assembly in the industry, as it is near-eutectic, with adequate thermal fatigue properties, strength, and wettability. Lead-free solder is gaining much attention as the environmental effects of lead in industrial products are recognized, and as a result of Europe's RoHS legislation to remove lead and other hazardous materials from electronics. Japanese electronics companies have also looked at Pb-free solder for its industrial advantages. Typical alloys are 3–4% silver, 0.5–0.7% copper, and the balance (95%+) tin. For example, the common "SAC305" solder is 3.0% silver and 0.5% copper. Cheaper alternatives with less silver are used in some applications, such as SAC105 and SAC0307 (0.3% silver, 0.7% copper), at the expense of a somewhat higher melting point. History In 2000, there were several lead-free assembly and chip product initiatives being driven by the Japan Electronic Industries Development Association (JEIDA) and the Waste Electrical and Electronic Equipment Directive (WEEE). These initiatives resulted in tin-silver-copper alloys being considered and tested as lead-free solder ball alternatives for array product assemblies. In 2003, tin-silver-copper was being used as a lead-free solder. However, its performance was criticized because it left a dull, irregular finish and it was difficult to keep the copper content under control. In 2005, tin-silver-copper alloys constituted approximately 65% of lead-free alloys used in the industry, and this percentage has been increasing. Large companies such as Sony and Intel switched from using lead-containing solder to a tin-silver-copper alloy. Constraints and tradeoffs The process requirements for (Pb-free) SAC solders and Sn-Pb solders are different both materially and logistically for electronic assembly. In addition, the reliability of Sn-Pb solders is well established, while SAC solders are still undergoing study (though much work has been done to justify the use of SAC solders, such as the iNEMI Lead Free Solder Project). One important difference is that Pb-free soldering requires higher temperatures and increased process control to achieve the same results as the tin-lead method. The melting point of SAC alloys is 217–220 °C, or about 34 °C higher than the melting point of the eutectic tin-lead (63/37) alloy. This requires peak temperatures in the range of 235–245 °C to achieve wetting and wicking. Some of the components susceptible to SAC assembly temperatures are electrolytic capacitors, connectors, opto-electronics, and older-style plastic components. However, a number of companies have started offering 260 °C-compatible components to meet the requirements of Pb-free solders. iNEMI has proposed that a good target for development purposes would be around 260 °C. Also, SAC solders are alloyed with a larger number of metals, so there is the potential for a far wider variety of intermetallics to be present in a solder joint. These more complex compositions can result in solder joint microstructures that are not as thoroughly studied as current tin-lead solder microstructures. These concerns are magnified by the unintentional use of lead-free solders in either processes designed solely for tin-lead solders or environments where material interactions are poorly understood, for example, the reworking of a tin-lead solder joint with Pb-free solder. 
These mixed-finish possibilities could negatively impact the solder's reliability. Advantages SAC solders have outperformed high-Pb solder C4 joints in ceramic ball grid array (CBGA) systems, which are ball-grid arrays with a ceramic substrate. The CBGA showed consistently better results in thermal cycling for Pb-free alloys. The findings also show that SAC alloys are proportionately better in thermal fatigue as the thermal cycling range decreases. SAC performs better than Sn-Pb at less extreme cycling conditions. Another advantage of SAC is that it appears to be more resistant to gold embrittlement than Sn-Pb. In test results, the strength of the joints is substantially higher for the SAC alloys than the Sn-Pb alloy. Also, the failure mode changes from a partially brittle joint separation to a ductile tearing with the SAC. References Tin alloys Fusible alloys Brazing and soldering
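As an aside on the naming convention mentioned above, the composition encoded in a SAC designation can be decoded mechanically. The following Python sketch infers the rule from the three examples given in the text (SAC305, SAC105, SAC0307); it is an illustration of the convention, not an official standards definition:

```python
def sac_composition(code: str):
    """Decode a SAC alloy name like 'SAC305' into (Ag%, Cu%, Sn%).

    Rule inferred from the common names: the trailing digits split into a
    silver field and a copper field, with a two-digit field read in tenths
    of a percent (SAC305 -> 3.0/0.5, SAC0307 -> 0.3/0.7).
    """
    digits = code.upper().removeprefix("SAC")
    if len(digits) == 3:        # e.g. SAC305: Ag in whole %, Cu in tenths
        ag, cu = float(digits[0]), int(digits[1:]) / 10
    elif len(digits) == 4:      # e.g. SAC0307: both fields in tenths
        ag, cu = int(digits[:2]) / 10, int(digits[2:]) / 10
    else:
        raise ValueError(f"unrecognized SAC code: {code}")
    return ag, cu, round(100 - ag - cu, 2)  # balance is tin

for name in ("SAC305", "SAC105", "SAC0307"):
    ag, cu, sn = sac_composition(name)
    print(f"{name}: {ag}% Ag, {cu}% Cu, {sn}% Sn")
```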
Tin-silver-copper
[ "Chemistry", "Materials_science" ]
986
[ "Tin alloys", "Alloys", "Metallurgy", "Fusible alloys" ]
30,662,719
https://en.wikipedia.org/wiki/Giant%20magnetoimpedance
In materials science, giant magnetoimpedance (GMI) is an effect that occurs in some materials where an external magnetic field causes a large variation in the electrical impedance of the material. It should not be confused with the separate physical phenomenon of giant magnetoresistance. The phenomenology of the GMI GMI is governed by the penetration length, a measure of how deep an AC electrical current can flow inside an electrical conductor. The penetration length (also known as the skin depth) increases with the square root of the electrical resistivity of the material and is inversely proportional to the square root of the product of the magnetic permeability and the frequency of the AC electrical current. Thus, in materials with very high values of magnetic permeability, such as soft-ferromagnetic materials, the penetration length can be much less than the thickness of the conductor even for moderate frequencies, confining the current near the surface of the material. When an external magnetic field is applied, the permeability diminishes, increasing the penetration of the current into the magnetic material. Large variations are observed in both the in-phase and out-of-phase components of the magnetoimpedance for applied magnetic fields ranging from values close to the Earth's magnetic field up to a few tens of oersteds. For comparison, in normal electrical conductors the skin-depth effect becomes important only for frequencies in the microwave range. Although the dependence of the GMI on the geometry of the electrical conductor (ribbons, wires, multilayers, meander-like structures) and on external parameters is somewhat complex, there are theoretical models that allow calculation of the GMI within some approximations. Besides the frequency of the current, other sources contribute to the frequency dependence of the GMI, such as the motion of domain walls and ferromagnetic resonance. Experimental measurement A typical experimental set-up for investigating the GMI in research laboratories includes an alternating current source, a phase-sensitive amplifier for detecting the AC voltage across the sample, and an electromagnet for applying a DC magnetic field. A cryostat or an oven may be required for measuring the temperature dependence of the GMI. Several experimental measurements have also been performed to characterize the long-term stability and thermal drift of the GMI, supported by a theoretical model of the sensing element. History The influence of the frequency and amplitude of applied magnetic fields on the impedance of soft-magnetic materials was first observed in the 1930s. These initial studies were limited to frequencies of a few hundred Hz, and the changes of impedance reported in those works were not large. Starting in the 1990s, this phenomenon was investigated again, this time making use of currents with frequencies of hundreds of kHz. Because of the huge variations observed in the magnetic field dependence of the magnetoimpedance, it was named giant magnetoimpedance. Due to the high sensitivity of sensors using the GMI effect, they have been used in compasses, accelerometers, virus detection, and biomagnetism, among other applications. References Electromagnetism
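The proportionalities described above correspond to the standard skin-depth formula δ = sqrt(ρ / (π f μ)). A minimal Python sketch evaluating it for illustrative values follows; the resistivity and permeability figures are order-of-magnitude assumptions for a soft-ferromagnetic wire, not measurements from any cited material:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(resistivity, mu_r, freq_hz):
    """Penetration (skin) depth in meters: sqrt(rho / (pi * f * mu_r * mu0))."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu_r * MU0))

f = 1e6       # 1 MHz drive current
rho = 1.3e-6  # ohm*m, a typical order of magnitude for amorphous alloys

# Illustrative comparison: large zero-field permeability versus the same
# wire after an applied field has suppressed the permeability, which
# widens the skin depth and thus lowers the impedance.
for mu_r in (10_000, 100):
    print(f"mu_r = {mu_r:6d}: skin depth = {skin_depth(rho, mu_r, f) * 1e6:.1f} um")
```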
Giant magnetoimpedance
[ "Physics" ]
662
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
31,636,714
https://en.wikipedia.org/wiki/Hexafoil
The hexafoil is a design with six-fold dihedral symmetry composed of six vesica piscis lenses arranged radially around a central point, often shown enclosed in a circumference of another six lenses. It is also sometimes known as a "daisy wheel". A second, quite different, design is also sometimes referred to by this name; see alternate symbol. The design is found as a rosette ornament in artwork dating back to at least the Late Bronze Age. Construction The pattern figure can be drawn with pen and compass by creating seven interlinking circles of the same diameter, each touching the previous circle's center. The second circle is centered at any point on the first circle. All following circles are centered on the intersection of two other circles. The design is sometimes expanded into a regular overlapping circles grid. Bartfeld (2005) describes the construction: "This design consists of circles having a 1-[inch] radius, with each point of intersection serving as a new center. The design can be expanded ad infinitum depending upon the number of times the odd-numbered points are marked off." Usage The hexafoil has been very widely used throughout European folk art over a very long period of time. It is attested from at least the beginning of the Late Bronze Age, represented, for example, on ornamental golden disks found in Shaft Grave III at Mycenae (16th century BC). It is also found in some Cantabrian stelae, dated to the Iron Age, as well as on Norwegian bronze kettles from the same period. The six-petal rosette is common in 17th to 20th century folk art throughout Europe. In Portugal, it is common to find it in medieval churches and cathedrals as the engraved signature of a mason, but also as decoration and a symbol of protection on the chimneys of old houses in Alentejo (at times together with the lauburu, or with the pentagram). In Galicia (Spain) and throughout the Cantabrian Mountains, hexafoils have been found since the Iron Age in torc terminals and decoration, and the design is still used in folk art. It can also be found in the Pyrenees (Navarre, Aragon, and Catalonia). Since 2003 the hexafoil has been used as the logo of the Alt Pirineu Natural Park, the largest in Catalonia. In the United Kingdom the hexafoil is commonly found on churches, but also in barns and private buildings, as well as on cross slabs. The use of the hexafoil as a folk magic symbol was brought from the United Kingdom to Australia by settlers, where six-leaf designs with concentric circles have been found in homes and occasionally in public buildings to serve as a sign of protection. The hexafoil was also widely used on gravestones in Colonial America, and was especially popular in parts of Connecticut, Massachusetts, and Pennsylvania. The design was commonly used from the later 17th century until the early 19th century. The design is also known as the "Sun of the Alps" (Sole delle Alpi) in Italy from its widespread use in alpine folk art. It resembles a pattern often found on buildings in that area. It is used in the coat of arms of Lecco Province. It has also been used as the emblem of Padanian nationalism in northern Italy since the 1990s. In 2001, Editoriale Nord, the publishing company of La Padania, registered the green-on-white design as a trademark. In Norway it can mostly be found on wooden objects, such as beer bowls, clothes smoothing boards, milk butts, wooden chests, beds, and so on, but it can also be found on the doors of buildings. 
In Norwegian it is sometimes known as "Olavsrose" (rose of Olaf), although that name is used for another symbol as well. In Lithuania the hexafoil was found on wooden beer bowls, spindles, and other wooden objects. It is known as "little sun" (saulute) in Lithuanian. In the Tatra Mountains, southeastern Poland and western Ukraine, the mark was commonly carved on roof beams inside peasant huts. In Ukraine it was known as "the symbol of Perun" (Peruna znak) and "the thunder mark" (gromovoi znak). In the Russian North the hexafoil was carved near the outside roof of peasant houses to protect them against lightning. The symbol was known as the thunder sign (gromovoi znak) or the thunder wheel (gromovoe koleso), and was associated with the thunder god Perun. Origin The origin and meaning of the symbol are not known, but many researchers have independently suggested that it is of religious origin, and very likely served as a protective symbol. There are two main theories for its meaning and origin. Solar symbol Peralta Labrador (1989) cites a proposal according to which the design in the La Tène (Celtic) period was a solar symbol associated with the god Taranis. Other researchers have also described it as a solar symbol, but no reasoning for this has been given. However, the Lithuanian ("little sun") and Italian ("sun of the Alps") names do suggest a solar origin. Thunder wheel Garshol (2021) suggests that the rosette is actually a wheel with spokes, and that it originally signified the Proto-Indo-European thunder god Perkwunos, later becoming associated with his various incarnations, such as Perun, Tarḫunz, Taranis, Thor, and Jupiter. The Russian and Ukrainian names of the symbol, as well as other more involved arguments, are given as rationale. Alternate symbol The name hexafoil is sometimes also used to refer to a different geometric design that is used as a traditional element of Gothic architecture, created by overlapping six circular arcs to form a flower-like image. This hexafoil design is modeled after the six-petal lily, for its symbolism of purity and relation to the Trinity. The hexafoil form is created from a series of compound units, and exists as a more complex variation of the same extruded figure. Other forms similar to the hexafoil include the trefoil, quatrefoil, and cinquefoil. The other hexafoil design is implemented in various Gothic buildings constructed in the 12th through 16th centuries. The traditional design is used in cloisters, triforiums, and stained glass windows of famous buildings such as Notre-Dame, Salisbury Cathedral, and Regensburg Cathedral. Stone cut-out hexafoils are displayed in a plate tracery style in Salisbury Cathedral, creating a pattern along the triforium. It can also be seen as a framing design in the Bible moralisée. The frames are often rendered in red, blue, gold, or vibrant orange and surround biblical scenes. The hexafoil style of framing was often used in conjunction with architectural framing to provide the text with more depth, creativity, invention, and volume. Old Testament illustrations were surrounded by hexafoil frames while moralization depictions favored architectural frames. See also Foil (architecture) Overlapping circles grid Sudarshana Chakra Triquetra Triskelion References External links Circles Geometric shapes Gothic architecture Magic (supernatural) Ornaments Rotational symmetry
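The pen-and-compass construction described in the Construction section translates directly into coordinates: the six outer circles share the first circle's radius and are centered on its circumference at 60-degree intervals, so every circle passes through the centers of its neighbors. A minimal Python sketch of those centers (the function name and unit radius are illustrative choices, not from any cited source):

```python
import math

def hexafoil_centers(radius=1.0):
    """Centers of the seven equal circles in the hexafoil construction.

    The first circle sits at the origin; each outer circle is centered
    on its circumference, 60 degrees apart. Consecutive outer centers
    are exactly one radius apart (the chord of 60 degrees), so each new
    circle touches the previous circle's center, as in the text.
    """
    centers = [(0.0, 0.0)]
    for k in range(6):
        angle = math.radians(60 * k)
        centers.append((radius * math.cos(angle), radius * math.sin(angle)))
    return centers

for x, y in hexafoil_centers():
    print(f"({x:+.3f}, {y:+.3f})")
```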
Hexafoil
[ "Physics", "Mathematics" ]
1,486
[ "Geometric shapes", "Mathematical objects", "Geometric objects", "Circles", "Pi", "Symmetry", "Rotational symmetry" ]
31,638,066
https://en.wikipedia.org/wiki/Media%20control%20symbols
In digital electronics, analogue electronics and entertainment, the user interface may include media controls, transport controls or player controls to control, change or adjust the process of video playback, audio playback, and the like. These controls are commonly depicted as widely known symbols found in a multitude of products, exemplifying what is known as dominant design. Symbols Media control symbols are commonly found on both software and physical media players, remote controls, and multimedia keyboards. Their application is described in ISO/IEC 18035. The main symbols date back to the 1960s, with the Pause symbol reportedly having been invented at Ampex during that decade for use on reel-to-reel audio recorder controls, due to the difficulty of translating the word "pause" into some languages used in foreign markets. The Pause symbol was designed as a combination of the existing square Stop symbol and the caesura, and was intended to evoke the concept of an interruption or "stutter stop". In popular culture Consumer products The Play symbol is arguably the most widely used of the media control symbols. In many ways, this symbol has become synonymous with music culture and, more broadly, the digital download era. As such, there are now a multitude of items such as T-shirts, posters, and tattoos that feature this symbol. Similar cultural references can be observed with the Power symbol, which is especially popular among video gamers and technology enthusiasts. Branding Media symbols can be found on an array of advertisements, from live music venues to streaming services. In 2012, Google rebranded its digital download store to Google Play, using the Play symbol in its logo. The Play symbol has also served as the logo of YouTube since 2017. Television station owners Morgan Murphy Media and TEGNA have begun to incorporate the Play symbol into the logos of their stations to further connect their websites to their over-the-air television presences. Use on appliances and other mechanical devices In recent years, there has been a proliferation of electronics that use media control symbols to represent the Run, Stop, and Pause functions. Likewise, user interface programming pertaining to these functions has been influenced by that of media players. For example, some washers and dryers with an illuminated Play/pause button are programmed such that it stays lit when the appliance is running. A line of Philips pasta makers has a Play/pause button for controlling the pasta-making process. See also List of international common standards Power symbol Miscellaneous Technical References Audio electronics Symbols Video hardware
Media control symbols
[ "Mathematics", "Engineering" ]
499
[ "Audio electronics", "Symbols", "Electronic engineering", "Video hardware", "Audio engineering" ]
31,639,799
https://en.wikipedia.org/wiki/Europium%20barium%20titanate
Europium barium titanate is a chemical compound composed of barium, europium, titanium, and oxygen. It is magnetic and ferroelectric at 1.5 K. It is a ceramic material which was used in 2010 experiments on a new theory on baryon asymmetry. References Titanates Barium compounds Europium(III) compounds Ferroelectric materials Perovskites
Europium barium titanate
[ "Physics", "Chemistry", "Materials_science" ]
85
[ "Physical phenomena", "Inorganic compounds", "Ferroelectric materials", "Inorganic compound stubs", "Materials", "Electrical phenomena", "Hysteresis", "Matter" ]
31,640,985
https://en.wikipedia.org/wiki/Piping%20corrosion%20circuit
Piping corrosion circuit analysis (also known as corrosion looping, or piping circuitization and corrosion modelling) is carried out as part of either a Risk Based Inspection (RBI) analysis or a Materials Operating Envelope (MOE) analysis. It systematizes the analysis of piping components against failure modes into a materials operating envelope. It groups piping by material and chemical make-up into systems and subsystems and assigns corrosion mechanisms, which are then monitored over the operating lifetime of the facility. This analysis is performed on circuit inspection results to determine and optimize circuit corrosion rates and measured thicknesses/dates for circuit components. Corrosion circuits are utilized in the Integrity Management Plan (IMP), which forms a part of the overall asset integrity management system and is an integral part of any RBI analysis. Often a "system" is a broad overview of the facility's process flow, broken down by stream constituents, while a circuit-level analysis breaks systems into smaller "circuits" that group common metallurgies, equal (or roughly equal) temperatures and pressures, and expected damage mechanisms. Background Circuitization is carried out in order to: Manage the inspection of piping; Identify piping systems/circuits and assign failure modes; Capture any changes due to upgrades or design creep; Ensure that circuits are identified to indicate inspection points as well as facilitate the implementation of various inspection techniques; Identify potential damage mechanisms and their locations. Typically, this is performed at the outset of any mechanical integrity program, i.e. as the facility is built, modified and operated throughout its life. General requirements of circuitization: Use an experienced corrosion/materials engineer to define systems in each unit; Define corrosion circuits within each system based on materials of construction, operating conditions and active damage mechanisms; Circuit identification and naming conventions are used for both API RBI and IDMS programs to provide linking and sharing of inspection data; Circuit corrosion rates are used in API RBI to calculate circuit risk; Determine the circuit and component next inspection date and inspection effectiveness, including a detailed inspection plan; Review or placement of CMLs/TMLs (Condition Monitoring Locations/Thickness Monitoring Locations) recommended by the corrosion/materials engineer; CMLs/TMLs installed and documented on piping isometric drawings. See also Corrosion engineering References American Petroleum Institute http://www.api.org/meetings/proceedings/upload/Piping_Circuitization_and_RBI_Requirements_Lynne_Kaley.pdf Maintenance Corrosion
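The circuit corrosion rates referred to above are ordinarily derived from successive wall-thickness measurements at each CML/TML. A minimal Python sketch of that bookkeeping follows, using the short-term rate and remaining-life formulas common in piping inspection practice; the circuit name, CML labels, and thickness values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CmlReading:
    years_apart: float   # time between the two inspections
    t_previous: float    # wall thickness at earlier inspection, mm
    t_current: float     # wall thickness at latest inspection, mm

def corrosion_rate(r: CmlReading) -> float:
    """Short-term corrosion rate in mm/year (thickness loss over time)."""
    return (r.t_previous - r.t_current) / r.years_apart

def remaining_life(r: CmlReading, t_minimum: float) -> float:
    """Years until the minimum allowable thickness is reached."""
    rate = corrosion_rate(r)
    return float("inf") if rate <= 0 else (r.t_current - t_minimum) / rate

# Hypothetical circuit: the governing (fastest-corroding) CML sets the
# circuit's rate for risk calculations.
circuit = {
    "CML-01": CmlReading(5.0, 9.2, 8.6),
    "CML-02": CmlReading(5.0, 9.1, 8.9),
}
worst = max(circuit, key=lambda cml: corrosion_rate(circuit[cml]))
r = circuit[worst]
print(f"{worst}: {corrosion_rate(r):.3f} mm/y, "
      f"remaining life {remaining_life(r, t_minimum=6.0):.1f} y")
```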
Piping corrosion circuit
[ "Chemistry", "Materials_science", "Engineering" ]
476
[ "Metallurgy", "Corrosion", "Electrochemistry", "Mechanical engineering", "Electrochemistry stubs", "Maintenance", "Materials degradation", "Physical chemistry stubs", "Chemical process stubs" ]
31,642,265
https://en.wikipedia.org/wiki/Solar%20Building
The Solar Building, located in Albuquerque, New Mexico, was the world's first commercial building to be heated primarily by solar energy. It was built in 1956 to house the engineering firm of Bridgers & Paxton, who were responsible for the heating system design. The novel building received widespread attention, with articles in national publications like Life and Popular Mechanics, and was the subject of a National Science Foundation-funded research project in the 1970s. It was added to the New Mexico State Register of Cultural Properties in 1985 and the National Register of Historic Places in 1989, only 33 years after it was built. History The firm of Bridgers & Paxton Consulting Engineers was founded in 1951 by Frank Bridgers (1922–2005) and Donald Paxton (1912–2007), both of whom were interested in the potential applications of solar energy. Initially operating out of a garage behind Bridgers' house, the two men conceived a new office building for their firm which would include an experimental solar heating system. They believed such a system would not only save money, but would also allow them to collect valuable data for future projects. In 1954, they were able to put some of their ideas into practice with an innovative heating and cooling system for the Simms Building, which took advantage of the building's south-facing glass curtain wall to provide solar heating in winter. However, additional heating or cooling was still required under most conditions. Bridgers and Paxton began serious design work on the Solar Building in early 1954, and it was constructed between March and August 1956. Stanley & Wright were the architects for the building. Its total cost was $58,500, of which the heating and cooling system made up about $15,000—roughly twice the cost of a conventional system. However, Bridgers and Paxton believed the reduced operating costs would save money in the long run. The novel building attracted considerable attention, receiving write-ups in a number of national publications including Architectural Forum, Life, Architectural Record, Progressive Architecture, and Popular Mechanics, and directly inspired a number of subsequent active solar heating systems. Despite some minor problems, the building's heating system operated successfully for six years, even during the particularly cold and cloudy month of January 1957, which recorded only three sunny days. However, it was not as economical as Bridgers and Paxton had hoped, mainly due to the extremely low cost of fuel at the time. When the building was expanded in 1962, the solar collector was abandoned in favor of a conventional boiler system, though the equipment was left intact for possible future use. This decision paid off just a few years later, when the 1973 oil crisis caused a renewed interest in solar energy and brought fresh attention to the Solar Building. In early 1974, Penn State researcher Stanley Gilman received a National Science Foundation grant to restore the building's solar heating system and operate it as part of a multi-year field study intended to identify optimal design criteria for such systems. Following the conclusion of the project, the solar heating system remained in use. Bridgers & Paxton eventually outgrew the building, moving to a new location in 1985. The Solar Building was added to the New Mexico State Register of Cultural Properties in 1985 and the National Register of Historic Places in 1989. 
The building was considered "exceptionally significant", justifying its inclusion in the National Register even though it was only 33 years old at the time. Architecture The Solar Building is a one-story, International Style building consisting of two main sections. The north wing, containing the main drafting room as well as the solar heating equipment, made up the main portion of the original building. It has an irregular quadrilateral cross-section with the roof and south wall both angled (at 20 and 30 degrees, respectively) in order to provide a high southern exposure for the solar collectors. The wing is framed by seven structural steel bents, spaced apart and filled in with wooden ceiling joists and masonry. The north wall has a narrow, continuous band of windows running just below the roofline which light the drafting room, while the street-facing eastern elevation is windowless brick. The south wing is a low, flat-roofed structure containing office space. It is partially faced with brick, marking the original extent of the building; it was later extended with an addition in 1962. The main entrance is positioned at the intersection of the two wings. Heating system The building's active solar heating system employed an array of 56 solar thermal collectors with a total area of . The array was positioned on a south-facing exterior wall which was angled at 30 degrees to the vertical in order to catch the maximum amount of winter sunlight. The collectors were custom-fabricated aluminum panels with built-in flow channels for water to pass through. The surface of each collector was coated with low-reflectivity black paint and a layer of glass to capture the maximum amount of thermal energy. In sunny weather, water passing through the collectors would reach a maximum temperature of before being deposited in a 6,000-gallon insulated underground tank which provided a hot water reserve for up to three days of cloudy weather. Under normal conditions (about 90% of an average heating season), the water in the tank would be warm enough to directly heat the building by circulating it through radiant panels in the floor and ceiling. If the temperature in the tank dropped due to prolonged cloudy weather, a heat pump could be employed to maintain the hot water supply to the panels. The heat pump was a standard commercial water chiller unit, but with heating rather than cooling as its intended purpose—chilling the water in the tank and delivering the "waste" heat to the hot water stream. The heat pump could continue to function as long as the tank temperature remained above . In summer, the system could also provide cooling by circulating cold water through the building rather than hot water. In this mode, the storage tank became a reservoir for cold water, which allowed the system to save energy in milder weather by storing heat during the day and releasing it at night when the outside temperatures were lower. Most of the time, the water in the tank could be kept cool using only an evaporative cooler. If the water in the tank got too warm, the heat pump would go back into operation in order to continue transferring heat from the cold water stream into the tank. It was also possible to operate in cooling mode during the day while storing hot water from the solar collectors and heat pump to heat the building at night. Minor changes were made to the system during its operational life. 
One of the first problems that arose was corrosion of the collector panels, which originally had integral flow channels formed from two bonded sheets of aluminum. After leaks started to develop, the flow channels were replaced with copper tubing attached to the back of the panels. Gilman made additional modifications to the system in the 1970s, including changing the working fluid in the collector loop to ethylene glycol (in order to prevent freezing) and re-soldering the collector panels for better thermal contact. Gilman also installed an automated control system and upgraded the air handling equipment to allow individual temperature control for each office. Despite the modifications, the system remains mostly intact as originally designed. See also List of pioneering solar buildings References External links Office buildings completed in 1956 Office buildings in Albuquerque, New Mexico Commercial buildings on the National Register of Historic Places in New Mexico Solar design New Mexico State Register of Cultural Properties National Register of Historic Places in Albuquerque, New Mexico Modernist architecture in New Mexico
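The seasonal control logic described above can be summarized, for the winter case, as a simple decision on the storage-tank temperature. The Python sketch below mirrors that logic; the threshold temperatures are placeholders, since the building's actual set points appear in figures that are missing from the text above:

```python
def heating_mode(tank_temp_c: float) -> str:
    """Pick the winter heating mode from the storage-tank temperature.

    Mirrors the system described above: warm tank water feeds the
    radiant panels directly; a cooler tank can still heat the building
    through the heat pump, which chills the tank water and delivers
    the "waste" heat to the hot water stream.
    """
    DIRECT_MIN_C = 35.0     # placeholder: minimum tank temperature for
                            # direct radiant heating (actual value unknown)
    HEAT_PUMP_MIN_C = 10.0  # placeholder: minimum usable tank temperature
    if tank_temp_c >= DIRECT_MIN_C:
        return "direct radiant heating from the tank"
    if tank_temp_c >= HEAT_PUMP_MIN_C:
        return "heat pump boosts tank water to panel temperature"
    return "tank depleted; no solar heat available"

for t in (45, 25, 5):
    print(f"tank at {t} C -> {heating_mode(t)}")
```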
Solar Building
[ "Engineering" ]
1,506
[ "Solar design", "Energy engineering" ]
31,644,748
https://en.wikipedia.org/wiki/Pathway%20Commons
Pathway Commons is a database of biological pathways and interactions. See also Biological pathway Reactome References External links pathwaycommons Biological databases Metabolism Systems biology
Pathway Commons
[ "Chemistry", "Biology" ]
30
[ "Bioinformatics", "Cellular processes", "Biochemistry", "Biological databases", "Metabolism", "Systems biology" ]
31,646,448
https://en.wikipedia.org/wiki/Transcomputational%20problem
In computational complexity theory, a transcomputational problem is a problem that requires processing of more than 10^93 bits of information. Any number greater than 10^93 is called a transcomputational number. The number 10^93, called Bremermann's limit, is, according to Hans-Joachim Bremermann, the total number of bits processed by a hypothetical computer the size of the Earth within a time period equal to the estimated age of the Earth. The term transcomputational was coined by Bremermann. Examples Testing integrated circuits Exhaustively testing all combinations of an integrated circuit with 309 boolean inputs and 1 output requires testing of a total of 2^309 combinations of inputs. Since the number 2^309 is a transcomputational number (that is, a number greater than 10^93), the problem of testing such a system of integrated circuits is a transcomputational problem. This means that there is no way one can verify the correctness of the circuit for all combinations of inputs through brute force alone. Pattern recognition Consider a q×q array of the chessboard type, each square of which can have one of k colors. Altogether there are k^n color patterns, where n = q^2. The problem of determining the best classification of the patterns, according to some chosen criterion, may be solved by a search through all possible color patterns. For two colors, such a search becomes transcomputational when the array is 18×18 or larger. For a 10×10 array, the problem becomes transcomputational when there are 9 or more colors. This has some relevance in the physiological studies of the retina. The retina contains about a million light-sensitive cells. Even if there were only two possible states for each cell (say, an active state and an inactive state), the processing of the retina as a whole requires processing of more than 10^300,000 bits of information. This is far beyond Bremermann's limit. General systems problems A system of n variables, each of which can take k different states, can have k^n possible system states. To analyze such a system, a minimum of k^n bits of information must be processed. The problem becomes transcomputational when k^n > 10^93, i.e. when n > 93 / log10(k): for example, for k = 2 when n ≥ 309, for k = 3 when n ≥ 195, for k = 4 when n ≥ 155, for k = 5 when n ≥ 134, and for k = 10 when n ≥ 94. Implications The existence of real-world transcomputational problems implies the limitations of computers as data processing tools. This point is best summarized in Bremermann's own words: "The experiences of various groups who work on problem solving, theorem proving and pattern recognition all seem to point in the same direction: These problems are tough. There does not seem to be a royal road or a simple method which at one stroke will solve all our problems. My discussion of ultimate limitations on the speed and amount of data processing may be summarized like this: Problems involving vast numbers of possibilities will not be solved by sheer data processing quantity. We must look for quality, for refinements, for tricks, for every ingenuity that we can think of. Computers faster than those of today will be a great help. We will need them. However, when we are concerned with problems in principle, present day computers are about as fast as they ever will be. We may expect that the technology of data processing will proceed step by step – just as ordinary technology has done. There is an unlimited challenge for ingenuity applied to specific problems. There is also an unending need for general notions and theories to organize the myriad details." 
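The thresholds listed above follow from solving k^n > 10^93 for n, i.e. n > 93 / log10(k). A short Python check (the constant and the formula come straight from the definition of Bremermann's limit; the helper name is ours):

```python
import math

BREMERMANN_EXPONENT = 93  # transcomputational means more than 10**93 bits

def min_transcomputational_n(k: int) -> int:
    """Smallest n with k**n > 10**93, from n > 93 / log10(k)."""
    n = math.ceil(BREMERMANN_EXPONENT / math.log10(k))
    # ceil() lands exactly on the bound when 93 / log10(k) is an integer
    # (e.g. k = 10); the strict inequality then needs one more variable.
    if k ** n <= 10 ** BREMERMANN_EXPONENT:
        n += 1
    return n

for k in (2, 3, 4, 5, 10):
    print(f"k = {k:2d}: transcomputational for n >= {min_transcomputational_n(k)}")
```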
See also Hypertask Matrioshka brain, a theoretical computing megastructure Strict finitism References Theory of computation Computational complexity theory Limits of computation
Transcomputational problem
[ "Physics" ]
759
[ "Physical phenomena", "Limits of computation" ]
23,044,361
https://en.wikipedia.org/wiki/Raman%20microscope
The Raman microscope is a laser-based microscopic device used to perform Raman spectroscopy. The term MOLE (molecular optics laser examiner) is used to refer to the Raman-based microprobe. The technique is named after C. V. Raman, who discovered the scattering properties in liquids. Configuration The Raman microscope begins with a standard optical microscope and adds an excitation laser, laser rejection filters, a spectrometer or monochromator, and a sensitive optical detector such as a charge-coupled device (CCD) or photomultiplier tube (PMT). Traditionally, Raman microscopy was used to measure the Raman spectrum of a single point on a sample; more recently, the technique has been extended to implement Raman spectroscopy for direct chemical imaging over the whole field of view on a 3D sample. Imaging modes In direct imaging, the whole field of view is examined for scattering over a small range of wavenumbers (Raman shifts). For instance, a wavenumber characteristic of cholesterol could be used to record the distribution of cholesterol within a cell culture. The other approach is hyperspectral imaging or chemical imaging, in which thousands of Raman spectra are acquired from all over the field of view. The data can then be used to generate images showing the location and amount of different components. Taking the cell culture example, a hyperspectral image could show the distribution of cholesterol, as well as proteins, nucleic acids, and fatty acids. Sophisticated signal- and image-processing techniques can be used to ignore the presence of water, culture media, buffers, and other interference. Resolution Raman microscopy, and in particular confocal Raman microscopy, can reach sub-micrometer lateral spatial resolution. Because a Raman microscope is a diffraction-limited system, its spatial resolution depends on the wavelength of light and the numerical aperture of the focusing element. In confocal Raman microscopy, the diameter of the confocal aperture is an additional factor. As a rule of thumb, the lateral spatial resolution can reach approximately the laser wavelength when using air objective lenses, while oil or water immersion objectives can provide lateral resolutions of around half the laser wavelength. This means that when operated in the visible to near-infrared range, a Raman microscope can achieve lateral resolutions of approximately 1 µm down to 250 nm, while the depth resolution (if not limited by the optical penetration depth of the sample) can range from 1–6 µm with the smallest confocal pinhole aperture to tens of micrometers when operated without a confocal pinhole. Since the objective lenses of microscopes focus the laser beam down to the micrometer range, the resulting photon flux is much higher than achieved in conventional Raman setups. This has the added effect of increased photobleaching of molecules emitting interfering fluorescence. However, the high photon flux can also cause sample degradation, and thus, for each type of sample, the laser wavelength and laser power have to be carefully selected. Raman imaging Another tool that is becoming more popular is global Raman imaging. This technique is being used for the characterization of large-scale devices, mapping of different compounds and dynamics studies. It has already been used for the characterization of graphene layers, J-aggregated dyes inside carbon nanotubes and multiple other 2D materials such as MoS2 and WSe2. 
Raman imaging
Another tool that is becoming more popular is global Raman imaging. This technique is used for the characterization of large-scale devices, mapping of different compounds, and dynamics studies. It has already been used for the characterization of graphene layers, J-aggregated dyes inside carbon nanotubes, and multiple other 2D materials such as MoS2 and WSe2. Since the excitation beam is dispersed over the whole field of view, those measurements can be done without damaging the sample. By using Raman microspectroscopy, in vivo time- and space-resolved Raman spectra of microscopic regions of samples can be measured. As a result, the fluorescence of water, media, and buffers can be removed. Consequently, it is suitable to examine proteins, cells and organelles. Raman microscopy for biological and medical specimens generally uses near-infrared (NIR) lasers (785 nm diodes and 1064 nm Nd:YAG are especially common). This reduces the risk of damaging the specimen by applying higher-energy wavelengths. However, the intensity of NIR Raman scattering is low (owing to the ω⁴ dependence of Raman scattering intensity on frequency), and most detectors require very long collection times. Recently, more sensitive detectors have become available, making the technique better suited to general use. Raman microscopy of inorganic specimens, such as rocks, ceramics and polymers, can use a broader range of excitation wavelengths. A related technique, tip-enhanced Raman spectroscopy, can produce high-resolution hyperspectral images of single molecules and DNA.

Correlative Raman imaging
Confocal Raman microscopy can be combined with numerous other microscopy techniques. By using different methods and correlating the data, the user attains a more comprehensive understanding of the sample. Common examples of correlative microscopy techniques are Raman-AFM, Raman-SNOM, and Raman-SEM. Correlative SEM-Raman imaging is the integration of a confocal Raman microscope into an SEM chamber, which allows correlative imaging with several techniques, such as SE, BSE, EDX, EBSD, EBIC, CL, and AFM. The sample is placed in the vacuum chamber of the electron microscope. Both analysis methods are then performed automatically at the same sample location. The obtained SEM and Raman images can then be superimposed. Moreover, adding a focused ion beam (FIB) to the chamber allows removal of material and therefore 3D imaging of the sample. Low-vacuum mode allows analysis of biological and non-conductive samples.

Biological applications
By using Raman microspectroscopy, in vivo time- and space-resolved Raman spectra of microscopic regions of samples can be measured. Sampling is non-destructive, and water, media, and buffers typically do not interfere with the analysis. Consequently, in vivo time- and space-resolved Raman spectroscopy is suitable to examine proteins, cells and organs. In the field of microbiology, confocal Raman microspectroscopy has been used to map intracellular distributions of macromolecules, such as proteins, polysaccharides, and nucleic acids, and polymeric inclusions, such as poly-β-hydroxybutyric acid and polyphosphates in bacteria and sterols in microalgae. Combining stable isotope probing (SIP) experiments with confocal Raman microspectroscopy has permitted determination of assimilation rates of 13C- and 15N-substrates, as well as D2O, by individual bacterial cells.

See also
Raman scattering
Coherent Raman Scattering Microscopy
Scanning electron microscope
Tip-enhanced Raman spectroscopy

References

Raman scattering Microscopes Cell imaging Laboratory equipment Microscopy Optical microscopy
Raman microscope
[ "Chemistry", "Technology", "Engineering", "Biology" ]
1,402
[ "Optical microscopy", "Measuring instruments", "Microscopes", "Microscopy", "Cell imaging" ]
23,046,982
https://en.wikipedia.org/wiki/Nicking%20enzyme%20amplification%20reaction
Nicking Enzyme Amplification Reaction (NEAR) is a method for in vitro DNA amplification, like the polymerase chain reaction (PCR). NEAR is isothermal, replicating DNA at a constant temperature in the range of 55 °C to 59 °C, using a polymerase and a nicking enzyme to exponentially amplify the DNA. One disadvantage of PCR is that it consumes time uncoiling the double-stranded DNA with heat into single strands (a process called denaturation). This leads to amplification times of typically thirty minutes or more for significant production of amplified products. Potential advantages of NEAR over PCR are increased speed and lower energy requirements, characteristics that are shared with other isothermal amplification schemes. A major disadvantage of NEAR relative to PCR is the production of nonspecific amplification products, a common issue with isothermal amplification reactions. The NEAR reaction uses naturally occurring or engineered endonucleases that introduce a strand break on only one strand of a double-stranded DNA cleavage site. The ability of several of these enzymes to catalyze isothermal DNA amplification was disclosed but not claimed in the patents issued for the enzymes themselves.

References
United States Patent Application 20090081670. March 26, 2009. Nicking and Extension Amplification Reaction for the Exponential Amplification of Nucleic Acids.

Biochemistry detection methods Genetics techniques
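To make the speed argument above concrete, here is a toy model of exponential amplification at a constant temperature. The starting copy number and doubling time are invented illustrative values, not measured NEAR parameters.

# Toy model of isothermal exponential amplification:
#   copies(t) = N0 * 2**(t / doubling_time)
# N0 and the doubling time below are hypothetical illustrative values.

def copies(n0, minutes, doubling_time_min):
    return n0 * 2.0 ** (minutes / doubling_time_min)

n0 = 100              # starting template copies (hypothetical)
doubling_time = 0.5   # minutes per doubling (hypothetical)
for t in (2, 5, 10):
    print(f"after {t:2d} min: ~{copies(n0, t, doubling_time):.3g} copies")

Because no time is spent ramping the temperature up and down for heat denaturation, an isothermal reaction can in principle reach detectable product levels in minutes rather than the thirty or more typical of thermocycled PCR.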
Nicking enzyme amplification reaction
[ "Chemistry", "Engineering", "Biology" ]
289
[ "Biochemistry methods", "Genetics techniques", "Biotechnology stubs", "Genetic engineering", "Chemical tests", "Biochemistry stubs", "Biochemistry detection methods", "Biochemistry" ]
23,047,742
https://en.wikipedia.org/wiki/2-Acetylaminofluorene
2-Acetylaminofluorene (AAF, 2-AAF) is a carcinogenic and mutagenic derivative of fluorene. It is used as a biochemical tool in the study of carcinogenesis. It induces tumors in a number of species in the liver, bladder, and kidney. The metabolism of this compound in the body by means of biotransformation reactions is the key to its carcinogenicity. 2-AAF is a substrate for cytochrome P-450 (CYP) enzymes, which are part of a superfamily found in almost all organisms. This reaction results in the formation of hydroxyacetylaminofluorene, which is a proximal carcinogen and is more potent than the parent molecule. The N-hydroxy metabolite undergoes several enzymatic and non-enzymatic rearrangements. It can be O-acetylated by the cytosolic N-acetyltransferase enzyme to yield N-acetyl-N-acetoxyaminofluorene. This intermediate can spontaneously rearrange to form the arylamidonium ion and a carbonium ion, which can interact directly with DNA to produce DNA adducts. In addition to esterification by acetylation, the N-hydroxy derivative can be O-sulfated by the cytosolic sulfotransferase enzyme, giving rise to the N-acetyl-N-sulfoxy product. In addition, the cytosolic N,O-aryl hydroxamic acid acyltransferase enzyme catalyzes the transfer of the acetyl group from the N atom of N-OH-2-AAF to the O atom of the N-OH group to produce N-acetoxy-2-aminofluorene. This reactive metabolite spontaneously decomposes to form a nitrenium ion, which will also react with DNA. However, the product of this latter reaction is the deacetylated aminofluorene adduct. The interconversion of amide and amine metabolites of 2-AAF can further occur via the microsomal deacetylase enzyme, producing the N-hydroxy metabolite of the amine derivative. Subsequent esterification of the aryl hydroxylamine by sulfotransferase yields the sulfate ester, which also spontaneously decomposes to form a nitrenium ion. The reactive nitrenium, carbonium and arylamidonium ion metabolites of 2-AAF react with the nucleophilic groups in DNA, proteins and endogenous thiols such as glutathione. Other metabolites such as the N,O-glucuronide, although not directly activated products, can be important in the carcinogenic process because they are capable of degradation to proximal N-hydroxy metabolites. This metabolite is presumed to be involved in the formation of bladder tumors. The mechanism for this is thought to involve degradation of the glucuronide in the bladder due to the acidic pH of urine.

See also
Acetoxyacetylaminofluorene
Hydroxyacetylaminofluorene

References

Carcinogens Acetamides
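The branching bioactivation scheme described above is easier to follow as a directed graph. The sketch below simply encodes the routes named in the text as Python data; the node and edge labels paraphrase the text and are not a substitute for the primary literature.

# The 2-AAF bioactivation routes described above, as a small directed graph:
# each edge is (product, enzyme or step responsible).

pathway = {
    "2-AAF": [
        ("N-hydroxy-AAF", "cytochrome P-450 N-hydroxylation")],
    "N-hydroxy-AAF": [
        ("N-acetyl-N-acetoxyaminofluorene", "N-acetyltransferase (O-acetylation)"),
        ("N-acetyl-N-sulfoxy product", "sulfotransferase (O-sulfation)"),
        ("N-acetoxy-2-aminofluorene", "N,O-aryl hydroxamic acid acyltransferase"),
        ("N-hydroxy-2-aminofluorene", "microsomal deacetylase")],
    "N-acetyl-N-acetoxyaminofluorene": [
        ("arylamidonium / carbonium ions", "spontaneous rearrangement")],
    "N-acetoxy-2-aminofluorene": [
        ("nitrenium ion", "spontaneous decomposition")],
    "N-hydroxy-2-aminofluorene": [
        ("sulfate ester, then nitrenium ion", "sulfotransferase, then decomposition")],
    "arylamidonium / carbonium ions": [
        ("DNA adducts", "direct reaction with DNA")],
    "nitrenium ion": [
        ("deacetylated aminofluorene DNA adduct", "reaction with DNA")],
}

for parent, edges in pathway.items():
    for product, step in edges:
        print(f"{parent} --[{step}]--> {product}")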
2-Acetylaminofluorene
[ "Chemistry", "Environmental_science" ]
694
[ "Carcinogens", "Toxicology" ]
23,057,806
https://en.wikipedia.org/wiki/Avogadro%20%28software%29
Avogadro is a molecule editor and visualizer designed for cross-platform use in computational chemistry, molecular modeling, bioinformatics, materials science, and related areas. It is extensible via a plugin architecture.

Features
Molecule builder-editor for Windows, Linux, Unix, and macOS.
All source code is licensed under the GNU General Public License (GPL) version 2.
Supported languages include: Chinese, English, French, German, Italian, Russian, Spanish, and Polish.
Supports multi-threaded rendering and computation.
Plugin architecture for developers, including rendering, interactive tools, commands, and Python scripts.
OpenBabel import of files, input generation for multiple computational chemistry packages, X-ray crystallography, and biomolecules.

See also

References

External links

Free chemistry software Free software programmed in C++ Molecular modelling software Computational chemistry software Science software that uses Qt Chemistry software for Linux Software using the GNU General Public License Free bioinformatics software
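Since Avogadro delegates much of its file handling to Open Babel, a flavor of what that library does can be had from Open Babel's own Python bindings (pybel). The snippet below assumes Open Babel 3.x with its Python bindings installed; it illustrates the underlying library, not Avogadro's internal plugin API.

# Build a rough 3D structure from a SMILES string with Open Babel's
# Python bindings, the library Avogadro uses for file import/export.
from openbabel import pybel

mol = pybel.readstring("smi", "CCO")  # ethanol, from a SMILES string
mol.make3D()                          # generate approximate 3D coordinates
print(mol.write("xyz"))               # emit the structure in XYZ format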
Avogadro (software)
[ "Chemistry" ]
202
[ "Molecular modelling software", "Free chemistry software", "Chemistry software", "Computational chemistry software", "Theoretical chemistry stubs", "Molecular modelling", "Computational chemistry", "Computational chemistry stubs", "Chemistry software for Linux", "Physical chemistry stubs" ]
23,058,094
https://en.wikipedia.org/wiki/Extensible%20Computational%20Chemistry%20Environment
The Extensible Computational Chemistry Environment (ECCE, pronounced "etch-ā") provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies.

Major features
Support for building molecular models.
Graphical user interface to a broad range of electronic structure theory types. Supported codes currently include NWChem, GAMESS-UK, Gaussian 03, Gaussian 98, and Amica. Other codes are registered based on user requirements.
Graphical user interface for basis set selection.
Remote submission of calculations to UNIX and Linux workstations, Linux clusters, and supercomputers. Supported queue management systems include PBS, LSF, NQE/NQS, LoadLeveler, and Maui Scheduler.
Three-dimensional visualization and graphical display of molecular data properties while jobs are running and after completion. Molecular orbitals and vibrational frequencies are among the properties displayed.
Support for importing results from NWChem, Gaussian 94, Gaussian 98, and Gaussian 03 calculations run outside of the ECCE environment.
Extensive web-based help.

See also

External links
GitHub source code

Computational chemistry software Molecular modelling software
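As an illustration of the kind of input deck ECCE generates through its graphical interface, here is a minimal hand-written NWChem input for a single-point SCF energy of water; the geometry and basis set are arbitrary example choices, not output captured from ECCE itself.

# Write a minimal NWChem input deck of the sort ECCE's GUI produces.
nwchem_input = """\
start water_scf
title "Water single-point SCF"
geometry units angstroms
  O  0.00000  0.00000  0.00000
  H  0.75700  0.58600  0.00000
  H -0.75700  0.58600  0.00000
end
basis
  * library sto-3g
end
task scf energy
"""

with open("water_scf.nw", "w") as f:
    f.write(nwchem_input)
print("Wrote water_scf.nw; run it with: nwchem water_scf.nw")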
Extensible Computational Chemistry Environment
[ "Chemistry" ]
267
[ "Molecular modelling software", "Molecular physics", "Computational chemistry software", "Chemistry software", "Molecular modelling", "Computational chemistry", "Molecular physics stubs" ]
25,950,683
https://en.wikipedia.org/wiki/Operating%20temperature
An operating temperature is the allowable temperature range of the local ambient environment at which an electrical or mechanical device operates. The device will operate effectively within a specified temperature range, which varies based on the device function and application context, and which runs from the minimum operating temperature to the maximum operating temperature (or peak operating temperature). Outside this range of safe operating temperatures the device may fail. It is one component of reliability engineering. Similarly, biological systems have a viable temperature range, which might be referred to as an "operating temperature".

Ranges
Most semiconductor devices are manufactured in several temperature grades. Broadly accepted grades are:
Commercial: 0 °C to 70 °C (32 °F to 158 °F)
Industrial: −40 °C to 85 °C (−40 °F to 185 °F)
Military: −55 °C to 125 °C (−67 °F to 257 °F)
Nevertheless, each manufacturer defines its own temperature grades, so designers must pay attention to datasheet specifications. For example, Maxim Integrated uses five temperature grades for its products:
Full Military: −55 °C to 125 °C (−67 °F to 257 °F)
Automotive: −25 °C to 125 °C (−13 °F to 257 °F)
AEC-Q100 Level 2: −40 °C to 105 °C (−40 °F to 221 °F)
Extended Industrial: −40 °C to 85 °C (−40 °F to 185 °F)
Industrial: −20 °C to 85 °C (−4 °F to 185 °F)
The use of such grades ensures that a device is suitable for its application and will withstand the environmental conditions in which it is used. Normal operating temperature ranges are affected by several factors, such as the power dissipation of the device. These factors are used to define a "threshold temperature" of a device, i.e. its maximum normal operating temperature, and a maximum operating temperature beyond which the device will no longer function. Between these two temperatures, the device operates at a non-peak level; for instance, a resistor may have a threshold temperature and a maximum temperature, between which it exhibits thermal derating. For electrical devices, the operating temperature may be the junction temperature (TJ) of the semiconductor in the device. The junction temperature is affected by the ambient temperature, and for integrated circuits is given by the equation TJ = Ta + PD × Rja, in which TJ is the junction temperature in °C, Ta is the ambient temperature in °C, PD is the power dissipation of the integrated circuit in W, and Rja is the junction-to-ambient thermal resistance in °C/W.
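The junction-temperature relation above is simple enough to compute directly; in the sketch below the ambient temperature, power dissipation, and thermal resistance are hypothetical datasheet-style values chosen for illustration.

# Junction temperature from the relation given above: TJ = Ta + PD * Rja.

def junction_temperature(ta_c, pd_w, rja_c_per_w):
    """Junction temperature (deg C) from ambient temperature (deg C),
    power dissipation (W), and thermal resistance (deg C/W)."""
    return ta_c + pd_w * rja_c_per_w

ta = 25.0    # ambient temperature, deg C (hypothetical)
pd = 1.5     # power dissipation, W (hypothetical)
rja = 40.0   # junction-to-ambient thermal resistance, deg C/W (hypothetical)

tj = junction_temperature(ta, pd, rja)
print(f"TJ = {tj:.1f} deg C")   # 85.0 deg C for these values
if tj > 125.0:                  # e.g. a military-grade maximum
    print("exceeds the example 125 deg C maximum operating temperature")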
Aerospace and military
Electrical and mechanical devices used in military and aerospace applications may need to endure greater environmental variability, including temperature range. In the United States, the Department of Defense has defined the United States Military Standard for all products used by the United States Armed Forces. A product's environmental design and test limits to the conditions that it will undergo throughout its service life are specified in MIL-STD-810, the Department of Defense Test Method Standard for Environmental Engineering Considerations and Laboratory Tests. The MIL-STD-810G standard specifies that "operating temperature stabilization is attained when the temperature of the functioning part(s) of the test item considered to have the longest thermal lag is changing at a rate of no more than 2.0 °C (3.6 °F) per hour." It also specifies procedures to assess the performance of materials under extreme temperature loads. Military engine turbine blades experience two significant deformation stresses during normal service: creep and thermal fatigue. Creep life of a material is "highly dependent on operating temperature", and creep analysis is thus an important part of design validation. Some of the effects of creep and thermal fatigue may be mitigated by integrating cooling systems into the device's design, reducing the peak temperature experienced by the metal.

Commercial and retail
Commercial and retail products are manufactured to less stringent requirements than those for military and aerospace applications. For example, microprocessors produced by Intel Corporation are manufactured to three grades: commercial, industrial and extended. Because some devices generate heat during operation, they may require thermal management to ensure they are within their specified operating temperature range; specifically, that they are operating at or below the maximum operating temperature of the device. Cooling a microprocessor mounted in a typical commercial or retail configuration requires "a heatsink properly mounted to the processor, and effective airflow through the system chassis". Systems are designed to protect the processor from unusual operating conditions, such as "higher than normal ambient air temperatures or failure of a system thermal management component (such as a system fan)", though in "a properly designed system, this feature should never become active". Cooling and other thermal management techniques may affect performance and noise level. Noise mitigation strategies may be required in residential applications to ensure that the noise level does not become uncomfortable. Battery service life and efficacy are affected by operating temperature. Efficacy is determined by expressing the service life achieved by the battery at a given temperature as a percentage of the service life achieved at a reference temperature. Ohmic load and operating temperature often jointly determine a battery's discharge rate. Moreover, if the expected operating temperature for a primary battery deviates from the typical 10 °C to 25 °C (50 °F to 77 °F) range, then operating temperature "will often have an influence on the type of battery selected for the application". Energy reclamation from partially depleted lithium sulfur dioxide batteries has been shown to improve when "appropriately increasing the battery operating temperature".

Biology
Mammals attempt to maintain a comfortable body temperature under various conditions by thermoregulation, part of mammalian homeostasis. The lowest normal temperature of a mammal, the basal body temperature, is achieved during sleep. In women, it is affected by ovulation, causing a biphasic pattern that may be used as a component of fertility awareness. In humans, the hypothalamus regulates metabolism, and hence the basal metabolic rate. Among its functions is the regulation of body temperature. The core body temperature is also one of the classic phase markers for measuring the timing of an individual's circadian rhythm. Changes to the normal human body temperature may result in discomfort. The most common such change is a fever, a temporary elevation of the body's thermoregulatory set-point. Hyperthermia is an acute condition caused by the body absorbing more heat than it can dissipate, whereas hypothermia is a condition in which the body's core temperature drops below that required for normal metabolism, and which is caused by the body's inability to replenish the heat that is being lost to the environment.

Notes

References

Threshold temperatures
Operating temperature
[ "Physics", "Chemistry" ]
1,283
[ "Physical phenomena", "Phase transitions", "Threshold temperatures" ]
25,958,403
https://en.wikipedia.org/wiki/Pleiotropy%20%28drugs%29
In pharmacology, pleiotropy includes all of a drug's actions other than those for which the agent was specifically developed. It may include adverse effects, which are detrimental, but the term is often used to denote additional beneficial effects. For example, statins are HMG-CoA reductase inhibitors that primarily act by decreasing cholesterol synthesis, but which are believed to have other beneficial effects, including acting as antioxidants and stabilizing atherosclerotic plaques. Steroid drugs, such as prednisone and prednisolone, have pleiotropic effects, including systemic ones, for the same reason that endogenous steroid hormones do: cells throughout the body have receptors that can respond to them, because the endogenous ones are endocrine messengers. Another example is melatonin, which has a wide range of effects on biological systems at multiple scales, from modulating the circadian rhythm and inducing sleep via the activation of melatonergic receptors, to receptor-independent antioxidative and anti-inflammatory effects across all organs, down to the level of individual cells.

See also
Adverse effect
Pleiotropy in genetics

References

Pharmacology
Pleiotropy (drugs)
[ "Chemistry" ]
250
[ "Pharmacology", "Medicinal chemistry" ]
25,958,537
https://en.wikipedia.org/wiki/Phyloscan
Phyloscan is a web service for DNA sequence analysis that is free and open to all users (without login requirement). For locating matches to a user-specified sequence motif for a regulatory binding site, Phyloscan provides a statistically sensitive scan of user-supplied mixed aligned and unaligned DNA sequence data. Phyloscan's strength is that it brings together the Staden method for computing statistical significance, the "phylogenetic motif model" scanning functionality of the MONKEY software that models evolutionary relationships among aligned sequences, the use of the Bailey & Gribskov method for combining statistics across non-aligned sequence data, and the Neuwald & Green technique for combining statistics across multiple binding sites found within a single gene promoter region. References External links Phyloscan homepage at Brown University Bioinformatics Bioinformatics software Computational science
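Phyloscan's statistical machinery (the Staden, MONKEY, Bailey & Gribskov, and Neuwald & Green methods above) is beyond a short sketch, but the core operation of any motif scan, sliding a position weight matrix along a sequence and scoring each window, can be illustrated generically. The motif, background model, and sequence below are all invented.

# Generic PWM scan: score each window of a DNA sequence with log-odds
# against a uniform background. This illustrates motif scanning in general,
# not Phyloscan's phylogenetic models or significance calculations.
import math

BACKGROUND = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}

# Hypothetical 4-column motif; each column gives per-base probabilities.
PWM = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
]

def score_window(window):
    """Log-odds score (bits) of the window under the motif vs. background."""
    return sum(math.log2(col[base] / BACKGROUND[base])
               for col, base in zip(PWM, window))

sequence = "TTAGCAGACATT"
for i in range(len(sequence) - len(PWM) + 1):
    window = sequence[i:i + len(PWM)]
    print(f"position {i:2d} {window}: {score_window(window):+.2f} bits")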
Phyloscan
[ "Mathematics", "Engineering", "Biology" ]
177
[ "Biological engineering", "Bioinformatics software", "Applied mathematics", "Computational science", "Bioinformatics" ]
2,910,030
https://en.wikipedia.org/wiki/Great%20dirhombicosidodecahedron
In geometry, the great dirhombicosidodecahedron (or great snub disicosidisdodecahedron) is a nonconvex uniform polyhedron, indexed last as U75. It has 124 faces (40 triangles, 60 squares, and 24 pentagrams), 240 edges, and 60 vertices. This is the only non-degenerate uniform polyhedron with more than six faces meeting at a vertex. Each vertex has 4 squares which pass through the vertex central axis (and thus through the centre of the figure), alternating with two triangles and two pentagrams. Another unusual feature is that the faces all occur in coplanar pairs. This is also the only uniform polyhedron that cannot be made by the Wythoff construction from a spherical triangle. It has a special Wythoff symbol relating it to a spherical quadrilateral. This symbol suggests that it is a sort of snub polyhedron, except that instead of the non-snub faces being surrounded by snub triangles, as in most snub polyhedra, they are surrounded by snub squares. It has been nicknamed "Miller's monster" (after J. C. P. Miller, who with H. S. M. Coxeter and M. S. Longuet-Higgins enumerated the uniform polyhedra in 1954).

Related polyhedra
If the definition of a uniform polyhedron is relaxed to allow any even number of faces adjacent to an edge, then this definition gives rise to one further polyhedron: the great disnub dirhombidodecahedron, which has the same vertices and edges but with a different arrangement of triangular faces. The vertices and edges are also shared with the uniform compounds of 20 octahedra or 20 tetrahemihexahedra. 180 of the 240 edges are shared with the great snub dodecicosidodecahedron. This polyhedron is related to the nonconvex great rhombicosidodecahedron (quasirhombicosidodecahedron) by a branched cover: there is a function from the great dirhombicosidodecahedron to the quasirhombicosidodecahedron that is 2-to-1 everywhere, except for the vertices.

Cartesian coordinates
Let the point be given by , where φ is the golden ratio. Let the matrix be given by . It is the rotation around the axis by an angle of , counterclockwise. Let the linear transformations be the transformations which send a point to the even permutations of with an even number of minus signs. These transformations constitute the group of rotational symmetries of a regular tetrahedron; together with the rotations, they constitute the group of rotational symmetries of a regular icosahedron. Then the 60 points are the vertices of a great dirhombicosidodecahedron. The edge length equals , the circumradius equals , and the midradius equals . For a great dirhombicosidodecahedron whose edge length is 1, the circumradius is . Its midradius is .

Gallery

References
Har'El, Z. Uniform Solution for Uniform Polyhedra, Geometriae Dedicata 47, 57–110, 1993. Zvi Har'El, Kaleido software, images, dual images.
Mäder, R. E. Uniform Polyhedra, Mathematica J. 3, 48–57, 1993.

External links
http://www.mathconsult.ch/showroom/unipoly/75.html
http://www.software3d.com/MillersMonster.php

Uniform polyhedra
Great dirhombicosidodecahedron
[ "Physics" ]
765
[ "Uniform polytopes", "Uniform polyhedra", "Symmetry" ]
2,911,349
https://en.wikipedia.org/wiki/Central%20tolerance
In immunology, central tolerance (also known as negative selection) is the process of eliminating any developing T or B lymphocytes that are autoreactive, i.e. reactive to the body itself. Through the elimination of autoreactive lymphocytes, tolerance ensures that the immune system does not attack self peptides. Lymphocyte maturation (and central tolerance) occurs in primary lymphoid organs such as the bone marrow and the thymus. In mammals, B cells mature in the bone marrow and T cells mature in the thymus. Central tolerance is not perfect, so peripheral tolerance exists as a secondary mechanism to ensure that T and B cells are not self-reactive once they leave primary lymphoid organs. Peripheral tolerance is distinct from central tolerance in that it occurs after developing immune cells exit the primary lymphoid organs (the thymus and bone marrow) and are exported into the periphery.

Function
Central tolerance is essential to proper immune cell functioning because it helps ensure that mature B cells and T cells do not recognize self-antigens as foreign microbes. More specifically, central tolerance is necessary because T cell receptors (TCRs) and B cell receptors (BCRs) are made by cells through random somatic rearrangement. This process, known as V(D)J recombination, is important because it increases receptor diversity, which increases the likelihood that B cells and T cells will have receptors for novel antigens. Junctional diversity occurs during recombination and serves to further increase the diversity of BCRs and TCRs. The production of random TCRs and BCRs is an important method of defense against microbes, given their high mutation rate. This process also plays an important role in promoting the survival of a species, because there will be a variety of receptor arrangements within a species – this enables a very high chance of at least one member of the species having receptors for a novel antigen. While the process of somatic recombination is essential to a successful immune defense, it can lead to autoreactivity. For example, lack of functional RAG1/2, enzymes necessary for somatic recombination, has been linked to the development of immune cytopenias, in which antibodies are produced against the patient's blood cells. Due to the nature of random receptor recombination, some BCRs and TCRs will be produced that recognize self antigens as foreign. This is problematic, since these B and T cells would, if activated, mount an immune response against self if not killed or inactivated by central tolerance mechanisms. Therefore, without central tolerance, the immune system could attack self, which is not sustainable and could result in an autoimmune disorder.

Mechanism
The result of central tolerance is a population of lymphocytes that do not mount an immune response towards self-antigens. These cells use their TCR or BCR specificity to recognize foreign antigens, in order to play their specific roles in the immune reaction against those antigens. In this way, the mechanisms of central tolerance ensure that lymphocytes that would recognise self-antigens in a way that could endanger the host are not released into the periphery. It is of note that T cells, despite tolerance mechanisms, are at least to some extent self-reactive. The TCR of conventional T cells must be able to recognize parts of major histocompatibility complex (MHC) molecules (MHC class I in the case of CD8+ T cells or MHC class II in the case of CD4+ T cells) to create a proper interaction with the antigen-presenting cell.
Furthermore, the TCRs of regulatory T cells (Treg cells) are directly reactive towards self-antigens (although their self-reactivity is not very strong) and use this autoreactivity to regulate immune reactions by suppressing the immune system when it should not be active. Importantly, lymphocytes can only develop tolerance towards antigens that are present in the bone marrow (for B cells) and thymus (for T cells).

T cell
T cell progenitors (also called thymocytes) are created in the bone marrow and then migrate to the thymus, where they continue their development. During this development, the thymocytes undergo V(D)J recombination; some of the developing T cell clones produce a TCR that is completely nonfunctional (unable to bind peptide-MHC complexes), and some produce a TCR that is self-reactive and could therefore promote autoimmunity. These "problematic" clones are therefore removed from the pool of T cells by specific mechanisms. First, during "positive selection", the thymocytes are tested for whether their TCR works properly, and those with a nonfunctional TCR are removed by apoptosis. The mechanism has its name because it selects for survival only those thymocytes whose TCRs do interact with peptide-MHC complexes on antigen presenting cells in the thymus. During the late stage of positive selection, another process called "MHC restriction" (or lineage commitment) takes place. In this process, thymocytes whose TCR recognizes MHC I (MHC class I) molecules become CD4− CD8+ and thymocytes whose TCR recognizes MHC II (MHC class II) molecules become CD4+ CD8−. Subsequently, the positively selected thymocytes go through "negative selection", which tests the thymocytes for self-reactivity. The cells that are strongly self-reactive (and therefore prone to attacking host cells) are removed by apoptosis. Thymocytes that are still self-reactive, but only slightly, develop into regulatory T (Treg) cells. Thymocytes that are not self-reactive become mature naïve T cells. Both the Treg and mature naïve T cells subsequently migrate to the secondary lymphoid organs. Negative selection has its name because it selects for survival only those thymocytes whose TCRs do not interact (or interact only slightly) with peptide-MHC complexes on antigen presenting cells in the thymus. Two other terms, recessive and dominant tolerance, are also important regarding T cell central tolerance. Both terms refer to possible ways of establishing tolerance towards a particular antigen (typically a self antigen). "Recessive tolerance" means that the antigen is tolerated via deletion of those T cells that would facilitate an immune response against the antigen (deletion of autoreactive cells in negative selection). "Dominant tolerance" means that the T cell clones specific for the antigen are deviated into Treg cells and therefore suppress the immune response against the antigen (Treg selection during negative selection).

Steps of T cell tolerance

Development of T cell progenitors
T cell precursors originate from the bone marrow (BM). The population of the earliest hematopoietic progenitors does not bear markers of differentiated cells (for which reason they are called Lin−, "lineage negative") but expresses molecules such as SCA1 (stem cell antigen) and KIT (receptor for the stem cell factor SCF). Based on these markers the cells are called LSKs (Lin− SCA1+ KIT+).
This population can be further divided, based on the expression of markers such as CD150 and FMS-related tyrosine kinase 3 (FLT3), into CD150+ FLT3− hematopoietic stem cells (HSCs) and CD150− FLT3low multipotent progenitors (MPPs). The HSCs are "true hematopoietic stem cells" because they have the ability of self-renewal (generating new HSCs) and also have the potential to differentiate into all blood cell types. The direct descendants of HSCs are the more mature multipotent progenitors (MPPs), which proliferate extensively and can differentiate into all blood cell types but are not capable of self-renewal (they do not have the ability to indefinitely generate new MPPs, and therefore HSCs are needed for the generation of new MPPs). Some of the MPPs further upregulate the expression of FLT3 (becoming CD150− FLT3high) and start to upregulate genes specific for the lymphoid lineage (for example Rag1), but remain Lin−. These progenitors (which still belong to the LSK cells) consist of two similar populations termed lymphoid-primed MPPs (LMPPs) and early lymphoid progenitors (ELPs). The LMPPs/ELPs subsequently give rise to common lymphoid progenitors (CLPs). These cells (FLT3high LIN− KITlow) do not belong to the LSK pool; they are more mature and more prone towards the lymphoid lineage, meaning that under normal circumstances they will ultimately give rise to T or B cells or other lymphocytes (NK cells). But since they are only progenitors, their cell fate is not strictly predetermined and they still have the ability to differentiate into other lineages.

Migration into the thymus
Progenitors from the bone marrow (BM), even the HSCs, have the ability to randomly exit the BM into the bloodstream and thus can be readily detected there. Therefore, after being generated, the T cell progenitors exit the BM and are randomly carried by the blood throughout the body. At the moment they reach the postcapillary venules in the thymic cortico-medullary junction, they start slowing down and rolling on the endothelium, because all the progenitors, including LSK cells, express on their surface the glycoprotein PSGL1, which is a ligand for P-selectin, expressed on the thymic endothelium. But out of all the aforementioned T cell progenitors, only the LMPPs/ELPs and CLPs express the chemokine receptors CCR7 and CCR9 that enable them to enter the thymus. The thymic endothelium expresses the chemokines CCL19 and CCL21, which are ligands for CCR7, and CCL25, which is a ligand for CCR9. The final part of thymic entry is not yet fully understood. The suggested model is that receptor sensing of chemokines by the progenitors activates their integrins (suggested integrins are VLA-4 and LFA-1), which engage with ligands on the endothelium. This interaction stops the rolling, leads to cellular arrest and finally to transmigration along the chemokine gradient into the thymus. Therefore, all the progenitors will roll on the thymic endothelium, but only the LMPPs/ELPs and CLPs will enter the thymus, because only they have the proper receptor equipment to do so. The mechanism is highly similar to the transmigration used by leukocytes to enter lymph nodes or inflamed tissues.

Early thymic development
From the moment the LMPPs/ELPs and CLPs enter the thymus at the cortico-medullary junction, they are referred to as thymus settling progenitors (TSPs). The TSPs proliferate extensively and start to migrate to the subcapsular zone of the thymus. It is not clear what signals drive the migration.
One possibility is that they migrate along chemokine gradients, using the CXCR4, CCR7 and CCR9 receptors, but the migration could also be driven only by interactions of integrins with other cells and the ECM (extracellular matrix), without the direct involvement of chemokines. As they migrate towards the subcapsular zone, the TSPs continue their differentiation, which is driven mainly by the thymic microenvironment. Of the many signals the TSPs and other subsequent precursors receive from the microenvironment, Notch signalling is especially important in driving their differentiation fate. The precursors express the Notch1 receptor, which is activated by ligands present in the thymic tissue. The subsequent activation of the Notch pathway leads to a gradual loss of the progenitors' capability to generate other cell lineages, and they ultimately become capable only of creating T cells, but this comes at the later stages of the differentiation. At the stage of TSPs, the progenitors still retain the capacity to create both lymphoid and myeloid cells. Given their capability to generate other cell lineages (mainly in vitro), it is even debated whether they can physiologically, at least partially, contribute to the generation of other cell types present in the thymus, mainly plasmacytoid dendritic cells (pDCs). But this has not yet been clearly proven.

DN to DP stages
In the next step, the TSPs give rise to early thymic progenitors (ETPs), also called double negative 1 (DN1) cells. The term "double negative" refers to the fact that at this stage the precursors express neither the CD4 nor the CD8 coreceptor (sometimes they are even termed "triple negative" because they also do not express the CD3 complex). The DN stages can be distinguished by the expression of the surface markers CD44 and CD25, with the DN1 cells being CD44+ CD25−. Similarly to the TSPs, the DN1 cells are still capable of generating other cell types aside from T cells, such as B cells, NK cells, DCs and macrophages (lymphoid and myeloid lineages). But, due to Notch signalling, they start committing to the T cell lineage by expressing transcription factors (TFs) such as GATA3 and TCF1. Subsequently, the DN1 cells differentiate into DN2 cells, which are CD44+ and CD25+. The DN2 stage can be further divided into two substages, DN2a and DN2b. The transition from the earlier DN2a substage to the later DN2b is also called commitment, because it is at this moment that the T cell precursors finally and completely lose their ability to generate other cell lineages; from that moment they can (even in vitro) only differentiate into T cells. After commitment, at the DN2b substage, the precursors also start to produce the CD3 complex (the signalling component of the future TCR receptor complex). Next, the precursors continue their differentiation into the DN3 phase, in which they are CD44− CD25+. At this stage, the cells finally arrive at the subcapsular zone of the thymus, proliferate further and, most importantly, start to express Rag1 and Rag2 (the recombinases of the V(D)J recombination of T or B cell receptors). It is therefore at the DN3 stage that the T cell precursors start to build their TCRs. It is also at this stage that the precursors decide whether they become an αβ or a γδ T cell. There are two possible models of how this decision is made. The first possibility is that the cell fate is simply determined during the development of the precursor, by a commitment similar to that seen in the development of other cell lineages.
Therefore, some T cell precursors commit to the γδ lineage and at this step recombine a γδTCR, while others commit to the αβ lineage and similarly recombine an αβTCR. The other, and generally more accepted, model is that the commitment is determined during TCR rearrangement and formation. Since V(D)J recombination is a step-by-step process, the precursors first recombine their genes to produce a γδTCR. At that moment, the strength of the signal produced by the newly formed TCR decides the outcome. If the γδTCR is properly formed and receives a strong signal by interacting with the ligands present in the thymus, then the precursor continues its development into a γδ T cell through specific selection processes. If the T cell precursor receives only a weak signal, then the γδTCR formation is abandoned and recombination towards an αβTCR starts. Those precursors first recombine the TCRβ chain and combine it with an invariant substitute TCRα chain (pre-Tα) and the CD3 complex formed in previous stages to create the so-called pre-TCR. With this premature TCR, they enter a process called β-selection. This is a control step, in which the progenitor needs to receive a positive signal from the pre-TCR to survive. They further need a signal from CXCR4 (whose ligand is CXCL12), which here serves not to direct migration but, along with Notch signalling, as a survival signal. The β-selection step therefore controls whether the TCRβ chain is properly formed and functional. It can also be understood as a positive selection specific only for the TCRβ chain (the TCRα chain is not yet formed); control for self-reactivity is not included in this step and comes later, especially in the medulla. The cells that do not create a functional γδTCR or pre-TCR, or do not successfully pass through β-selection, are removed by apoptosis. The cells that successfully pass β-selection continue their development into the DN4 stage, stop expressing CD25, becoming CD44− CD25−, and begin migrating into the thymic cortex. It is, again, not completely clear what drives the migration. Probably, the receptors CXCR4 and CCR9 on the DN4 cells drive the migration along gradients of the chemokines CXCL12 and CCL25, although other models of migration to the cortex have been proposed, based mainly on the movement dynamics of cells due to their extensive proliferation, or on fluid currents in the thymus, without the direct involvement of chemokine-driven migration. The DN4 cells subsequently begin to express the CD8 and CD4 coreceptors, becoming CD8+ CD4+ DP cells ("DP" means double positive, because they express both coreceptors). Once in the thymic cortex, the DP cells finalize the rearrangement of the TCRα chain, which results in the production of the complete αβTCR complex and marks the cells as ready to enter positive selection, which takes place in the thymic cortex. During positive selection, T cells are checked for their ability to bind peptide-MHC complexes with adequate affinity. If the T cell cannot bind the MHC class I or MHC class II complex, it does not receive survival signals, so it dies via apoptosis. T cell receptors with sufficient affinity for peptide-MHC complexes are selected for survival. Depending on whether the T cell binds MHC I or II, it will become a CD8+ or CD4+ T cell, respectively. Positive selection occurs in the thymic cortex with the help of thymic epithelial cells that carry surface MHC I and MHC II molecules. During negative selection, T cells are tested for their affinity to self. If they bind a self peptide, then they are signaled to apoptose (a process called clonal deletion).
The thymic epithelial cells display self antigens to the T cells to test their affinity for self. The transcriptional regulators AIRE and Fezf2 play important roles in the expression of self tissue antigens on the thymic epithelial cells in the thymus. Negative selection occurs in the cortico-medullary junction and in the thymic medulla. The T cells that do not bind self, but do recognize antigen/MHC complexes, and are either CD4+ or CD8+, migrate to secondary lymphoid organs as mature naïve T cells. Regulatory T cells are another type of T cell that matures in the thymus. Selection of Treg cells occurs in the thymic medulla and is accompanied by the transcription of FOXP3. Treg cells are important for regulating autoimmunity by suppressing the immune system when it should not be active.

B cell
Immature B cells in the bone marrow undergo negative selection when they bind self peptides. Properly functioning B cell receptors recognize non-self antigens, or pathogen-associated molecular patterns (PAMPs).

Main outcomes of autoreactivity of BCRs
Apoptosis (clonal deletion)
Receptor editing: the self-reactive B cell changes specificity by rearranging genes and develops a new BCR that does not respond to self. This process gives the B cell a chance to edit its BCR before it is signaled to apoptose or becomes anergic.
Induction of anergy (a state of non-reactivity)

Genetic diseases
Genetic defects in central tolerance can lead to autoimmunity. Autoimmune Polyendocrinopathy Syndrome Type I is caused by mutations in the human gene AIRE. This leads to a lack of expression of peripheral antigens in the thymus, and hence a lack of negative selection towards key peripheral proteins such as insulin. Multiple autoimmune symptoms result.

History
The phenomenon of central tolerance was first described by Ray Owen in 1945, when he noticed that dizygotic twin cattle did not produce antibodies when one of the twins was injected with the other's blood. His findings were confirmed by later experiments by Hasek and Billingham. The results were explained by Burnet's clonal selection hypothesis. Burnet and Medawar won the Nobel Prize in 1960 for their work in explaining how immune tolerance works.

See also
Autoimmunity
Immunology
Peripheral tolerance

References

Immunology
Central tolerance
[ "Biology" ]
4,498
[ "Immunology" ]
2,912,664
https://en.wikipedia.org/wiki/Precipitable%20water
Precipitable water is the depth of water in a column of the atmosphere if all the water in that column were precipitated as rain. As a depth, precipitable water is measured in millimeters or inches. It is often abbreviated as "TPW", for total precipitable water.

Measurement
There are different measurement techniques:
One type of measurement is based on measuring the solar irradiance at two wavelengths, one in a water absorption band and the other outside it. The precipitable water column is determined from the irradiances in these bands using the Beer–Lambert law.
The precipitable water can also be calculated by integration of radiosonde data (relative humidity, pressure and temperature) over the whole atmosphere. The data can be viewed on a Lifted-K index; the numbers represent inches of water, as mentioned above, for a geographical location.
More recently, methods using the Global Positioning System have been developed.
Some work has been performed to create empirical relationships between surface specific humidity and precipitable water based on localized measurements (generally a 2nd- to 5th-order polynomial). However, this method has not received widespread use, in part because humidity is a local measurement while precipitable water is a total-column measurement.

References

External links
Current global map of precipitable water
Remote Sensing of Water Vapor From GPS Receivers

Water Atmospheric thermodynamics
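The radiosonde-integration method above amounts to evaluating PW = (1/(ρw·g)) ∫ q dp over the depth of the atmosphere, where q is the specific humidity. The sounding in the sketch below is invented for illustration; real use would first convert radiosonde relative humidity, pressure, and temperature profiles into specific humidity.

# Precipitable water by vertical integration of a (made-up) sounding:
#   PW = (1 / (rho_w * g)) * integral of q dp
import numpy as np

g = 9.81        # gravitational acceleration, m s^-2
rho_w = 1000.0  # density of liquid water, kg m^-3

p_hpa = np.array([1000.0, 925.0, 850.0, 700.0, 500.0, 300.0])  # pressure levels, hPa
q = np.array([14e-3, 11e-3, 8e-3, 4e-3, 1.5e-3, 0.2e-3])       # specific humidity, kg/kg

p_pa = p_hpa * 100.0
# Integrate from the top of the column down to the surface so that dp > 0.
pw_metres = np.trapz(q[::-1], p_pa[::-1]) / (rho_w * g)
print(f"precipitable water: {pw_metres * 1000:.1f} mm")  # ~33 mm for this sounding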
Precipitable water
[ "Environmental_science" ]
287
[ "Water", "Hydrology" ]
2,914,802
https://en.wikipedia.org/wiki/Majorana%20equation
In physics, the Majorana equation is a relativistic wave equation. It is named after the Italian physicist Ettore Majorana, who proposed it in 1937 as a means of describing fermions that are their own antiparticle. Particles corresponding to this equation are termed Majorana particles, although that term now has a more expansive meaning, referring to any (possibly non-relativistic) fermionic particle that is its own anti-particle (and is therefore electrically neutral). There have been proposals that massive neutrinos are described by Majorana particles; there are various extensions to the Standard Model that enable this. The article on Majorana particles presents the status of the experimental searches, including details about neutrinos. This article focuses primarily on the mathematical development of the theory, with attention to its discrete and continuous symmetries. The discrete symmetries are charge conjugation, parity transformation and time reversal; the continuous symmetry is Lorentz invariance. Charge conjugation plays an outsize role, as it is the key symmetry that allows the Majorana particles to be described as electrically neutral. A particularly remarkable aspect is that electrical neutrality allows several global phases to be freely chosen, one each for the left and right chiral fields. This implies that, without explicit constraints on these phases, the Majorana fields are naturally CP-violating. Another aspect of electrical neutrality is that the left and right chiral fields can be given distinct masses. That is, electric charge is a Lorentz invariant, and also a constant of motion; whereas chirality is a Lorentz invariant, but is not a constant of motion for massive fields. Electrically neutral fields are thus less constrained than charged fields. Under charge conjugation, the two free global phases appear in the mass terms (as they are Lorentz invariant), and so the Majorana mass is described by a complex matrix, rather than a single number. In short, the discrete symmetries of the Majorana equation are considerably more complicated than those for the Dirac equation, where the electrical charge symmetry constrains and removes these freedoms.

Definition
The Majorana equation can be written in several distinct forms:
As the Dirac equation written so that the Dirac operator is purely Hermitian, thus giving purely real solutions.
As an operator that relates a four-component spinor to its charge conjugate.
As a 2×2 differential equation acting on a complex two-component spinor, resembling the Weyl equation with a properly Lorentz covariant mass term.
These three forms are equivalent, and can be derived from one another. Each offers slightly different insight into the nature of the equation. The first form emphasises that purely real solutions can be found. The second form clarifies the role of charge conjugation. The third form provides the most direct contact with the representation theory of the Lorentz group.

Purely real four-component form
The conventional starting point is to state that "the Dirac equation can be written in Hermitian form", when the gamma matrices are taken in the Majorana representation. The Dirac equation is then written as $i\frac{\partial\psi}{\partial t} = \left(\hat{\boldsymbol{\alpha}}\cdot\hat{\mathbf{p}} + \beta m\right)\psi$ with $\hat{\mathbf{p}} = -i\boldsymbol{\nabla}$, with $\hat{\boldsymbol{\alpha}}$ being purely real 4×4 symmetric matrices, and $\beta$ being purely imaginary skew-symmetric; as required to ensure that the operator (that part inside the parentheses) is Hermitian. In this case, purely real 4‑spinor solutions to the equation can be found; these are the Majorana spinors.
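The statement that the Majorana representation renders the Dirac operator purely real can be checked numerically. The sketch below uses one common choice of Majorana-representation gamma matrices (conventions differ between textbooks, so this particular set is an assumption) and verifies that all four are purely imaginary and satisfy the Clifford algebra, which together imply that i times each gamma matrix is real, consistent with the existence of real solutions.

# Verify that a standard choice of Majorana-representation gamma matrices
# is purely imaginary and satisfies {gamma^mu, gamma^nu} = 2 eta^{mu nu}.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

gammas = [
    np.block([[Z, s2], [s2, Z]]),              # gamma^0
    np.block([[1j * s3, Z], [Z, 1j * s3]]),    # gamma^1
    np.block([[Z, -s2], [s2, Z]]),             # gamma^2
    np.block([[-1j * s1, Z], [Z, -1j * s1]]),  # gamma^3
]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for mu in range(4):
    # Purely imaginary entries, so i*gamma^mu is a purely real matrix.
    assert np.allclose(gammas[mu].real, 0.0)
    for nu in range(4):
        anticomm = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anticomm, 2.0 * eta[mu, nu] * np.eye(4))

print("All gamma^mu purely imaginary; Clifford algebra verified.")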
Charge-conjugate four-component form
The Majorana equation is
$i\,\partial\!\!\!/\,\psi - m\,\psi_c = 0$
with the derivative operator $\partial\!\!\!/ = \gamma^\mu \partial_\mu$ written in Feynman slash notation to include the gamma matrices as well as a summation over the spinor components. The spinor $\psi_c$ is the charge conjugate of $\psi$. By construction, charge conjugates are necessarily given by
$\psi_c = \eta\, C\, \overline{\psi}^{\mathsf T}$
where $(\cdot)^{\mathsf T}$ denotes the transpose, $\eta$ is an arbitrary phase factor conventionally taken as $\eta = 1$, and $C$ is a 4×4 matrix, the charge conjugation matrix. The matrix representation of $C$ depends on the choice of the representation of the gamma matrices. By convention, the conjugate spinor is written as
$\overline{\psi} = \psi^\dagger \gamma^0 .$
A number of algebraic identities follow from the charge conjugation matrix $C$. One states that in any representation of the gamma matrices, including the Dirac, Weyl, and Majorana representations, $C\,\gamma_\mu^{\mathsf T}\, C^{-1} = -\gamma_\mu$, and so one may write
$\psi_c = \eta\, C\, \gamma_0^{\mathsf T}\, \psi^*$
where $\psi^*$ is the complex conjugate of $\psi$. The charge conjugation matrix also has the property that
$C^{\mathsf T} = -C$
in all representations (Dirac, chiral, Majorana). From this, and a fair bit of algebra, one may obtain the equivalent equation:
$i\,\partial\!\!\!/\,\psi_c - m\,\psi = 0 .$
A detailed discussion of the physical interpretation of the matrix $C$ as charge conjugation can be found in the article on charge conjugation. In short, it is involved in mapping particles to their antiparticles, which includes, among other things, the reversal of the electric charge. Although $\psi_c$ is defined as "the charge conjugate" of $\psi$, the charge conjugation operator has not one but two eigenvalues. This allows a second spinor, the ELKO spinor, to be defined. This is discussed in greater detail below.

Complex two-component form
The Majorana operator, $D_{\mathrm{L}}$, is defined as
$D_{\mathrm{L}} \equiv i\,\bar\sigma^\mu \partial_\mu + \eta\, m\, \omega\, K$
where $\bar\sigma^\mu = \left(I,\, -\sigma^x,\, -\sigma^y,\, -\sigma^z\right)$ is a vector whose components are the 2×2 identity matrix for $\mu = 0$ and (minus) the Pauli matrices for $\mu \in \{1, 2, 3\}$. The $\eta$ is an arbitrary phase factor, typically taken to be one: $\eta = 1$. The $\omega$ is a 2×2 matrix that can be interpreted as the symplectic form for the symplectic group $\mathrm{Sp}(2, \mathbb{C})$, which is a double covering of the Lorentz group. It is
$\omega = i\,\sigma^y = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},$
which happens to be isomorphic to the imaginary unit (i.e. $\omega^2 = -I$), with the matrix transpose being the analog of complex conjugation. Finally, the $K$ is a short-hand reminder to take the complex conjugate. The Majorana equation for a left-handed complex-valued two-component spinor $\psi_{\mathrm{L}}$ is then
$D_{\mathrm{L}}\, \psi_{\mathrm{L}} = 0$
or, equivalently,
$i\,\bar\sigma^\mu \partial_\mu \psi_{\mathrm{L}} + \eta\, m\, \omega\, \psi_{\mathrm{L}}^* = 0$
with $\psi_{\mathrm{L}}^*$ the complex conjugate of $\psi_{\mathrm{L}}$. The subscript $\mathrm{L}$ is used throughout this section to denote a left-handed chiral spinor; under a parity transformation, this can be taken to a right-handed spinor, and so one also has a right-handed form of the equation. This applies to the four-component equation as well; further details are presented below.

Key ideas
Some of the properties of the Majorana equation, its solution and its Lagrangian formulation are summarized here.
The Majorana equation is similar to the Dirac equation, in the sense that it involves four-component spinors, gamma matrices, and mass terms, but includes the charge conjugate $\psi_c$ of a spinor $\psi$. In contrast, the Weyl equation is for a two-component spinor without mass.
Solutions to the Majorana equation can be interpreted as electrically neutral particles that are their own anti-particle. By convention, the charge conjugation operator takes particles to their anti-particles, and so the Majorana spinor is conventionally defined as the solution where $\psi_c = \psi$. That is, the Majorana spinor is "its own antiparticle". Insofar as charge conjugation takes an electrically charged particle to its anti-particle with opposite charge, one must conclude that the Majorana spinor is electrically neutral.
The Majorana equation is Lorentz covariant, and a variety of Lorentz scalars can be constructed from its spinors. This allows several distinct Lagrangians to be constructed for Majorana fields.
When the Lagrangian is expressed in terms of two-component left and right chiral spinors, it may contain three distinct mass terms: left and right Majorana mass terms, and a Dirac mass term. These manifest physically as two distinct masses; this is the key idea of the seesaw mechanism for describing low-mass neutrinos with a left-handed coupling to the Standard Model, with the right-handed component corresponding to a sterile neutrino at GUT-scale masses.
The discrete symmetries of C, P and T conjugation are intimately controlled by a freely chosen phase factor on the charge conjugation operator. This manifests itself as distinct complex phases on the mass terms. This allows both CP-symmetric and CP-violating Lagrangians to be written.
The Majorana fields are CPT invariant, but the invariance is, in a sense, "freer" than it is for charged particles. This is because charge is necessarily a Lorentz-invariant property, and is thus constrained for charged fields. The neutral Majorana fields are not constrained in this way, and can mix.

Two-component Majorana equation
The Majorana equation can be written both in terms of a real four-component spinor, and as a complex two-component spinor. Both can be constructed from the Weyl equation, with the addition of a properly Lorentz-covariant mass term. This section provides an explicit construction and articulation.

Weyl equation
The Weyl equation describes the time evolution of a massless complex-valued two-component spinor. It is conventionally written as
$\sigma^\mu \partial_\mu \psi = 0 .$
Written out explicitly, it is
$I\,\partial_t \psi + \sigma^x\,\partial_x \psi + \sigma^y\,\partial_y \psi + \sigma^z\,\partial_z \psi = 0 .$
The Pauli four-vector is
$\sigma^\mu = \left(I,\, \sigma^x,\, \sigma^y,\, \sigma^z\right),$
that is, a vector whose components are the 2 × 2 identity matrix for μ = 0 and the Pauli matrices for μ = 1, 2, 3. Under the parity transformation $\mathbf{x} \to -\mathbf{x}$ one obtains a dual equation
$\bar\sigma^\mu \partial_\mu \psi = 0,$
where $\bar\sigma^\mu = \left(I,\, -\sigma^x,\, -\sigma^y,\, -\sigma^z\right)$. These are two distinct forms of the Weyl equation; their solutions are distinct as well. It can be shown that the solutions have left-handed and right-handed helicity, and thus chirality. It is conventional to label these two distinct forms explicitly, thus:
$\sigma^\mu \partial_\mu \psi_{\mathrm{R}} = 0 \qquad \text{and} \qquad \bar\sigma^\mu \partial_\mu \psi_{\mathrm{L}} = 0 .$

Lorentz invariance
The Weyl equation describes a massless particle; the Majorana equation adds a mass term. The mass must be introduced in a Lorentz invariant fashion. This is achieved by observing that the special linear group $\mathrm{SL}(2, \mathbb{C})$ is isomorphic to the symplectic group $\mathrm{Sp}(2, \mathbb{C})$. Both of these groups are double covers of the Lorentz group $\mathrm{SO}(1,3)$. The Lorentz invariance of the derivative term (from the Weyl equation) is conventionally worded in terms of the action of the group on spinors, whereas the Lorentz invariance of the mass term requires invocation of the defining relation for the symplectic group.
The double-covering of the Lorentz group is given by
$\sigma_\mu\, {\Lambda^\mu}_{\nu} = S\, \sigma_\nu\, S^\dagger$
where $\Lambda \in \mathrm{SO}(1,3)$, $S \in \mathrm{SL}(2, \mathbb{C})$, and $S^\dagger$ is the Hermitian transpose. This is used to relate the transformation properties of the differentials under a Lorentz transformation to the transformation properties of the spinors.
The symplectic group $\mathrm{Sp}(2, \mathbb{C})$ is defined as the set of all complex 2×2 matrices $S$ that satisfy
$S^{\mathsf T}\, \omega\, S = \omega$
where $\omega$ is a skew-symmetric matrix.
It is used to define a symplectic bilinear form on $\mathbb{C}^2$. Writing a pair of arbitrary two-vectors as $u$ and $v$, the symplectic product is
$\langle u, v \rangle = u^{\mathsf T}\, \omega\, v = -\langle v, u \rangle,$
where $u^{\mathsf T}$ is the transpose of $u$. This form is invariant under Lorentz transformations, in that
$\langle S u, S v \rangle = u^{\mathsf T} S^{\mathsf T}\, \omega\, S\, v = u^{\mathsf T}\, \omega\, v = \langle u, v \rangle .$
The skew matrix takes the Pauli matrices to minus their transpose:
$\omega\, \sigma^k\, \omega^{-1} = -\left(\sigma^k\right)^{\mathsf T}$
for $k = 1, 2, 3$. The skew matrix can be interpreted as the product of a parity transformation and a transposition acting on two-spinors. However, as will be emphasized in a later section, it can also be interpreted as one of the components of the charge conjugation operator, the other component being complex conjugation. Applying it to the Lorentz transformation yields the two variants that describe the covariance properties of the differentials acting on the left and right spinors, respectively.

Differentials
Under a Lorentz transformation, the right-handed differential term transforms covariantly provided that the right-handed field transforms under one of the two inequivalent spinor representations of $\mathrm{SL}(2, \mathbb{C})$; similarly, the left-handed differential transforms covariantly provided that the left-handed spinor transforms under the other.

Mass term
The complex conjugate of the right-handed spinor field transforms under the conjugate representation. The defining relationship for $\mathrm{Sp}(2, \mathbb{C})$ can be rewritten as
$\omega\, S\, \omega^{-1} = \left(S^{\mathsf T}\right)^{-1} .$
From this, one concludes that the skew-complex field $\omega\, \psi_{\mathrm{R}}^*$ transforms in a way that is fully compatible with the covariance property of the differential. Taking $\eta$ to be an arbitrary complex phase factor, the linear combination
$i\,\sigma^\mu \partial_\mu \psi_{\mathrm{R}} + \eta\, m\, \omega\, \psi_{\mathrm{R}}^*$
transforms in a covariant fashion. Setting this to zero gives the complex two-component Majorana equation for the right-handed field. Similarly, the left-chiral Majorana equation (including an arbitrary phase factor $\eta$) is
$i\,\bar\sigma^\mu \partial_\mu \psi_{\mathrm{L}} + \eta\, m\, \omega\, \psi_{\mathrm{L}}^* = 0 .$
The left and right chiral versions are related by a parity transformation. As shown below, these square to the Klein–Gordon operator only if $|\eta|^2 = 1$. The skew complex conjugate $\omega\, \psi^*$ can be recognized as the charge conjugate form of $\psi$; this is articulated in greater detail below. Thus, the Majorana equation can be read as an equation that connects a spinor to its charge-conjugate form.

Left and right Majorana operators
Define a pair of operators, the Majorana operators,
$D_{\mathrm{L}} = i\,\bar\sigma^\mu \partial_\mu + \eta\, m\, \omega\, K \qquad \text{and} \qquad D_{\mathrm{R}} = i\,\sigma^\mu \partial_\mu + \eta\, m\, \omega\, K,$
where $K$ is a short-hand reminder to take the complex conjugate. Under Lorentz transformations, these transform covariantly, whereas the Weyl spinors transform just as above. Thus, the matched combinations of these are Lorentz covariant, and one may take
$D_{\mathrm{L}}\, \psi_{\mathrm{L}} = 0 \qquad \text{and} \qquad D_{\mathrm{R}}\, \psi_{\mathrm{R}} = 0$
as a pair of complex 2-spinor Majorana equations. The products $D_{\mathrm{L}} D_{\mathrm{R}}$ and $D_{\mathrm{R}} D_{\mathrm{L}}$ are both Lorentz covariant. The product is explicitly
$D_{\mathrm{R}}\, D_{\mathrm{L}} = -\left(\partial_t^2 - \boldsymbol{\nabla}\cdot\boldsymbol{\nabla}\right) - |\eta|^2 m^2 .$
Verifying this requires keeping in mind that $\omega^2 = -I$ and that $\bar\sigma^\mu\, \omega = \omega\, \sigma^{\mu *}$. The RHS reduces to the Klein–Gordon operator provided that $|\eta|^2 = 1$, that is, provided that $\eta$ is a pure phase. These two Majorana operators are thus "square roots" of the Klein–Gordon operator.

Four-component Majorana equation
The real four-component version of the Majorana equation can be constructed from the complex two-component equation as follows. Given the complex field $\psi_{\mathrm{L}}$ satisfying $D_{\mathrm{L}}\, \psi_{\mathrm{L}} = 0$ as above, define a four-component spinor from $\psi_{\mathrm{L}}$ and its skew complex conjugate. Using the algebraic machinery given above, and defining a conjugate operator, it is not hard to show that the pair of two-component equations combine into a single four-component Majorana equation. Writing this out in detail and multiplying on the left by a suitable matrix brings the above into a matrix form wherein the gamma matrices in the chiral representation can be recognized. Applying this to the 4-spinor, one finds that the spinor is an eigenstate of the mass term, and so, for this particular spinor, the four-component Majorana equation reduces to the Dirac equation. The skew matrix can be identified with the charge conjugation operator (in the Weyl basis).
Explicitly, this is Given an arbitrary four-component spinor its charge conjugate is with an ordinary 4×4 matrix, having a form explicitly given in the article on gamma matrices. In conclusion, the 4-component Majorana equation can be written as Charge conjugation and parity The charge conjugation operator appears directly in the 4-component version of the Majorana equation. When the spinor field is a charge conjugate of itself, that is, when then the Majorana equation reduces to the Dirac equation, and any solution can be interpreted as describing an electrically neutral field. However, the charge conjugation operator has not one, but two distinct eigenstates, one of which is the ELKO spinor; it does not solve the Majorana equation, but rather, a sign-flipped version of it. The charge conjugation operator for a four-component spinor is defined as A general discussion of the physical interpretation of this operator in terms of electrical charge is given in the article on charge conjugation. Additional discussions are provided by Bjorken & Drell or Itzykson & Zuber. In more abstract terms, it is the spinorial equivalent of complex conjugation of the coupling of the electromagnetic field. This can be seen as follows. If one has a single, real scalar field, it cannot couple to electromagnetism; however, a pair of real scalar fields, arranged as a complex number, can. For scalar fields, charge conjugation is the same as complex conjugation. The discrete symmetries of the gauge theory follows from the "trivial" observation that is an automorphism of For spinorial fields, the situation is more confusing. Roughly speaking, however, one can say that the Majorana field is electrically neutral, and that taking an appropriate combination of two Majorana fields can be interpreted as a single electrically charged Dirac field. The charge conjugation operator given above corresponds to the automorphism of In the above, is a 4×4 matrix, given in the article on the gamma matrices. Its explicit form is representation-dependent. The operator cannot be written as a 4×4 matrix, as it is taking the complex conjugate of , and complex conjugation cannot be achieved with a complex 4×4 matrix. It can be written as a real 8×8 matrix, presuming one also writes as a purely real 8-component spinor. Letting stand for complex conjugation, so that one can then write, for four-component spinors, It is not hard to show that and that It follows from the first identity that has two eigenvalues, which may be written as The eigenvectors are readily found in the Weyl basis. From the above, in this basis, is explicitly and thus Both eigenvectors are clearly solutions to the Majorana equation. However, only the positive eigenvector is a solution to the Dirac equation: The negative eigenvector "doesn't work", it has the incorrect sign on the Dirac mass term. It still solves the Klein–Gordon equation, however. The negative eigenvector is termed the ELKO spinor. Parity Under parity, the left-handed spinors transform to right-handed spinors. The two eigenvectors of the charge conjugation operator, again in the Weyl basis, are As before, both solve the four-component Majorana equation, but only one also solves the Dirac equation. This can be shown by constructing the parity-dual four-component equation. 
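For reference, the charge conjugation operator and the two eigenvectors discussed above can be written, in one common chiral-basis phase convention (other phase choices are equally valid), as:

```latex
% Charge conjugation with the phase choice eta_c = -i, acting by
% complex conjugation composed with a fixed matrix:
\begin{equation}
  \mathsf{C}\,\psi \;=\; -\,i\,\gamma^{2}\,\psi^{*} , \qquad \mathsf{C}^{2} = 1 .
\end{equation}
% In the Weyl basis, with omega = i sigma^2 as before, the two
% eigenvectors built from a left-handed two-spinor are
\begin{equation}
  \psi_{\pm} \;=\;
    \begin{pmatrix} \psi_{\mathrm{L}} \\ \pm\,\omega\, \psi_{\mathrm{L}}^{*} \end{pmatrix} ,
  \qquad
  \mathsf{C}\,\psi_{\pm} \;=\; \pm\,\psi_{\pm} ,
\end{equation}
% with psi_+ the self-conjugate (Majorana) combination that also solves
% the Dirac equation, and psi_- the ELKO-type combination that solves
% it only with the sign of the mass term flipped.
```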
This takes the form where Given the two-component spinor define its conjugate as It is not hard to show that and that therefore, if then also and therefore that or equivalently This works, because and so this reduces to the Dirac equation for To conclude, and reiterate, the Majorana equation is It has four inequivalent, linearly independent solutions, Of these, only two are also solutions to the Dirac equation: namely and Solutions Spin eigenstates One convenient starting point for writing the solutions is to work in the rest frame way of the spinors. Writing the quantum Hamiltonian with the conventional sign convention leads to the Majorana equation taking the form In the chiral (Weyl) basis, one has that with the Pauli vector. The sign convention here is consistent with the article gamma matrices. Plugging in the positive charge conjugation eigenstate given above, one obtains an equation for the two-component spinor and likewise These two are in fact the same equation, which can be verified by noting that yields the complex conjugate of the Pauli matrices: The plane wave solutions can be developed for the energy-momentum and are most easily stated in the rest frame. The spin-up rest-frame solution is while the spin-down solution is That these are being correctly interpreted can be seen by re-expressing them in the Dirac basis, as Dirac spinors. In this case, they take the form and These are the rest-frame spinors. They can be seen as a linear combination of both the positive and the negative-energy solutions to the Dirac equation. These are the only two solutions; the Majorana equation has only two linearly independent solutions, unlike the Dirac equation, which has four. The doubling of the degrees of freedom of the Dirac equation can be ascribed to the Dirac spinors carrying charge. Momentum eigenstates In a general momentum frame, the Majorana spinor can be written as Electric charge The appearance of both and in the Majorana equation means that the field  cannot be coupled to a charged electromagnetic field without violating charge conservation, since particles have the opposite charge to their own antiparticles. To satisfy this restriction, must be taken to be electrically neutral. This can be articulated in greater detail. The Dirac equation can be written in a purely real form, when the gamma matrices are taken in the Majorana representation. The Dirac equation can then be written as with being purely real symmetric matrices, and being purely imaginary skew-symmetric. In this case, purely real solutions to the equation can be found; these are the Majorana spinors. Under the action of Lorentz transformations, these transform under the (purely real) spin group This stands in contrast to the Dirac spinors, which are only covariant under the action of the complexified spin group The interpretation is that complexified spin group encodes the electromagnetic potential, the real spin group does not. This can also be stated in a different way: the Dirac equation, and the Dirac spinors contain a sufficient amount of gauge freedom to naturally encode electromagnetic interactions. This can be seen by noting that the electromagnetic potential can very simply be added to the Dirac equation without requiring any additional modifications or extensions to either the equation or the spinor. The location of this extra degree of freedom is pin-pointed by the charge conjugation operator, and the imposition of the Majorana constraint removes this extra degree of freedom. 
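One explicit choice for the Majorana representation mentioned above, in which all four gamma matrices are purely imaginary so that the Dirac operator is real, is the following (equivalent choices exist):

```latex
% A standard Majorana representation, built from the Pauli matrices:
\begin{align}
  \gamma^{0} &= \begin{pmatrix} 0 & \sigma^{2} \\ \sigma^{2} & 0 \end{pmatrix} , &
  \gamma^{1} &= \begin{pmatrix} i\sigma^{3} & 0 \\ 0 & i\sigma^{3} \end{pmatrix} , \\
  \gamma^{2} &= \begin{pmatrix} 0 & -\sigma^{2} \\ \sigma^{2} & 0 \end{pmatrix} , &
  \gamma^{3} &= \begin{pmatrix} -i\sigma^{1} & 0 \\ 0 & -i\sigma^{1} \end{pmatrix} .
\end{align}
% Every entry is purely imaginary, so i gamma^mu d_mu - m has purely
% real coefficients and admits purely real (Majorana) solutions.
```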
Once removed, there cannot be any coupling to the electromagnetic potential, ergo, the Majorana spinor is necessarily electrically neutral. An electromagnetic coupling can only be obtained by adding back in a complex-number-valued phase factor, and coupling this phase factor to the electromagnetic potential. The above can be further sharpened by examining the situation in spatial dimensions. In this case, the complexified spin group has a double covering by with the circle. The implication is that encodes the generalized Lorentz transformations (of course), while the circle can be identified with the action of the gauge group on electric charges. That is, the gauge-group action of the complexified spin group on a Dirac spinor can be split into a purely-real Lorentzian part, and an electromagnetic part. This can be further elaborated on non-flat (non-Minkowski-flat) spin manifolds. In this case, the Dirac operator acts on the spinor bundle. Decomposed into distinct terms, it includes the usual covariant derivative The field can be seen to arise directly from the curvature of the complexified part of the spin bundle, in that the gauge transformations couple to the complexified part, and not the real-spinor part. That the field corresponds to the electromagnetic potential can be seen by noting that (for example) the square of the Dirac operator is the Laplacian plus the scalar curvature (of the underlying manifold that the spinor field sits on) plus the (electromagnetic) field strength For the Majorana case, one has only the Lorentz transformations acting on the Majorana spinor; the complexification plays no role. A detailed treatment of these topics can be found in Jost while the case is articulated in Bleeker. Unfortunately, neither text explicitly articulates the Majorana spinor in direct form. Field quanta The quanta of the Majorana equation allow for two classes of particles, a neutral particle and its neutral antiparticle. The frequently applied supplemental condition corresponds to the Majorana spinor. Majorana particle Particles corresponding to Majorana spinors are known as Majorana particles, due to the above self-conjugacy constraint. All the fermions included in the Standard Model have been excluded as Majorana fermions (since they have non-zero electric charge they cannot be antiparticles of themselves) with the exception of the neutrino (which is neutral). Theoretically, the neutrino is a possible exception to this pattern. If so, neutrinoless double-beta decay, as well as a range of lepton-number violating meson and charged lepton decays, are possible. A number of experiments probing whether the neutrino is a Majorana particle are currently underway. Notes References Additional reading "Majorana Legacy in Contemporary Physics", Electronic Journal of Theoretical Physics (EJTP) Volume 3, Issue 10 (April 2006) Special issue for the Centenary of Ettore Majorana (1906-1938?). ISSN 1729-5254 Frank Wilczek, (2009) "Majorana returns", Nature Physics Vol. 5 pages 614–618. Eponymous equations of physics Quantum field theory Spinors
Majorana equation
[ "Physics" ]
4,978
[ "Quantum field theory", "Quantum mechanics", "Eponymous equations of physics", "Equations of physics" ]
2,915,617
https://en.wikipedia.org/wiki/Pure%20fusion%20weapon
A pure fusion weapon is a hypothetical hydrogen bomb design that does not need a fission "primary" explosive to ignite the fusion of deuterium and tritium, two heavy isotopes of hydrogen used in fission-fusion thermonuclear weapons. Such a weapon would require no fissile material and would therefore be much easier to develop in secret than existing weapons. Separating weapons-grade uranium (U-235) or breeding plutonium (Pu-239) requires a substantial and difficult-to-conceal industrial investment, and blocking the sale and transfer of the needed machinery has been the primary mechanism to control nuclear proliferation to date. Explanation All current thermonuclear weapons use a fission bomb as a first stage to create the high temperatures and pressures necessary to start a fusion reaction between deuterium and tritium in a second stage. For many years, nuclear weapon designers have researched whether it is possible to create high enough temperatures and pressures inside a confined space to ignite a fusion reaction, without using fission. Pure fusion weapons offer the possibility of generating arbitrarily small nuclear yields because no critical mass of fissile fuel need be assembled for detonation, as with a conventional fission primary needed to spark a fusion explosion. There is also the advantage of reduced collateral damage stemming from fallout because these weapons would not create the highly radioactive byproducts made by fission-type weapons. These weapons would be lethal not only because of their explosive force, which could be large compared to bombs based on chemical explosives, but also because of the neutrons they generate. While various neutron source devices have been developed, some of them based on fusion reactions, none of them are able to produce a net energy yield, either in controlled form for energy production or uncontrolled for a weapon. Progress Despite the many millions of dollars spent by the U.S. between 1952 and 1992 to produce a pure fusion weapon, no measurable success was ever achieved. In 1998, the U.S. Department of Energy (DOE) released a restricted data declassification decision stating that even if the DOE made a substantial investment in the past to develop a pure fusion weapon, "the U.S. is not known to have and is not developing a pure fusion weapon and no credible design for a pure fusion weapon resulted from the DOE investment". The power densities needed to ignite a fusion reaction still seem attainable only with the aid of a fission explosion, or with large apparatus such as powerful lasers like those at the National Ignition Facility, the Sandia Z-pinch machine, or various magnetic tokamaks. Regardless of any claimed advantages of pure fusion weapons, building those weapons does not appear to be feasible using currently available technologies and many have expressed concern that pure fusion weapons research and development would subvert the intent of the Nuclear Non-Proliferation Treaty and the Comprehensive Test Ban Treaty. It has been claimed that it is possible to conceive of a crude, deliverable, pure fusion weapon, using only present-day, unclassified technology. The weapon design weighs approximately 3 tonnes, and might have a total yield of approximately 3 tonnes of TNT. The proposed design uses a large explosively pumped flux compression generator to produce the high power density required to ignite the fusion fuel. 
From the point of view of explosive damage, such a weapon would have no clear advantages over a conventional explosive, but the massive neutron flux could deliver a lethal dose of radiation to humans within a 500-meter radius (most of those fatalities would occur over a period of months, rather than immediately). Alternative fusion trigger Some researchers have examined the use of antimatter as an alternative fusion trigger, mainly in the context of antimatter-catalyzed nuclear pulse propulsion but also nuclear weapons. Such a system, in a weapons context, would have many of the desired properties of a pure fusion weapon. The technical barriers to producing and containing the required quantities of antimatter appear formidable, well beyond present capabilities. Induced gamma emission is another approach that is currently being researched. Very high energy-density chemicals such as ballotechnics and others have also been suggested as a means of triggering a pure fusion weapon. Nuclear isomers have also been investigated for use in pure fusion weaponry. Hafnium and tantalum isomers can be induced to emit very strong gamma radiation. Gamma emission from these isomers may have enough energy to start a thermonuclear reaction, without requiring any fissile material. References References to pure fusion weapon are in section V. C. 1. g. External links "Opening Pandora's nuclear war chest", article on "fourth generation" weapons Nuclear fusion Nuclear weapon design Technology forecasting Weapon development
Pure fusion weapon
[ "Physics", "Chemistry" ]
958
[ "Nuclear fusion", "Nuclear physics" ]
2,915,834
https://en.wikipedia.org/wiki/Error%20threshold%20%28evolution%29
In evolutionary biology and population genetics, the error threshold (or critical mutation rate) is a limit on the number of base pairs a self-replicating molecule may have before mutation will destroy the information in subsequent generations of the molecule. The error threshold is crucial to understanding "Eigen's paradox". The error threshold is a concept in the origins of life (abiogenesis), in particular of very early life, before the advent of DNA. It is postulated that the first self-replicating molecules might have been small ribozyme-like RNA molecules. These molecules consist of strings of base pairs or "digits", and their order is a code that directs how the molecule interacts with its environment. All replication is subject to mutation error. During the replication process, each digit has a certain probability of being replaced by some other digit, which changes the way the molecule interacts with its environment, and may increase or decrease its fitness, or ability to reproduce, in that environment. Fitness landscape It was noted by Manfred Eigen in his 1971 paper (Eigen 1971) that this mutation process places a limit on the number of digits a molecule may have. If a molecule exceeds this critical size, the effect of the mutations becomes overwhelming and a runaway mutation process will destroy the information in subsequent generations of the molecule. The error threshold is also controlled by the "fitness landscape" for the molecules. The fitness landscape is characterized by the two concepts of height (=fitness) and distance (=number of mutations). Similar molecules are "close" to each other, and molecules that are fitter than others and more likely to reproduce, are "higher" in the landscape. If a particular sequence and its neighbors have a high fitness, they will form a quasispecies and will be able to support longer sequence lengths than a fit sequence with few fit neighbors, or a less fit neighborhood of sequences. Also, it was noted by Wilke (Wilke 2005) that the error threshold concept does not apply in portions of the landscape where there are lethal mutations, in which the induced mutation yields zero fitness and prohibits the molecule from reproducing. Eigen's paradox Eigen's paradox is one of the most intractable puzzles in the study of the origins of life. It is thought that the error threshold concept described above limits the size of self replicating molecules to perhaps a few hundred digits, yet almost all life on earth requires much longer molecules to encode their genetic information. This problem is handled in living cells by enzymes that repair mutations, allowing the encoding molecules to reach sizes on the order of millions of base pairs. These large molecules must, of course, encode the very enzymes that repair them, and herein lies Eigen's paradox, first put forth by Manfred Eigen in his 1971 paper (Eigen 1971). Simply stated, Eigen's paradox amounts to the following: Without error correction enzymes, the maximum size of a replicating molecule is about 100 base pairs. For a replicating molecule to encode error correction enzymes, it must be substantially larger than 100 bases. This is a chicken-or-egg kind of a paradox, with an even more difficult solution. Which came first, the large genome or the error correction enzymes? A number of solutions to this paradox have been proposed: Stochastic corrector model (Szathmáry & Maynard Smith, 1995). 
In this proposed solution, a number of primitive molecules of, say, two different types are associated with each other in some way, perhaps by a capsule or "cell wall". If their reproductive success is enhanced by having, say, equal numbers in each cell, and reproduction occurs by division in which each of the various types of molecules is randomly distributed among the "children", the process of selection will promote such equal representation in the cells, even though one of the molecules may have a selective advantage over the other. Relaxed error threshold (Kun et al., 2005) - Studies of actual ribozymes indicate that the mutation rate can be substantially less than first expected - on the order of 0.001 per base pair per replication. This may allow sequence lengths of the order of 7-8 thousand base pairs, sufficient to incorporate rudimentary error correction enzymes. A simple mathematical model Consider a 3-digit molecule [A,B,C] where A, B, and C can take on the values 0 and 1. There are eight such sequences ([000], [001], [010], [011], [100], [101], [110], and [111]). Let's say that the [000] molecule is the most fit; upon each replication it produces an average of a copies, where a > 1. This molecule is called the "master sequence". The other seven sequences are less fit; they each produce only 1 copy per replication. The replication of each of the three digits is done with a mutation rate of μ. In other words, at every replication of a digit of a sequence, there is a probability μ that it will be erroneous; 0 will be replaced by 1 or vice versa. Let's ignore double mutations and the death of molecules (the population will grow infinitely), and divide the eight molecules into four classes depending on their Hamming distance from the master sequence:

Hamming distance 0: [000]
Hamming distance 1: [001], [010], [100]
Hamming distance 2: [110], [101], [011]
Hamming distance 3: [111]

Note that the number of sequences at distance d is just the binomial coefficient (L choose d) for L=3, and that each sequence can be visualized as a vertex of an L=3 dimensional cube, with each edge of the cube specifying a mutation path along which the Hamming distance changes by ±1. It can be seen that, for example, one third of the mutations of the [001] molecules will produce [000] molecules, while the other two thirds will produce the class 2 molecules [011] and [101]. We can now write the expression for the child populations n′i of class i in terms of the parent populations nj as n′i = Σj wij nj, where the matrix w, which incorporates natural selection and mutation according to the quasispecies model, is given by

w = | aQ        (1−Q)/3   0          0   |
    | a(1−Q)    Q         2(1−Q)/3   0   |
    | 0         2(1−Q)/3  Q          1−Q |
    | 0         0         (1−Q)/3    Q   |

where Q = (1−μ)³ is the probability that an entire molecule will be replicated successfully. The eigenvectors of the w matrix will yield the equilibrium population numbers for each class. For example, if the mutation rate μ is zero, we will have Q=1, and the equilibrium concentrations will be [1, 0, 0, 0]: the master sequence, being the fittest, will be the only one to survive. If we have a replication fidelity of Q=0.95 and a genetic advantage of a=1.05, the master sequence is no longer as dominant; nevertheless, sequences with low Hamming distance are in the majority. If the replication fidelity Q approaches 0, the equilibrium concentrations approach roughly [1, 3, 3, 1]/8, a population with equal numbers of each of the eight sequences.
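The equilibrium concentrations quoted above can be checked numerically. The following is a minimal sketch of the three-digit model, assuming the one-third/two-thirds class-transition bookkeeping described in the text; the function names are illustrative only.

```python
import numpy as np

def quasispecies_matrix(a, Q):
    """Class-transition matrix w for the 3-digit model.

    Entry w[i, j] is the average number of class-i children produced per
    class-j parent per generation, ignoring double mutations as in the text.
    """
    e = 1.0 - Q  # probability that a copy carries a (single) digit error
    return np.array([
        [a * Q,  e / 3,      0.0,       0.0],  # into class 0 (master)
        [a * e,  Q,          2 * e / 3, 0.0],  # into class 1
        [0.0,    2 * e / 3,  Q,         e  ],  # into class 2
        [0.0,    0.0,        e / 3,     Q  ],  # into class 3
    ])

def equilibrium(a, Q):
    """Equilibrium class concentrations: the Perron eigenvector of w,
    normalized to sum to 1."""
    vals, vecs = np.linalg.eig(quasispecies_matrix(a, Q))
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

for Q in (1.0, 0.95, 0.5):
    print(f"Q = {Q:4.2f}:", np.round(equilibrium(a=1.05, Q=Q), 3))
```

At Q = 1 this reproduces [1, 0, 0, 0]; as Q falls, the distribution spreads toward the flat limit [1, 3, 3, 1]/8 described above.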
If we now go to the case where the number of base pairs is large, say L=100, we obtain behavior that resembles a phase transition. Plotting the equilibrium concentrations divided by the binomial coefficient (L choose d) against the Hamming distance d (this division gives the population of an individual sequence at that distance, and yields a flat line for an equal distribution), with the selective advantage of the master sequence set at a=1.05 and one curve for each total mutation rate 1−Q, it is seen that for low values of the total mutation rate the population consists of a quasispecies gathered in the neighborhood of the master sequence. Above a total mutation rate of about 1−Q = 0.05, the distribution quickly spreads out to populate all sequences equally. A plot of the fractional population of the master sequence as a function of the total mutation rate shows the same behavior: below a critical mutation rate of about 1−Q = 0.05, the master sequence contains most of the population, while above this rate it contains only a vanishingly small fraction of the total population. There is a sharp transition at a value of 1−Q just a bit larger than 0.05: for mutation rates above this value the population of the master sequence drops to practically zero, while below it the master sequence dominates. In the limit as L approaches infinity, the system has a true phase transition at a critical value of Q, namely Qc = 1/a. One could think of the overall mutation rate (1−Q) as a sort of "temperature", which "melts" the fidelity of the molecular sequences above the critical "temperature" of 1−Qc = 1−1/a. For faithful replication to occur, the information must be "frozen" into the genome. See also Error catastrophe Extinction vortex Genetic entropy Genetic erosion Muller's ratchet References Evolutionary biology Population genetics Microbial population biology
Error threshold (evolution)
[ "Biology" ]
1,933
[ "Evolutionary biology" ]
33,174,865
https://en.wikipedia.org/wiki/Decoupling%20Natural%20Resource%20Use%20and%20Environmental%20Impacts%20from%20Economic%20Growth%20report
The report Decoupling Natural Resource Use and Environmental Impacts from Economic Growth is one of a series of reports researched and published by the International Resource Panel (IRP) of the United Nations Environment Programme. The IRP provides independent scientific assessments and expert advice on a variety of areas, including: The volume of selected raw material reserves and how efficiently these resources are being used The lifecycle-long environmental impacts of products and services created and consumed around the globe Options to meet human and economic needs with fewer or cleaner resources. About the report The concept of decoupling is not about stopping economic growth, but rather doing more with less. In the report's preface, the panel explained that the "conceptual framework for decoupling and understanding of the instrumentalities for achieving it is still in an infant stage" and that this "first report is simply an attempt to scope the challenges." The report considered the amount of resources currently being consumed by humanity and analysed how that would likely increase with population growth and future economic development. Its scenarios showed that by 2050 humans could use triple the amount of minerals, ores, fossil fuels and biomass annually – 140 billion tonnes per year – unless the rate of resource consumption could be decoupled from that of economic growth. Developed country citizens currently consume as much as 25 tonnes of those four key resources each year, while the average person in India consumes four tonnes annually. Another billion middle-class are set to emerge as developing countries rapidly become industrialised. There is evidence that decoupling is already underway; world gross domestic product grew by a factor of 23 in the 20th century, while resource use rose by a factor of eight. However, this will not be enough to avoid meeting resource scarcity and severe environmental limits. Resource use may ultimately need to fall to between five and six tonnes per person annually. Recycling, re-use and greater efficiency can all help achieve decoupling. It showed that decoupling might be a good strategy for economic growth in developing countries to avoid becoming resource-intensive economies in the future. See also Ecological modernization References External links www.resourcepanel.org www.unep.org United Nations Environment Programme Human impact on the environment Environmental mitigation Environmental impact assessment
Decoupling Natural Resource Use and Environmental Impacts from Economic Growth report
[ "Chemistry", "Engineering" ]
454
[ "Environmental mitigation", "Environmental engineering" ]
33,175,921
https://en.wikipedia.org/wiki/Spherically%20complete%20field
In mathematics, a field K with an absolute value is called spherically complete if the intersection of every decreasing sequence of balls (in the sense of the metric induced by the absolute value) is nonempty: if B1 ⊇ B2 ⊇ B3 ⊇ ⋯, then ⋂n Bn ≠ ∅. The definition can be adapted also to a field K with a valuation v taking values in an arbitrary ordered abelian group: (K,v) is spherically complete if every collection of balls that is totally ordered by inclusion has a nonempty intersection. Spherically complete fields are important in nonarchimedean functional analysis, since many results analogous to theorems of classical functional analysis require the base field to be spherically complete. Examples Any locally compact field is spherically complete: in such a field closed balls are compact, so a decreasing sequence of nonempty balls is a nested family of compact sets and has nonempty intersection by the finite intersection property. This includes, in particular, the fields Qp of p-adic numbers, and any of their finite extensions. Every spherically complete field is complete. On the other hand, Cp, the completion of the algebraic closure of Qp, is not spherically complete. Any field of Hahn series is spherically complete. References Algebra Functional analysis Field (mathematics)
Spherically complete field
[ "Mathematics" ]
214
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations", "Algebra" ]
33,177,820
https://en.wikipedia.org/wiki/Mesoporous%20organosilica
Mesoporous organosilica (periodic mesoporous organosilicas, PMO) are a type of silica containing organic groups that give rise to mesoporosity. They exhibit pore sizes ranging from 2 nm to 50 nm, depending on the organic substituents. In contrast, zeolites exhibit pore sizes of less than a nanometer. PMOs have potential applications as catalysts, adsorbents, trapping agents, drug delivery agents, stationary phases in chromatography and chemical sensors. History The breakthrough report in this area described the use of surfactants to produce periodic mesoporous silicas (PMS) in 1992 with pores larger than those of zeolites. The early mesoporous organosilicas had organic groups attached terminally to the silica surface. They were prepared either by grafting of organic groups onto the channel walls or by template-directed co-condensation; for example, the channels of PMSs were modified with alkanethiol groups that could sequester heavy metals. However, there were some major limitations, such as inhomogeneity of the pores compared to PMSs and limited organic content (around 25% with respect to the silicon wall sites). In 1999, reports described mesoporous organosilicas with organic groups located within the pore channel walls as "bridges" between Si centers. Since these materials had both organic and inorganic groups as integral parts of the porous framework, they were considered composites of organic and inorganic material and designated as periodic mesoporous organosilicas (PMOs). This family of porous materials had a high degree of order and uniformity of pores compared to those with terminal organic groups. Structure of PMOs The framework of PMOs consists of inorganic components (polysilsesquioxanes) uniformly bridged by organic linkers. Most of the bridged polysilsesquioxanes can be generically represented by the formula O1.5Si-R-SiO1.5, where R represents the organic bridging group. Each individual organic group is covalently bonded to two or more silicon atoms in the framework. The pores in the material are periodically ordered, with diameters in the range 2-30 nm. Depending on the synthetic conditions used to make mesoporous organosilicas, the mesoscale structure can be either amorphous or crystalline. Most of the mesoporous organosilicas that have been synthesized are amorphous. Although X-ray diffraction of these materials indicates periodicity in the structure, sharp peaks at the medium scattering angles representative of crystalline materials are usually absent, except for (00l) reflections. However, a few crystalline mesoporous organosilicas have been reported. Synthesis The primary methods used to make mesoporous organosilicas are evaporation-induced self-assembly, surfactant-mediated synthesis, post-synthetic grafting, and co-condensation. Organosilicas with amorphous structures are typically made by functionalizing organic groups rather than directly integrating the functional groups in the framework, which produces a periodic structure. Furthermore, basic hydrolytic conditions typically produce a periodic structure because of hydrophobic and hydrophilic interactions between hydrolyzed precursors that then self-assemble. Evaporation-induced self-assembly usually causes random alignment of the material pores. This method of synthesis uses the difference in vapor pressure of solvents to vary the rate of evaporation and therefore the assembly of the organosilica framework.
Surfactant-mediated synthesis has been widely used for the production of mesoporous materials in general, and PMOs specifically. It involves the addition of a surfactant or copolymer to a specific molecular precursor. The surfactant directs the structure of the material by interacting with the precursor in a way that depends on the properties of the precursor. After the bulk structure is assembled, the surfactant is removed, leaving pores, or channels, embedded in the material framework. The surfactant template can be removed by solvent extraction or ion-exchange mechanisms. An aging process is usually performed at high temperature before removal of the surfactant. During surfactant-mediated synthesis, hydrolysis and polycondensation, or co-condensation, are used to fuse precursor molecules into a framework. Acidic or basic conditions are used for the hydrolysis depending on the precursor being introduced. The other two synthesis methods used for these materials are post-synthetic grafting and co-condensation. In the case of post-synthetic grafting, organic functional groups, typically organosilanes or alkoxyorganosilanes, are reacted with the assembled silicon mesostructure with or without the surfactant template present. If the template is still present, the grafting process will involve simultaneously removing the template and attaching the functional group. However, the pores of the material can be blocked during this process, so a one-pot synthesis using the necessary components is more advantageous. This one-pot synthesis is known as co-condensation, in which the desired organosilyl functional groups are combined with the surfactant or other structure-directing agent. In this method, the material becomes structured and functionalized. Co-condensation gives rise to periodicity within the mesostructure, and it accommodates larger organic groups as well as larger pore sizes because of the one-step assembly process. Most PMOs have been made using the co-condensation method. The most recent method developed builds on co-condensation by combining multiple reactive organic precursors to form a new functional group, which is still combined with the framework molecule and copolymer. Mesoporous organosilicate materials have been made using bridged organic precursors, in which an organic fragment is positioned between silicon-containing fragments. Single-precursor syntheses are typically done with bridged organosilane groups. When only one bridged organic precursor is used, there is a homogeneous distribution of the molecule in the framework. This phenomenon is referred to as molecular-scale periodicity. Chiral precursors can also be introduced into the material framework, and using acidic conditions in the hydrolysis and condensation process proves better for chiral precursors because no racemization occurs. Co-condensation of multiple organosilane precursors can create multi-functional organosilica materials. Tetraethoxysilane (TEOS) is a common silicon precursor used in co-condensation reactions. Applications Highly porous compounds are potentially useful for catalysis, adsorption, and separation. These have been the roles of zeolites, but their small pore size limits them to working with small molecules. The larger pore size (2-50 nm) of mesoporous materials gives them wider application – larger molecules can be admitted, and guest molecules can migrate faster. Catalysis To effect catalytic transformations using mesoporous organosilicas, it is necessary to functionalize them.
The two major methods are to add a group or heteroatom, such as a metal center, to the organic framework, and to anchor an organic or organometallic group to the pore surface. Anchoring a homogeneous catalyst onto a mesoporous organosilica framework has two primary disadvantages: the bulky group in the pore can block travel of guest molecules through it, and preparation of candidate molecules for anchoring to the framework is difficult. However, anchoring can create heterogeneous catalysts for a wide variety of chemical transformations: acid catalysis, base catalysis, coupling and condensation reaction catalysis, and even asymmetric catalysis. Anchored functional groups often have higher catalytic activity than does the bulk material, as one study showed for Nafion, or even than groups incorporated into the organosilica framework, as with sulfonic acid. Other potential uses Mesoporous organosilicas can be functionalized to give adsorbents for the removal of specific contaminants from air and water. Adsorbents targeting toxic heavy metals, radioactive materials, and various organic pollutants have been synthesized. Mesoporous organosilicas have been functionalized with fluorescent probes. The advantage of this material as a sensor is its high surface area combined with the high specificity achievable by careful functionalization. Mesoporous organosilicas have been used to sense a wide variety of analytes: metals, industrial pollutants, small organic molecules, and large biological molecules. Mesoporous organosilicas have been tested as potential materials for separation using HPLC. Froba et al. have shown that better separation can be achieved in an HPLC system by using benzene PMO microspheres as stationary phases. The theory was that π-π interaction between the aromatic analytes and the phenylene bridges of the PMO framework leads to stronger retention and hence better separation. Controlled drug release is another area in which PMOs have shown promise. The hydrophobic nature of the PMO walls allows for better control of drug release. In this respect, it is not just the mesoporosity of the PMOs that makes them advantageous; the tunability of the organic groups also plays an important role. Future directions It has been proposed that the periodicity of PMOs may produce anisotropic mechanical, electrical and optical responses, in the same manner that periodicity magnifies anisotropy in the unit cell of conventional crystals. Also, studies showing that dendrimers, polyhedral oligomeric silsesquioxanes, and carbon nanomaterials like C60 can be incorporated into the pore walls of PMOs offer new directions for the possible applications of these materials. It has been shown that PMOs are more suitable for the construction of organic donor–acceptor systems for photocatalysis than periodic mesoporous silica, because organic donor or acceptor groups within the framework provide larger empty spaces for mass transfer in photocatalysis than in mesoporous silicas. Recent investigations of charge transfer systems based on PMOs suggest possible applications in areas such as heterojunction solar cells, photodetectors and light emitting diodes. More applications may emerge by combining these materials with biological molecules such as lipids and proteins. PMOs with unconventional structures and properties show high potential for future developments.
See also Mesoporous materials Zeolites Mesoporous silica Chirality Metal-organic framework Asymmetric synthesis References External links http://sciencewatch.com/ana/st/mes-mat/ Porous media Silicon dioxide Organosilica
Mesoporous organosilica
[ "Materials_science", "Engineering" ]
2,230
[ "Mesoporous material", "Porous media", "Materials science" ]
33,177,993
https://en.wikipedia.org/wiki/Nanoscale%20plasmonic%20motor
A nanoscale plasmonic motor (sometimes called a "light mill") is a type of nanomotor, converting light energy to rotational motion at nanoscale. It is constructed from pieces of gold sheet in a gammadion shape, embedded within layers of silica. When irradiated with light from a laser, the gold pieces rotate. The functioning is explained by the quantum concept of the plasmon. This type of nanomotor is much smaller than other types, and its operation can be controlled by varying the frequency of the incident light. A working demonstration model has been produced by researchers with the Lawrence Berkeley National Laboratory and the University of California, Berkeley. Likely further developments include improving strength and flexibility, and identifying lower-cost materials. Applications envisaged include unwinding the DNA of living cells, and efficiently making use of solar energy. Introduction The increased demands of microtechnology and nanotechnology have generated great interest in, and opportunities for, products based on micro- (MEMS) and nano- (NEMS) mechanical systems. One feature of this technology is its ability to imitate natural phenomena. For example, biomedical engineering has succeeded in replacing or augmenting the function of damaged or diseased organs by designing artificial ones using a nanoscale approach. The science behind nanotechnology helps in designing devices used for transplantation in medicine, suggesting that one should understand how nanoscale devices work by studying living cells and their working principles; this can in turn inspire the design of powerful devices. The mechanism by which microorganisms regenerate their own energy has drawn attention to how energy can be generated from nanomaterials. As demonstrated in the work of various researchers, nanotechnology has a great ability to power and improve several natural biological devices by replacing those entities and mimicking natural processes within the living being. The primary aim of such an approach is to provide an alternative source with greater capability under a controlled environment. One breakthrough discovery among these is the nanomotor, a tiny device which converts various forms of energy into motion using approaches observed in nature. Discoveries in this field exploit the wave and particle properties of light together to make the nanomotor work, leading to the so-called plasmonic nanomotor, which uses the properties of the plasmon. Researchers with the U.S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory and the University of California (UC) Berkeley have created the first nano-sized light mill motor whose rotational speed and direction can be controlled by tuning the frequency of the incident light waves. Background Nanomotors are broadly classified into biological, hybrid and non-biological ones. Biological nanomotors are typically microscopic engines created by nature, like the bacterial flagellum, which is set in motion by ATP synthase produced within the cell. This motor allows the bacterium to move independently. The man-made counterpart is called a non-biological nanomotor and mimics the function of a natural or biological nanomotor. However, these man-made nanodevices are less efficient than their biological counterparts.
They require certain functionalization to accelerate movement or to improve the functions of the artificial nanomotor. For instance, incorporation of a carbon nanotube into the platinum component of an asymmetric metal nanowire leads to dramatically accelerated movement in hydrogen peroxide solution. The hybrid nanomotor uses the chemical principles regularly observed in biological nanomotors together with other principles, such as magnetic interactions, to perform its functions. The motion of a nanomotor can result from optical, electrical, magnetic or chemical interactions; these principles are applied according to the scale of the materials involved. One breakthrough report on nanomotors demonstrated the possibility of using energy from the quantum behavior of photons to induce motion in devices: the authors were able to induce and control the rotation, velocity and direction of a nanosized gold motor within a silica microdisk. The report pointed out that velocity, direction and rotation were strongly dependent on the nature (wavelength) of the light impinging upon the motor. Working principle Photons exhibit linear momentum as well as angular momentum. These properties contribute to different phenomena, such as the induction of mechanical torque, optical trapping and cooling, at both the macroscale and the nanoscale. A plasmon is the resonant mode that involves the interaction between free charges and light. In a metallic nanostructure, when the applied electric field is resonant with its plasmons, the interaction between light and matter can be greatly enhanced. Free electrons in metals can be driven by the interaction between the plasmon waves of the metal and the electric field generated by the incident light. This phenomenon also modifies the light by influencing its electric and magnetic fields. The whole process induces an optical torque which can set the metallic nanostructures in motion. Experimental configuration Based on the plasmonic concept, Liu and coworkers demonstrated the plasmonic motor at the nanoscale. Gammadion-shaped nanostructures made of gold (size ~190 × 190 nm) were symmetrically sandwiched between two silicon dioxide layers. The whole system was fabricated using standard electron beam lithography. When the system is illuminated with linearly polarized light, it produces a torque which drives these tiny nanostructures, called "plasmonic nanomotors". The imposed torque results solely from the gammadion structure's symmetry and its interaction with the incident light. These nanomotors change their direction of rotation (clockwise or anticlockwise) according to the wavelength (longer or shorter) of the incident laser beam. Applications Because of its size and the energy that drives it, the nanoscale plasmonic motor can provide rotational force at the nanoscale, which could be widely used in energy conversion and biology. In biology The mechanical properties of DNA bear on the structural dynamics of cellular processes such as replication and transcription. However, the effect of torque should be considered when measuring DNA mechanics. Under low tension, DNA behaves like an isotropic flexible rod, whereas at higher tensions the behaviour of over- and underwound molecules differs. When the nanoscale plasmonic motor is used, torsional stress can be built up in the molecule by holding the rotor bead stationary using fluid flow. By observing the twist angle of the DNA, its elastic properties can be obtained.
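The magnitude of the optical torques involved can be bounded by angular momentum conservation alone. The estimate below is a generic order-of-magnitude illustration under assumed beam parameters, not a value reported for this particular device:

```latex
% A beam of power P at optical angular frequency omega_opt delivers
% photons at rate P / (hbar omega_opt), each carrying spin angular
% momentum of order hbar, so the torque transferred to an absorbing
% structure is of order
\begin{equation}
  \tau \;\sim\; \frac{P}{\hbar\,\omega_{\mathrm{opt}}}\cdot \hbar
      \;=\; \frac{P}{\omega_{\mathrm{opt}}} .
\end{equation}
% For an assumed P = 1 mW of near-infrared light
% (omega_opt ~ 2 x 10^15 rad/s) this gives tau ~ 5 x 10^-19 N m,
% tiny on everyday scales but large for an object ~100 nm across.
```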
The newly developed light-driven nanoscale motor could address the limitations of earlier light mills. It generates comparable torque while being made of gold and much smaller: at 100 nanometers, one-tenth the size of other motors, it would make possible applications like unwinding DNA in living cells. While the system holds the DNA under controlled winding and unwinding, the small motor could be illuminated at different wavelengths for in vivo manipulation. In energy conversion The microelectromechanical system is different from the traditional electromechanical system. The nanoscale plasmonic motor can harvest light energy by rotating microscopic-scale objects. In addition, a nanoscale plasmonic motor could link transduction mechanisms in series (e.g., convert a thermal signal first into a mechanical signal, then into an optical signal, and finally into an electrical signal). Such motors could therefore be applied to solar light harvesting in nanoscopic systems by designing multiple motors that work at different resonance frequencies and in a single direction. Multiple-motor structures of this kind could acquire torque from a broad wavelength range instead of a single frequency. Limitations In the past, nanoparticles were rotated by exploiting the intrinsic angular momentum of the incident light; this is the first time the rotation of a nanoparticle has been induced without exploiting the intrinsic angular momentum of light. Because the nanoscale plasmonic motor is a new technology, it faces several problems, such as higher development costs, greater complexity and longer development times, and the workhorse methods and materials of nanometre-scale electromechanical system (NEMS) technology are not universally well suited to the nanoscale. The nanoscale plasmonic motor also has limitations in strength and flexibility. Future plans In the future, scientists will pay more attention to the synthesis and efficiency of light mills. Alternative materials will also be developed as substitutes for the expensive materials - such as gold, silicon and carbon nanotubes - used in the experimental stage. The strength and flexibility of nanoscale plasmonic motors will also be improved. See also Plasmonic nanorod motors Nanomotor References Nanoelectronics
Nanoscale plasmonic motor
[ "Materials_science" ]
1,808
[ "Nanotechnology", "Nanoelectronics" ]
33,178,845
https://en.wikipedia.org/wiki/TopFIND
TopFIND, the Termini oriented protein Function Inferred Database, is an integrated knowledgebase focused on protein termini, their formation by proteases and the functional implications thereof. It contains information about the processing and the processing state of proteins, and the functional implications thereof, derived from research literature, contributions by the scientific community and biological databases. Background Among the most fundamental characteristics of a protein are the N- and C-termini defining the start and end of the polypeptide chain. While genetically encoded, protein terminus isoforms are also often generated during translation, following which termini are highly dynamic, being frequently trimmed at their ends by a large array of exopeptidases. Neo-termini can also be generated by endopeptidases after precise and limited proteolysis, termed processing. Necessary for the maturation of many proteins, processing can also occur afterwards, often resulting in dramatic functional consequences. Aberrant proteolysis can cause a wide range of diseases such as arthritis or cancer. Hence, the proteolytic generation of pleiotropic stable forms of proteins, the universal susceptibility of proteins to proteolysis, and its irreversibility distinguish proteolysis from many highly studied posttranslational modifications. Proteases are tightly interconnected in the protease web, and their aberrant activity in disease can lead to diagnostic fragment profiles with characteristic protein termini. Following proteolysis, the newly formed protein termini can be further modified, a process that affects protein function and stability. Knowledgebase content TopFIND is a resource for comprehensive coverage of protein N- and C-termini discovered by all available in silico, in vitro and in vivo methodologies. It makes use of existing knowledge by seamless integration of data from UniProt and MEROPS and provides access to new data from community submission and manual literature curation. It renders modifications of protein termini, such as acetylation and citrullination, easily accessible and searchable, and provides the means to identify and analyse the extent and distribution of terminal modifications across a protein. Since its inception TopFIND has been expanded to cover further species. Data access The data is presented to the user with a strong emphasis on its relation to curated background information and the underlying evidence that led to the observation of a terminus, its modification or proteolytic cleavage. In brief, the protein information, its domain structure, protein termini, terminus modifications and proteolytic processing of and by other proteins are listed. All information is accompanied by metadata such as its original source, method of identification, confidence measure or related publication. A positional cross-correlation evaluation matches termini and cleavage sites with protein features (such as amino acid variants) and domains to highlight potential effects and dependencies. Also, a network view of all proteins showing their functional dependency as protease, substrate or protease inhibitor, tied in with protein interactions, is provided for easy evaluation of network-wide effects. A powerful yet user-friendly filtering mechanism allows the presented data to be filtered based on parameters like methodology used, in vivo relevance, confidence or data source (e.g. limited to a single laboratory or publication).
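The kind of evidence filtering described above can be pictured with a small sketch. The record fields, names and thresholds below are hypothetical stand-ins chosen for illustration; they are not the actual TopFIND schema or API.

```python
from dataclasses import dataclass

@dataclass
class TerminusRecord:
    protein: str       # e.g. a UniProt accession
    position: int      # residue position of the observed terminus
    methodology: str   # "in vivo", "in vitro" or "in silico"
    confidence: float  # evidence score in [0, 1], invented for this sketch
    source: str        # originating laboratory or publication

def filter_termini(records, min_confidence=0.8,
                   methodologies=("in vivo",), source=None):
    """Keep only records passing the methodology, confidence and
    (optional) data-source filters."""
    return [r for r in records
            if r.methodology in methodologies
            and r.confidence >= min_confidence
            and (source is None or r.source == source)]

records = [
    TerminusRecord("P01234", 24, "in vivo", 0.95, "Lab A"),
    TerminusRecord("P01234", 31, "in silico", 0.99, "Pipeline X"),
    TerminusRecord("P05678", 2, "in vivo", 0.60, "Lab B"),
]
print(filter_termini(records))  # only the first record passes all filters
```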
This provides the means to assess physiologically relevant data and to deduce functional information and hypotheses relevant to the bench scientist. In a later release, analysis tools for the evaluation of proteolytic pathways in experimental data were added. See also MEROPS UniProt Cytoscape Computational genomics Metabolic network modelling Protein–protein interaction prediction References External links TopFIND - main website and web interface Host institution website Research group of Philipp Lange - inventor & core developer Research Group of Christopher Overall - home of TopFIND Merops - the peptidase database UniProt Molecular biology Biological databases Bioinformatics software Systems biology Mathematical and theoretical biology Protein domains Protein families Post-translational modification
TopFIND
[ "Chemistry", "Mathematics", "Biology" ]
807
[ "Mathematical and theoretical biology", "Bioinformatics software", "Applied mathematics", "Gene expression", "Protein classification", "Biochemical reactions", "Bioinformatics", "Post-translational modification", "Protein domains", "Molecular biology", "Biochemistry", "Protein families", "Bi...
33,179,436
https://en.wikipedia.org/wiki/Spin%20angular%20momentum%20of%20light
The spin angular momentum of light (SAM) is the component of angular momentum of light that is associated with the quantum spin and the rotation between the polarization degrees of freedom of the photon. Introduction Spin is the fundamental property that distinguishes the two types of elementary particles: fermions, with half-integer spins; and bosons, with integer spins. Photons, which are the quanta of light, have been long recognized as spin-1 gauge bosons. The polarization of the light is commonly accepted as its “intrinsic” spin degree of freedom. However, in free space, only two transverse polarizations are allowed. Thus, the photon spin is always only connected to the two circular polarizations. To construct the full quantum spin operator of light, longitudinal polarized photon modes have to be introduced. An electromagnetic wave is said to have circular polarization when its electric and magnetic fields rotate continuously around the beam axis during propagation. The circular polarization is left () or right () depending on the field rotation direction and, according to the convention used: either from the point of view of the source, or the receiver. Both conventions are used in science, depending on the context. When a light beam is circularly polarized, each of its photons carries a spin angular momentum (SAM) of , where is the reduced Planck constant and the sign is positive for left and negative for right circular polarizations (this is adopting the convention from the point of view of the receiver most commonly used in optics). This SAM is directed along the beam axis (parallel if positive, antiparallel if negative). The above figure shows the instantaneous structure of the electric field of left () and right () circularly polarized light in space. The green arrows indicate the propagation direction. The mathematical expressions reported under the figures give the three electric-field components of a circularly polarized plane wave propagating in the direction, in complex notation. Mathematical expression The general expression for the spin angular momentum is where is the speed of light in free space and is the conjugate canonical momentum of the vector potential . The general expression for the orbital angular momentum of light is where denotes four indices of the spacetime and Einstein's summation convention has been applied. To quantize light, the basic equal-time commutation relations have to be postulated, where is the reduced Planck constant and is the metric tensor of the Minkowski space. Then, one can verify that both and satisfy the canonical angular momentum commutation relations and they commute with each other . After the plane-wave expansion, the photon spin can be re-expressed in a simple and intuitive form in the wave-vector space where the vector is the field operator of the photon in wave-vector space and the matrix is the spin-1 operator of the photon with the SO(3) rotation generators and the two unit vectors denote the two transverse polarizations of light in free space and unit vector denotes the longitudinal polarization. Due to the longitudinal polarized photon and scalar photon have been involved, both and are not gauge invariant. To incorporate the gauge invariance into the photon angular momenta, a re-decomposition of the total QED angular momentum and the Lorenz gauge condition have to be enforced. 
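In SI units, and in one common sign convention (handedness naming differs between sources), the expressions this section refers to take the following form:

```latex
% Circularly polarized plane wave along z in complex notation; the two
% signs give the two circular polarizations (which one is called "left"
% depends on the convention adopted):
\begin{equation}
  \mathbf{E}_{\pm}(\mathbf{r},t) \;=\; \frac{E_{0}}{\sqrt{2}}
    \left( \hat{\mathbf{x}} \pm i\,\hat{\mathbf{y}} \right)
    e^{\,i(kz-\omega t)} ,
\end{equation}
% each photon of such a wave carrying spin angular momentum +/- hbar
% along the propagation axis.  The transverse (directly observable)
% field spin, in SI units:
\begin{equation}
  \mathbf{S} \;=\; \varepsilon_{0} \int \mathrm{d}^{3}r\;
    \mathbf{E}_{\perp} \times \mathbf{A}_{\perp} .
\end{equation}
% In terms of annihilation operators for the two circular polarizations
% (normal ordering assumed), this becomes a sum over helicities:
\begin{equation}
  \mathbf{S} \;=\; \hbar \int \mathrm{d}^{3}k\;\hat{\mathbf{k}}
    \left( \hat a^{\dagger}_{+}(\mathbf{k})\,\hat a_{+}(\mathbf{k})
         - \hat a^{\dagger}_{-}(\mathbf{k})\,\hat a_{-}(\mathbf{k}) \right) .
\end{equation}
```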
Finally, the direct observable parts of the spin and orbital angular momenta of light are given by
$$\mathbf S=\varepsilon_{0}\int d^{3}x\,\mathbf E_{\perp}\times\mathbf A_{\perp}$$
and
$$\mathbf L=\varepsilon_{0}\sum_{i=x,y,z}\int d^{3}x\,E^{i}_{\perp}\,(\mathbf x\times\nabla)\,A^{i}_{\perp},$$
which recover the angular momenta of classical transverse light. Here, $\mathbf E_{\perp}$ ($\mathbf A_{\perp}$) is the transverse part of the electric field (vector potential), $\varepsilon_{0}$ is the vacuum permittivity, and SI units are used.

We can define the annihilation operators for circularly polarized transverse photons,
$$\hat a_{\pm}(\mathbf k)=\frac{1}{\sqrt{2}}\left[\hat a_{1}(\mathbf k)\mp i\,\hat a_{2}(\mathbf k)\right],$$
with polarization unit vectors
$$\mathbf e_{\pm}(\mathbf k)=\frac{1}{\sqrt{2}}\left[\mathbf e_{1}(\mathbf k)\pm i\,\mathbf e_{2}(\mathbf k)\right].$$
Then the transverse-field photon spin can be re-expressed as
$$\hat{\mathbf S}=\int d^{3}k\,\hbar\,\frac{\mathbf k}{|\mathbf k|}\left[\hat a_{+}^{\dagger}(\mathbf k)\hat a_{+}(\mathbf k)-\hat a_{-}^{\dagger}(\mathbf k)\hat a_{-}(\mathbf k)\right].$$
For a single plane-wave photon, the spin can only have the two values $\pm\hbar$, which are the eigenvalues of the spin operator $\hat s_{z}$ along the propagation direction. The corresponding eigenfunctions, describing photons with well-defined values of SAM, are the circularly polarized waves
$$\mathbf e_{\pm}(\mathbf k)\,e^{i(\mathbf k\cdot\mathbf r-\omega t)}.$$

See also Helmholtz equation Orbital angular momentum of light Polarization (physics) Photon polarization Spin polarization

References Further reading

Angular momentum of light Light Physical quantities
Spin angular momentum of light
[ "Physics", "Mathematics" ]
852
[ "Physical phenomena", "Spectrum (physical sciences)", "Physical quantities", "Quantity", "Angular momentum of light", "Electromagnetic spectrum", "Waves", "Light", "Angular momentum", "Physical properties" ]
33,183,251
https://en.wikipedia.org/wiki/Hartogs%E2%80%93Rosenthal%20theorem
In mathematics, the Hartogs–Rosenthal theorem is a classical result in complex analysis on the uniform approximation of continuous functions on compact subsets of the complex plane by rational functions. The theorem was proved in 1931 by the German mathematicians Friedrich Hartogs and Arthur Rosenthal and has been widely applied, particularly in operator theory.

Statement

The Hartogs–Rosenthal theorem states that if K is a compact subset of the complex plane with Lebesgue measure zero, then any continuous complex-valued function on K can be uniformly approximated by rational functions.

Proof

By the Stone–Weierstrass theorem, any complex-valued continuous function on K can be uniformly approximated by a polynomial in $z$ and $\bar z$. So it suffices to show that $\bar z$ can be uniformly approximated by a rational function on K.

Let $g(z)$ be a smooth function of compact support on $\mathbb C$ equal to 1 on K and set
$$f(z)=\bar z\,g(z).$$
By the generalized Cauchy integral formula,
$$f(w)=\frac{1}{2\pi i}\iint_{\mathbb C\setminus K}\frac{\partial f}{\partial\bar z}\,\frac{dz\wedge d\bar z}{z-w},$$
since K has measure zero. Restricting $w$ to K and taking Riemann approximating sums for the integral on the right-hand side yields the required uniform approximation of $\bar z$ by a rational function: each Riemann sum is a finite linear combination of functions $w\mapsto 1/(z_{j}-w)$ with poles $z_{j}\notin K$, hence a rational function with no poles on K.

See also Runge's theorem Mergelyan's theorem

Notes References

Rational functions Theorems in approximation theory Theorems in complex analysis
Hartogs–Rosenthal theorem
[ "Mathematics" ]
252
[ "Theorems in approximation theory", "Theorems in mathematical analysis", "Theorems in complex analysis" ]
33,183,990
https://en.wikipedia.org/wiki/Cannibalization%20%28parts%29
Cannibalization of machine parts, in the maintenance of mechanical or electronic systems with interchangeable parts, refers to the practice of removing parts or subsystems necessary for repair from another similar device, rather than from inventory, usually when resources become limited. The source system is usually crippled as a result, perhaps only temporarily, in order to allow the recipient device to function properly again.

Cannibalization usually occurs due to unavailability of spare parts, an emergency, long resupply times, physical distance, or insufficient planning or budget. Cannibalization can also involve reusing surplus inventory: at the end of World War II, a large quantity of high-quality but unusable war surplus equipment, such as radar devices, made a ready source of parts for building radio equipment. Cannibalization can also be an economic or ecological choice for end-of-life products. In Germany, rather than selling or exporting functional used cars, dealers will disassemble them and store parts that are no longer being produced, because the parts' individual value exceeds that of the whole car. The same happens with certain semiconductors, which are "pulled" from working machines and sold for a profit. In the electronics market, machines being cannibalized are known as parts machines, or are kept in a boneyard until needed.

Diminishing manufacturing sources

Sometimes, removing parts from old equipment is the only way to obtain spare parts, either because they are no longer made, are obsolete, or can only be manufactured in large quantities. In logistics, this is known as Diminishing Manufacturing Sources (DMS). This is often the case in the military, with ships and aircraft, and with other expensive equipment produced in limited quantities. Such was the case with the aircraft carrier USS Kitty Hawk, the sole survivor of a class of three ships built during the early 1960s. The ship is over forty years old, and having manufacturers build individual custom replacement parts would be highly impractical; thus decommissioned ships, such as the USS Independence, have been utilized for the parts necessary to keep the Kitty Hawk in operation. Another example is Union Pacific's 4-8-4 locomotive No. 838, which is used as a spare-parts source for No. 844, since the type has been out of production for decades and its manufacturer no longer exists; similarly, Canadian National 3377 is a source of parts for Canadian National 3254.

One strategy used to combat DMS is to buy additional inventory during the production run of a system or part, in quantities sufficient to cover the expected number of failures. This strategy is known as a lifetime buy (a sizing sketch follows below).

See also Aircraft boneyard Knockdown aircraft Wrecking yard

References UK Aircraft Parts Cannibalization - Regulatory Article (RA) 4812

Maintenance Scarcity
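How a lifetime buy might be sized can be illustrated with a small calculation. The sketch below is a simplified, hypothetical model: it assumes failures follow a Poisson process with a constant annual rate (real programs also account for fleet size, wear-out and repairability), and the function name and numbers are invented for the example.

    import math

    def lifetime_buy_quantity(annual_failure_rate, support_years, service_level=0.95):
        """Smallest stock level that covers the support period with the
        requested probability, assuming Poisson-distributed failures."""
        mean_failures = annual_failure_rate * support_years
        quantity, cumulative, term = 0, 0.0, math.exp(-mean_failures)
        while True:
            cumulative += term                 # running Poisson CDF P(X <= quantity)
            if cumulative >= service_level:
                return quantity
            quantity += 1
            term *= mean_failures / quantity   # next Poisson term P(X = quantity)

    # Example: 1.2 expected failures per year, 10 years of support, 95% confidence
    print(lifetime_buy_quantity(1.2, 10))      # -> roughly 18 units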
Cannibalization (parts)
[ "Engineering" ]
567
[ "Maintenance", "Mechanical engineering" ]
33,184,343
https://en.wikipedia.org/wiki/Jaintia%20Rajbari
Jaintia Rajbari is a royal residence located in Jaintiapur, Sylhet, Bangladesh. It was the residence of the rulers of the Jaintia Kingdom. See also Khasi people Pnar people References Architecture in Bangladesh Architectural history Tourist attractions in Bangladesh Sylhet District
Jaintia Rajbari
[ "Engineering" ]
60
[ "Architectural history", "Architecture" ]
35,861,790
https://en.wikipedia.org/wiki/Mississippi%20River%20Basin%20Model
The Mississippi River Basin Model was a large-scale hydraulic model of the entire Mississippi River basin, covering an area of 200 acres. It is part of the Waterways Experiment Station, located near Clinton, Mississippi. The model was built from 1943 to 1966 and was in operation from 1949 until 1973. By comparison, the better known San Francisco Bay Model covers 1.5 acres and the Chesapeake Bay Model covers 8 acres. The model is now derelict, but open to the public within Buddy Butts Park, Jackson.

Background

Large-scale, localised flood control measures such as levees had been constructed since the early 1900s, especially in the decade after the Great Mississippi Flood of 1927 and following the Flood Control Act of 1936. From 1928 onwards, the Army Corps of Engineers built a huge number of locks and run-off channels, and extended and raised existing levees. These control measures only targeted single sites and did not look at the entire river system. There had already been extensive modelling of individual sections of the river at the Waterways Experiment Station in Vicksburg, including a 1060 ft long model of the 600 river miles from Helena, Arkansas to Donaldsonville, Louisiana, but by early 1937 it was clear that the control measures were not completely successful. In 1941 Eugene Reybold proposed a large-scale hydraulic model which would allow the engineers to simulate weather and floods and to evaluate the effect of flood control measures on the entire system. This would cover approximately 200 acres and include all existing and proposed control measures, with a network of streams nearly 8 miles in total length.

Design

The scale of the model was 1:100 vertical and 1:2000 horizontal. At this scale, the Appalachian Mountains are raised 20 ft above the Gulf of Mexico, the Rocky Mountains 50 ft. The larger vertical scale was thought to reduce surface-tension effects and therefore better simulate turbulence. The model used individually cast 10 ft x 10 ft (approximate) concrete panels, contoured with the land shape and river bed, including tributaries, cliffs, lakes, flood plains, bridges, and levees. Metal plugs or divots in the river bed provided roughness to simulate different types of material, whilst folded metal mesh simulated dense foliage. With each gallon of water representing 1.5 million gallons, an entire day of river flow along the whole system could be simulated in 5 minutes.

Construction

As wartime labour was short, it was proposed to make use of Italian and German prisoners of war. Construction at the site began in January 1943, commencing with housing for the WES personnel needed to direct work on the model, as well as an internment camp for 3,000 men at nearby Camp Clinton. The first POWs (200 of Rommel's Afrika Korps) arrived in August 1943, and by December there were almost 1,800. Enlisted men received 90 cents for 8 hours' labor. Officers and non-commissioned officers were not required to work, but could volunteer. By May 1946, the last of the prisoners had been repatriated, and the site was almost ready for model construction. Individual sections were in operation from 1949, but construction was not completed until 1966, partly due to the complexity of modelling such a vast area, but also due to irregular funding.

Operation

By 1952 the Missouri River segment was fully operational and was used extensively to predict problems during that year's April floods, helping to avoid damage of an estimated $65 million.
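The relationship between the model's distorted scales and its accelerated clock can be cross-checked with Froude scaling, the usual similarity rule for free-surface hydraulic models. The article does not state which similarity law the Corps actually applied, so the sketch below is only a plausibility check under that assumption.

    # Froude-similarity check of the basin model's time scale (assumption:
    # distorted-scale Froude scaling; the documented design basis may differ)
    horizontal_scale = 2000    # prototype:model, 1:2000 horizontal
    vertical_scale = 100       # prototype:model, 1:100 vertical

    velocity_ratio = vertical_scale ** 0.5           # Froude: v scales with sqrt(depth)
    time_ratio = horizontal_scale / velocity_ratio   # time = distance / velocity

    minutes_per_prototype_day = 24 * 60 / time_ratio
    print(f"1 prototype day ~ {minutes_per_prototype_day:.1f} model minutes")
    # -> about 7.2 minutes, the same order as the 5 minutes quoted above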
By 1959 the model was complete as far as Memphis, and a comprehensive testing program coordinating the entire model was begun. In 1964 the site was opened to visitors for self-guided tours, with facilities including an assembly center, a 40 ft observation tower, an operation observation room, and elevated platforms, drawing about 5,000 visitors a year. On completion in 1966, basin-wide tests examined the effectiveness of reservoirs and looked at maximizing flood protection. For the next three years, the historic floods of 1937, 1943, 1945 and 1952 were reproduced, as well as hypothetical floods at different periods of the year. Tests on individual problems were conducted until 1971, but high costs and the growth of computer modelling meant that the facility was put on standby. The last use was in 1973, when a potentially catastrophic failure arose at the Old River Control Structure. The model was used to show that the untested Morganza Spillway could be opened effectively, without diverting polluted water through New Orleans and Baton Rouge, as well as to identify levees that required topping up.

Current status

In 1993 the site was taken over by the City of Jackson, designated as a Mississippi Landmark, and a city park was formed around the site. The cost of maintaining the site as a tourist attraction was too high, so the model was abandoned and became overgrown. In 2000 the model was included in the Mississippi Heritage Trust's 10 Most Endangered List; it was featured in a Google Sightseeing post in 2007, and thereafter was visited and blogged about by several urban explorers and photographers. In 2010 it was reported that the panels were still intact, and observation platforms and walkways still in place. In 2011 students from Louisiana State University received an Honor Award from the American Society of Landscape Architects for their project to revitalise the park and relaunch the model as a tourist attraction. Richard Coupe of the Jackson Free Press visited the site in 2013 and reported it as overgrown, but open to the public within Buddy Butts Park. A team from 16 WAPT News surveyed the site using the Eagle Eye 16 drone and reported it as overgrown, defaced, and with several pieces of the grid collapsing. Access is via the park entrance on McRaven Road; the model is next to the soccer fields. Friends of the Mississippi River Basin Model, a group of local volunteers, has started clearing out the model with the hope of opening it to the public. The Libertarian Party of Hinds County also assists with this project. The model was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2018.

References External links https://friendsofmrbm.org/

Mississippi River watershed Hydrology models Buildings and structures in Hinds County, Mississippi Mississippi Landmarks Mississippi embayment Scale modeling 1943 establishments in Mississippi Historic Civil Engineering Landmarks
Mississippi River Basin Model
[ "Physics", "Engineering", "Biology", "Environmental_science" ]
1,249
[ "Scale modeling", "Hydrology", "Biological models", "Historic Civil Engineering Landmarks", "Civil engineering", "Hydrology models", "Environmental modelling" ]
35,862,485
https://en.wikipedia.org/wiki/Microsoft%20SmartScreen
SmartScreen (officially called Windows SmartScreen, Windows Defender SmartScreen and SmartScreen Filter in different places) is a cloud-based anti-phishing and anti-malware component included in several Microsoft products:
- All versions of the Microsoft Windows operating system since Windows 8
- The web browsers Internet Explorer and Microsoft Edge
- The Xbox One and Xbox Series X and Series S video game consoles
- The online services Microsoft 365 (including Microsoft Outlook and Exchange) and Microsoft Bing

SmartScreen as a business unit includes the intelligence platform, backend, serving frontend, UX, policy, expert graders, and closed-loop intelligence (machine learning and statistical techniques) designed to help protect Microsoft customers from safety threats like social engineering and drive-by downloads.

SmartScreen in Internet Explorer

Internet Explorer 7: Phishing Filter

SmartScreen was first introduced in Internet Explorer 7, where it was known as the Phishing Filter. The Phishing Filter does not check every website visited by the user, only those that are known to be suspicious.

Internet Explorer 8: SmartScreen Filter

With the release of Internet Explorer 8, the Phishing Filter was renamed SmartScreen and extended to include protection from socially engineered malware. Every website and download is checked against a local list of popular legitimate websites; if the site is not listed, the entire address is sent to Microsoft for further checks (a flow sketched in the example below). If it has been labeled as an impostor or harmful, Internet Explorer 8 displays a warning screen stating that the site has been reported as harmful and should not be visited. From there the user can either visit their homepage, visit the previous site, or continue to the unsafe page. If a user attempts to download a file from a location reported harmful, the download is cancelled. The effectiveness of SmartScreen filtering has been reported to be superior to the socially engineered malware protection in other browsers. According to Microsoft, the SmartScreen technology used by Internet Explorer 8 was successful against phishing and other malicious sites and in blocking socially engineered malware. Beginning with Internet Explorer 8, SmartScreen can be enforced using Group Policy.

Internet Explorer 9: Application Reputation

In Internet Explorer 9, SmartScreen added protection against malware downloads by launching SmartScreen Application Reputation, which identifies both safe and malicious software. The system blocked known malware while warning the user if an executable was not yet known to be safe. The system also took into account the reputation of the download website, building on SmartScreen's phishing filter from the prior browser versions, Internet Explorer 7 and 8.

Internet Explorer Mobile 10

Internet Explorer Mobile 10 was the first release of Internet Explorer Mobile to support the SmartScreen Filter.

Microsoft Edge

Microsoft Edge (Legacy) was Microsoft's new browser beginning in Windows 10, built on the same Windows web platform powering Internet Explorer. Microsoft Edge was later rebuilt on Google's Chromium browser stack to go cross-platform onto macOS and down-level into Windows 8.1 and below. SmartScreen shipped with each version of Microsoft Edge, mostly with Internet Explorer parity, with progressive versions adding protection improvements targeting new consumer threat classes such as tech support scams, and adding new enterprise configurability features.
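The two-step checking flow described for Internet Explorer 8 (consult a local allowlist of popular legitimate sites first, and only then ask the cloud service about the full address) can be sketched as follows. Everything in the sketch is a hypothetical stand-in: the function names, the allowlist contents and the service interface are invented for illustration and are not Microsoft's actual API.

    from enum import Enum

    class Verdict(Enum):
        SAFE = "safe"
        MALICIOUS = "malicious"

    # Hypothetical stand-in for the locally cached list of popular sites
    LOCAL_ALLOWLIST = {"example.org", "wikipedia.org"}

    def query_reputation_service(url: str) -> Verdict:
        """Placeholder for the cloud reputation lookup SmartScreen performs."""
        raise NotImplementedError("illustration only")

    def check_url(url: str, host: str) -> Verdict:
        # Step 1: well-known legitimate sites pass without a cloud query
        if host in LOCAL_ALLOWLIST:
            return Verdict.SAFE
        # Step 2: otherwise the full address is sent for a reputation check;
        # a malicious verdict triggers the warning page / download cancel
        return query_reputation_service(url)

    print(check_url("https://example.org/page", "example.org"))  # Verdict.SAFE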
Addressed criticisms

In October 2017, criticisms regarding URL submission methods were addressed with the creation of the "Report unsafe site" URL submission page. Prior to 2017, Microsoft required a user to visit a potentially dangerous website in order to use the in-browser reporting tool, potentially exposing users to dangerous web content. In 2017, Microsoft reversed that policy by adding the URL submission page, allowing a user to submit an arbitrary URL without having to visit the website.

SmartScreen Filter in Microsoft Outlook was previously bypassable due to a data gap in Internet Explorer. Some phishing attacks use a phishing email linking to a front-end URL unknown to Microsoft; clicking this URL in the inbox opens the URL in Internet Explorer, and the loaded website then, using client-side or server-side redirections, redirects the user to the malicious site. In the original implementation of SmartScreen, the "Report this website" option in Internet Explorer only reported the currently open page (the final URL in the redirect chain); the original referrer URL in the phishing attack was not reported to Microsoft and remained accessible. This was mitigated beginning with some versions of Microsoft Edge Legacy by sending the full redirection chain to Microsoft for further analysis.

SmartScreen in Windows

Windows 8 and Windows 8.1

In Microsoft Windows 8, SmartScreen added built-in operating system protection against web-delivered malware, performing reputation checks by default on any file or application downloaded from the Internet, including those downloaded from email clients like Microsoft Outlook or non-Microsoft web browsers like Google Chrome. Windows SmartScreen functioned inline at the Windows shell directly prior to execution of any downloaded software. Whereas SmartScreen in Internet Explorer 9 warned against downloading and executing unsafe programs only in Internet Explorer, Windows SmartScreen blocked execution of unsafe programs of any Internet origin. With SmartScreen left at its default settings, administrator privilege would be required to launch and run an unsafe program.

Reactions

Microsoft faced concerns surrounding the privacy, legality and effectiveness of the new system; critics suggested that the automatic analysis of files (which involves sending a cryptographic hash of the file and the user's IP address to a server) could be used to build a database of users' downloads online, and that the use of the outdated SSL 2.0 protocol for communication could allow an attacker to eavesdrop on the data. In response, Microsoft later issued a statement noting that IP addresses were only being collected as part of the normal operation of the service and would be periodically deleted, that SmartScreen on Windows 8 would only use SSL 3.0 for security reasons, and that information gathered via SmartScreen would not be used for advertising purposes or sold to third parties.

Windows 10 and Windows 11

Beginning in Windows 10, Microsoft placed the SmartScreen settings into the Windows Defender Security Center. Further Windows 10 and Windows 11 updates have added more enterprise configurability as part of Microsoft's enterprise endpoint protection product.

SmartScreen in Outlook

Outlook.com uses SmartScreen to protect users from unsolicited e-mail messages (spam/junk), fraudulent emails (phishing) and malware spread via e-mail. After its initial review of the body text, the system focuses on the hyperlinks and attachments.
Junk mail (spam)

To filter spam, SmartScreen Filter uses machine learning from Microsoft Research, which learns from known spam threats and from user feedback when emails are marked as "Spam" by the user. Over time, these preferences help SmartScreen Filter to distinguish between the characteristics of unwanted and legitimate e-mail, and it can also determine the reputation of senders from the number of their e-mails that have been checked. Using these algorithms and the sender's reputation, a Spam Confidence Level (SCL) score is assigned to each e-mail message (the lower the score, the more desirable). A score of -1, 0, or 1 is considered not spam, and the message is delivered to the recipient's inbox. A score of 5, 6, 7, 8, or 9 is considered spam and is delivered to the recipient's Junk Folder. Scores of 5 or 6 are considered to be suspected spam, while a score of 9 is considered certainly spam. The SCL score of an email can be found in the various x-headers of the received email (a routing sketch based on these bands appears below).

Phishing

SmartScreen Filter also analyses email messages for fraudulent and suspicious Web links. If such suspicious characteristics are found in an email, the message is either sent directly to the Spam folder or flagged with a red information bar at the top of the message which warns of the suspect properties. SmartScreen also protects against spoofed domain names (spoofing) in emails, verifying whether an email is sent by the domain it claims to be sent from. For this it uses the Sender ID and DomainKeys Identified Mail (DKIM) technologies. SmartScreen Filter also makes e-mail from authenticated senders easier to distinguish by placing a green shield icon next to their subject lines.

Effectiveness

Browser social engineering protection

In late 2010, the results of browser malware testing undertaken by NSS Labs were published. The study looked at the browser's capability to prevent users following socially engineered links of a malicious nature and downloading malicious software. It did not test the browser's ability to block malicious web pages or code. According to NSS Labs, Internet Explorer 9 blocked 99% of malware downloads, compared to 90% for Internet Explorer 8 (which lacks the SmartScreen Application Reputation feature) and 13% for Firefox, Chrome, and Safari, which all use a Google Safe Browsing malware filter. Opera 11 was found to block just 5% of malware. SmartScreen Filter was also noted for adding legitimate sites to its blocklists almost instantaneously, as opposed to the several hours it took for blocklists to be updated on other browsers. In early 2010, similar tests had given Internet Explorer 8 an 85% passing grade, the 5% improvement being attributed to "continued investments in improved data intelligence". By comparison, the same research showed that Chrome 6, Firefox 3.6 and Safari 5 scored 6%, 19% and 11%, respectively. Opera 10 scored 0%, failing to "detect any of the socially engineered malware samples". In July 2010, Microsoft claimed that SmartScreen on Internet Explorer had blocked over a billion attempts to access sites containing security risks. According to Microsoft, the SmartScreen Filter included in Outlook.com blocks 4.5 billion unwanted e-mails daily from reaching users. Microsoft also claims that only 3% of incoming email is junk mail, but a test by Cascade Insights says that just under half of all junk mail still arrives in the inbox of users.
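The SCL bands described above map naturally onto a small routing function. The sketch below simply mirrors the published score bands and is not Microsoft's implementation; the function name is invented for the example.

    def route_by_scl(scl: int) -> str:
        """Route a message by its Spam Confidence Level (SCL) score.

        Mirrors the bands described above: -1/0/1 -> inbox;
        5-9 -> junk folder (5-6 suspected spam, 9 certainly spam).
        """
        if scl in (-1, 0, 1):
            return "inbox"
        if 5 <= scl <= 9:
            return "junk"
        # The bands above do not describe scores 2-4, so treat them as unexpected
        raise ValueError(f"unexpected SCL score: {scl}")

    print(route_by_scl(0))  # -> inbox
    print(route_by_scl(6))  # -> junk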
In a September 2011 blog post, Microsoft stated that 1.5 billion attempted malware attacks and over 150 million attempted phishing attacks had been stopped. In 2017, Microsoft addressed criticisms about the URL submission process by creating a dedicated page to report unsafe sites, rather than requiring users to visit the potentially dangerous site. Over time, SmartScreen has expanded to protect against new threats like tech support scams, potentially unwanted applications (PUAs) and drive-by attacks that don't require user interaction.

Criticism

Validity of browser protection tests

Manufacturers of other browsers have criticized the third-party tests which claim Internet Explorer has superior phishing and malware protection compared to that of Chrome, Firefox, or Opera. Criticisms have focused mostly on the lack of transparency of the URLs tested and the lack of consideration of layered security additional to the browser, with Google commenting that "The report itself clearly states that it does not evaluate browser security related to vulnerabilities in plug-ins or the browsers themselves", and Opera commenting that the results appeared "odd that they received no results from our data providers" and that "social malware protection is not an indicator of overall browser security".

Windows malware protection

SmartScreen builds reputation based on the code signing certificates that identify the author of the software. This means that once a reputation has been built, new versions of an application can be signed with the same certificate and maintain the same reputation. However, code signing certificates need to be renewed every two years, and SmartScreen does not relate a renewed certificate to an expired one. This means that reputations need to be rebuilt every two years, with users getting frightening messages in the meantime. Extended Validation (EV) certificates seem to avoid this issue, but they are expensive and difficult to obtain for small developers.

SmartScreen Filter creates a problem for small software vendors when they distribute an updated version of installation or binary files over the internet. Whenever an updated version is released, SmartScreen responds by stating that the file is not commonly downloaded and can therefore install harmful files on your system. This can be fixed by the author digitally signing the distributed software; reputation is then based not only on a file's hash but on the signing certificate as well. A common distribution method for authors to bypass SmartScreen warnings is to pack their installation program (for example Setup.exe) into a ZIP archive and distribute it that way, though this can confuse novice users.

Another criticism is that SmartScreen increases the cost of non-commercial and small-scale software development. Developers either have to purchase standard code signing certificates or more expensive extended validation certificates. Extended validation certificates allow the developer to immediately establish reputation with SmartScreen, but are often unaffordable for people developing software either for free or not for immediate profit. The standard code signing certificates, however, pose a catch-22 for developers: SmartScreen warnings make people reluctant to download software, so getting downloads requires first passing SmartScreen, passing SmartScreen requires an established reputation, and reputation depends on downloads.
See also Anti-phishing software Google Safe Browsing macOS Gatekeeper References External links A detailed FAQ by Microsoft on SmartScreen Filter SmartScreen Computer network security Microsoft Windows security technology
Microsoft SmartScreen
[ "Engineering" ]
2,672
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security" ]
35,863,555
https://en.wikipedia.org/wiki/AIR%20Shipper
The AIR Shipper is a regulatory manual utilized by air shippers for shipping dangerous goods. A.I.R. Shipper is the first regulations publication recognized by the International Civil Aviation Organization and is developed in compliance with ICAO standards. A.I.R. Shipper is published by Labelmaster, a U.S.-based manufacturer of regulatory compliance products. See also List of UN numbers Packaging and labeling Packing groups UN Recommendations on the Transport of Dangerous Goods References Safety Hazardous materials
AIR Shipper
[ "Physics", "Chemistry", "Technology" ]
99
[ "Materials", "Hazardous materials", "Matter" ]
35,864,002
https://en.wikipedia.org/wiki/Siegel%20identity
In mathematics, Siegel's identity refers to one of two formulae that are used in the resolution of Diophantine equations.

Statement

The first formula is
$$\frac{x_{3}-x_{1}}{x_{2}-x_{1}}+\frac{x_{2}-x_{3}}{x_{2}-x_{1}}=1.$$
The second is
$$\frac{x_{3}-x_{1}}{x_{2}-x_{1}}\cdot\frac{t-x_{2}}{t-x_{3}}+\frac{x_{2}-x_{3}}{x_{2}-x_{1}}\cdot\frac{t-x_{1}}{t-x_{3}}=1.$$

Application

The identities are used in translating Diophantine problems connected with integral points on hyperelliptic curves into S-unit equations.

See also Siegel formula

References Algebraic identities Diophantine equations
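The second identity can be confirmed by direct expansion (a worked check added for the reader; it is elementary algebra rather than part of the original statement):

    \[
    \frac{(x_3-x_1)(t-x_2)+(x_2-x_3)(t-x_1)}{(x_2-x_1)(t-x_3)}
      = \frac{t(x_2-x_1)-x_3(x_2-x_1)}{(x_2-x_1)(t-x_3)}
      = 1,
    \]
    since the $x_1 x_2$ terms in the numerator cancel and what remains factors
    as $(x_2-x_1)(t-x_3)$.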
Siegel identity
[ "Mathematics" ]
76
[ "Algebraic identities", "Mathematical objects", "Equations", "Diophantine equations", "Mathematical identities", "Number theory" ]
5,264,737
https://en.wikipedia.org/wiki/Decay%20scheme
The decay scheme of a radioactive substance is a graphical presentation of all the transitions occurring in a decay, and of their relationships. Examples are shown below. It is useful to think of the decay scheme as placed in a coordinate system, where the vertical axis is energy, increasing from bottom to top, and the horizontal axis is the proton number, increasing from left to right. The arrows indicate the emitted particles. For gamma rays (vertical arrows), the gamma energies are given; for beta decay (oblique arrow), the maximum beta energy.

Examples

These relations can be quite complicated; a simple case is shown here: the decay scheme of the radioactive cobalt isotope cobalt-60. 60Co decays by emitting an electron (beta decay) with a half-life of 5.272 years into an excited state of 60Ni, which then decays very quickly to the ground state of 60Ni via two gamma decays. All known decay schemes can be found in the Table of Isotopes. Nickel is to the right of cobalt, since its proton number (28) is higher by one than that of cobalt (27). In beta decay, the proton number increases by one. For a positron decay and also for an alpha decay (see below), the oblique arrow would go from right to left, since in these cases the proton number decreases. Since energy is conserved and the emitted particles carry away energy, arrows can only go downward (vertically or at an angle) in a decay scheme.

A somewhat more complicated scheme is shown here: the decay of the nuclide 198Au, which can be produced by irradiating natural gold in a nuclear reactor. 198Au decays via beta decay to one of two excited states or to the ground state of the mercury isotope 198Hg. In the figure, mercury is to the right of gold, since the atomic number of gold is 79 and that of mercury is 80. The excited states decay after very short times (2.5 and 23 ps, respectively; 1 picosecond is a millionth of a millionth of a second) to the ground state.

While excited nuclear states are usually very short lived, decaying almost immediately after a beta decay (see above), the excited state of the technetium isotope shown here to the right is comparatively long lived. It is therefore called "metastable" (hence the "m" in 99mTc). It decays to the ground state via gamma decay with a half-life of 6 hours.

Here, to the left, we now have an alpha decay: the decay of polonium-210, an isotope of the element polonium discovered by Marie Curie. The isotope 210Po is the penultimate member of the uranium-radium decay series; it decays into a stable lead isotope with a half-life of 138 days. In almost all cases, the decay is via the emission of an alpha particle of 5.305 MeV. Only in one case in 100,000 does an alpha particle of lower energy appear; in this case, the decay leads to an excited level of 206Pb, which then decays to the ground state via gamma radiation.

Selection rules

Alpha, beta and gamma rays can only be emitted if the conservation laws (energy, angular momentum, parity) are obeyed. This leads to so-called selection rules. Applications for gamma decay can be found in Multipolarity of gamma radiation. To discuss such a rule in a particular case, it is necessary to know the angular momentum and parity of every state. The figure shows the 60Co decay scheme again, with spins and parities given for every state.

References Radioactivity Nuclear physics
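The half-lives quoted above translate into surviving fractions through the exponential decay law N(t) = N0 * 2^(-t / T1/2). The short sketch below is an added illustration applying that law to two of the isotopes discussed:

    # Exponential decay law: surviving fraction after time t is 2 ** (-t / half_life)
    def remaining_fraction(elapsed, half_life):
        return 2.0 ** (-elapsed / half_life)

    # Cobalt-60, half-life 5.272 years: fraction left after 10 years
    print(f"Co-60 after 10 years:  {remaining_fraction(10, 5.272):.3f}")   # ~0.269

    # Polonium-210, half-life 138 days: fraction left after one year
    print(f"Po-210 after 365 days: {remaining_fraction(365, 138):.3f}")    # ~0.160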
Decay scheme
[ "Physics", "Chemistry" ]
757
[ "Radioactivity", "Nuclear physics" ]
5,268,864
https://en.wikipedia.org/wiki/Collaborative%20Computational%20Project%20Number%204
The Collaborative Computational Project Number 4 in Protein Crystallography (CCP4) was set up in 1979 in the United Kingdom to support collaboration between researchers working in software development and assemble a comprehensive collection of software for structural biology. The CCP4 core team is located at the Research Complex at Harwell (RCaH) at Rutherford Appleton Laboratory (RAL) in Didcot, near Oxford, UK. CCP4 was originally supported by the UK Science and Engineering Research Council (SERC), and is now supported by the Biotechnology and Biological Sciences Research Council (BBSRC). The project is coordinated at CCLRC Daresbury Laboratory. The results of this effort gave rise to the CCP4 program suite, which is now distributed to academic and commercial users worldwide. Projects CCP4i – CCP4 Graphical User Interface CCP4MG – CCP4 Molecular Graphics Project Coot – Graphical Model Building HAPPy – automated experimental phasing MrBUMP – automated Molecular Replacement PISA – Protein Interfaces, Surfaces and Assemblies MOSFLM GUI – building a modern interface to MOSFLM See also CCP4 (file format) External links CCP4 Documentation wiki — concentrates only on CCP4 CCP4 Community wiki — general X-ray crystallography topics related to CCP4 References Crystallography Medical Research Council (United Kingdom) Science and technology in Oxfordshire Vale of White Horse
Collaborative Computational Project Number 4
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
288
[ "Crystallography", "Condensed matter physics", "Materials science" ]
5,272,642
https://en.wikipedia.org/wiki/International%20Council%20on%20Systems%20Engineering
The International Council on Systems Engineering (INCOSE; pronounced "in-KOH-see") is a not-for-profit membership organization and professional society in the field of systems engineering with about 25,000 members and associates, including individual, corporate, and student members. INCOSE's main activities include conferences, publications, local chapters, certifications and technical working groups. The INCOSE International Symposium is usually held in July, and the INCOSE International Workshop is held in the United States or Europe in January. Currently, there are about 70 local INCOSE chapters globally, with most chapters outside the United States representing entire countries, while chapters within the United States represent cities or regions. INCOSE organizes about 55 technical working groups with international membership, aimed at collaboration and the creation of INCOSE products, printed and online, in the field of systems engineering. There are working groups for topics within systems engineering practice, for systems engineering in particular industries, and for systems engineering's relationship to other related disciplines.

INCOSE produces two main periodicals, the journal Systems Engineering and the practitioner magazine INSIGHT, as well as a number of individual published works, including the INCOSE Handbook. In collaboration with the IEEE Systems Society and the Systems Engineering Research Center (SERC) at Stevens Institute of Technology, INCOSE produces and maintains the online Systems Engineering Body of Knowledge (SEBoK), a wiki-style reference open to contributions from anyone, but with content controlled and managed by an editorial board. INCOSE certifies systems engineers through its three-tier certification process, which requires a combination of education, years of experience and passing an examination based on the INCOSE Systems Engineering Handbook. INCOSE is a member organization of the Federation of Enterprise Architecture Professional Organizations (FEAPO), a worldwide association of professional organizations formed to advance the discipline of enterprise architecture.

Purpose

The stated vision of INCOSE is "A better world through a systems approach" and its mission is "To address complex societal and technical challenges by enabling, promoting and advancing systems engineering and systems approaches". The organization's goals are focused on the creation and dissemination of systems engineering information, promoting international collaboration, and promoting the profession of systems engineering.

Publications

- INCOSE Systems Engineering Handbook
- Systems Engineering (journal)
- INSIGHT (practitioner magazine)
- Metrics Guidebook for Integrated Systems and Product Development
- I-pub publication database
- Systems Engineering Tools Database

Standards

INCOSE's Standards Technical Committee (STC) is working to advance and harmonize systems engineering standards used worldwide.
Some of the notable standards the STC has been involved with are: ECSS-E-10 Space Engineering - Systems Engineering Part 1B: Requirements and process, 18 Nov 2004 ECSS-E-10 Space Engineering - Systems Engineering Part 6A: Functional and technical specifications, 09 Jan 2004 ECSS-E-10 Space Engineering - Systems Engineering Part 7A: Product data exchange, 25 August 2004 ISO/IEC/IEEE 15288: 2015 - System and Software Life Cycle Processes OMG Systems Modeling Language (OMG SysML), July 2006 Awards granted INCOSE Pioneer Award: annual prize for people who have made significant pioneering contributions to the field of Systems Engineering References External links INCOSE Organizations established in 1990 Engineering societies Systems engineering Systems science societies
International Council on Systems Engineering
[ "Engineering" ]
639
[ "Engineering societies", "Systems engineering" ]
27,838,006
https://en.wikipedia.org/wiki/North%20light%20%28architecture%29
North light (in the Northern Hemisphere) is sunlight coming through a north-facing window. Because it does not come directly from the sun, it remains at a consistent angle and colour throughout the day and does not create sharp shadows. It is also cooler than direct sunlight due to the way the Earth's atmosphere scatters light via Rayleigh scattering. These properties make it the natural light of choice in certain styles of architecture, painting and photography. In addition, the cool colour of north light has been studied for its effect on our perception of art in galleries and museums. South of the equator (in the Southern Hemisphere), the same characteristics are seen in south light.

Formation and properties

Because the sun passes to the south of most observers in the Northern Hemisphere, north light is the light coming from the sky, rather than directly from the sun. This is the reason for its diffused nature, as well as why it casts softer shadows than direct sunlight and remains more consistent in colour than light from the east or west (which would be affected by sunrise and sunset respectively). The colour temperature of diffused daylight is 5500–6500 K, meaning that north light is cooler (more blue-tinted) than light from other directions. The main cause of this is Rayleigh scattering, which was first mathematically described by the British physicist Lord Rayleigh (John William Strutt) in 1871. Earth's atmosphere contains large proportions of nitrogen and oxygen which, via Rayleigh scattering, scatter short wavelengths of electromagnetic radiation more efficiently than longer wavelengths. Thus, light scattered or diffused away from the main beam of sunlight appears cooler, while direct light (especially during sunrise or sunset) appears warmer. North light is also less bright than direct sunlight, as only a portion of incoming white light is scattered. The brightness of north light can be between 10,000 and 30,000 lux, depending on location and season, while direct sunlight can be as bright as 100,000 lux.

In architecture

The use of natural light in architecture is called daylighting. It first rose to professional importance during the Roman Empire, when architects struggled to improve the ambience of public and religious spaces while reducing glare. At first this involved physical structures such as clerestory windows and roof slits; however, by the first century CE the direction of light began to play a bigger role. The Pantheon, rebuilt during the reign of emperor Hadrian (117–138 CE), employs an oculus (roof window) to let in unobstructed but diffused light, much as a modern north-facing window or skylight would.

Because the sun moves to the south of buildings in the Northern Hemisphere, the northern side of these buildings is always in the shade. For daylighting, this has certain implications: North-facing rooms can feel dark and cool. This can be controlled by painting the room a bright colour such as red, yellow or orange, by finishing the room with wood, or by using larger windows. In addition, north light does not change colour (warmth) throughout the day, so whichever mood is achieved through the room's painted colour, it is likely to remain constant. As north light is more diffuse than direct sunlight, it poses fewer problems with glare. This makes it suitable for office spaces, home theatres or reading rooms. This advantage can help with the previous point – less glare means that larger windows can be used without the need for exterior shading.
Infrequently used rooms such as laundries, hallways and bathrooms can be oriented north. This would free up south-facing floor plan space for rooms used more often, such as living areas and kitchens. In addition, the latitude of a building changes the effect of north light. Near the equator, the only difference between north and south light may be seasonal. In temperate regions, the implications above apply year-round. In polar areas, they may be even more extreme. For example, Anchorage in Alaska receives only five hours of sunlight at winter solstice, with the sun rising only 5.5 degrees above the horizon. This would make north-facing windows too dark to be of any use during winter.

Passive housing

Houses that rely on sunlight, wind and insulation to regulate their temperature (as opposed to artificial cooling and heating) are known as passive houses. Because of its lower brightness and lower amount of infrared radiation, north light transmits less warmth to buildings than direct sunlight does. This makes large north-facing windows better suited for passive houses in warm, tropical climates, as it allows living areas to receive ample sunlight without overheating the building.

In painting

North light was an important feature of painting studios before the development of electric lights, but its use continues to this day. Much as in architecture, light direction is important for the mood of a painting, but the light's ambience is even more critical. This is because rays of sun entering from the east, west and south change shape and direction during the day. A complex scene would be very difficult to paint while the shadows and reflections move. Furthermore, accurately depicting harsh light requires a large dynamic range (difference in darkness between blacks and whites) which most traditional paints cannot reproduce. The Italian Renaissance painter Leon Battista Alberti alluded to this lack of dynamic range in 1435, writing that "no surface should be made so white that you cannot make it [...] whiter still". However, it was Leonardo da Vinci who first wrote about studio lighting in detail. By 1492 he had discovered the adaptation of the human pupil to darkness, noting that "the eye perceives and recognises the objects with greater intensity [when] the pupil is more widely dilated". Following this, he painted in a dimly lit studio and avoided harsh southern light. The effects of this soft, dim lighting can be seen in later works such as the Mona Lisa. The use of specifically northern, rather than merely dim, light became more common during the Dutch Golden Age with painters such as Rembrandt and Johannes Vermeer. This can be seen in works such as Vermeer's The Milkmaid.

Artists without access to north-facing windows have developed alternative solutions to replicate some of the desirable properties of north light for painting. These include: Diffusing direct sunlight (usually from the south) by adding translucent covers to windows. These covers can be sheets, blinds or even tracing or baking paper; however, they have the potential to affect the light's colour if they are tinted off-white. This may or may not be a desirable effect, depending on the artist. Painting early or late in the day. While this may add warmth if done close to sunrise or sunset, it allows diffuse and dim light to be obtained from a south-facing window. Using artificial lighting such as a daylight lamp or a continuous, daylight-balanced studio light. Softboxes can be added to improve diffusion.
Artificial lights can be expensive, but allow for greater versatility, as several of them can be arranged to fill in certain shadows and create more customised lighting. This is important when trying to recreate Rembrandt lighting or in fine art reproduction. However, due to the inverse-square law, light falling on an object close to its source (e.g. from an artificial lamp) will decrease in brightness over a shorter distance than light falling on an object further away from its source (e.g. from a north-facing window). This is known as falloff, and it means that any artificial lighting used must be fairly powerful and positioned a long distance away from the subject in order to accurately imitate north light (a worked example follows below).

In photography

Photographers use north light for similar reasons as painters, but have access to electric lights as well. The use of softboxes and the bouncing of speedlights against umbrella-shaped diffusers strive to recreate the soft shadows that north light produces. These processes are popular in portrait photography. The Dutch Golden Age of painting has also left a legacy – since the early 20th century, photographers and filmmakers have used Rembrandt lighting to return some dramatic effect to diffused light, both in portraits and in cinema. This entails a key light illuminating the subject (such as a north-facing window) and a reflector facing the other side of the subject. While this lighting was not formally named in Rembrandt's time, it appears in some of his paintings. Like painters, photographers can modify south light or use artificial lighting as a replacement for north light if needed. However, they can also employ post-production software such as Adobe Lightroom and Photoshop to correct exposure and temperature. In addition, dedicated software such as Robin Myers Imaging EquaLight can adjust for lens and lighting falloff, which is especially useful for fine art photographers.

Effect on art appreciation

Before artificial lighting, the artist and their audience would both see art under natural light (either coming from the north or scattered in some way to reduce glare). However, since the 1980s museums and galleries have become reliant on some degree of electric lighting. This means that the audience may see art differently from how it was intended, and may also miss out on observing subtle changes in shadows and highlights as light moves throughout the day. Studies into art perception have found that the correlated colour temperature (CCT) of north light (~6000 K) may be too cool for optimum appreciation of most art. For example, a 2004 study found 3600 K to be the preferred temperature – a warm CCT which is commonly used in museums. A 2008 study by the Optical Society of America used different methodology to suggest 5100 K as the optimal temperature – although this is still slightly warmer than natural north light. However, the most comprehensive study on this topic was done by the University of Vienna in 2019. It divided appreciation further into beauty, emotional arousal and interest, and studied both portraits and abstract art. While the findings for portraits suggest a warmer CCT in line with previous studies, a cooler CCT was preferred for abstract art.

References Architectural design Architectural theory
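The falloff described above follows from illuminance scaling as 1/d². A short illustration (with made-up distances; not from the article) shows why a distant window gives flatter light across a subject than a nearby lamp:

    # Inverse-square falloff: illuminance is proportional to 1 / distance**2.
    # Compare the far edge of a subject about 1 m deep, lit from 2 m away
    # (a nearby lamp) versus from 5 m away (roughly a large distant window).
    def edge_ratio(near_distance, far_distance):
        """Illuminance at the subject's far edge relative to its near edge."""
        return (near_distance / far_distance) ** 2

    print(f"lamp at 2 m:   {edge_ratio(2, 3):.2f}")   # ~0.44, strong falloff
    print(f"window at 5 m: {edge_ratio(5, 6):.2f}")   # ~0.69, flatter light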
North light (architecture)
[ "Engineering" ]
2,058
[ "Design", "Architectural theory", "Architectural design", "Architecture" ]
27,838,362
https://en.wikipedia.org/wiki/EPANET
EPANET (Environmental Protection Agency Network Evaluation Tool) is a public domain, water distribution system modeling software package developed by the United States Environmental Protection Agency's (EPA) Water Supply and Water Resources Division. It performs extended-period simulation of hydraulic and water-quality behavior within pressurized pipe networks and is designed to be "a research tool that improves our understanding of the movement and fate of drinking-water constituents within distribution systems". EPANET first appeared in 1993.

EPANET 2 is available both as a standalone program and as an open-source toolkit (an application programming interface in C). Its computational engine is used by many software companies that have developed more powerful, proprietary packages, often GIS-centric. The EPANET ".inp" input file format, which represents network topology, water consumption, and control rules, is supported by many free and commercial modeling packages; it is therefore arguably the industry standard.

Features

EPANET provides an integrated environment for editing network input data, running hydraulic and water quality simulations, and viewing the results in a variety of formats. EPANET provides fully equipped, extended-period hydraulic analysis that can handle systems of any size. The package also supports the simulation of spatially and temporally varying water demand, constant or variable speed pumps, and minor head losses for bends and fittings. The modeling provides information such as flows in pipes, pressures at junctions, propagation of a contaminant, chlorine concentration, water age, and even alternative scenario analysis. This helps to compute pumping energy and cost, and to model various types of valves, including shutoff, check, pressure-regulating and flow-control valves.

EPANET's water quality modeling functionality allows users to analyze the movement of a reactive or non-reactive tracer material which spreads through the network over time, tracking the reactive material as it spreads and measuring the percentage of flow from the given nodes. The package employs a global reaction rate coefficient which can be modified on a pipe-by-pipe basis. Storage tanks can be modeled as complete-mix, plug-flow or two-compartment reactors. The visual network editor of EPANET simplifies the process of building piping network models and editing their properties. Various data reporting and visualization tools assist in analyzing the networks, including graphical views, tabular views, and special reports.

Hydraulic simulation

Headloss in pipe segments

EPANET's hydraulics engine computes headlosses along the pipes using one of three formulas:

- Hazen–Williams formula: used to model full flow under simplified conditions (turbulent flow, temperature around 60 degrees Fahrenheit, and viscosity similar to water); see the worked sketch below
- Darcy–Weisbach formula: used to model pressurized flow under a broader range of hydraulic conditions
- Chézy–Manning formula: used to model pressurized flow by using Chézy's roughness coefficients for Manning's equation

Since the pipe-segment headloss equation is used within the network solver, the selected formula applies to the entire model.

Head-flow curves of pumps

Within EPANET, pumps are modeled using a head-flow curve, which defines the relationship between the hydraulic head imparted to the system by the pump and the flow conveyed by the pump.
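For a sense of what the headloss computation involves, the Hazen–Williams formula in its standard SI form is h_f = 10.67 L Q^1.852 / (C^1.852 d^4.87). The sketch below is a standalone illustration of that empirical equation, not EPANET's internal code; the example pipe dimensions are invented.

    def hazen_williams_headloss(flow_m3s, length_m, diameter_m, c_factor):
        """Friction headloss (m) from the Hazen-Williams formula, SI units.

        flow_m3s   flow rate in cubic metres per second
        c_factor   Hazen-Williams roughness coefficient, tabulated per pipe
                   material (e.g. around 130-140 for new ductile iron)
        """
        return (10.67 * length_m * flow_m3s ** 1.852
                / (c_factor ** 1.852 * diameter_m ** 4.87))

    # 500 m of 300 mm pipe carrying 50 L/s with C = 130
    print(f"{hazen_williams_headloss(0.05, 500, 0.3, 130):.2f} m")  # ~0.89 m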
The model calculates the flow conveyed by the pump element for a given system head condition based on this curve. EPANET can also model a pump as a constant power input, effectively adding a given amount of energy to the system downstream of the pump element.

Network solver

The network hydraulics solver employed by EPANET uses the "gradient method" first proposed by Todini and Pilati, which is a variant of the Newton–Raphson method.

Water-quality simulation

EPANET includes the capability to model water age and predict the flow of non-reactive and, under simplified conditions, reactive materials. This capability is frequently used to predict chlorine residuals within water distribution systems. While the internal water quality simulation capability only evaluates decay or growth of a single constituent, an extension is available (EPANET-MSX) which allows modeling of interactions between constituents.

EPANET Toolkit

EPANET's computational engine is available for download as a separate dynamic link library for incorporation into other applications (a call sketch follows below). The source code for EPANET 2 is available on the EPA's EPANET website. In 2012 the EPANET toolkit, written in C, was rewritten in Java in a more object-oriented style. The Java code is available on GitHub: https://github.com/Baseform/Baseform-Epanet-Java-Library.

Compatibility

EPANET uses a binary file format, but also includes the capability for importing and exporting data in DXF, metafile, and ASCII file formats. EPANET's ASCII file format is called an input file within EPANET, and uses the file extension ".inp". The input file can include data describing network topology, water consumption, and control rules, and is supported by many free and commercial modeling packages. While EPANET is used as the computational engine for most water distribution system models, most models are developed and maintained in hydraulic modeling packages based on EPANET's computational engine. Some of the major hydraulic modeling packages are:

- InfoWorks WS Pro, InfoWater Pro, and InfoWater, developed by Innovyze (an Autodesk company)
- Qatium, developed by Qatium
- Fluidit Water, developed by Fluidit
- Pipe2000, developed by KYPipe, LLC
- MIKE URBAN, developed by DHI
- WaterCAD, WaterGEMS, HAMMER, and SewerCAD, developed by Bentley's Haestad Methods (Hydraulics & Hydrology) group
- WatDis, developed by Transparent Blue
- Hydronet, calculation software for pressure networks (aqueducts, fire prevention systems, gas networks) with BIM interoperability via IFC and GIS via Shapefile, developed by Newsoft
- WaterNAM, Water Network Analysis Model, developed by Streamstech Inc.
- Giswater, open source software developed by the Giswater Association
- GISpipe, software for the analysis, design, and operation of water distribution networks, integrated with GIS, developed by Jinbosoft
- Urbano Hydra, AutoCAD/Map3D/Civil3D application software for the hydraulic calculation, analysis, design and operation of water distribution networks, integrated with GIS and ready for BIM workflows, developed by StudioARS
- GeoSan, open source GIS software to manage water pipes and consumers, developed by NEXUS GeoEngenharia, available at www.softwarepublico.gov.br
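The toolkit workflow (open an .inp file, run the hydraulic solver, write a report) can be driven from Python through ctypes. The sketch below assumes the classic EPANET 2 shared library is installed and reachable; it uses the long-standing ENopen / ENsolveH / ENreport / ENclose entry points, while the library filename and input file paths are placeholders.

    import ctypes

    # Placeholder library name: "epanet2.dll" on Windows, "libepanet2.so" on Linux
    epanet = ctypes.cdll.LoadLibrary("libepanet2.so")

    # Classic EPANET 2 toolkit calls; each returns 0 on success
    err = epanet.ENopen(b"network.inp", b"network.rpt", b"")  # placeholder paths
    assert err == 0, f"ENopen failed with error code {err}"

    err = epanet.ENsolveH()    # run the complete extended-period hydraulic analysis
    assert err == 0, f"ENsolveH failed with error code {err}"

    epanet.ENreport()          # write simulation results to the report file
    epanet.ENclose()           # close the project and free toolkit memory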
- ESurvey Water, developed by https://www.esurveying.net; the output files from EPANET can be used to generate longitudinal sections (LS) and presentable final drawings

Most of these applications allow for multiple demand conditions and planning scenarios, offer various methods of integrating with other data sources an agency may already have in place (such as GIS), and support additional types of analyses not found in EPANET. ESurvey Water is designed to create auto-designed longitudinal profiles and to generate final outputs automatically after the hydraulic design is completed in EPANET or other software.

See also Hydraulics Pipe network analysis Storm Water Management Model Public domain software Water supply network

References External links EPA Webpage (Download from here) - Development Repository Epanet and Development discussion group Open Water Analytics EPANET support forum EPANET Community (Facebook group)

United States Environmental Protection Agency Civil engineering Computer programming Hydraulic engineering Environmental engineering Free simulation software Public-domain software with source code Pascal (programming language) software
EPANET
[ "Physics", "Chemistry", "Technology", "Engineering", "Environmental_science" ]
1,595
[ "Hydrology", "Chemical engineering", "Computer programming", "Physical systems", "Construction", "Software engineering", "Hydraulics", "Civil engineering", "Environmental engineering", "Computers", "Hydraulic engineering" ]
27,842,448
https://en.wikipedia.org/wiki/Enthalpy%E2%80%93entropy%20chart
An enthalpy–entropy chart, also known as the H–S chart or Mollier diagram, plots the total heat against entropy, describing the enthalpy of a thermodynamic system. A typical chart covers a pressure range of 0.01–1000 bar and temperatures up to 800 degrees Celsius. It shows enthalpy $H$ in terms of internal energy $U$, pressure $p$ and volume $V$ using the relationship $H=U+pV$ (or, in terms of specific enthalpy, specific internal energy and specific volume, $h=u+pv$).

History

The diagram was created in 1904, when Richard Mollier plotted the total heat $H$ against entropy $S$. At the 1923 Thermodynamics Conference held in Los Angeles it was decided to name, in his honor, as a "Mollier diagram" any thermodynamic diagram using the enthalpy $H$ as one of its axes.

Details

On the diagram, lines of constant pressure, constant temperature and constant volume are plotted, so in a two-phase region the lines of constant pressure and temperature coincide. Thus, coordinates on the diagram represent entropy and heat. The work done in a process on vapor cycles is represented by the length of the enthalpy change $\Delta h$, so it can be measured directly, whereas in a T–s diagram it has to be computed using the thermodynamic relationships between the thermodynamic properties.

In an isobaric process, the pressure remains constant, so the heat interaction is the change in enthalpy. In an isenthalpic process, the enthalpy is constant; a horizontal line in the diagram represents an isenthalpic process. A vertical line in the h–s chart represents an isentropic process. The process 3–4 in a Rankine cycle is isentropic when the steam turbine is ideal, so the expansion process in a turbine can be easily calculated using the h–s chart when the process is considered to be ideal (normally the case when first calculating enthalpies, entropies, and so on; the deviations from the ideal values can then be calculated by considering the isentropic efficiency of the steam turbine used).

Lines of constant dryness fraction $x$, sometimes called the quality, are drawn in the wet region, and lines of constant temperature are drawn in the superheated region. $x$ gives the fraction (by mass) of gaseous substance in the wet region, the remainder being colloidal liquid droplets. Above the heavy line, the temperature is above the boiling point, and the dry (superheated) substance is gas only. In general such charts do not show the values of specific volumes, nor do they show the enthalpies of saturated water at pressures of the order of those experienced in condensers in a thermal power station. Hence the chart is only useful for enthalpy changes in the expansion process of the steam cycle.

Applications and usage

It can be used in practical applications such as malting, to represent the grain–air–moisture system. The underlying property data for the Mollier diagram is identical to that of a psychrometric chart. At first inspection there may appear to be little resemblance between the charts, but if the user rotates a chart ninety degrees and looks at it in a mirror, the resemblance becomes apparent. The Mollier diagram coordinates are enthalpy $h$ and humidity ratio $x$. The enthalpy coordinate is skewed, and the lines of constant enthalpy are parallel and evenly spaced.

See also Thermodynamic diagrams Contour line Phase diagram

References Thermodynamics Entropy
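Coordinates for an h–s (Mollier) chart of steam can be generated programmatically. The sketch below assumes the third-party CoolProp property library is installed (pip install CoolProp) and uses its PropsSI interface to tabulate enthalpy and entropy along the saturated-vapor line; it is an added illustration, not part of any standard Mollier-chart tool.

    from CoolProp.CoolProp import PropsSI  # assumes CoolProp is installed

    # Saturated-steam line (vapor quality Q = 1) for an h-s chart, SI units
    for p_bar in (0.1, 1.0, 10.0, 100.0):
        p_pa = p_bar * 1e5                                    # bar -> Pa
        h = PropsSI("H", "P", p_pa, "Q", 1, "Water") / 1e3    # kJ/kg
        s = PropsSI("S", "P", p_pa, "Q", 1, "Water") / 1e3    # kJ/(kg K)
        print(f"p = {p_bar:6.1f} bar: h = {h:7.1f} kJ/kg, s = {s:6.3f} kJ/(kg K)")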
Enthalpy–entropy chart
[ "Physics", "Chemistry", "Mathematics" ]
740
[ "Thermodynamic properties", "Physical quantities", "Quantity", "Entropy", "Thermodynamics", "Asymmetry", "Wikipedia categories named after physical quantities", "Symmetry", "Dynamical systems" ]
27,847,679
https://en.wikipedia.org/wiki/Reactive%20nitrogen
Reactive nitrogen ("Nr"), also known as fixed nitrogen, refers to all forms of nitrogen present in the environment except for molecular nitrogen (N₂). While nitrogen is an essential element for life on Earth, molecular nitrogen is comparatively unreactive, and must be converted to other chemical forms via nitrogen fixation before it can be used for growth. Common Nr species include nitrogen oxides (NOₓ), ammonia (NH₃), nitrous oxide (N₂O), as well as the anion nitrate (NO₃⁻). Biologically, nitrogen is "fixed" mainly by soil microbes (e.g., bacteria and archaea) that convert N₂ mainly into NH₃, but also into other species. Legumes, plants of the Fabaceae family, are symbionts to some of these N₂-fixing microbes. NH₃ is a building block of amino acids and proteins, among other things essential for life. However, just over half of all reactive nitrogen entering the biosphere is attributable to anthropogenic activity such as industrial fertilizer production. While reactive nitrogen is eventually converted back into molecular nitrogen via denitrification, an excess of reactive nitrogen can lead to problems such as eutrophication in marine ecosystems. Reactive nitrogen compounds In the environmental context, reactive nitrogen compounds include the following classes: oxide gases: nitric oxide, nitrogen dioxide, nitrous oxide. Containing oxidized nitrogen, these are mainly the result of industrial processes and internal combustion engines. anions: nitrate, nitrite. Nitrate is a common component of fertilizers, e.g. ammonium nitrate. amine derivatives: ammonia and ammonium salts, urea. Containing reduced nitrogen, these compounds are components of fertilizers. All of these compounds enter into the nitrogen cycle. As a consequence, an excess of Nr can affect the environment relatively quickly. This also means that nitrogen-related problems need to be looked at in an integrated manner. See also Human impact on the nitrogen cycle References Citations General references Biogeochemical cycle Soil biology Nitrogen cycle Metabolism Biogeography Intensive farming
Reactive nitrogen
[ "Chemistry", "Biology" ]
426
[ "Eutrophication", "Biogeography", "Intensive farming", "Biogeochemical cycle", "Nitrogen cycle", "Biogeochemistry", "Soil biology", "Cellular processes", "Biochemistry", "Metabolism" ]
27,848,005
https://en.wikipedia.org/wiki/Spatial%20distribution
A spatial distribution in statistics is the arrangement of a phenomenon across the Earth's surface, and a graphical display of such an arrangement is an important tool in geographical and environmental statistics. A graphical display of a spatial distribution may summarize raw data directly or may reflect the outcome of a more sophisticated data analysis. Many different aspects of a phenomenon can be shown in a single graphical display by using a suitable choice of colours to represent differences. One example of such a display could be observations made to describe the geographic patterns of features, both physical and human, across the Earth. The information included could be where units of something are, how many units of the thing there are per unit of area, and how sparsely or densely packed the units are. Patterns of spatial distribution Usually, for a phenomenon that changes in space, there is a pattern that determines the location of the subject of the phenomenon and its intensity or size, in X and Y coordinates. The scientific challenge is trying to identify the variables that affect this pattern. The issue can be demonstrated with several simple examples: The spatial distribution of the human population The spatial distribution of the population and development are closely related to each other, especially in the context of sustainability. The challenges related to the spatial spread of a population include: rapid urbanization and population concentration, rural populations, urban management, poverty and housing, and displaced persons and refugees. Migration is a basic element in the spatial distribution of a population, and it may remain a key driver in the coming decades, especially as an element of urbanization in developing countries. The spatial distribution of economic activity in the world In a pair of studies from Brown University by urban economist J. Vernon Henderson, with co-authors Adam Storeygard and David Weil, the spatial distribution of economic activity in the world was examined by mapping the artificial lights at night from space over 250,000 grid cells, with an average area of 560 square kilometers each. They found that 50% of the variation in this activity can be explained through a system of physical geographic features. The spatial distribution of the seismic intensities of an earthquake The seismic intensities of an earthquake are distributed across space with an elementary regularity: in towns located close to the epicenter of the earthquake, high seismic intensities are observed, while low intensities are observed in settlements far from the epicenter. The distance of each settlement from the epicenter, marked with X and Y coordinates, is a variable that affects the seismic intensity observed there. But there are other variables that affect these intensities, such as the geological structure of each settlement, its topography, and more. All of these make the simple regularity of the effect of the distance variable more complex. If we succeed in identifying the contribution of most of the variables to the fact that intensity Z occurred in the settlement at XY and not in another, we will understand the pattern behind the organization of the seismic intensities in a specific earthquake, which will help in seismic risk surveys and their assessments. 
The spatial distribution of a population with health impairments related to vitamin A deficiency Vitamin A deficiency is a major public health problem in poor societies, and dietary consumption of foods rich in vitamin A is low in Ethiopia. In 2021, a study was published that evaluated the spatial distribution of dietary consumption of foods rich (or poor) in vitamin A among children aged 6–23 months in Ethiopia, together with the spatial variables affecting it. More examples Many police departments colour-code a city map based on crime statistics. The two-step floating catchment area (2SFCA) method has been used to prepare maps showing the relative accessibility of individuals (demand units) to physicians (supply units), using shading to show different degrees of accessibility (a minimal sketch of the method follows below). Notes Demographics Spatial analysis Statistical charts and diagrams
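As a rough illustration of the 2SFCA method referenced above, here is a minimal Python sketch. All locations, capacities, and the catchment radius D0 are invented for demonstration; real studies typically measure travel time over a road network rather than straight-line distance:

```python
import math

# Minimal two-step floating catchment area (2SFCA) sketch.
# Step 1: each supply site (e.g., a physician) gets a supply-to-demand
#         ratio over the population within its catchment.
# Step 2: each demand site sums the ratios of all supply sites it can reach.

supplies = [((0.0, 0.0), 10), ((5.0, 5.0), 4)]      # (location, capacity)
demands = [((1.0, 1.0), 500), ((4.0, 4.0), 800), ((9.0, 9.0), 300)]
D0 = 4.0                                             # catchment radius (invented)

def dist(a, b):
    """Straight-line distance; a stand-in for travel time."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Step 1: supply-to-demand ratio R_j for each supply site j.
ratios = []
for s_loc, capacity in supplies:
    pop = sum(p for d_loc, p in demands if dist(s_loc, d_loc) <= D0)
    ratios.append((s_loc, capacity / pop if pop else 0.0))

# Step 2: accessibility A_i for each demand site i.
for d_loc, _ in demands:
    a_i = sum(r for s_loc, r in ratios if dist(s_loc, d_loc) <= D0)
    print(d_loc, round(a_i, 5))
```

The resulting A_i values are what such maps shade: here the third demand site, outside every catchment, scores zero accessibility.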
Spatial distribution
[ "Physics" ]
768
[ "Spacetime", "Space", "Spatial analysis" ]
38,658,138
https://en.wikipedia.org/wiki/Continuous%20fiber%20reinforced%20thermoplastic
Continuous fiber reinforced thermoplastic (CFRTP) is a composite material that contains high-performance continuous fibers, such as carbon fiber, glass fiber, or aramid fiber, impregnated in a matrix of a thermoplastic such as polycarbonate. CFRTP can be produced in both tape and sheet formats that can later be shaped using thermoforming techniques. References Composite materials Fibre-reinforced polymers
Continuous fiber reinforced thermoplastic
[ "Physics" ]
89
[ "Materials stubs", "Materials", "Composite materials", "Matter" ]
38,662,839
https://en.wikipedia.org/wiki/Transmembrane%20Protein%20205
Transmembrane Protein 205 (TMEM205) is a protein encoded on chromosome 19 by the TMEM205 gene. Gene TMEM205 is located on the minus strand of chromosome 19, from base pair 11,453,452 to 11,456,981. In close proximity to TMEM205, CCDC159 is located slightly upstream and RAB3D slightly downstream in the genomic sequence. Homology TMEM205 has no known paralogs in the human genome. Using the UCSC Genome Browser BLAT tool against the human protein sequence, the closest relative of humans found to contain a paralog of the TMEM205 gene in its genome is the bushbaby. TMEM205 does, however, have a wide range of ortholog sequences. Protein The human homologue of TMEM205 is 189 amino acids long and has a molecular weight of 21.2 kDa. It contains four hydrophobic helical domains that are predicted to be transmembrane domains. Expression TMEM205 has been shown to be expressed in greater amounts in tissues with a secretory function. These tissues include the thyroid, adrenal gland, pancreas, and mammary tissues. The protein has also been shown to have increased expression in tumor tissues that have become resistant to platinum-based chemotherapy drugs. Function TMEM205 is thought to be a multi-pass transmembrane protein. It has been shown to be located at the plasma membrane in human tissues and translocates to the nuclear envelope when cells become resistant to cisplatin. It contains four domains predicted by TMHMM analysis to be transmembrane domains. Interacting proteins TMEM205 has been shown to be co-located with RAB8, a known GTPase involved in vesicular traffic. Clinical significance TMEM205 has been shown to be involved in cisplatin resistance. Cisplatin is a chemotherapeutic drug commonly used to treat solid malignancies such as carcinomas, sarcomas, and lymphomas. In addition to being involved in cisplatin resistance, there is growing evidence that the protein is also involved in the diseases thyroiditis and prostatitis. Notes Genes on human chromosome 19 Proteins
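TMHMM itself is a hidden-Markov-model tool and beyond a short example, but the underlying idea of flagging hydrophobic stretches suggestive of transmembrane helices can be illustrated with a simple Kyte–Doolittle sliding-window scan. The sequence fragment, window size, and threshold below are illustrative assumptions, not values from the TMEM205 literature:

```python
# Minimal Kyte-Doolittle hydropathy scan -- a simplified stand-in for
# tools like TMHMM, showing how hydrophobic stretches that may form
# transmembrane helices can be flagged in a protein sequence.

KD = {  # Kyte-Doolittle hydropathy values per residue
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def hydropathy_windows(seq: str, window: int = 19, threshold: float = 1.6):
    """Return (start, mean_score) for windows whose mean hydropathy
    exceeds the threshold -- candidate transmembrane segments."""
    hits = []
    for i in range(len(seq) - window + 1):
        score = sum(KD[aa] for aa in seq[i:i + window]) / window
        if score > threshold:
            hits.append((i, round(score, 2)))
    return hits

# Toy usage with a made-up fragment: a hydrophobic core flanked by
# polar ends, which the scan should pick out.
fragment = "MNSTDKE" + "LLIVALFAVGLISLLVAGI" + "RKDQES"
print(hydropathy_windows(fragment))
```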
Transmembrane Protein 205
[ "Chemistry" ]
485
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
38,664,703
https://en.wikipedia.org/wiki/Pioneer%20factor
Pioneer factors are transcription factors that can directly bind condensed chromatin. They can have positive and negative effects on transcription and are important in recruiting other transcription factors and histone modification enzymes, as well as in controlling DNA methylation. They were first discovered in 2002 as factors capable of binding to target sites on nucleosomal DNA in compacted chromatin and endowing competency for gene activity during hepatogenesis. Pioneer factors are involved in initiating cell differentiation and activation of cell-specific genes. This property is observed in histone fold-domain containing transcription factors (fork head box (FOX) and NF-Y) and other transcription factors that use zinc finger(s) for DNA binding (Groucho TLE, Gal4, and GATA). The eukaryotic cell condenses its genome into tightly packed chromatin and nucleosomes. This saves space in the nucleus for only actively transcribed genes and hides unnecessary or detrimental genes from being transcribed. Access to these condensed regions is gained through chromatin remodelling, either by balancing histone modifications or directly through pioneer factors, which can loosen the chromatin themselves or act as flags that recruit other factors. Pioneer factors are not necessarily required for assembly of the transcription apparatus and may dissociate after being replaced by other factors. Active rearrangement Pioneer factors can also actively affect transcription by directly opening up condensed chromatin in an ATP-independent process. This is a common trait of fork head box factors (which contain a winged helix DNA-binding domain that mimics the DNA-binding domain of the linker H1 histone) and NF-Y (whose NF-YB and NF-YC subunits contain histone-fold domains similar to those of the core histones H2A/H2B). Fork head box factors The similarity to histone H1 explains how fork head factors are able to bind chromatin: they interact with the major groove of the one available side of DNA wrapped around a nucleosome. Fork head domains also have a helix that confers sequence specificity, unlike the linker histone. The C terminus is associated with higher mobility around the nucleosome than the linker histone, displacing it and effectively rearranging nucleosomal landscapes. This active rearrangement of the nucleosomes allows other transcription factors to bind the available DNA. In thyroid cell differentiation, FoxE binds to compacted chromatin of the thyroid peroxidase promoter and opens it for NF1 binding. NF-Y NF-Y is a heterotrimeric complex composed of NF-YA, NF-YB, and NF-YC subunits. The key structural feature of the NF-Y/DNA complex is the minor-groove interaction of its DNA binding domain-containing subunit NF-YA, which induces an ~80° bend in the DNA. NF-YB and NF-YC interact with DNA through non-specific histone-fold domain–DNA contacts. NF-YA's unique DNA-binding mode and NF-YB/NF-YC's nucleosome-like, non-specific DNA binding impose sufficient spatial constraints to induce flanking nucleosomes to slide outward, making nearby recognition sites for other transcription factors accessible. Passive factors Pioneer factors can function passively, acting as a bookmark for the cell to recruit other transcription factors to specific genes in condensed chromatin. This can be important for priming the cell for a rapid response, as the enhancer is already bound by a pioneer transcription factor, giving the cell a head start towards assembling the transcription preinitiation complex. 
Hormone responses, such as that of the estrogen receptor, are often quickly induced in the cell using this priming method. Another form of priming is when an enhancer is simultaneously bound by activating and repressing pioneer factors. This balance can be tipped by dissociation of one of the factors. In hepatic cell differentiation, the activating pioneer factor FOXA1 recruits a repressor, Grg3, that prevents transcription until the repressor is down-regulated later in the differentiation process. In a direct role, pioneer factors can bind an enhancer and recruit an activation complex that modifies the chromatin directly. The change in the chromatin decreases the affinity of the pioneer factor, such that it is replaced by a transcription factor with higher affinity. This mechanism for switching a gene on was observed with the glucocorticoid receptor recruiting modification factors that then modify the site to bind activated estrogen receptor, which was coined a "bait and switch" mechanism. Epigenetic effects Pioneer factors can exhibit their greatest range of effects on transcription through the modulation of epigenetic factors, by recruiting activating or repressing histone modification enzymes and by controlling CpG methylation through protection of specific cytosine residues. This has effects on controlling the timing of transcription during cell differentiation processes. Histone modification Histone modification is a well-studied mechanism for transiently adjusting chromatin density. Pioneer factors can play a role in this by binding specific enhancers and flagging histone modification enzymes to that specific gene. Repressive pioneer factors can inhibit transcription by recruiting factors that modify histones to further tighten the chromatin. This is important to limit gene expression to specific cell types and is lifted only when cell differentiation begins. FoxD3 has been identified as a repressor of both the B-cell and melanocytic cell differentiation pathways, maintaining repressive histone modifications where bound that have to be overcome to start differentiation. Pioneer factors can also be associated with recruiting transcription-activating histone modifications. Enzymes that modify H3K4 with mono- and di-methylation are associated with increasing transcription and have been shown to bind pioneer factors. In B-cell differentiation, PU.1 is necessary to mark specific histones with activating H3K4me1 modifications that differentiate hematopoietic stem cells into either the B-cell or macrophage lineage. FoxA1 binding induces H3K4me2 during neuronal differentiation of pluripotent stem cells, as well as the loss of DNA methylation. SOX9 recruits the histone modification enzymes MLL3 and MLL4 to deposit H3K4me1 prior to the opening of enhancers in the developing hair follicle and in basal cell carcinoma. DNA methylation Pioneer factors can also affect transcription and differentiation through the control of DNA methylation. Pioneer factors that bind to CpG islands and cytosine residues block access by methyltransferases. Many eukaryotic cells have CpG islands in their promoters that can be modified by methylation, with adverse effects on their ability to control transcription. This phenomenon is also present in promoters without CpG islands, where single cytosine residues are protected from methylation until further cell differentiation. 
An example is FoxD3 preventing methylation of a cytosine residue in the Alb1 enhancer, acting as a placeholder for FoxA1 later in hepatic differentiation, as well as in CpG islands of genes in chronic lymphocytic leukemia. For stable control of the methylation state, the cytosine residues remain covered during mitosis, unlike with most other transcription factors, to prevent methylation. Studies have shown that during mitosis 15% of all interphase FoxA1 binding sites remain bound. The protection of cytosine methylation can be quickly removed, allowing for rapid induction when a signal is present. Other pioneer factors A well-studied pioneer factor family is the Groucho-related (Gro/TLE/Grg) transcription factors, which often have a negative effect on transcription. Their chromatin binding domains can span 3–4 nucleosomes. These large domains are scaffolds for further protein interactions and also modify the chromatin for other pioneer factors, such as FoxA1, which has been shown to bind Grg3. Other pioneer factors include transcription factors with zinc finger DNA-binding domains, such as the GATA family and the glucocorticoid receptor. The zinc finger domains do not appear to bind nucleosomes well and can be displaced by FOX factors. In the skin epidermis, the SOX family transcription factor SOX9 also behaves as a pioneer factor that governs hair follicle cell fate and can reprogram epidermal stem cells to a hair follicle fate. Role in cancer The ability of pioneer factors to respond to extracellular signals and differentiate cell type has been studied as a potential component of hormone-dependent cancers. Hormones such as estrogen and IGF-I have been shown to increase pioneer factor concentration, leading to a change in transcription. Known pioneer factors such as FoxA1, PBX1, TLE, AP2, GATA factors 2/3/4, and PU.1 have been associated with hormone-dependent cancer. FoxA1 is necessary for estrogen- and androgen-mediated hepatocarcinogenesis and is a defining gene for ER+ luminal breast cancer, as is another pioneer factor, GATA3. FOXA1 in particular is expressed in 90% of breast cancer metastases and 89% of metastatic prostate cancers. In the breast cancer cell line MCF-7, FoxA1 was found to be bound to 50% of estrogen receptor binding sites independent of the presence of estrogen. High expression of pioneer factors is associated with poor prognosis, with the exception of breast cancer, where FoxA1 is associated with a better outcome. The correlation between pioneer factors and cancer has led to prospective therapeutic targeting. In knockdown studies in the MCF-7 breast cancer cell line, decreasing the pioneer factors FoxA1 and AP2 decreased ER signalling. Other fork head proteins have been associated with cancer, including FoxO3 and FoxM, which repress the cell survival pathways Ras and PI3K/AKT/IKK. Drugs such as paclitaxel, imatinib, and doxorubicin, which activate FoxO3a or its targets, are in use. Modulating related factors with pioneer activity is a topic of interest still in its early stages, as knocking down pioneer factors may have toxic effects through alteration of the lineage pathways of healthy cells. References Transcription factors Protein families Gene expression
Pioneer factor
[ "Chemistry", "Biology" ]
2,128
[ "Transcription factors", "Gene expression", "Protein classification", "Signal transduction", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Protein families", "Induced stem cells" ]