Dataset fields: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list)
15,828,681
https://en.wikipedia.org/wiki/Dimroth%20rearrangement
The Dimroth rearrangement is a rearrangement reaction taking place with certain 1,2,3-triazoles in which endocyclic and exocyclic nitrogen atoms switch places. This organic reaction was discovered in 1909 by Otto Dimroth. With R a phenyl group, the reaction takes place in boiling pyridine over 24 hours. This type of triazole has an amino group in the 5 position. After ring-opening to a diazo intermediate, C-C bond rotation is possible together with 1,3-migration of a proton. Certain 1-alkyl-2-iminopyrimidines also display this type of rearrangement. Here the first step is an addition reaction of water, followed by ring-opening of the hemiaminal to the aminoaldehyde and then ring closure. A known drug example of the Dimroth rearrangement is found in the synthesis of bemitradine [88133-11-3]. References Rearrangement reactions Name reactions
Dimroth rearrangement
[ "Chemistry" ]
214
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
15,831,300
https://en.wikipedia.org/wiki/Tellegen%27s%20theorem
Tellegen's theorem is one of the most powerful theorems in network theory. Most of the energy distribution theorems and extremum principles in network theory can be derived from it. It was published in 1952 by Bernard Tellegen. Fundamentally, Tellegen's theorem gives a simple relation between magnitudes that satisfy Kirchhoff's laws of electrical circuit theory. The Tellegen theorem is applicable to a multitude of network systems. The basic assumptions for the systems are the conservation of flow of extensive quantities (Kirchhoff's current law, KCL) and the uniqueness of the potentials at the network nodes (Kirchhoff's voltage law, KVL). The Tellegen theorem provides a useful tool to analyze complex network systems including electrical circuits, biological and metabolic networks, pipeline transport networks, and chemical process networks. The theorem Consider an arbitrary lumped network that has $b$ branches and $n_t$ nodes. In an electrical network, the branches are two-terminal components and the nodes are points of interconnection. Suppose that to each branch we assign arbitrarily a branch potential difference $W_k$ and a branch current $F_k$ for $k = 1, 2, \dots, b$, and suppose that they are measured with respect to arbitrarily picked associated reference directions. If the branch potential differences $W_1, W_2, \dots, W_b$ satisfy all the constraints imposed by KVL and if the branch currents $F_1, F_2, \dots, F_b$ satisfy all the constraints imposed by KCL, then $\sum_{k=1}^{b} W_k F_k = 0$. Tellegen's theorem is extremely general; it is valid for any lumped network that contains any elements, linear or nonlinear, passive or active, time-varying or time-invariant. The generality is extended when linear operations $\Lambda_1$ and $\Lambda_2$ are applied to the set of potential differences and to the set of branch currents (respectively), since linear operations don't affect KVL and KCL. For instance, the linear operation may be the average or the Laplace transform. More generally, operators that preserve KVL are called Kirchhoff voltage operators, operators that preserve KCL are called Kirchhoff current operators, and operators that preserve both are simply called Kirchhoff operators. These operators need not necessarily be linear for Tellegen's theorem to hold. The set of currents can also be sampled at a different time from the set of potential differences, since KVL and KCL are true at all instants of time. Another extension is when the set of potential differences is from one network and the set of currents is from an entirely different network; so long as the two networks have the same topology (same incidence matrix), Tellegen's theorem remains true. This extension of Tellegen's theorem leads to many theorems relating to two-port networks. Definitions We need to introduce a few necessary network definitions to provide a compact proof. Incidence matrix: The matrix $\mathbf{A_a}$ is called the node-to-branch incidence matrix, its elements being $a_{ij} = 1$ if the flow in branch $j$ leaves node $i$, $a_{ij} = -1$ if the flow in branch $j$ enters node $i$, and $a_{ij} = 0$ if branch $j$ is not incident with node $i$. A reference or datum node $P_0$ is introduced to represent the environment and connected to all dynamic nodes and terminals. The matrix $\mathbf{A}$, where the row that contains the elements of the reference node $P_0$ is eliminated, is called the reduced incidence matrix. The conservation laws (KCL) in vector-matrix form: $\mathbf{A}\mathbf{F} = \mathbf{0}$. The uniqueness condition for the potentials (KVL) in vector-matrix form: $\mathbf{W} = \mathbf{A}^{\mathsf{T}}\mathbf{w}$, where $\mathbf{w}$ contains the absolute potentials at the nodes with respect to the reference node $P_0$. Proof Using KVL: $\mathbf{W}^{\mathsf{T}}\mathbf{F} = (\mathbf{A}^{\mathsf{T}}\mathbf{w})^{\mathsf{T}}\mathbf{F} = \mathbf{w}^{\mathsf{T}}\mathbf{A}\mathbf{F} = 0$, because $\mathbf{A}\mathbf{F} = \mathbf{0}$ by KCL. So: $\sum_{k=1}^{b} W_k F_k = \mathbf{W}^{\mathsf{T}}\mathbf{F} = 0$. Applications Network analogs have been constructed for a wide variety of physical systems, and have proven extremely useful in analyzing their dynamic behavior. 
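As a numerical sanity check of the theorem and the matrix proof above, the following short Python sketch (NumPy assumed; the 4-node, 6-branch topology, node potentials and current coefficients are made up for illustration) builds a reduced incidence matrix, derives branch potential differences from arbitrary node potentials (KVL), picks branch currents from the null space of the incidence matrix (KCL), and verifies that the products $W_k F_k$ sum to zero.

```python
import numpy as np

# Hypothetical 4-node, 6-branch network (complete graph on 4 nodes), node 0 as reference.
# Each branch is (tail, head); the orientation defines the associated reference direction.
branches = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
n_nodes, b = 4, len(branches)

# Node-to-branch incidence matrix A_a: +1 where a branch leaves a node, -1 where it enters.
A_a = np.zeros((n_nodes, b))
for j, (tail, head) in enumerate(branches):
    A_a[tail, j] = 1.0
    A_a[head, j] = -1.0

A = A_a[1:, :]                        # reduced incidence matrix (reference-node row removed)

# KVL: branch potential differences from arbitrary node potentials w (reference node at 0).
w = np.array([2.0, -1.5, 0.7])
W = A.T @ w

# KCL: branch currents are any vector in the null space of A (so that A F = 0).
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[np.linalg.matrix_rank(A):].T
F = null_basis @ np.array([1.3, -0.4, 2.2])   # arbitrary combination of null-space vectors

print("A F          =", np.round(A @ F, 12))          # ~0: KCL holds
print("sum W_k F_k  =", round(float(W @ F), 12))      # ~0: Tellegen's theorem
```

Because the node potentials and the branch currents are chosen independently of each other, the same check also illustrates the extension noted above: the two sets need not come from the same operating condition of the network, only from the same topology.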
The classical application area for network theory and Tellegen's theorem is electrical circuit theory. It is mainly used to design filters in signal processing applications. A more recent application of Tellegen's theorem is in the area of chemical and biological processes. The assumptions for electrical circuits (Kirchhoff laws) are generalized for dynamic systems obeying the laws of irreversible thermodynamics. Topology and structure of reaction networks (reaction mechanisms, metabolic networks) can be analyzed using the Tellegen theorem. Another application of Tellegen's theorem is to determine stability and optimality of complex process systems such as chemical plants or oil production systems. The Tellegen theorem can be formulated for process systems using process nodes, terminals, flow connections and allowing sinks and sources for production or destruction of extensive quantities. In such a formulation, the theorem relates the production terms, the terminal connections, and the dynamic storage terms for the extensive variables. References In-line references General references Basic Circuit Theory by C.A. Desoer and E.S. Kuh, McGraw-Hill, New York, 1969 "Tellegen's Theorem and Thermodynamic Inequalities", G.F. Oster and C.A. Desoer, J. Theor. Biol. 32 (1971), 219–241 "Network Methods in Models of Production", Donald Watson, Networks, 10 (1980), 1–15 External links Circuit example for Tellegen's theorem G.F. Oster and C.A. Desoer, Tellegen's Theorem and Thermodynamic Inequalities Network thermodynamics Circuit theorems Eponymous theorems of physics
Tellegen's theorem
[ "Physics" ]
1,063
[ "Circuit theorems", "Eponymous theorems of physics", "Equations of physics", "Physics theorems" ]
15,832,717
https://en.wikipedia.org/wiki/Computational%20statistics
Computational statistics, or statistical computing, is the study of the intersection of statistics and computer science, and refers to the statistical methods that are enabled by using computational methods. It is the area of computational science (or scientific computing) specific to the mathematical science of statistics. This area is developing rapidly. The view that the broader concept of computing must be taught as part of general statistical education is gaining momentum. As in traditional statistics the goal is to transform raw data into knowledge, but the focus lies on computer-intensive statistical methods, such as cases with very large sample size and non-homogeneous data sets. The terms 'computational statistics' and 'statistical computing' are often used interchangeably, although Carlo Lauro (a former president of the International Association for Statistical Computing) proposed making a distinction, defining 'statistical computing' as "the application of computer science to statistics", and 'computational statistics' as "aiming at the design of algorithm for implementing statistical methods on computers, including the ones unthinkable before the computer age (e.g. bootstrap, simulation), as well as to cope with analytically intractable problems" [sic]. The term 'computational statistics' may also be used to refer to computationally intensive statistical methods including resampling methods, Markov chain Monte Carlo methods, local regression, kernel density estimation, artificial neural networks and generalized additive models. History Though computational statistics is widely used today, it actually has a relatively short history of acceptance in the statistics community. For the most part, the founders of the field of statistics relied on mathematics and asymptotic approximations in the development of computational statistical methodology. In 1908, William Sealy Gosset performed his now well-known Monte Carlo simulation, which led to the discovery of the Student's t-distribution. With the help of computational methods, he also produced plots of the empirical distributions overlaid on the corresponding theoretical distributions. The computer has revolutionized simulation and has made the replication of Gosset's experiment little more than an exercise. Later on, scientists put forward computational ways of generating pseudo-random deviates, developed methods to convert uniform deviates into other distributional forms using the inverse cumulative distribution function or acceptance-rejection methods, and developed state-space methodology for Markov chain Monte Carlo. One of the first efforts to generate random digits in a fully automated way was undertaken by the RAND Corporation in 1947. The tables produced were published as a book in 1955, and also as a series of punch cards. By the mid-1950s, several articles and patents for devices had been proposed for random number generators. The development of these devices was motivated by the need to use random digits to perform simulations and other fundamental components in statistical analysis. One of the most well known of such devices is ERNIE, which produces random numbers that determine the winners of Premium Bonds, a lottery bond issued in the United Kingdom. In 1958, John Tukey's jackknife was developed as a method to reduce the bias of parameter estimates in samples under nonstandard conditions. This requires computers for practical implementations. 
To this point, computers have made many tedious statistical studies feasible. Methods Maximum likelihood estimation Maximum likelihood estimation is used to estimate the parameters of an assumed probability distribution, given some observed data. It is achieved by maximizing a likelihood function so that the observed data is most probable under the assumed statistical model. Monte Carlo method Monte Carlo is a statistical method that relies on repeated random sampling to obtain numerical results. The concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution. Markov chain Monte Carlo The Markov chain Monte Carlo method creates samples from a continuous random variable, with probability density proportional to a known function. These samples can be used to evaluate an integral over that variable, such as its expected value or variance. The more steps are included, the more closely the distribution of the sample matches the actual desired distribution. Bootstrapping The bootstrap is a resampling technique used to generate samples from an empirical probability distribution defined by an original sample of the population. It can be used to find a bootstrapped estimator of a population parameter. It can also be used to estimate the standard error of an estimator as well as to generate bootstrapped confidence intervals. The jackknife is a related technique. Applications Computational biology Computational linguistics Computational physics Computational mathematics Computational materials science Machine Learning Computational statistics journals Communications in Statistics - Simulation and Computation Computational Statistics Computational Statistics & Data Analysis Journal of Computational and Graphical Statistics Journal of Statistical Computation and Simulation Journal of Statistical Software The R Journal The Stata Journal Statistics and Computing Wiley Interdisciplinary Reviews: Computational Statistics Associations International Association for Statistical Computing See also Algorithms for statistical classification Data science Statistical methods in artificial intelligence Free statistical software List of statistical algorithms List of statistical packages Machine learning References Further reading Articles Books External links Associations International Association for Statistical Computing Statistical Computing section of the American Statistical Association Journals Computational Statistics & Data Analysis Journal of Computational & Graphical Statistics Statistics and Computing Numerical analysis Computational fields of study Mathematics of computing
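As a minimal illustration of the bootstrap described in the Methods section above (a sketch only; the data values are made up and NumPy is assumed to be available), the following Python snippet resamples an original sample with replacement to approximate the standard error of the sample mean and a simple percentile confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.3, 4.4, 6.2, 5.0])  # made-up observations

n_boot = 10_000
# Resample the original sample with replacement and recompute the statistic each time.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])

print("sample mean          :", sample.mean())
print("bootstrap std. error :", boot_means.std(ddof=1))
print("95% percentile CI    :", np.percentile(boot_means, [2.5, 97.5]))
```

The same resampling loop can be reused for other statistics (medians, regression coefficients), which is what makes the bootstrap a convenient computer-intensive alternative to analytic standard-error formulas.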
Computational statistics
[ "Mathematics", "Technology" ]
1,073
[ "Computational fields of study", "Computational mathematics", "Mathematical relations", "Computing and society", "Numerical analysis", "Computational statistics", "Approximations" ]
13,160,311
https://en.wikipedia.org/wiki/Airborne%20Real-time%20Cueing%20Hyperspectral%20Enhanced%20Reconnaissance
Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance, also known by the acronym ARCHER, is an aerial imaging system that produces ground images far more detailed than plain sight or ordinary aerial photography can. It is the most sophisticated unclassified hyperspectral imaging system available, according to U.S. Government officials. ARCHER can automatically scan detailed imaging for a given signature of the object being sought (such as a missing aircraft), for abnormalities in the surrounding area, or for changes from previously recorded spectral signatures. It has direct applications for search and rescue, counterdrug, disaster relief and impact assessment, and homeland security, and has been deployed by the Civil Air Patrol (CAP) in the US on the Australian-built Gippsland GA8 Airvan fixed-wing aircraft. CAP, the civilian auxiliary of the United States Air Force, is a volunteer education and public-service non-profit organization that conducts aircraft search and rescue in the US. Overview ARCHER is a daytime non-invasive technology, which works by analyzing an object's reflected light. It cannot detect objects at night, underwater, under dense cover, underground, under snow or inside buildings. The system uses a special camera facing down through a quartz glass portal in the belly of the aircraft, which is typically flown at a standard mission altitude of 2,500 feet (about 760 m) above ground level and 100 knots (50 meters/second) ground speed. The system software was developed by Space Computer Corporation of Los Angeles and the system hardware is supplied by NovaSol Corp. of Honolulu, Hawaii specifically for CAP. The ARCHER system is based on hyperspectral technology research and testing previously undertaken by the United States Naval Research Laboratory (NRL) and Air Force Research Laboratory (AFRL). CAP developed ARCHER in cooperation with the NRL, AFRL and the United States Coast Guard Research & Development Center in the largest interagency project CAP has undertaken in its 74-year history. Since 2003, almost US$5 million authorized under the 2002 Defense Appropriations Act has been spent on development and deployment. CAP reported completing the initial deployment of 16 aircraft throughout the U.S. and training over 100 operators, but had only used the system on a few search and rescue missions, and had not credited it with being the first to find any wreckage. In searches in Georgia and Maryland during 2007, ARCHER located the aircraft wreckage, but both accidents had no survivors, according to Col. Drew Alexa, director of advanced technology and the ARCHER program manager at CAP. An ARCHER-equipped aircraft from the Utah Wing of the Civil Air Patrol was used in the search for adventurer Steve Fossett in September 2007. ARCHER did not locate Mr. Fossett, but was instrumental in uncovering eight previously uncharted crash sites in the high desert area of Nevada, some decades old. Col. Alexa described the system to the press in 2007: "The human eye sees basically three bands of light. The ARCHER sensor sees 50. It can see things that are anomalous in the vegetation such as metal or something from an airplane wreckage." Major Cynthia Ryan of the Nevada Civil Air Patrol, while also describing the system to the press in 2007, stated, "ARCHER is essentially something used by the geosciences. 
It's pretty sophisticated stuff … beyond what the human eye can generally see," she elaborated further, "It might see boulders, it might see trees, it might see mountains, sagebrush, whatever, but it goes 'not that' or 'yes, that'. The amazing part of this is that it can see as little as 10 per cent of the target, and extrapolate from there." In addition to the primary search and rescue mission, CAP has tested additional uses for ARCHER. For example, an ARCHER-equipped CAP GA8 was used in a pilot project in Missouri in August 2005 to assess the suitability of the system for tracking hazardous material releases into the environment, and one was deployed to track oil spills in the aftermath of Hurricane Rita in Texas during September 2005. The ARCHER system again proved its usefulness in October 2006, when it found the wreckage of a flight originating in Missouri in Antlers, Oklahoma. The National Transportation Safety Board was extremely pleased with the data ARCHER provided, which was later used to locate aircraft debris spread over miles of rough, wooded terrain. In July 2007, the ARCHER system identified a flood-borne oil spill originating in a Kansas oil refinery that extended downstream and had invaded previously unsuspected reservoir areas. The client agencies (EPA, Coast Guard, and other federal and state agencies) found the data essential to quick remediation. In September 2008, a Civil Air Patrol GA-8 from the Texas Wing searched for a missing aircraft from Arkansas. It was found in Oklahoma, identified simultaneously by ground searchers and the overflying ARCHER system. Rather than a direct find, this was a validation of the system's accuracy and efficacy. In the subsequent recovery, it was found that ARCHER had plotted the debris area with great accuracy. Technical description The major ARCHER subsystem components include: an advanced hyperspectral imaging (HSI) system with a resolution of one square meter per pixel; a panchromatic high-resolution imaging (HRI) camera with a resolution of about 3 inches (8 cm) per pixel; and a global positioning system (GPS) integrated with an inertial navigation system (INS). Hyperspectral imager The passive hyperspectral imaging spectroscopy remote sensor observes a target in multi-spectral bands. The HSI camera separates the image spectra into 52 "bins" from 500 nanometers (nm) wavelength at the blue end of the visible spectrum to 1100 nm in the infrared, giving the camera a spectral resolution of 11.5 nm. Although ARCHER records data in all 52 bands, the computational algorithms only use the first 40 bands, from 500 nm to 960 nm, because the bands above 960 nm are too noisy to be useful. For comparison, the normal human eye responds to wavelengths from approximately 400 to 700 nm, and is trichromatic, meaning the eye's cone cells only sense light in three spectral bands. As the ARCHER aircraft flies over a search area, reflected sunlight is collected by the HSI camera lens. The collected light passes through a set of lenses that focus the light to form an image of the ground. The imaging system uses a pushbroom approach to image acquisition. With the pushbroom approach, the focusing slit reduces the image height to the equivalent of one vertical pixel, creating a horizontal line image. The horizontal line image is then projected onto a diffraction grating, which is a very finely etched reflecting surface that disperses light into its spectra. 
The diffraction grating is specially constructed and positioned to create a two-dimensional (2D) spectrum image from the horizontal line image. The spectra are projected vertically, i.e., perpendicular to the line image, by the design and arrangement of the diffraction grating. The 2D spectrum image projects onto a charge-coupled device (CCD) two-dimensional image sensor, which is aligned so that the horizontal pixels are parallel to the image's horizontal. As a result, the vertical pixels are coincident to the spectra produced from the diffraction grating. Each column of pixels receives the spectrum of one horizontal pixel from the original image. The arrangement of vertical pixel sensors in the CCD divides the spectrum into distinct and non-overlapping intervals. The CCD output consists of electrical signals for 52 spectral bands for each of 504 horizontal image pixels. The on-board computer records the CCD output signal at a frame rate of sixty times each second. At an aircraft altitude of 2,500 ft AGL and a speed of 100 knots, a 60 Hz frame rate equates to a ground image resolution of approximately one square meter per pixel. Thus, every frame captured from the CCD contains the spectral data for a ground swath that is approximately one meter long and 500 meters wide. High-resolution imager A high-resolution imaging (HRI) black-and-white, or panchromatic, camera is mounted adjacent to the HSI camera to enable both cameras to capture the same reflected light. The HRI camera uses a pushbroom approach just like the HSI camera, with a similar lens and slit arrangement to limit the incoming light to a thin, wide beam. However, the HRI camera does not have a diffraction grating to disperse the incoming reflected light. Instead, the light is directed to a wider CCD to capture more image data. Because it captures a single line of the ground image per frame, it is called a line scan camera. The HRI CCD is 6,144 pixels wide and one pixel high. It operates at a frame rate of 720 Hz. At ARCHER search speed and altitude (100 knots over the ground at 2,500 ft AGL) each pixel in the black-and-white image represents a 3 inch by 3 inch area of the ground. This high resolution adds the capability to identify some objects. Processing A monitor in the cockpit displays detailed images in real time, and the system also logs the image and Global Positioning System data at a rate of 30 gigabytes (GB) per hour for later analysis. The on-board data processing system performs numerous real-time processing functions including data acquisition and recording, raw data correction, target detection, cueing and chipping, precision image geo-registration, and display and dissemination of image products and target cue information. ARCHER has three methods for locating targets: signature matching, in which reflected light is matched to spectral signatures; anomaly detection, which uses a statistical model of the pixels in the image to determine the probability that a pixel does not match the profile; and change detection, which executes a pixel-by-pixel comparison of the current image against ground conditions that were obtained in a previous mission over the same area. In change detection, scene changes are identified, and new, moved or departed targets are highlighted for evaluation. In spectral signature matching, the system can be programmed with the parameters of a missing aircraft, such as paint colors, to alert the operators of possible wreckage. 
It can also be used to look for specific materials, such as petroleum products or other chemicals released into the environment, or even ordinary items like commonly available blue polyethylene tarpaulins. In an impact assessment role, information on the location of blue tarps used to temporarily repair buildings damaged in a storm can help direct disaster relief efforts; in a counterdrug role, a blue tarp located in a remote area could be associated with illegal activity. References External links NovaSol Corp Space Computer Corporation Civil Air Patrol Spectroscopy Earth observation remote sensors
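A rough consistency check of the resolution figures in the technical description above can be done with a few lines of arithmetic (a sketch only; the 100-knot ground speed and the 60 Hz and 720 Hz frame rates are taken from the text, and the knot-to-metre conversion is the standard one): the along-track pixel length is simply ground speed divided by frame rate.

```python
KNOT_TO_MPS = 0.514444                     # metres per second per knot
ground_speed = 100 * KNOT_TO_MPS           # ~51.4 m/s at ARCHER search speed

hsi_frame_rate = 60.0                      # Hz, hyperspectral imager
hri_frame_rate = 720.0                     # Hz, panchromatic line-scan camera

# Along-track distance covered between successive frames = ground speed / frame rate.
hsi_pixel_length = ground_speed / hsi_frame_rate
hri_pixel_length = ground_speed / hri_frame_rate

print(f"HSI along-track pixel length: {hsi_pixel_length:.2f} m   (text: ~1 m)")
print(f"HRI along-track pixel length: {hri_pixel_length * 100:.1f} cm (text: ~3 in = 7.6 cm)")
```

The results (roughly 0.86 m and 7 cm) are consistent with the approximately one-metre and three-inch ground resolutions quoted for the HSI and HRI cameras.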
Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance
[ "Physics", "Chemistry" ]
2,169
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
13,163,358
https://en.wikipedia.org/wiki/Whole%20number%20rule
In chemistry, the whole number rule states that the masses of the isotopes are whole number multiples of the mass of the hydrogen atom. The rule is a modified version of Prout's hypothesis proposed in 1815, to the effect that atomic weights are multiples of the weight of the hydrogen atom. It is also known as the Aston whole number rule after Francis W. Aston who was awarded the Nobel Prize in Chemistry in 1922 "for his discovery, by means of his mass spectrograph, of isotopes, in a large number of non-radioactive elements, and for his enunciation of the whole-number rule." Law of definite proportions The law of definite proportions was formulated by Joseph Proust around 1800 and states that all samples of a chemical compound will have the same elemental composition by mass. The atomic theory of John Dalton expanded this concept and explained matter as consisting of discrete atoms with one kind of atom for each element combined in fixed proportions to form compounds. Prout's hypothesis In 1815, William Prout reported on his observation that the atomic weights of the elements were whole multiples of the atomic weight of hydrogen. He then hypothesized that the hydrogen atom was the fundamental object and that the other elements were a combination of different numbers of hydrogen atoms. Aston's discovery of isotopes In 1920, Francis W. Aston demonstrated through the use of a mass spectrometer that apparent deviations from Prout's hypothesis are predominantly due to the existence of isotopes. For example, Aston discovered that neon has two isotopes with masses very close to 20 and 22 as per the whole number rule, and proposed that the non-integer value 20.2 for the atomic weight of neon is due to the fact that natural neon is a mixture of about 90% neon-20 and 10% neon-22. A secondary cause of deviations is the binding energy or mass defect of the individual isotopes. Discovery of the neutron During the 1920s, it was thought that the atomic nucleus was made of protons and electrons, which would account for the disparity between the atomic number of an atom and its atomic mass. In 1932, James Chadwick discovered an uncharged particle of approximately the same mass as the proton, which he called the neutron. The fact that the atomic nucleus is composed of protons and neutrons was rapidly accepted and Chadwick was awarded the Nobel Prize in Physics in 1935 for his discovery. The modern form of the whole number rule is that the atomic mass of a given elemental isotope is approximately the mass number (number of protons plus neutrons) times an atomic mass unit (approximate mass of a proton, neutron, or hydrogen-1 atom). This rule predicts the atomic mass of nuclides and isotopes with an error of at most 1%, with most of the error explained by the mass deficit caused by nuclear binding energy. References Further reading External links 1922 Nobel Prize Presentation Speech Mass spectrometry Periodic table
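The neon example above can be made concrete with a one-line calculation (a sketch; the isotope masses of 20 and 22 and the roughly 90%/10% abundances are the rounded figures from the text): the abundance-weighted average of the isotope masses reproduces the non-integer atomic weight that originally seemed to violate the rule.

```python
# Aston's neon example: atomic weight as an abundance-weighted average of isotope masses.
isotopes = {20: 0.90, 22: 0.10}   # mass number -> approximate natural abundance

atomic_weight = sum(mass * abundance for mass, abundance in isotopes.items())
print(atomic_weight)   # 20.2, neon's non-integer atomic weight
```

Each isotope individually still obeys the whole number rule to within the roughly 1% error attributed to nuclear binding energy.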
Whole number rule
[ "Physics", "Chemistry" ]
602
[ "Periodic table", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
13,164,797
https://en.wikipedia.org/wiki/Live%20bottom%20trailer
A live bottom trailer is a semi-trailer used for hauling loose material such as asphalt, grain, potatoes, sand and gravel. A live bottom trailer is the alternative to a dump truck or an end dump trailer. The typical live bottom trailer has a conveyor belt on the bottom of the trailer tub that pushes the material out of the back of the trailer at a controlled pace. Unlike the conventional dump truck, the tub does not have to be raised to deposit the materials. Operation The live bottom trailer is powered by a hydraulic system. When the operator engages the truck hydraulic system, it activates the conveyor belt, moving the load horizontally out of the back of the trailer. Uses Live bottom trailers can haul a variety of products including gravel, potatoes, top soil, grain, carrots, sand, lime, peat moss, asphalt, compost, rip-rap, heavy rocks, biowaste, etc. Those who work in industries such as agriculture and construction benefit from the speed of unloading, the versatility of the trailer and the chassis mount. Safety The live bottom trailer eliminates trailer rollover because the tub does not have to be raised in the air to unload the materials. The trailer has a lower centre of gravity, which makes it easy for the trailer to unload in an uneven area, compared to dump trailers that have to be on level ground to unload. Overhead electrical wires are a danger for the conventional dump trailer during unloading, but with a live bottom, wires are not a problem. The trailer can work anywhere that it can drive into because the tub does not have to be raised for unloading. In addition, the truck cannot be accidentally driven with the trailer raised, which has been a cause of a number of accidents, often involving collision with bridges, overpasses, or overhead/suspended traffic signs/lights. Advantages The tub empties clean, making it easier for different materials to be transported without having to get inside the tub to clean it out. The conveyor belt allows the material to be dumped at a controlled pace so that the material can be partially unloaded where it is needed. The rounded tub results in a lower centre of gravity, which means a smoother ride and better handling than other trailers. Working under bridges and in confined areas is easier with a live bottom as opposed to a dump trailer because it can fit anywhere it can drive. Wet or dry materials can be hauled in a live bottom trailer. In a dump truck, wet materials stick in the top of the tub during unloading and can cause trailer rollover. Insurance costs are lower for a live bottom trailer because it does not have to be raised in the air and there are few cases of trailer rollover. Disadvantages Some live bottom trailers are not well suited for heavy rock and demolition. However, rip-rap, heavy rock, and asphalt can be hauled if the trailer is built with appropriately strong steels. See also Moving floor, a hydraulically driven conveyance system also used in semi-trailers External links Engineering vehicles
Live bottom trailer
[ "Engineering" ]
606
[ "Engineering vehicles" ]
13,165,796
https://en.wikipedia.org/wiki/Ocean%20heat%20content
Ocean heat content (OHC) or ocean heat uptake (OHU) is the energy absorbed and stored by oceans. To calculate the ocean heat content, it is necessary to measure ocean temperature at many different locations and depths. Integrating the areal density of a change in enthalpic energy over an ocean basin or entire ocean gives the total ocean heat uptake. Between 1971 and 2018, the rise in ocean heat content accounted for over 90% of Earth's excess energy from global heating. The main driver of this increase was rising human-caused greenhouse gas emissions. By 2020, about one third of the added energy had propagated to depths below 700 meters. In 2023, the world's oceans were again the hottest in the historical record and exceeded the previous 2022 record maximum. The five highest ocean heat observations to a depth of 2000 meters occurred in the period 2019–2023. The North Pacific, North Atlantic, the Mediterranean, and the Southern Ocean all recorded their highest heat observations for more than sixty years of global measurements. Ocean heat content and sea level rise are important indicators of climate change. Ocean water can absorb a lot of solar energy because water has far greater heat capacity than atmospheric gases. As a result, the top few meters of the ocean contain more energy than the entire Earth's atmosphere. Since before 1960, research vessels and stations have sampled sea surface temperatures and temperatures at greater depth all over the world. Since 2000, an expanding network of nearly 4000 Argo robotic floats has measured temperature anomalies, or the change in ocean heat content. With improving observations in recent decades, analyses show that the heat content of the upper ocean has increased at an accelerating rate. The net change in the top 2000 meters from 2003 to 2018 corresponded to an annual mean energy gain of 9.3 zettajoules. It is difficult to measure temperatures accurately over long periods while at the same time covering enough areas and depths. This explains the uncertainty in the figures. Changes in ocean temperature greatly affect ecosystems in oceans and on land. For example, there are multiple impacts on coastal ecosystems and communities relying on their ecosystem services. Direct effects include variations in sea level and sea ice, changes to the intensity of the water cycle, and the migration of marine life. Calculations Definition Ocean heat content is a term used in physical oceanography to describe a type of thermodynamic potential energy that is stored in the ocean. It is defined in coordination with the equation of state of seawater. TEOS-10 is an international standard approved in 2010 by the Intergovernmental Oceanographic Commission. Calculation of ocean heat content follows that of enthalpy referenced to the ocean surface, also called potential enthalpy. OHC changes are thus made more readily comparable to seawater heat exchanges with ice, freshwater, and humid air. OHC is always reported as a change or as an "anomaly" relative to a baseline. Positive values then also quantify ocean heat uptake (OHU) and are useful to diagnose where most of planetary energy gains from global heating are going. To calculate the ocean heat content, measurements of ocean temperature from sample parcels of seawater gathered at many different locations and depths are required. Integrating the areal density of ocean heat over an ocean basin, or entire ocean, gives the total ocean heat content. 
Thus, total ocean heat content is a volume integral of the product of temperature, density, and heat capacity over the three-dimensional region of the ocean for which data is available. The bulk of measurements have been performed at depths shallower than about 2000 m (1.25 miles). The areal density of ocean heat content between two depths is computed as a definite integral: $H(h_1, h_2) = c_p \int_{h_2}^{h_1} \rho(z)\,\Theta(z)\,dz$, where $c_p$ is the specific heat capacity of sea water, $h_2$ is the lower depth, $h_1$ is the upper depth, $\rho(z)$ is the in-situ seawater density profile, and $\Theta(z)$ is the conservative temperature profile. $\Theta$ is defined relative to a single depth $h_0$, usually chosen as the ocean surface. In SI units, $H$ has units of joules per square metre (J·m−2). In practice, the integral can be approximated by summation using a smooth and otherwise well-behaved sequence of in-situ data, including temperature (t), pressure (p), salinity (s) and their corresponding density (ρ). Conservative temperature values $\Theta$ are translated values relative to the reference pressure ($p_0$) at $h_0$. A substitute known as potential temperature has been used in earlier calculations. Measurements of temperature versus ocean depth generally show an upper mixed layer (0–200 m), a thermocline (200–1500 m), and a deep ocean layer (>1500 m). These boundary depths are only rough approximations. Sunlight penetrates to a maximum depth of about 200 m, the top 80 m of which is the habitable zone for photosynthetic marine life covering over 70% of Earth's surface. Wave action and other surface turbulence help to equalize temperatures throughout the upper layer. Unlike surface temperatures, which decrease with latitude, deep-ocean temperatures are relatively cold and uniform in most regions of the world. About 50% of all ocean volume is at depths below 3000 m (1.85 miles), with the Pacific Ocean being the largest and deepest of five oceanic divisions. The thermocline is the transition between upper and deep layers in terms of temperature, nutrient flows, abundance of life, and other properties. It is semi-permanent in the tropics, variable in temperate regions (often deepest during the summer), and shallow to nonexistent in polar regions. Measurements Ocean heat content measurements come with difficulties, especially before the deployment of the Argo profiling floats. Due to poor spatial coverage and poor quality of data, it has not always been easy to distinguish between long term global warming trends and climate variability. Examples of these complicating factors are the variations caused by El Niño–Southern Oscillation or changes in ocean heat content caused by major volcanic eruptions. Argo is an international program of robotic profiling floats deployed globally since the start of the 21st century. The program's initial 3000 units had expanded to nearly 4000 units by the year 2020. At the start of each 10-day measurement cycle, a float descends to a depth of 1000 meters and drifts with the current there for nine days. It then descends to 2000 meters and measures temperature, salinity (conductivity), and depth (pressure) over a final day of ascent to the surface. At the surface the float transmits the depth profile and horizontal position data through satellite relays before repeating the cycle. Starting in 1992, the TOPEX/Poseidon and subsequent Jason satellite series altimeters have observed vertically integrated OHC, which is a major component of sea level rise. Since 2002, GRACE and GRACE-FO have remotely monitored ocean changes using gravimetry. 
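As noted above, the definite integral for the areal density of ocean heat content is approximated in practice by a summation over a discrete profile. The following Python sketch shows that summation with the trapezoidal rule (the depth, temperature and density values are made-up, representative numbers rather than real measurements, the specific heat value is an assumed representative constant, and NumPy is assumed to be available).

```python
import numpy as np

cp = 3992.0     # J kg^-1 K^-1, representative specific heat of seawater (assumed constant)
depths = np.array([0.0, 10, 20, 50, 100, 200, 400, 700])          # m, h0 at the surface
theta  = np.array([18.2, 18.1, 17.8, 15.6, 12.4, 9.1, 6.3, 5.0])  # conservative temperature, deg C
rho    = np.array([1024.5, 1024.6, 1024.8, 1025.6, 1026.4, 1027.0, 1027.4, 1027.7])  # kg m^-3

# H(h1, h2) ~ cp * integral of rho(z) * Theta(z) dz, evaluated here by the trapezoidal rule.
# In practice OHC is reported as an anomaly, i.e. the same sum applied to the difference
# between an observed profile and a baseline profile.
integrand = rho * theta
H = cp * np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(depths))

print(f"areal heat content density: {H:.3e} J m^-2")
```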
The partnership between Argo and satellite measurements has thereby yielded ongoing improvements to estimates of OHC and other global ocean properties. Causes for heat uptake Ocean heat uptake accounts for over 90% of total planetary heat uptake, mainly as a consequence of human-caused changes to the composition of Earth's atmosphere. This high percentage is because waters at and below the ocean surface - especially the turbulent upper mixed layer - exhibit a thermal inertia much larger than the planet's exposed continental crust, ice-covered polar regions, or atmospheric components themselves. A body with large thermal inertia stores a big amount of energy because of its heat capacity, and effectively transmits energy according to its heat transfer coefficient. Most extra energy that enters the planet via the atmosphere is thereby taken up and retained by the ocean. Planetary heat uptake or heat content accounts for the entire energy added to or removed from the climate system. It can be computed as an accumulation over time of the observed differences (or imbalances) between total incoming and outgoing radiation. Changes to the imbalance have been estimated from Earth orbit by CERES and other remote instruments, and compared against in-situ surveys of heat inventory changes in oceans, land, ice and the atmosphere. Achieving complete and accurate results from either accounting method is challenging, but in different ways that are viewed by researchers as being mostly independent of each other. Increases in planetary heat content for the well-observed 2005–2019 period are thought to exceed measurement uncertainties. From the ocean perspective, the more abundant equatorial solar irradiance is directly absorbed by Earth's tropical surface waters and drives the overall poleward propagation of heat. The surface also exchanges energy that has been absorbed by the lower troposphere through wind and wave action. Over time, a sustained imbalance in Earth's energy budget enables a net flow of heat either into or out of greater ocean depth via thermal conduction, downwelling, and upwelling. Releases of OHC to the atmosphere occur primarily via evaporation and enable the planetary water cycle. Concentrated releases in association with high sea surface temperatures help drive tropical cyclones, atmospheric rivers, atmospheric heat waves and other extreme weather events that can penetrate far inland. Altogether these processes enable the ocean to be Earth's largest thermal reservoir which functions to regulate the planet's climate; acting as both a sink and a source of energy. From the perspective of land and ice covered regions, their portion of heat uptake is reduced and delayed by the dominant thermal inertia of the ocean. Although the average rise in land surface temperature has exceeded the ocean surface due to the lower inertia (smaller heat-transfer coefficient) of solid land and ice, temperatures would rise more rapidly and by a greater amount without the full ocean. Measurements of how rapidly the heat mixes into the deep ocean have also been underway to better close the ocean and planetary energy budgets. Recent observations and changes Numerous independent studies in recent years have found a multi-decadal rise in OHC of upper ocean regions that has begun to penetrate to deeper regions. The upper ocean (0–700 m) has warmed since 1971, while it is very likely that warming has occurred at intermediate depths (700–2000 m) and likely that deep ocean (below 2000 m) temperatures have increased. 
The heat uptake results from a persistent warming imbalance in Earth's energy budget that is most fundamentally caused by the anthropogenic increase in atmospheric greenhouse gases. There is very high confidence that increased ocean heat content in response to anthropogenic carbon dioxide emissions is essentially irreversible on human time scales. Studies based on Argo measurements indicate that ocean surface winds, especially the subtropical trade winds in the Pacific Ocean, change the ocean heat vertical distribution. This results in changes in ocean currents and an increase of the subtropical overturning, which is also related to the El Niño and La Niña phenomena. Depending on stochastic fluctuations of natural variability, around 30% more heat from the upper ocean layer is transported into the deeper ocean during La Niña years. Furthermore, studies have shown that approximately one-third of the observed warming in the ocean is taking place in the 700–2000 meter ocean layer. Model studies indicate that ocean currents transport more heat into deeper layers during La Niña years, following changes in wind circulation. Years with increased ocean heat uptake have been associated with negative phases of the interdecadal Pacific oscillation (IPO). This is of particular interest to climate scientists who use the data to estimate the ocean heat uptake. The upper ocean heat content in most North Atlantic regions is dominated by heat transport convergence (a location where ocean currents meet), without large changes to the temperature and salinity relation. Additionally, a study from 2022 on anthropogenic warming in the ocean indicates that 62% of the warming between 1850 and 2018 in the North Atlantic along 25°N is kept in the water below 700 m, where a major percentage of the ocean's surplus heat is stored. A study in 2015 concluded that ocean heat content increases in the Pacific Ocean were compensated by an abrupt redistribution of OHC into the Indian Ocean. Although the upper 2000 m of the oceans have experienced warming on average since the 1970s, the rate of ocean warming varies regionally, with the subpolar North Atlantic warming more slowly and the Southern Ocean taking up a disproportionately large amount of heat due to anthropogenic greenhouse gas emissions. Deep-ocean warming below 2000 m has been largest in the Southern Ocean compared to other ocean basins. Impacts Warming oceans are one reason for coral bleaching and contribute to the migration of marine species. Marine heat waves are regions of life-threatening and persistently elevated water temperatures. Redistribution of the planet's internal energy by atmospheric circulation and ocean currents produces internal climate variability, often in the form of irregular oscillations, and helps to sustain the global thermohaline circulation. The increase in OHC accounts for 30–40% of global sea-level rise from 1900 to 2020 because of thermal expansion. It is also an accelerator of sea ice, iceberg, and tidewater glacier melting. The ice loss reduces polar albedo, amplifying both the regional and global energy imbalances. The resulting ice retreat has been rapid and widespread for Arctic sea ice, and within northern fjords such as those of Greenland and Canada. Impacts to Antarctic sea ice and the vast Antarctic ice shelves which terminate into the Southern Ocean have varied by region and are also increasing due to warming waters. 
Breakup of the Thwaites Ice Shelf and its West Antarctic neighbors contributed about 10% of sea-level rise in 2020. The ocean also functions as a sink and source of carbon, with a role comparable to that of land regions in Earth's carbon cycle. In accordance with the temperature dependence of Henry's law, warming surface waters are less able to absorb atmospheric gases, including oxygen and the growing emissions of carbon dioxide and other greenhouse gases from human activity. Nevertheless, the rate at which the ocean absorbs anthropogenic carbon dioxide has approximately tripled from the early 1960s to the late 2010s, a scaling proportional to the increase in atmospheric carbon dioxide. Warming of the deep ocean has the further potential to melt and release some of the vast store of frozen methane hydrate deposits that have naturally accumulated there. See also References External links NOAA Global Ocean Heat and Salt Content Meteorological concepts Climate change Climatology Earth Earth sciences Environmental science Oceanography Articles containing video clips
Ocean heat content
[ "Physics", "Environmental_science" ]
2,925
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics", "nan" ]
13,167,602
https://en.wikipedia.org/wiki/Submersion%20%28coastal%20management%29
Submersion is the sustainable cyclic portion of coastal erosion in which coastal sediments move from the visible portion of a beach to the submerged nearshore region, and later return to the original visible portion of the beach. The recovery portion of this sustainable cycle of sediment behaviour is named accretion. Submersion vs erosion The sediment that is submerged during rough weather forms landforms including storm bars. In calmer weather, waves return sediment to the visible part of the beach. Due to longshore drift, some sediment can end up further along the beach from where it started. Coastal areas have often developed sustainable positions in which the sediment moving off beaches is part of sustainable submersion. On many inhabited coastlines, anthropogenic interference in coastal processes has meant that erosion is often more permanent than submersion. Community perception The term erosion is often associated with undesirable impacts on the environment, whereas submersion is a sustainable part of healthy foreshores. Communities making decisions about coastal management need to develop an understanding of the components of beach recession, and to be able to separate the temporary, sustainable submersion component from the more serious, irreversible erosion caused by anthropogenic interference or climate change. References Coastal geography Geological processes Physical oceanography
Submersion (coastal management)
[ "Physics" ]
248
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
13,168,288
https://en.wikipedia.org/wiki/Jackup%20rig
A jackup rig or a self-elevating unit is a type of mobile platform that consists of a buoyant hull fitted with a number of movable legs, capable of raising its hull over the surface of the sea. The buoyant hull enables transportation of the unit and all attached machinery to a desired location. Once on location the hull is raised to the required elevation above the sea surface supported by the sea bed. The legs of such units may be designed to penetrate the sea bed, may be fitted with enlarged sections or footings, or may be attached to a bottom mat. Generally jackup rigs are not self-propelled and rely on tugs or heavy lift ships for transportation. Jackup platforms are almost exclusively used as exploratory oil and gas drilling platforms and as offshore and wind farm service platforms. Jackup rigs can either be triangular in shape with three legs or square in shape with four legs. Jackup platforms have been the most popular and numerous of various mobile types in existence. The total number of jackup drilling rigs in operation numbered about 540 at the end of 2013. The tallest jackup rig built to date is the Noble Lloyd Noble, completed in 2016 with legs 214 metres (702 feet) tall. Name Jackup rigs are so named because they are self-elevating with three, four, six and even eight movable legs that can be extended (“jacked”) above or below the hull. Jackups are towed or moved under self propulsion to the site with the hull lowered to the water level, and the legs extended above the hull. The hull is actually a water-tight barge that floats on the water’s surface. When the rig reaches the work site, the crew jacks the legs downward through the water and into the sea floor (or onto the sea floor with mat supported jackups). This anchors the rig and holds the hull well above the waves. History An early design was the DeLong platform, designed by Leon B. DeLong. In 1949 he started his own company, DeLong Engineering & Construction Company. In 1950 he constructed the DeLong Rig No. 1 for Magnolia Petroleum, consisting of a barge with six legs. In 1953 DeLong entered into a joint venture with McDermott, which built the DeLong-McDermott No.1 in 1954 for Humble Oil. This was the first mobile offshore drilling platform. This barge had ten legs which had spud cans to prevent them from digging into the seabed too deep. When DeLong-McDermott was taken over by the Southern Natural Gas Company, which formed The Offshore Company, the platform was called Offshore No. 51. In 1954, Zapata Offshore, owned by George H. W. Bush, ordered the Scorpion. It was designed by R. G. LeTourneau and featured three electro-mechanically-operated lattice type legs. Built on the shores of the Mississippi River by the LeTourneau Company, it was launched in December 1955. The Scorpion was put into operation in May 1956 off Port Aransas, Texas. The second, also designed by LeTourneau, was called Vinegaroon. Operation A jackup rig is a barge fitted with long support legs that can be raised or lowered. The jackup is maneuvered (self-propelled or by towing) into location with its legs up and the hull floating on the water. Upon arrival at the work location, the legs are jacked down onto the seafloor. Then "preloading" takes place, where the weight of the barge and additional ballast water are used to drive the legs securely into the sea bottom so they will not penetrate further while operations are carried out. 
After preloading, the jacking system is used to raise the entire barge above the water to a predetermined height or "air gap", so that wave, tidal and current loading acts only on the relatively slender legs and not on the barge hull. Modern jacking systems use a rack and pinion gear arrangement where the pinion gears are driven by hydraulic or electric motors and the rack is affixed to the legs. Jackup rigs can only be placed in relatively shallow waters. However, a specialized class of jackup rigs known as premium or ultra-premium jackups is known to have operational capability in water depths ranging from 150 to 190 meters (500 to 625 feet). Types Mobile Offshore Drilling Units (MODU) This type of rig is commonly used in connection with oil and/or natural gas drilling. There are more jackup rigs in the worldwide offshore rig fleet than any other type of mobile offshore drilling rig. Other types of offshore rigs include semi-submersibles (which float on pontoon-like structures) and drillships, which are ship-shaped vessels with rigs mounted in their center. These rigs drill through holes in the drillship hulls, known as moon pools. Turbine Installation Vessel (TIV) This type of rig is commonly used in connection with offshore wind turbine installation. Barges The term jackup rig can also refer to specialized barges that are similar to an oil and gas platform but are used as a base for servicing other structures such as offshore wind turbines, long bridges, and drilling platforms. See also Crane vessel Offshore geotechnical engineering Oil platform Rack phase difference TIV Resolution References Oil platforms Ship types
Jackup rig
[ "Chemistry", "Engineering" ]
1,091
[ "Oil platforms", "Petroleum technology", "Natural gas technology", "Structural engineering" ]
14,325,911
https://en.wikipedia.org/wiki/BCAR1
Breast cancer anti-estrogen resistance protein 1 is a protein that in humans is encoded by the BCAR1 gene. Gene BCAR1 is localized on chromosome 16 in region q, on the negative strand, and it consists of seven exons. Eight different gene isoforms have been identified that share the same sequence from the second exon onwards but are characterized by different starting sites. The longest isoform is called BCAR1-iso1 (RefSeq NM_001170714.1) and is 916 amino acids long; the other, shorter isoforms start with an alternative first exon. Function BCAR1 is a ubiquitously expressed adaptor molecule originally identified as the major substrate of v-Src and v-Crk. p130Cas/BCAR1 belongs to the Cas family of adaptor proteins and can act as a docking protein for several signalling partners. Due to its ability to associate with multiple signaling partners, p130Cas/BCAR1 contributes to the regulation of a variety of signaling pathways involved in cell adhesion, migration, invasion, apoptosis, and responses to hypoxia and mechanical forces. p130Cas/BCAR1 plays a role in cell transformation and cancer progression, and alterations of p130Cas/BCAR1 expression and the resulting activation of selective signalling are determinants for the occurrence of different types of human tumors. Due to the capacity of p130Cas/BCAR1, as an adaptor protein, to interact with multiple partners and to be regulated by phosphorylation and dephosphorylation, its expression and phosphorylation can lead to a wide range of functional consequences. Among the regulators of p130Cas/BCAR1 tyrosine phosphorylation, receptor tyrosine kinases (RTKs) and integrins play a prominent role. RTK-dependent p130Cas/BCAR1 tyrosine phosphorylation and the subsequent binding with specific downstream signaling molecules modulate cell processes such as actin cytoskeleton remodeling, cell adhesion, proliferation, migration, invasion and survival. Integrin-mediated p130Cas/BCAR1 phosphorylation upon adhesion to the extracellular matrix (ECM) induces downstream signaling that is required for cells to spread and migrate on the ECM. Both RTKs and integrin activation affect p130Cas/BCAR1 tyrosine phosphorylation and represent an efficient means by which cells utilize signals coming from growth factors and integrin activation to coordinate cell responses. Additionally, p130Cas/BCAR1 tyrosine phosphorylation on its substrate domain can be induced by cell stretching subsequent to changes in the rigidity of the extracellular matrix, allowing cells to respond to mechanical force changes in the cell environment. Cas-Family p130Cas/BCAR1 is a member of the Cas family (Crk-associated substrate) of adaptor proteins, which is characterized by the presence of multiple conserved motifs for protein–protein interactions, and by extensive tyrosine and serine phosphorylations. The Cas family comprises three other members: NEDD9 (Neural precursor cell expressed, developmentally down-regulated 9, also called Human enhancer of filamentation 1, HEF-1 or Cas-L), EFS (Embryonal Fyn-associated substrate), and CASS4 (Cas scaffolding protein family member 4). These Cas proteins have a high structural homology, characterized by the presence of multiple protein interaction domains and phosphorylation motifs through which Cas family members can recruit effector proteins. However, despite the high degree of similarity, their temporal expression, tissue distribution and functional roles are distinct and not overlapping. 
Notably, the knock-out of p130Cas/BCAR1 in mice is embryonic lethal, suggesting that other family members do not show an overlapping role in development. Structure p130Cas/BCAR1 is a scaffold protein characterized by several structural domains. It possesses an amino-terminal Src-homology 3 (SH3) domain, followed by a proline-rich domain (PRR) and a substrate domain (SD). The substrate domain consists of 15 repeats of the YxxP consensus phosphorylation motif for Src family kinases (SFKs). Following the substrate domain is the serine-rich domain, which forms a four-helix bundle. This acts as a protein-interaction motif, similar to those found in other adhesion-related proteins such as focal adhesion kinase (FAK) and vinculin. The remaining carboxy-terminal sequence contains a bipartite Src-binding domain (residues 681–713) able to bind both the SH2 and SH3 domains of Src. p130Cas/BCAR1 can undergo extensive changes in tyrosine phosphorylation, which occur predominantly in the 15 YxxP repeats within the substrate domain and represent the major post-translational modification of p130Cas/BCAR1. p130Cas/BCAR1 tyrosine phosphorylation can result from a diverse range of extracellular stimuli, including growth factors, integrin activation, vasoactive hormones and peptide ligands for G-protein coupled receptors. These stimuli trigger p130Cas/BCAR1 tyrosine phosphorylation and its translocation from the cytosol to the cell membrane. Clinical significance Given the ability of the p130Cas/BCAR1 scaffold protein to convey and integrate different types of signals and subsequently to regulate key cellular functions such as adhesion, migration, invasion, proliferation and survival, a strong correlation between deregulated p130Cas/BCAR1 expression and cancer was inferred. Deregulated expression of p130Cas/BCAR1 has been identified in several cancer types. Altered levels of p130Cas/BCAR1 expression in cancers can result from gene amplification, transcription upregulation or changes in protein stability. Overexpression of p130Cas/BCAR1 has been detected in human breast cancer, prostate cancer, ovarian cancer, lung cancer, colorectal cancer, hepatocellular carcinoma, glioma, melanoma, anaplastic large cell lymphoma and chronic myelogenous leukaemia. The presence of aberrant levels of hyperphosphorylated p130Cas/BCAR1 strongly promotes cell proliferation, migration, invasion, survival, angiogenesis and drug resistance. It has been demonstrated that high levels of p130Cas/BCAR1 expression in breast cancer correlate with worse prognosis, increased probability of developing metastasis, and resistance to therapy. Conversely, lowering the amount of p130Cas/BCAR1 expression in ovarian, breast and prostate cancer is sufficient to block tumor growth and progression of cancer cells. p130Cas/BCAR1 has potential uses as a diagnostic and prognostic marker for some human cancers. Since lowering p130Cas/BCAR1 in tumor cells is sufficient to halt their transformation and progression, it is conceivable that p130Cas/BCAR1 may represent a therapeutic target. However, the non-catalytic nature of p130Cas/BCAR1 makes it difficult to develop specific inhibitors. Notes References Further reading External links Bcar1 Info with links in the Cell Migration Gateway Proteins
BCAR1
[ "Chemistry" ]
1,576
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,326,078
https://en.wikipedia.org/wiki/Actin%2C%20cytoplasmic%202
Actin, cytoplasmic 2, or gamma-actin is a protein that in humans is encoded by the ACTG1 gene. Gamma-actin is widely expressed in cellular cytoskeletons of many tissues; in adult striated muscle cells, gamma-actin is localized to Z-discs and costamere structures, which are responsible for force transduction and transmission in muscle cells. Mutations in ACTG1 have been associated with nonsyndromic hearing loss and Baraitser-Winter syndrome, as well as susceptibility of adolescent patients to vincristine toxicity. Structure Human gamma-actin is 41.8 kDa in molecular weight and 375 amino acids in length. Actins are highly conserved proteins that are involved in various types of cell motility, and maintenance of the cytoskeleton. In vertebrates, three main groups of actin paralogs, alpha, beta, and gamma, have been identified. The alpha actins are found in muscle tissues and are a major constituent of the sarcomere contractile apparatus. The beta and gamma actins co-exist in most cell types as components of the cytoskeleton, and as mediators of internal cell motility. Actin, gamma 1, encoded by this gene, is found in non-muscle cells in the cytoplasm, and in muscle cells at costamere structures, or transverse points of cell-cell adhesion that run perpendicular to the long axis of myocytes. Function In myocytes, sarcomeres adhere to the sarcolemma via costameres, which align at Z-discs and M-lines. The two primary cytoskeletal components of costameres are desmin intermediate filaments and gamma-actin microfilaments. It has been shown that gamma-actin interacting with another costameric protein dystrophin is critical for costameres forming mechanically strong links between the cytoskeleton and the sarcolemmal membrane. Additional studies have shown that gamma-actin colocalizes with alpha-actinin and GFP-labeled gamma actin localized to Z-discs, whereas GFP-alpha-actin localized to pointed ends of thin filaments, indicating that gamma actin specifically localizes to Z-discs in striated muscle cells. During development of myocytes, gamma actin is thought to play a role in the organization and assembly of developing sarcomeres, evidenced in part by its early colocalization with alpha-actinin. Gamma-actin is eventually replaced by sarcomeric alpha-actin isoforms, with low levels of gamma-actin persisting in adult myocytes which associate with Z-disc and costamere domains. Insights into the function of gamma-actin in muscle have come from studies employing transgenesis. In a skeletal muscle-specific knockout of gamma-actin in mice, these animals showed no detectable abnormalities in development; however, knockout mice showed muscle weakness and fiber necrosis, along with decreased isometric twitch force, disrupted intrafibrillar and interfibrillar connections among myocytes, and myopathy. Clinical significance An autosomal dominant mutation in ACTG1 in the DFNA20/26 locus at 17q25-qter was identified in patients with hearing loss. A Thr278Ile mutation was identified in helix 9 of gamma-actin protein, which is predicted to alter protein structure. This study identified the first disease causing mutation in gamma-actin and underlies the importance of gamma-actin as structural elements of the inner ear hair cells. Since then, other ACTG1 mutations have been linked to nonsyndromic hearing loss, including Met305Thr. 
A missense mutation in ACTG1 at Ser155Phe has also been identified in patients with Baraitser-Winter syndrome, which is a developmental disorder characterized by congenital ptosis, excessively-arched eyebrows, hypertelorism, ocular colobomata, lissencephaly, short stature, seizures and hearing loss. Differential expression of ACTG1 mRNA was also identified in patients with Sporadic Amyotrophic Lateral Sclerosis, a devastating disease with unknown causality, using a sophisticated bioinformatics approach employing Affymetrix long-oligonucleotide BaFL methods. Single nucleotide polymorphisms in ACTG1 have been associated with vincristine toxicity, which is part of the standard treatment regimen for childhood acute lymphoblastic leukemia. Neurotoxicity was more frequent in patients that were ACTG1 Gly310Ala mutation carriers, suggesting that this may play a role in patient outcomes from vincristine treatment. Interactions ACTG1 has been shown to interact with: CAP1, DMD, TMSB4X, and Plectin. See also Actin References External links Further reading Proteins
Actin, cytoplasmic 2
[ "Chemistry" ]
1,022
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,326,527
https://en.wikipedia.org/wiki/Flying%20probe
Flying probes are test probes used for testing both bare circuit boards and boards loaded with components. Flying probes were introduced in the late 1980s and can be found in many manufacturing and assembly operations, most often in the manufacturing of electronic printed circuit boards. A flying probe tester uses one or more test probes to make contact with the circuit board under test; the probes are moved from place to place on the circuit board to carry out tests of multiple conductors or components. Flying probe testers are a more flexible alternative to bed-of-nails testers, which use multiple contacts to simultaneously contact the board and which rely on electrical switching to carry out measurements. One limitation of flying probe test methods is the speed at which measurements can be taken; the probes must be moved to each new test site on the board, and then a measurement must be completed. Bed-of-nails testers touch each test point simultaneously, and electronic switching of instruments between test pins is more rapid than movement of probes. Manufacturing a bed-of-nails tester, however, is more costly. Bare board Loaded board in-circuit test In the testing of printed circuit boards, a flying probe test or fixtureless in-circuit test (FICT) system may be used for testing low- to mid-volume production, prototypes, and boards that present accessibility problems. A traditional "bed of nails" tester for testing a PCB requires a custom fixture to hold the PCBA and the Pogo pins which make contact with the PCBA. In contrast, FICT uses two or more flying probes, which may be moved based on software instruction. The flying probes are electro-mechanically controlled to access components on printed circuit assemblies (PCAs). The probes are moved around the board under test using an automatically operated two-axis system, and one or more test probes contact components of the board or test points on the printed circuit board. The main advantage of flying probe testing is that the substantial cost of a bed-of-nails fixture, on the order of US$20,000, is avoided. The flying probes also allow easy modification of the test setup when the PCBA design changes. FICT may be used on both bare and assembled PCBs. However, since the tester makes measurements serially, instead of making many measurements at once, the test cycle may become much longer than for a bed-of-nails fixture. A test cycle that takes 30 seconds on such a system may take an hour with flying probes. Test coverage may not be as comprehensive as with a bed-of-nails tester (assuming similar net access for each), because fewer points are tested at one time. References electronic test equipment hardware testing nondestructive testing
Flying probe
[ "Materials_science", "Technology", "Engineering" ]
560
[ "Nondestructive testing", "Materials testing", "Electronic test equipment", "Measuring instruments" ]
14,326,894
https://en.wikipedia.org/wiki/Iris%20Bay%20%28Dubai%29
The Iris Bay is a 32-floor commercial tower in Business Bay, Dubai, United Arab Emirates, that is known for "its oval, crescent moon type shape." The tower has a total structural height of 170 m (558 ft). Construction of the Iris Bay was expected to be completed in 2008, but progress stopped in 2011. The building was eventually completed in 2015. The tower is designed in the shape of an ovoid and comprises two identical double-curved pixelated shells which are rotated and cantilevered over the podium. The rear elevation is a continuous vertical curve punctuated by balconies, while the front elevation is made up of seven zones of rotated glass. The podium comprises four stories with a double-height ground level and houses retail and commercial space totaling 36,000 m². See also List of buildings in Dubai Notes External links Buildings and structures under construction in Dubai High-tech architecture Postmodern architecture Skyscraper office buildings in Dubai
Iris Bay (Dubai)
[ "Engineering" ]
189
[ "Postmodern architecture", "Architecture" ]
14,331,278
https://en.wikipedia.org/wiki/Hildebrand%20solubility%20parameter
The Hildebrand solubility parameter (δ) provides a numerical estimate of the degree of interaction between materials and can be a good indication of solubility, particularly for nonpolar materials such as many polymers. Materials with similar values of δ are likely to be miscible. Definition The Hildebrand solubility parameter is the square root of the cohesive energy density: δ = √c, where c = (ΔHv − RT)/Vm is obtained from the heat of vaporization ΔHv and the molar volume Vm of the condensed phase. The cohesive energy density is the amount of energy needed to completely remove a unit volume of molecules from their neighbours to infinite separation (an ideal gas); it is given by the heat of vaporization of the compound, corrected for the pV work of the vapour, divided by its molar volume in the condensed phase. In order for a material to dissolve, these same interactions need to be overcome, as the molecules are separated from each other and surrounded by the solvent. In 1936 Joel Henry Hildebrand suggested the square root of the cohesive energy density as a numerical value indicating solvency behavior. This later became known as the "Hildebrand solubility parameter". Materials with similar solubility parameters will be able to interact with each other, resulting in solvation, miscibility or swelling. Uses and limitations Its principal utility is that it provides simple predictions of phase equilibrium based on a single parameter that is readily obtained for most materials. These predictions are often useful for nonpolar and slightly polar (dipole moment < 2 debyes) systems without hydrogen bonding. It has found particular use in predicting solubility and swelling of polymers by solvents. More complicated three-dimensional solubility parameters, such as Hansen solubility parameters, have been proposed for polar molecules. The principal limitation of the solubility parameter approach is that it applies only to associated solutions ("like dissolves like" or, technically speaking, positive deviations from Raoult's law); it cannot account for negative deviations from Raoult's law that result from effects such as solvation or the formation of electron donor–acceptor complexes. Like any simple predictive theory, it can inspire overconfidence; it is best used for screening, with data used to verify the predictions. Units The conventional units for the solubility parameter are (calories per cm^3)^1/2, or cal^1/2 cm^−3/2. The SI units are J^1/2 m^−3/2, equivalent to Pa^1/2. 1 calorie is equal to 4.184 J, so 1 cal^1/2 cm^−3/2 = (523/125 J)^1/2 (10^−2 m)^−3/2 = (4.184 J)^1/2 (0.01 m)^−3/2 = 2.045483 × 10^3 J^1/2 m^−3/2 = 2.045483 (10^6 J/m^3)^1/2 = 2.045483 MPa^1/2. Given the non-exact nature of the use of δ, it is often sufficient to say that the number in MPa^1/2 is about twice the number in cal^1/2 cm^−3/2. Where the units are not given, for example in older books, it is usually safe to assume the non-SI unit. Examples From tabulated values, poly(ethylene) has a solubility parameter of 7.9 cal^1/2 cm^−3/2. Good solvents are likely to be diethyl ether and hexane. (However, PE only dissolves at temperatures well above 100 °C.) Poly(styrene) has a solubility parameter of 9.1 cal^1/2 cm^−3/2, and thus ethyl acetate is likely to be a good solvent. Nylon 6,6 has a solubility parameter of 13.7 cal^1/2 cm^−3/2, and ethanol is likely to be the best solvent of those tabulated. However, the latter is polar, and thus we should be very cautious about using just the Hildebrand solubility parameter to make predictions. See also Solvent Hansen solubility parameters References Notes Bibliography External links Abboud J.-L. M., Notario R. (1999) Critical compilation of scales of solvent parameters. part I. 
pure, non-hydrogen bond donor solvents – technical report. Pure Appl. Chem. 71(4), 645–718 (IUPAC document with large table (1b) of Hildebrand solubility parameter (δH)) Polymer chemistry 1936 introductions
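The definition and the unit conversion above can be made concrete with a short numerical sketch. The Python snippet below is illustrative only: it assumes round-number values for n-hexane (heat of vaporization of roughly 31.5 kJ/mol and molar volume of roughly 131.6 cm^3/mol), which are not taken from this article, and it reuses the 2.045483 MPa^1/2 per cal^1/2 cm^−3/2 conversion factor worked out in the Units section.

```python
import math

# Assumed, approximate values for n-hexane near 25 °C (illustrative only).
delta_H_vap = 31.5e3   # heat of vaporization, J/mol
V_m = 131.6e-6         # molar volume of the liquid, m^3/mol
R, T = 8.314, 298.15   # gas constant, J/(mol*K), and temperature, K

# Cohesive energy density c = (dHvap - RT) / Vm, in J/m^3 (i.e. Pa).
c = (delta_H_vap - R * T) / V_m

# Hildebrand parameter delta = sqrt(c); sqrt(Pa) / 1000 gives MPa^(1/2).
delta_mpa = math.sqrt(c) / 1e3
delta_cal = delta_mpa / 2.045483   # conversion factor from the Units section

print(f"cohesive energy density ~ {c / 1e6:.0f} MPa")
print(f"delta ~ {delta_mpa:.1f} MPa^0.5 ~ {delta_cal:.1f} cal^0.5 cm^-1.5")
```

Run as written, this gives roughly 14.9 MPa^1/2 (about 7.3 cal^1/2 cm^−3/2), close to tabulated values for hexane, and the "about twice" rule of thumb between the two unit systems is visible in the two printed numbers.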
Hildebrand solubility parameter
[ "Chemistry", "Materials_science", "Engineering" ]
933
[ "Materials science", "Polymer chemistry" ]
14,331,851
https://en.wikipedia.org/wiki/Boundedly%20generated%20group
In mathematics, a group is called boundedly generated if it can be expressed as a finite product of cyclic subgroups. The property of bounded generation is also closely related to the congruence subgroup problem. Definitions A group G is called boundedly generated if there exists a finite subset S of G and a positive integer m such that every element g of G can be represented as a product of at most m powers of the elements of S: g = s1^k1 s2^k2 ··· sm^km, where the si belong to S and the ki are integers. The finite set S generates G, so a boundedly generated group is finitely generated. An equivalent definition can be given in terms of cyclic subgroups. A group G is called boundedly generated if there is a finite family C1, …, CM of not necessarily distinct cyclic subgroups such that G = C1…CM as a set. Properties Bounded generation is unaffected by passing to a subgroup of finite index: if H is a finite index subgroup of G, then G is boundedly generated if and only if H is boundedly generated. Bounded generation is preserved under extensions: if a group G has a normal subgroup N such that both N and G/N are boundedly generated, then so is G itself. Any quotient group of a boundedly generated group is also boundedly generated. A finitely generated torsion group must be finite if it is boundedly generated; equivalently, an infinite finitely generated torsion group is not boundedly generated. A pseudocharacter on a discrete group G is defined to be a real-valued function f on G such that f(gh) − f(g) − f(h) is uniformly bounded and f(g^n) = n·f(g). The vector space of pseudocharacters of a boundedly generated group G is finite-dimensional. Examples If n ≥ 3, the group SLn(Z) is boundedly generated by its elementary subgroups, formed by matrices differing from the identity matrix only in one off-diagonal entry. In 1984, Carter and Keller gave an elementary proof of this result, motivated by a question in algebraic K-theory. A free group on at least two generators is not boundedly generated (see below). The group SL2(Z) is not boundedly generated, since it contains a free subgroup with two generators of index 12. A Gromov-hyperbolic group is boundedly generated if and only if it is virtually cyclic (or elementary), i.e. contains a cyclic subgroup of finite index. Free groups are not boundedly generated Several authors have stated in the mathematical literature that it is obvious that finitely generated free groups are not boundedly generated. This section contains various obvious and less obvious ways of proving this. Some of the methods, which touch on bounded cohomology, are important because they are geometric rather than algebraic, so can be applied to a wider class of groups, for example Gromov-hyperbolic groups. Since for any n ≥ 2, the free group on 2 generators F2 contains the free group on n generators Fn as a subgroup of finite index (in fact of index n − 1), once one non-cyclic free group on finitely many generators is known to be not boundedly generated, this will be true for all of them. Similarly, since SL2(Z) contains F2 as a subgroup of index 12, it is enough to consider SL2(Z). In other words, to show that no Fn with n ≥ 2 has bounded generation, it is sufficient to prove this for one of them, or even just for SL2(Z). Burnside counterexamples Since bounded generation is preserved under taking homomorphic images, if a single finitely generated group with at least two generators is known to be not boundedly generated, this will be true for the free group on the same number of generators, and hence for all free groups. 
To show that no (non-cyclic) free group has bounded generation, it is therefore enough to produce one example of a finitely generated group which is not boundedly generated, and any finitely generated infinite torsion group will work. The existence of such groups constitutes Golod and Shafarevich's negative solution of the generalized Burnside problem in 1964; later, other explicit examples of infinite finitely generated torsion groups were constructed by Aleshin, Olshanskii, and Grigorchuk, using automata. Consequently, free groups of rank at least two are not boundedly generated. Symmetric groups The symmetric group Sn can be generated by two elements, a 2-cycle and an n-cycle, so that it is a quotient group of F2. On the other hand, it is easy to show that the maximal order M(n) of an element in Sn satisfies log M(n) ≤ n/e, where e is Euler's number (Edmund Landau proved the more precise asymptotic estimate log M(n) ~ (n log n)^1/2). In fact, if the cycles in a cycle decomposition of a permutation have lengths N1, ..., Nk with N1 + ··· + Nk = n, then the order of the permutation divides the product N1 ··· Nk, which in turn is bounded by (n/k)^k, using the inequality of arithmetic and geometric means. On the other hand, (n/x)^x is maximized when x = n/e. If F2 could be written as a product of m cyclic subgroups, then necessarily n! would have to be less than or equal to M(n)^m for all n, contradicting Stirling's asymptotic formula. Hyperbolic geometry There is also a simple geometric proof that G = SL2(Z) is not boundedly generated. It acts by Möbius transformations on the upper half-plane H, with the Poincaré metric. Any compactly supported 1-form α on a fundamental domain of G extends uniquely to a G-invariant 1-form on H. If z is in H and γ is the geodesic from z to g(z), the function defined by F(g) = ∫γ α satisfies the first condition for a pseudocharacter since, by Stokes' theorem, the defect F(gh) − F(g) − F(h) is given by an integral of dα over the geodesic triangle Δ with vertices z, g(z) and h^−1(z), and geodesic triangles have area bounded by π. The homogenized function fα(g) = lim F(g^n)/n (as n tends to ∞) defines a pseudocharacter, depending only on α. As is well known from the theory of dynamical systems, any orbit (g^k(z)) of a hyperbolic element g has a limit set consisting of two fixed points on the extended real axis; it follows that the geodesic segment from z to g(z) cuts through only finitely many translates of the fundamental domain. It is therefore easy to choose α so that fα equals one on a given hyperbolic element and vanishes on a finite set of other hyperbolic elements with distinct fixed points. Since G therefore has an infinite-dimensional space of pseudocharacters, it cannot be boundedly generated. Dynamical properties of hyperbolic elements can similarly be used to prove that any non-elementary Gromov-hyperbolic group is not boundedly generated. Brooks pseudocharacters Robert Brooks gave a combinatorial scheme to produce pseudocharacters of any free group Fn; this scheme was later shown to yield an infinite-dimensional family of pseudocharacters. Epstein and Fujiwara later extended these results to all non-elementary Gromov-hyperbolic groups. Gromov boundary This simple folklore proof uses dynamical properties of the action of hyperbolic elements on the Gromov boundary of a Gromov-hyperbolic group. For the special case of the free group Fn, the boundary (or space of ends) can be identified with the space X of semi-infinite reduced words g1 g2 ··· in the generators and their inverses. 
It gives a natural compactification of the tree, given by the Cayley graph with respect to the generators. A sequence of semi-infinite words converges to another such word provided that the initial segments agree after a certain stage, so that X is compact (and metrizable). The free group acts by left multiplication on the semi-infinite words. Moreover, any element g in Fn has exactly two fixed points g^±∞, namely the reduced infinite words given by the limits of g^n as n tends to ±∞. Furthermore, g^n·w tends to g^±∞ as n tends to ±∞ for any semi-infinite word w; and more generally, if w_n tends to w ≠ g^±∞, then g^n·w_n tends to g^+∞ as n tends to ∞. If Fn were boundedly generated, it could be written as a product of cyclic groups Ci generated by elements hi. Let X0 be the countable subset given by the finitely many Fn-orbits of the fixed points hi^±∞, the fixed points of the hi and all their conjugates. Since X is uncountable, there is an element g with fixed points outside X0 and a point w outside X0 different from these fixed points. Then for some subsequence (g^m) of (g^n), g^m = h1^{n(m,1)} ··· hk^{n(m,k)}, with each n(m,i) constant or strictly monotone. On the one hand, by successive use of the rules for computing limits of the form h^n·w_n, the limit of the right hand side applied to w is necessarily a fixed point of one of the conjugates of the hi's. On the other hand, this limit also must be g^+∞, which is not one of these points, a contradiction. References (see pages 222-229, also available on the Cornell archive). Group theory Geometric group theory
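The counting argument in the symmetric-groups paragraph above lends itself to a quick numerical check. The sketch below (Python, an illustration rather than part of any proof) computes M(n), the maximal order of an element of Sn, by maximizing the least common multiple over partitions of n, and shows that n!/M(n)^m eventually exceeds 1 for a fixed m; the choice m = 5 is arbitrary and only for demonstration.

```python
import math                      # math.lcm requires Python 3.9+
from functools import lru_cache

def max_element_order(n):
    """M(n): maximal order of an element of S_n, i.e. max lcm over partitions of n."""
    @lru_cache(maxsize=None)
    def best(remaining, smallest):
        # Largest lcm achievable with parts >= `smallest` summing to at most `remaining`
        # (padding a partition with 1-cycles never changes the lcm).
        result = 1
        for part in range(smallest, remaining + 1):
            result = max(result, math.lcm(part, best(remaining - part, part)))
        return result
    return best(n, 1)

# If F_2 (and hence its quotient S_n) were a product of m cyclic subgroups,
# then n! <= M(n)^m would have to hold for every n.  It visibly fails as n grows.
m = 5
for n in range(2, 31, 4):
    Mn = max_element_order(n)
    print(f"n={n:2d}  M(n)={Mn:6d}  n!/M(n)^{m} = {math.factorial(n) / Mn**m:.3g}")
```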
Boundedly generated group
[ "Physics", "Mathematics" ]
2,076
[ "Geometric group theory", "Group actions", "Group theory", "Fields of abstract algebra", "Symmetry" ]
14,334,415
https://en.wikipedia.org/wiki/Grzegorczyk%20hierarchy
The Grzegorczyk hierarchy, named after the Polish logician Andrzej Grzegorczyk, is a hierarchy of functions used in computability theory. Every function in the Grzegorczyk hierarchy is a primitive recursive function, and every primitive recursive function appears in the hierarchy at some level. The hierarchy deals with the rate at which the values of the functions grow; intuitively, functions in lower levels of the hierarchy grow more slowly than functions in the higher levels. Definition First we introduce an infinite set of functions, denoted Ei for some natural number i. We define E0(x, y) = x + y, the addition function, and E1(x) = x^2 + 2, a unary function which squares its argument and adds two. Then, for each n greater than 1, we define En(x) = En−1^x(2), i.e. the x-th iterate of En−1 evaluated at 2. From these functions we define the Grzegorczyk hierarchy. ℰ^n, the n-th set in the hierarchy, contains the following functions: Ek for k < n; the zero function (Z(x) = 0); the successor function (S(x) = x + 1); the projection functions (pi^m(t1, …, tm) = ti); the (generalized) compositions of functions in the set (if h, g1, g2, ... and gm are in ℰ^n, then f(x1, …, xk) = h(g1(x1, …, xk), ..., gm(x1, …, xk)) is as well); and the results of limited (primitive) recursion applied to functions in the set (if g, h and j are in ℰ^n and f(t + 1, x1, …, xk) = h(t, x1, …, xk, f(t, x1, …, xk)) for all t and x1, …, xk, and further f(0, x1, …, xk) = g(x1, …, xk) and f(t, x1, …, xk) ≤ j(t, x1, …, xk), then f is in ℰ^n as well). In other words, ℰ^n is the closure of the set of basic functions just listed with respect to function composition and limited recursion (as defined above). Properties These sets clearly form the hierarchy ℰ^0 ⊆ ℰ^1 ⊆ ℰ^2 ⊆ ... because they are closures over increasing sets of the generating functions Ek. They are strict subsets; in other words ℰ^n ⊊ ℰ^(n+1), because the hyperoperation H(n+1) is in ℰ^(n+1) but not in ℰ^n. ℰ^0 includes functions such as x+1, x+2, ... Every unary function f(x) in ℰ^0 is upper bounded by some x+n. However, ℰ^0 also includes more complicated functions like x∸1, x∸y (where ∸ is the monus sign defined as x∸y = max(x-y, 0)), etc. ℰ^1 provides all addition functions, such as x+y, 4x, ... ℰ^2 provides all multiplication functions, such as xy, x^4. ℰ^3 provides all exponentiation functions, such as x^y, 2^2^2^x, and is exactly the class of elementary recursive functions. ℰ^4 provides all tetration functions, and so on. Notably, both the function U and the characteristic function of the predicate T from the Kleene normal form theorem are definable in a way such that they lie at level ℰ^0 of the Grzegorczyk hierarchy. This implies in particular that every recursively enumerable set is enumerable by some ℰ^0-function. Relation to primitive recursive functions The definition of ℰ^n is the same as that of the primitive recursive functions, PR, except that recursion is limited (f(t, x1, …, xk) ≤ j(t, x1, …, xk) for some j in ℰ^n) and the functions Ek for k < n are explicitly included in ℰ^n. Thus the Grzegorczyk hierarchy can be seen as a way to limit the power of primitive recursion to different levels. It is clear from this fact that all functions in any level of the Grzegorczyk hierarchy are primitive recursive functions (i.e. ℰ^n ⊆ PR) and thus the union of all levels is contained in PR. It can also be shown that all primitive recursive functions are in some level of the hierarchy, thus the union of the ℰ^n equals PR, and the sets ℰ^0, ℰ^1 ∖ ℰ^0, ℰ^2 ∖ ℰ^1, ... partition the set of primitive recursive functions. Meyer and Ritchie introduced another hierarchy subdividing the primitive recursive functions, based on the nesting depth of loops needed to write a LOOP program that computes the function. For a natural number i, let Li denote the set of functions computable by a LOOP program with LOOP and END commands nested no deeper than i levels. Fachini and Maggiolo-Schettini showed that, from a small threshold on, each level Li of this LOOP hierarchy coincides with a corresponding level of the Grzegorczyk hierarchy (p. 63). Extensions The Grzegorczyk hierarchy can be extended to transfinite ordinals. Such extensions define a fast-growing hierarchy. 
To do this, the generating functions must be recursively defined for limit ordinals (note they have already been recursively defined for successor ordinals by the relation ). If there is a standard way of defining a fundamental sequence , whose limit ordinal is , then the generating functions can be defined . However, this definition depends upon a standard way of defining the fundamental sequence. suggests a standard way for all ordinals α < ε0. The original extension was due to Martin Löb and Stan S. Wainer and is sometimes called the Löb–Wainer hierarchy. See also ELEMENTARY Fast-growing hierarchy Ordinal analysis Notes References Bibliography Computability theory Hierarchy of functions
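For readers who want to see the generating functions in action, here is a minimal Python sketch of E0, E1 and the iterated definition of En for n ≥ 2 as described in the Definition section. It assumes the standard forms E0(x, y) = x + y and E1(x) = x^2 + 2; only tiny arguments are evaluated because the values explode almost immediately.

```python
def E(n, x, y=None):
    """Generating functions E_n of the Grzegorczyk hierarchy (standard definitions assumed)."""
    if n == 0:          # E_0 is addition
        return x + y
    if n == 1:          # E_1 squares its argument and adds two
        return x * x + 2
    # For n >= 2, E_n(x) is the x-th iterate of E_(n-1) evaluated at 2.
    value = 2
    for _ in range(x):
        value = E(n - 1, value)
    return value

print(E(0, 3, 4))  # 7
print(E(1, 3))     # 11
print(E(2, 3))     # E_1(E_1(E_1(2))) = 1446
print(E(2, 4))     # 2090918 -- each level iterates the one below, so growth is rapid
# E(3, 2) is already astronomically large; higher levels cannot practically be evaluated.
```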
Grzegorczyk hierarchy
[ "Mathematics" ]
979
[ "Computability theory", "Mathematical logic" ]
4,047,274
https://en.wikipedia.org/wiki/Gold-containing%20drugs
Gold-containing drugs are pharmaceuticals that contain gold. Sometimes these species are referred to as "gold salts". "Chrysotherapy" and "aurotherapy" are the applications of gold compounds to medicine. Research on the medicinal effects of gold began in 1935, primarily to reduce inflammation and to slow disease progression in patients with rheumatoid arthritis. The use of gold compounds has decreased since the 1980s because of numerous side effects and monitoring requirements, limited efficacy, and very slow onset of action. Most chemical compounds of gold, including some of the drugs discussed below, are not salts, but are examples of metal thiolate complexes. Use in rheumatoid arthritis Investigation of medical applications of gold began at the end of the 19th century, when gold cyanide demonstrated efficacy in treating Mycobacterium tuberculosis in vitro. Indications The use of injected gold compound is indicated for rheumatoid arthritis. Its uses have diminished with the advent of newer compounds such as methotrexate and because of numerous side effects. The efficacy of orally administered gold is more limited than injecting the gold compounds. Mechanism in arthritis The mechanism by which gold drugs affect arthritis is unknown. Administration Gold-containing drugs for rheumatoid arthritis are administered by intramuscular injection but can also be administered orally (although the efficacy is low). Regular urine tests to check for protein, indicating kidney damage, and blood tests are required. Efficacy A 1997 review (Suarez-Almazor ME, et al) reports that treatment with intramuscular gold (parenteral gold) reduces disease activity and joint inflammation. Gold-containing drugs taken by mouth are less effective than by injection. Three to six months are often required before gold treatment noticeably improves symptoms. Side effects Chrysiasis A noticeable side-effect of gold-based therapy is skin discoloration, in shades of mauve to a purplish dark grey when exposed to sunlight. Skin discoloration occurs when gold salts are taken on a regular basis over a long period of time. Excessive intake of gold salts while undergoing chrysotherapy results – through complex redox processes – in the saturation by relatively stable gold compounds of skin tissue and organs (as well as teeth and ocular tissue in extreme cases) in a condition known as chrysiasis. This condition is similar to argyria, which is caused by exposure to silver salts and colloidal silver. Chrysiasis can ultimately lead to acute kidney injury (such as tubular necrosis, nephrosis, glomerulitis), severe heart conditions, and hematologic complications (leukopenia, anemia). While some effects can be healed with moderate success, the skin discoloration is considered permanent. Other side effects Other side effects of gold-containing drugs include kidney damage, itching rash, and ulcerations of the mouth, tongue, and pharynx. Approximately 35% of patients discontinue the use of gold salts because of these side effects. Kidney function must be monitored continuously while taking gold compounds. Types Disodium aurothiomalate Sodium aurothiosulfate (Gold sodium thiosulfate) Sodium aurothiomalate (Gold sodium thiomalate) (UK) Auranofin (UK & US) Aurothioglucose (Gold thioglucose) (US) References External links "Gold salts for juvenile rheumatoid arthritis". BCHealthGuide.org "Gold salts information". 
DiseasesDatabase.com "HMS researchers find how gold fights arthritis: Sheds light on how medicinal metal function against rheumatoid arthritis and other autoimmune diseases." Harvard University Gazette (2006) "Aurothioglucose is a gold salt used in treating inflammatory arthritis". MedicineNet.com "About gold treatment: What is it? Gold treatment includes different forms of gold salts used to treat arthritis." Washington.edu University of Washington (December 30, 2004) Gold compounds Hepatotoxins Antirheumatic products Coordination complexes Nephrotoxins
Gold-containing drugs
[ "Chemistry" ]
854
[ "Coordination chemistry", "Coordination complexes" ]
4,047,871
https://en.wikipedia.org/wiki/Hexachlorobenzene
Hexachlorobenzene, or perchlorobenzene, is an aryl chloride and a six-substituted chlorobenzene with the molecular formula C6Cl6. It is a fungicide formerly used as a seed treatment, especially on wheat to control the fungal disease bunt. Its use has been banned globally under the Stockholm Convention on Persistent Organic Pollutants. Physical and chemical properties Hexachlorobenzene is a stable, white, crystalline chlorinated hydrocarbon. It is sparingly soluble in organic solvents such as benzene, diethyl ether and alcohol, but practically insoluble in water, with which it does not react. It has a flash point of 468 °F and is stable under normal temperatures and pressures. It is combustible but does not ignite readily. When heated to decomposition, hexachlorobenzene emits highly toxic fumes of hydrochloric acid, other chlorinated compounds (such as phosgene), carbon monoxide, and carbon dioxide. History Hexachlorobenzene was first known as "Julin's chloride of carbon" as it was discovered as a strange and unexpected product of impurities reacting in Julin's nitric acid factory. In 1864, Hugo Müller synthesised the compound by the reaction of benzene and antimony pentachloride; he then suggested that his compound was the same as Julin's chloride of carbon. Müller, who previously also believed it was the same compound as Michael Faraday's "perchloride of carbon" (hexachloroethane), obtained a small sample of Julin's chloride of carbon to send to Richard Phillips and Faraday for investigation. In 1867, Henry Bassett proved that the compound produced from benzene and antimony was the same as Julin's carbon chloride and named it "hexachlorobenzene". Leopold Gmelin named it "dichloride of carbon" and claimed that the carbon was derived from cast iron and the chlorine was from crude saltpetre. Victor Regnault obtained hexachlorobenzene from the decomposition of chloroform and tetrachloroethylene vapours passed through a red-hot tube. Synthesis Large-scale manufacture for use as a fungicide was developed by using the residue remaining after purification of the mixture of isomers of hexachlorocyclohexane, from which the insecticide lindane (the γ-isomer) had been removed, leaving the unwanted α- and β-isomers. This mixture is produced when benzene is reacted with chlorine in the presence of ultraviolet light (e.g. from sunlight). However, manufacture is no longer practiced following the compound's ban. Hexachlorobenzene has been made on a laboratory scale since the 1890s, by the electrophilic aromatic substitution reaction of chlorine with benzene or chlorobenzenes. A typical catalyst is ferric chloride. Much milder reagents than chlorine (e.g. dichlorine monoxide, iodine in chlorosulfonic acid) also suffice, and the various hexachlorocyclohexanes can substitute for benzene as well. Usage Hexachlorobenzene was used in agriculture to control the fungus Tilletia caries (common bunt of wheat). It is also effective against Tilletia controversa (dwarf bunt). The compound was introduced in 1947, normally formulated as a seed dressing, but is now banned in many countries. A minor industrial phloroglucinol synthesis nucleophilically substitutes hexachlorobenzene with alkoxides, followed by acidic workup. Environmental considerations In the 1970s HCB was produced at a level of 100,000 tons/y. Since then usage has declined steadily, with production down to 23–90 tons/y by the mid-1990s. The half-life in the soil is estimated to be 9 years. The mechanisms of its toxicity and other adverse effects remain under study. 
Safety Hexachlorobenzene can react violently with dimethylformamide, particularly in the presence of catalytic transition-metal salts. Toxicology Oral LD50 (rat): 10,000 mg/kg Oral LD50 (mice): 4,000 mg/kg Inhalation LC50 (rat): 3600 mg/m3 Material has relatively low acute toxicity but is toxic because of its persistent and cumulative nature in body tissues in rich lipid content. Hexachlorobenzene is an animal carcinogen and is considered to be a probable human carcinogen. After its introduction as a fungicide in 1945, for crop seeds, this toxic chemical was found in all food types. Hexachlorobenzene was banned from use in the United States in 1966. This material has been classified by the International Agency for Research on Cancer (IARC) as a Group 2B carcinogen (possibly carcinogenic to humans). Animal carcinogenicity data for hexachlorobenzene show increased incidences of liver, kidney (renal tubular tumours) and thyroid cancers. Chronic oral exposure in humans has been shown to give rise to a liver disease (porphyria cutanea tarda), skin lesions with discoloration, ulceration, photosensitivity, thyroid effects, bone effects and loss of hair. Neurological changes have been reported in rodents exposed to hexachlorobenzene. Hexachlorobenzene may cause embryolethality and teratogenic effects. Human and animal studies have demonstrated that hexachlorobenzene crosses the placenta to accumulate in foetal tissues and is transferred in breast milk. HCB is very toxic to aquatic organisms. It may cause long term adverse effects in the aquatic environment. Therefore, release into waterways should be avoided. It is persistent in the environment. Ecological investigations have found that biomagnification up the food chain does occur. Hexachlorobenzene has a half life in the soil of between 3 and 6 years. Risk of bioaccumulation in an aquatic species is high. Anatolian porphyria In Anatolia, Turkey between 1955 and 1959, during a period when bread wheat was unavailable, 500 people were fatally poisoned and more than 4,000 people fell ill by eating bread made with HCB-treated seed that was intended for agriculture use. Most of the sick were affected with a liver condition called porphyria cutanea tarda, which disturbs the metabolism of hemoglobin and results in skin lesions. Almost all breastfeeding children under the age of two, whose mothers had eaten tainted bread, died from a condition called "pembe yara" or "pink sore", most likely from high doses of HCB in the breast milk. In one mother's breast milk the HCB level was found to be 20 parts per million in lipid, approximately 2,000 times the average levels of contamination found in breast-milk samples around the world. Follow-up studies 20 to 30 years after the poisoning found average HCB levels in breast milk were still more than seven times the average for unexposed women in that part of the world (56 specimens of human milk obtained from mothers with porphyria, average value was 0.51 ppm in HCB-exposed patients compared to 0.07 ppm in unexposed controls), and 150 times the level allowed in cow's milk. In the same follow-up study of 252 patients (162 males and 90 females, avg. current age of 35.7 years), 20–30 years' postexposure, many subjects had dermatologic, neurologic, and orthopedic symptoms and signs. 
The observed clinical findings include scarring of the face and hands (83.7%), hyperpigmentation (65%), hypertrichosis (44.8%), pinched faces (40.1%), painless arthritis (70.2%), small hands (66.6%), sensory shading (60.6%), myotonia (37.9%), cogwheeling (41.9%), enlarged thyroid (34.9%), and enlarged liver (4.8%). Urine and stool porphyrin levels were determined in all patients, and 17 have at least one of the porphyrins elevated. Offspring of mothers with three decades of HCB-induced porphyria appear normal. See also Chlorobenzenes—different numbers of chlorine substituents Pentachlorobenzenethiol References Cited works Additional references International Agency for Research on Cancer. In: IARC Monographs on the Evaluation of Carcinogenic Risk to Humans. World Health Organisation, Vol 79, 2001pp 493–567 Registry of Toxic Effects of Chemical Substances. Ed. D. Sweet, US Dept. of Health & Human Services: Cincinnati, 2005. Environmental Health Criteria No 195; International Programme on Chemical Safety, World health Organization, Geneva, 1997. Toxicological Profile for Hexachlorobenzene (Update), US Dept of Health & Human Services, Sept 2002. Merck Index, 11th Edition, 4600 External links Obsolete pesticides Chlorobenzenes Endocrine disruptors Fungicides Hazardous air pollutants IARC Group 2B carcinogens Persistent organic pollutants under the Stockholm Convention Suspected teratogens Suspected embryotoxicants Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution Perchlorocarbons
Hexachlorobenzene
[ "Chemistry", "Biology" ]
2,013
[ "Fungicides", "Persistent organic pollutants under the Stockholm Convention", "Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution", "Endocrine disruptors", "Biocides" ]
4,048,455
https://en.wikipedia.org/wiki/Mechanically%20interlocked%20molecular%20architectures
In chemistry, mechanically interlocked molecular architectures (MIMAs) are molecules that are connected as a consequence of their topology. This connection of molecules is analogous to keys on a keychain loop. The keys are not directly connected to the keychain loop but they cannot be separated without breaking the loop. On the molecular level, the interlocked molecules cannot be separated without the breaking of the covalent bonds that comprise the conjoined molecules; this is referred to as a mechanical bond. Examples of mechanically interlocked molecular architectures include catenanes, rotaxanes, molecular knots, and molecular Borromean rings. Work in this area was recognized with the 2016 Nobel Prize in Chemistry to Bernard L. Feringa, Jean-Pierre Sauvage, and J. Fraser Stoddart. The synthesis of such entangled architectures has been made efficient by combining supramolecular chemistry with traditional covalent synthesis; however, mechanically interlocked molecular architectures have properties that differ from both "supramolecular assemblies" and "covalently bonded molecules". The terminology "mechanical bond" has been coined to describe the connection between the components of mechanically interlocked molecular architectures. Although research into mechanically interlocked molecular architectures is primarily focused on artificial compounds, many examples have been found in biological systems, including cystine knots, cyclotides and lasso-peptides such as microcin J25 (which are proteins), and a variety of peptides. Residual topology Residual topology is a descriptive stereochemical term used to classify a number of intertwined and interlocked molecules which cannot be disentangled in an experiment without breaking covalent bonds, even though the strict rules of mathematical topology allow such a disentanglement. Examples of such molecules are rotaxanes, catenanes with covalently linked rings (so-called pretzelanes), and open knots (pseudoknots), which are abundant in proteins. The term "residual topology" was suggested on account of a striking similarity of these compounds to the well-established topologically nontrivial species, such as catenanes and knotanes (molecular knots). The idea of residual topological isomerism introduces a handy scheme for modifying molecular graphs and generalizes earlier efforts at the systematization of mechanically bound and bridged molecules. History Experimentally, the first examples of mechanically interlocked molecular architectures appeared in the 1960s, with catenanes being synthesized by Wasserman and Schill and rotaxanes by Harrison and Harrison. The chemistry of MIMAs came of age when Sauvage pioneered their synthesis using templating methods. In the early 1990s the usefulness and even the existence of MIMAs were challenged. The latter concern was addressed by X-ray crystallographer and structural chemist David Williams. Two postdoctoral researchers who took on the challenge of producing [5]catenane (olympiadane) pushed the boundaries of the complexity of MIMAs that could be synthesized; their success was confirmed in 1996 by a solid-state structure analysis conducted by David Williams. 
Mechanical bonding effects on non-covalent interactions The strength of non-covalent interactions in a mechanically interlocked molecular architecture increases as compared to the non-mechanically bonded analogues. This increased strength is demonstrated by the necessity of harsher conditions to remove a metal template ion from catenanes as opposed to their non-mechanically bonded analogues. This effect is referred to as the "catenand effect". The augmented non-covalent interactions in interlocked systems compared to non-interlocked systems has found utility in the strong and selective binding of a range of charged species, enabling the development of interlocked systems for the extraction of a range of salts. This increase in strength of non-covalent interactions is attributed to the loss of degrees of freedom upon the formation of a mechanical bond. The increase in strength of non-covalent interactions is more pronounced on smaller interlocked systems, where more degrees of freedom are lost, as compared to larger mechanically interlocked systems where the change in degrees of freedom is lower. Therefore, if the ring in a rotaxane is made smaller the strength of non-covalent interactions increases, the same effect is observed if the thread is made smaller as well. Mechanical bonding effects on chemical reactivity The mechanical bond can reduce the kinetic reactivity of the products, this is ascribed to the increased steric hindrance. Because of this effect hydrogenation of an alkene on the thread of a rotaxane is significantly slower as compared to the equivalent non interlocked thread. This effect has allowed for the isolation of otherwise reactive intermediates. The ability to alter reactivity without altering covalent structure has led to MIMAs being investigated for a number of technological applications. Applications of mechanical bonding in controlling chemical reactivity The ability for a mechanical bond to reduce reactivity and hence prevent unwanted reactions has been exploited in a number of areas. One of the earliest applications was in the protection of organic dyes from environmental degradation. Examples Olympiadane References Further reading Supramolecular chemistry Molecular topology
Mechanically interlocked molecular architectures
[ "Chemistry", "Materials_science", "Mathematics" ]
1,104
[ "Molecular topology", "Topology", "nan", "Nanotechnology", "Supramolecular chemistry" ]
4,049,168
https://en.wikipedia.org/wiki/Glass%20cloth
Glass cloth is a textile material woven from glass fiber yarn. Home and garden Glass cloth was originally developed to be used in greenhouse paneling, allowing sunlight's ultraviolet rays to be filtered out, while still allowing visible light through to plants. Glass cloth is also a term for a type of tea towel suited for polishing glass. The cloth is usually woven with the plain weave, and may be patterned in various ways, though checked cloths are the most common. The original cloth was made from linen, but a large quantity is made with cotton warp and tow weft, and in some cases they are composed entirely of cotton. Short fibres of the cheaper kind are easily detached from the cloth. In the Southern Plains during the Dust Bowl, states' health officials recommended attaching translucent glass cloth to the inside frames of windows to help in keeping the dust out of buildings, although people also used paperboard, canvas or blankets. Eyewitness accounts indicate they were not completely successful. Use in technology Given the properties of glass - in particular, its heat resistance and inability to ignite - glass is often used to create fire barriers in hazardous environments, such as those inside racecars. Due to its poor flexibility and ability to cause skin irritation, glass fibers are typically inadequate for use in apparel. However, the bi-directional strength of glass cloth has found utility in some fiberglass reinforced plastics. The Rutan VariEze homebuilt aircraft uses a moldless glass-cloth/epoxy composite, which acts as a protective skin. Glass cloth is also commonly used as a reinforcing lattice for pre-pregs. See also G-10 (material) Glass fiber References Woven fabrics Linens Fiberglass Composite materials Fibre-reinforced polymers Glass applications
Glass cloth
[ "Physics", "Chemistry", "Materials_science" ]
355
[ "Composite materials", "Fiberglass", "Materials", "Polymer chemistry", "Matter" ]
4,049,625
https://en.wikipedia.org/wiki/Noise%20barrier
A noise barrier (also called a soundwall, noise wall, sound berm, sound barrier, or acoustical barrier) is an exterior structure designed to protect inhabitants of sensitive land use areas from noise pollution. Noise barriers are the most effective method of mitigating roadway, railway, and industrial noise sources – other than cessation of the source activity or use of source controls. In the case of surface transportation noise, other methods of reducing the source noise intensity include encouraging the use of hybrid and electric vehicles, improving automobile aerodynamics and tire design, and choosing low-noise paving material. Extensive use of noise barriers began in the United States after noise regulations were introduced in the early 1970s. History Noise barriers have been built in the United States since the mid-twentieth century, when vehicular traffic burgeoned. I-680 in Milpitas, California was the first noise barrier. In the late 1960s, analytic acoustical technology emerged to mathematically evaluate the efficacy of a noise barrier design adjacent to a specific roadway. By the 1990s, noise barriers that included use of transparent materials were being designed in Denmark and other western European countries. The best of these early computer models considered the effects of roadway geometry, topography, vehicle volumes, vehicle speeds, truck mix, road surface type, and micro-meteorology. Several U.S. research groups developed variations of the computer modeling techniques: Caltrans Headquarters in Sacramento, California; the ESL Inc. group in Sunnyvale, California; the Bolt, Beranek and Newman group in Cambridge, Massachusetts, and a research team at the University of Florida. Possibly the earliest published work that scientifically designed a specific noise barrier was the study for the Foothill Expressway in Los Altos, California. Numerous case studies across the U.S. soon addressed dozens of different existing and planned highways. Most were commissioned by state highway departments and conducted by one of the four research groups mentioned above. The U.S. National Environmental Policy Act, enacted in 1970, effectively mandated the quantitative analysis of noise pollution from every Federal-Aid Highway Act Project in the country, propelling noise barrier model development and application. With passage of the Noise Control Act of 1972, demand for noise barrier design soared from a host of noise regulation spinoff. By the late 1970s, more than a dozen research groups in the U.S. were applying similar computer modeling technology and addressing at least 200 different locations for noise barriers each year. , this technology is considered a standard in the evaluation of noise pollution from highways. The nature and accuracy of the computer models used is nearly identical to the original 1970s versions of the technology. Small and purposeful gaps exist in most noise barriers to allow firefighters to access nearby fire hydrants and pull through fire hoses, which are usually denoted by a sign indicating the nearest cross street, and a pictogram of a fire hydrant, though some hydrant gaps channel the hoses through small culvert channels beneath the wall. Design The acoustical science of noise barrier design is based upon treating an airway or railway as a line source. The theory is based upon blockage of sound ray travel toward a particular receptor; however, diffraction of sound must be addressed. 
Sound waves bend (downward) when they pass an edge, such as the apex of a noise barrier. Barriers that block line of sight of a highway or other source will therefore block more sound. Further complicating matters is the phenomenon of refraction, the bending of sound rays in the presence of an inhomogeneous atmosphere. Wind shear and thermocline produce such inhomogeneities. The sound sources modeled must include engine noise, tire noise, and aerodynamic noise, all of which vary by vehicle type and speed. The noise barrier may be constructed on private land, on a public right-of-way, or on other public land. Because sound levels are measured using a logarithmic scale, a reduction of nine decibels is equivalent to elimination of approximately 86 percent of the unwanted sound power. Materials Several different materials may be used for sound barriers, including masonry, earthwork (such as earth berm), steel, concrete, wood, plastics, insulating wool, or composites. Walls that are made of absorptive material mitigate sound differently than hard surfaces. It is also possible to make noise barriers with active materials such as solar photovoltaic panels to generate electricity while also reducing traffic noise. A wall with porous surface material and sound-dampening content material can be absorptive where little or no noise is reflected back towards the source or elsewhere. Hard surfaces such as masonry or concrete are considered to be reflective where most of the noise is reflected back towards the noise source and beyond. Noise barriers can be effective tools for noise pollution abatement, but certain locations and topographies are not suitable for use of noise barriers. Cost and aesthetics also play a role in the choice of noise barriers. In some cases, a roadway is surrounded by a noise abatement structure or dug into a tunnel using the cut-and-cover method. Disadvantages Potential disadvantages of noise barriers include: Blocked vision for motorists and rail passengers. Glass elements in noise screens can reduce visual obstruction, but require regular cleaning Aesthetic impact on land- and townscape An expanded target for graffiti, unsanctioned guerilla advertising, and vandalism Creation of spaces hidden from view and social control (e.g. at railway stations) Possibility of bird–window collisions for large and clear barriers Effects on air pollution Roadside noise barriers have been shown to reduce the near-road air pollution concentration levels. Within 15–50 m from the roadside, air pollution concentration levels at the lee side of the noise barriers may be reduced by up to 50% compared to open road values. Noise barriers force the pollution plumes coming from the road to move up and over the barrier creating the effect of an elevated source and enhancing vertical dispersion of the plume. The deceleration and the deflection of the initial flow by the noise barrier force the plume to disperse horizontally. A highly turbulent shear zone characterized by slow velocities and a re-circulation cavity is created in the lee of the barrier which further enhances the dispersion; this mixes ambient air with the pollutants downwind behind the barrier. See also Health effects from noise Noise control Safety barrier Soundproofing References External links Environmental engineering Noise pollution Noise control Road infrastructure Acoustics Sound 1970s introductions
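The decibel claim in the Design section can be reproduced with one line of arithmetic: a level reduction of x dB leaves a fraction 10^(−x/10) of the original sound power. The short Python sketch below only illustrates that relationship; the barrier attenuations shown are arbitrary example values, not figures from this article.

```python
def fraction_of_power_removed(reduction_db):
    """Fraction of sound power eliminated by a level reduction given in decibels."""
    return 1.0 - 10.0 ** (-reduction_db / 10.0)

for db in (3, 6, 9, 12, 15):
    print(f"{db:2d} dB reduction -> {fraction_of_power_removed(db):.1%} of sound power removed")
```

A 9 dB reduction comes out at about 87 percent removed, in line with the roughly 86 percent figure quoted above.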
Noise barrier
[ "Physics", "Chemistry", "Engineering" ]
1,322
[ "Chemical engineering", "Classical mechanics", "Acoustics", "Civil engineering", "Environmental engineering" ]
4,051,468
https://en.wikipedia.org/wiki/Plutonium-238
Plutonium-238 (238Pu or Pu-238) is a radioactive isotope of plutonium that has a half-life of 87.7 years. Plutonium-238 is a very powerful alpha emitter; as alpha particles are easily blocked, this makes the plutonium-238 isotope suitable for use in radioisotope thermoelectric generators (RTGs) and radioisotope heater units. The density of plutonium-238 at room temperature is about 19.8 g/cc. The material will generate about 0.57 watts per gram of 238Pu. The bare sphere critical mass of metallic plutonium-238 is not precisely known, but its calculated range is between 9.04 and 10.07 kilograms. History Initial production Plutonium-238 was the first isotope of plutonium to be discovered. It was synthesized by Glenn Seaborg and associates in December 1940 by bombarding uranium-238 with deuterons, creating neptunium-238: 238U + 2H → 238Np + 2n. The neptunium isotope then undergoes β− decay to plutonium-238, with a half-life of 2.12 days: 238Np → 238Pu + e− + ν̄e. Plutonium-238 naturally decays to uranium-234 and then further along the radium series to lead-206. Historically, most plutonium-238 has been produced at the Savannah River Site in its weapons reactor, by irradiating neptunium-237 (half-life about 2.14 million years) with neutrons: 237Np + n → 238Np, which then decays by β− emission to 238Pu. Neptunium-237 is a by-product of the production of plutonium-239 weapons-grade material, and when the site was shut down in 1988, 238Pu was mixed with about 16% 239Pu. Manhattan Project Plutonium was first synthesized in 1940 and isolated in 1941 by chemists at the University of California, Berkeley. The Manhattan Project began shortly after the discovery, with most early research (pre-1944) carried out using small samples manufactured using the large cyclotrons at the Berkeley Rad Lab and Washington University in St. Louis. Much of the difficulty encountered during the Manhattan Project regarded the production and testing of nuclear fuel. Both uranium and plutonium were eventually determined to be fissile, but in each case they had to be purified to select for the isotopes suitable for an atomic bomb. With World War II underway, the research teams were pressed for time. Micrograms of plutonium were made by cyclotrons in 1942 and 1943. In the fall of 1943 Robert Oppenheimer is quoted as saying "there's only a twentieth of a milligram in existence." At his request, the Rad Lab at Berkeley made available 1.2 mg of plutonium by the end of October 1943, most of which was taken to Los Alamos for theoretical work there. The world's second reactor, the X-10 Graphite Reactor built at a secret site at Oak Ridge, would be fully operational in 1944. In November 1943, shortly after its initial start-up, it was able to produce a minuscule 500 mg. However, this plutonium was mixed with large amounts of uranium fuel and destined for the nearby chemical processing pilot plant for isotopic separation (enrichment). Gram amounts of plutonium would not be available until spring of 1944. Industrial-scale production of plutonium only began in March 1945 when the B Reactor at the Hanford Site began operation. Plutonium-238 and human experimentation While samples of plutonium were available in small quantities and being handled by researchers, no one knew what health effects this might have. Plutonium handling mishaps occurred in 1944, causing alarm in the Manhattan Project leadership as contamination inside and outside the laboratories was becoming an issue. In August 1944, chemist Donald Mastick was sprayed in the face with a solution of plutonium chloride, causing him to accidentally swallow some. 
Nose swipes taken of plutonium researchers indicated that plutonium was being breathed in. Lead Manhattan Project chemist Glenn Seaborg, discoverer of many transuranium elements including plutonium, urged that a safety program be developed for plutonium research. In a memo to Robert Stone at the Chicago Met Lab, Seaborg wrote "that a program to trace the course of plutonium in the body be initiated as soon as possible ... [with] the very highest priority." This memo was dated January 5, 1944, prior to many of the contamination events of 1944 in Building D where Mastick worked. Seaborg later claimed that he did not at all intend to imply human experimentation in this memo, nor did he learn of its use in humans until far later due to the compartmentalization of classified information. With bomb-grade enriched plutonium-239 destined for critical research and for atomic weapon production, plutonium-238 was used in early medical experiments as it is unusable as atomic weapon fuel. However, 238Pu is far more dangerous than 239Pu due to its short half-life and being a strong alpha-emitter. It was soon found that plutonium was being excreted at a very slow rate, accumulating in test subjects involved in early human experimentation. This led to severe health consequences for the patients involved. From April 10, 1945, to July 18, 1947, eighteen people were injected with plutonium as part of the Manhattan Project. Doses administered ranged from 0.095 to 5.9 microcuries (μCi). Albert Stevens, after a (mistaken) terminal cancer diagnosis which seemed to include many organs, was injected in 1945 with plutonium without his informed consent. He was referred to as patient CAL-1 and the plutonium consisted of 3.5 μCi 238Pu, and 0.046 μCi 239Pu, giving him an initial body burden of 3.546 μCi (131 kBq) total activity. The fact that he had the highly radioactive plutonium-238 (produced in the 60-inch cyclotron at the Crocker Laboratory by deuteron bombardment of natural uranium) contributed heavily to his long-term dose. Had all of the plutonium given to Stevens been the long-lived 239Pu as used in similar experiments of the time, Stevens's lifetime dose would have been significantly smaller. The short half-life of 87.7 years of 238Pu means that a large amount of it decayed during its time inside his body, especially when compared to the 24,100 year half-life of 239Pu. After his initial "cancer" surgery removed many non-cancerous "tumors", Stevens survived for about 20 years after his experimental dose of plutonium before succumbing to heart disease; he had received the highest known accumulated radiation dose of any human patient. Modern calculations of his lifetime absorbed dose give a significant 64 Sv (6400 rem) total. Weapons The first application of 238Pu was its use in nuclear weapon components made at Mound Laboratories for Lawrence Radiation Laboratory (now Lawrence Livermore National Laboratory). Mound was chosen for this work because of its experience in producing the polonium-210-fueled Urchin initiator and its work with several heavy elements in a Reactor Fuels program. Two Mound scientists spent 1959 at Lawrence in joint development while the Special Metallurgical Building was constructed at Mound to house the project. Meanwhile, the first sample of 238Pu came to Mound in 1959. The weapons project called for the production of about 1 kg/year of 238Pu over a 3-year period. 
However, the 238Pu component could not be produced to the specifications despite a 2-year effort beginning at Mound in mid-1961. A maximum effort was undertaken with 3 shifts a day, 6 days a week, and ramp-up of Savannah River's 238Pu production over the next three years to about 20 kg/year. A loosening of the specifications resulted in productivity of about 3%, and production finally began in 1964. Use in radioisotope thermoelectric generators Beginning on January 1, 1957, Mound Laboratories RTG inventors Jordan & Birden were working on an Army Signal Corps contract (R-65-8- 998 11-SC-03-91) to conduct research on radioactive materials and thermocouples suitable for the direct conversion of heat to electrical energy using polonium-210 as the heat source. In 1961, Capt. R. T. Carpenter had chosen 238Pu as the fuel for the first RTG (radioisotope thermoelectric generator) to be launched into space as auxiliary power for the Transit IV Navy navigational satellite. By January 21, 1963, the decision had yet to be made as to what isotope would be used to fuel the large RTGs for NASA programs. Early in 1964, Mound Laboratories scientists developed a different method of fabricating the weapon component that resulted in a production efficiency of around 98%. This made available the excess Savannah River 238Pu production for Space Electric Power use just in time to meet the needs of the SNAP-27 RTG on the Moon, the Pioneer spacecraft, the Viking Mars landers, more Transit Navy navigation satellites (precursor to today's GPS) and two Voyager spacecraft, for which all of the 238Pu heat sources were fabricated at Mound Laboratories. The radioisotope heater units were used in space exploration beginning with the Apollo Radioisotope Heaters (ALRH) warming the Seismic Experiment placed on the Moon by the Apollo 11 mission and on several Moon and Mars rovers, to the 129 LWRHUs warming the experiments on the Galileo spacecraft. An addition to the Special Metallurgical building weapon component production facility was completed at the end of 1964 for 238Pu heat source fuel fabrication. A temporary fuel production facility was also installed in the Research Building in 1969 for Transit fuel fabrication. With completion of the weapons component project, the Special Metallurgical Building, nicknamed "Snake Mountain" because of the difficulties encountered in handling large quantities of 238Pu, ceased operations on June 30, 1968, with 238Pu operations taken over by the new Plutonium Processing Building, especially designed and constructed for handling large quantities of 238Pu. Plutonium-238 is given the highest relative hazard number (152) of all 256 radionuclides evaluated by Karl Z. Morgan et al. in 1963. Nuclear powered pacemakers In the United States, when plutonium-238 became available for non-military uses, numerous applications were proposed and tested, including the cardiac pacemaker program that began on June 1, 1966, in conjunction with NUMEC. The last of these units was implanted in 1988, as lithium-powered pacemakers, which had an expected lifespan of 10 or more years without the disadvantages of radiation concerns and regulatory hurdles, made these units obsolete. , there were nine living people with nuclear-powered pacemakers in the United States, out of an original 139 recipients. When these individuals die, the pacemaker is supposed to be removed and shipped to Los Alamos where the plutonium will be recovered. 
In a letter to the New England Journal of Medicine discussing a woman who received a Numec NU-5 decades ago that is continuously operating, despite an original $5,000 price tag equivalent to $23,000 in 2007 dollars, the follow-up costs have been about $19,000 compared with $55,000 for a battery-powered pacemaker. Another nuclear powered pacemaker was the Medtronics “Laurens-Alcatel Model 9000”. Approximately 1600 nuclear-powered cardiac pacemakers and/or battery assemblies have been located across the United States, and are eligible for recovery by the Off-Site Source Recovery Project (OSRP) Team at Los Alamos National Laboratory (LANL). Production Reactor-grade plutonium from spent nuclear fuel contains various isotopes of plutonium. 238Pu makes up only one or two percent, but it may be responsible for much of the short-term decay heat because of its short half-life relative to other plutonium isotopes. Reactor-grade plutonium is not useful for producing 238Pu for RTGs because difficult isotopic separation would be needed. Pure plutonium-238 is prepared by neutron irradiation of neptunium-237, one of the minor actinides that can be recovered from spent nuclear fuel during reprocessing, or by the neutron irradiation of americium in a reactor. The targets are purified chemically, including dissolution in nitric acid to extract the plutonium-238. A 100 kg sample of light water reactor fuel that has been irradiated for three years contains only about 700 grams (0.7% by weight) of neptunium-237, which must be extracted and purified. Significant amounts of pure 238Pu could also be produced in a thorium fuel cycle. In the US, the Department of Energy's Space and Defense Power Systems Initiative of the Office of Nuclear Energy processes 238Pu, maintains its storage, and develops, produces, transports and manages safety of radioisotope power and heating units for both space exploration and national security spacecraft. As of March 2015, a total of of 238Pu was available for civil space uses. Out of the inventory, remained in a condition meeting NASA specifications for power delivery. Some of this pool of 238Pu was used in a multi-mission radioisotope thermoelectric generator (MMRTG) for the 2020 Mars Rover mission and two additional MMRTGs for a notional 2024 NASA mission. would remain after that, including approximately just barely meeting the NASA specification. Since isotope content in the material is lost over time to radioactive decay while in storage, this stock could be brought up to NASA specifications by blending it with a smaller amount of freshly produced 238Pu with a higher content of the isotope, and therefore energy density. U.S. production ceases and resumes The United States stopped producing bulk 238Pu with the closure of the Savannah River Site reactors in 1988. Since 1993, all of the 238Pu used in American spacecraft has been purchased from Russia. From 1992 to 1994, 10 kilograms were purchased by the US Department of Energy from Russia's Mayak Production Association. Via agreement with Minatom, the US must use plutonium for uncrewed NASA missions, and Russia must use the currency for environmental and social investment in the Chelyabinsk region, affected by long-term radioactive contamination such as the Kyshtym disaster. In total, have been purchased, but Russia is no longer producing 238Pu, and their own supply is reportedly running low. 
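The storage-decay point above can be made concrete with a short calculation. The following Python sketch is illustrative only and is not part of the article: it shows how the quoted specific power of about 0.57 watts per gram follows from the 87.7-year half-life, and how the thermal output of stored 238Pu falls off over time. The decay energy of roughly 5.59 MeV per decay is an approximate literature value assumed here, and the 1 kg sample size is an invented example input.

```python
# Illustrative sketch (not part of the article): deriving the ~0.57 W/g figure
# from the 87.7-year half-life, and estimating how stored 238Pu output decays.
import math

HALF_LIFE_YEARS = 87.7
HALF_LIFE_S = HALF_LIFE_YEARS * 365.25 * 24 * 3600   # half-life in seconds
DECAY_ENERGY_J = 5.59e6 * 1.602e-19                   # ~5.59 MeV per decay (assumed value)
ATOMS_PER_GRAM = 6.022e23 / 238.0                     # Avogadro's number / molar mass

def specific_power_w_per_g() -> float:
    """Thermal power per gram of pure 238Pu: (ln 2 / t_half) * N_atoms * E_decay."""
    decay_constant = math.log(2) / HALF_LIFE_S
    return decay_constant * ATOMS_PER_GRAM * DECAY_ENERGY_J

def power_after_storage(grams: float, years: float) -> float:
    """Approximate thermal output (W) of a sample after `years` of decay in storage."""
    return grams * specific_power_w_per_g() * 0.5 ** (years / HALF_LIFE_YEARS)

if __name__ == "__main__":
    print(f"specific power ~ {specific_power_w_per_g():.2f} W/g")   # ~0.57 W/g
    for years in (0, 15, 30):
        print(f"1 kg of 238Pu after {years:>2} years in storage: "
              f"{power_after_storage(1000.0, years):5.0f} W thermal")
```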
In February 2013, a small amount of 238Pu was successfully produced by Oak Ridge's High Flux Isotope Reactor, and on December 22, 2015, they reported the production of of 238Pu. In March 2017, Ontario Power Generation (OPG) and its venture arm, Canadian Nuclear Partners, announced plans to produce 238Pu as a second source for NASA. Rods containing neptunium-237 will be fabricated by Pacific Northwest National Laboratory (PNNL) in Washington State and shipped to OPG's Darlington Nuclear Generating Station in Clarington, Ontario, Canada where they will be irradiated with neutrons inside the reactor's core to produce 238Pu. In January 2019, it was reported that some automated aspects of its production were implemented at Oak Ridge National Laboratory in Tennessee, that are expected to triple the number of plutonium pellets produced each week. The production rate is now expected to increase from 80 pellets per week to about 275 pellets per week, for a total production of about 400 grams per year. The goal now is to optimize and scale-up the processes in order to produce an average of per year by 2025. Applications The main application of 238Pu is as the heat source in radioisotope thermoelectric generators (RTGs). The RTG was invented in 1954 by Mound scientists Ken Jordan and John Birden, who were inducted into the National Inventors Hall of Fame in 2013. They immediately produced a working prototype using a 210Po heat source, and on January 1, 1957, entered into an Army Signal Corps contract (R-65-8- 998 11-SC-03-91) to conduct research on radioactive materials and thermocouples suitable for the direct conversion of heat to electrical energy using polonium-210 as the heat source. In 1966, a study reported by SAE International described the potential for the use of plutonium-238 in radioisotope power subsystems for applications in space. This study focused on employing power conversions through the Rankine cycle, Brayton cycle, thermoelectric conversion and thermionic conversion with plutonium-238 as the primary heating element. The heat supplied by the plutonium-238 heating element was consistent between the 400 °C and 1000 °C regime but future technology could reach an upper limit of 2000 °C, further increasing the efficiency of the power systems. The Rankine cycle study reported an efficiency between 15 and 19% with inlet turbine temperatures of , whereas the Brayton cycle offered efficiency greater than 20% with an inlet temperature of . Thermoelectric converters offered low efficiency (3-5%) but high reliability. Thermionic conversion could provide similar efficiencies to the Brayton cycle if proper conditions reached. RTG technology was first developed by Los Alamos National Laboratory during the 1960s and 1970s to provide radioisotope thermoelectric generator power for cardiac pacemakers. Of the 250 plutonium-powered pacemakers Medtronic manufactured, twenty-two were still in service more than twenty-five years later, a feat that no battery-powered pacemaker could achieve. This same RTG power technology has been used in spacecraft such as Pioneer 10 and 11, Voyager 1 and 2, Cassini–Huygens and New Horizons, and in other devices, such as the Mars Science Laboratory and Mars 2020 Perseverance Rover, for long-term nuclear power generation. See also Atomic battery Plutonium-239 Polonium-210 References External links Story of Seaborg's discovery of Pu-238, especially pages 34–35. 
NLM Hazardous Substances Databank – Plutonium, Radioactive Fertile materials Isotopes of plutonium Radioisotope fuels Fissile materials
Plutonium-238
[ "Chemistry" ]
3,715
[ "Explosive chemicals", "Fissile materials", "Isotopes of plutonium", "Isotopes" ]
4,051,670
https://en.wikipedia.org/wiki/Secular%20resonance
A secular resonance is a type of orbital resonance between two bodies with synchronized precessional frequencies. In celestial mechanics, secular refers to the long-term motion of a system, and resonance means that periods or frequencies are in a simple numerical ratio of small integers. Typically, the synchronized precessions in secular resonances are between the rates of change of the arguments of periapsis or the rates of change of the longitudes of the ascending nodes of two bodies in the system. Secular resonances can be used to study the long-term orbital evolution of asteroids and their families within the asteroid belt. Description Secular resonances occur when the precession of two orbits is synchronised (a precession of the perihelion, with frequency g, or the ascending node, with frequency s, or both). A small body (such as a small Solar System body) in secular resonance with a much larger one (such as a planet) will precess at the same rate as the large body. Over relatively short time periods (a million years or so), a secular resonance will change the eccentricity and the inclination of the small body. One can distinguish between: linear secular resonances between a body (no subscript) and a single other large perturbing body (e.g. a planet, subscript as numbered from the Sun), such as the ν6 = g − g6 secular resonance between asteroids and Saturn; and nonlinear secular resonances, which are higher-order resonances, usually combinations of linear resonances such as the z1 = (g − g6) + (s − s6), or the ν6 + ν5 = 2g − g6 − g5 resonances. ν6 resonance A prominent example of a linear resonance is the ν6 secular resonance between asteroids and Saturn. Asteroids that approach this resonance have their eccentricity slowly increased until they become Mars-crossers, at which point they are usually ejected from the asteroid belt by a close encounter with Mars. The resonance forms the inner and "side" boundaries of the asteroid belt around 2 AU and at inclinations of about 20°. See also Orbital resonance Asteroid belt References Orbital perturbations Orbital resonance
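The frequency combinations named above can be evaluated numerically to see how close a given body is to a secular resonance. The Python sketch below is illustrative and not part of the article: the planetary eigenfrequencies g5, g6 and s6 are approximate literature values in arcseconds per year, and the asteroid proper frequencies are invented example inputs.

```python
# Illustrative sketch (not from the article): evaluating the secular frequency
# combinations named above for an asteroid with proper frequencies (g, s).
# G5, G6, S6 are approximate literature values (arcsec/yr); the asteroid values
# used in the example are invented inputs.
G5, G6, S6 = 4.26, 28.25, -26.35

def secular_combinations(g: float, s: float) -> dict:
    """Return the resonant combination frequencies; values near zero mean the
    body is close to the corresponding secular resonance."""
    return {
        "nu6     = g - g6":              g - G6,
        "z1      = (g - g6) + (s - s6)": (g - G6) + (s - S6),
        "nu6+nu5 = 2g - g6 - g5":        2 * g - G6 - G5,
    }

if __name__ == "__main__":
    g_ast, s_ast = 28.0, -26.0   # example proper frequencies of a test asteroid
    for name, value in secular_combinations(g_ast, s_ast).items():
        flag = "  <-- near resonance" if abs(value) < 0.5 else ""
        print(f"{name:30s} {value:+7.2f} arcsec/yr{flag}")
```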
Secular resonance
[ "Physics", "Chemistry", "Astronomy" ]
449
[ "Scattering stubs", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Scattering" ]
7,057,416
https://en.wikipedia.org/wiki/S-Adenosyl-L-homocysteine
S-Adenosyl-L-homocysteine (SAH) is the biosynthetic precursor to homocysteine. SAH is formed by the demethylation of S-adenosyl-L-methionine. Adenosylhomocysteinase converts SAH into homocysteine and adenosine. Biological role DNA methyltransferases are inhibited by SAH. Two S-adenosyl-L-homocysteine cofactor products can bind the active site of DNA methyltransferase 3B and prevent the DNA duplex from binding to the active site, which inhibits DNA methylation. References External links BioCYC E.Coli K-12 Compound: S-adenosyl-L-homocysteine Nucleosides Purines Alpha-Amino acids Amino acid derivatives
S-Adenosyl-L-homocysteine
[ "Chemistry", "Biology" ]
202
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
7,057,811
https://en.wikipedia.org/wiki/Microcontact%20printing
Microcontact printing (or μCP) is a form of soft lithography that uses the relief patterns on a master polydimethylsiloxane (PDMS) stamp or Urethane rubber micro stamp to form patterns of self-assembled monolayers (SAMs) of ink on the surface of a substrate through conformal contact as in the case of nanotransfer printing (nTP). Its applications are wide-ranging including microelectronics, surface chemistry and cell biology. History Both lithography and stamp printing have been around for centuries. However, the combination of the two gave rise to the method of microcontact printing. The method was first introduced by George M. Whitesides and Amit Kumar at Harvard University. Since its inception many methods of soft lithography have been explored. Procedure Preparing the master Creation of the master, or template, is done using traditional photolithography techniques. The master is typically created on silicon, but can be done on any solid patterned surface. Photoresist is applied to the surface and patterned by a photomask and UV light. The master is then baked, developed and cleaned before use. In typical processes the photoresist is usually kept on the wafer to be used as a topographic template for the stamp. However, the unprotected silicon regions can be etched, and the photoresist stripped, which would leave behind a patterned wafer for creating the stamp. This method is more complex but creates a more stable template. Creating the PDMS stamp After fabrication the master is placed in a walled container, typically a petri dish, and the stamp is poured over the master. The PDMS stamp, in most applications, is a 10:1 ratio of silicone elastomer and a silicone elastomer curing agent. This mixture consists of a short hydrosilane crosslinker that contains a catalyst made from a platinum complex. After pouring, the PDMS is cured at elevated temperatures to create a solid polymer with elastomeric properties. The stamp is then peeled off and cut to the proper size. The stamp replicates the opposite of the master. Elevated regions of the stamp correspond to indented regions of the master. Some commercial services for procuring PDMS stamps and micropatterned samples exist such as Research Micro Stamps. Inking the stamp Inking of the stamp occurs through the application of a thiol solution either by immersion or coating the stamp with a Q-tip. The highly hydrophobic PDMS material allows the ink to be diffused into the bulk of the stamp, which means the thiols reside not only on the surface, but also in the bulk of the stamp material. This diffusion into the bulk creates an ink reservoir for multiple prints. The stamp is let dry until no liquid is visible and an ink reservoir is created. Applying the stamp to the substrate Direct contact Applying the stamp to the substrate is easy and straightforward which is one of the main advantages of this process. The stamp is brought into physical contact with the substrate and the thiol solution is transferred to the substrate. The thiol is area-selectively transferred to the surface based on the features of the stamp. During the transfer the carbon chains of the thiol align with each other to create a hydrophobic self-assembling monolayer (SAM). Other application techniques Printing of the stamp onto the substrate, although not used as often, can also take place with a rolling stamp onto a planar substrate or a curved substrate with a planar stamp. 
Advantages Microcontact Printing has several advantages including: The simplicity and ease of creating patterns with micro-scale features Can be done in a traditional laboratory without the constant use of a cleanroom (cleanroom is needed only to create the master). Multiple stamps can be created from a single master Individual stamps can be used several times with minimal degradation of performance A cheaper technique for fabrication that uses less energy than conventional techniques Some materials have no other micro patterning method available Disadvantages After this technique became popular various limitations and problems arose, all of which affected patterning and reproducibility. Stamp Deformation During direct contact one must be careful because the stamp can easily be physically deformed causing printed features that are different from the original stamp features. Horizontally stretching or compressing the stamp will cause deformations in the raised and recessed features. Also, applying too much vertical pressure on the stamp during printing can cause the raised relief features to flatten against the substrate. These deformations can yield submicron features even though the original stamp has a lower resolution. Deformation of the stamp can occur during removal from the master and during the substrate contacting process. When the aspect ratio of the stamp is high buckling of the stamp can occur. When the aspect ratio is low roof collapse can occur. Substrate contamination During the curing process some fragments can potentially be left uncured and contaminate the process. When this occurs the quality of the printed SAM is decreased. When the ink molecules contain certain polar groups the transfer of these impurities is increased. Shrinking/swelling of the stamp During the curing process the stamp can potentially shrink in size leaving a difference in desired dimensions of the substrate patterning. Swelling of the stamp may also occur. Most organic solvents induce swelling of the PDMS stamp. Ethanol in particular has a very small swelling effect, but many other solvents cannot be used for wet inking because of high swelling. Because of this the process is limited to apolar inks that are soluble in ethanol. Ink mobility Ink diffusion from the PDMS bulk to the surface occurs during the formation of the patterned SAM on the substrate. This mobility of the ink can cause lateral spreading to unwanted regions. Upon the transfer this spreading can influence the desired pattern. Applications Depending on the type of ink used and the subsequent substrate the microcontact printing technique has many different applications Micromachining Microcontact printing has great applications in micromachining. For this application inking solutions commonly consist of a solution of alkanethiol. This method uses metal substrates with the most common metal being gold. However, silver, copper, and palladium have been proven to work as well. Once the ink has been applied to the substrate the SAM layer acts as a resist to common wet etching techniques allowing for the creation of high resolution patterning. The patterned SAMs layer is a step in a series of steps to create complex microstructures. For example, applying the SAM layer on top of gold and etching creates microstructures of gold. After this step etched areas of gold exposes the substrate which can further be etched using traditional anisotropic etch techniques. 
Because of the microcontact printing technique no traditional photolithography is needed to accomplish these steps. Patterning proteins The patterning of proteins has helped the advancement of biosensors., cell biology research, and tissue engineering. Various proteins have been proven to be suitable inks and are applied to various substrates using the microcontact printing technique. Polylysine, immunoglobulin antibody, and different enzymes have been successfully placed onto surfaces including glass, polystyrene, and hydrophobic silicon. Patterning cells Microcontact printing has been used to advance the understanding of how cells interact with substrates. This technique has helped improve the study of cell patterning that was not possible with traditional cell culture techniques. Patterning DNA Successful patterning of DNA has also been done using this technique. The reduction in time and DNA material are the critical advantages for using this technique. The stamps were able to be used multiple times that were more homogeneous and sensitive than other techniques. Making Microchambers To learn about micro organisms, scientists need adaptable ways to capture and record the behavior of motile single-celled organisms across a diverse range of species. PDMS stamps can mold growth material into micro chambers that then capture single-celled organisms for imaging. Technique improvements To help overcome the limitations set by the original technique several alternatives have been developed. High-Speed printing: Successful contact printing was done on a gold substrate with a contact time in the range of milliseconds. This printing time is three orders of magnitude shorter than the normal technique, yet successfully transformed the pattern. The process of contact was automated to achieve these speeds through a piezoelectric actuator. At these low contact times the surface spreading of thiol did not occur, greatly improving the pattern uniformity Submerged Printing: By submerging the stamp in a liquid medium stability was greatly increased. By printing hydrophobic long-chain thiols underwater the common problem of vapor transport of the ink is greatly reduced. PDMS aspect ratios of 15:1 were achieved using this method, which was not accomplished before Lift-off Nanocontact printing: By first using Silicon lift-off stamps and later low cost polymer lift-off stamps and contacting these with an inked flat PDMS stamp, nanopatterns of multiple proteins or of complex digital nanodot gradients with dot spacing ranging from 0 nm to 15 um apart were achieved for immunoassays and cell assays. Implementation of this approach led to the patterning of a 100 digital nanodot gradient array, composed of more than 57 million protein dots 200 nm in diameter printed in 10 minutes in a 35 mm2 area. Contact Inking: as opposed to wet inking this technique does not permeate the PDMS bulk. The ink molecules only contact the protruding areas of the stamp that are going to be used for the patterning. The absence of ink on the rest of the stamp reduces the amount of ink transferred through the vapor phase that can potentially affect the pattern. This is done by the direct contact of a feature stamp and a flat PDMS substrate that has ink on it. New Stamp Materials: To create uniform transfer of the ink the stamp needs to be both mechanically stable and also be able to create conformal contact well. 
These two characteristics are juxtaposed because high stability requires a high Young's modulus while efficient contact requires an increase in elasticity. A composite, thin PDMS stamp with a rigid back support has been used for patterning to help solve this problem. Magnetic field assisted micro contact printing: to apply a homogeneous pressure during the printing step, a magnetic force is used. For that, the stamp is sensitive to a magnetic field by injecting iron powder into a second layer of PDMS. This force can be adjusted for nano and micro-patterns [13][12][12][12]. Multiplexing : the macrostamp: the main drawback of microcontact printing for biomedical application is that it is not possible to print different molecules with one stamp. To print different (bio)molecules in one step, a new concept is proposed : the macrostamp. It is a stamp composed of dots. The space between the dots corresponds to the space between the wells of a microplate. Then, it is possible to ink, dry and print in one step different molecules. General references www.microcontactprinting.net : a website dealing with microcontact printing (articles, patents, thesis, tips, education, ...) www.researchmicrostamps.com: a service that provides micro stamps via simple online sales. Footnotes Lithography (microfabrication)
Microcontact printing
[ "Materials_science" ]
2,339
[ "Nanotechnology", "Microtechnology", "Lithography (microfabrication)" ]
7,060,924
https://en.wikipedia.org/wiki/Hankinson%27s%20equation
Hankinson's equation (also called Hankinson's formula or Hankinson's criterion) is a mathematical relationship for predicting the off-axis uniaxial compressive strength of wood. The formula can also be used to compute the fiber stress or the stress wave velocity at the elastic limit as a function of grain angle in wood. For a wood that has uniaxial compressive strengths of σ0 parallel to the grain and σ90 perpendicular to the grain, Hankinson's equation predicts that the uniaxial compressive strength of the wood in a direction at an angle θ to the grain is given by σθ = σ0 σ90 / (σ0 sin²θ + σ90 cos²θ). Even though the original relation was based on studies of spruce, Hankinson's equation has been found to be remarkably accurate for many other types of wood. A generalized form of the Hankinson formula has also been used for predicting the uniaxial tensile strength of wood at an angle to the grain. This formula has the form σθ = σ0 σ90 / (σ0 sinⁿθ + σ90 cosⁿθ), where the exponent n can take values between 1.5 and 2. The stress wave velocity V(θ) at angle θ to the grain at the elastic limit can similarly be obtained from the Hankinson formula V(θ) = V0 V90 / (V0 sin²θ + V90 cos²θ), where V0 is the velocity parallel to the grain, V90 is the velocity perpendicular to the grain and θ is the grain angle. See also Material failure theory Linear elasticity Hooke's law Orthotropic material Transverse isotropy References Materials science Solid mechanics Equations
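As a worked illustration of the formulas above, the following Python sketch (not part of the article) evaluates the generalized Hankinson expression for a range of grain angles. The strength values and the default exponent are invented example inputs, not measured data.

```python
# Illustrative sketch of Hankinson's formula (general form with exponent n):
#   sigma_theta = sigma_0 * sigma_90 / (sigma_0 * sin(theta)**n + sigma_90 * cos(theta)**n)
# The strength values below are invented example inputs, not measured data.
import math

def hankinson(sigma_parallel: float, sigma_perpendicular: float,
              theta_degrees: float, n: float = 2.0) -> float:
    """Off-axis strength at angle theta to the grain (same units as the inputs)."""
    t = math.radians(theta_degrees)
    return (sigma_parallel * sigma_perpendicular /
            (sigma_parallel * math.sin(t) ** n +
             sigma_perpendicular * math.cos(t) ** n))

if __name__ == "__main__":
    # Example: assumed compressive strengths of 40 MPa parallel and 5 MPa
    # perpendicular to the grain.
    for angle in (0, 15, 30, 45, 60, 90):
        print(f"{angle:3d} deg: {hankinson(40.0, 5.0, angle):6.2f} MPa")
```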
Hankinson's equation
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
286
[ "Solid mechanics", "Applied and interdisciplinary physics", "Mathematical objects", "Materials science", "Equations", "Mechanics", "nan" ]
7,062,643
https://en.wikipedia.org/wiki/Oil%20burner%20%28engine%29
An oil burner engine is a steam engine that uses oil as its fuel. The term is usually applied to a locomotive or ship engine that burns oil to heat water, to produce the steam which drives the pistons, or turbines, from which the power is derived. This is mechanically very different from diesel engines, which use internal combustion, although they are sometimes colloquially referred to as oil burners. History A variety of experimental oil powered steam boilers were patented in the 1860s. Most of the early patents used steam to spray atomized oil into the steam boilers furnace. Attempts to burn oil from a free surface were unsuccessful due to the inherently low rates of combustion from the available surface area. On 21 April 1868, the steam yacht Henrietta made a voyage down the river Clyde powered by an oil fired boiler designed and patented by a Mr Donald of George Miller & Co. Donald's design used a jet of dry steam to spray oil into a furnace lined with fireproof bricks. Prior to the Henrietta’s oil burner conversion, George Miller & Co was recorded as having used oil to power their works in Glasgow for a “considerable time”. During the late 19th century numerous burner designs were patented using combinations of steam, compressed air and injection pumps to spray oil into boiler furnaces. Most of the early oil burner designs were commercial failures due to the high cost of oil (relative to coal) rather than any technical issues with the burners themselves. During the early 20th century, marine and large oil burning steam engines generally used electric motor or steam driven injection pumps. Oil would be draw from a storage tank through suction strainers and across viscosity-reducing oil heaters. The oil would then be pumped through discharge strainers before entering the burners as a whirling mist. Combustion air was introduced through special furnace-fronts, which were fitted with dampers to regulate the supply. Smaller land-based oil burning steam engines typically used steam jets fed from the main boiler to blast atomized oil into the burner nozzles. Steam ships In the 1870s, Caspian steamships began using mazut, a residual fuel oil which at that time was produced as a waste stream by the many oil refineries located in the Absheron peninsula. During the late 19th century Mazut remained cheap and plentiful in the Caspian region. In 1870, either the SS Iran or SS Constantine (depending on source) became the first ship to convert to burning fuel oil, both were Caspian based merchant steamships. During the 1870s, the Imperial Russian Navy converted the ships of the Caspian fleet to oil burners starting with the Khivenets in 1874. In 1894, the oil tanker SS Baku Standard became the first oil burning vessel to cross the Atlantic Ocean. In 1903, the Red Star Liner SS Kensington became the first passenger liner to make the Atlantic crossing with boilers fired by fuel oil. Fuel oil has a higher energy density than coal and oil powered ships did not need to employ stokers however coal remained the dominant power source for marine boilers throughout the 19th century primarily due to the relatively high cost of fuel oil. Oil was used in marine boilers to a greater extent during the early 20th century. By 1939, about half the world’s ships burned fuel oil, of these about half had steam engines and the other half used diesel engines. Steam locomotives Oil burners designed by Thomas Urquhart were fitted to the locomotives of the Gryazi-Tsaritsyn railway in southern Russia. 
Thomas Urquhart, who was employed as a Locomotive Superintendent by the Gryazi-Tsaritsyn Railway Company, began his experiments in 1874. By 1885 all the locomotives of the Gryazi-Tsaritsyn Railway had been converted to run on fuel oil. In Great Britain, an early pioneer of oil burning railway locomotives was James Holden, of the Great Eastern Railway. In James Holden's system, steam was raised by burning coal before the oil fuel was turned on. Holden's first oil burning locomotive Petrolea, was a class T19 2-4-0. Built in 1893, Petrolea burned waste oil that the railway had previously been discharging into the River Lea. Due to the relatively low cost of coal, oil was rarely used on Britain's stream trains and in most cases only where there was a shortage of coal. In the United States, the first oil burning steam locomotive was in service on the Southern Pacific railroad by 1900. By 1915 there were 4,259 oil burning steam locomotives in the United States, which represented 6.5% of all the locomotives then in service. Most oil burners were operated in areas west of the Mississippi where oil was abundant. American usage of oil burning steam locomotives peaked in 1945 when they were responsible for around 20% of all the fuel consumed (measured by energy content) during rail freight operations. After WW2, both oil and coal burning steam locomotives were replaced by more efficient diesel engines and had been almost entirely phased out of service by 1960. Notable early oil-fired steamships Passenger liners SS Kensington NMS Regele Carol I (one oil fired + one coal fired boiler) SS Tenyo Maru SS George Washington Warships Re Umberto-class - Italian ironclad battleships equipped to burn a mix of coal and oil Rostislav - Russian battleship HMS Spiteful - British Royal Navy destroyer Paulding-class destroyers - US Navy Notable oil-fired steam locomotives General Most cab forward locomotives Some Fairlie locomotives Some steam locomotives used on heritage railways Advanced steam technology locomotives Australia NSWGR D55 Class NSWGR D59 Class VR J Class VR R Class WAGR U Class WAGR Pr Class India Darjeeling Himalayan Railway Nilgiri Mountain Railway Great Britain GER Class T19 GER Class P43 WD Austerity 2-10-0 (3672 converted in preservation). GWR oil burning steam locomotives (4965 Rood Ashton Hall to be converted during overhaul in preservation). New Zealand NZR JA class (North British-built locomotives only) NZR JB class NZR K class (1932) - converted from coal 1947-53 NZR KA class - converted from coal 1947-53 North America ('*' symbol indicates locomotive was converted or is being converted from coal-burning to oil-burning in either revenue service or excursion service) Sierra Railway 3 - Part of Railtown 1897 State Historic Park (Jamestown, CA) Sierra Railway 28 - Part of Railtown 1897 State Historic Park (Jamestown, CA) McCloud Railway 25 - Oregon Coast Scenic Railroad (Garibaldi, OR) Polson Logging Co. 
2 - Albany & Eastern Railroad (Albany, OR) California Western 45 - California Western Railroad (Fort Bragg, CA) US Army Transportation Corps 1702* - Great Smoky Mountains Railroad (Bryson City, NC) Southern Railway 722* - Great Smoky Mountains Railroad (Bryson City, NC) Union Pacific 844* - UP Heritage Fleet (Cheyenne, WY) Union Pacific 4014* - UP Heritage Fleet (Cheyenne, WY) Union Pacific 3985* - Railroading Heritage of Midwest America (Silvis, IL) Union Pacific 5511 - Railroading Heritage of Midwest America (Silvis, IL) Union Pacific 737* - Double-T Agricultural Museum (Stevinson, CA) White Pass & Yukon Route 73 - White Pass & Yukon Route (Skagway, AK) Alaska Railroad 557* - Engine 557 Restoration Company (Anchorage, AK) Santa Fe 5000 - Amarillo, TX Santa Fe 3759* - Locomotive Park (Kingman, AZ) Santa Fe 3751* - San Bernardino Railroad Historical Society (San Bernardino, CA) Santa Fe 3450* - RailGiants Train Museum (Pomona, CA) Santa Fe 3415* - Abilene & Smoky Valley Railroad (Abilene, KS) Santa Fe 2926 - New Mexico Steam Locomotive & Railroad Historical Society (Albuquerque, NM) Santa Fe 1316* - Texas State Railroad (Palestine, TX) Texas & Pacific 610 - Texas State Railroad (Palestine, TX) Southern Pine Lumber Co. 28 - Texas State Railroad (Palestine, TX) Tremont & Gulf 30/Magma Arizona 7 - Texas State Railroad (Palestine, TX) Lake Superior & Ishpeming 18* - Colebrookdale Railroad (Boyertown, PA) Florida East Coast 148 - US Sugar Corporation (Clewiston, FL) Atlantic Coast Line 1504* - US Sugar Corporation (Clewiston, FL) Grand Canyon Railway 29* - Grand Canyon Railway (Williams, AR) Grand Canyon Railway 4960* - Grand Canyon Railway (Williams, AR) Oregon Railroad & Navigation 197 - Oregon Rail Heritage Center (Portland, OR) Spokane, Portland & Seattle 700 - Oregon Rail Heritage Center (Portland, OR) Southern Pacific 4449 - Oregon Rail Heritage Center (Portland, OR) Southern Pacific 4460 - National Museum of Transportation (Kirkwood, MO) Southern Pacific 4294 - California State Railroad Museum (Sacramento, CA) Southern Pacific 2479 - Niles Canyon Railway (Sunol, CA) Southern Pacific 2472 - Golden Gate Railroad Museum Southern Pacific 2467 - Pacific Locomotive Association Southern Pacific 2353 - Pacific Southwest Railway Museum (Campo, CA) Southern Pacific 1744 - Niles Canyon Railway (Sunol, CA) Southern Pacific 786 - Austin Steam Train Association, Inc. (Cedar Park, TX) Southern Pacific 745 - Louisiana Steam Train Association, Inc. (Jefferson, LA) Southern Pacific 18 - Eastern California Museum (Independence, CA) Cotton Belt 819 - Arkansas Railroad Museum (Pine Bluff, AR) Frisco 1522 - National Museum of Transportation (Kirkwood, MO) Southern Railway 401* - Monticello Railway Museum (Monticello, IL) Reading 2100* - American Steam Railroad Preservation Association (Cleveland, OH) See also Oil refinery Steam power during the Industrial Revolution Timeline of steam power References External links Fuel energy & steam traction Engine technology Energy conversion Combustion engineering Steam engine technology
Oil burner (engine)
[ "Technology", "Engineering" ]
2,022
[ "Engine technology", "Industrial engineering", "Combustion engineering", "Engines" ]
6,899,646
https://en.wikipedia.org/wiki/Lead-bismuth%20eutectic
Lead-Bismuth Eutectic or LBE is a eutectic alloy of lead (44.5 at%) and bismuth (55.5 at%) used as a coolant in some nuclear reactors, and is a proposed coolant for the lead-cooled fast reactor, part of the Generation IV reactor initiative. It has a melting point of 123.5 °C/254.3 °F (pure lead melts at 327 °C/621 °F, pure bismuth at 271 °C/520 °F) and a boiling point of 1,670 °C/3,038 °F. Lead-bismuth alloys with between 30% and 75% bismuth all have melting points below 200 °C/392 °F. Alloys with between 48% and 63% bismuth have melting points below 150 °C/302 °F. While lead expands slightly on melting and bismuth contracts slightly on melting, LBE has negligible change in volume on melting. History The Soviet Alfa-class submarines used LBE as a coolant for their nuclear reactors throughout the Cold War. OKB Gidropress (the Russian developers of the VVER-type Light-water reactors) has expertise in LBE reactors. The SVBR-75/100, a modern design of this type, is one example of the extensive Russian experience with this technology. Gen4 Energy (formerly Hyperion Power Generation), a United States firm connected with Los Alamos National Laboratory, announced plans in 2008 to design and deploy a uranium nitride fueled small modular reactor cooled by lead-bismuth eutectic for commercial power generation, district heating, and desalinization. The proposed reactor, called the Gen4 Module, was planned as a 70 MWth reactor of the sealed modular type, factory assembled and transported to site for installation, and transported back to the factory for refuelling. Gen4 Energy ceased operations in 2018. Advantages As compared to sodium-based liquid metal coolants such as liquid sodium or NaK, lead-based coolants have significantly higher boiling points, meaning a reactor can be operated without risk of coolant boiling at much higher temperatures. This improves thermal efficiency and could potentially allow hydrogen production through thermochemical processes. Lead and LBE also do not react readily with water or air, in contrast to sodium and NaK which ignite spontaneously in air and react explosively with water. This means that lead- or LBE-cooled reactors, unlike sodium-cooled designs, would not need an intermediate coolant loop, which reduces the capital investment required for a plant. Both lead and bismuth are also an excellent radiation shield, absorbing gamma radiation while simultaneously being virtually transparent to neutrons. In contrast, sodium forms the potent gamma emitter sodium-24 (half-life 15 hours) following intense neutron radiation, requiring a large radiation shield for the primary cooling loop. As heavy nuclei, lead and bismuth can be used as spallation targets for non-fission neutron production, as in accelerator transmutation of waste (see energy amplifier). Both lead-based and sodium-based coolants have the advantage of relatively high boiling points as compared to water, meaning it is not necessary to pressurise the reactor even at high temperatures. This improves safety as it reduces the probability of a loss of coolant accident (LOCA), and allows for passively safe designs. The thermodynamic cycle (Carnot cycle) is also more efficient with a larger difference of temperature. However, a disadvantage of higher temperatures is also the higher corrosion rate of metallic structural components in LBE due to their increased solubility in liquid LBE with temperature (formation of amalgam) and to liquid metal embrittlement. 
Limitations Lead and LBE coolant are more corrosive to steel than sodium, and this puts an upper limit on the velocity of coolant flow through the reactor due to safety considerations. Furthermore, the higher melting points of lead and LBE (327 °C and 123.5 °C respectively) may mean that solidification of the coolant may be a greater problem when the reactor is operated at lower temperatures. Finally, upon neutron radiation bismuth-209, the main isotope of bismuth present in LBE coolant, undergoes neutron capture and subsequent beta decay, forming polonium-210, a potent alpha emitter. The presence of radioactive polonium in the coolant would require special precautions to control alpha contamination during refueling of the reactor and handling components in contact with LBE. See also Subcritical reactor (accelerator-driven system) References External links NEA 2015 LBE Handbook Fusible alloys Nuclear reactor coolants Nuclear materials Bismuth
Lead-bismuth eutectic
[ "Physics", "Chemistry", "Materials_science" ]
958
[ "Lead alloys", "Metallurgy", "Fusible alloys", "Materials", "Nuclear materials", "Alloys", "Matter" ]
6,900,318
https://en.wikipedia.org/wiki/Building%20insulation
Building insulation is material used in a building (specifically the building envelope) to reduce the flow of thermal energy. While the majority of insulation in buildings is for thermal purposes, the term also applies to acoustic insulation, fire insulation, and impact insulation (e.g. for vibrations caused by industrial applications). Often an insulation material will be chosen for its ability to perform several of these functions at once. Since prehistoric times, humans have created thermal insulation with materials such as animal fur and plants. With the agricultural development, earth, stone, and cave shelters arose. In the 19th century, people started to produce insulated panels and other artificial materials. Now, insulation is divided into two main categories: bulk insulation and reflective insulation. Buildings typically use a combination. Insulation is an important economic and environmental investment for buildings. By installing insulation, buildings use less energy for heating and cooling and occupants experience less thermal variability. Retrofitting buildings with further insulation is an important climate change mitigation tactic, especially when buildings are heated by oil, natural gas, or coal-based electricity. Local and national governments and utilities often have a mix of incentives and regulations to encourage insulation efforts on new and renovated buildings as part of efficiency programs in order to reduce grid energy use and its related environmental impacts and infrastructure costs. Insulation The definition of thermal insulation Thermal insulation usually refers to the use of appropriate insulation materials and design adaptations for buildings to slow the transfer of heat through the enclosure to reduce heat loss and gain. The transfer of heat is caused by the temperature difference between indoors and outdoors. Heat may be transferred either by conduction, convection, or radiation. The rate of transmission is closely related to the propagating medium. Heat is lost or gained by transmission through the ceilings, walls, floors, windows, and doors. This heat reduction and acquisition are usually unwelcome. It not only increases the load on the HVAC system resulting in more energy wastes but also reduces the thermal comfort of people in the building. Thermal insulation in buildings is an important factor in achieving thermal comfort for its occupants. Insulation reduces unwanted heat loss or gain and can decrease the energy demands of heating and cooling systems. It does not necessarily deal with issues of adequate ventilation and may or may not affect the level of sound insulation. In a narrow sense, insulation can just refer to the insulation materials employed to slow heat loss, such as: cellulose, glass wool, rock wool, polystyrene, polyurethane foam, vermiculite, perlite, wood fiber, plant fiber (cannabis, flax, cotton, cork, etc.), recycled cotton denim, straw, animal fiber (sheep's wool), cement, and earth or soil, reflective insulation (also known as a radiant barrier) but it can also involve a range of designs and techniques to address the main modes of heat transfer - conduction, radiation, and convection materials. Most of the materials in the above list only retain a large amount of air or other gases between the molecules of the material. The gas conducts heat much less than the solids. These materials can form gas cavities, which can be used to insulate heat with low heat transfer efficiency. 
A similar situation occurs in the fur of animals and in birds' feathers: animal hair exploits the low thermal conductivity of small pockets of gas, and so reduces heat loss. The effectiveness of reflective insulation (radiant barrier) is commonly evaluated by the reflectivity (emittance) of the surface with an airspace facing the heat source. The effectiveness of bulk insulation is commonly evaluated by its R-value, of which there are two conventions – metric (SI) (with unit K⋅W−1⋅m2) and US customary (with unit °F⋅ft2⋅h/BTU), the former being numerically 0.176 times the latter – or by the reciprocal quantity, the thermal transmittance or U-value (W⋅K−1⋅m−2). For example, in the US the recommended insulation standard for attics is at least R-38 in US units (equivalent to R-6.7, or a U-value of 0.15, in SI units). The equivalent standards in the UK are technically comparable: Approved Document L would normally require an average U-value over the roof area of 0.11 to 0.18, depending on the age of the property and the type of roof construction. Newer buildings have to meet a higher standard than those built under previous versions of the regulations. It is important to realise that a single R-value or U-value does not take into account the quality of construction or local environmental factors for each building. Construction quality issues can include inadequate vapor barriers and problems with draft-proofing. In addition, the properties and density of the insulation material itself are critical. Most countries have some regime of either inspections or certification of approved installers to make sure that good standards are maintained. History of thermal insulation Compared with other building materials, purpose-made thermal insulation has a relatively short history, but humans have been aware of the importance of insulation for a long time. In prehistoric times, when humans began building shelters against wild animals and harsh weather, they also began to explore thermal insulation. Prehistoric peoples built their dwellings from animal skins, fur, and plant materials such as reed, flax, and straw. These materials had first been used for clothing; because their dwellings were temporary, people tended to reuse the clothing materials they already knew, which were easy to obtain and process. Animal furs and plant products hold a large amount of air between their fibers, creating air cavities that reduce heat exchange. Later, longer life spans and the development of agriculture meant that people needed fixed places of residence, and earth-sheltered houses, stone houses, and cave dwellings began to emerge. The high density of these materials causes a time-lag effect in heat transfer, which makes the inside temperature change slowly. This effect keeps the inside of such buildings warm in winter and cool in summer, and because materials like earth and stone are easy to obtain, the design was popular in many places, such as Russia, Iceland, and Greenland. Organic materials were the first available for building shelters that protected people from bad weather and helped keep them warm. However, organic materials such as animal and plant fiber do not last long, so these natural materials could not satisfy people's long-term need for thermal insulation, and people began to search for more durable substitutes. 
In the 19th century, people were no longer satisfied with using natural materials for thermal insulation; they processed organic materials and produced the first insulated panels. At the same time, more and more artificial materials started to emerge, and a large range of artificial thermal insulation materials was developed, e.g. rock wool, fiberglass, foam glass, and hollow bricks. Significance of thermal insulation Thermal insulation plays a significant role in buildings: high demands for thermal comfort result in a large amount of energy being consumed to fully heat all rooms. Around 40% of energy consumption can be attributed to buildings, mainly for heating and cooling. Sufficient thermal insulation is fundamental to ensuring a healthy indoor environment and to protecting the structure from damage. It is also a key factor in dealing with high energy consumption, since it reduces the heat flow through the building envelope. Good thermal insulation can also bring the following benefits to the building: Preventing building damage caused by the formation of moisture on the inside of the building envelope. Thermal insulation ensures that room surface temperatures do not fall below a critical level, which avoids condensation and the formation of mould. According to the Building Damage reports, 12.7% and 14% of building damage was caused by mould problems. Without sufficient thermal insulation in the building, high relative humidity inside the building will lead to condensation and finally result in mould problems. Producing a comfortable thermal environment for people living in the building. Good thermal insulation allows sufficiently high temperatures inside the building during the winter, and it achieves the same level of thermal comfort with relatively low air temperatures in the summer. Reducing unwanted heating or cooling energy input. Thermal insulation reduces the heat exchange through the building envelope, which allows the heating and cooling equipment to achieve the same indoor air temperature with less energy input. Planning and examples How much insulation a house should have depends on building design, climate, energy costs, budget, and personal preference. Regional climates make for different requirements. Building codes often set minimum standards for fire safety and energy efficiency, which can be voluntarily exceeded within the context of sustainable architecture for green certifications such as LEED. The insulation strategy of a building needs to be based on a careful consideration of the mode of energy transfer and the direction and intensity in which it moves. This may alter throughout the day and from season to season. It is important to choose an appropriate design, the correct combination of materials, and building techniques to suit the particular situation. United States The thermal insulation requirements in the USA follow ASHRAE 90.1, which is the U.S. energy standard for all commercial and some residential buildings. The ASHRAE 90.1 standard considers multiple perspectives, such as prescriptive requirements, building envelope types, and energy cost budget, and it also has some mandatory thermal insulation requirements. All thermal insulation requirements in ASHRAE 90.1 are divided by climate zone, meaning that the amount of insulation needed for a building is determined by the climate zone in which the building is located. The thermal insulation requirements are given as an R-value, with a continuous-insulation R-value as a second index. 
The requirements for different types of walls (wood framed walls, steel framed walls, and mass walls) are shown in the table. To determine whether you should add insulation, you first need to find out how much insulation you already have in your home and where. A qualified home energy auditor will include an insulation check as a routine part of a whole-house energy audit. However, you can sometimes perform a self-assessment in certain areas of the home, such as attics. Here, a visual inspection, along with use of a ruler, can give you a sense of whether you may benefit from additional insulation. Residential energy audits are often initiated due homeowners being alerted by a gradual increase in their utility bills which often reflects the buildings attic as being poorly insulated. An initial estimate of insulation needs in the United States can be determined by the US Department of Energy's ZIP code insulation calculator. Russia In Russia, the availability of abundant and cheap gas has led to poorly insulated, overheated, and inefficient consumption of energy. The Russian Center for Energy Efficiency found that Russian buildings are either over- or under-heated, and often consume up to 50 percent more heat and hot water than needed. 53 percent of all carbon dioxide (CO2) emissions in Russia are produced through heating and generating electricity for buildings. However, greenhouse gas emissions from the former Soviet Bloc are still below their 1990 levels. Energy codes in the Soviet Union start to establish in 1955, norms and rules first mentioned the performance of the building envelope and heat losses, and they formed norms to regulate the energy characteristics of the building envelope. And the most recent version of Russia energy code (SP 50.13330.2012) was published in 2003. The energy codes of Russia were established by experts of government institutes or nongovernmental organization like ABOK. The energy code of Russia have been revised several times since 1955, the 1995 versions reduced energy depletion per square meter for heating by 20%, and the 2000 version reduced by 40%. The code also has a mandatory requirement on thermal insulation of buildings accompany with some voluntary provisions, mainly focused on heat loss from the building shell. Australia The thermal insulation requirements of Australia follow the climate of the building location, the table below is the minimum insulation requirements based on climate, which is determined by the Building Code of Australia (BCA). The building in Australia applies insulation in roofs, ceilings, external walls, and various components of the building (such as Veranda roofs in the hot climate, Bulkhead, Floors). Bulkheads (wall section between ceilings which are in different heights) should have the same insulated level as the ceilings since they suffer the same temperature levels. And the external walls of Australia's building should be insulated to decrease all kinds of heat transfer. Besides the walls and ceilings, the Australia energy code also requires insulation for floors (not all floors). Raised timber floors must have around 400mm soil clearance below the lowest timbers to provide sufficient space for insulation, and concrete slab such as suspended slabs and slab-on-ground should be insulated in the same way. China China has various climatic characters, which are divided by geographical areas. There are five climate zones in China to identify the building design include thermal insulation. 
(These are the very cold zone, the cold zone, the hot summer and cold winter zone, the hot summer and warm winter zone, and the mild zone.)

Germany

Germany established its requirements for building energy efficiency in 1977, and the first performance-based energy code, the Energy Saving Ordinance (EnEV), was introduced in 2002. The 2009 version of the Energy Saving Ordinance increased the minimum R-values for the thermal insulation of the building shell and introduced requirements for air-tightness tests. The Energy Saving Ordinance (EnEV) 2013 clarified the requirement for thermal insulation of the ceiling: where the requirement is not already fulfilled, thermal insulation must be added to accessible ceilings above heated rooms on the top floor (the U-value must be under 0.24 W/(m2⋅K)).

Netherlands

The building decree (Bouwbesluit) of the Netherlands makes a clear distinction between home renovation and newly built houses. New builds count as completely new homes, but new additions and extensions are also considered to be new builds. Furthermore, a renovation in which at least 25% of the surface of the integral building is changed or enlarged is also considered to be a new build. Therefore, during thorough renovations, the new construction may have to meet the Dutch new-build requirements for insulation. If the renovation is of a smaller nature, the renovation directive applies. Examples of renovation are post-insulation of a cavity wall and post-insulation of a sloping roof against the roof boarding or under the tiles. Note that every renovation must meet the minimum Rc value of 1.3 m2⋅K/W. If the current insulation has a higher insulation value (the legally obtained level), then this value counts as the lower limit.

New Zealand

Insulation requirements for new houses and small buildings in New Zealand are set out in the Building Code and standard NZS 4218:2009. Zones 1 and 2 include most of the North Island, including Waiheke Island and Great Barrier Island. Zone 3 includes the Taupo District, Ruapehu District, and the Rangitikei District north of 39°50′ latitude south (i.e. north of and including Mangaweka) in the North Island, the South Island, Stewart Island, and the Chatham Islands.

United Kingdom

Insulation requirements are specified in the Building Regulations; in England and Wales the technical content is published as Approved Documents. Document L defines thermal requirements and, while setting minimum standards, allows the U-values for elements such as roofs and walls to be traded off against other factors, such as the type of heating system, in a whole-building energy use assessment. Scotland and Northern Ireland have similar systems, but the detailed technical standards are not identical. The standards have been revised several times in recent years, requiring more efficient use of energy as the UK moves towards a low-carbon economy.

Technologies and strategies in different climates

Cold climates

Strategies in cold climate

In cold conditions, the main aim is to reduce heat flow out of the building. The components of the building envelope—windows, doors, roofs, floors/foundations, walls, and air infiltration barriers—are all important sources of heat loss; in an otherwise well insulated home, windows then become an important source of heat transfer.
The resistance to conducted heat loss is an R-value of about 0.17 m2⋅K⋅W−1 for standard single glazing, and more than twice that for typical double glazing (compared to 2–4 m2⋅K⋅W−1 for glass wool batts). Losses can be reduced by good weatherisation, bulk insulation, and minimising the amount of non-insulative (particularly non-solar facing) glazing. Losses of indoor thermal radiation can also be reduced with spectrally selective (low-e, low-emissivity) glazing. Some insulated glazing systems can double to triple R-values.

Technologies in cold climate

Vacuum panels and aerogel wall surface insulation are two technologies that can enhance the energy performance and thermal insulating effectiveness of residential and commercial buildings in cold climate regions such as New England and Boston. In the past, thermal insulation materials with high insulating performance were very expensive. With the development of the materials industry and advances in science and technology, more and more insulation materials and insulation technologies emerged during the 20th century, giving various options for building insulation. Especially in cold climate areas, a large amount of thermal insulation is needed to deal with the heat losses caused by cold weather (infiltration, ventilation, and radiation). Two technologies are worth discussing:

Exterior insulation systems (EIFS) based on vacuum insulation panels (VIP)

VIPs are noted for their ultra-high thermal resistance: their thermal resistance is four to eight times that of conventional foam insulation materials, which allows a thinner layer of thermal insulation on the building shell compared with traditional materials. VIPs are usually composed of a core panel and a metallic enclosure. The common materials used to produce core panels are fumed and precipitated silica, open-cell polyurethane (PU), and different types of fiberglass. The core panel is covered by the metallic enclosure, which keeps the core in a vacuum environment. Although this material has high thermal performance, it has remained expensive over the last twenty years.

Aerogel exterior and interior wall surface insulation

Aerogel was first discovered by Samuel Stephens Kistler in 1931. It is a kind of gel in which the liquid component of the material is replaced by a gas, creating a material that is 99% air. Because of its designed high porosity, this material has a relatively high R-value of around R-10 per inch, considerably higher than conventional plastic foam insulation materials. However, difficulties in processing and low productivity limit the development of aerogels, and the cost of this material remains high. Only two companies in the United States offer a commercial aerogel product for wall insulation purposes.

Aerogels for glazing

The DOE estimates thermal losses nearing 30% through windows, and thermal gains from sunlight leading to unwanted heating. Due to the high R-value associated with aerogels, their use for glazing applications has become an area of interest explored by many research institutions. Their implementation, however, must not hinder the primary function of windows: transparency.
Typically, aerogels have low transmission and appear hazy, even amongst those considered transparent, which is why they have generally been reserved for wall insulation applications. Eldho Abraham, a researcher at the University of Colorado Boulder, recently demonstrated the capabilities of aerogels by designing a silanized cellulose aerogel (SiCellA) which offers near 99% visible transmission together with thermal conductivities that effectively reject or retain heat depending on the interior environment, in the manner of heating/cooling adjustments. This is due to the designed 97.5% porosity of the SiCellA: pores are smaller than the wavelength of visible light, leading to transmission; the pores also minimize contact between the cellulose fibers, leading to lower thermal conductivities. The use of cellulose fibers lends itself to sustainability, as this is a naturally derived fiber sourced from wood pulps. This opens the door not only to aerogels but also to wood-based materials more generally, in an effort to support sustainable design alternatives with compounding energy-saving effects.

Hot climates

Strategies in hot climate

In hot conditions, the greatest source of heat energy is solar radiation. This can enter buildings directly through windows or it can heat the building shell to a higher temperature than the ambient, increasing the heat transfer through the building envelope. The Solar Heat Gain Coefficient (SHGC) (a measure of solar heat transmittance) of standard single glazing can be around 78–85%. Solar gain can be reduced by adequate shading from the sun, light-coloured roofing, spectrally selective (heat-reflective) paints and coatings, and various types of insulation for the rest of the envelope. Specially coated glazing can reduce SHGC to around 10%. Radiant barriers are highly effective for attic spaces in hot climates; in this application, they are much more effective than in cold climates. For downward heat flow, convection is weak and radiation dominates heat transfer across an air space. Radiant barriers must face an adequate air gap to be effective. If refrigerative air-conditioning is employed in a hot, humid climate, then it is particularly important to seal the building envelope. Dehumidification of humid air infiltration can waste significant energy. On the other hand, some building designs are based on effective cross-ventilation instead of refrigerative air-conditioning to provide convective cooling from prevailing breezes.

Technologies in hot climate

In hot dry climate regions such as Egypt and other parts of Africa, thermal comfort in summer is the main concern: nearly half of urban energy consumption goes to air conditioning systems to satisfy the demand for thermal comfort, and many developing countries in hot dry climate regions suffer electricity shortages in summer due to the increasing use of cooling equipment. A technology called the cool roof has been introduced to ameliorate this situation. In the past, architects used thermal mass materials to improve thermal comfort; the heavy thermal mass causes a time-lag effect, which slows the rate of heat transfer during the daytime and keeps the indoor temperature within a certain range (hot and dry climate regions usually have a large temperature difference between day and night). The cool roof is a low-cost technology based on solar reflectance and thermal emittance, which uses reflective materials and light colors to reflect solar radiation.
Solar reflectance and thermal emittance are the two key factors that determine the thermal performance of the roof, and they can also improve the effectiveness of the thermal insulation, since around 30% of solar radiation is reflected back to the sky. The shape of the roof is also a consideration: a curved roof can receive less solar energy than conventional shapes. The obvious drawback of this technology is that the high reflectivity can cause visual discomfort. In addition, the high reflectivity and thermal emittance of the roof will increase the heating load of the building during the heating season.

Orientation – passive solar design

Optimal placement of building elements (e.g. windows, doors, heaters) can play a significant role in insulation by considering the impact of solar radiation on the building and the prevailing breezes. Reflective laminates can help reduce passive solar heat in pole barns, garages, and metal buildings.

Construction

See insulated glass and quadruple glazing for discussion of windows.

Building envelope

The thermal envelope defines the conditioned or living space in a house. The attic or basement may or may not be included in this area. Reducing airflow from inside to outside can help to reduce convective heat transfer significantly. Ensuring low convective heat transfer also requires attention to building construction (weatherization) and the correct installation of insulative materials. The less natural airflow into a building, the more mechanical ventilation will be required to support human comfort. High humidity can be a significant issue associated with lack of airflow, causing condensation, rotting construction materials, and encouraging microbial growth such as mould and bacteria. Moisture can also drastically reduce the effectiveness of insulation by creating a thermal bridge (see below). Air exchange systems can be actively or passively incorporated to address these problems.

Thermal bridge

Thermal bridges are points in the building envelope that allow heat conduction to occur. Since heat flows through the path of least resistance, thermal bridges can contribute to poor energy performance. A thermal bridge is created when materials create a continuous path across a temperature difference, in which the heat flow is not interrupted by thermal insulation. Common building materials that are poor insulators include glass and metal. A building design may have limited capacity for insulation in some areas of the structure. A common construction design is based on stud walls, in which thermal bridges are common in wood or steel studs and joists, which are typically fastened with metal. Notable areas that most commonly lack sufficient insulation are the corners of buildings, and areas where insulation has been removed or displaced to make room for system infrastructure, such as electrical boxes (outlets and light switches), plumbing, fire alarm equipment, etc. Thermal bridges can also be created by uncoordinated construction, for example by closing off parts of external walls before they are fully insulated. The existence of inaccessible voids within the wall cavity which are devoid of insulation can be a source of thermal bridging. Some forms of insulation transfer heat more readily when wet, and can therefore also form a thermal bridge in this state. Heat conduction across a bridge can be minimized by any of the following: reducing the cross-sectional area of the bridges, increasing the bridge length, or decreasing the number of thermal bridges.
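To make the effect of thermal bridging concrete, the following sketch uses the simple parallel-path method to estimate how much wood studs degrade the nominal R-value of an insulated stud wall. The framing fraction, R-values, and the parallel-path assumption are illustrative placeholders, not taken from any code or standard.

```python
def parallel_path_r(r_insulated, r_framing, framing_fraction):
    """Effective R-value of a wall with thermal bridges (parallel-path method).
    The two paths conduct in parallel, so U-values (1/R) add by area weighting."""
    u_effective = (framing_fraction / r_framing
                   + (1.0 - framing_fraction) / r_insulated)
    return 1.0 / u_effective

# Hypothetical wall: R-13 batt between studs, wood studs at roughly R-4.4,
# framing occupying about 23% of the wall area.
r_eff = parallel_path_r(r_insulated=13.0, r_framing=4.4, framing_fraction=0.23)
print(f"Effective R-value ≈ {r_eff:.1f} (vs. nominal R-13)")
# Prints roughly R-9: the studs noticeably degrade the nominal insulation value.
```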
One method of reducing thermal bridge effects is the installation of an insulation board (e.g. foam board such as EPS or XPS, wood fibre board, etc.) over the exterior wall. Another method is using insulated lumber framing to create a thermal break inside the wall.

Installation

Insulating buildings during construction is much easier than retrofitting, as the insulation is generally hidden and parts of the building need to be deconstructed to reach it. Depending on the country there are different regulations as to which type of insulation is the best alternative for buildings, considering energy efficiency and environmental factors. Geographical location also affects the type of insulation needed, as colder climates require a bigger investment in installation costs than warmer ones.

Materials

There are essentially two types of building insulation – bulk insulation and reflective insulation. Most buildings use a combination of both types to make up a total building insulation system. The type of insulation used is matched to create maximum resistance to each of the three forms of building heat transfer – conduction, convection, and radiation.

The classification of thermal insulation materials

According to the three modes of heat exchange, most thermal insulation used in buildings can be divided into two categories: conductive and convective insulators, and radiant heat barriers. More detailed classifications distinguish between different materials. Many thermal insulation materials work by trapping tiny air cavities within the material; these air cavities greatly reduce the heat exchange through the material. There are, however, two exceptions that do not use air cavities as their functional element for preventing heat transfer. One is reflective thermal insulation, which forms a radiation barrier across an airspace by attaching metal foil to one or both sides; this type of insulation mainly reduces radiant heat transfer. Although the polished metal foil can only counteract radiant heat transfer, its effect on overall heat transfer can be dramatic. The other thermal insulation that does not rely on air cavities is vacuum insulation: vacuum-insulated panels stop convection and conduction almost entirely and also largely mitigate radiant heat transfer. However, the effectiveness of vacuum insulation is limited at the edges of the material, since the edge of a vacuum panel can form a thermal bridge that reduces its overall effectiveness. The effectiveness of vacuum insulation is also related to the area of the vacuum panels.

Conductive and convective insulators

Bulk insulators block conductive heat transfer and convective flow either into or out of a building. Air is a very poor conductor of heat and therefore makes a good insulator. Insulation to resist conductive heat transfer uses air spaces between fibers, inside foam or plastic bubbles, and in building cavities like the attic. This is beneficial in an actively cooled or heated building, but can be a liability in a passively cooled building; adequate provisions for cooling by ventilation or radiation are needed.

Fibrous insulation materials

Fibrous materials are made of small-diameter fibers which evenly divide the airspace. The commonly used materials are silica, glass, rock wool, and slag wool. Glass fiber and mineral wool are the two insulation materials of this type that are most widely used.
Cellular insulation materials

Cellular insulation is composed of small cells which are separated from each other. The commonly used cellular materials are glass and foamed plastics such as polystyrene, polyolefin, and polyurethane.

Radiant heat barriers

Radiant barriers work in conjunction with an air space to reduce radiant heat transfer across the air space. Radiant or reflective insulation reflects heat instead of either absorbing it or letting it pass through. Radiant barriers are often used to reduce downward heat flow, because upward heat flow tends to be dominated by convection. This means that for attics, ceilings, and roofs, they are most effective in hot climates. They also have a role in reducing heat losses in cool climates. However, much greater insulation can be achieved through the addition of bulk insulators (see above). Some radiant barriers are spectrally selective and will preferentially reduce the flow of infra-red radiation in comparison to other wavelengths. For instance, low-emissivity (low-e) windows will transmit light and short-wave infra-red energy into a building but reflect the long-wave infra-red radiation generated by interior furnishings. Similarly, special heat-reflective paints are able to reflect more heat than visible light, or vice versa. Thermal emissivity values probably best reflect the effectiveness of radiant barriers. Some manufacturers quote an 'equivalent' R-value for these products, but these figures can be difficult to interpret, or even misleading, since R-value testing measures total heat loss in a laboratory setting and does not control the type of heat loss responsible for the net result (radiation, conduction, convection). A film of dirt or moisture can alter the emissivity and hence the performance of radiant barriers.

Eco-friendly insulation

Eco-friendly insulation is a term used for insulating products with limited environmental impact. The commonly accepted approach to determining whether or not an insulation product, or in fact any product or service, is eco-friendly is by doing a life-cycle assessment (LCA). A number of studies have compared the environmental impact of insulation materials in their application. The comparison shows that the most important factor is that the insulation value of the product meets the technical requirements for the application; only as a second-order consideration does differentiation between materials become relevant. The report commissioned by the Belgian government from VITO is a good example of such a study.

See also

External wall insulation
R-value (insulation) – includes a list of insulations with R-values
Thermal insulation
Thermal mass

Materials

Building insulation materials
Hempcrete
Insulated glazing
Mineral wool
Packing (firestopping)
Quadruple glazing
Window insulation film
Wool insulation

Design

Cool roof
Green roof
Low-energy building
Passive house
Passive solar building design
Passive solar design
Solar architecture
Superinsulation
Zero energy building
Zero heating building

Construction

Building construction
Building Envelope
Building performance
Deep energy retrofit
Weatherization

Other

Condensation
Draught excluder
HVAC
Ventilation

References

External links

Tips for Selecting Roof Insulation Best Practice Guide Air Sealing & Insulation Retrofits for Single Family Homes insulate surfaces from water, heat and moisture

Sustainable building Insulators Thermal protection Energy conservation Heat transfer Building materials
Building insulation
[ "Physics", "Chemistry", "Engineering" ]
6,554
[ "Transport phenomena", "Sustainable building", "Physical phenomena", "Heat transfer", "Building engineering", "Architecture", "Construction", "Materials", "Thermodynamics", "Matter", "Building materials" ]
6,904,158
https://en.wikipedia.org/wiki/Space%20Power%20Facility
Space Power Facility (SPF) is a NASA facility used to test spaceflight hardware under simulated launch and spaceflight conditions. The SPF is part of NASA's Neil A. Armstrong Test Facility, which in turn is part of the Glenn Research Center. The Neil A. Armstrong Test Facility and the SPF are located near Sandusky, Ohio (Oxford Township, Erie County, Ohio). The SPF is able to simulate a spacecraft's launch environment, as well as in-space environments. NASA has developed these capabilities under one roof to optimize testing of spaceflight hardware while minimizing transportation issues. Space Power Facility has become a "One Stop Shop" to qualify flight hardware for crewed space flight. This facility provides the capability to perform the following environmental testing: Thermal-vacuum testing Reverberation acoustic testing Mechanical vibration testing Modal testing Electromagnetic interference and compatibility testing Thermal-vacuum test chamber This is a vacuum chamber built by NASA in 1969. It stands high and in diameter, enclosing a bullet-shaped space. It is the world's largest thermal vacuum chamber. It was originally commissioned for nuclear-electric power studies under vacuum conditions, but was later decommissioned. It was subsequently recommissioned for use in testing spacecraft propulsion systems. Recent uses include testing the airbag landing systems for the Mars Pathfinder and the Mars Exploration Rovers Spirit and Opportunity, under simulated Mars atmospheric conditions. The facility was designed and constructed to test both nuclear and non-nuclear space hardware in a simulated low-Earth-orbiting environment. Although the facility was designed for testing nuclear hardware, only non-nuclear tests have been performed throughout its history. Test programs performed at the facility include high-energy experiments, rocket-fairing separation tests, Mars Lander system tests, deployable solar sail tests, and International Space Station hardware tests. The facility can sustain a high vacuum (10−6 torr, 130 μPa), and simulate solar radiation via a 4 MW quartz heat lamp array, solar spectrum by a 400 kW arc lamp, and cold environments () with a variable geometry cryogenic cold shroud. The facility is available on a full-cost reimbursable basis to government, universities, and the private sector. Aluminum test chamber The aluminum test chamber is a vacuum-tight aluminum plate vessel that is in diameter and high. Designed for an external pressure of and internal pressure of , the chamber is constructed of Type 5083 aluminum which is a clad on the interior surface with a thick type 3003 aluminum for corrosion resistance. This material was selected because of its low neutron absorption cross-section. The floor plate and vertical shell are (total) thick, while the dome shell is . Welded circumferentially to the exterior surface is aluminum structural T-section members that are deep and wide. The doors of the test chamber are in size and have double door seals to prevent leakage. The chamber floor was designed for a load of 300 tons. Concrete chamber enclosure The concrete chamber enclosure serves not only as a radiological shield but also as a primary vacuum barrier from atmospheric pressure. in diameter and in height, the chamber was designed to withstand atmospheric pressure outside of the chamber at the same time vacuum conditions are occurring within. The concrete thickness varies from and contains a leak-tight steel containment barrier embedded within. 
The chamber's doors are and have inflatable seals. The space between the concrete enclosure and the aluminum test chamber is pumped down to a pressure of during a test. Brian Cox of the BBC's Human Universe filmed a rock and feather drop episode at the Space Power Facility. Electromagnetic interference/compatibility (EMI/EMC) functionality Designed specifically as a large-scale thermal-vacuum test chamber for qualification testing of vehicles and equipment in outer-space conditions, it was discovered in the late 2000s that the unique construction of the SPF interior aluminum vacuum chamber also makes it an extremely large and electrically complex microwave or radio frequency cavity with excellent reverberant electro-magnetic characteristics. In 2009 these characteristics were measured by the National Institute of Standards and Technology and others after which the facility was understood to be, not only the world's largest Vacuum chamber, but also the world's largest EMI/EMC test facility. In 2011, the Glenn Research Center successfully performed a calibration of the aluminum vacuum chamber using IEC 61000-4-21 methodologies. As a result of these activities, the SPF can perform radiated susceptibility EMI tests for vehicles and equipment per MIL-STD-461, and can achieve MIL-STD-461F limits above approximately 80 MHz. In the spring of 2017 the low-power characterizations and calibrations from 2009 and 2011 were proven correct in a series of high-power tests performed in the chamber to validate its capabilities. The SPF chamber is currently being prepared for EMI radiated susceptibility testing of the crew module for the Artemis 1 of NASA's Orion spacecraft. Reverberant Acoustic Test Facility The Reverberant Acoustic Test Facility has 36 nitrogen-driven horns to simulate the high noise levels that are experienced during a space vehicle launch and supersonic ascent conditions. The RATF is capable of an overall sound pressure level of 163 dB within a chamber. Mechanical Vibration Test Facility The Mechanical Vibration Test Facility (MVF) is a three-axis vibration system. It will apply vibration in each of the three orthogonal axes (not simultaneously) with one direction in parallel to the Earth-launch thrust axis (X) at 5–150 Hz, 0-1.25 g-pk vertical, and 5–150 Hz 0-1.0 g-pk for the horizontal axes. Vertical, or the thrust axis, shaking is accomplished by using 16 vertical actuators manufactured by TEAM Corporation, each capable of . The 16 vertical actuators allow for testing of up to a article at the previously stated frequency and amplitude limits. Horizontal shaking is accomplished by four TEAM Corporation Horizontal Actuators. The horizontal actuators are used during vertical testing to counteract cross axis forces and overturning moments. Modal test facility In addition to the sine vibe table, a fixed-base modal floor sufficient for the diameter test article is available. The fixed-base modal test facility is a thick steel floor on top of of concrete, that is tied to the earth using deep tensioned rock anchors. There were over of rock anchors, and of concrete used in the construction of the fixed-base modal test facility and mechanical vibration test facility. Assembly area The SPF layout is ideal for performing multiple test programs. The facility has two large high bay areas adjacent to either side of the vacuum chamber. The advantage of having both areas available is that it allows for two complex tests to be prepared simultaneously. 
One can be prepared in a high bay while another is being conducted in the vacuum chamber. Large chamber doors provide access to the test chamber from either high bay. References External links Neil Armstrong Test Facility - official NASA website Skylab Shroud in Plum Brook Space Power Facility NASA image gallery, featuring the SPF Detailed facility capabilities "Space Power Facility Construction" at Youtube Aerospace engineering Glenn Research Center NASA facilities Buildings and structures in Erie County, Ohio
Space Power Facility
[ "Engineering" ]
1,468
[ "Aerospace engineering" ]
6,906,913
https://en.wikipedia.org/wiki/Townsend%20discharge
In electromagnetism, the Townsend discharge or Townsend avalanche is an ionisation process for gases where free electrons are accelerated by an electric field, collide with gas molecules, and consequently free additional electrons. Those electrons are in turn accelerated and free additional electrons. The result is an avalanche multiplication that permits significantly increased electrical conduction through the gas. The discharge requires a source of free electrons and a significant electric field; without both, the phenomenon does not occur. The Townsend discharge is named after John Sealy Townsend, who discovered the fundamental ionisation mechanism by his work circa 1897 at the Cavendish Laboratory, Cambridge. General description The avalanche occurs in a gaseous medium that can be ionised (such as air). The electric field and the mean free path of the electron must allow free electrons to acquire an energy level (velocity) that can cause impact ionisation. If the electric field is too small, then the electrons do not acquire enough energy. If the mean free path is too short, then the electron gives up its acquired energy in a series of non-ionising collisions. If the mean free path is too long, then the electron reaches the anode before colliding with another molecule. The avalanche mechanism is shown in the accompanying diagram. The electric field is applied across a gaseous medium; initial ions are created with ionising radiation (for example, cosmic rays). An original ionisation event produces an ion pair; the positive ion accelerates towards the cathode while the free electron accelerates towards the anode. If the electric field is strong enough, then the free electron can gain sufficient velocity (energy) to liberate another electron when it next collides with a molecule. The two free electrons then travel towards the anode and gain sufficient energy from the electric field to cause further impact ionisations, and so on. This process is effectively a chain reaction that generates free electrons. Initially, the number of collisions grows exponentially, but eventually, this relationship will break down—the limit to the multiplication in an electron avalanche is known as the Raether limit. The Townsend avalanche can have a large range of current densities. In common gas-filled tubes, such as those used as gaseous ionisation detectors, magnitudes of currents flowing during this process can range from about 10−18 to 10−5 amperes. Quantitative description Townsend's early experimental apparatus consisted of planar parallel plates forming two sides of a chamber filled with a gas. A direct-current high-voltage source was connected between the plates, the lower-voltage plate being the cathode and the upper-voltage the anode. He forced the cathode to emit electrons using the photoelectric effect by irradiating it with x-rays, and he found that the current flowing through the chamber depended on the electric field between the plates. However, this current showed an exponential increase as the plate gaps became small, leading to the conclusion that the gas ions were multiplying as they moved between the plates due to the high electric field. Townsend observed currents varying exponentially over ten or more orders of magnitude with a constant applied voltage when the distance between the plates was varied. He also discovered that gas pressure influenced conduction: he was able to generate ions in gases at low pressure with a much lower voltage than that required to generate a spark. 
This observation overturned conventional thinking about the amount of current that an irradiated gas could conduct. The experimental data obtained from his experiments are described by the formula

I = I₀ e^(αd)

where I is the current flowing in the device, I₀ is the photoelectric current generated at the cathode surface, e is Euler's number, α is the first Townsend ionisation coefficient, expressing the number of ion pairs generated per unit length (e.g. meter) by a negative ion (anion) moving from cathode to anode, and d is the distance between the plates of the device. The almost-constant voltage between the plates is equal to the breakdown voltage needed to create a self-sustaining avalanche: it decreases when the current reaches the glow discharge regime. Subsequent experiments revealed that the current rises faster than predicted by the above formula as the distance increases; two different effects were considered in order to better model the discharge: positive ions and cathode emission.

Gas ionisation caused by motion of positive ions

Townsend put forward the hypothesis that positive ions also produce ion pairs, introducing a coefficient β expressing the number of ion pairs generated per unit length by a positive ion (cation) moving from anode to cathode. The following formula was found:

I = I₀ (α − β) e^((α − β)d) / (α − β e^((α − β)d))

(with α ≫ β in practice), in very good agreement with experiments. The first Townsend coefficient (α), also known as the first Townsend avalanche coefficient, is a term used where secondary ionisation occurs because the primary ionisation electrons gain sufficient energy from the accelerating electric field, or from the original ionising particle. The coefficient gives the number of secondary electrons produced by a primary electron per unit path length.

Cathode emission caused by impact of ions

Townsend, Holst and Oosterhuis also put forward an alternative hypothesis, considering the augmented emission of electrons by the cathode caused by impact of positive ions. This introduced Townsend's second ionisation coefficient γ, the average number of electrons released from a surface by an incident positive ion, according to the formula

I = I₀ e^(αd) / (1 − γ (e^(αd) − 1)).

These two formulas may be thought of as describing limiting cases of the effective behavior of the process: either can be used to describe the same experimental results. Other formulas describing various intermediate behaviors are found in the literature, particularly in reference 1 and citations therein.

Conditions

A Townsend discharge can be sustained only over a limited range of gas pressure and electric field intensity. The accompanying plot shows the variation of voltage drop and the different operating regions for a gas-filled tube with a constant pressure, but a varying current between its electrodes. The Townsend avalanche phenomenon occurs on the sloping plateau B-D. Beyond D, the ionisation is sustained. At higher pressures, discharges occur more rapidly than the calculated time for ions to traverse the gap between electrodes, and the streamer theory of spark discharge of Raether, Meek, and Loeb is applicable. In highly non-uniform electric fields, the corona discharge process is applicable. See Electron avalanche for further description of these mechanisms. Discharges in vacuum require vaporization and ionisation of electrode atoms. An arc can be initiated without a preliminary Townsend discharge, for example when electrodes touch and are then separated.
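As a quick numerical illustration of the current-growth formulas above, the short Python sketch below computes the avalanche multiplication with and without the secondary-emission term. The coefficient values are arbitrary placeholders chosen only to show the behaviour, not measured data.

```python
import math

def townsend_current(i0, alpha, d, gamma=0.0):
    """Current from Townsend avalanche multiplication.
    i0:    seed photoelectric current (A)
    alpha: first Townsend coefficient (ion pairs per metre)
    d:     gap between the plates (m)
    gamma: second Townsend coefficient (electrons per incident positive ion);
           gamma = 0 recovers the simple I = I0 * exp(alpha * d) law."""
    growth = math.exp(alpha * d)
    denom = 1.0 - gamma * (growth - 1.0)
    if denom <= 0.0:
        # gamma * (exp(alpha*d) - 1) >= 1 is the Townsend breakdown criterion
        raise ValueError("self-sustaining discharge: breakdown criterion met")
    return i0 * growth / denom

# Placeholder values: 1 pA seed current, alpha = 1000 /m, 10 mm gap
i_simple = townsend_current(1e-12, alpha=1000.0, d=0.01)
i_with_gamma = townsend_current(1e-12, alpha=1000.0, d=0.01, gamma=2e-5)
print(f"exp(alpha*d) growth only:       {i_simple:.3e} A")
print(f"with cathode secondary emission: {i_with_gamma:.3e} A")
```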
Penning discharge

In the presence of a magnetic field, the likelihood of an avalanche discharge occurring under high vacuum conditions can be increased through a phenomenon known as Penning discharge. This occurs when electrons can become trapped within a potential minimum, thereby extending the mean free path of the electrons [Fränkle 2014].

Applications

Gas-discharge tubes

The starting of Townsend discharge sets the upper limit to the blocking voltage a glow discharge gas-filled tube can withstand. This limit is the Townsend discharge breakdown voltage, also called the ignition voltage of the tube. The occurrence of Townsend discharge, leading to glow discharge breakdown, shapes the current–voltage characteristic of a gas-discharge tube such as a neon lamp in such a way that it has a negative differential resistance region of the S-type. The negative resistance can be used to generate electrical oscillations and waveforms, as in the relaxation oscillator whose schematic is shown in the picture on the right. The sawtooth-shaped oscillation generated has frequency

f ≈ 1 / (R C ln((V_S − V_GLOW) / (V_S − V_TWN)))

where V_GLOW is the glow discharge breakdown voltage, V_TWN is the Townsend discharge breakdown voltage, and C, R and V_S are respectively the capacitance, the resistance and the supply voltage of the circuit. Since the temperature and time stability of the characteristics of gas diodes and neon lamps is low, and the statistical dispersion of breakdown voltages is high, the above formula can only give a qualitative indication of the real frequency of oscillation.

Gas phototubes

Avalanche multiplication during Townsend discharge is naturally used in gas phototubes to amplify the photoelectric charge generated by incident radiation (visible light or not) on the cathode: the achievable current is typically 10–20 times greater than that generated by vacuum phototubes.

Ionising radiation detectors

Townsend avalanche discharges are fundamental to the operation of gaseous ionisation detectors such as the Geiger–Müller tube and the proportional counter in either detecting ionising radiation or measuring its energy. The incident radiation will ionise atoms or molecules in the gaseous medium to produce ion pairs, but different use is made by each detector type of the resultant avalanche effects. In the case of a GM tube, the high electric field strength is sufficient to cause complete ionisation of the fill gas surrounding the anode from the initial creation of just one ion pair. The GM tube output carries information that the event has occurred, but no information about the energy of the incident radiation. In the case of proportional counters, multiple creation of ion pairs occurs in the "ion drift" region near the cathode. The electric field and chamber geometries are selected so that an "avalanche region" is created in the immediate proximity of the anode. A negative ion drifting towards the anode enters this region and creates a localised avalanche that is independent of those from other ion pairs, but which can still provide a multiplication effect. In this way, spectroscopic information on the energy of the incident radiation is available from the magnitude of the output pulse from each initiating event. The accompanying plot shows the variation of ionisation current for a co-axial cylinder system. In the ion chamber region, there are no avalanches and the applied voltage only serves to move the ions towards the electrodes to prevent re-combination.
In the proportional region, localised avalanches occur in the gas space immediately round the anode which are numerically proportional to the number of original ionising events. Increasing the voltage further increases the number of avalanches until the Geiger region is reached, where the full volume of the fill gas around the anodes is ionised, and all proportional energy information is lost. Beyond the Geiger region, the gas is in continuous discharge owing to the high electric field strength.

See also

Avalanche breakdown
Electric arc
Electric discharge in gases
Field electron emission
Paschen's law
Photoelectric effect
Townsend (unit)

Notes

References

. Chapter 11 "Electrical conduction in gases" and chapter 12 "Glow- and Arc-discharge tubes and circuits".

External links

Simulation showing electron paths during avalanche

Electrical discharge in gases Ionization Ions Molecular physics Electron
Townsend discharge
[ "Physics", "Chemistry" ]
2,130
[ "Ionization", "Electron", "Physical phenomena", "Matter", "Electrical discharge in gases", "Molecular physics", "Plasma phenomena", "Atomic, molecular, and optical physics" ]
6,907,585
https://en.wikipedia.org/wiki/Ultraviolet%20germicidal%20irradiation
Ultraviolet germicidal irradiation (UVGI) is a disinfection technique employing ultraviolet (UV) light, particularly UV-C (180–280 nm), to kill or inactivate microorganisms. UVGI primarily inactivates microbes by damaging their genetic material, thereby inhibiting their capacity to carry out vital functions. The use of UVGI extends to an array of applications, encompassing food, surface, air, and water disinfection. UVGI devices can inactivate microorganisms including bacteria, viruses, fungi, molds, and other pathogens. Recent studies have substantiated the ability of UV-C light to inactivate SARS-CoV-2, the strain of coronavirus that causes COVID-19. UV-C wavelengths demonstrate varied germicidal efficacy and effects on biological tissue. Many germicidal lamps, such as low-pressure mercury (LP-Hg) lamps with peak emissions around 254 nm, emit UV wavelengths that can be hazardous to humans. As a result, UVGI systems have been primarily limited to applications where people are not directly exposed, including hospital surface disinfection, upper-room UVGI, and water treatment. More recently, the application of wavelengths between 200 and 235 nm, often referred to as far-UVC, has gained traction for surface and air disinfection. These wavelengths are regarded as much safer due to their significantly reduced penetration into human tissue. Moreover, their efficiency relies on the fact that, in addition to the DNA damage related to the formation of pyrimidine dimers, they provoke significant DNA photoionization, leading to oxidative damage. Notably, UV-C light is virtually absent in sunlight reaching the Earth's surface due to the absorptive properties of the ozone layer within the atmosphere.

History

Origins of UV germicidal action

The development of UVGI traces back to 1878 when Arthur Downes and Thomas Blunt found that sunlight, particularly its shorter wavelengths, hindered microbial growth. Expanding upon this work, Émile Duclaux, in 1885, identified variations in sunlight sensitivity among different bacterial species. A few years later, in 1890, Robert Koch demonstrated the lethal effect of sunlight on Mycobacterium tuberculosis, hinting at UVGI's potential for combating diseases like tuberculosis. Subsequent studies further defined the wavelengths most efficient for germicidal inactivation. In 1892, it was noted that the UV segment of sunlight had the most potent bactericidal effect. Research conducted in the early 1890s demonstrated the superior germicidal efficacy of UV-C compared to UV-A and UV-B. The mutagenic effects of UV were first unveiled in a 1914 study that observed metabolic changes in Bacillus anthracis upon exposure to sublethal doses of UV. Frederick Gates, in the late 1920s, offered the first quantitative bactericidal action spectra for Staphylococcus aureus and Bacillus coli, noting peak effectiveness at 265 nm. This matched the absorption spectrum of nucleic acids, hinting at DNA damage as the key factor in bacterial inactivation. This understanding was solidified by the 1960s through research demonstrating the ability of UV-C to form thymine dimers, leading to microbial inactivation. These early findings collectively laid the groundwork for modern UVGI as a disinfection tool.

UVGI for air disinfection

The utilization of UVGI for air disinfection began in earnest in the mid-1930s. William F. Wells demonstrated in 1935 that airborne infectious organisms, specifically aerosolized B. coli exposed to 254 nm UV, could be rapidly inactivated.
This built upon earlier theories of infectious droplet nuclei transmission put forth by Carl Flügge and Wells himself. Prior to this, UV radiation had been studied predominantly in the context of liquid or solid media, rather than airborne microbes. Shortly after Wells' initial experiments, high-intensity UVGI was employed to disinfect a hospital operating room at Duke University in 1936. The method proved a success, reducing postoperative wound infections from 11.62% without the use of UVGI to 0.24% with the use of UVGI. Soon, this approach was extended to other hospitals and infant wards using UVGI "light curtains", designed to prevent respiratory cross-infections, with noticeable success. Adjustments in the application of UVGI saw a shift from "light curtains" to upper-room UVGI, confining germicidal irradiation above human head level. Despite its dependency on good vertical air movement, this approach yielded favorable outcomes in preventing cross-infections. This was exemplified by Wells' successful usage of upper-room UVGI between 1937 and 1941 to curtail the spread of measles in suburban Philadelphia day schools. His study found that 53.6% of susceptibles in schools without UVGI became infected, while only 13.3% of susceptibles in schools with UVGI were infected. Richard L. Riley, initially a student of Wells, continued the study of airborne infection and UVGI throughout the 1950s and 60s, conducting significant experiments in a Veterans Hospital TB ward. Riley successfully demonstrated that UVGI could efficiently inactivate airborne pathogens and prevent the spread of tuberculosis. Despite initial successes, the use of UVGI declined in the second half of the 20th century era due to various factors, including a rise in alternative infection control and prevention methods, inconsistent efficacy results, and concerns regarding its safety and maintenance requirements. However, recent events like a rise in multiple drug-resistant bacteria and the COVID-19 pandemic have renewed interest in UVGI for air disinfection. UVGI for water treatment Using UV light for disinfection of drinking water dates back to 1910 in Marseille, France. The prototype plant was shut down after a short time due to poor reliability. In 1955, UV water treatment systems were applied in Austria and Switzerland; by 1985 about 1,500 plants were employed in Europe. In 1998 it was discovered that protozoa such as cryptosporidium and giardia were more vulnerable to UV light than previously thought; this opened the way to wide-scale use of UV water treatment in North America. By 2001, over 6,000 UV water treatment plants were operating in Europe. Over time, UV costs have declined as researchers develop and use new UV methods to disinfect water and wastewater. Several countries have published regulations and guidance for the use of UV to disinfect drinking water supplies, including the US and the UK. Method of operation UV light is electromagnetic radiation with wavelengths shorter than visible light but longer than X-rays. UV is categorised into several wavelength ranges, with short-wavelength UV (UV-C) considered "germicidal UV". Wavelengths between about 200 nm and 300 nm are strongly absorbed by nucleic acids. The absorbed energy can result in defects including pyrimidine dimers. These dimers can prevent replication or can prevent the expression of necessary proteins, resulting in the death or inactivation of the organism. Recently, it has been shown that these dimers are fluorescent. 
Mercury-based lamps operating at low vapor pressure emit UV light at the 253.7 nm line. Ultraviolet light-emitting diode (UV-C LED) lamps emit UV light at selectable wavelengths between 255 and 280 nm. Pulsed-xenon lamps emit UV light across the entire UV spectrum with a peak emission near 230 nm. This process is similar to, but stronger than, the effect of longer wavelengths (UV-B) producing sunburn in humans. Microorganisms have less protection against UV and cannot survive prolonged exposure to it. A UVGI system is designed to expose environments such as water tanks, rooms and forced air systems to germicidal UV. Exposure comes from germicidal lamps that emit germicidal UV at the correct wavelength, thus irradiating the environment. The forced flow of air or water through this environment ensures exposure of that air or water.

Effectiveness

The effectiveness of germicidal UV depends on the UV dose, i.e. how much UV light reaches the microbe (measured as radiant exposure), and how susceptible the microbe is to the given wavelength(s) of UV light, defined by the germicidal effectiveness curve.

UV Dose

The UV dose is measured in light energy per area, i.e. radiant exposure or fluence. The fluence a microbe is exposed to is the product of the light intensity, i.e. irradiance, and the time of exposure, according to:

UV dose (μJ/cm2) = UV intensity (μW/cm2) × exposure time (seconds)

Likewise, the irradiance depends on the brightness (radiant intensity, W/sr) of the UV source, the distance between the UV source and the microbe, the attenuation of filters (e.g. fouled glass) in the light path, the attenuation of the medium (e.g. microbes in turbid water), the presence of particles or objects that can shield the microbes from UV, and the presence of reflectors that can direct the same UV light through the medium multiple times. Additionally, if the microbes are not free-flowing, such as in a biofilm, they will block each other from irradiation. The U.S. Environmental Protection Agency (EPA) published UV dosage guidelines for water treatment applications in 1986. It is difficult to measure UV dose directly, but it can also be estimated from:

Flow rate (contact time)
Transmittance (light reaching the target)
Turbidity (cloudiness)
Lamp age or fouling or outages (reduction in UV intensity)

Bulbs require periodic cleaning and replacement to ensure effectiveness. The lifetime of germicidal UV bulbs varies depending on design. Also, the material that the bulb is made of can absorb some of the germicidal rays. Lamp cooling under airflow can also lower UV output. The UV dose should be calculated using the end-of-lamp-life output (EOL is specified as the number of hours after which the lamp is expected to reach 80% of its initial UV output). Some shatter-proof lamps are coated with a fluorinated ethylene polymer to contain glass shards and mercury in case of breakage; this coating reduces UV output by as much as 20%. UV source intensity is sometimes specified as irradiance at a distance of 1 meter, which can be easily converted to radiant intensity. UV intensity is inversely proportional to the square of the distance, so it decreases at longer distances; conversely, it rapidly increases at distances shorter than 1 m. In the above formula, the UV intensity must always be adjusted for distance unless the UV dose is calculated at exactly 1 m from the lamp. The UV dose should be calculated at the furthest distance from the lamp on the periphery of the target area.
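The sketch below puts the dose formula and the inverse-square distance adjustment together, and also converts the result into an approximate log reduction for a microbe of known sensitivity. The lamp intensity, distance, and D90 value are placeholder numbers for illustration, not measured data, and the single-hit kinetics assumption is a simplification.

```python
def uv_dose(intensity_1m_uw_cm2, distance_m, exposure_s):
    """UV dose (uJ/cm2) = intensity (uW/cm2) x time (s),
    with the 1 m rated intensity scaled by the inverse-square law."""
    intensity = intensity_1m_uw_cm2 * (1.0 / distance_m) ** 2
    return intensity * exposure_s

def log_reduction(dose_uj_cm2, d90_uj_cm2):
    """Approximate log10 reduction, assuming single-hit exponential kinetics
    in which one D90 dose gives a 1-log (90%) reduction."""
    return dose_uj_cm2 / d90_uj_cm2

# Placeholder example: lamp rated 100 uW/cm2 at 1 m, target 2 m away, 60 s exposure
dose = uv_dose(intensity_1m_uw_cm2=100.0, distance_m=2.0, exposure_s=60.0)
lr = log_reduction(dose, d90_uj_cm2=3000.0)   # assumed D90 of 3,000 uJ/cm2
print(f"dose ≈ {dose:.0f} uJ/cm2, ≈ {lr:.1f}-log reduction "
      f"(surviving fraction ≈ {10 ** -lr:.1%})")
```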
Increases in fluence can be achieved by using reflection, such that the same light passes through the medium several times before being absorbed. Aluminum has the highest reflectivity rate versus other metals and is recommended when using UV. In static applications the exposure time can be as long as needed for an effective UV dose to be reached. In waterflow/airflow disinfection, exposure time can be increased by increasing the illuminated volume, decreasing the fluid speed, or recirculating the air or water repeatedly through the illuminated section. This ensures multiple passes so that the UV is effective against the highest number of microorganisms and will irradiate resistant microorganisms more than once to break them down. Inactivation of microorganisms Microbes are more susceptible to certain wavelengths of UV light, a function called the germicidal effectiveness curve. The curve for E. coli is given in the figure, with the most effective UV light having a wavelength of 265 nm. This applies to most bacteria and does not change significantly for other microbes. Dosages for a 90% kill rate of most bacteria and viruses range between 2,000 and 8,000 μJ/cm2. Larger parasites such as Cryptosporidium require a lower dose for inactivation. As a result, US EPA has accepted UV disinfection as a method for drinking water plants to obtain Cryptosporidium, Giardia or virus inactivation credits. For example, for a 90% reduction of Cryptosporidium, a minimum dose of 2,500 μW·s/cm2 is required based on EPA's 2006 guidance manual. "Sterilization" is often misquoted as being achievable. While it is theoretically possible in a controlled environment, it is very difficult to prove and the term "disinfection" is generally used by companies offering this service as to avoid legal reprimand. Specialist companies will often advertise a certain log reduction, e.g., 6-log reduction or 99.9999% effective, instead of sterilization. This takes into consideration a phenomenon known as light and dark repair (photoreactivation and base excision repair, respectively), in which a cell can repair DNA that has been damaged by UV light. Safety Skin and eye safety Many UVGI systems use UV wavelengths that can be harmful to humans, resulting in both immediate and long-term effects. Acute impacts on the eyes and skin can include conditions such as photokeratitis (often termed "snow blindness") and erythema (reddening of the skin), while chronic exposure may heighten the risk of skin cancer. However, the safety and effects of UV vary extensively by wavelength, implying that not all UVGI systems pose the same level of hazards. Humans typically encounter UV light in the form of solar UV, which comprises significant portions of UV-A and UV-B, but excludes UV-C. The UV-B band, able to penetrate deep into living, replicating tissue, is recognized as the most damaging and carcinogenic. Many standard UVGI systems, such as low-pressure mercury (LP-Hg) lamps, produce broad-band emissions in the UV-C range and also peaks in the UV-B band. This often makes it challenging to attribute damaging effects to a specific wavelength. Nevertheless, longer wavelengths in the UV-C band can cause conditions like photokeratitis and erythema. Hence, many UVGI systems are used in settings where direct human exposure is limited, such as with upper-room UVGI air cleaners and water disinfection systems. 
Precautions are commonly implemented to protect users of these UVGI systems, including: Warning labels: Labels alert users to the dangers of UV light. Interlocking systems: Shielded systems, such as closed water tanks or air circulation units, often have interlocks that automatically shut off the UV lamps if the system is opened for human access. Clear viewports that block UV-C are also available. Personal protective equipment: Most protective eyewear, particularly those compliant with ANSI Z87.1, block UV-C. Similarly, clothing, plastics, and most types of glass (excluding fused silica) effectively impede UV-C. Since the early 2010s there has been growing interest in the far-UVC wavelengths of 200-235 nm for whole-room exposure. These wavelengths are generally considered safer due to their limited penetration depth caused by increased protein absorption. This feature confines far-UVC exposure to the superficial layers of tissue, such as the outer layer of dead skin (the stratum corneum) and the tear film and surface cells of the cornea. As these tissues do not contain replicating cells, damage to them poses less carcinogenic risk. It has also been demonstrated that far-UVC does not cause erythema or damage to the cornea at levels many times that of solar UV or conventional 254 nm UVGI systems. Exposure limits Exposure limits for UV, particularly the germicidal UV-C range, have evolved over time due to scientific research and changing technology. The American Conference of Governmental Industrial Hygienists (ACGIH) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) have set exposure limits to safeguard against both immediate and long-term effects of UV exposure. These limits, also referred to as Threshold Limit Values (TLVs), form the basis for emission limits in product safety standards. The UV-C photobiological spectral band is defined as 100–280 nm, with limits currently applying only from 180 to 280 nm. This reflects concerns about acute damage such as erythema and photokeratitis as well as long-term delayed effects like photocarcinogenesis. However, with the increased safety evidence surrounding UV-C for germicidal applications, the existing ACGIH TLVs were revised in 2022. The TLVs for the 222 nm UV-C wavelength (peak emissions from KrCl excimer lamps), following the 2022 revision, are now 161 mJ/cm2 for eye exposure and 479 mJ/cm2 for skin exposure over an eight-hour period. For the 254 nm UV wavelength, the updated exposure limit is now set at 6 mJ/cm2 for eyes and 10 mJ/cm2 for skin. Indoor air chemistry UV can influence indoor air chemistry, leading to the formation of ozone and other potentially harmful pollutants, including particulate pollution. This occurs primarily through photolysis, where UV photons break molecules into smaller radicals that form radicals such as OH. The radicals can react with volatile organic compounds (VOCs) to produce oxidized VOCs (OVOCs) and secondary organic aerosols (SOA). Wavelengths below 242 nm can also generate ozone, which not only contributes to OVOCs and SOA formation but can be harmful in itself. When inhaled in high quantities, these pollutants can irritate the eyes and respiratory system and exacerbate conditions like asthma. The specific pollutants produced depend on the initial air chemistry and the UV source power and wavelength. To control ozone and other indoor pollutants, ventilation and filtration methods are used, diluting airborne pollutants and maintaining indoor air quality. 
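As a worked illustration of the exposure limits discussed above, the following sketch converts a daily TLV into the time needed to reach that limit at a constant irradiance. The irradiance value is a placeholder, and the calculation ignores spectral weighting, so it is not a substitute for an actual safety assessment.

```python
def hours_to_reach_tlv(tlv_mj_cm2, irradiance_uw_cm2):
    """Time to accumulate an 8-hour TLV dose at a constant irradiance.
    tlv_mj_cm2:        threshold limit value in mJ/cm2 per 8-hour day
    irradiance_uw_cm2: irradiance at the exposed surface in uW/cm2"""
    tlv_uj_cm2 = tlv_mj_cm2 * 1000.0          # mJ/cm2 -> uJ/cm2
    seconds = tlv_uj_cm2 / irradiance_uw_cm2  # (uJ/cm2) / (uW/cm2) = s
    return seconds / 3600.0

# 222 nm skin TLV of 479 mJ/cm2 (quoted above) at an assumed 1 uW/cm2 on the skin
print(f"{hours_to_reach_tlv(479.0, 1.0):.0f} h to reach the 222 nm skin TLV")  # ~133 h
# Same assumed irradiance against the 254 nm skin TLV of 10 mJ/cm2
print(f"{hours_to_reach_tlv(10.0, 1.0):.1f} h to reach the 254 nm skin TLV")   # ~2.8 h
```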
Polymer damage UVC radiation is able to break chemical bonds. This leads to rapid aging of plastics and other material, and insulation and gaskets. Plastics sold as "UV-resistant" are tested only for the lower-energy UVB since UVC does not normally reach the surface of the Earth. When UV is used near plastic, rubber, or insulation, these materials may be protected by metal tape or aluminum foil. Applications Air disinfection UVGI can be used to disinfect air with prolonged exposure. In the 1930s and 40s, an experiment in public schools in Philadelphia showed that upper-room ultraviolet fixtures could significantly reduce the transmission of measles among students. UV and violet light are able to neutralize the infectivity of SARS-CoV-2. Viral titers usually found in the sputum of COVID-19 patients are completely inactivated by levels of UV-A and UV-B irradiation that are similar to those levels experienced from natural sun exposure. This finding suggests that the reduced incidence of SARS-COV-2 in the summer may be, in part, due to the neutralizing activity of solar UV irradiation. Various UV-emitting devices can be used for SARS-CoV-2 disinfection, and these devices may help in reducing the spread of infection. SARS-CoV-2 can be inactivated by a wide range of UVC wavelengths, and the wavelength of 222 nm provides the most effective disinfection performance. Disinfection is a function of UV intensity and time. For this reason, it is in theory not as effective on moving air, or when the lamp is perpendicular to the flow, as exposure times are dramatically reduced. However, numerous professional and scientific publications have indicated that the overall effectiveness of UVGI actually increases when used in conjunction with fans and HVAC ventilation, which facilitate whole-room circulation that exposes more air to the UV source. Air purification UVGI systems can be free-standing units with shielded UV lamps that use a fan to force air past the UV light. Other systems are installed in forced air systems so that the circulation for the premises moves microorganisms past the lamps. Key to this form of sterilization is placement of the UV lamps and a good filtration system to remove the dead microorganisms. For example, forced air systems by design impede line-of-sight, thus creating areas of the environment that will be shaded from the UV light. However, a UV lamp placed at the coils and drain pans of cooling systems will keep microorganisms from forming in these naturally damp places. Water disinfection Ultraviolet disinfection of water is a purely physical, chemical-free process. Even parasites such as Cryptosporidium or Giardia, which are extremely resistant to chemical disinfectants, are efficiently reduced. UV can also be used to remove chlorine and chloramine species from water; this process is called photolysis, and requires a higher dose than normal disinfection. The dead microorganisms are not removed from the water. UV disinfection does not remove dissolved organics, inorganic compounds or particles in the water. The world's largest water disinfection plant treats drinking water for New York City. The Catskill-Delaware Water Ultraviolet Disinfection Facility, commissioned on 8 October 2013, incorporates a total of 56 energy-efficient UV reactors treating up to a day. Ultraviolet can also be combined with ozone or hydrogen peroxide to produce hydroxyl radicals to break down trace contaminants through an advanced oxidation process. 
It used to be thought that UV disinfection was more effective for bacteria and viruses, which have more-exposed genetic material, than for larger pathogens that have outer coatings or that form cyst states (e.g., Giardia) that shield their DNA from UV light. However, it was recently discovered that ultraviolet radiation can be somewhat effective for treating the microorganism Cryptosporidium. The findings resulted in the use of UV radiation as a viable method to treat drinking water. Giardia in turn has been shown to be very susceptible to UV-C when the tests were based on infectivity rather than excystation. It has been found that protists are able to survive high UV-C doses but are sterilized at low doses. UV water treatment devices can be used for well water and surface water disinfection. UV treatment compares favourably with other water disinfection systems in terms of cost, labour and the need for technically trained personnel for operation. Water chlorination treats larger organisms and offers residual disinfection, but these systems are expensive because they need special operator training and a steady supply of a potentially hazardous material. Finally, boiling of water is the most reliable treatment method but it demands labour and imposes a high economic cost. UV treatment is rapid and, in terms of primary energy use, approximately 20,000 times more efficient than boiling. UV disinfection is most effective for treating high-clarity, purified reverse osmosis distilled water. Suspended particles are a problem because microorganisms buried within particles are shielded from the UV light and pass through the unit unaffected. However, UV systems can be coupled with a pre-filter to remove those larger organisms that would otherwise pass through the UV system unaffected. The pre-filter also clarifies the water to improve light transmittance and therefore UV dose throughout the entire water column. Another key factor of UV water treatment is the flow rate—if the flow is too high, water will pass through without sufficient UV exposure. If the flow is too low, heat may build up and damage the UV lamp. A disadvantage of UVGI is that while water treated by chlorination is resistant to reinfection (until the chlorine off-gasses), UVGI water is not resistant to reinfection. UVGI water must be transported or delivered in such a way as to avoid reinfection. A 2006 project at University of California, Berkeley produced a design for inexpensive water disinfection in resource deprived settings. The project was designed to produce an open source design that could be adapted to meet local conditions. In a somewhat similar proposal in 2014, Australian students designed a system using potato chip (crisp) packet foil to reflect solar UV radiation into a glass tube that disinfects water without power. Modeling Sizing of a UV system is affected by three variables: flow rate, lamp power, and UV transmittance in the water. Manufacturers typically developed sophisticated computational fluid dynamics (CFD) models validated with bioassay testing. This involves testing the UV reactor's disinfection performance with either MS2 or T1 bacteriophages at various flow rates, UV transmittance, and power levels in order to develop a regression model for system sizing. For example, this is a requirement for all public water systems in the United States per the EPA UV manual. The flow profile is produced from the chamber geometry, flow rate, and particular turbulence model selected. 
The radiation profile is developed from inputs such as water quality, lamp type (power, germicidal efficiency, spectral output, arc length), and the transmittance and dimension of the quartz sleeve. Proprietary CFD software simulates both the flow and radiation profiles. Once the 3D model of the chamber is built, it is populated with a grid or mesh that comprises thousands of small cubes. Points of interest—such as at a bend, on the quartz sleeve surface, or around the wiper mechanism—use a higher resolution mesh, whilst other areas within the reactor use a coarse mesh. Once the mesh is produced, hundreds of thousands of virtual particles are "fired" through the chamber. Each particle has several variables of interest associated with it, and the particles are "harvested" after the reactor. Discrete phase modeling produces delivered dose, head loss, and other chamber-specific parameters. When the modeling phase is complete, selected systems are validated using a professional third party to provide oversight and to determine how closely the model is able to predict the reality of system performance. System validation uses non-pathogenic surrogates such as MS 2 phage or Bacillus subtilis to determine the Reduction Equivalent Dose (RED) ability of the reactors. Most systems are validated to deliver 40 mJ/cm2 within an envelope of flow and transmittance. To validate effectiveness in drinking water systems, the method described in the EPA UV guidance manual is typically used by US water utilities, whilst Europe has adopted Germany's DVGW 294 standard. For wastewater systems, the NWRI/AwwaRF Ultraviolet Disinfection Guidelines for Drinking Water and Water Reuse protocols are typically used, especially in wastewater reuse applications. Wastewater treatment Ultraviolet in sewage treatment is commonly replacing chlorination. This is in large part because of concerns that reaction of the chlorine with organic compounds in the waste water stream could synthesize potentially toxic and long lasting chlorinated organics and also because of the environmental risks of storing chlorine gas or chlorine containing chemicals. Individual wastestreams to be treated by UVGI must be tested to ensure that the method will be effective due to potential interferences such as suspended solids, dyes, or other substances that may block or absorb the UV radiation. According to the World Health Organization, "UV units to treat small batches (1 to several liters) or low flows (1 to several liters per minute) of water at the community level are estimated to have costs of US$20 per megaliter, including the cost of electricity and consumables and the annualized capital cost of the unit." Large-scale urban UV wastewater treatment is performed in cities such as Edmonton, Alberta. The use of ultraviolet light has now become standard practice in most municipal wastewater treatment processes. Effluent is now starting to be recognized as a valuable resource, not a problem that needs to be dumped. Many wastewater facilities are being renamed as water reclamation facilities, whether the wastewater is discharged into a river, used to irrigate crops, or injected into an aquifer for later recovery. Ultraviolet light is now being used to ensure water is free from harmful organisms. Aquarium and pond Ultraviolet sterilizers are often used to help control unwanted microorganisms in aquaria and ponds. UV irradiation ensures that pathogens cannot reproduce, thus decreasing the likelihood of a disease outbreak in an aquarium. 
Aquarium and pond sterilizers are typically small, with fittings for tubing that allows the water to flow through the sterilizer on its way from a separate external filter or water pump. Within the sterilizer, water flows as close as possible to the ultraviolet light source. Water pre-filtration is critical as water turbidity lowers UV-C penetration. Many of the better UV sterilizers have long dwell times and limit the space between the UV-C source and the inside wall of the UV sterilizer device. Laboratory hygiene UVGI is often used to disinfect equipment such as safety goggles, instruments, pipettors, and other devices. Lab personnel also disinfect glassware and plasticware this way. Microbiology laboratories use UVGI to disinfect surfaces inside biological safety cabinets ("hoods") between uses. Food and beverage protection Since the U.S. Food and Drug Administration issued a rule in 2001 requiring that virtually all fruit and vegetable juice producers follow HACCP controls, and mandating a 5-log reduction in pathogens, UVGI has seen some use in sterilization of juices such as fresh-pressed. UV Sources Mercury vapor lamps Germicidal UV for disinfection is most typically generated by a mercury-vapor lamp. Low-pressure mercury vapor has a strong emission line at 254 nm, which is within the range of wavelengths that demonstrate strong disinfection effect. The optimal wavelengths for disinfection are close to 260 nm. Mercury vapor lamps may be categorized as either low-pressure (including amalgam) or medium-pressure lamps. Low-pressure UV lamps offer high efficiencies (approx. 35% UV-C) but lower power, typically 1 W/cm power density (power per unit of arc length). Amalgam UV lamps utilize an amalgam to control mercury pressure to allow operation at a somewhat higher temperature and power density. They operate at higher temperatures and have a lifetime of up to 16,000 hours. Their efficiency is slightly lower than that of traditional low-pressure lamps (approx. 33% UV-C output), and power density is approximately 2–3 W/cm3. Medium-pressure UV lamps operate at much higher temperatures, up to about 800 degrees Celsius, and have a polychromatic output spectrum and a high radiation output but lower UV-C efficiency of 10% or less. Typical power density is 30 W/cm3 or greater. Depending on the quartz glass used for the lamp body, low-pressure and amalgam UV emit radiation at 254 nm and also at 185 nm, which has chemical effects. UV radiation at 185 nm is used to generate ozone. The UV lamps for water treatment consist of specialized low-pressure mercury-vapor lamps that produce ultraviolet radiation at 254 nm, or medium-pressure UV lamps that produce a polychromatic output from 200 nm to visible and infrared energy. The UV lamp never contacts the water; it is either housed in a quartz glass sleeve inside the water chamber or mounted externally to the water, which flows through the transparent UV tube. Water passing through the flow chamber is exposed to UV rays, which are absorbed by suspended solids, such as microorganisms and dirt, in the stream. LEDs Recent developments in LED technology have led to commercially available UV-C LEDs. UV-C LEDs use semiconductors to emit light between 255 nm and 280 nm. The wavelength emission is tuneable by adjusting the material of the semiconductor. , the electrical-to-UV-C conversion efficiency of LEDs was lower than that of mercury lamps. 
The reduced size of LEDs opens up options for small reactor systems allowing for point-of-use applications and integration into medical devices. The low power consumption of semiconductors also enables UV disinfection systems that use small solar cells in remote or Third World applications. UV-C LEDs don't necessarily last longer than traditional germicidal lamps in terms of hours used, instead having more-variable engineering characteristics and better tolerance for short-term operation. A UV-C LED can achieve a longer installed time than a traditional germicidal lamp in intermittent use. Likewise, LED degradation increases with heat, while for filament and HID lamps the output wavelength is dependent on temperature; engineers can therefore design LEDs of a particular size and cost to have either a higher output with faster degradation or a lower output with a slower decline over time. See also HEPA filter Portable water purification Sanitation Sanitation Standard Operating Procedures Solar water disinfection References External links International Ultraviolet Association Radiobiology Ultraviolet radiation Hygiene Waste treatment technology Sterilization (microbiology)
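The validation concept described in the Modeling section above, in which per-particle doses from discrete-phase (particle-tracking) modeling are reduced to a single Reduction Equivalent Dose (RED), can be sketched numerically. The dose distribution and inactivation constant below are synthetic, illustrative values; the point is only to show why the RED of a reactor with uneven dose delivery is lower than its average dose.

```python
# Sketch: Reduction Equivalent Dose (RED) from a distribution of per-particle
# doses, as produced by discrete-phase reactor models. The dose samples and
# the surrogate's sensitivity constant are synthetic assumptions.
import math
import random

random.seed(1)

# Synthetic per-particle doses (mJ/cm^2): uneven delivery, mean around 40
doses = [max(0.0, random.gauss(40.0, 20.0)) for _ in range(100_000)]

k = math.log(10) / 20.0          # assumed first-order constant: 20 mJ/cm^2 per log

# Overall survival is the average of the per-particle survival fractions
survival = sum(math.exp(-k * d) for d in doses) / len(doses)

# RED: the single, ideal (collimated-beam) dose giving the same survival
red = -math.log(survival) / k
mean_dose = sum(doses) / len(doses)

print(f"mean dose = {mean_dose:.1f} mJ/cm^2, RED = {red:.1f} mJ/cm^2")
# Particles that short-circuit the reactor (low doses) dominate survival,
# so RED < mean dose; validation therefore credits the RED, not the average.
```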
Ultraviolet germicidal irradiation
[ "Physics", "Chemistry", "Engineering", "Biology" ]
6,881
[ "Spectrum (physical sciences)", "Water treatment", "Radiobiology", "Electromagnetic spectrum", "Ultraviolet radiation", "Microbiology techniques", "Sterilization (microbiology)", "Environmental engineering", "Waste treatment technology", "Radioactivity" ]
9,173,273
https://en.wikipedia.org/wiki/Coronal%20loop
In solar physics, a coronal loop is a well-defined arch-like structure in the Sun's atmosphere made up of relatively dense plasma confined and isolated from the surrounding medium by magnetic flux tubes. Coronal loops begin and end at two footpoints on the photosphere and project into the transition region and lower corona. They typically form and dissipate over periods of seconds to days and may span anywhere from in length. Coronal loops are often associated with the strong magnetic fields located within active regions and sunspots. The number of coronal loops varies with the 11 year solar cycle. Origin and physical features Due to a natural process called the solar dynamo driven by heat produced in the Sun's core, convective motion of the electrically conductive plasma which makes up the Sun creates electric currents, which in turn create powerful magnetic fields in the Sun's interior. These magnetic fields are in the form of closed loops of magnetic flux, which are twisted and tangled by solar differential rotation (the different rotation rates of the plasma at different latitudes of the solar sphere). A coronal loop occurs when a curved arc of the magnetic field projects through the visible surface of the Sun, the photosphere, protruding into the solar atmosphere. Within a coronal loop, the paths of the moving electrically charged particles which make up its plasma—electrons and ions—are sharply bent by the Lorentz force when moving transverse to the loop's magnetic field. As a result, they can only move freely parallel to the magnetic field lines, tending to spiral around these lines. Thus, the plasma within a coronal loop cannot escape sideways out of the loop and can only flow along its length. This is known as the frozen-in condition. The strong interaction of the magnetic field with the dense plasma on and below the Sun's surface tends to tie the magnetic field lines to the motion of the Sun's plasma; thus, the two footpoints (the location where the loop enters the photosphere) are anchored to and rotate with the Sun's surface. Within each footpoint, the strong magnetic flux tends to inhibit the convection currents which carry hot plasma from the Sun's interior to the surface, so the footpoints are often (but not always) cooler than the surrounding photosphere. These appear as dark spots on the Sun's surface, known as sunspots. Thus, sunspots tend to occur under coronal loops, and tend to come in pairs of opposite magnetic polarity; a point where the magnetic field loop emerges from the photosphere is a North magnetic pole, and the other where the loop enters the surface again is a South magnetic pole. Coronal loops form in a wide range of sizes, from 10 km to 10,000 km. Coronal loops have a wide variety of temperatures along their lengths. Loops at temperatures below 1 megakelvin (MK) are generally known as cool loops; those existing at around 1 MK are known as warm loops; and those beyond 1 MK are known as hot loops. Naturally, these different categories radiate at different wavelengths. A related phenomenon is the open flux tube, in which magnetic fields extend from the surface far into the corona and heliosphere; these are the source of the Sun's large scale magnetic field (magnetosphere) and the solar wind. Location Coronal loops have been shown on both active and quiet regions of the solar surface. 
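As a rough numerical illustration of the magnetic confinement described above under "Origin and physical features" (charged particles spiraling tightly around field lines, the frozen-in condition), the snippet below computes thermal gyroradii for electrons and protons in a coronal loop. The field strength and temperature are assumed, order-of-magnitude values, not measurements of any particular loop.

```python
# Sketch: thermal gyroradius r = m * v_perp / (q * B) for coronal-loop plasma.
# B and T below are assumed order-of-magnitude values for an active-region loop.
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
Q_E = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837015e-31   # electron mass, kg
M_P = 1.67262192369e-27  # proton mass, kg

def gyroradius(mass_kg, temperature_k, b_tesla, charge_c=Q_E):
    """Gyroradius using the thermal speed sqrt(2*k_B*T/m) as v_perp."""
    v_th = math.sqrt(2.0 * K_B * temperature_k / mass_kg)
    return mass_kg * v_th / (charge_c * b_tesla)

if __name__ == "__main__":
    T = 1.0e6   # K, a "warm" loop of about 1 MK
    B = 0.01    # T (100 gauss), assumed active-region field strength
    r_e = gyroradius(M_E, T, B)
    r_p = gyroradius(M_P, T, B)
    print(f"electron gyroradius ~ {r_e*1e3:.1f} mm, proton gyroradius ~ {r_p:.2f} m")
    # Both are tiny compared with loop lengths of thousands of kilometres,
    # which is why the plasma is effectively confined to its flux tube.
```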
Active regions on the solar surface take up small areas but produce the majority of activity and are often the source of flares and coronal mass ejections due to the intense magnetic field present. Active regions produce 82% of the total coronal heating energy. Dynamic flows Many solar observation missions have observed strong plasma flows and highly dynamic processes in coronal loops. For example, SUMER observations suggest flow velocities of 5–16 km/s in the solar disk, and other joint SUMER/TRACE observations detect flows of 15–40 km/s. Very high plasma velocities (in the range of 40–60 km/s) have been detected by the Flat Crystal Spectrometer (FCS) on board the Solar Maximum Mission. History of observations Before 1991 Despite progress made by ground-based telescopes and eclipse observations of the corona, space-based observations became necessary to escape the obscuring effect of the Earth's atmosphere. Rocket missions such as the Aerobee flights and Skylark rockets successfully measured solar extreme ultraviolet (EUV) and X-ray emissions. However, these rocket missions were limited in lifetime and payload. Later, satellites such as the Orbiting Solar Observatory series (OSO-1 to OSO-8), Skylab, and the Solar Maximum Mission (the first observatory to last the majority of a solar cycle: from 1980 to 1989) were able to gain far more data across a much wider range of emission. 1991–present day In August 1991, the solar observatory spacecraft Yohkoh launched from the Kagoshima Space Center. During its 10 years of operation, it revolutionized X-ray observations. Yohkoh carried four instruments; of particular interest is the SXT instrument, which observed X-ray-emitting coronal loops. This instrument observed X-rays in the 0.25–4.0 keV range, resolving solar features to 2.5 arc seconds with a temporal resolution of 0.5–2 seconds. SXT was sensitive to plasma in the 2–4 MK temperature range, making its data ideal for comparison with data later collected by TRACE of coronal loops radiating in the extra ultraviolet (EUV) wavelengths. The next major step in solar physics came in December 1995, with the launch of the Solar and Heliospheric Observatory (SOHO) from Cape Canaveral Air Force Station. SOHO originally had an operational lifetime of two years. The mission was extended to March 2007 due to its resounding success, allowing SOHO to observe a complete 11-year solar cycle. SOHO has 12 instruments on board, all of which are used to study the transition region and corona. In particular, the Extreme ultraviolet Imaging Telescope (EIT) instrument is used extensively in coronal loop observations. EIT images the transition region through to the inner corona by using four band passes—171 Å FeIX, 195 Å FeXII, 284 Å FeXV, and 304 Å HeII, each corresponding to different EUV temperatures—to probe the chromospheric network to the lower corona. In April 1998, the Transition Region and Coronal Explorer (TRACE) was launched from Vandenberg Air Force Base. Its observations of the transition region and lower corona, made in conjunction with SOHO, give an unprecedented view of the solar environment during the rising phase of the solar maximum, an active phase in the solar cycle. Due to the high spatial (1 arc second) and temporal resolution (1–5 seconds), TRACE has been able to capture highly detailed images of coronal structures, whilst SOHO provides the global (lower resolution) picture of the Sun. 
This campaign demonstrates the observatory's ability to track the evolution of steady-state (or 'quiescent') coronal loops. TRACE uses filters sensitive to various types of electromagnetic radiation; in particular, the 171 Å, 195 Å, and 284 Å band passes are sensitive to the radiation emitted by quiescent coronal loops. See also Solar spicule Solar prominence Coronal hole References External links TRACE homepage Solar and Heliospheric Observatory, including near-real-time images of the solar corona Coronal heating problem at Innovation Reports NASA/GSFC description of the coronal heating problem FAQ about coronal heating Animated explanation of Coronal loops and their role in creating Prominences (University of South Wales) Sun Space plasmas Astrophysics Articles containing video clips
Coronal loop
[ "Physics", "Astronomy" ]
1,594
[ "Space plasmas", "Astronomical sub-disciplines", "Astrophysics" ]
9,174,778
https://en.wikipedia.org/wiki/Microprocessor%20development%20board
A microprocessor development board is a printed circuit board containing a microprocessor and the minimal support logic needed for an electronic engineer or any person who wants to become acquainted with the microprocessor on the board and to learn to program it. It also served users of the microprocessor as a method to prototype applications in products. Unlike a general-purpose system such as a home computer, usually a development board contains little or no hardware dedicated to a user interface. It will have some provision to accept and run a user-supplied program, such as downloading a program through a serial port to flash memory, or some form of programmable memory in a socket in earlier systems. History The reason for the existence of a development board was solely to provide a system for learning to use a new microprocessor, not for entertainment, so everything superfluous was left out to keep costs down. Even an enclosure was not supplied, nor a power supply. This is because the board would only be used in a "laboratory" environment so it did not need an enclosure, and the board could be powered by a typical bench power supply already available to an electronic engineer. Microprocessor training development kits were not always produced by microprocessor manufacturers. Many systems that can be classified as microprocessor development kits were produced by third parties, one example is the Sinclair MK14, which was inspired by the official SC/MP development board from National Semiconductor, the "NS introkit". Although these development boards were not designed for hobbyists, they were often bought by them because they were the earliest cheap microcomputer devices available. They often added all kinds of expansions, such as more memory, a video interface etc. It was very popular to use (or write) an implementation of Tiny Basic. The most popular microprocessor board, the KIM-1, received the most attention from the hobby community, because it was much cheaper than most other development boards, and more software was available for it (Tiny Basic, games, assemblers), and cheap expansion cards to add more memory or other functionality. More articles were published in magazines like "Kilobaud Microcomputing" that described home-brew software and hardware for the KIM-1 than for other development boards. Today some chip producers still release "test boards" to demonstrate their chips, and to use them as a "reference design". Their significance these days is much smaller than it was in the days that such boards, (the KIM-1 being the canonical example) were the only low cost way to get "hands-on" acquainted with microprocessors.. Features The most important feature of the microprocessor development board was the ROM-based built-in machine language monitor, or "debugger" as it was also sometimes called. Often the name of the board was related to the name of this monitor program, for example the name of the monitor program of the KIM-1 was "Keyboard Input Monitor", because the ROM-based software allowed entry of programs without the rows of cumbersome toggle switches that older systems used. The popular Motorola 6800-based systems often used a monitor with a name with the word "bug" for "debugger" in it, for example the popular "MIKBUG". Input was normally done with a hexadecimal keyboard, using a machine language monitor program, and the display only consisted of a 7-segment display. 
Backup storage of written assembler programs was primitive: only a cassette type interface was typically provided, or the serial Teletype interface was used to read (or punch) a papertape. Often the board has some kind to expansion connector that brought out all the necessary CPU signals, so that an engineer could build and test an experimental interface or other electronic device. External interfaces on the bare board were often limited to a single RS-232 or current loop serial port, so a terminal, printer, or Teletype could be connected. List of historical development boards 8085AAT, an Intel 8085 microprocessor training unit from Paccom CDP18S020 evaluation board for the RCA CDP1802 microprocessor EVK 300 6800 single board from American Microsystems (AMI) Explorer/85 expandable learning system based on the 8085, by Netronics's research and development ltd. ITT experimenter used switches and LEDs, and an Intel 8080 JOLT was designed by Raymond M. Holt, co-founder of Microcomputer Associates, Incorporated. KIM-1 the development board for the MOS Technology/Rockwell/Synertek 6502 microprocessor. The name KIM is short for "keyboard input monitor" SYM-1 a slightly improved KIM-1 with improved software, more memory, and I/O. Also known as the VIM AIM-65 an improved KIM-1 with an alpha-numerical LED display, and a built-in printer. The KIM-1 also lead to some unofficial copies, such as the super-KIM and the Junior from the magazine Elektor, and the MCS Alpha 1 LC80 by Kombinat Mikroelektronik Erfurt MAXBOARD development board for the Motorola 6802. MEK6800D2 the official development board for the Motorola 6800 microprocessor. The name of the monitor software was MIKBUG MicroChroma 68 color graphics kit. Developed by Motorola to demonstrate their new 6847 video display processor. The monitor software was called TVBUG Motorola EXORciser development system (rack based) for the Motorola 6809 Microprofessor I (MPF-1) Z80 development and training system by Acer Tangerine Microtan 65 6502 development system with VDU, that could be expanded to a more capable system. MST-80B 8080 training system by the Lawrence Livermore National Laboratory NS introkit by National Semiconductor featuring the SC/MP, the predecessor to the Sinclair MK14 NRI microcomputer, a system developed to teach computer courses by McGraw-Hill and the National Radio Institute (NRI) MK14 Training system for the SC/MP microprocessor from Sinclair Research Ltd. SDK-80 Intel's development board for their 8080 microprocessor SDK-51 Intel's development board for their Intel MCS-51 SDK-85 Intel's development board for their 8085 microprocessor SDK-86 Intel's development board for their 8086 microprocessor Siemens Microset-8080 boxed system based on an 8080. Signetics Instructor 50 based on the Signetics 2650. SGS-ATES Nanocomputer Z80. RCA Cosmac Super Elf by RCA . a 1802 learning system with an RCA 1861 Video Display Controller. TK-80 the development board for NEC's clone of Intel's i8080, the μPD 8080A TM 990/100M evaluation board for the Texas Instruments TMS9900 TM 990/180M evaluation board for the Texas Instruments TMS9800 XPO-1 Texas Instruments development system for the PPS-4/1 line of microcontrollers DSP evaluation boards A DSP evaluation board, sometimes also known as a DSP starter kit (DSK) or a DSP evaluation module, is an electronic board with a digital signal processor used for experiments, evaluation and development. 
Applications are developed in DSP Starter Kits using software usually referred as an integrated development environment (IDE). Texas Instruments and Spectrum Digital are two companies who produce these kits. Two examples are the DSK 6416 by Texas Instruments, based on the TMS320C6416 fixed point digital signal processor, a member of C6000 series of processors that is based on VelociTI.2 architecture, and the DSK 6713 by Texas Instruments, which was developed in cooperation with Spectrum Digital, based on the TMS320C6713 32-bit floating point digital signal processor, which allows for programming in C and assembly. See also Embedded system Intel system development kit Single-board computer Single-board microcontroller References Early microcomputers Telecommunications engineering
Microprocessor development board
[ "Engineering" ]
1,708
[ "Electrical engineering", "Telecommunications engineering" ]
9,175,375
https://en.wikipedia.org/wiki/Prandtl%E2%80%93Meyer%20function
In aerodynamics, the Prandtl–Meyer function describes the angle through which a flow turns isentropically from sonic velocity (M = 1) to a Mach number M greater than 1. The maximum angle through which a sonic (M = 1) flow can be turned around a convex corner is obtained in the limit M → ∞. For an ideal gas, it is expressed as

\nu(M) = \sqrt{\frac{\gamma+1}{\gamma-1}}\,\arctan\sqrt{\frac{\gamma-1}{\gamma+1}\left(M^{2}-1\right)} \;-\; \arctan\sqrt{M^{2}-1},

where \nu is the Prandtl–Meyer function, M is the Mach number of the flow and \gamma is the ratio of the specific heat capacities. By convention, the constant of integration is selected such that \nu(1) = 0. As the Mach number varies from 1 to \infty, \nu takes values from 0 to \nu_\text{max}, where

\nu_\text{max} = \frac{\pi}{2}\left(\sqrt{\frac{\gamma+1}{\gamma-1}} - 1\right).

For a turn between two supersonic states,

|\theta| = \nu(M_2) - \nu(M_1),

where \theta is the absolute value of the angle through which the flow turns, M is the flow Mach number and the suffixes "1" and "2" denote the initial and final conditions respectively. See also Gas dynamics Prandtl–Meyer expansion fan References Aerodynamics Fluid dynamics
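A short numerical sketch of the function defined above: it evaluates ν(M), checks the M → ∞ limit, and inverts the function to find the Mach number after an expansion turn. The example turning angle is arbitrary, and the bisection inversion is just one simple way to do it.

```python
# Sketch: Prandtl–Meyer function nu(M) and a simple numerical inverse.
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl–Meyer angle nu(M) in radians, with nu(1) = 0."""
    if M < 1.0:
        raise ValueError("Prandtl–Meyer function is defined for M >= 1")
    a = math.sqrt((gamma + 1.0) / (gamma - 1.0))
    b = math.sqrt(M * M - 1.0)
    return a * math.atan(b / a) - math.atan(b)

def nu_max(gamma=1.4):
    """Limit of nu as M -> infinity."""
    return (math.pi / 2.0) * (math.sqrt((gamma + 1.0) / (gamma - 1.0)) - 1.0)

def mach_from_nu(nu_target, gamma=1.4, m_hi=100.0):
    """Invert nu(M) by bisection on [1, m_hi]; nu is monotonic in M."""
    lo, hi = 1.0, m_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if prandtl_meyer(mid, gamma) < nu_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    print(f"nu_max (gamma=1.4) = {math.degrees(nu_max()):.2f} deg")  # about 130.45 deg
    M1, turn_deg = 2.0, 10.0                                         # arbitrary example
    nu2 = prandtl_meyer(M1) + math.radians(turn_deg)
    print(f"M1 = {M1}, turn = {turn_deg} deg -> M2 = {mach_from_nu(nu2):.3f}")
```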
Prandtl–Meyer function
[ "Chemistry", "Engineering" ]
189
[ "Chemical engineering", "Aerodynamics", "Aerospace engineering", "Piping", "Fluid dynamics stubs", "Fluid dynamics" ]
9,176,452
https://en.wikipedia.org/wiki/Supramolecular%20polymer
Supramolecular polymers are a subset of polymers where the monomeric units are connected by reversible and highly directional secondary interactions, that is, non-covalent bonds. These non-covalent interactions include van der Waals interactions, hydrogen bonding, Coulomb or ionic interactions, π-π stacking, metal coordination, halogen bonding, chalcogen bonding, and host–guest interaction. Their behavior can be described by the theories of polymer physics in dilute and concentrated solution, as well as in the bulk. Additionally, some supramolecular polymers have distinctive characteristics, such as the ability to self-heal. Covalent polymers can be difficult to recycle, but supramolecular polymers may address this problem. History The field of supramolecular polymers can be considered to have its preamble in dye aggregates and host–guest complexes. In the early 20th century, it was noticed that dyes aggregate via "a special kind of polymerization". In 1988, Takuzo Aida, a Japanese polymer chemist, reported the concept of cofacial assembly wherein amphiphilic porphyrin monomers are connected via van der Waals interactions, forming one-dimensional architectures in solution, which can be considered a prototype of supramolecular polymers. Soon thereafter, one-dimensional aggregates were described based on hydrogen bonding interactions in the crystalline state. With a different strategy using hydrogen bonds, Jean M. J. Fréchet showed in 1989 that mesogenic molecules with carboxylic acid and pyridyl motifs, upon mixing in bulk, heterotropically dimerize to form a stable liquid crystalline structure. In 1990, Jean-Marie Lehn showed that this strategy can be expanded to form a new category of polymers, which he called "liquid crystalline supramolecular polymers", using complementary triple hydrogen bonding motifs in bulk. In 1993, M. Reza Ghadiri reported a nanotubular supramolecular polymer in which a β-sheet-forming macrocyclic peptide monomer assembled via multiple hydrogen bonds between adjacent macrocycles. In 1994, Anselm C. Griffin showed an amorphous supramolecular material using a single hydrogen bond between homotropic molecules having carboxylic acid and pyridine termini. The idea of making mechanically strong polymeric materials by 1D supramolecular association of small molecules requires a high association constant between the repeating building blocks. In 1997, E.W. "Bert" Meijer reported a telechelic monomer with ureidopyrimidinone termini as a "self-complementary" quadruple hydrogen bonding motif and demonstrated that the resulting supramolecular polymer in chloroform shows temperature-dependent viscoelastic properties in solution. This was the first demonstration that supramolecular polymers, when sufficiently mechanically robust, are physically entangled in solution. Formation mechanisms Monomers undergoing supramolecular polymerization are considered to be in equilibrium with the growing polymers, and thermodynamic factors therefore dominate the system. However, when the constituent monomers are connected via strong and multivalent interactions, a "metastable" kinetic state can dominate the polymerization. Externally supplied energy, in the form of heat in most cases, can transform the "metastable" state into a thermodynamically stable polymer. A clear understanding of the multiple pathways that exist in supramolecular polymerization is still under debate; however, the concept of "pathway complexity", introduced by E.W.
"Bert" Meijer, shed a light on the kinetic behavior of supramolecular polymerization. Thereafter, many dedicated scientists are expanding the scope of "pathway complexity" because it can produce a variety of interesting assembled structures from the same monomeric units. Along this line of kinetically controlled processes, supramolecular polymers having "stimuli-responsive" and "thermally bisignate" characteristics is also possible. In conventional covalent polymerization, two models based on step-growth and chain-growth mechanisms are operative. Nowadays, a similar subdivision is acceptable for supramolecular polymerization; isodesmic also known as equal-K model (step-growth mechanism) and cooperative or nucleation-elongation model (chain-growth mechanism). A third category is seeded supramolecular polymerization, which can be considered as a special case of chain-growth mechanism. Step-growth polymerization Supramolecular equivalent of step-growth mechanism is commonly known as isodesmic or equal-K model (K represents the total binding interaction between two neighboring monomers). In isodesmic supramolecular polymerization, no critical temperature or concentration of monomers is required for the polymerization to occur and the association constant between polymer and monomer is independent of the polymer chain length. Instead, the length of the supramolecular polymer chains rises as the concentration of monomers in the solution increases, or as the temperature decreases. In conventional polycondensation, the association constant is usually large that leads to a high degree of polymerization; however, a byproduct is observed. In isodesmic supramolecular polymerization, due to non-covalent bonding, the association between monomeric units is weak, and the degree of polymerization strongly depends on the strength of interaction, i.e. multivalent interaction between monomeric units. For instance, supramolecular polymers consisting of bifunctional monomers having single hydrogen bonding donor/acceptor at their termini usually end up with low degree of polymerization, however those with quadrupole hydrogen bonding, as in the case of ureidopyrimidinone motifs, result in a high degree of polymerization. In ureidopyrimidinone-based supramolecular polymer, the experimentally observed molecular weight at semi-dilute concentrations is in the order of 106 Dalton and the molecular weight of the polymer can be controlled by adding mono-functional chain-cappers. Chain-growth polymerization Conventional chain-growth polymerization involves at least two phases; initiation and propagation, while and in some cases termination and chain transfer phases also occur. Chain-growth supramolecular polymerization in a broad sense involves two distinct phases; a less favored nucleation and a favored propagation. In this mechanism, after the formation of a nucleus of a certain size, the association constant is increased, and further monomer addition becomes more favored, at which point the polymer growth is initiated. Long polymer chains will form only above a minimum concentration of monomer and below a certain temperature. However, to realize a covalent analogue of chain-growth supramolecular polymerization, a challenging prerequisite is the design of appropriate monomers that can polymerize only by the action of initiators. Recently one example of chain-growth supramolecular polymerization with "living" characteristics is demonstrated. 
In that living example, a bowl-shaped monomer with amide-appended side chains forms a kinetically favored intramolecular hydrogen-bonding network and does not spontaneously undergo supramolecular polymerization at ambient temperatures. However, an N-methylated version of the monomer serves as an initiator by opening the intramolecular hydrogen-bonding network for the supramolecular polymerization, just like in ring-opening covalent polymerization. The chain end in this case remains active for further extension of the supramolecular polymer, and hence the chain-growth mechanism allows for precise control of supramolecular polymer materials. Seeded polymerization This is a special category of chain-growth supramolecular polymerization, where the monomer nucleates only in an early stage of the polymerization to generate "seeds" and becomes active for polymer chain elongation upon further addition of a new batch of monomer. Secondary nucleation is suppressed in most cases, and it is thus possible to realize a narrow polydispersity in the resulting supramolecular polymer. In 2007, Ian Manners and Mitchell A. Winnik introduced this concept using a polyferrocenyldimethylsilane–polyisoprene diblock copolymer as the monomer, which assembles into cylindrical micelles. When a fresh feed of the monomer is added to the micellar "seeds" obtained by sonication, the polymerization starts in a living manner. They named this method crystallization-driven self-assembly (CDSA); it is applicable to the construction of micron-scale supramolecular anisotropic structures in 1D–3D. A conceptually different seeded supramolecular polymerization was shown by Kazunori Sugiyasu with a porphyrin-based monomer bearing amide-appended long alkyl chains. At low temperature, this monomer preferentially forms spherical J-aggregates, while at higher temperature it forms fibrous H-aggregates. By adding a sonicated mixture of the J-aggregates ("seeds") into a concentrated solution of the J-aggregate particles, long fibers can be prepared via living seeded supramolecular polymerization. Frank Würthner achieved a similar seeded supramolecular polymerization with an amide-functionalized perylene bisimide as the monomer. Importantly, seeded supramolecular polymerization is also applicable to the preparation of supramolecular block copolymers. Examples Hydrogen bonding interaction Monomers capable of forming single, double, triple, or quadruple hydrogen bonds have been utilized for making supramolecular polymers, and stronger association of the monomers is possible when they carry a larger number of hydrogen-bonding donor/acceptor motifs. For instance, ureidopyrimidinone-based monomers with self-complementary quadruple hydrogen-bonding termini polymerize in solution in accordance with the theory of conventional polymers and display a distinct viscoelastic character at ambient temperatures. π-π stacking Monomers with aromatic motifs such as bis(merocyanine), oligo(para-phenylenevinylene) (OPV), perylene bisimide (PBI) dye, cyanine dye, corannulene and nano-graphene derivatives have been employed to prepare supramolecular polymers. In some cases, hydrogen-bonding side chains appended onto the core aromatic motif help to hold the monomer strongly in the supramolecular polymer. A notable system in this category is a nanotubular supramolecular polymer formed by the supramolecular polymerization of amphiphilic hexa-peri-hexabenzocoronene (HBC) derivatives.
Generally, nanotubes are categorized as 1D objects morphologically; however, their walls adopt a 2D geometry and therefore require a different design strategy. HBC amphiphiles in polar solvents solvophobically assemble into a 2D bilayer membrane, which rolls up into a helical tape or a nanotubular polymer. Conceptually similar amphiphilic designs based on cyanine dyes and zinc chlorin dyes also polymerize in water, resulting in nanotubular supramolecular polymers. Host-guest interaction A variety of supramolecular polymers can be synthesized by using monomers with host-guest complementary binding motifs, such as crown ethers/ammonium ions, cucurbiturils/viologens, calixarene/viologens, cyclodextrins/adamantane derivatives, and pillar arene/imidazolium derivatives [30–33]. When the monomers are "heteroditopic", supramolecular copolymers result, provided the monomers do not homopolymerize. Akira Harada was one of the first to recognize the importance of combining polymers and cyclodextrins. Feihe Huang showed an example of a supramolecular alternating copolymer from two heteroditopic monomers carrying both crown ether and ammonium ion termini. Takeharo Haino demonstrated an extreme example of sequence control in a supramolecular copolymer, where three heteroditopic monomers are arranged in an ABC sequence along the copolymer chain. The design strategy, utilizing three distinct binding interactions (ball-and-socket calix[5]arene/C60, donor-acceptor bisporphyrin/trinitrofluorenone, and Hamilton hydrogen bonding), is the key to attaining the high orthogonality needed to form an ABC supramolecular terpolymer. Chirality Stereochemical information of a chiral monomer can be expressed in a supramolecular polymer. Helical supramolecular polymers with P- and M-conformations are widely seen, especially those composed of disc-shaped monomers. When the monomers are achiral, both P- and M-helices are formed in equal amounts. When the monomers are chiral, typically due to the presence of one or more stereocenters in the side chains, the diastereomeric relationship between P- and M-helices leads to the preference of one conformation over the other. A typical example is a C3-symmetric disc-shaped chiral monomer that forms helical supramolecular polymers via the "majority rule". A slight excess of one enantiomer of the chiral monomer results in a strong bias toward either the right-handed or the left-handed helical geometry at the supramolecular polymer level. In this case, a characteristic nonlinear dependence of the anisotropy factor, g, on the enantiomeric excess of the chiral monomer is generally observed. As in small-molecule chiral systems, the chirality of a supramolecular polymer is also affected by chiral solvents. Some applications, such as catalysts for asymmetric synthesis and circularly polarized luminescence, have also been demonstrated in chiral supramolecular polymers. Copolymers A copolymer is formed from more than one monomeric species. Advanced polymerization techniques have been established for the preparation of covalent copolymers; however, supramolecular copolymers are still in their infancy and progress is slow. In recent years, all the plausible categories of supramolecular copolymers, such as random, alternating, block, blocky, or periodic, have been demonstrated in a broad sense. Properties Supramolecular polymers are the subject of research in academia and industry. Reversibility and dynamicity The stability of a supramolecular polymer can be described using the association constant, Kass.
When Kass ≤ 10^4 M^−1, the polymeric aggregates are typically small in size and do not show any interesting properties, and when Kass ≥ 10^10 M^−1, the supramolecular polymer behaves just like a covalent polymer due to the lack of dynamics. So, an optimum Kass of 10^4–10^10 M^−1 needs to be attained for producing functional supramolecular polymers. The dynamics and stability of supramolecular polymers are often affected by additives (e.g. a co-solvent or a chain capper). When a good solvent, for instance chloroform, is added to a supramolecular polymer in a poor solvent, for instance heptane, the polymer disassembles. However, in some cases, cosolvents contribute to the stabilization or destabilization of the supramolecular polymer. For instance, supramolecular polymerization of a hydrogen-bonding porphyrin-based monomer in a hydrocarbon solvent containing a minute amount of a hydrogen-bond-scavenging alcohol shows distinct pathways, i.e. polymerization favored by cooling as well as by heating, and is known as "thermally bisignate supramolecular polymerization". In another example, minute amounts of molecularly dissolved water in apolar solvents, like methylcyclohexane, become part of the supramolecular polymer at lower temperatures, due to a specific hydrogen bonding interaction between the monomer and water. Self-healing Supramolecular polymers may be relevant to self-healing materials. A supramolecular rubber based on vitrimers can self-heal simply by pressing the two broken edges of the material together. High mechanical strength of a material and self-healing ability are generally mutually exclusive. Thus, a glassy material that can self-heal at room temperature remained a challenge until recently. A supramolecular polymer based on ether-thiourea is mechanically robust (E = 1.4 GPa) but can self-heal at room temperature upon compression of the fractured surfaces. The invention of a self-healable polymer glass overturned the preconception that only soft rubbery materials can heal. Another strategy uses bivalent poly(isobutylene)s (PIBs) functionalized with barbituric acid at the head and tail. Multiple hydrogen bonds between the carbonyl and amide groups of barbituric acid enable it to form a supramolecular network. In this case, small snipped PIB-based disks can recover from mechanical damage after several hours of contact at room temperature. Interactions between catechol and ferric ions yield pH-controlled self-healing supramolecular polymers. The formation of mono-, bis-, and triscatechol-Fe3+ complexes can be manipulated by pH, and of these the bis- and triscatechol-Fe3+ complexes show elastic moduli as well as self-healing capacity. For example, the triscatechol-Fe3+ network can restore its cohesiveness and shape after being torn. Chain-folding polyimide and pyrenyl-end-capped chains give rise to supramolecular networks. Optoelectronic By incorporating electron donors and electron acceptors into supramolecular polymers, features of artificial photosynthesis can be replicated. Biocompatible DNA is a major example of a supramolecular polymer, as is protein. Much effort has been devoted to related but synthetic materials. At the same time, their reversible and dynamic nature makes supramolecular polymers biodegradable, which overcomes the hard-to-degrade problem of covalent polymers and makes supramolecular polymers a promising platform for biomedical applications.
Being able to degrade in a biological environment greatly lowers the potential toxicity of the polymers and therefore enhances the biocompatibility of supramolecular polymers. Biomedical applications With their excellent biodegradability and biocompatibility, supramolecular polymers show great potential for the development of drug delivery, gene transfection, and other biomedical applications. Drug delivery: Multiple cellular stimuli can induce responses in supramolecular polymers. The dynamic molecular skeletons of supramolecular polymers can be depolymerized when exposed to external stimuli such as pH in vivo. On the basis of this property, supramolecular polymers are capable of acting as drug carriers; for example, hydrogen bonding between nucleobases can be used to induce self-assembly into pH-sensitive spherical micelles. Gene transfection: Effective and low-toxicity nonviral cationic vectors are highly desired in the field of gene therapy. On account of their dynamic and stimuli-responsive properties, supramolecular polymers offer a cogent platform for constructing vectors for gene transfection. By combining a ferrocene dimer with a β-cyclodextrin dimer, a redox-controlled supramolecular polymer system has been proposed as a vector. In COS-7 cells, this supramolecular polymeric vector can release the enclosed DNA upon exposure to hydrogen peroxide and achieve gene transfection. Adjustable mechanical properties Basic principle: Noncovalent interactions between polymer molecules significantly affect the mechanical properties of supramolecular polymers. More interactions between polymer chains tend to enhance the overall interaction strength. The association rate and dissociation rate of the interacting groups in the polymer molecules determine the intermolecular interaction strength. For supramolecular polymers, the dissociation kinetics of the dynamic network plays a critical role in the material design and mechanical properties of supramolecular polymer networks (SPNs). By changing the dissociation rate of the crosslink dynamics, supramolecular polymers can be given adjustable mechanical properties. With a slow dissociation rate of the dynamic network, glass-like mechanical properties dominate; with a fast dissociation rate, rubber-like mechanical properties dominate. These properties can be obtained by changing the molecular structure of the crosslinking part of the molecule. Experimental examples: One study controlled the molecular design of cucurbit[8]uril, CB[8]. The hydrophobicity of the second guest in the CB[8]-mediated host-guest interaction can tune the dissociation kinetics of the dynamic crosslinks. To slow the dissociation rate (kd), a stronger enthalpic driving force is needed for the second-guest association (ka) to release more of the conformationally restricted water from the CB[8] cavity. In other words, the most hydrophobic second guest exhibited the highest Keq and lowest kd values. Therefore, by polymerizing different concentrations of these subgroups, different dynamics of the intermolecular network can be designed. For example, mechanical properties like compressive strain can be tuned by this process. When polymerized with different hydrophobic subgroups in CB[8], the compressive strength was found to increase across the series in correlation with a decrease of kd, and could be tuned between 10 and 100 MPa.
NVI is the most hydrophobic second-guest subgroup of the monomer series, containing two benzene rings, whereas BVI is the least hydrophobic subgroup and serves as the control. In addition, when the concentration of hydrophobic subgroups in CB[8] is varied, the polymerized materials show different compressive properties: polymers with the highest concentration of hydrophobic subgroups show the highest compressive strain, and vice versa. Biomaterials Supramolecular polymers can simultaneously meet the requirements of aqueous compatibility, biodegradability, biocompatibility, stimuli-responsiveness, and other strict criteria. Consequently, supramolecular polymers are applicable to the biomedical field. The reversible nature of supramolecular polymers can produce biomaterials that sense and respond to physiological cues, or that mimic the structural and functional aspects of biological signaling. Protein delivery, bio-imaging and diagnosis, and tissue engineering applications are also well developed. Further reading References Supramolecular chemistry Polymers
Supramolecular polymer
[ "Chemistry", "Materials_science" ]
4,811
[ "Polymer chemistry", "nan", "Polymers", "Nanotechnology", "Supramolecular chemistry" ]
9,179,644
https://en.wikipedia.org/wiki/Multi-threshold%20CMOS
Multi-threshold CMOS (MTCMOS) is a variation of CMOS chip technology which has transistors with multiple threshold voltages (Vth) in order to optimize delay or power. The Vth of a MOSFET is the gate voltage where an inversion layer forms at the interface between the insulating layer (oxide) and the substrate (body) of the transistor. Low Vth devices switch faster, and are therefore useful on critical delay paths to minimize clock periods. The penalty is that low Vth devices have substantially higher static leakage power. High Vth devices are used on non-critical paths to reduce static leakage power without incurring a delay penalty. Typical high Vth devices reduce static leakage by 10 times compared with low Vth devices. One method of creating devices with multiple threshold voltages is to apply different bias voltages (Vb) to the base or bulk terminal of the transistors. Other methods involve adjusting the gate oxide thickness, gate oxide dielectric constant (material type), or dopant concentration in the channel region beneath the gate oxide. A common method of fabricating multi-threshold CMOS involves simply adding additional photolithography and ion implantation steps. For a given fabrication process, the Vth is adjusted by altering the concentration of dopant atoms in the channel region beneath the gate oxide. Typically, the concentration is adjusted by ion implantation method. For example, photolithography methods are applied to cover all devices except the p-MOSFETs with photoresist. Ion implantation is then completed, with ions of the chosen dopant type penetrating the gate oxide in areas where no photoresist is present. The photoresist is then stripped. Photolithography methods are again applied to cover all devices except the n-MOSFETs. Another implantation is then completed using a different dopant type, with ions penetrating the gate oxide. The photoresist is stripped. At some point during the subsequent fabrication process, implanted ions are activated by annealing at an elevated temperature. In principle, any number of threshold voltage transistors can be produced. For CMOS having two threshold voltages, one additional photomasking and implantation step is required for each of p-MOSFET and n-MOSFET. For fabrication of normal, low, and high Vth CMOS, four additional steps are required relative to conventional single-Vth CMOS. Implementation The most common implementation of MTCMOS for reducing power makes use of sleep transistors. Logic is supplied by a virtual power rail. Low Vth devices are used in the logic where fast switching speed is important. High Vth devices connecting the power rails and virtual power rails are turned on in active mode, off in sleep mode. High Vth devices are used as sleep transistors to reduce static leakage power. The design of the power switch which turns on and off the power supply to the logic gates is essential to low-voltage, high-speed circuit techniques such as MTCMOS. The speed, area, and power of a logic circuit are influenced by the characteristics of the power switch. In a "coarse-grained" approach, high Vth sleep transistors gate the power to entire logic blocks. The sleep signal is de-asserted during active mode, causing the transistor to turn on and provide virtual power (ground) to the low Vth logic. The sleep signal is asserted during sleep mode, causing the transistor to turn off and disconnect power (ground) from the low Vth logic. 
The drawbacks of this approach are that logic blocks must be partitioned to determine when a block may be safely turned off (on); that sleep transistors are large and must be carefully sized to supply the current required by the circuit block; and that an always active (never in sleep mode) power management circuit must be added. In a "fine-grained" approach, high Vth sleep transistors are incorporated within every gate. Low Vth transistors are used for the pull-up and pull-down networks, and a high Vth transistor is used to gate the leakage current between the two networks. This approach eliminates problems of logic block partitioning and sleep transistor sizing. However, a large amount of area overhead is added, due both to the inclusion of additional transistors in every Boolean gate and to the creation of a sleep signal distribution tree. An intermediate approach is to incorporate high Vth sleep transistors into threshold gates having more complicated functions. Since fewer such threshold gates are required to implement any arbitrary function compared to Boolean gates, incorporating MTCMOS into each gate requires less area overhead. Examples of threshold gates having more complicated functions are found with Null Convention Logic (NCL) and Sleep Convention Logic (SCL). Some art is required to implement MTCMOS without causing glitches or other problems. References Electronic design Digital electronics Logic families
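A rough numerical sketch of the coarse-grained power-gating trade-off may help. The model below is not taken from any reference: the device count, per-device leakage and sleep-transistor width factor are assumed placeholder values, and only the roughly tenfold high-Vth leakage reduction comes from the text.

```python
# Back-of-the-envelope model of coarse-grained MTCMOS power gating.
# Assumed placeholder values; only the ~10x high-Vth leakage reduction
# relative to low-Vth devices is taken from the article.

LOW_VTH_LEAK_NA = 10.0                      # assumed leakage per low-Vth device (nA)
HIGH_VTH_LEAK_NA = LOW_VTH_LEAK_NA / 10.0   # "typical" 10x reduction for high Vth
SLEEP_FET_WIDTH_FACTOR = 500.0              # assumed: sleep FET is ~500 unit widths

def block_leakage_uA(n_devices: int, gated: bool) -> float:
    """Static leakage (uA) of a low-Vth logic block, with or without power gating."""
    if gated:
        # In sleep mode the leakage path runs through the wide, off, high-Vth
        # sleep transistor, so that single device sets the leakage floor.
        return SLEEP_FET_WIDTH_FACTOR * HIGH_VTH_LEAK_NA / 1e3
    # In active mode (or with no gating at all) every low-Vth device leaks.
    return n_devices * LOW_VTH_LEAK_NA / 1e3

n = 50_000  # assumed device count of the gated block
print(f"ungated / active : {block_leakage_uA(n, gated=False):8.1f} uA")
print(f"power-gated sleep: {block_leakage_uA(n, gated=True):8.1f} uA")
```

The only point of the sketch is that, once gated, the block's static leakage is set by the single series high-Vth device rather than by every low-Vth transistor in the block.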
Multi-threshold CMOS
[ "Engineering" ]
1,025
[ "Electronic design", "Electronic engineering", "Design", "Digital electronics" ]
10,673,638
https://en.wikipedia.org/wiki/Solar-like%20oscillations
Solar-like oscillations are oscillations in stars that are excited in the same way as those in the Sun, namely by turbulent convection in their outer layers. Stars that show solar-like oscillations are called solar-like oscillators. The oscillations are standing pressure and mixed pressure-gravity modes that are excited over a range in frequency, with the amplitudes roughly following a bell-shaped distribution. Unlike opacity-driven oscillators, all the modes in the frequency range are excited, making the oscillations relatively easy to identify. The surface convection also damps the modes, and each is well approximated in frequency space by a Lorentzian curve, the width of which corresponds to the lifetime of the mode: the faster it decays, the broader is the Lorentzian. All stars with surface convection zones are expected to show solar-like oscillations, including cool main-sequence stars (up to surface temperatures of about 7000 K), subgiants and red giants. Because of the small amplitudes of the oscillations, their study has advanced tremendously thanks to space-based missions (mainly COROT and Kepler). Solar-like oscillations have been used, among other things, to precisely determine the masses and radii of planet-hosting stars and thus improve the measurements of the planets' masses and radii. Red giants In red giants, mixed modes are observed, which are in part directly sensitive to the core properties of the star. These have been used to distinguish red giants burning helium in their cores from those that are still only burning hydrogen in a shell, to show that the cores of red giants are rotating more slowly than models predict, and to constrain the internal magnetic fields of the cores. Echelle diagrams The peak of the oscillation power roughly corresponds to lower frequencies and radial orders for larger stars. For the Sun, the highest-amplitude modes occur around a frequency of 3 mHz at high radial order (around n = 20), and no mixed modes are observed. For more massive and more evolved stars, the modes are of lower radial order and overall lower frequencies. Mixed modes can be seen in the evolved stars. In principle, such mixed modes may also be present in main-sequence stars but they are at too low frequency to be excited to observable amplitudes. High-order pressure modes of a given angular degree are expected to be roughly evenly spaced in frequency, with a characteristic spacing known as the large separation Δν. This motivates the echelle diagram, in which the mode frequencies are plotted as a function of the frequency modulo the large separation, and modes of a particular angular degree form roughly vertical ridges. Scaling relations The frequency of maximum oscillation power νmax is generally accepted to scale with the acoustic cut-off frequency, above which waves can propagate in the stellar atmosphere, and thus are not trapped and do not contribute to standing modes. This gives νmax ∝ g/√Teff ∝ M R^−2 Teff^−1/2. Similarly, the large frequency separation Δν is known to be roughly proportional to the square root of the mean density: Δν ∝ √(M/R^3). When combined with an estimate of the effective temperature, this allows one to solve directly for the mass and radius of the star, basing the constants of proportionality on the known values for the Sun.
These are known as the scaling relations: M/M⊙ ≈ (νmax/νmax,⊙)^3 (Δν/Δν⊙)^−4 (Teff/Teff,⊙)^3/2 and R/R⊙ ≈ (νmax/νmax,⊙) (Δν/Δν⊙)^−2 (Teff/Teff,⊙)^1/2. Equivalently, if one knows the star's luminosity, then the effective temperature can be eliminated via the blackbody luminosity relationship L ∝ R^2 Teff^4. Some bright solar-like oscillators Procyon Alpha Centauri A and B Mu Herculis See also Asteroseismology Helioseismology Variable stars References External links Lecture Notes on Stellar Oscillations published by J. Christensen-Dalsgaard (Aarhus University, Denmark) Variable stars Asteroseismology
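As a concrete illustration of how the scaling relations are used in practice, here is a small Python sketch that inverts them for mass and radius. The solar reference values (νmax,⊙ ≈ 3090 μHz, Δν⊙ ≈ 135 μHz, Teff,⊙ ≈ 5772 K) are approximate and differ slightly between calibrations, and the red-giant input numbers are purely illustrative.

```python
import math

# Approximate solar reference values (assumed; exact calibrations vary by study).
NU_MAX_SUN_UHZ = 3090.0   # frequency of maximum power, microhertz
DELTA_NU_SUN_UHZ = 135.1  # large frequency separation, microhertz
TEFF_SUN_K = 5772.0       # effective temperature, kelvin

def seismic_mass_radius(nu_max_uhz, delta_nu_uhz, teff_k):
    """Estimate stellar mass and radius (in solar units) from the usual
    asteroseismic scaling relations:
        nu_max   ~ M R^-2 Teff^-1/2
        delta_nu ~ (M / R^3)^1/2
    Solving these two proportionalities for M and R gives the expressions below.
    """
    nu = nu_max_uhz / NU_MAX_SUN_UHZ
    dnu = delta_nu_uhz / DELTA_NU_SUN_UHZ
    t = teff_k / TEFF_SUN_K
    radius = nu * dnu**-2 * math.sqrt(t)
    mass = nu**3 * dnu**-4 * t**1.5
    return mass, radius

# Example: rough red-giant-like input values (illustrative numbers only).
m, r = seismic_mass_radius(nu_max_uhz=30.0, delta_nu_uhz=4.0, teff_k=4800.0)
print(f"M ~ {m:.2f} Msun, R ~ {r:.1f} Rsun")
```

With these illustrative inputs the relations return roughly a solar-mass star of about ten solar radii, i.e. a typical low-luminosity red giant, which is the kind of direct mass and radius estimate referred to above.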
Solar-like oscillations
[ "Physics" ]
779
[ "Physical phenomena", "Stellar phenomena", "Astrophysics", "Asteroseismology" ]
10,685,654
https://en.wikipedia.org/wiki/Second%20sound
In condensed matter physics, second sound is a quantum mechanical phenomenon in which heat transfer occurs by wave-like motion, rather than by the more usual mechanism of diffusion. Its presence leads to a very high thermal conductivity. It is known as "second sound" because the wave motion of entropy and temperature is similar to the propagation of pressure waves in air (sound). The phenomenon of second sound was first described by Lev Landau in 1941. Description Normal sound waves are fluctuations in the displacement and density of molecules in a substance; second sound waves are fluctuations in the density of quasiparticle thermal excitations (rotons and phonons). Second sound can be observed in any system in which most phonon-phonon collisions conserve momentum, like superfluids and in some dielectric crystals when Umklapp scattering is small. Contrary to molecules in a gas, quasiparticles are not necessarily conserved. Also gas molecules in a box conserve momentum (except at the boundaries of box), while quasiparticles can sometimes not conserve momentum in the presence of impurities or Umklapp scattering. Umklapp phonon-phonon scattering exchanges momentum with the crystal lattice, so phonon momentum is not conserved, but Umklapp processes can be reduced at low temperatures. Normal sound in gases is a consequence of the collision rate between molecules being large compared to the frequency of the sound wave . For second sound, the Umklapp rate has to be small compared to the oscillation frequency for energy and momentum conservation. However analogous to gasses, the relaxation time describing the collisions has to be large with respect to the frequency , leaving a window: for sound-like behaviour or second sound. The second sound thus behaves as oscillations of the local number of quasiparticles (or of the local energy carried by these particles). Contrary to the normal sound where energy is related to pressure and temperature, in a crystal the local energy density is purely a function of the temperature. In this sense, the second sound can also be considered as oscillations of the local temperature. Second sound is a wave-like phenomenon which makes it very different from usual heat diffusion. In helium II Second sound is observed in liquid helium at temperatures below the lambda point, 2.1768 K, where 4He becomes a superfluid known as helium II. Helium II has the highest thermal conductivity of any known material (several hundred times higher than copper). Second sound can be observed either as pulses or in a resonant cavity. The speed of second sound is close to zero near the lambda point, increasing to approximately 20 m/s around 1.8 K, about ten times slower than normal sound waves. At temperatures below 1 K, the speed of second sound in helium II increases as the temperature decreases. Second sound is also observed in superfluid helium-3 below its lambda point 2.5 mK. As per the two-fluid, the speed of second sound is given by where is the temperature, is the entropy, is the specific heat, is the superfluid density and is the normal fluid density. As , , where is the ordinary (or first) sound speed. In other media Second sound has been observed in solid 4He and 3He, and in some dielectric solids such as Bi in the temperature range of 1.2 to 4.0 K with a velocity of 780 ± 50 m/s, or solid sodium fluoride (NaF) around 10 to 20 K. 
In 2021 this effect was observed in a BKT superfluid as well as in a germanium semiconductor In graphite In 2019 it was reported that ordinary graphite exhibits second sound at 120 K. This feature was both predicted theoretically and observed experimentally, and was by far the highest temperature at which second sound has been observed. However, this second sound is observed only at the microscale, because the wave dies out exponentially with characteristic length 1-10 microns. Therefore, presumably graphite in the right temperature regime has extraordinarily high thermal conductivity but only for the purpose of transferring heat pulses distances of order 10 microns, and for pulses of duration on the order of 10 nanoseconds. For more "normal" heat-transfer, graphite's observed thermal conductivity is less than that of, e.g., copper. The theoretical models, however, predict longer absorption lengths would be seen in isotopically pure graphite, and perhaps over a wider temperature range, e.g. even at room temperature. (As of March 2019, that experiment has not yet been tried.) Applications Measuring the speed of second sound in 3He-4He mixtures can be used as a thermometer in the range 0.01-0.7 K. Oscillating superleak transducers (OST) use second sound to locate defects in superconducting accelerator cavities. See also Zero sound Third sound References Bibliography Sinyan Shen, Surface Second Sound in Superfluid Helium. PhD Dissertation (1973). http://adsabs.harvard.edu/abs/1973PhDT.......142S V. Peshkov, "'Second Sound' in Helium II," J. Phys. (Moscow) 8, 381 (1944) U. Piram, "Numerical investigation of second sound in liquid helium," Dipl.-Ing. Dissertation (1991). Retrieved on April 15, 2007. Quantum mechanics Thermodynamics Superfluidity Lev Landau
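A common form of the two-fluid expression for the second-sound speed is u2² = (ρs/ρn)·S²T/C, with S the entropy per unit mass and C the specific heat per unit mass. The sketch below simply evaluates it; the helium-II property values in the example are rough placeholders chosen only to land in the tens of m/s, the same order as the ~20 m/s quoted above, and are not tabulated data.

```python
import math

def second_sound_speed(T, S, C, rho_s, rho_n):
    """Second-sound speed from the Landau two-fluid expression

        u2^2 = (rho_s / rho_n) * S^2 * T / C

    with T the temperature (K), S the entropy per unit mass (J/(kg K)),
    C the specific heat per unit mass (J/(kg K)), and rho_s, rho_n the
    superfluid and normal fluid densities (only their ratio matters here).
    """
    return math.sqrt((rho_s / rho_n) * S**2 * T / C)

# Placeholder property values of roughly the right order for He II near 1.8 K
# (illustrative only, not tabulated data).
u2 = second_sound_speed(T=1.8, S=600.0, C=3000.0, rho_s=1.2, rho_n=1.0)
print(f"u2 ~ {u2:.0f} m/s")   # ~16 m/s with these placeholder inputs
```

Note how the expression also captures the behaviour described above: as the lambda point is approached, ρs goes to zero and so does the second-sound speed.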
Second sound
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,151
[ "Physical phenomena", "Phase transitions", "Theoretical physics", "Phases of matter", "Quantum mechanics", "Superfluidity", "Thermodynamics", "Condensed matter physics", "Exotic matter", "Dynamical systems", "Matter", "Fluid dynamics" ]
10,685,778
https://en.wikipedia.org/wiki/Kumada%20coupling
In organic chemistry, the Kumada coupling is a type of cross coupling reaction, useful for generating carbon–carbon bonds by the reaction of a Grignard reagent and an organic halide. The procedure uses transition metal catalysts, typically nickel or palladium, to couple a combination of two alkyl, aryl or vinyl groups. The groups of Robert Corriu and Makoto Kumada reported the reaction independently in 1972. The reaction is notable for being among the first reported catalytic cross-coupling methods. Despite the subsequent development of alternative reactions (Suzuki, Sonogashira, Stille, Hiyama, Negishi), the Kumada coupling continues to be employed in many synthetic applications, including the industrial-scale production of aliskiren, a hypertension medication, and polythiophenes, useful in organic electronic devices. History The first investigations into the catalytic coupling of Grignard reagents with organic halides date back to the 1941 study of cobalt catalysts by Morris S. Kharasch and E. K. Fields. In 1971, Tamura and Kochi elaborated on this work in a series of publications demonstrating the viability of catalysts based on silver, copper and iron. However, these early approaches produced poor yields due to substantial formation of homocoupling products, where two identical species are coupled. These efforts culminated in 1972, when the Corriu and Kumada groups concurrently reported the use of nickel-containing catalysts. With the introduction of palladium catalysts in 1975 by the Murahashi group, the scope of the reaction was further broadened. Subsequently, many additional coupling techniques have been developed, culminating in the 2010 Nobel Prize in Chemistry recognized Ei-ichi Negishi, Akira Suzuki and Richard F. Heck for their contributions to the field. Mechanism Palladium catalysis According to the widely accepted mechanism, the palladium-catalyzed Kumada coupling is understood to be analogous to palladium's role in other cross coupling reactions. The proposed catalytic cycle involves both palladium(0) and palladium(II) oxidation states. Initially, the electron-rich Pd(0) catalyst (1) inserts into the R–X bond of the organic halide. This oxidative addition forms an organo-Pd(II)-complex (2). Subsequent transmetalation with the Grignard reagent forms a hetero-organometallic complex (3). Before the next step, isomerization is necessary to bring the organic ligands next to each other into mutually cis positions. Finally, reductive elimination of (4) forms a carbon–carbon bond and releases the cross coupled product while regenerating the Pd(0) catalyst (1). For palladium catalysts, the frequently rate-determining oxidative addition occurs more slowly than with nickel catalyst systems. Nickel catalysis Current understanding of the mechanism for the nickel-catalyzed coupling is limited. Indeed, the reaction mechanism is believed to proceed differently under different reaction conditions and when using different nickel ligands. In general the mechanism can still be described as analogous to the palladium scheme (right). Under certain reaction conditions, however, the mechanism fails to explain all observations. Examination by Vicic and coworkers using tridentate terpyridine ligand identified intermediates of a Ni(II)-Ni(I)-Ni(III) catalytic cycle, suggesting a more complicated scheme. Additionally, with the addition of butadiene, the reaction is believed to involve a Ni(IV) intermediate. 
Scope Organic halides and pseudohalides The Kumada coupling has been successfully demonstrated for a variety of aryl or vinyl halides. In place of the halide reagent pseudohalides can also be used, and the coupling has been shown to be quite effective using tosylate and triflate species in variety of conditions. Despite broad success with aryl and vinyl couplings, the use of alkyl halides is less general due to several complicating factors. Having no π-electrons, alkyl halides require different oxidative addition mechanisms than aryl or vinyl groups, and these processes are currently poorly understood. Additionally, the presence of β-hydrogens makes alkyl halides susceptible to competitive elimination processes. These issues have been circumvented by the presence of an activating group, such as the carbonyl in α-bromoketones, that drives the reaction forward. However, Kumada couplings have also been performed with non-activated alkyl chains, often through the use of additional catalysts or reagents. For instance, with the addition of 1,3-butadienes Kambe and coworkers demonstrated nickel catalyzed alkyl–alkyl couplings that would otherwise be unreactive. Though poorly understood, the mechanism of this reaction is proposed to involve the formation of an octadienyl nickel complex. This catalyst is proposed to undergo transmetalation with a Grignard reagent first, prior to the reductive elimination of the halide, reducing the risk of β-hydride elimination. However, the presence of a Ni(IV) intermediate is contrary to mechanisms proposed for aryl or vinyl halide couplings. Grignard reagent Couplings involving aryl and vinyl Grignard reagents were reported in the original publications by Kumada and Corriu. Alkyl Grignard reagents can also be used without difficulty, as they do not suffer from β-hydride elimination processes. Although the Grignard reagent inherently has poor functional group tolerance, low-temperature syntheses have been prepared with highly functionalized aryl groups. Catalysts Kumada couplings can be performed with a variety of nickel(II) or palladium(II) catalysts. The structures of the catalytic precursors can be generally formulated as ML2X2, where L is a phosphine ligand. Common choices for L2 include bidentate diphosphine ligands such as dppe and dppp among others. Work by Alois Fürstner and coworkers on iron-based catalysts have shown reasonable yields. The catalytic species in these reactions is proposed to be an "inorganic Grignard reagent" consisting of . Reaction conditions The reaction typically is carried out in tetrahydrofuran or diethyl ether as solvent. Such ethereal solvents are convenient because these are typical solvents for generating the Grignard reagent. Due to the high reactivity of the Grignard reagent, Kumada couplings have limited functional group tolerance which can be problematic in large syntheses. In particular, Grignard reagents are sensitive to protonolysis from even mildly acidic groups such as alcohols. They also add to carbonyls and other oxidative groups. As in many coupling reactions, the transition metal palladium catalyst is often air-sensitive, requiring an inert Argon or nitrogen reaction environment. A sample synthetic preparation is available at the Organic Syntheses website. Selectivity Stereoselectivity Both cis- and trans-olefin halides promote the overall retention of geometric configuration when coupled with alkyl Grignards. 
This observation is independent of other factors, including the choice of catalyst ligands and vinylic substituents. Conversely, a Kumada coupling using vinylic Grignard reagents proceeds without stereospecificity to form a mixture of cis- and trans-alkenes. The degree of isomerization is dependent on a variety of factors including reagent ratios and the identity of the halide group. According to Kumada, this loss of stereochemistry is attributable to side-reactions between two equivalents of the allylic Grignard reagent. Enantioselectivity Asymmetric Kumada couplings can be effected through the use of chiral ligands. Using planar chiral ferrocene ligands, enantiomeric excesses (ee) upward of 95% have been observed in aryl couplings. More recently, Gregory Fu and co-workers have demonstrated enantioconvergent couplings of α-bromoketones using catalysts based on bis-oxazoline ligands, wherein the chiral catalyst converts a racemic mixture of starting material to one enantiomer of product with up to 95% ee. The latter reaction is also significant for involving a traditionally inaccessible alkyl halide coupling. Chemoselectivity Grignard reagents do not typically couple with chlorinated arenes. This low reactivity is the basis for chemoselectivity for nickel insertion into the C–Br bond of bromochlorobenzene using a NiCl2-based catalyst. Applications Synthesis of aliskiren The Kumada coupling is suitable for large-scale, industrial processes, such as drug synthesis. The reaction is used to construct the carbon skeleton of aliskiren (trade name Tekturna), a treatment for hypertension. Synthesis of polythiophenes The Kumada coupling also shows promise in the synthesis of conjugated polymers, polymers such as polyalkylthiophenes (PAT), which have a variety of potential applications in organic solar cells and light-emitting diodes. In 1992, McCollough and Lowe developed the first synthesis of regioregular polyalkylthiophenes by utilizing the Kumada coupling scheme pictured below, which requires subzero temperatures. Since this initial preparation, the synthesis has been improved to obtain higher yields and operate at room temperature. See also Heck reaction Hiyama coupling Suzuki reaction Negishi coupling Petasis reaction Stille reaction Sonogashira coupling Murahashi coupling Citations Carbon-carbon bond forming reactions Name reactions
Kumada coupling
[ "Chemistry" ]
2,044
[ "Coupling reactions", "Name reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
10,685,957
https://en.wikipedia.org/wiki/Haplogroup%20S%20%28mtDNA%29
In human genetics, Haplogroup S is a human mitochondrial DNA (mtDNA) haplogroup found only among Indigenous Australians. It is a descendant of macrohaplogroup N. Origin Haplogroup S mtDNA evolved within Australia between 64,000 and 40,000 years ago (point estimate 51 kya). Distribution It is found in the Indigenous Australian population. Haplogroup S2 was found in the Willandra Lakes human remains WLH4, dated to the Late Holocene (3,000–500 years ago). Subclades Tree This phylogenetic tree of haplogroup S subclades is based on the paper by Mannis van Oven and Manfred Kayser, "Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation", and subsequent published research. The TMRCA for haplogroup S is between 49 and 51 kya according to Nano Nagle's publication "Aboriginal Australian mitochondrial genome variation – an increased understanding of population antiquity and diversity", published in 2017. S (64-40 kya) in Australia S1 (53-32 kya) in Australia S1a (44-29 kya) found in WA, NT, QLD and NSW S1b (37-22 kya) found in NT, QLD and NSW S1b1 (30-10 kya) found in NT and QLD S1b1a (24-6 kya) found in QLD S1b2 (17-3 kya) found in QLD S1b3 (20-4 kya) found in QLD and NSW S2 (44-22 kya) in Australia S2a (38-18 kya) found in NT, QLD, NSW and TAS S2a1 (31-12 kya) found in NSW, QLD and TAS S2a1a (19-6 kya) found in NSW and QLD S2a2 (38-11 kya) found in NT, QLD and NSW S2b (42-18 kya) found in WA, NT, QLD and VIC S2b1 (27-9 kya) found in NT, QLD and VIC S2b2 (37-12 kya) found in WA, NT and QLD S-T152C! S3 (17-1 kya) found in NT S4 found in NT S5 found in WA S6 found in NSW See also Genealogical DNA test Genetic genealogy Human mitochondrial genetics Population genetics References External links Ian Logan's Mitochondrial DNA Site: Haplogroup S Mannis van Oven's Phylotree S
Haplogroup S (mtDNA)
[ "Chemistry", "Biology" ]
552
[ "Biochemistry stubs", "Biotechnology stubs", "Bioinformatics", "Bioinformatics stubs" ]
10,685,970
https://en.wikipedia.org/wiki/Haplogroup%20pre-JT
Haplogroup pre-JT is a human mitochondrial DNA haplogroup (mtDNA). It is also called R2'JT. Origin Haplogroup pre-JT is a descendant of the haplogroup R. It is characterised by the mutation T4216C. The pre-JT clade has two direct descendant lineages, haplogroup JT and haplogroup R2. Distribution According to YFull MTree, haplogroup R2'JT has allegedly been sequenced in at least three individuals, among whom one came from ancient Egypt and one from modern Denmark. However, Ian Logan mutationally interpreted the Denmark sample as being a member of T1a. One carrier of haplogroup R2'JT was found in an in-depth study of "108 Scandinavian Neolithic individuals". Subclades Its major subclade is Haplogroup JT, which further divides into Haplogroup J and Haplogroup T. Its other subclade is Haplogroup R2, which has such branches as R2a, R2b, and R2c. Tree R2'JT R2 JT J T See also Genealogical DNA test Genetic genealogy Human mitochondrial genetics Population genetics References External links Ian Logan's Mitochondrial DNA Site JT
Haplogroup pre-JT
[ "Chemistry", "Biology" ]
280
[ "Biochemistry stubs", "Biotechnology stubs", "Bioinformatics", "Bioinformatics stubs" ]
13,169,712
https://en.wikipedia.org/wiki/Embedment
Embedment is a phenomenon in mechanical engineering in which the surfaces between mechanical members of a loaded joint embed. It can lead to failure by fatigue as described below, and is of particular concern when considering the design of critical fastener joints. Mechanism The mechanism behind embedment is different from creep. When the loading of the joint varies (e.g. due to vibration or thermal expansion) the protruding points of the imperfect surfaces will see local stress concentrations and yield until the stress concentration is relieved. Over time, surfaces can flatten an appreciable amount in the order of thousandths of an inch. Consequences In critical fastener joints, embedment can mean loss of preload. Flattening of a surface allows the strain of a screw to relax, which in turn correlates with a loss in tension and thus preload. In bolted joints with particularly short grip lengths, the loss of preload due to embedment can be especially significant, causing complete loss of preload. Therefore, embedment can lead directly to loosening of a fastener joint and subsequent fatigue failure. In bolted joints, most of the embedment occurs during torquing. Only embedment that occurs after installation can cause a loss of preload, and values of up to 0.0005 inches can be seen at each surface mate, as reported by SAE. Prevention and solutions Embedment can be prevented by designing mating surfaces of a joint to have high surface hardness and very smooth surface finish. Exceptionally hard and smooth surfaces will have less susceptibility to the mechanism that causes embedment. In most cases, some degree of embedment is inevitable. That said, short grip lengths should be avoided. For two bolted joints of identical design and installation, except the second having a longer grip length, the first joint will be more likely to loosen and fail. Since both joints have the same loading, the surfaces will experience the same amount of embedment. However, the relaxation in strain is less significant to the longer grip length and the loss in preload will be minimized. For this reason, bolted joints should always be designed with careful consideration for the grip length. If a short grip length can not be avoided, the use of conical spring washers (Belleville washers or disc springs) can also reduce the loss of bolt pre-load due to embedment. See also Stress relaxation References Comer, Dr. Jess. (2005); "Source of Fatigue Failures of Threaded Fasteners", T. Jaglinski, et al. (2007); "Study of Bolt Load Loss in Bolted Aluminum Joints", External links SAE Fatigue Design and Evaluation Committee Mechanical engineering Reliability engineering Fasteners Materials degradation
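To see why grip length matters, here is a minimal numerical sketch. It treats the bolt as a linear spring of stiffness EA/L and the clamped members as rigid (a simplification; a fuller model would put bolt and member stiffnesses in series), and it uses the 0.0005 in per mating surface figure quoted above. The bolt size, modulus and grip lengths are assumed values, not data from the cited sources.

```python
import math

# Rough model: a fixed embedment delta (surface flattening) relaxes the bolt
# stretch by delta, so the preload drops by roughly delta * k_bolt, where the
# bolt stiffness is k_bolt = E * A / L over the grip length L.

E_STEEL = 200e9          # Pa, Young's modulus of a typical steel (assumed)
DIAMETER = 0.008         # m, nominal bolt diameter (assumed)
AREA = math.pi * (DIAMETER / 2) ** 2
EMBEDMENT = 2 * 0.0005 * 0.0254   # m: 0.0005 in at each of two mating surfaces

def preload_loss(grip_length_m: float) -> float:
    """Approximate preload loss (N) caused by a fixed embedment."""
    k_bolt = E_STEEL * AREA / grip_length_m
    return k_bolt * EMBEDMENT

for grip in (0.010, 0.050):  # 10 mm vs 50 mm grip length (assumed)
    print(f"grip {grip*1000:.0f} mm: preload loss ~ {preload_loss(grip)/1000:.1f} kN")
```

With these assumed numbers the 10 mm grip loses several times more preload than the 50 mm grip for the same embedment; treating the members as rigid overstates the absolute loss, but the relative advantage of the longer grip is exactly the article's point.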
Embedment
[ "Physics", "Materials_science", "Engineering" ]
561
[ "Systems engineering", "Applied and interdisciplinary physics", "Reliability engineering", "Fasteners", "Materials science", "Construction", "Mechanical engineering", "Materials degradation" ]
8,558,371
https://en.wikipedia.org/wiki/Alclad
Alclad is a corrosion-resistant aluminium sheet formed from high-purity aluminium surface layers metallurgically bonded (rolled onto) to high-strength aluminium alloy core material. It has a melting point of about . Alclad is a trademark of Alcoa but the term is also used generically. Since the late 1920s, Alclad has been produced as an aviation-grade material, being first used by the sector in the construction of the ZMC-2 airship. The material has significantly more resistance to corrosion than most aluminium-based alloys, for only a modest increase in weight, making Alclad attractive for building various elements of aircraft, such as the fuselage, structural members, skin, and cowling. Accordingly, it became a relatively popular material for aircraft manufacturing. Details The material was described in NACA-TN-259 of August 1927, as "a new corrosion resistant aluminium product which is markedly superior to the present strong alloys. Its use should result in greatly increased life of a structural part. Alclad is a heat-treated aluminium, copper, manganese, magnesium alloy that has the corrosion resistance of pure metal at the surface and the strength of the strong alloy underneath. Of particular importance is the thorough character of the union between the alloy and the pure aluminium. Preliminary results of salt spray tests (24 weeks of exposure) show changes in tensile strength and elongation of Alclad 17ST, when any occurred, to be so small as to be well within the limits of experimental error." In applications involving aircraft construction, Alclad has proven to have increased resistance to corrosion at the expense of increased weight when compared to sheet aluminium. As pure aluminium possesses a relatively greater resistance to corrosion over the majority of aluminium alloys, it was soon recognised that a thin coating of pure aluminium over the exterior surface of those alloys would take advantage of the superior qualities of both materials. Thus, a key advantage of Alclad over most aluminium alloys is its high corrosion resistance. However, considerable care must be taken while working on an Alclad-covered exterior surface, such as while cleaning the skin of an aircraft, to avoid scarring the surface to expose the vulnerable alloy underneath and prematurely age those elements. Due to its relatively shiny natural finish, it is often considered to be cosmetically pleasing when used for external elements, particularly during restoration efforts. It has been observed that some fabrication techniques, such as welding, are not suitable when used in conjunction with Alclad. Mild cleaners with a neutral pH value and finer abrasives are recommended for cleaning and polishing Alclad surfaces. It is common for waterproof wax and other inhibitive coverings to be applied to further reduce corrosion. In the twenty-first century, research and evaluation was underway into new coatings and application techniques. History Alclad sheeting has become a widely used material within the aviation industry for the construction of aircraft due to its favourable qualities, such as a high fatigue resistance and its strength. During the first half of the twentieth century, substantial studies were conducted into the corrosion qualities of various lightweight aluminium alloys for aviation purposes. The first aircraft to be constructed from Alclad was the all-metal US Navy airship ZMC-2, which was constructed in 1927 at Naval Air Station Grosse Ile. 
Prior to this, aluminium had been used on the pioneering zeppelins constructed by Ferdinand Zeppelin. Alclad has been most commonly present in certain elements of an aircraft, including the fuselage, structural members, skin, and cowls. The aluminium alloy that Alclad is derived from has become one of the most commonly used of all aluminium-based alloys. While unclad aluminium has also continued to be extensively used on modern aircraft, which has a lower weight than Alclad, it is more prone to corrosion; the alternating use of the two materials is often defined by the specific components or elements that are composed of them. In aviation-grade Alclad, the thickness of the outer cladding layer typically varies between 1% and 15% of the total thickness. See also Kynal-Core, similar aluminium-clad alloys produced by ICI Duralumin, an aviation-related, copper-content aluminium alloy patented by its inventor Alfred Wilm by 1906 References Citations Bibliography External links Aluminium Alloys via aircraftmaterials.com Corrosion and Inspection of General Aviation Aircraft via caa.co.uk Aluminium Aluminium alloys Corrosion prevention Aerospace materials Bimetal
Alclad
[ "Chemistry", "Materials_science", "Engineering" ]
904
[ "Corrosion prevention", "Aerospace materials", "Metallurgy", "Corrosion", "Bimetal", "Aluminium alloys", "Alloys", "Aerospace engineering" ]
8,559,342
https://en.wikipedia.org/wiki/Deal%E2%80%93Grove%20model
The Deal–Grove model mathematically describes the growth of an oxide layer on the surface of a material. In particular, it is used to predict and interpret thermal oxidation of silicon in semiconductor device fabrication. The model was first published in 1965 by Bruce Deal and Andrew Grove of Fairchild Semiconductor, building on Mohamed M. Atalla's work on silicon surface passivation by thermal oxidation at Bell Labs in the late 1950s. This served as a step in the development of CMOS devices and the fabrication of integrated circuits. Physical assumptions The model assumes that the oxidation reaction occurs at the interface between the oxide layer and the substrate material, rather than between the oxide and the ambient gas. Thus, it considers three phenomena that the oxidizing species undergoes, in this order: It diffuses from the bulk of the ambient gas to the surface. It diffuses through the existing oxide layer to the oxide-substrate interface. It reacts with the substrate. The model assumes that each of these stages proceeds at a rate proportional to the oxidant's concentration. In the first step, this means Henry's law; in the second, Fick's law of diffusion; in the third, a first-order reaction with respect to the oxidant. It also assumes steady state conditions, i.e. that transient effects do not appear. Results Given these assumptions, the flux of oxidant through each of the three phases can be expressed in terms of concentrations, material properties, and temperature. By setting the three fluxes equal to each other the following relations can be derived: Assuming a diffusion controlled growth i.e. where determines the growth rate, and substituting and in terms of from the above two relations into and equation respectively, one obtains: If N is the concentration of the oxidant inside a unit volume of the oxide, then the oxide growth rate can be written in the form of a differential equation. The solution to this equation gives the oxide thickness at any time t. where the constants and encapsulate the properties of the reaction and the oxide layer respectively, and is the initial layer of oxide that was present at the surface. These constants are given as: where , with being the gas solubility parameter of the Henry's law and is the partial pressure of the diffusing gas. Solving the quadratic equation for x yields: Taking the short and long time limits of the above equation reveals two main modes of operation. The first mode, where the growth is linear, occurs initially when is small. The second mode gives a quadratic growth and occurs when the oxide thickens as the oxidation time increases. The quantities B and B/A are often called the quadratic and linear reaction rate constants. They depend exponentially on temperature, like this: where is the activation energy and is the Boltzmann constant in eV. differs from one equation to the other. The following table lists the values of the four parameters for single-crystal silicon under conditions typically used in industry (low doping, atmospheric pressure). The linear rate constant depends on the orientation of the crystal (usually indicated by the Miller indices of the crystal plane facing the surface). The table gives values for and silicon. Validity for silicon The Deal–Grove model works very well for single-crystal silicon under most conditions. However, experimental data shows that very thin oxides (less than about 25 nanometres) grow much more quickly in than the model predicts. 
In silicon nanostructures (e.g., silicon nanowires) this rapid growth is generally followed by diminishing oxidation kinetics in a process known as self-limiting oxidation, necessitating a modification of the Deal–Grove model. If the oxide grown in a particular oxidation step greatly exceeds 25 nm, a simple adjustment accounts for the aberrant growth rate. The model yields accurate results for thick oxides if, instead of assuming zero initial thickness (or any initial thickness less than 25 nm), we assume that 25 nm of oxide exists before oxidation begins. However, for oxides near to or thinner than this threshold, more sophisticated models must be used. In the 1980s, it became obvious that an update to the Deal-Grove model is necessary to model the aforementioned thin oxides (self-limiting cases). One such approach that more accurately models thin oxides is the Massoud model from 1985 [2]. The Massoud model is analytical and based on parallel oxidation mechanisms. It changes the parameters of the Deal-Grove model to better model the initial oxide growth with the addition of rate-enhancement terms. The Deal-Grove model also fails for polycrystalline silicon ("poly-silicon"). First, the random orientation of the crystal grains makes it difficult to choose a value for the linear rate constant. Second, oxidant molecules diffuse rapidly along grain boundaries, so that poly-silicon oxidizes more rapidly than single-crystal silicon. Dopant atoms strain the silicon lattice, and make it easier for silicon atoms to bond with incoming oxygen. This effect may be neglected in many cases, but heavily doped silicon oxidizes significantly faster. The pressure of the ambient gas also affects oxidation rate. References Bibliography External links Online Calculator including pressure, doping, and thin oxide effects Semiconductor device fabrication Chemical engineering Nanomaterials Nanoelectronics
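For concreteness, the closed-form Deal–Grove result can be evaluated directly. The sketch below implements the relation x² + Ax = B(t + τ) and its quadratic-formula solution as described above; the rate constants in the example are order-of-magnitude placeholders for wet oxidation of silicon near 1000 °C, not the tabulated values the article refers to.

```python
import math

def oxide_thickness(t_hours, A_um, B_um2_per_h, tau_hours=0.0):
    """Oxide thickness (um) after time t from the Deal-Grove relation

        x^2 + A*x = B*(t + tau)

    solved with the quadratic formula. B/A is the linear rate constant
    (thin-oxide regime, x ~ (B/A)*(t + tau)) and B is the parabolic rate
    constant (thick-oxide regime, x ~ sqrt(B*t)). tau accounts for any
    oxide present before the oxidation step begins.
    """
    A, B = A_um, B_um2_per_h
    return (A / 2.0) * (math.sqrt(1.0 + 4.0 * B * (t_hours + tau_hours) / A**2) - 1.0)

# Placeholder rate constants of a plausible order of magnitude for wet oxidation
# near 1000 C (illustrative only; consult published tables for real work).
A, B = 0.5, 0.3   # um, um^2/h
for t in (0.1, 1.0, 4.0, 16.0):
    print(f"t = {t:5.1f} h : x = {oxide_thickness(t, A, B):.3f} um")
```

The printed values show the two regimes discussed above: growth is roughly linear in time at first and tends towards a square-root law once the oxide is thick.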
Deal–Grove model
[ "Chemistry", "Materials_science", "Engineering" ]
1,089
[ "Microtechnology", "Chemical engineering", "Semiconductor device fabrication", "Nanoelectronics", "nan", "Nanotechnology", "Nanomaterials" ]
8,559,750
https://en.wikipedia.org/wiki/Hin%20recombinase
Hin recombinase is a 21kD protein composed of 198 amino acids that is found in the bacteria Salmonella. Hin belongs to the serine recombinase family (B2) of DNA invertases in which it relies on the active site serine to initiate DNA cleavage and recombination. The related protein, gamma-delta resolvase shares high similarity to Hin, of which much structural work has been done, including structures bound to DNA and reaction intermediates. Hin functions to invert a 900 base pair (bp) DNA segment within the salmonella genome that contains a promoter for downstream flagellar genes, fljA and fljB. Inversion of the intervening DNA alternates the direction of the promoter and thereby alternates expression of the flagellar genes. This is advantageous to the bacterium as a means of escape from the host immune response. Hin functions by binding to two 26bp imperfect inverted repeat sequences as a homodimer. These hin binding sites flank the invertible segment which not only encodes the Hin gene itself, but also contains an enhancer element to which the bacterial Fis proteins binds with nanomolar affinity. Four molecules of Fis bind to this site as a homodimers and are required for the recombination reaction to proceed. The initial reaction requires binding of Hin and Fis to their respective DNA sequences and assemble into a higher-order nucleoprotein complex with branched plectonemic supercoils with the aid of the DNA bending protein HU. At this point, it is believed that the Fis protein modulates subtle contacts to activate the reaction, possibly through direct interactions with the Hin protein. Activation of the 4 catalytic serine residues within the Hin tetramer make a 2-bp double stranded DNA break and forms a covalent reaction intermediate. The DNA cleavage event also requires the divalent metal cation magnesium. A large conformational change reveals a large hydrophobic interface that allows for subunit rotation which may be driven by superhelical torsion within the protein-DNA complex. After this 180° rotation, Hin returns to its native conformation and re-ligates the cleaved DNA, without the aid of high energy cofactors and without the loss of any DNA. References Genetics techniques Enzymes
Hin recombinase
[ "Engineering", "Biology" ]
478
[ "Genetics techniques", "Genetic engineering" ]
8,562,026
https://en.wikipedia.org/wiki/Electroluminescent%20wire
Electroluminescent wire (often abbreviated as EL wire) is a thin copper wire coated in a phosphor that produces light through electroluminescence when an alternating current is applied to it. It can be used in a wide variety of applications—vehicle and structure decoration, safety and emergency lighting, toys, clothing etc.—much as rope light or Christmas lights are often used. Unlike these types of strand lights, EL wire is not a series of points, but produces a continuous unbroken line of visible light. Its thin diameter makes it flexible and ideal for use in a variety of applications such as clothing or costumes. Structure EL wire's construction consists of five major components. First is a solid-copper wire core coated with phosphor. A very fine wire or pair of wires is spiral-wound around the phosphor-coated copper core and then the outer Indium tin oxide (ITO) conductive coating is evaporated on. This fine wire is electrically isolated from the copper core. Surrounding this "sandwich" of copper core, phosphor and fine copper wire is a clear PVC sleeve. Finally, surrounding this thin and clear PVC sleeve is another clear, colored translucent or fluorescent PVC sleeve. An alternating current electric potential of approximately 90 to 120 volts at about 1000 Hz is applied between the copper core wire and the fine wire that surrounds the copper core. The wire can be modeled as a coaxial capacitor with about 1 nF of capacitance per 30 cm, and the rapid charging and discharging of this capacitor excites the phosphor to emit light. The colors of light that can be produced efficiently by phosphors are limited, so many types of wire use an additional fluorescent organic dye in the clear PVC sleeve to produce the final result. These organic dyes produce colors like red and purple when excited by the blue-green light of the core. A resonant oscillator is typically used to generate the high voltage drive signal. Because of the capacitance load of the EL wire, using an inductive (coiled) transformer makes the driver a very efficient tuned LC oscillator. The efficiency of EL wire is very high, and thus up to a hundred meters of EL wire can be driven by AA batteries for several hours. In recent years, the LC circuit has been replaced for some applications with a single chip switched capacitor inverter IC such as the Supertex HV850; this can run 30 cm of angel hair wire at high efficiency, and is suitable for solar lanterns and safety applications. The other advantage of these chips is that the control signals can be derived from a microcontroller, so brightness and colour can be varied programmatically; this can be controlled by using external sensors that sense, for example, battery state, ambient temperature, or ambient light etc. EL wire - in common with other types of EL devices - does have limitations: at high frequency it dissipates a lot of heat, and that can lead to breakdown and loss of brightness over time. Because the wire is unshielded and typically operates at a relatively high voltage, EL wire can produce high-frequency interference (corresponding to the frequency of the oscillator) that can be picked up by sensitive audio equipment, such as guitar pickups. There is also a voltage limit: typical EL wire breaks down at around 180 volts peak-to-peak, so if using an unregulated transformer, back-to-back zener diodes and series current-limiting resistors are essential. 
In addition, EL sheet and wire can sometimes be used as a touch sensor, since compressing the capacitor will change its value. Sequencers EL wire sequencers can flash electroluminescent wire, or EL wire, in sequential patterns. EL wire requires a low-power, high-frequency driver to cause the wire to illuminate. Most EL wire drivers simply light up one strand of EL wire in a constant-on mode, and some drivers may additionally have a blink or strobe mode. A sound-activated driver will light EL wire in synchronization to music, speech, or other ambient sound, but an EL wire sequencer will allow multiple lengths of EL wire to be flashed in a desired sequence. The lengths of EL wire can all be the same color, or a variety of colors. The images above show a sign that displays a telephone number, where the numbers were formed using different colors of EL wire. There are ten numbers, each of which is connected to a different channel of the EL wire sequencer. Like EL wire drivers, sequencers are rated to drive (or power) a range or specific length of EL wire. For example, using a sequencer rated for 1.5 to 14 meters (5 to 45 feet), if less than 1.5m is used, there is a risk of burning out the sequencer, and if more than 14m is used, the EL wire will not light as brightly as intended. There are commercially available EL wire sequencers capable of lighting three, four, five, or ten lengths of EL wire. There are professional and experimental sequencers with many more than ten channels, but for most applications, ten channels is enough. Sequencers usually have options for changing the speed, reversing, changing the order of the sequence, and sometimes for changing whether the first wires remain lit or go off as the rest of the wires in the sequence are lit. EL wire sequencers tend to be smaller than a pack of cigarettes and most are powered by batteries. This versatility lends to the sequencers' use at nighttime events where mains electricity is not available. Applications By arranging each strand of EL wire into a shape slightly different from the previous one, it is possible to create animations using EL wire sequencers. EL wire sequencers are also used for costumes and have been used to create animations on various items such as kimono, purses, neckties, and motorcycle tanks. They are increasingly popular among artists, dancers, maker culture, and similar creative communities, such as exhibited in the annual Burning Man alt-culture festival. References 5,753,381 US Patent, Electroluminescent Filament Notes External links How Electroluminescent (EL) Wire Works, by Joanna Burgess // How Stuff Works Display technology Lighting Luminescence Wire
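The drive requirements can be estimated from the figures quoted above (about 1 nF per 30 cm, roughly 90 to 120 V at about 1000 Hz). The sketch below treats the wire as an ideal capacitor and ignores resistive losses in the phosphor, so the numbers are order-of-magnitude only.

```python
import math

C_PER_METER = 1e-9 / 0.3   # ~1 nF per 30 cm, from the article

def el_wire_drive(length_m, volts_rms=110.0, freq_hz=1000.0):
    """Rough drive figures for EL wire modelled as an ideal capacitor.

    Real wire also has resistive (phosphor) losses, so treat these numbers
    as order-of-magnitude estimates only.
    """
    c = C_PER_METER * length_m
    xc = 1.0 / (2.0 * math.pi * freq_hz * c)       # capacitive reactance, ohms
    i_rms = volts_rms / xc                          # charging current, amps
    va = volts_rms * i_rms                          # reactive volt-amps
    return c, xc, i_rms, va

for length in (1.0, 10.0, 100.0):
    c, xc, i, va = el_wire_drive(length)
    print(f"{length:5.0f} m: C = {c*1e9:6.1f} nF, Xc = {xc/1e3:7.1f} kOhm, "
          f"I = {i*1e3:5.1f} mA, {va:5.1f} VA")
```

The volt-amps for long runs look large, but they are almost entirely reactive; a resonant LC driver recirculates that energy rather than dissipating it, which is consistent with the article's remark that up to a hundred metres can run from AA batteries for hours.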
Electroluminescent wire
[ "Chemistry", "Engineering" ]
1,328
[ "Electronic engineering", "Luminescence", "Molecular physics", "Display technology" ]
8,562,999
https://en.wikipedia.org/wiki/Rippling
In computer science, more particularly in automated theorem proving, rippling refers to a group of meta-level heuristics, developed primarily in the Mathematical Reasoning Group in the School of Informatics at the University of Edinburgh, and most commonly used to guide inductive proofs in automated theorem proving systems. Rippling may be viewed as a restricted form of rewrite system, where special object level annotations are used to ensure fertilization upon the completion of rewriting, with a measure decreasing requirement ensuring termination for any set of rewrite rules and expression. History Raymond Aubin was the first person to use the term "rippling out" whilst working on his 1976 PhD thesis at the University of Edinburgh. He recognised a common pattern of movement during the rewriting stage of inductive proofs. Alan Bundy later turned this concept on its head by defining rippling to be this pattern of movement, rather than a side effect. Since then, "rippling sideways", "rippling in" and "rippling past" were coined, so the term was generalised to rippling. Rippling continues to be developed at Edinburgh, and elsewhere, as of 2007. Rippling has been applied to many problems traditionally viewed as being hard in the inductive theorem proving community, including Bledsoe's limit theorems and a proof of the Gordon microprocessor, a miniature computer developed by Michael J. C. Gordon and his team at Cambridge. Overview Very often, when attempting to prove a proposition, we are given a source expression and a target expression, which differ only by the inclusion of a few extra syntactic elements. This is especially true in inductive proofs, where the given expression is taken to be the inductive hypothesis, and the target expression the inductive conclusion. Usually, the differences between the hypothesis and conclusion are only minor, perhaps the inclusion of a successor function (e.g., +1) around the induction variable. At the start of rippling the differences between the two expressions, known as wave-fronts in rippling parlance, are identified. Typically these differences prevent the completion of the proof and need to be "moved away". The target expression is annotated to distinguish the wavefronts (differences) and skeleton (common structure) between the two expressions. Special rules, called wave rules, can then be used in a terminating fashion to manipulate the target expression until the source expression can be used to complete the proof. Example We aim to show that the addition of natural numbers is commutative. This is an elementary property, and the proof is by routine induction. Nevertheless, the search space for finding such a proof may become quite large. Typically, the base case of any inductive proof is solved by methods other than rippling. For this reason, we will concentrate on the step case. Our step case takes the following form, where we have chosen to use x as the induction variable: We may also possess several rewrite rules, drawn from lemmas, inductive definitions or elsewhere, that can be used to form wave-rules. Suppose we have the following three rewrite rules: then these can be annotated, to form: Note that all these annotated rules preserve the skeleton (x + y = y + x, in the first case and x + y in the second/third). 
Now, annotating the inductive step case, gives us: And we are all set to perform rippling: Note that the final rewrite causes all wave-fronts to disappear, and we may now apply fertilization, the application of the inductive hypotheses, to complete the proof. References Further reading Heuristics Automated theorem proving
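The worked example lends itself to a toy implementation. The sketch below is not the annotated wave-rule machinery of a real prover: it just applies two plausible rewrite rules for Peano addition, s(X) + Y → s(X + Y) and Y + s(X) → s(Y + X), as plain string rewrites, to show how the wave-fronts ripple outward until the induction hypothesis appears and fertilization can close the proof.

```python
import re

# Toy illustration of rippling on the step case of  x + y = y + x.
# Assumed, simplified wave rules (the "s(...)" is the wave-front that
# distinguishes the conclusion from the hypothesis):
#   R1:  s(X) + Y  ->  s(X + Y)     (recursive definition of +)
#   R2:  Y + s(X)  ->  s(Y + X)     (a lemma)

def ripple(goal: str):
    steps = [goal]
    rules = [
        (r"s\((\w+)\) \+ (\w+)", r"s(\1 + \2)"),   # R1
        (r"(\w+) \+ s\((\w+)\)", r"s(\1 + \2)"),   # R2
    ]
    changed = True
    while changed:
        changed = False
        for pat, rep in rules:
            new = re.sub(pat, rep, steps[-1], count=1)
            if new != steps[-1]:
                steps.append(new)
                changed = True
    return steps

hypothesis = "x + y = y + x"
conclusion = "s(x) + y = y + s(x)"
trace = ripple(conclusion)
for line in trace:
    print(line)

# After rippling both sides the wave-fronts meet: "s(x + y) = s(y + x)".
# Cancelling the outer s(...) on both sides leaves exactly the hypothesis,
# which can then be applied to finish the proof (fertilization).
lhs, rhs = trace[-1].split(" = ")
assert lhs.startswith("s(") and rhs.startswith("s(")
print("after cancellation:", lhs[2:-1], "=", rhs[2:-1])
print("matches hypothesis:", (lhs[2:-1] + " = " + rhs[2:-1]) == hypothesis)
```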
Rippling
[ "Mathematics" ]
770
[ "Mathematical logic", "Computational mathematics", "Automated theorem proving" ]
8,563,981
https://en.wikipedia.org/wiki/OMDoc
OMDoc (Open Mathematical Documents) is a semantic markup format for mathematical documents. While MathML only covers mathematical formulae and the related OpenMath standard only supports formulae and “content dictionaries” containing definitions of the symbols used in formulae, OMDoc covers the whole range of written mathematics. Coverage OMDoc allows for mathematical expressions on three levels: Object level: Formulae, written in Content MathML (the non-presentational subset of MathML), OpenMath or languages for mathematical logic. Statement level: Definitions, theorems, proofs, examples and the relations between them (e.g. “this proof proves that theorem”). Theory level: A theory is a set of contextually related statements. Theories may import each other, thereby forming a graph. Seen as collections of symbol definitions, OMDoc theories are compatible with OpenMath content dictionaries. On each level, formal syntax and informal natural language can be used, depending on the application. Semantics and Presentation OMDoc is a semantic markup language that allows writing down the meaning of texts about mathematics. In contrast to LaTeX, for example, it is not primarily presentation-oriented. An OMDoc document need not specify what its contents should look like. A conversion to LaTeX and XHTML (with Presentation MathML for the formulae) is possible, though. To this end, the presentation of each symbol can be defined. Applications Today, OMDoc is used in the following settings: E-learning: Creation of customized textbooks. Data exchange: OMDoc import and export modules are available for many automated theorem provers and computer algebra systems. OMDoc is intended to be used for communication between mathematical web services. Document preparation: Documents about mathematics can be prepared in OMDoc and later exported to a presentation-oriented format like LaTeX or XHTML+MathML. History OMDoc has been developed by the German mathematician and computer scientist Michael Kohlhase since 1998. So far, there have been the following releases: 1.0 (November 2000) 1.1 (December 2001) 1.2 (July 2006) Future developments It is planned to create the infrastructure for a “semantic web for technology and science” based on OMDoc. To this end, OMDoc is being extended towards sciences other than mathematics. The first result is PhysML, an OMDoc variant extended towards physics. For a better integration with other Semantic Web applications, an OWL ontology of OMDoc is under development, as well as an export facility to RDF. See also Mathematical knowledge management References Michael Kohlhase (2006): An Open Markup Format for Mathematical Documents (Version 1.2). Lecture Notes in Artificial Intelligence, no. 4180. Springer Verlag, Heidelberg. External links Wiki for OMDoc and related projects Markup languages Mathematical markup languages Semantic Web XML-based standards
OMDoc
[ "Mathematics", "Technology" ]
608
[ "Computer standards", "Mathematical markup languages", "XML-based standards" ]
8,564,033
https://en.wikipedia.org/wiki/Ronchi%20test
In optical testing a Ronchi test is a method of determining the surface shape (figure) of a mirror used in telescopes and other optical devices. Description In 1923 Italian physicist Vasco Ronchi published a description of the eponymous Ronchi test, which is a variation of the Foucault knife-edge test and which uses simple equipment to test the quality of optics, especially concave mirrors. . A "Ronchi tester" consists of: A light source A diffuser A Ronchi grating A Ronchi grating consists of alternate dark and clear stripes. One design is a small frame with several evenly spaced fine wires attached. Light is emitted through the Ronchi grating (or a single slit), reflected by the mirror being tested, then passes through the Ronchi grating again and is observed by the person doing the test. The observer's eye is placed close to the centre of curvature of the mirror under test looking at the mirror through the grating. The Ronchi grating is a short distance (less than 2 cm) closer to the mirror. The observer sees the mirror covered in a pattern of stripes that reveal the shape of the mirror. The pattern is compared to a mathematically generated diagram (usually done on a computer today) of what it should look like for a given figure. Inputs to the program are line frequency of the Ronchi grating, focal length and diameter of the mirror, and the figure required. If the mirror is spherical, the pattern consists of straight lines. Applications The Ronchi test is used in the testing of mirrors for reflecting telescopes especially in the field of amateur telescope making. It is much faster to set up than the standard Foucault knife-edge test. The Ronchi test differs from the knife-edge test, requiring a specialized target (the Ronchi grating, which amounts to a periodic series of knife edges) and being more difficult to interpret. This procedure offers a quick evaluation of the mirror's shape and condition. It readily identifies a 'turned edge' (rolled down outer diameter of the mirror), a common fault that can develop in objective mirror making. The figure quality of a convex lens may be visually tested using a similar principle. The grating is moved around the focal point of the lens while viewing the virtual image through the opposite side. Distortions in the lens surface figure then appear as asymmetries in the periodic grating image. Footnotes References Scienceworld - Wolfram.com - Ronchi test The ATM's Workshop Matching Ronchi Test Optics Telescopes
Ronchi test
[ "Physics", "Chemistry", "Astronomy" ]
518
[ "Applied and interdisciplinary physics", "Optics", "Telescopes", " molecular", "Astronomical instruments", "Atomic", " and optical physics" ]
8,564,483
https://en.wikipedia.org/wiki/Bogdanov%E2%80%93Takens%20bifurcation
In bifurcation theory, a field within mathematics, a Bogdanov–Takens bifurcation is a well-studied example of a bifurcation with co-dimension two, meaning that two parameters must be varied for the bifurcation to occur. It is named after Rifkat Bogdanov and Floris Takens, who independently and simultaneously described this bifurcation. A system y′ = f(y) undergoes a Bogdanov–Takens bifurcation if it has a fixed point and the linearization of f around that point has a double eigenvalue at zero (assuming that some technical nondegeneracy conditions are satisfied). Three codimension-one bifurcations occur nearby: a saddle-node bifurcation, an Andronov–Hopf bifurcation and a homoclinic bifurcation. All associated bifurcation curves meet at the Bogdanov–Takens bifurcation. The normal form of the Bogdanov–Takens bifurcation is, in one common convention, ẏ1 = y2, ẏ2 = β1 + β2y1 + y1² ± y1y2. There exist two codimension-three degenerate Takens–Bogdanov bifurcations, also known as Dumortier–Roussarie–Sotomayor bifurcations. References Bogdanov, R. "Bifurcations of a Limit Cycle for a Family of Vector Fields on the Plane." Selecta Math. Soviet 1, 373–388, 1981. Kuznetsov, Y. A. Elements of Applied Bifurcation Theory. New York: Springer-Verlag, 1995. Takens, F. "Forced Oscillations and Bifurcations." Comm. Math. Inst. Rijksuniv. Utrecht 2, 1–111, 1974. Dumortier F., Roussarie R., Sotomayor J. and Zoladek H., Bifurcations of Planar Vector Fields, Lecture Notes in Math. vol. 1480, 1–164, Springer-Verlag (1991). External links Bifurcation theory
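For reference, the local codimension-one curves can be read off directly from the normal form quoted above (one common convention, following Kuznetsov; signs and parameter placement differ between authors). The short derivation below is a standard exercise, included only for concreteness.

```latex
\[
  \dot y_1 = y_2, \qquad
  \dot y_2 = \beta_1 + \beta_2 y_1 + y_1^{2} + s\, y_1 y_2, \qquad s = \pm 1 .
\]
Equilibria satisfy $y_2 = 0$ and $y_1^{2} + \beta_2 y_1 + \beta_1 = 0$.
The saddle-node curve is where the two equilibria collide, i.e.\ where the
discriminant vanishes,
\[
  \beta_2^{2} - 4\beta_1 = 0 .
\]
At an equilibrium $(y_1^{*},0)$ the Jacobian is
$\bigl(\begin{smallmatrix} 0 & 1 \\ 2y_1^{*}+\beta_2 & s\,y_1^{*} \end{smallmatrix}\bigr)$,
whose trace vanishes only for $y_1^{*}=0$; this forces $\beta_1 = 0$, and the
determinant $-\beta_2$ must be positive, giving the Andronov--Hopf curve
$\beta_1 = 0$, $\beta_2 < 0$. A curve of homoclinic bifurcations emanates from
the origin between these two, and all three curves meet at the
Bogdanov--Takens point $\beta_1 = \beta_2 = 0$.
```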
Bogdanov–Takens bifurcation
[ "Mathematics" ]
435
[ "Bifurcation theory", "Dynamical systems" ]
8,564,970
https://en.wikipedia.org/wiki/Landau%E2%80%93Kolmogorov%20inequality
In mathematics, the Landau–Kolmogorov inequality, named after Edmund Landau and Andrey Kolmogorov, is the following family of interpolation inequalities between different derivatives of a function f defined on a subset T of the real numbers: ||f^(k)|| ≤ C(n, k, T) ||f||^(1−k/n) ||f^(n)||^(k/n) for 1 ≤ k < n, where ||·|| denotes the supremum norm on T. On the real line For k = 1, n = 2 and T = [c,∞) or T = R, the inequality was first proved by Edmund Landau with the sharp constants C(2, 1, [c,∞)) = 2 and C(2, 1, R) = √2. Following contributions by Jacques Hadamard and Georgiy Shilov, Andrey Kolmogorov found the sharp constants for arbitrary n and k: C(n, k, R) = a_(n−k) · a_n^(−1+k/n), where a_n are the Favard constants. On the half-line Following work by Matorin and others, the extremising functions were found by Isaac Jacob Schoenberg; explicit forms for the sharp constants are, however, still unknown. Generalisations There are many generalisations, which are of the form ||f^(k)||_(Lq(T)) ≤ K ||f||_(Lp(T))^α ||f^(n)||_(Lr(T))^(1−α) for an appropriate exponent α. Here all three norms can be different from each other (from L1 to L∞, with p = q = r = ∞ in the classical case) and T may be the real axis, semiaxis or a closed segment. The Kallman–Rota inequality generalizes the Landau–Kolmogorov inequalities from the derivative operator to more general contractions on Banach spaces. Notes Inequalities
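As a concrete illustration, the sketch below checks the classical k = 1, n = 2 case on the real line, where the sharp constant is √2, against two test functions evaluated on a dense grid. The test functions and the grid are arbitrary choices for illustration.

```python
import numpy as np

# Check ||f'|| <= sqrt(2) * ||f||^(1/2) * ||f''||^(1/2) (sup norms) for two
# smooth bounded functions, using their analytic derivatives.
x = np.linspace(-10, 10, 200_001)

tests = {
    "sin(x)":    (np.sin(x),        np.cos(x),               -np.sin(x)),
    "exp(-x^2)": (np.exp(-x**2), -2 * x * np.exp(-x**2), (4 * x**2 - 2) * np.exp(-x**2)),
}

for name, (f, f1, f2) in tests.items():
    lhs = np.abs(f1).max()
    rhs = np.sqrt(2) * np.sqrt(np.abs(f).max()) * np.sqrt(np.abs(f2).max())
    print(f"{name:10s}  ||f'|| = {lhs:.4f}  <=  sqrt(2)*||f||^0.5*||f''||^0.5 = {rhs:.4f}")
```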
Landau–Kolmogorov inequality
[ "Mathematics" ]
300
[ "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
8,565,423
https://en.wikipedia.org/wiki/No-teleportation%20theorem
In quantum information theory, the no-teleportation theorem states that an arbitrary quantum state cannot be converted into a sequence of classical bits (or even an infinite number of such bits); nor can such bits be used to reconstruct the original state, thus "teleporting" it by merely moving classical bits around. Put another way, it states that the unit of quantum information, the qubit, cannot be exactly, precisely converted into classical information bits. This should not be confused with quantum teleportation, which does allow a quantum state to be destroyed in one location, and an exact replica to be created at a different location. In crude terms, the no-teleportation theorem stems from the Heisenberg uncertainty principle and the EPR paradox: although a qubit can be imagined to be a specific direction on the Bloch sphere, that direction cannot be measured precisely, for the general case ; if it could, the results of that measurement would be describable with words, i.e. classical information. The no-teleportation theorem is implied by the no-cloning theorem: if it were possible to convert a qubit into classical bits, then a qubit would be easy to copy (since classical bits are trivially copyable). Formulation The term quantum information refers to information stored in the state of a quantum system. Two quantum states ρ1 and ρ2 are identical if the measurement results of any physical observable have the same expectation value for ρ1 and ρ2. Thus measurement can be viewed as an information channel with quantum input and classical output, that is, performing measurement on a quantum system transforms quantum information into classical information. On the other hand, preparing a quantum state takes classical information to quantum information. In general, a quantum state is described by a density matrix. Suppose one has a quantum system in some mixed state ρ. Prepare an ensemble, of the same system, as follows: Perform a measurement on ρ. According to the measurement outcome, prepare a system in some pre-specified state. The no-teleportation theorem states that the result will be different from ρ, irrespective of how the preparation procedure is related to measurement outcome. A quantum state cannot be determined via a single measurement. In other words, if a quantum channel measurement is followed by preparation, it cannot be the identity channel. Once converted to classical information, quantum information cannot be recovered. In contrast, perfect transmission is possible if one wishes to convert classical information to quantum information then back to classical information. For classical bits, this can be done by encoding them in orthogonal quantum states, which can always be distinguished. See also Among other no-go theorems in quantum information are: No-communication theorem. Entangled states cannot be used to transmit classical information. No-cloning theorem. Quantum states cannot be copied. No-broadcast theorem. A generalization of the no cloning theorem, to the case of mixed states. No-deleting theorem. A result dual to the no-cloning theorem: copies cannot be deleted. With the aid of shared entanglement, quantum states can be teleported, see Quantum teleportation References Jozef Gruska, Iroshi Imai, "Power, Puzzles and Properties of Entanglement" (2001) pp 25–68, appearing in Machines, Computations, and Universality: Third International Conference. edited by Maurice Margenstern, Yurii Rogozhin. 
(see p 41) Anirban Pathak, Elements of Quantum Computation and Quantum Communication (2013) CRC Press. (see p. 128) Quantum information theory Limits of computation No-go theorems
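A small simulation makes the "measure, then re-prepare" channel discussed above concrete. The sketch below is purely illustrative: it uses a single computational-basis measurement and the standard state fidelity, with invented sample sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_and_prepare_fidelity(theta: float, shots: int = 100_000) -> float:
    """Prepare |psi> = cos(theta/2)|0> + sin(theta/2)|1>, measure once in the
    computational basis, then re-prepare |0> or |1> according to the outcome.
    Returns the average fidelity |<psi|phi>|^2 of the re-prepared state."""
    p1 = np.sin(theta / 2) ** 2                 # probability of outcome 1
    outcomes = rng.random(shots) < p1
    # Fidelity with the original state: cos^2(theta/2) if we re-prepared |0>,
    # sin^2(theta/2) if we re-prepared |1>.
    fid = np.where(outcomes, np.sin(theta / 2) ** 2, np.cos(theta / 2) ** 2)
    return float(fid.mean())

for theta in (0.0, np.pi / 4, np.pi / 2):
    print(f"theta = {theta:.3f}  average fidelity = {measure_and_prepare_fidelity(theta):.4f}")
# Only the basis states (theta = 0 or pi) survive the classical round trip with
# fidelity 1; every other qubit state is degraded, as the theorem requires.
```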
No-teleportation theorem
[ "Physics" ]
756
[ "Physical phenomena", "No-go theorems", "Equations of physics", "Limits of computation", "Physics theorems" ]
5,400,794
https://en.wikipedia.org/wiki/Vesicular%20monoamine%20transporter%201
Vesicular monoamine transporter 1 (VMAT1) also known as chromaffin granule amine transporter (CGAT) or solute carrier family 18 member 1 (SLC18A1) is a protein that in humans is encoded by the SLC18A1 gene. VMAT1 is an integral membrane protein, which is embedded in synaptic vesicles and serves to transfer monoamines, such as norepinephrine, epinephrine, dopamine, and serotonin, between the cytosol and synaptic vesicles. SLC18A1 is an isoform of the vesicular monoamine transporter. Discovery The idea that there must be specific transport proteins associated with the uptake of monoamines and acetylcholine into vesicles developed due to the discovery of specific inhibitors which interfered with monoamine neurotransmission and also depleted monoamines in neuroendocrine tissues. VMAT1 and VMAT2 were first identified in rats upon cloning cDNAs for proteins which gave non-amine accumulating recipient cells the ability to sequester monoamines. Subsequently, human VMATs were cloned using human cDNA libraries with the rat homologs as probes, and heterologous-cell amine uptake assays were performed to verify transport properties. Structure Across mammalian species, VMATs have been found to be structurally well conserved; VMAT1s have an overall sequence identity exceeding 80%. However, there exists only a 60% sequence identity between the human VMAT1 and VMAT2. VMAT1 is an acidic glycoprotein with an apparent weight of 40 kDa. Although the crystallographic structure has not yet been fully resolved, VMAT1 is known to have either twelve transmembrane domains (TMDs), based on Kyte-Doolittle hydrophobicity scale analysis, or ten TMDs, based on MAXHOM alignment. MAXHOM alignment was determined using the "profile-fed neural network systems from Heidelberg" (PHD) program. The main difference between these two models arises from the placement of TMDs II and IV in the vesicle lumen or the cytoplasm. Localization Cell types VMATs are found in a variety of cell types throughout the body; however, VMAT1 is found exclusively in neuroendocrine cells, in contrast to VMAT2, which is also found in the PNS and CNS. Specifically, VMAT1 is found in chromaffin cells, enterochromaffin cells, and small intensely fluorescent cells (SIFs). Chromaffin cells are responsible for releasing the catecholamines (norepinephrine and epinephrine) into systemic circulation. Enterochromaffin cells are responsible for storing serotonin in the gastrointestinal tract. SIFs are interneurons associated with the sympathetic nervous system which are managed by dopamine. Vesicles VMAT1 is found in both large dense-core vesicles (LDCVs) and small synaptic vesicles (SSVs). This was discovered by studying rat adrenal medulla cells (PC12 cells). LDCVs are 70-200 nm in size and exist throughout the neuron (soma, dendrites, etc.). SSVs are much smaller (usually about 40 nm) and typically exist as clusters in the presynaptic cleft. Function Active transport of monoamines Driving force The active transport of monoamines from the cytosol into storage vesicles operates against a large (>10^5) concentration gradient. Secondary active transport is the type of active transport used, meaning that VMAT1 is an antiporter. This transport is facilitated by a proton gradient generated by a proton ATPase. The inward transport of the monoamine is coupled with the efflux of two protons per monoamine.
The first proton is thought to cause a change in VMAT1's conformation, which exposes a high-affinity amine binding site, to which the monoamine attaches. The second proton then causes a second change in the conformation which pulls the monoamine into the vesicle and greatly reduces the affinity of the binding site for amines. A series of tests suggests that His419, located between TMDs X and XI, plays the key role in the first of these conformational changes, and that Asp431, located on TMD XI, does likewise during the second change. Inhibition Several reuptake inhibitors of VMATs are known to exist, including reserpine (RES), tetrabenazine (TBZ), dihydrotetrabenazine (DTBZOH), and ketanserin (KET). It is thought that RES exhibits competitive inhibition, binding to the same site as the monoamine substrate, as studies have shown that it can be displaced via introduction of norepinephrine. TBZ, DTBZOH, and KET are thought to exhibit non-competitive inhibition, instead binding to allosteric sites and decreasing the activity of the VMAT rather than simply blocking its substrate binding site. It has been found that these inhibitors are less effective at inhibiting VMAT1 than VMAT2, and the inhibitory effects of the tetrabenazines on VMAT1 are negligible. Clinical significance Pancreatic cancer The expression of VMAT1 in healthy endocrine cells was compared to VMAT1 expression in infants with hyperinsulinemic hypoglycemia and adults with pancreatic endocrine tumors. Through immunohistochemistry (IHC) and in situ hybridization (ISH), the investigators found that VMAT1 and VMAT2 were located in mutually exclusive cell types, and that in insulinomas VMAT2 activity disappeared, suggesting that if only VMAT1 activity is present in the endocrine system, this type of cancer is likely. Digestive system VMAT1 also has effects on the modulation of gastrin processing in G cells. These intestinal endocrine cells process amine precursors, and VMAT1 pulls them into vesicles for storage. The activity of VMAT1 in these cells has a seemingly inhibitory effect on the processing of gastrin. Essentially, this means that certain compounds in the gut can be taken into these G cells and either amplify or inhibit the function of VMAT1, which will impact gastrin processing (conversion from G34 to G17). Additionally, VMAT1 is known to play a role in the uptake and secretion of serotonin in the gut. Enterochromaffin cells in the intestines will secrete serotonin in response to the activation of certain mechanosensors. The regulation of serotonin in the gut is critically important, as it modulates appetite and controls intestinal contraction. Protection against hypothermia Presence of VMAT1 in cells has been shown to protect them from the damaging effects of cooling and rewarming associated with hypothermia. Experiments were carried out on aortic and kidney cells and tissues. Evidence was found that an accumulation of serotonin using VMAT1 and TPH1 allowed for the subsequent release of serotonin when exposed to cold temperatures. This allows cystathionine beta synthase (CBS) mediated generation of H2S. The protection against the damage caused by hypothermia is attributed to the presence of H2S, which reduces the generation of reactive oxygen species (ROS) that can induce apoptosis. Mental disorders VMAT1 (SLC18A1) maps to a shared bipolar disorder (BPD)/schizophrenia locus, which is located on chromosome 8p21.
It is thought that disruption in transport of monoamine neurotransmitters due to variation in the VMAT1 gene may be relevant to the etiology of these mental disorders. One study looked at a population of European descent, examining the genotypes of a bipolar group and a control group. The study confirmed expression of VMAT1 in the brain at a protein and mRNA level, and found a significant difference between the two groups, suggesting that, at least for people of European descent, variation in the VMAT1 gene may confer susceptibility. A second study examined a population of Japanese individuals, one group healthy and the other schizophrenic. This study resulted in mostly inconclusive findings, but some indications that variation in the VMAT1 gene would confer susceptibility to schizophrenia in Japanese women. While these studies provide some promising insight into the cause of some of the most prevalent mental disorders, it is clear that additional research will be necessary in order to gain a full understanding. References External links Amphetamine Biogenic amines Molecular neuroscience Neurotransmitter transporters Receptors Signal transduction Solute carrier family
Vesicular monoamine transporter 1
[ "Chemistry", "Biology" ]
1,880
[ "Biomolecules by chemical classification", "Biogenic amines", "Signal transduction", "Receptors", "Molecular neuroscience", "Molecular biology", "Biochemistry", "Neurochemistry" ]
5,401,178
https://en.wikipedia.org/wiki/Low-density%20lipoprotein%20receptor%20gene%20family
The low-density lipoprotein receptor gene family codes for a class of structurally related cell surface receptors that fulfill diverse biological functions in different organs, tissues, and cell types. The role that is most commonly associated with this evolutionarily ancient family is cholesterol homeostasis (maintenance of appropriate concentration of cholesterol). In humans, excess cholesterol in the blood is captured by low-density lipoprotein (LDL) and removed by the liver via endocytosis of the LDL receptor. Recent evidence indicates that the members of the LDL receptor gene family are active in the cell signalling pathways between specialized cells in many, if not all, multicellular organisms. There are seven members of the LDLR family in mammals, namely: LDLR VLDL receptor (VLDLR) ApoER2, or LRP8 Low density lipoprotein receptor-related protein 4 also known as multiple epidermal growth factor (EGF) repeat-containing protein (MEGF7) LDLR-related protein 1 LDLR-related protein 1b Megalin. Human proteins containing this domain Listed below are human proteins containing low-density lipoprotein receptor domains: Class A C6; C7; 8A; 8B; C9; CD320; CFI; CORIN; DGCR2; HSPG2; LDLR; LDLRAD2; LDLRAD3; LRP1; LRP10; LRP11; LRP12; LRP1B; LRP2; LRP3; LRP4; LRP5; LRP6; LRP8; MAMDC4; MFRP; PRSS7; RXFP1; RXFP2; SORL1; SPINT1; SSPO; ST14; TMPRSS4; TMPRSS6; TMPRSS7; TMPRSS9 (serase-1B); VLDLR; Class B EGF; LDLR; LRP1; LRP10; LRP1B; LRP2; LRP4; LRP5; LRP5L; LRP6; LRP8; NID1; NID2; SORL1; VLDLR; See also Soluble low-density lipoprotein receptor-related protein (sLRP) - impaired function is related to Alzheimer's disease. Structure The members of the LDLR family are characterized by distinct functional domains present in characteristic numbers. These modules are: LDL receptor type A (LA) repeats of 40 residues each, displaying a triple-disulfide-bond-stabilized negatively charged surface; certain head-to-tail combinations of these repeats are believed to specify ligand interactions; LDL receptor type B repeats, also known as EGF precursor homology regions, containing EGF-like repeats and YWTD beta propeller domains; a transmembrane domain, and the cytoplasmic region with (a) signal(s) for receptor internalization via coated pits, containing the consensus tetrapeptide Asn-Pro-Xaa-Tyr (NPxY). This cytoplasmic tail controls both endocytosis and signaling by interacting with the phosphotyrosine binding (PTB) domain-containing proteins. In addition to these domains which can be found in all receptors of the gene family, LDL receptor and certain isoforms of ApoER2 and VLDLR contain a short region which can undergo O-linked glycosylation, known as O-linked sugar domain. ApoER2 moreover, can harbour a cleavage site for the protease furin between type A and type B repeats which enables production of a soluble receptor fragment by furin-mediated processing. References External links Schematic representation of the seven mammalian LDL receptor (LDLR) family members LDL receptor family members Receptors Protein families Signal transduction Neurophysiology
Low-density lipoprotein receptor gene family
[ "Chemistry", "Biology" ]
824
[ "Protein classification", "Signal transduction", "Receptors", "Biochemistry", "Protein families", "Neurochemistry" ]
5,401,558
https://en.wikipedia.org/wiki/Cleaner%20production
Cleaner production is a preventive, company-specific environmental protection initiative. It is intended to minimize waste and emissions and maximize product output. By analysing the flow of materials and energy in a company, one tries to identify options for minimizing waste and emissions from industrial processes through source reduction strategies. Improvements in organisation and technology help to reduce material and energy use, suggest better choices of materials and energy, and avoid waste, waste water generation, gaseous emissions, waste heat, and noise. Overview The concept was developed during the preparation of the Rio Summit as a programme of UNEP (United Nations Environmental Programme) and UNIDO (United Nations Industrial Development Organization) under the leadership of Jacqueline Aloisi de Larderel, the former Assistant Executive Director of UNEP. The programme was meant to reduce the environmental impact of industry. It built on ideas used by the company 3M in its 3P programme (pollution prevention pays). It has found more international support than all other comparable programmes. The programme idea was described as "...to assist developing nations in leapfrogging from pollution to less pollution, using available technologies". Starting from the simple idea of producing with less waste, Cleaner Production was developed into a concept for increasing the resource efficiency of production in general. UNIDO has been operating National Cleaner Production Centers and Programmes (NCPCs/NCPPs) with centres in Latin America, Africa, Asia and Europe. Cleaner production is endorsed by UNEP's International Declaration on Cleaner Production, "a voluntary and public statement of commitment to the practice and promotion of Cleaner Production". Implementing guidelines for cleaner production were published by UNEP in 2001. In the US, the term pollution prevention is more commonly used for cleaner production. Options Examples for cleaner production options are: Documentation of consumption (as a basic analysis of material and energy flows, e.g. with a Sankey diagram) Use of indicators and controlling (to identify losses from poor planning, poor education and training, mistakes) Substitution of raw materials and auxiliary materials (especially renewable materials and energy) Increase of useful life of auxiliary materials and process liquids (by avoiding drag in, drag out, contamination) Improved control and automatisation Reuse of waste (internal or external) New, low waste processes and technologies Initiatives One of the first European initiatives in cleaner production was started in Austria in 1992 by the BMVIT (Bundesministerium für Verkehr, Innovation und Technologie). This resulted in two initiatives: "Prepare" and EcoProfit. The "PIUS" initiative was founded in Germany in 1999. Since 1994, the United Nations Industrial Development Organization operates the National Cleaner Production Centre Programme with centres in Central America, South America, Africa, Asia, and Europe.
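The documentation of material and energy flows mentioned under Options is often visualised with a Sankey diagram. The snippet below is a minimal, illustrative sketch using matplotlib's Sankey class; the flow values and labels are invented example numbers, not data from any real process.

```python
import matplotlib.pyplot as plt
from matplotlib.sankey import Sankey

# Illustrative material balance for one process step (assumed numbers):
# 100 units of raw material in, 78 to product, 15 reused internally, 7 lost.
fig, ax = plt.subplots()
Sankey(ax=ax, unit=" kg/h", scale=0.01,
       flows=[100, -78, -15, -7],
       labels=["raw material", "product", "internal reuse", "waste"],
       orientations=[0, 0, 1, -1]).finish()
ax.set_title("Material flow of a single process step")
plt.savefig("material_flow_sankey.png", dpi=150)
```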
See also Cradle-to-cradle design Energy conservation Environmental management Environmental Quality Management Green design Industrial ecology ISO 9001 ISO 14001 Source reduction Sustainability Total quality management Waste minimisation Clean Production Agreement References Bibliography Fresner, J., Bürki, T., Sittig, H., Ressourceneffizienz in der Produktion - Kosten senken durch Cleaner Production, Symposion Publishing, 2009 Organisation For Economic Co-Operation And Development (OECD) (Hrsg.): Technologies For Cleaner Production And Products - Towards Technological Transformation For Sustainable Development. Paris: OECD, 1995 Google Books Pauli, G., From Deep Ecology to The Blue Economy, 2011, ZERI Schaltegger, S.; Bennett, M.; Burritt, R. & Jasch, C.: Environmental Management Accounting as a Support for Cleaner Production, in: Schaltegger, S.; Bennett, M.; Burritt, R. & Jasch, C. (Eds): Environmental Management Accounting for Cleaner Production. Dordrecht: Springer, 2008, 3-26 External links Cleaner Production by sectors Clean Production Council Chile Official site of the National Service that promotes Cleaner Production in that country. Journal of Cleaner Production National Pollution Prevention Roundtable Finds P2 Programs Effective (article) Pollution prevention in China Pollution prevention directory: TURI - Toxics Use Reduction Institute United States National Pollution Prevention Information Center United States Pollution Prevention Regional Information Center Waste minimisation Environmental engineering Industrial ecology Waste management concepts
Cleaner production
[ "Chemistry", "Engineering" ]
876
[ "Chemical engineering", "Industrial engineering", "Civil engineering", "Environmental engineering", "Industrial ecology" ]
5,404,610
https://en.wikipedia.org/wiki/Open%E2%80%93closed%20principle
In object-oriented programming, the open–closed principle (OCP) states "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification"; that is, such an entity can allow its behaviour to be extended without modifying its source code. The name open–closed principle has been used in two ways. Both ways use generalizations (for instance, inheritance or delegate functions) to resolve the apparent dilemma, but the goals, techniques, and results are different. The open–closed principle is one of the five SOLID principles of object-oriented design. Meyer's open–closed principle Bertrand Meyer is generally credited for having originated the term open–closed principle, which appeared in his 1988 book Object-Oriented Software Construction. A module will be said to be open if it is still available for extension. For example, it should be possible to add fields to the data structures it contains, or new elements to the set of functions it performs. A module will be said to be closed if [it] is available for use by other modules. This assumes that the module has been given a well-defined, stable description (the interface in the sense of information hiding). At the time Meyer was writing, adding fields or functions to a library inevitably required changes to any programs depending on that library. Meyer's proposed solution to this problem relied on the notion of object-oriented inheritance (specifically implementation inheritance): A class is closed, since it may be compiled, stored in a library, baselined, and used by client classes. But it is also open, since any new class may use it as parent, adding new features. When a descendant class is defined, there is no need to change the original or to disturb its clients. Polymorphic open–closed principle During the 1990s, the open–closed principle became popularly redefined to refer to the use of abstracted interfaces, where the implementations can be changed and multiple implementations could be created and polymorphically substituted for each other. In contrast to Meyer's usage, this definition advocates inheritance from abstract base classes. Interface specifications can be reused through inheritance but implementation need not be. The existing interface is closed to modifications and new implementations must, at a minimum, implement that interface. Robert C. Martin's 1996 article "The Open-Closed Principle" was one of the seminal writings to take this approach. In 2001, Craig Larman related the open–closed principle to the pattern by Alistair Cockburn called Protected Variations, and to the David Parnas discussion of information hiding. See also SOLID – the "O" in "SOLID" represents the open–closed principle Robustness principle References External links The Principles of OOD The Open/Closed Principle: Concerns about Change in Software Design The Open-Closed Principle -- and What Hides Behind It Object-oriented programming Type theory Software design Programming principles Software development philosophies
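As an illustration of the polymorphic reading of the principle, the sketch below keeps a checkout routine closed for modification while leaving the discount behaviour open for extension through new subclasses. The class and function names are invented for the example and do not come from any particular codebase.

```python
from abc import ABC, abstractmethod

class DiscountPolicy(ABC):
    """Stable abstraction: closed for modification, open for extension."""
    @abstractmethod
    def discount(self, amount: float) -> float: ...

class NoDiscount(DiscountPolicy):
    def discount(self, amount: float) -> float:
        return 0.0

class PercentageDiscount(DiscountPolicy):
    def __init__(self, rate: float):
        self.rate = rate
    def discount(self, amount: float) -> float:
        return amount * self.rate

def checkout(amount: float, policy: DiscountPolicy) -> float:
    # This function never needs to change when a new policy is introduced;
    # new behaviour arrives through new subclasses, not through edits here.
    return amount - policy.discount(amount)

print(checkout(100.0, NoDiscount()))             # 100.0
print(checkout(100.0, PercentageDiscount(0.1)))  # 90.0
```

Adding, say, a seasonal discount later means writing one more subclass; the abstraction and its existing clients stay untouched.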
Open–closed principle
[ "Mathematics", "Engineering" ]
591
[ "Mathematical structures", "Mathematical logic", "Mathematical objects", "Type theory", "Design", "Software design" ]
5,406,474
https://en.wikipedia.org/wiki/Consensus%20%28computer%20science%29
A fundamental problem in distributed computing and multi-agent systems is to achieve overall system reliability in the presence of a number of faulty processes. This often requires coordinating processes to reach consensus, or agree on some data value that is needed during computation. Example applications of consensus include agreeing on what transactions to commit to a database in which order, state machine replication, and atomic broadcasts. Real-world applications often requiring consensus include cloud computing, clock synchronization, PageRank, opinion formation, smart power grids, state estimation, control of UAVs (and multiple robots/agents in general), load balancing, blockchain, and others. Problem description The consensus problem requires agreement among a number of processes (or agents) on a single data value. Some of the processes (agents) may fail or be unreliable in other ways, so consensus protocols must be fault-tolerant or resilient. The processes must put forth their candidate values, communicate with one another, and agree on a single consensus value. The consensus problem is a fundamental problem in controlling multi-agent systems. One approach to generating consensus is for all processes (agents) to agree on a majority value. In this context, a majority requires at least one more than half of the available votes (where each process is given a vote). However, one or more faulty processes may skew the resultant outcome such that consensus may not be reached or may be reached incorrectly. Protocols that solve consensus problems are designed to deal with a limited number of faulty processes. These protocols must satisfy several requirements to be useful. For instance, a trivial protocol could have all processes output binary value 1. This is not useful; thus, the requirement is modified such that the output must depend on the input. That is, the output value of a consensus protocol must be the input value of some process. Another requirement is that a process may decide upon an output value only once, and this decision is irrevocable. A process is correct in an execution if it does not experience a failure. A consensus protocol tolerating halting failures must satisfy the following properties. Termination Eventually, every correct process decides some value. Integrity If all the correct processes proposed the same value v, then any correct process must decide v. Agreement Every correct process must agree on the same value. Variations on the definition of integrity may be appropriate, according to the application. For example, a weaker type of integrity would be for the decision value to equal a value that some correct process proposed – not necessarily all of them. There is also a condition known as validity in the literature which refers to the property that a message sent by a process must be delivered. A protocol that can correctly guarantee consensus amongst n processes of which at most t fail is said to be t-resilient. In evaluating the performance of consensus protocols, two factors of interest are running time and message complexity. Running time is given in Big O notation in the number of rounds of message exchange as a function of some input parameters (typically the number of processes and/or the size of the input domain). Message complexity refers to the amount of message traffic that is generated by the protocol. Other factors may include memory usage and the size of messages.
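To make the termination, agreement, and validity requirements concrete, the sketch below simulates the classic (f+1)-round flooding algorithm for crash failures in a synchronous system. It is an illustration only: the crash model is simplified, and the inputs and crash schedule are invented example values.

```python
import random

def flood_set(inputs, f, crash_schedule=None, seed=0):
    """Simulate the classic (f+1)-round flooding algorithm for crash faults in a
    synchronous system: every round each live process broadcasts every value it
    has seen; after f+1 rounds all correct processes hold the same set and
    decide its minimum.  `crash_schedule` maps a process id to the round in
    which it crashes (it may deliver to only some peers in that round)."""
    rng = random.Random(seed)
    crash_schedule = crash_schedule or {}
    n = len(inputs)
    seen = [{v} for v in inputs]
    crashed = set()

    for rnd in range(f + 1):
        messages = [set() for _ in range(n)]
        for p in range(n):
            if p in crashed:
                continue
            receivers = list(range(n))
            if crash_schedule.get(p) == rnd:
                # Crash mid-broadcast: only a random prefix of peers hears p.
                receivers = receivers[: rng.randrange(n)]
                crashed.add(p)
            for q in receivers:
                messages[q] |= seen[p]
        for q in range(n):
            if q not in crashed:
                seen[q] |= messages[q]

    return {p: min(seen[p]) for p in range(n) if p not in crashed}

decisions = flood_set(inputs=[3, 1, 4, 1, 5], f=1, crash_schedule={0: 0})
print(decisions)                               # every surviving process decides
assert len(set(decisions.values())) == 1       # agreement holds
```

Because at most f processes crash, at least one of the f+1 rounds is crash-free; after that round every surviving process holds the same set of values and therefore decides the same minimum, which is some process's input.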
Models of computation Varying models of computation may define a "consensus problem". Some models may deal with fully connected graphs, while others may deal with rings and trees. In some models message authentication is allowed, whereas in others processes are completely anonymous. Shared memory models in which processes communicate by accessing objects in shared memory are also an important area of research. Communication channels with direct or transferable authentication In most models of communication, protocol participants communicate through authenticated channels. This means that messages are not anonymous, and receivers know the source of every message they receive. Some models assume a stronger, transferable form of authentication, where each message is signed by the sender, so that a receiver knows not just the immediate source of every message, but the participant that initially created the message. This stronger type of authentication is achieved by digital signatures, and when this stronger form of authentication is available, protocols can tolerate a larger number of faults. The two different authentication models are often called oral communication and written communication models. In an oral communication model, the immediate source of information is known, whereas in stronger, written communication models, at every step along the way the receiver learns not just the immediate source of the message but also the communication history of the message. Inputs and outputs of consensus In the most traditional single-value consensus protocols such as Paxos, cooperating nodes agree on a single value such as an integer, which may be of variable size so as to encode useful metadata such as a transaction committed to a database. A special case of the single-value consensus problem, called binary consensus, restricts the input, and hence the output domain, to a single binary digit {0,1}. While not highly useful by themselves, binary consensus protocols are often useful as building blocks in more general consensus protocols, especially for asynchronous consensus. In multi-valued consensus protocols such as Multi-Paxos and Raft, the goal is to agree on not just a single value but a series of values over time, forming a progressively-growing history. While multi-valued consensus may be achieved naively by running multiple iterations of a single-valued consensus protocol in succession, many optimizations and other considerations such as reconfiguration support can make multi-valued consensus protocols more efficient in practice. Crash and Byzantine failures There are two types of failures a process may undergo: a crash failure or a Byzantine failure. A crash failure occurs when a process abruptly stops and does not resume. Byzantine failures are failures in which absolutely no conditions are imposed. For example, they may occur as a result of the malicious actions of an adversary. A process that experiences a Byzantine failure may send contradictory or conflicting data to other processes, or it may sleep and then resume activity after a lengthy delay. Of the two types of failures, Byzantine failures are far more disruptive. Thus, a consensus protocol tolerating Byzantine failures must be resilient to every possible error that can occur. A stronger version of consensus tolerating Byzantine failures is given by strengthening the Integrity constraint: Integrity If a correct process decides v, then v must have been proposed by some correct process.
Asynchronous and synchronous systems The consensus problem may be considered in the case of asynchronous or synchronous systems. While real world communications are often inherently asynchronous, it is more practical and often easier to model synchronous systems, given that asynchronous systems naturally involve more issues than synchronous ones. In synchronous systems, it is assumed that all communications proceed in rounds. In one round, a process may send all the messages it requires, while receiving all messages from other processes. In this manner, no message from one round may influence any messages sent within the same round. The FLP impossibility result for asynchronous deterministic consensus In a fully asynchronous message-passing distributed system, in which at least one process may have a crash failure, it has been proven in the famous 1985 FLP impossibility result by Fischer, Lynch and Paterson that a deterministic algorithm for achieving consensus is impossible. This impossibility result derives from worst-case scheduling scenarios, which are unlikely to occur in practice except in adversarial situations such as an intelligent denial-of-service attacker in the network. In most normal situations, process scheduling has a degree of natural randomness. In an asynchronous model, some forms of failures can be handled by a synchronous consensus protocol. For instance, the loss of a communication link may be modeled as a process which has suffered a Byzantine failure. Randomized consensus algorithms can circumvent the FLP impossibility result by achieving both safety and liveness with overwhelming probability, even under worst-case scheduling scenarios such as an intelligent denial-of-service attacker in the network. Permissioned versus permissionless consensus Consensus algorithms traditionally assume that the set of participating nodes is fixed and given at the outset: that is, that some prior (manual or automatic) configuration process has permissioned a particular known group of participants who can authenticate each other as members of the group. In the absence of such a well-defined, closed group with authenticated members, a Sybil attack against an open consensus group can defeat even a Byzantine consensus algorithm, simply by creating enough virtual participants to overwhelm the fault tolerance threshold. A permissionless consensus protocol, in contrast, allows anyone in the network to join dynamically and participate without prior permission, but instead imposes a different form of artificial cost or barrier to entry to mitigate the Sybil attack threat. Bitcoin introduced the first permissionless consensus protocol using proof of work and a difficulty adjustment function, in which participants compete to solve cryptographic hash puzzles, and probabilistically earn the right to commit blocks and earn associated rewards in proportion to their invested computational effort. Motivated in part by the high energy cost of this approach, subsequent permissionless consensus protocols have proposed or adopted other alternative participation rules for Sybil attack protection, such as proof of stake, proof of space, and proof of authority. Equivalency of agreement problems Three agreement problems of interest are as follows. Terminating Reliable Broadcast A collection of processes, numbered from to communicate by sending messages to one another. 
The designated sender must transmit a value to all processes such that: if the sender is correct, then every correct process receives the sender's value; and for any two correct processes, each process receives the same value. It is also known as The General's Problem. Consensus Formal requirements for a consensus protocol may include: Agreement: All correct processes must agree on the same value. Weak validity: For each correct process, its output must be the input of some correct process. Strong validity: If all correct processes receive the same input value, then they must all output that value. Termination: All processes must eventually decide on an output value. Weak Interactive Consistency For n processes in a partially synchronous system (the system alternates between good and bad periods of synchrony), each process chooses a private value. The processes communicate with each other by rounds to determine a public value and generate a consensus vector with the following requirements: if a correct process sends a message, then all correct processes receive either that message or nothing (integrity property); all messages sent in a round by a correct process are received in the same round by all correct processes (consistency property). It can be shown that variations of these problems are equivalent in that the solution for a problem in one type of model may be the solution for another problem in another type of model. For example, a solution to the Weak Byzantine General problem in a synchronous authenticated message passing model leads to a solution for Weak Interactive Consistency. An interactive consistency algorithm can solve the consensus problem by having each process choose the majority value in its consensus vector as its consensus value. Solvability results for some agreement problems There is a t-resilient anonymous synchronous protocol which solves the Byzantine Generals problem, and the Weak Byzantine Generals case, provided the number of failures t is small enough relative to the number of processes n. For systems with n processors, of which f are Byzantine, it has been shown that there exists no algorithm that solves the consensus problem for n ≤ 3f in the oral-messages model. The proof is constructed by first showing the impossibility for the three-node case and using this result to argue about partitions of processors. In the written-messages model there are protocols that can tolerate a larger number of faults. In a fully asynchronous system there is no consensus solution that can tolerate one or more crash failures even when only requiring the non-triviality property. This result is sometimes called the FLP impossibility proof, named after the authors Michael J. Fischer, Nancy Lynch, and Mike Paterson, who were awarded a Dijkstra Prize for this significant work. The FLP result has been mechanically verified to hold even under fairness assumptions. However, FLP does not state that consensus can never be reached: merely that under the model's assumptions, no algorithm can always reach consensus in bounded time. In practice it is highly unlikely to occur. Some consensus protocols The Paxos consensus algorithm by Leslie Lamport, and variants of it such as Raft, are used pervasively in widely deployed distributed and cloud computing systems. These algorithms are typically synchronous, dependent on an elected leader to make progress, and tolerate only crashes and not Byzantine failures. An example of a polynomial time binary consensus protocol that tolerates Byzantine failures is the Phase King algorithm by Garay and Berman.
The algorithm solves consensus in a synchronous message passing model with n processes and up to f failures, provided n > 4f. In the phase king algorithm, there are f + 1 phases, with 2 rounds per phase. Each process keeps track of its preferred output (initially equal to the process's own input value). In the first round of each phase, each process broadcasts its own preferred value to all other processes. It then receives the values from all processes and determines which value is the majority value and its count. In the second round of the phase, the process whose id matches the current phase number is designated the king of the phase. The king broadcasts the majority value it observed in the first round and serves as a tie breaker. Each process then updates its preferred value as follows. If the count of the majority value the process observed in the first round is greater than n/2 + f, the process changes its preference to that majority value; otherwise it uses the phase king's value. At the end of f + 1 phases the processes output their preferred values. Google has implemented a distributed lock service library called Chubby. Chubby maintains lock information in small files which are stored in a replicated database to achieve high availability in the face of failures. The database is implemented on top of a fault-tolerant log layer which is based on the Paxos consensus algorithm. In this scheme, Chubby clients communicate with the Paxos master in order to access/update the replicated log; i.e., read/write to the files. Many peer-to-peer online real-time strategy games use a modified lockstep protocol as a consensus protocol in order to manage game state between players in a game. Each game action results in a game state delta broadcast to all other players in the game along with a hash of the total game state. Each player validates the change by applying the delta to their own game state and comparing the game state hashes. If the hashes do not agree then a vote is cast, and those players whose game state is in the minority are disconnected and removed from the game (known as a desync.) Another well-known approach is called MSR-type algorithms, which have been used widely from computer science to control theory. Permissionless consensus protocols Bitcoin uses proof of work, a difficulty adjustment function and a reorganization function to achieve permissionless consensus in its open peer-to-peer network. To extend bitcoin's blockchain or distributed ledger, miners attempt to solve a cryptographic puzzle, where the probability of finding a solution is proportional to the computational effort expended in hashes per second. The node that first solves such a puzzle has their proposed version of the next block of transactions added to the ledger and eventually accepted by all other nodes. As any node in the network can attempt to solve the proof-of-work problem, a Sybil attack is infeasible in principle unless the attacker has over 50% of the computational resources of the network. Other cryptocurrencies (e.g. Ethereum, NEO, STRATIS, ...) use proof of stake, in which nodes compete to append blocks and earn associated rewards in proportion to stake, or existing cryptocurrency allocated and locked or staked for some time period. One advantage of a 'proof of stake' system over a 'proof of work' system is that it avoids the high energy consumption demanded by the latter.
As an example, bitcoin mining (2018) is estimated to consume non-renewable energy sources at an amount similar to that of the entire nations of the Czech Republic or Jordan, while the total energy consumption of Ethereum, the largest proof of stake network, is just under that of 205 average US households. Some cryptocurrencies, such as Ripple, use a system of validating nodes to validate the ledger. This system used by Ripple, called Ripple Protocol Consensus Algorithm (RPCA), works in rounds: Step 1: every server compiles a list of valid candidate transactions; Step 2: each server amalgamates all candidates coming from its Unique Nodes List (UNL) and votes on their veracity; Step 3: transactions passing the minimum threshold are passed to the next round; Step 4: the final round requires 80% agreement. Other participation rules used in permissionless consensus protocols to impose barriers to entry and resist Sybil attacks include proof of authority, proof of space, proof of burn, or proof of elapsed time. Contrasting with the above permissionless participation rules, all of which reward participants in proportion to the amount of investment in some action or resource, proof of personhood protocols aim to give each real human participant exactly one unit of voting power in permissionless consensus, regardless of economic investment. Proposed approaches to achieving one-per-person distribution of consensus power for proof of personhood include physical pseudonym parties, social networks, pseudonymized government-issued identities, and biometrics. Consensus number To solve the consensus problem in a shared-memory system, concurrent objects must be introduced. A concurrent object, or shared object, is a data structure which helps concurrent processes communicate to reach an agreement. Traditional implementations using critical sections face the risk of crashing if some process dies inside the critical section or sleeps for an intolerably long time. Researchers defined wait-freedom as the guarantee that the algorithm completes in a finite number of steps. The consensus number of a concurrent object is defined to be the maximum number of processes in the system which can reach consensus by the given object in a wait-free implementation. Objects with a consensus number of n can implement any object with a consensus number of n or lower, but cannot implement any objects with a higher consensus number. The consensus numbers form what is called Herlihy's hierarchy of synchronization objects. According to the hierarchy, read/write registers cannot solve consensus even in a 2-process system. Data structures like stacks and queues can only solve consensus between two processes. However, some concurrent objects are universal (their consensus number is infinite), which means they can solve consensus among any number of processes and they can simulate any other objects through an operation sequence. See also Uniform consensus Quantum Byzantine agreement Byzantine fault References Further reading Bashir, Imran. "Blockchain Consensus." Blockchain Consensus - An Introduction to Classical, Blockchain, and Quantum Consensus Protocols. Apress, Berkeley, CA, 2022. Distributed computing problems Fault-tolerant computer systems
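The Phase King procedure described earlier in this article is short enough to simulate directly. The sketch below is illustrative only: the faulty processes here simply send random bits, which is one simple adversary rather than a worst-case one, and the inputs and faulty set are invented example values.

```python
import random

def phase_king(inputs, byzantine, seed=0):
    """Simulate the Phase King algorithm for binary consensus with n > 4f.
    `byzantine` is the set of faulty process ids; faulty processes send random
    bits here.  Returns the decision of every correct process."""
    rng = random.Random(seed)
    n, f = len(inputs), len(byzantine)
    assert n > 4 * f, "Phase King requires n > 4f"
    pref = list(inputs)

    for phase in range(f + 1):
        # Round 1: everyone broadcasts its preference; faulty senders may
        # equivocate, so each receiver gets its own view of the messages.
        sent = [[rng.randint(0, 1) if p in byzantine else pref[p] for p in range(n)]
                for _ in range(n)]                 # sent[receiver][sender]
        majority = [max((0, 1), key=sent[q].count) for q in range(n)]
        count = [sent[q].count(majority[q]) for q in range(n)]

        # Round 2: the phase king broadcasts its majority value as tie breaker.
        king = phase
        king_value = rng.randint(0, 1) if king in byzantine else majority[king]
        for q in range(n):
            if q in byzantine:
                continue
            if count[q] > n // 2 + f:
                pref[q] = majority[q]   # strong majority: keep own value
            else:
                pref[q] = king_value    # otherwise defer to the king

    return {q: pref[q] for q in range(n) if q not in byzantine}

decisions = phase_king(inputs=[1, 1, 0, 1, 0], byzantine={4})
print(decisions)
assert len(set(decisions.values())) == 1   # correct processes agree
```

With n > 4f there are more phases than faulty processes, so at least one phase has a correct king; after that phase all correct processes share a preference, and the n > 4f threshold guarantees they keep it in every later phase.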
Consensus (computer science)
[ "Mathematics", "Technology", "Engineering" ]
3,915
[ "Distributed computing problems", "Reliability engineering", "Computational problems", "Computer systems", "Fault-tolerant computer systems", "Mathematical problems" ]
5,407,355
https://en.wikipedia.org/wiki/Beam%20dump
A beam dump, also known as a beam block, a beam stop, or a beam trap, is a device designed to absorb the energy of photons or other particles within an energetic beam. Types of beam dumps Beam blocks Beam blocks are simple optical elements that absorb a beam of light using a material with strong absorption and low reflectance. Materials commonly used for beam blocks include certain types of acrylic paint, carbon nanotubes, anodized aluminum, and nickel-phosphate coatings. Beam traps Beam traps are used when it is important that there is no reflectance. Beam traps can incorporate materials used for beam blocks in their design to further reduce the possibility of reflectance. Charged-particle beam dumps The purpose of a charged-particle beam dump is to safely absorb a beam of charged particles such as electrons, protons, nuclei, or ions. This is necessary when, for example, a circular particle accelerator has to be shut down. Dealing with the heat deposited can be an issue, since the energies of the beams to be absorbed can run into the megajoules. An example of a charged-particle beam dump is the one used by CERN for the Super Proton Synchrotron. Currently, the SPS uses a beam dump that consists of graphite, molybdenum, and tungsten surrounded by concrete, marble, and cast-iron shielding. References Accelerator physics Optical devices Laser applications
Beam dump
[ "Physics", "Materials_science", "Engineering" ]
290
[ "Glass engineering and science", "Applied and interdisciplinary physics", "Optical devices", "Experimental physics", "Accelerator physics" ]
2,940,855
https://en.wikipedia.org/wiki/Fiber%20Bragg%20grating
A fiber Bragg grating (FBG) is a type of distributed Bragg reflector constructed in a short segment of optical fiber that reflects particular wavelengths of light and transmits all others. This is achieved by creating a periodic variation in the refractive index of the fiber core, which generates a wavelength-specific dielectric mirror. Hence a fiber Bragg grating can be used as an inline optical filter to block certain wavelengths, can be used for sensing applications, or it can be used as a wavelength-specific reflector. History The first in-fiber Bragg grating was demonstrated by Ken Hill in 1978. Initially, the gratings were fabricated using a visible laser propagating along the fiber core. In 1989, Gerald Meltz and colleagues demonstrated the much more flexible transverse holographic inscription technique where the laser illumination came from the side of the fiber. This technique uses the interference pattern of ultraviolet laser light to create the periodic structure of the fiber Bragg grating. Theory The fundamental principle behind the operation of an FBG is Fresnel reflection, where light traveling between media of different refractive indices may both reflect and refract at the interface. The refractive index will typically alternate over a defined length. The reflected wavelength (λB), called the Bragg wavelength, is defined by the relationship λB = 2 neff Λ, where neff is the effective refractive index of the fiber core and Λ is the grating period. The effective refractive index quantifies the velocity of propagating light as compared to its velocity in vacuum. It depends not only on the wavelength but also (for multimode waveguides) on the mode in which the light propagates. For this reason, it is also called the modal index. The wavelength spacing between the first minima (nulls, see Fig. 2), or the bandwidth (Δλ), is (in the strong grating limit) proportional to λB and to the grating strength δn0η, where δn0 is the variation in the refractive index and η is the fraction of power in the core. Note that this approximation does not apply to weak gratings, where the grating length L is not large compared to λB/δn0. The peak reflection increases with the number of periodic variations N and with the grating strength δn0η; the full expression for the reflected power as a function of wavelength follows from coupled-mode theory. Types of gratings The term type in this context refers to the underlying photosensitivity mechanism by which grating fringes are produced in the fiber. The different methods of creating these fringes have a significant effect on physical attributes of the produced grating, particularly the temperature response and ability to withstand elevated temperatures. Thus far, five (or six) types of FBG have been reported with different underlying photosensitivity mechanisms. These are summarized below: Standard, or type I, gratings Written in both hydrogenated and non-hydrogenated fiber of all types, type I gratings are usually known as standard gratings and are manufactured in fibers of all types under all hydrogenation conditions. Typically, the reflection spectrum of a type I grating is equal to 1 − T, where T is the transmission spectrum. This means that the reflection and transmission spectra are complementary and there is negligible loss of light by reflection into the cladding or by absorption. Type I gratings are the most commonly used of all grating types, and the only types of grating available off-the-shelf at the time of writing.
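A minimal numerical sketch of the Bragg condition stated above; the index and period values are illustrative (typical of a telecom fibre near 1550 nm) rather than figures taken from the article.

```python
# Bragg condition: lambda_B = 2 * n_eff * Lambda (illustrative values only).
n_eff = 1.447              # assumed effective refractive index of the core
grating_period_nm = 535.0  # assumed grating period

bragg_wavelength_nm = 2 * n_eff * grating_period_nm
print(f"Bragg wavelength: {bragg_wavelength_nm:.1f} nm")

# Inverting the relation gives the period needed to reflect a chosen wavelength.
target_nm = 1550.0
required_period_nm = target_nm / (2 * n_eff)
print(f"Period for {target_nm:.0f} nm reflection: {required_period_nm:.1f} nm")
```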
Type IA gratings Regenerated grating written after erasure of a type I grating in hydrogenated germanosilicate fiber of all types Type IA gratings were first observed in 2001 during experiments designed to determine the effects of hydrogen loading on the formation of IIA gratings in germanosilicate fiber. In contrast to the anticipated decrease (or 'blue shift') of the gratings' Bragg wavelength, a large increase (or 'red shift') was observed. Later work showed that the increase in Bragg wavelength began once an initial type I grating had reached peak reflectivity and begun to weaken. For this reason, it was labeled as a regenerated grating. Determination of the type IA gratings' temperature coefficient showed that it was lower than a standard grating written under similar conditions. The key difference between the inscription of type IA and IIA gratings is that IA gratings are written in hydrogenated fibers, whereas type IIA gratings are written in non-hydrogenated fibers. Type IIA, or type In, gratings These are gratings that form as the negative part of the induced index change overtakes the positive part. It is usually associated with gradual relaxation of induced stress along the axis and/or at the interface. It has been proposed that these gratings could be relabeled type In (for type 1 gratings with a negative index change; type II label could be reserved for those that are distinctly made above the damage threshold of the glass). Later research by Xie et al. showed the existence of another type of grating with similar thermal stability properties to the type II grating. This grating exhibited a negative change in the mean index of the fiber and was termed type IIA. The gratings were formed in germanosilicate fibers with pulses from a frequency doubled XeCl pumped dye laser. It was shown that initial exposure formed a standard (type I) grating within the fiber which underwent a small red shift before being erased. Further exposure showed that a grating reformed which underwent a steady blue shift whilst growing in strength. Regenerated gratings These are gratings that are reborn at higher temperatures after erasure of gratings, usually type I gratings and usually, though not always, in the presence of hydrogen. They have been interpreted in different ways including dopant diffusion (oxygen being the most popular current interpretation) and glass structural change. Recent work has shown that there exists a regeneration regime beyond diffusion where gratings can be made to operate at temperatures in excess of 1,295 °C, outperforming even type II femtosecond gratings. These are extremely attractive for ultra high temperature applications. Type II gratings Damage written gratings inscribed by multiphoton excitation with higher intensity lasers that exceed the damage threshold of the glass. Lasers employed are usually pulsed in order to reach these intensities. They include recent developments in multiphoton excitation using femtosecond pulses where the short timescales (commensurate on a timescale similar to local relaxation times) offer unprecedented spatial localization of the induced change. The amorphous network of the glass is usually transformed via a different ionization and melting pathway to give either higher index changes or create, through micro-explosions, voids surrounded by more dense glass. Archambault et al. showed that it was possible to inscribe gratings of ~100% (>99.8%) reflectance with a single UV pulse in fibers on the draw tower. 
The resulting gratings were shown to be stable at temperatures as high as 800 °C (up to 1,000 °C in some cases, and higher with femtosecond laser inscription). The gratings were inscribed using a single 40 mJ pulse from an excimer laser at 248 nm. It was further shown that a sharp threshold was evident at ~30 mJ; above this level the index modulation increased by more than two orders of magnitude, whereas below 30 mJ the index modulation grew linearly with pulse energy. For ease of identification, and in recognition of the distinct differences in thermal stability, they labeled gratings fabricated below the threshold as type I gratings and above the threshold as type II gratings. Microscopic examination of these gratings showed a periodic damage track at the grating's site within the fiber [10]; hence type II gratings are also known as damage gratings. However, these cracks can be very localized so as to not play a major role in scattering loss if properly prepared. Grating structure The structure of the FBG can vary via the refractive index, or the grating period. The grating period can be uniform or graded, and either localised or distributed in a superstructure. The refractive index has two primary characteristics: the refractive index profile and the offset. Typically, the refractive index profile can be uniform or apodized, and the refractive index offset is positive or zero. There are six common structures for FBGs: uniform positive-only index change, Gaussian apodized, raised-cosine apodized, chirped, discrete phase shift, and superstructure. The first complex grating was made by J. Canning in 1994. This supported the development of the first distributed feedback (DFB) fiber lasers, and also laid the groundwork for most complex gratings that followed, including the sampled gratings first made by Peter Hill and colleagues in Australia. Apodized gratings There are basically two quantities that control the properties of the FBG. These are the grating length, L, given as L = NΛ, and the grating strength, δn0η. There are, however, three properties that need to be controlled in a FBG. These are the reflectivity, the bandwidth, and the side-lobe strength. As shown above, in the strong grating limit (i.e., for large δn0η) the bandwidth depends on the grating strength, and not the grating length. This means the grating strength can be used to set the bandwidth. The grating length, effectively L, can then be used to set the peak reflectivity, which depends on both the grating strength and the grating length. The result of this is that the side-lobe strength cannot be controlled, and this simple optimisation results in significant side-lobes. A third quantity can be varied to help with side-lobe suppression. This is apodization of the refractive index change. The term apodization refers to the grading of the refractive index to approach zero at the end of the grating. Apodized gratings offer significant improvement in side-lobe suppression while maintaining reflectivity and a narrow bandwidth. The two functions typically used to apodize a FBG are Gaussian and raised-cosine. Chirped fiber Bragg gratings The refractive index profile of the grating may be modified to add other features, such as a linear variation in the grating period, called a chirp. The reflected wavelength changes with the grating period, broadening the reflected spectrum. A grating possessing a chirp has the property of adding dispersion—namely, different wavelengths reflected from the grating will be subject to different delays.
This property has been used in the development of phased-array antenna systems and in polarization mode dispersion compensation. Tilted fiber Bragg gratings In standard FBGs, the grading or variation of the refractive index is along the length of the fiber (the optical axis), and is typically uniform across the width of the fiber. In a tilted FBG (TFBG), the variation of the refractive index is at an angle to the optical axis. The angle of tilt in a TFBG has an effect on the reflected wavelength, and bandwidth. Long-period gratings Typically the grating period is of the same order of magnitude as the Bragg wavelength, as shown above. For a grating that reflects at 1,500 nm, the grating period is 500 nm, using a refractive index of 1.5. Longer periods can be used to achieve much broader responses than are possible with a standard FBG. These gratings are called long-period fiber gratings. They typically have grating periods on the order of 100 micrometers to a millimeter, and are therefore much easier to manufacture. Phase-shifted fiber Bragg gratings Phase-shifted fiber Bragg gratings (PS-FBGs) are an important class of grating structures which have interesting applications in optical communications and sensing due to their special filtering characteristics. These types of gratings can be reconfigurable through special packaging and system design. Different coatings over the diffractive structure are used for fiber Bragg gratings in order to reduce the mechanical influence on the Bragg wavelength shift by a factor of roughly 1.1–15 compared with an uncoated waveguide. Addressed fiber Bragg structures Addressed fiber Bragg structures (AFBS) are an emerging class of FBGs developed in order to simplify interrogation and enhance performance of FBG-based sensors. The optical frequency response of an AFBS has two narrowband notches, with the frequency spacing between them lying in the radio frequency (RF) range. The frequency spacing is called the address frequency of the AFBS and is unique for each AFBS in a system. The central wavelength of an AFBS can be determined without scanning its spectral response, unlike conventional FBGs that are probed by optoelectronic interrogators. An interrogation circuit for AFBS is significantly simpler than conventional interrogators and consists of a broadband optical source, an optical filter with a predefined linear inclined frequency response, and a photodetector. Manufacture Fiber Bragg gratings are created by "inscribing" or "writing" a systematic (periodic or aperiodic) variation of refractive index into the core of a special type of optical fiber using an intense ultraviolet (UV) source such as a UV laser. Two main processes are used: interference and masking. The method that is preferable depends on the type of grating to be manufactured. Although polymer optical fibers started gaining research interest in the 2000s, germanium-doped silica fiber is most commonly used. The germanium-doped fiber is photosensitive, which means that the refractive index of the core changes with exposure to UV light. The amount of the change depends on the intensity and duration of the exposure as well as the photosensitivity of the fiber. To write a high reflectivity fiber Bragg grating directly in the fiber, the level of doping with germanium needs to be high. However, standard fibers can be used if the photosensitivity is enhanced by pre-soaking the fiber in hydrogen. Interference This was the first method used widely for the fabrication of fiber Bragg gratings and uses two-beam interference.
Here the UV laser is split into two beams which interfere with each other creating a periodic intensity distribution along the interference pattern. The refractive index of the photosensitive fiber changes according to the intensity of light that it is exposed to. This method allows for quick and easy changes to the Bragg wavelength, which is directly related to the interference period and a function of the incident angle of the laser light. Sequential writing Complex grating profiles can be manufactured by exposing a large number of small, partially overlapping gratings in sequence. Advanced properties such as phase shifts and varying modulation depth can be introduced by adjusting the corresponding properties of the subgratings. In the first version of the method, subgratings were formed by exposure with UV pulses, but this approach had several drawbacks, such as large energy fluctuations in the pulses and low average power. A sequential writing method with continuous UV radiation that overcomes these problems has been demonstrated and is now used commercially. The photosensitive fiber is translated by an interferometrically controlled airbearing borne carriage. The interfering UV beams are focused onto the fiber, and as the fiber moves, the fringes move along the fiber by translating mirrors in an interferometer. As the mirrors have a limited range, they must be reset every period, and the fringes move in a sawtooth pattern. All grating parameters are accessible in the control software, and it is therefore possible to manufacture arbitrary gratings structures without any changes in the hardware. Photomask A photomask having the intended grating features may also be used in the manufacture of fiber Bragg gratings. The photomask is placed between the UV light source and the photosensitive fiber. The shadow of the photomask then determines the grating structure based on the transmitted intensity of light striking the fiber. Photomasks are specifically used in the manufacture of chirped Fiber Bragg gratings, which cannot be manufactured using an interference pattern. Point-by-point A single UV laser beam may also be used to 'write' the grating into the fiber point-by-point. Here, the laser has a narrow beam that is equal to the grating period. The main difference of this method lies in the interaction mechanisms between infrared laser radiation and dielectric material - multiphoton absorption and tunnel ionization. This method is specifically applicable to the fabrication of long period fiber gratings. Point-by-point is also used in the fabrication of tilted gratings. Production Originally, the manufacture of the photosensitive optical fiber and the 'writing' of the fiber Bragg grating were done separately. Today, production lines typically draw the fiber from the preform and 'write' the grating, all in a single stage. As well as reducing associated costs and time, this also enables the mass production of fiber Bragg gratings. Mass production is in particular facilitating applications in smart structures utilizing large numbers (3000) of embedded fiber Bragg gratings along a single length of fiber. Applications Communications The primary application of fiber Bragg gratings is in optical communications systems. They are specifically used as notch filters. They are also used in optical multiplexers and demultiplexers with an optical circulator, or optical add-drop multiplexer (OADM). Figure 5 shows 4 channels, depicted as 4 colours, impinging onto a FBG via an optical circulator. 
The FBG is set to reflect one of the channels, here channel 4. The signal is reflected back to the circulator where it is directed down and dropped out of the system. Since the channel has been dropped, another signal on that channel can be added at the same point in the network. A demultiplexer can be achieved by cascading multiple drop sections of the OADM, where each drop element uses an FBG set to the wavelength to be demultiplexed. Conversely, a multiplexer can be achieved by cascading multiple add sections of the OADM. FBG demultiplexers and OADMs can also be tunable. In a tunable demultiplexer or OADM, the Bragg wavelength of the FBG can be tuned by strain applied by a piezoelectric transducer. The sensitivity of a FBG to strain is discussed below in fiber Bragg grating sensors. Fiber Bragg grating sensors As well as being sensitive to strain, the Bragg wavelength is also sensitive to temperature. This means that fiber Bragg gratings can be used as sensing elements in optical fiber sensors. In a FBG sensor, the measurand causes a shift in the Bragg wavelength, $\Delta\lambda_B$. The relative shift in the Bragg wavelength, $\Delta\lambda_B/\lambda_B$, due to an applied strain ($\epsilon$) and a change in temperature ($\Delta T$) is approximately given by

$$\frac{\Delta\lambda_B}{\lambda_B} = C_S\,\epsilon + C_T\,\Delta T,$$

or

$$\frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\epsilon + (\alpha_\Lambda + \alpha_n)\,\Delta T.$$

Here, $C_S$ is the coefficient of strain, which is related to the strain optic coefficient $p_e$. Also, $C_T$ is the coefficient of temperature, which is made up of the thermal expansion coefficient of the optical fiber, $\alpha_\Lambda$, and the thermo-optic coefficient, $\alpha_n$. Fiber Bragg gratings can then be used as direct sensing elements for strain and temperature. They can also be used as transduction elements, converting the output of another sensor which generates a strain or temperature change from the measurand. For example, fiber Bragg grating gas sensors use an absorbent coating which, in the presence of a gas, expands and generates a strain that is measurable by the grating. Technically, the absorbent material is the sensing element, converting the amount of gas to a strain. The Bragg grating then transduces the strain to the change in wavelength. Specifically, fiber Bragg gratings are finding uses in instrumentation applications such as seismology, pressure sensors for extremely harsh environments, and as downhole sensors in oil and gas wells for measurement of the effects of external pressure, temperature, seismic vibrations and inline flow. As such they offer a significant advantage over traditional electronic gauges used for these applications in that they are less sensitive to vibration or heat and consequently are far more reliable. In the 1990s, investigations were conducted for measuring strain and temperature in composite materials for aircraft and helicopter structures. Fiber Bragg gratings used in fiber lasers Recently the development of high power fiber lasers has generated a new set of applications for fiber Bragg gratings (FBGs), operating at power levels that were previously thought impossible. In the case of a simple fiber laser, the FBGs can be used as the high reflector (HR) and output coupler (OC) to form the laser cavity. The gain for the laser is provided by a length of rare earth doped optical fiber, with the most common form using Yb3+ ions as the active lasing ion in the silica fiber. These Yb-doped fiber lasers first operated at the 1 kW CW power level in 2004 based on free space cavities but were not shown to operate with fiber Bragg grating cavities until much later. Such monolithic, all-fiber devices are produced by many companies worldwide and at power levels exceeding 1 kW.
The major advantage of these all fiber systems, where the free space mirrors are replaced with a pair of fiber Bragg gratings (FBGs), is the elimination of realignment during the life of the system, since the FBG is spliced directly to the doped fiber and never needs adjusting. The challenge is to operate these monolithic cavities at the kW CW power level in large mode area (LMA) fibers such as 20/400 (20 μm diameter core and 400 μm diameter inner cladding) without premature failures at the intra-cavity splice points and the gratings. Once optimized, these monolithic cavities do not need realignment during the life of the device, removing any cleaning and degradation of fiber surface from the maintenance schedule of the laser. However, the packaging and optimization of the splices and FBGs themselves are non-trivial at these power levels as are the matching of the various fibers, since the composition of the Yb-doped fiber and various passive and photosensitive fibers needs to be carefully matched across the entire fiber laser chain. Although the power handling capability of the fiber itself far exceeds this level, and is possibly as high as >30 kW CW, the practical limit is much lower due to component reliability and splice losses. Process of matching active and passive fibers In a double-clad fiber there are two waveguides – the Yb-doped core that forms the signal waveguide and the inner cladding waveguide for the pump light. The inner cladding of the active fiber is often shaped to scramble the cladding modes and increase pump overlap with the doped core. The matching of active and passive fibers for improved signal integrity requires optimization of the core/clad concentricity, and the MFD through the core diameter and NA, which reduces splice loss. This is principally achieved by tightening all of the pertinent fiber specifications. Matching fibers for improved pump coupling requires optimization of the clad diameter for both the passive and the active fiber. To maximize the amount of pump power coupled into the active fiber, the active fiber is designed with a slightly larger clad diameter than the passive fibers delivering the pump power. As an example, passive fibers with clad diameters of 395-μm spliced to active octagon shaped fiber with clad diameters of 400-μm improve the coupling of the pump power into the active fiber. An image of such a splice is shown, showing the shaped cladding of the doped double-clad fiber. The matching of active and passive fibers can be optimized in several ways. The easiest method for matching the signal carrying light is to have identical NA and core diameters for each fiber. This however does not account for all the refractive index profile features. Matching of the MFD is also a method used to create matched signal carrying fibers. It has been shown that matching all of these components provides the best set of fibers to build high power amplifiers and lasers. Essentially, the MFD is modeled and the resulting target NA and core diameter are developed. The core-rod is made and before being drawn into fiber its core diameter and NA are checked. Based on the refractive index measurements, the final core/clad ratio is determined and adjusted to the target MFD. This approach accounts for details of the refractive index profile which can be measured easily and with high accuracy on the preform, before it is drawn into fiber. 
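As a small numerical sketch of the sensing relation quoted in the sensor section above (the coefficients below are assumed, typical-order values for silica fiber, not the specification of any particular grating):

// Minimal sketch: Bragg wavelength shift of a FBG under strain and temperature.
// The coefficients are assumed, typical-order values for silica fiber, not
// the specification of any particular product.
public class FbgShiftSketch {
    public static void main(String[] args) {
        double nEff = 1.45;               // assumed effective refractive index
        double period = 530.0e-9;         // assumed grating period, in metres
        double lambdaB = 2.0 * nEff * period;  // Bragg condition

        double pE = 0.22;                 // assumed effective strain-optic coefficient
        double alphaExpansion = 0.55e-6;  // assumed thermal expansion coefficient, 1/K
        double alphaThermoOptic = 8.0e-6; // assumed thermo-optic coefficient, 1/K

        double strain = 100e-6;           // 100 microstrain
        double deltaT = 10.0;             // 10 K temperature change

        // Relative shift: (1 - pE) * strain + (alphaExpansion + alphaThermoOptic) * deltaT
        double relativeShift = (1.0 - pE) * strain
                + (alphaExpansion + alphaThermoOptic) * deltaT;
        double deltaLambda = lambdaB * relativeShift;

        System.out.printf("lambda_B     = %.3f nm%n", lambdaB * 1e9);
        System.out.printf("delta lambda = %.1f pm%n", deltaLambda * 1e12);
    }
}

With these assumed figures the shift is of the order of a few hundred picometres, roughly 1.2 pm per microstrain and 13 pm per kelvin near 1,550 nm, which is the order of magnitude usually quoted for silica FBG sensors.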
See also Bragg's law Dielectric mirror Diffraction Diffraction grating Distributed temperature sensing by fiber optics Hydrogen sensor Long-period fiber grating PHOSFOS project – embedding FBGs in flexible skins Photonic crystal fiber References External links FOSNE - Fibre Optic Sensing Network Europe Bragg gratings in Subsea infrastructure monitoring Fiber optics Diffraction
Fiber Bragg grating
[ "Physics", "Chemistry", "Materials_science" ]
5,232
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
2,942,638
https://en.wikipedia.org/wiki/Gloss%20%28optics%29
Gloss is an optical property which indicates how well a surface reflects light in a specular (mirror-like) direction. It is one of the important parameters that are used to describe the visual appearance of an object. Other categories of visual appearance related to the perception of regular or diffuse reflection and transmission of light have been organized under the concept of cesia in an order system with three variables, including gloss among the involved aspects. The factors that affect gloss are the refractive index of the material, the angle of incident light and the surface topography. Apparent gloss depends on the amount of specular reflection – light reflected from the surface at an angle equal but opposite to that of the incoming light – in comparison with diffuse reflection – the amount of light scattered into other directions. Theory When light illuminates an object, it interacts with it in a number of ways: Absorbed within it (largely responsible for colour) Transmitted through it (dependent on the surface transparency and opacity) Scattered from or within it (diffuse reflection, haze and transmission) Specularly reflected from it (gloss) Variations in surface texture directly influence the level of specular reflection. Objects with a smooth surface, i.e. highly polished or containing coatings with finely dispersed pigments, appear shiny to the eye due to a large amount of light being reflected in a specular direction, whilst rough surfaces reflect no specular light as the light is scattered in other directions and therefore appear dull. The image forming qualities of these surfaces are much lower, making any reflections appear blurred and distorted. Substrate material type also influences the gloss of a surface. Non-metallic materials, i.e. plastics etc., produce a higher level of reflected light when illuminated at a greater illumination angle, due to light being absorbed into the material or being diffusely scattered depending on the colour of the material. Metals do not suffer from this effect, producing higher amounts of reflection at any angle. The Fresnel formula gives the specular reflectance, $R_s$, for unpolarized light of intensity $I_0$, at angle of incidence $i$, giving the intensity of the specularly reflected beam, $I_r$, while the refractive index of the surface specimen is $m$. The Fresnel equation is given as follows:

$$R_s = \frac{I_r}{I_0} = \frac{1}{2}\left[\left(\frac{\cos i - \sqrt{m^2 - \sin^2 i}}{\cos i + \sqrt{m^2 - \sin^2 i}}\right)^{2} + \left(\frac{m^2\cos i - \sqrt{m^2 - \sin^2 i}}{m^2\cos i + \sqrt{m^2 - \sin^2 i}}\right)^{2}\right]$$

Surface roughness Surface roughness influences the specular reflectance levels; in the visible frequencies, the surface finish in the micrometre range is most relevant. The diagram on the right depicts the reflection at an angle of incidence $\theta_i$ on a rough surface with a characteristic roughness height variation $h$. The path difference between rays reflected from the top and bottom of the surface bumps is

$$\Delta r = 2h\cos\theta_i .$$

When the wavelength of the light is $\lambda$, the phase difference will be

$$\Delta\phi = \frac{4\pi h\cos\theta_i}{\lambda} .$$

If $\Delta\phi$ is small, the two beams (see Figure 1) are nearly in phase, resulting in constructive interference; therefore, the specimen surface can be considered smooth. But when $\Delta\phi = \pi$, the beams are out of phase and, through destructive interference, cancellation of each other will occur. Low intensity of specularly reflected light means the surface is rough and it scatters the light in other directions. If the middle phase value $\Delta\phi = \pi/2$ is taken as the criterion for a smooth surface, then substitution into the equation above gives

$$h < \frac{\lambda}{8\cos\theta_i} .$$

This smooth surface condition is known as the Rayleigh roughness criterion. History The earliest studies of gloss perception are attributed to Leonard R.
Ingersoll who in 1914 examined the effect of gloss on paper. By quantitatively measuring gloss using instrumentation Ingersoll based his research around the theory that light is polarised in specular reflection whereas diffusely reflected light is non-polarized. The Ingersoll "glarimeter" had a specular geometry with incident and viewing angles at 57.5°. Using this configuration gloss was measured using a contrast method which subtracted the specular component from the total reflectance using a polarizing filter. In the 1930s work by A. H. Pfund, suggested that although specular shininess is the basic (objective) evidence of gloss, actual surface glossy appearance (subjective) relates to the contrast between specular shininess and the diffuse light of the surrounding surface area (now called "contrast gloss" or "luster"). If black and white surfaces of the same shininess are visually compared, the black surface will always appear glossier because of the greater contrast between the specular highlight and the black surroundings as compared to that with white surface and surroundings. Pfund was also the first to suggest that more than one method was needed to analyze gloss correctly. In 1937 R. S. Hunter, as part of his research paper on gloss, described six different visual criteria attributed to apparent gloss. The following diagrams show the relationships between an incident beam of light, I, a specularly reflected beam, S, a diffusely reflected beam, D and a near-specularly reflected beam, B. Specular gloss – the perceived brightness and the brilliance of highlights Defined as the ratio of the light reflected from a surface at an equal but opposite angle to that incident on the surface. Sheen – the perceived shininess at low grazing angles Defined as the gloss at grazing angles of incidence and viewing Contrast gloss – the perceived brightness of specularly and diffusely reflecting areas Defined as the ratio of the specularly reflected light to that diffusely reflected normal to the surface; Absence of bloom – the perceived cloudiness in reflections near the specular direction Defined as a measure of the absence of haze or a milky appearance adjacent to the specularly reflected light: haze is the inverse of absence-of-bloom Distinctness of image gloss – identified by the distinctness of images reflected in surfaces Defined as the sharpness of the specularly reflected light Surface texture gloss – identified by the lack of surface texture and surface blemishes Defined as the uniformity of the surface in terms of visible texture and defects (orange peel, scratches, inclusions etc.) A surface can therefore appear very shiny if it has a well-defined specular reflectance at the specular angle. The perception of an image reflected in the surface can be degraded by appearing unsharp, or by appearing to be of low contrast. The former is characterised by the measurement of the distinctness-of-image and the latter by the haze or contrast gloss. In his paper Hunter also noted the importance of three main factors in the measurement of gloss: The amount of light reflected in the specular direction The amount and way in which the light is spread around the specular direction The change in specular reflection as the specular angle changes For his research he used a glossmeter with a specular angle of 45° as did most of the first photoelectric methods of that type, later studies however by Hunter and D. B. 
Judd in 1939, on a larger number of painted samples, concluded that the 60 degree geometry was the best angle to use so as to provide the closest correlation to a visual observation. Standard gloss measurement Standardisation in gloss measurement was led by Hunter and ASTM (American Society for Testing and Materials) who produced ASTM D523 Standard test method for specular gloss in 1939. This incorporated a method for measuring gloss at a specular angle of 60°. Later editions of the Standard (1951) included methods for measuring at 20° for evaluating high gloss finishes, developed at the DuPont Company (Horning and Morse, 1947) and 85° (matte, or low, gloss). ASTM has a number of other gloss-related standards designed for application in specific industries including the old 45° method which is used primarily now used for glazed ceramics, polyethylene and other plastic films. In 1937, the paper industry adopted a 75° specular-gloss method because the angle gave the best separation of coated book papers. This method was adopted in 1951 by the Technical Association of Pulp and Paper Industries as TAPPI Method T480. In the paint industry, measurements of the specular gloss are made according to International Standard ISO 2813 (BS 3900, Part 5, UK; DIN 67530, Germany; NFT 30-064, France; AS 1580, Australia; JIS Z8741, Japan, are also equivalent). This standard is essentially the same as ASTM D523 although differently drafted. Studies of polished metal surfaces and anodised aluminium automotive trim in the 1960s by Tingle, Potter and George led to the standardisation of gloss measurement of high gloss surfaces by goniophotometry under the designation ASTM E430. In this standard it also defined methods for the measurement of distinctness of image gloss and reflection haze. See also List of optical topics Distinctness of image References Sources External links PCI Magazin article: What is the Level of Confidence in Measuring Gloss? NPL: Good practice guide for the measurement of Gloss Optics Physical properties
Gloss (optics)
[ "Physics", "Chemistry" ]
1,780
[ "Physical phenomena", "Applied and interdisciplinary physics", "Optics", " molecular", "Atomic", "Physical properties", " and optical physics" ]
1,483,778
https://en.wikipedia.org/wiki/Yarrow%20oil
Yarrow essential oil is a volatile oil including the chemical proazulene. The dark blue essential oil is extracted by steam distillation of the flowers of yarrow (Achillea millefolium). It kills the larvae of the mosquito Aedes albopictus. References Essential oils Further reading Supercritical CO2 extraction of essential oil from yarrow Production of Yarrow (Achillea millefolium L.) in Norway: Essential Oil Content and Quality Physicochemical Characteristics and Fatty Acid Profile of Yarrow (Achillea tenuifolia) Seed Oil Essential oil composition of three polyploids in the Achillea millefolium ‘complex’ Phytochemical analysis of the essential oil of Achillea millefolium L. from various European Countries Essential oil composition of two yarrow taxonomic forms
Yarrow oil
[ "Chemistry" ]
175
[ "Essential oils", "Natural products" ]
1,483,799
https://en.wikipedia.org/wiki/Geometrical%20frustration
In condensed matter physics, geometrical frustration (or in short, frustration) is a phenomenon where the combination of conflicting inter-atomic forces leads to complex structures. Frustration can imply a plenitude of distinct ground states at zero temperature, and usual thermal ordering may be suppressed at higher temperatures. Much-studied examples include amorphous materials, glasses, and dilute magnets. The term frustration, in the context of magnetic systems, has been introduced by Gerard Toulouse in 1977. Frustrated magnetic systems had been studied even before. Early work includes a study of the Ising model on a triangular lattice with nearest-neighbor spins coupled antiferromagnetically, by G. H. Wannier, published in 1950. Related features occur in magnets with competing interactions, where both ferromagnetic as well as antiferromagnetic couplings between pairs of spins or magnetic moments are present, with the type of interaction depending on the separation distance of the spins. In that case commensurability, such as helical spin arrangements may result, as had been discussed originally, especially, by A. Yoshimori, T. A. Kaplan, R. J. Elliott, and others, starting in 1959, to describe experimental findings on rare-earth metals. A renewed interest in such spin systems with frustrated or competing interactions arose about two decades later, beginning in the 1970s, in the context of spin glasses and spatially modulated magnetic superstructures. In spin glasses, frustration is augmented by stochastic disorder in the interactions, as may occur experimentally in non-stoichiometric magnetic alloys. Carefully analyzed spin models with frustration include the Sherrington–Kirkpatrick model, describing spin glasses, and the ANNNI model, describing commensurability magnetic superstructures. Recently, the concept of frustration has been used in brain network analysis to identify the non-trivial assemblage of neural connections and highlight the adjustable elements of the brain. Magnetic ordering Geometrical frustration is an important feature in magnetism, where it stems from the relative arrangement of spins. A simple 2D example is shown in Figure 1. Three magnetic ions reside on the corners of a triangle with antiferromagnetic interactions between them; the energy is minimized when each spin is aligned opposite to neighbors. Once the first two spins align antiparallel, the third one is frustrated because its two possible orientations, up and down, give the same energy. The third spin cannot simultaneously minimize its interactions with both of the other two. Since this effect occurs for each spin, the ground state is sixfold degenerate. Only the two states where all spins are up or down have more energy. Similarly in three dimensions, four spins arranged in a tetrahedron (Figure 2) may experience geometric frustration. If there is an antiferromagnetic interaction between spins, then it is not possible to arrange the spins so that all interactions between spins are antiparallel. There are six nearest-neighbor interactions, four of which are antiparallel and thus favourable, but two of which (between 1 and 2, and between 3 and 4) are unfavourable. It is impossible to have all interactions favourable, and the system is frustrated. Geometrical frustration is also possible if the spins are arranged in a non-collinear way. 
If we consider a tetrahedron with a spin on each vertex pointing along the easy axis (that is, directly towards or away from the centre of the tetrahedron), then it is possible to arrange the four spins so that there is no net spin (Figure 3). This is exactly equivalent to having an antiferromagnetic interaction between each pair of spins, so in this case there is no geometrical frustration. With these axes, geometric frustration arises if there is a ferromagnetic interaction between neighbours, where energy is minimized by parallel spins. The best possible arrangement is shown in Figure 4, with two spins pointing towards the centre and two pointing away. The net magnetic moment points upwards, maximising ferromagnetic interactions in this direction, but left and right vectors cancel out (i.e. are antiferromagnetically aligned), as do forwards and backwards. There are three different equivalent arrangements with two spins out and two in, so the ground state is three-fold degenerate. Mathematical definition The mathematical definition is simple (and analogous to the so-called Wilson loop in quantum chromodynamics): One considers for example expressions ("total energies" or "Hamiltonians") of the form

$$\mathcal{H} = -\sum_{\langle i,k\rangle \in G} J_{i,k}\,\mathbf{S}_i \cdot \mathbf{S}_k ,$$

where G is the graph considered, whereas the quantities $J_{i,k}$ are the so-called "exchange energies" between nearest-neighbours, which (in the energy units considered) assume the values ±1 (mathematically, this is a signed graph), while the $\mathbf{S}_i \cdot \mathbf{S}_k$ are inner products of scalar or vectorial spins or pseudo-spins. If the graph G has quadratic or triangular faces P, the so-called "plaquette variables" $P_W$, "loop-products" of the following kind, appear:

$$P_W = J_{1,2}\,J_{2,3}\,J_{3,4}\,J_{4,1} \quad\text{and}\quad P_W = J_{1,2}\,J_{2,3}\,J_{3,1} ,$$

respectively, which are also called "frustration products". These products are then summed over all plaquettes. The result for a single plaquette is either +1 or −1. In the last-mentioned case the plaquette is "geometrically frustrated". It can be shown that the result has a simple gauge invariance: it does not change – nor do other measurable quantities, e.g. the "total energy" – even if locally the exchange integrals and the spins are simultaneously modified as follows:

$$J_{i,k} \to \varepsilon_i\,J_{i,k}\,\varepsilon_k , \qquad \mathbf{S}_i \to \varepsilon_i\,\mathbf{S}_i , \qquad \mathbf{S}_k \to \varepsilon_k\,\mathbf{S}_k .$$

Here the numbers $\varepsilon_i$ and $\varepsilon_k$ are arbitrary signs, i.e. +1 or −1, so that the modified structure may look totally random. Water ice Although most previous and current research on frustration focuses on spin systems, the phenomenon was first studied in ordinary ice. In 1936 Giauque and Stout published The Entropy of Water and the Third Law of Thermodynamics. Heat Capacity of Ice from 15 K to 273 K, reporting calorimeter measurements on water through the freezing and vaporization transitions up to the high temperature gas phase. The entropy was calculated by integrating the heat capacity and adding the latent heat contributions; the low temperature measurements were extrapolated to zero, using Debye's then recently derived formula. The resulting entropy, S1 = 44.28 cal/(K·mol) = 185.3 J/(mol·K), was compared to the theoretical result from statistical mechanics of an ideal gas, S2 = 45.10 cal/(K·mol) = 188.7 J/(mol·K). The two values differ by S0 = 0.82 ± 0.05 cal/(K·mol) = 3.4 J/(mol·K). This result was then explained, to an excellent approximation, by Linus Pauling, who showed that ice possesses a finite entropy (estimated as 0.81 cal/(K·mol) or 3.4 J/(mol·K)) at zero temperature due to the configurational disorder intrinsic to the protons in ice.
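To make the plaquette products defined in the mathematical definition above concrete, the following minimal sketch (with arbitrary, made-up ±1 couplings) tests a single triangular plaquette for frustration and verifies that the product is unchanged by the local sign ("gauge") transformation described there:

// Minimal sketch: frustration product of a triangular plaquette and its
// invariance under the local sign ("gauge") transformation.
// The couplings below are arbitrary illustrative choices.
public class PlaquetteSketch {
    static int frustrationProduct(int j12, int j23, int j31) {
        return j12 * j23 * j31;   // -1 means the plaquette is frustrated
    }

    public static void main(String[] args) {
        int j12 = +1, j23 = +1, j31 = -1;   // one antiferromagnetic bond
        int p = frustrationProduct(j12, j23, j31);
        System.out.println("Plaquette product: " + p
                + (p < 0 ? " (frustrated)" : " (unfrustrated)"));

        // Gauge transformation: flip the sign attached to site 1 (eps1 = -1).
        // Every bond touching site 1 changes sign; the loop product does not.
        int eps1 = -1, eps2 = +1, eps3 = +1;
        int j12g = eps1 * j12 * eps2;
        int j23g = eps2 * j23 * eps3;
        int j31g = eps3 * j31 * eps1;
        System.out.println("Product after gauge transformation: "
                + frustrationProduct(j12g, j23g, j31g));
    }
}

Flipping the sign attached to one site reverses the two bonds that touch it but leaves the loop product, and hence the frustration, unchanged.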
In the hexagonal or cubic ice phase the oxygen ions form a tetrahedral structure with an O–O bond length of 2.76 Å (276 pm), while the O–H bond length measures only 0.96 Å (96 pm). Every oxygen (white) ion is surrounded by four hydrogen ions (black) and each hydrogen ion is surrounded by 2 oxygen ions, as shown in Figure 5. Maintaining the internal H2O molecule structure, the minimum energy position of a proton is not half-way between two adjacent oxygen ions. There are two equivalent positions a hydrogen may occupy on the line of the O–O bond, a far and a near position. Thus a rule constrains, and frustrates, the proton positions in a ground state configuration: for each oxygen, two of the neighboring protons must reside in the far position and two of them in the near position, the so-called 'ice rules'. Pauling proposed that the open tetrahedral structure of ice affords many equivalent states satisfying the ice rules. Pauling went on to compute the configurational entropy in the following way: consider one mole of ice, consisting of N O2− ions and 2N protons. Each O–O bond has two positions for a proton, leading to $2^{2N}$ possible configurations. However, among the 16 possible configurations associated with each oxygen, only 6 are energetically favorable, maintaining the H2O molecule constraint. Then an upper bound on the number of states that the ground state can take is estimated as

$$\Omega < 2^{2N}\left(\tfrac{6}{16}\right)^{N} = \left(\tfrac{3}{2}\right)^{N} .$$

Correspondingly the configurational entropy $S_0 = k_B\ln\Omega = N k_B \ln\left(\tfrac{3}{2}\right) = 0.81$ cal/(K·mol) = 3.4 J/(mol·K) is in amazing agreement with the missing entropy measured by Giauque and Stout. Although Pauling's calculation neglected both the global constraint on the number of protons and the local constraint arising from closed loops on the wurtzite lattice, the estimate was subsequently shown to be of excellent accuracy. Spin ice A mathematically analogous situation to the degeneracy in water ice is found in the spin ices. A common spin ice structure is shown in Figure 6 in the cubic pyrochlore structure with one magnetic atom or ion residing on each of the four corners. Due to the strong crystal field in the material, each of the magnetic ions can be represented by an Ising ground state doublet with a large moment. This suggests a picture of Ising spins residing on the corner-sharing tetrahedral lattice with spins fixed along the local quantization axis, the <111> cubic axes, which coincide with the lines connecting each tetrahedral vertex to the center. Every tetrahedral cell must have two spins pointing in and two pointing out in order to minimize the energy. Currently the spin ice model has been approximately realized by real materials, most notably the rare earth pyrochlores Ho2Ti2O7, Dy2Ti2O7, and Ho2Sn2O7. These materials all show nonzero residual entropy at low temperature. Extension of Pauling’s model: General frustration The spin ice model is only one subdivision of frustrated systems. The word frustration was initially introduced to describe a system's inability to simultaneously minimize the competing interaction energies between its components. In general frustration is caused either by competing interactions due to site disorder (see also the Villain model) or by lattice structure such as in the triangular, face-centered cubic (fcc), hexagonal-close-packed, tetrahedron, pyrochlore and kagome lattices with antiferromagnetic interaction.
So frustration is divided into two categories: the first corresponds to the spin glass, which has both disorder in structure and frustration in spin; the second is the geometrical frustration with an ordered lattice structure and frustration of spin. The frustration of a spin glass is understood within the framework of the RKKY model, in which the interaction property, either ferromagnetic or anti-ferromagnetic, is dependent on the distance of the two magnetic ions. Due to the lattice disorder in the spin glass, one spin of interest and its nearest neighbors could be at different distances and have a different interaction property, which thus leads to different preferred alignment of the spin. Artificial geometrically frustrated ferromagnets With the help of lithography techniques, it is possible to fabricate sub-micrometer size magnetic islands whose geometric arrangement reproduces the frustration found in naturally occurring spin ice materials. Recently R. F. Wang et al. reported the discovery of an artificial geometrically frustrated magnet composed of arrays of lithographically fabricated single-domain ferromagnetic islands. These islands are manually arranged to create a two-dimensional analog to spin ice. The magnetic moments of the ordered ‘spin’ islands were imaged with magnetic force microscopy (MFM) and then the local accommodation of frustration was thoroughly studied. In their previous work on a square lattice of frustrated magnets, they observed both ice-like short-range correlations and the absence of long-range correlations, just like in the spin ice at low temperature. These results solidify the uncharted ground on which the real physics of frustration can be visualized and modeled by these artificial geometrically frustrated magnets, and inspires further research activity. These artificially frustrated ferromagnets can exhibit unique magnetic properties when studying their global response to an external field using Magneto-Optical Kerr Effect. In particular, a non-monotonic angular dependence of the square lattice coercivity is found to be related to disorder in the artificial spin ice system. Geometric frustration without lattice Another type of geometrical frustration arises from the propagation of a local order. A main question that a condensed matter physicist faces is to explain the stability of a solid. It is sometimes possible to establish some local rules, of chemical nature, which lead to low energy configurations and therefore govern structural and chemical order. This is not generally the case and often the local order defined by local interactions cannot propagate freely, leading to geometric frustration. A common feature of all these systems is that, even with simple local rules, they present a large set of, often complex, structural realizations. Geometric frustration plays a role in fields of condensed matter, ranging from clusters and amorphous solids to complex fluids. The general method of approach to resolve these complications follows two steps. First, the constraint of perfect space-filling is relaxed by allowing for space curvature. An ideal, unfrustrated, structure is defined in this curved space. Then, specific distortions are applied to this ideal template in order to embed it into three dimensional Euclidean space. The final structure is a mixture of ordered regions, where the local order is similar to that of the template, and defects arising from the embedding. Among the possible defects, disclinations play an important role. 
Simple two-dimensional examples Two-dimensional examples are helpful in order to get some understanding about the origin of the competition between local rules and geometry in the large. Consider first an arrangement of identical discs (a model for a hypothetical two-dimensional metal) on a plane; we suppose that the interaction between discs is isotropic and locally tends to arrange the disks in the densest way possible. The best arrangement for three disks is trivially an equilateral triangle with the disk centers located at the triangle vertices. The study of the long range structure can therefore be reduced to that of plane tilings with equilateral triangles. A well known solution is provided by the triangular tiling with a total compatibility between the local and global rules: the system is said to be "unfrustrated". But now suppose that the interaction energy is at a minimum when atoms sit on the vertices of a regular pentagon. Trying to propagate over long range a packing of these pentagons sharing edges (atomic bonds) and vertices (atoms) is impossible. This is due to the impossibility of tiling a plane with regular pentagons, simply because the pentagon vertex angle does not divide 2π. Three such pentagons can easily fit at a common vertex, but a gap remains between two edges. It is this kind of discrepancy which is called "geometric frustration". There is one way to overcome this difficulty. Let the surface to be tiled be free of any presupposed topology, and let us build the tiling with a strict application of the local interaction rule. In this simple example, we observe that the surface inherits the topology of a sphere and so receives a curvature. The final structure, here a pentagonal dodecahedron, allows for a perfect propagation of the pentagonal order. It is called an "ideal" (defect-free) model for the considered structure. Dense structures and tetrahedral packings The stability of metals is a longstanding question of solid state physics, which can only be understood in the quantum mechanical framework by properly taking into account the interaction between the positively charged ions and the valence and conduction electrons. It is nevertheless possible to use a very simplified picture of metallic bonding that keeps only an isotropic type of interaction, leading to structures which can be represented as densely packed spheres. And indeed the crystalline simple metal structures are often either close packed face-centered cubic (fcc) or hexagonal close packing (hcp) lattices. To some extent amorphous metals and quasicrystals can also be modeled by close packing of spheres. The local atomic order is well modeled by a close packing of tetrahedra, leading to an imperfect icosahedral order. A regular tetrahedron is the densest configuration for the packing of four equal spheres. The dense random packing of hard spheres problem can thus be mapped on the tetrahedral packing problem. It is a practical exercise to try to pack table tennis balls in order to form only tetrahedral configurations. One starts with four balls arranged as a perfect tetrahedron, and tries to add new spheres while forming new tetrahedra. The next solution, with five balls, is trivially two tetrahedra sharing a common face; note that already with this solution, the fcc structure, which contains individual tetrahedral holes, does not show such a configuration (the tetrahedra share edges, not faces).
With six balls, three regular tetrahedra are built, and the cluster is incompatible with all compact crystalline structures (fcc and hcp). Adding a seventh sphere gives a new cluster consisting of two "axial" balls touching each other and five others touching the latter two balls, the outer shape being an almost regular pentagonal bi-pyramid. However, we are now facing a real packing problem, analogous to the one encountered above with the pentagonal tiling in two dimensions. The dihedral angle of a tetrahedron is not commensurable with 2π; consequently, a hole remains between two faces of neighboring tetrahedra. As a consequence, a perfect tiling of the Euclidean space R3 is impossible with regular tetrahedra. The frustration has a topological character: it is impossible to fill Euclidean space with tetrahedra, even severely distorted, if we impose that a constant number of tetrahedra (here five) share a common edge. The next step is crucial: the search for an unfrustrated structure by allowing for curvature in the space, in order for the local configurations to propagate identically and without defects throughout the whole space. Regular packing of tetrahedra: the polytope {3,3,5} Twenty irregular tetrahedra pack with a common vertex in such a way that the twelve outer vertices form a regular icosahedron. Indeed, the icosahedron edge length l is slightly longer than the circumsphere radius r (l ≈ 1.05r). There is a solution with regular tetrahedra if the space is not Euclidean, but spherical. It is the polytope {3,3,5}, using the Schläfli notation, also known as the 600-cell. There are one hundred and twenty vertices which all belong to the hypersphere S3 with radius equal to the golden ratio (φ = (1 + √5)/2) if the edges are of unit length. The six hundred cells are regular tetrahedra grouped by five around a common edge and by twenty around a common vertex. This structure is called a polytope (see Coxeter), which is the general name in higher dimensions for the series containing polygons and polyhedra. Even if this structure is embedded in four dimensions, it has been considered as a three dimensional (curved) manifold. This point is conceptually important for the following reason. The ideal models that have been introduced in the curved space are three dimensional curved templates. They look locally like three dimensional Euclidean models. So, the {3,3,5} polytope, which is a tiling by tetrahedra, provides a very dense atomic structure if atoms are located on its vertices. It is therefore naturally used as a template for amorphous metals, but one should not forget that it is at the price of successive idealizations. Literature References Condensed matter physics Thermodynamic entropy Magnetic ordering
Geometrical frustration
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,199
[ "Physical quantities", "Phases of matter", "Electric and magnetic fields in matter", "Thermodynamic entropy", "Materials science", "Magnetic ordering", "Entropy", "Condensed matter physics", "Statistical mechanics", "Matter" ]
1,483,960
https://en.wikipedia.org/wiki/Charge%20invariance
Charge invariance refers to the fixed value of the electric charge of a particle regardless of its motion. Like mass, total spin and magnetic moment, a particle's charge quantum number remains unchanged between two reference frames in relative motion. For example, an electron has a specific charge e, total spin ½, and invariant mass me. Accelerate that electron, and the charge, spin and mass assigned to it in all physical laws in the frame at rest and the moving frame remain the same – e, ½, me. In contrast, the particle's total relativistic energy or de Broglie wavelength change values between the reference frames. The origin of charge invariance, and of all relativistic invariants, is presently unclear. There may be some hints proposed by string/M-theory. It is possible the concept of charge invariance may provide a key to unlocking the mystery of unification in physics – the single theory of gravity, electromagnetism, and the strong and weak nuclear forces. The property of charge invariance is embedded in the charge density – current density four-vector $J^\mu = (c\rho, \mathbf{j})$, whose vanishing four-divergence, $\partial_\mu J^\mu = 0$, then signifies charge conservation. See also Charge conservation Pohlmeyer charge References Particle physics
Charge invariance
[ "Physics" ]
246
[ "Particle physics" ]
1,484,098
https://en.wikipedia.org/wiki/Titanium%20carbide
Titanium carbide, TiC, is an extremely hard (Mohs 9–9.5) refractory ceramic material, similar to tungsten carbide. It has the appearance of black powder with the sodium chloride (face-centered cubic) crystal structure. It occurs in nature as a form of the very rare mineral khamrabaevite – (Ti,V,Fe)C. It was discovered in 1984 on Mount Arashan in the Chatkal District, USSR (modern Kyrgyzstan), near the Uzbek border. The mineral was named after Ibragim Khamrabaevich Khamrabaev, director of Geology and Geophysics of Tashkent, Uzbekistan. Its crystals as found in nature range in size from 0.1 to 0.3 mm. Physical properties Titanium carbide has an elastic modulus of approximately 400 GPa and a shear modulus of 188 GPa. Titanium carbide is soluble in solid titanium oxide, with a range of compositions which are collectively named "titanium oxycarbide" and created by carbothermic reduction of the oxide. Manufacturing and machining Tool bits without tungsten content can be made of titanium carbide in nickel-cobalt matrix cermet, enhancing the cutting speed, precision, and smoothness of the workpiece. The resistance to wear, corrosion, and oxidation of a tungsten carbide–cobalt material can be increased by adding 6–30% of titanium carbide to tungsten carbide. This forms a solid solution that is more brittle and susceptible to breakage. Titanium carbide can be etched with reactive-ion etching. Applications Titanium carbide is used in preparation of cermets, which are frequently used to machine steel materials at high cutting speed. It is also used as an abrasion-resistant surface coating on metal parts, such as tool bits and watch mechanisms. Titanium carbide is also used as a heat shield coating for atmospheric reentry of spacecraft. 7075 aluminium alloy (AA7075) is almost as strong as steel, but weighs one third as much. Using thin AA7075 rods with TiC nanoparticles allows larger alloy pieces to be welded without phase-segregation induced cracks. See also Metallocarbohedryne, a family of metal-carbon clusters including Ti8C12 References Carbides Ceramic materials Refractory materials Superhard materials Titanium(IV) compounds Rock salt crystal structure
Titanium carbide
[ "Physics", "Engineering" ]
499
[ "Refractory materials", "Materials", "Superhard materials", "Ceramic materials", "Ceramic engineering", "Matter" ]
1,484,228
https://en.wikipedia.org/wiki/Montel%27s%20theorem
In complex analysis, an area of mathematics, Montel's theorem refers to one of two theorems about families of holomorphic functions. These are named after French mathematician Paul Montel, and give conditions under which a family of holomorphic functions is normal. Locally uniformly bounded families are normal The first, and simpler, version of the theorem states that a family of holomorphic functions defined on an open subset of the complex numbers is normal if and only if it is locally uniformly bounded. This theorem has the following formally stronger corollary. Suppose that $\mathcal{F}$ is a family of meromorphic functions on an open set $D$. If $z_0 \in D$ is such that $\mathcal{F}$ is not normal at $z_0$, and $U \subseteq D$ is a neighborhood of $z_0$, then $\bigcup_{f \in \mathcal{F}} f(U)$ is dense in the complex plane. Functions omitting two values The stronger version of Montel's theorem (occasionally referred to as the Fundamental Normality Test) states that a family of holomorphic functions, all of which omit the same two complex values, is normal. Necessity The conditions in the above theorems are sufficient, but not necessary for normality. Indeed, the family $\{z \mapsto cz : |c| \leq 1\}$ is normal on the whole complex plane (it is locally uniformly bounded), but it does not omit any complex value, since the member $z \mapsto z$ alone takes every value. Proofs The first version of Montel's theorem is a direct consequence of Marty's theorem (which states that a family is normal if and only if the spherical derivatives are locally bounded) and Cauchy's integral formula. This theorem has also been called the Stieltjes–Osgood theorem, after Thomas Joannes Stieltjes and William Fogg Osgood. The Corollary stated above is deduced as follows. Suppose that all the functions in $\mathcal{F}$ omit the same neighborhood of a point $w$. By postcomposing with the map $z \mapsto \frac{1}{z - w}$ we obtain a uniformly bounded family, which is normal by the first version of the theorem. The second version of Montel's theorem can be deduced from the first by using the fact that there exists a holomorphic universal covering from the unit disk to the twice punctured plane $\mathbb{C} \setminus \{0, 1\}$. (Such a covering is given by the elliptic modular function.) This version of Montel's theorem can also be derived from Picard's theorem, by using Zalcman's lemma. Relationship to theorems for entire functions A heuristic principle known as Bloch's principle (made precise by Zalcman's lemma) states that properties that imply that an entire function is constant correspond to properties that ensure that a family of holomorphic functions is normal. For example, the first version of Montel's theorem stated above is the analog of Liouville's theorem, while the second version corresponds to Picard's theorem. See also Montel space Fundamental normality test Riemann mapping theorem Notes References Compactness theorems Theorems in complex analysis
Montel's theorem
[ "Mathematics" ]
574
[ "Compactness theorems", "Theorems in mathematical analysis", "Theorems in complex analysis", "Theorems in topology" ]
1,484,457
https://en.wikipedia.org/wiki/Atmospheric%20focusing
Atmospheric focusing is a type of wave interaction causing shock waves to affect areas at a greater distance than otherwise expected. Variations in the atmosphere create distortions in the wavefront by refracting a segment, allowing it to converge at certain points and constructively interfere. In the case of destructive shock waves, this may result in areas of damage far beyond the theoretical extent of their blast effect. Examples of this are seen during supersonic booms, large extraterrestrial impacts from objects like meteors, and nuclear explosions. Density variations in the atmosphere (e.g. due to temperature variations) or airspeed variations cause refraction along the shock wave, allowing the uniform wavefront to separate and eventually interfere, dispersing the wave at some points and focusing it at others. A similar effect occurs in water when a wave travels through a patch of fluid of different density, causing it to diverge over a large distance. For powerful shock waves this can cause damage farther than expected; the shock wave energy density will decrease beyond the values expected from uniform geometry (an inverse-square, $1/r^2$, falloff for weak shock or acoustic waves, as expected at large distances). Types of atmospheric focusing Supersonic booms Atmospheric focusing from supersonic booms is a modern occurrence, largely a result of military supersonic flight around the world. When objects like planes travel faster than the speed of sound, they create sonic booms and pressure waves that can be focused. Atmospheric factors present when these waves are created can focus the waves and cause damage. Planes can also create boom waves and explosion waves that can be focused. Consideration of atmospheric focusing in flight plans is critical. The wind and altitude during a flight can create conditions for atmospheric focusing, which can be determined through reference to a focusing curve. When this is the case, supersonic flight may cause damage on the ground. Meteor impacts Meteors can also cause shock waves that can be focused. As a meteor enters Earth’s atmosphere and reaches lower altitudes, it can create a shock wave. The shock wave is affected by the meteor's composition, the temperature, and the pressure. Because a meteor needs a large size and mass to do this, only a small percentage of meteors can create these shock waves. Radar and infrasonic methods are able to detect meteor shock waves. These tools are used to study the waves and can help develop new methods of learning about them. Nuclear explosions and bombs Nuclear explosions and bombs can also lead to atmospheric focusing. The effects of focusing may be found hundreds of kilometers from the blast site. An example of this is the case of the Tsar Bomba test, where damage was caused up to approximately 1,000 km away. Atmospheric focusing can increase the damage caused by these explosions. See also Knudsen number Nuclear weapons testing Rankine–Hugoniot conditions References Shock waves Nuclear weapons
Atmospheric focusing
[ "Physics" ]
575
[ "Waves", "Physical phenomena", "Shock waves" ]
1,484,696
https://en.wikipedia.org/wiki/Dependency%20injection
In software engineering, dependency injection is a programming technique in which an object or function receives other objects or functions that it requires, as opposed to creating them internally. Dependency injection aims to separate the concerns of constructing objects and using them, leading to loosely coupled programs. The pattern ensures that an object or function that wants to use a given service should not have to know how to construct those services. Instead, the receiving "client" (object or function) is provided with its dependencies by external code (an "injector"), which it is not aware of. Dependency injection makes implicit dependencies explicit and helps solve the following problems: How can a class be independent from the creation of the objects it depends on? How can an application, and the objects it uses support different configurations? Dependency injection is often used to keep code in-line with the dependency inversion principle. In statically typed languages using dependency injection means that a client only needs to declare the interfaces of the services it uses, rather than their concrete implementations, making it easier to change which services are used at runtime without recompiling. Application frameworks often combine dependency injection with inversion of control. Under inversion of control, the framework first constructs an object (such as a controller), and then passes control flow to it. With dependency injection, the framework also instantiates the dependencies declared by the application object (often in the constructor method's parameters), and passes the dependencies into the object. Dependency injection implements the idea of "inverting control over the implementations of dependencies", which is why certain Java frameworks generically name the concept "inversion of control" (not to be confused with inversion of control flow). Roles Dependency injection involves four roles: services, clients, interfaces and injectors. Services and clients A service is any class which contains useful functionality. In turn, a client is any class which uses services. The services that a client requires are the client's dependencies. Any object can be a service or a client; the names relate only to the role the objects play in an injection. The same object may even be both a client (it uses injected services) and a service (it is injected into other objects). Upon injection, the service is made part of the client's state, available for use. Interfaces Clients should not know how their dependencies are implemented, only their names and API. A service which retrieves emails, for instance, may use the IMAP or POP3 protocols behind the scenes, but this detail is likely irrelevant to calling code that merely wants an email retrieved. By ignoring implementation details, clients do not need to change when their dependencies do. Injectors The injector, sometimes also called an assembler, container, provider or factory, introduces services to the client. The role of injectors is to construct and connect complex object graphs, where objects may be both clients and services. The injector itself may be many objects working together, but must not be the client, as this would create a circular dependency. Because dependency injection separates how objects are constructed from how they are used, it often diminishes the importance of the new keyword found in most object-oriented languages. 
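As a minimal sketch of these roles (the class and method names here are illustrative only, not taken from any particular framework), an injector can construct a service and hand it to a client like so:

// Minimal sketch of the roles: a service interface, a concrete service,
// a client that declares its dependency, and an injector that wires them.
// All names are illustrative only.
interface MessageService {
    void send(String text);
}

class EmailService implements MessageService {
    public void send(String text) {
        System.out.println("Sending email: " + text);
    }
}

class Notifier {                        // the client
    private final MessageService service;

    Notifier(MessageService service) {  // the dependency is received, not constructed
        this.service = service;
    }

    void notifyUser() {
        service.send("Hello!");
    }
}

public class Injector {                 // the injector, or assembler
    public static void main(String[] args) {
        MessageService service = new EmailService(); // construction happens here
        Notifier client = new Notifier(service);     // the dependency is injected
        client.notifyUser();
    }
}

Swapping EmailService for another MessageService implementation then requires changing only the injector, not the client.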
Because the framework handles creating services, the programmer tends to only directly construct value objects which represent entities in the program's domain (such as an Employee object in a business app or an Order object in a shopping app). Analogy As an analogy, cars can be thought of as services which perform the useful work of transporting people from one place to another. Car engines can require gas, diesel or electricity, but this detail is unimportant to the client—a driver—who only cares if it can get them to their destination. Cars present a uniform interface through their pedals, steering wheels and other controls. As such, which engine they were 'injected' with on the factory line ceases to matter and drivers can switch between any kind of car as needed. Advantages and disadvantages Advantages A basic benefit of dependency injection is decreased coupling between classes and their dependencies. By removing a client's knowledge of how its dependencies are implemented, programs become more reusable, testable and maintainable. This also results in increased flexibility: a client may act on anything that supports the intrinsic interface the client expects. More generally, dependency injection reduces boilerplate code, since all dependency creation is handled by a singular component. Finally, dependency injection allows concurrent development. Two developers can independently develop classes that use each other, while only needing to know the interface the classes will communicate through. Plugins are often developed by third parties that never even talk to the developers of the original product. Testing Many of dependency injection's benefits are particularly relevant to unit testing. For example, dependency injection can be used to externalize a system's configuration details into configuration files, allowing the system to be reconfigured without recompilation. Separate configurations can be written for different situations that require different implementations of components. Similarly, because dependency injection does not require any change in code behavior, it can be applied to legacy code as a refactoring. This makes clients more independent and easier to unit test in isolation, using stubs or mock objects that simulate other objects not under test. This ease of testing is often the first benefit noticed when using dependency injection. Disadvantages Critics of dependency injection argue that it: Creates clients that demand configuration details, which can be onerous when obvious defaults are available. Makes code difficult to trace because it separates behavior from construction. Is typically implemented with reflection or dynamic programming, hindering IDE automation. Typically requires more upfront development effort. Encourages dependence on a framework. Types of dependency injection There are four main ways in which a client can receive injected services: Constructor injection, where dependencies are provided through a client's class constructor. Method injection, where dependencies are provided to a method only when required for specific functionality. Setter injection, where the client exposes a setter method which accepts the dependency. Interface injection, where the dependency's interface provides an injector method that will inject the dependency into any client passed to it. In some frameworks, clients do not need to actively accept dependency injection at all. 
In Java, for example, reflection can make private attributes public when testing and inject services directly. Without dependency injection In the following Java example, the Client class contains a Service member variable initialized in the constructor. The client directly constructs and controls which service it uses, creating a hard-coded dependency. public class Client { private Service service; Client() { // The dependency is hard-coded. this.service = new ExampleService(); } } Constructor injection The most common form of dependency injection is for a class to request its dependencies through its constructor. This ensures the client is always in a valid state, since it cannot be instantiated without its necessary dependencies. public class Client { private Service service; // The dependency is injected through a constructor. Client(final Service service) { if (service == null) { throw new IllegalArgumentException("service must not be null"); } this.service = service; } } Method Injection Dependencies are passed as arguments to a specific method, allowing them to be used only during that method's execution without maintaining a long-term reference. This approach is particularly useful for temporary dependencies or when different implementations are needed for various method calls. public class Client { public void performAction(Service service) { if (service == null) { throw new IllegalArgumentException("service must not be null"); } service.execute(); } } Setter injection By accepting dependencies through a setter method, rather than a constructor, clients can allow injectors to manipulate their dependencies at any time. This offers flexibility, but makes it difficult to ensure that all dependencies are injected and valid before the client is used. public class Client { private Service service; // The dependency is injected through a setter method. public void setService(final Service service) { if (service == null) { throw new IllegalArgumentException("service must not be null"); } this.service = service; } } Interface injection With interface injection, dependencies are completely ignorant of their clients, yet still send and receive references to new clients. In this way, the dependencies become injectors. The key is that the injecting method is provided through an interface. An assembler is still needed to introduce the client and its dependencies. The assembler takes a reference to the client, casts it to the setter interface that sets that dependency, and passes it to that dependency object which in turn passes a reference to itself back to the client. For interface injection to have value, the dependency must do something in addition to simply passing back a reference to itself. This could be acting as a factory or sub-assembler to resolve other dependencies, thus abstracting some details from the main assembler. It could be reference-counting so that the dependency knows how many clients are using it. If the dependency maintains a collection of clients, it could later inject them all with a different instance of itself. 
public interface ServiceSetter { void setService(Service service); } public class Client implements ServiceSetter { private Service service; @Override public void setService(final Service service) { if (service == null) { throw new IllegalArgumentException("service must not be null"); } this.service = service; } } public class ServiceInjector { private final Set<ServiceSetter> clients = new HashSet<>(); public void inject(final ServiceSetter client) { this.clients.add(client); client.setService(new ExampleService()); } // Swap in a different service implementation for every registered client. public void switchToAnotherExampleService() { for (final ServiceSetter client : this.clients) { client.setService(new AnotherExampleService()); } } } public class ExampleService implements Service {} public class AnotherExampleService implements Service {} Assembly The simplest way of implementing dependency injection is to manually arrange services and clients, typically done at the program's root, where execution begins. public class Program { public static void main(final String[] args) { // Build the service. final Service service = new ExampleService(); // Inject the service into the client. final Client client = new Client(service); // Use the objects. System.out.println(client.greet()); } } Manual construction may be more complex and involve builders, factories, or other construction patterns. Frameworks Manual dependency injection is often tedious and error-prone for larger projects, promoting the use of frameworks which automate the process. Manual dependency injection becomes a dependency injection framework once the constructing code is no longer custom to the application and is instead universal. While useful, these tools are not required in order to perform dependency injection. Some frameworks, like Spring, can use external configuration files to plan program composition: import org.springframework.beans.factory.BeanFactory; import org.springframework.context.support.ClassPathXmlApplicationContext; public class Injector { public static void main(final String[] args) { // Details about which concrete service to use are stored in configuration separate from the program itself. final BeanFactory beanfactory = new ClassPathXmlApplicationContext("Beans.xml"); final Client client = (Client) beanfactory.getBean("client"); System.out.println(client.greet()); } } Even with a potentially long and complex object graph, the only class mentioned in code is the entry point, in this case Client. Client has not undergone any changes to work with Spring and remains a POJO. By keeping Spring-specific annotations and calls from spreading out among many classes, the system stays only loosely dependent on Spring. Examples AngularJS The following example shows an AngularJS component receiving a greeting service through dependency injection. function SomeClass(greeter) { this.greeter = greeter; } SomeClass.prototype.doSomething = function(name) { this.greeter.greet(name); } Each AngularJS application contains a service locator responsible for the construction and look-up of dependencies. // Provide the wiring information in a module var myModule = angular.module('myModule', []); // Teach the injector how to build a greeter service. // greeter is dependent on the $window service. myModule.factory('greeter', function($window) { return { greet: function(text) { $window.alert(text); } }; }); We can then create a new injector that provides components defined in the myModule module, including the greeter service. 
var injector = angular.injector(['myModule', 'ng']); var greeter = injector.get('greeter'); To avoid the service locator antipattern, AngularJS allows declarative notation in HTML templates which delegates creating components to the injector. <div ng-controller="MyController"> <button ng-click="sayHello()">Hello</button> </div> function MyController($scope, greeter) { $scope.sayHello = function() { greeter.greet('Hello World'); }; } The ng-controller directive triggers the injector to create an instance of the controller and its dependencies. C# This sample provides an example of constructor injection in C#. using System; namespace DependencyInjection; // Our client will only know about this interface, not which specific gamepad it is using. interface IGamepadFunctionality { string GetGamepadName(); void SetVibrationPower(float power); } // The following services provide concrete implementations of the above interface. class XboxGamepad : IGamepadFunctionality { float vibrationPower = 1.0f; public string GetGamepadName() => "Xbox controller"; public void SetVibrationPower(float power) => this.vibrationPower = Math.Clamp(power, 0.0f, 1.0f); } class PlaystationJoystick : IGamepadFunctionality { float vibratingPower = 100.0f; public string GetGamepadName() => "PlayStation controller"; public void SetVibrationPower(float power) => this.vibratingPower = Math.Clamp(power * 100.0f, 0.0f, 100.0f); } class SteamController : IGamepadFunctionality { double vibrating = 1.0; public string GetGamepadName() => "Steam controller"; public void SetVibrationPower(float power) => this.vibrating = Convert.ToDouble(Math.Clamp(power, 0.0f, 1.0f)); } // This class is the client which receives a service. class Gamepad { IGamepadFunctionality gamepadFunctionality; // The service is injected through the constructor and stored in the above field. public Gamepad(IGamepadFunctionality gamepadFunctionality) => this.gamepadFunctionality = gamepadFunctionality; public void Showcase() { // The injected service is used. var gamepadName = this.gamepadFunctionality.GetGamepadName(); var message = $"We're using the {gamepadName} right now, do you want to change the vibrating power?"; Console.WriteLine(message); } } class Program { static void Main() { var steamController = new SteamController(); // We could have also passed in an XboxController, PlaystationJoystick, etc. // The gamepad doesn't know what it's using and doesn't need to. var gamepad = new Gamepad(steamController); gamepad.Showcase(); } } Go Go does not support classes and usually dependency injection is either abstracted by a dedicated library that utilizes reflection or generics (the latter being supported since Go 1.18). A simpler example without using dependency injection libraries is illustrated by the following example of an MVC web application. 
First, pass the necessary dependencies to a router and then from the router to the controllers: package router import ( "database/sql" "net/http" "example/controllers/users" "github.com/go-chi/chi/v5" "github.com/redis/go-redis/v9" "github.com/rs/zerolog" ) type RoutingHandler struct { // passing the values by pointer further down the call stack // means we won't create a new copy, saving memory log *zerolog.Logger db *sql.DB cache *redis.Client router chi.Router } // connection, logger and cache initialized usually in the main function func NewRouter( log *zerolog.Logger, db *sql.DB, cache *redis.Client, ) *RoutingHandler { rtr := chi.NewRouter() return &RoutingHandler{ log: log, db: db, cache: cache, router: rtr, } } func (r *RoutingHandler) SetupUsersRoutes() { uc := users.NewController(r.log, r.db, r.cache) r.router.Get("/users/{name}", func(w http.ResponseWriter, r *http.Request) { uc.Get(w, r) }) } Then, you can access the private fields of the struct in any method that is its pointer receiver, without violating encapsulation. package users import ( "database/sql" "net/http" "example/models" "github.com/go-chi/chi/v5" "github.com/redis/go-redis/v9" "github.com/rs/zerolog" ) type Controller struct { log *zerolog.Logger storage *models.UserStorage cache *redis.Client } func NewController(log *zerolog.Logger, db *sql.DB, cache *redis.Client) *Controller { return &Controller{ log: log, storage: models.NewUserStorage(db), cache: cache, } } func (uc *Controller) Get(w http.ResponseWriter, r *http.Request) { // note that we can also wrap logging in a middleware, this is for demonstration purposes uc.log.Info().Msg("Getting user") userParam := chi.URLParam(r, "name") var user *models.User // get the user from the cache err := uc.cache.Get(r.Context(), userParam).Scan(&user) if err != nil { uc.log.Error().Err(err).Msg("Error getting user from cache. Retrieving from SQL storage") } user, err = uc.storage.Get(r.Context(), userParam) if err != nil { uc.log.Error().Err(err).Msg("Error getting user from SQL storage") http.Error(w, "Internal server error", http.StatusInternalServerError) return } } Finally, you can use the database connection initialized in your main function at the data access layer: package models import ( "context" "database/sql" "time" ) type ( UserStorage struct { conn *sql.DB } User struct { Name string `json:"name" db:"name,primarykey"` JoinedAt time.Time `json:"joined_at" db:"joined_at"` Email string `json:"email" db:"email"` } ) func NewUserStorage(conn *sql.DB) *UserStorage { return &UserStorage{ conn: conn, } } func (us *UserStorage) Get(ctx context.Context, name string) (*User, error) { // assuming 'name' is a unique key query := "SELECT name, joined_at, email FROM users WHERE name = $1" user := &User{} if err := us.conn.QueryRowContext(ctx, query, name).Scan(&user.Name, &user.JoinedAt, &user.Email); err != nil { return nil, err } return user, nil } See also Architecture description language Factory pattern Inversion of control Plug-in (computing) Strategy pattern Service locator pattern Parameter (computer programming) Quaject References External links Composition Root by Mark Seemann A beginners guide to Dependency Injection Dependency Injection & Testable Objects: Designing loosely coupled and testable objects - Jeremy Weiskotten; Dr. Dobb's Journal, May 2006. Design Patterns: Dependency Injection -- MSDN Magazine, September 2005 Martin Fowler's original article that introduced the term Dependency Injection P of EAA: Plugin - Andrew McVeigh - A detailed history of dependency injection. What is Dependency Injection? 
- An alternative explanation - Jakob Jenkov Writing More Testable Code with Dependency Injection -- Developer.com, October 2006 Managed Extensibility Framework Overview -- MSDN Old fashioned description of the Dependency Mechanism by Hunt 1998 Refactor Your Way to a Dependency Injection Container Understanding DI in PHP You Don't Need a Dependency Injection Container Component-based software engineering Software architecture Software design patterns Articles with example Java code
Dependency injection
[ "Technology" ]
4,741
[ "Component-based software engineering", "Components" ]
1,484,951
https://en.wikipedia.org/wiki/Selected-ion%20flow-tube%20mass%20spectrometry
Selected-ion flow-tube mass spectrometry (SIFT-MS) is a quantitative mass spectrometry technique for trace gas analysis which involves the chemical ionization of trace volatile compounds by selected positive precursor ions during a well-defined time period along a flow tube. Absolute concentrations of trace compounds present in air, breath or the headspace of bottled liquid samples can be calculated in real time from the ratio of the precursor and product ion signal ratios, without the need for sample preparation or calibration with standard mixtures. The detection limit of commercially available SIFT-MS instruments extends to the single digit pptv range. The instrument is an extension of the selected ion flow tube, SIFT, technique, which was first described in 1976 by Adams and Smith. It is a fast flow tube/ion swarm method to react positive or negative ions with atoms and molecules under truly thermalised conditions over a wide range of temperatures. It has been used extensively to study ion-molecule reaction kinetics. Its application to ionospheric and interstellar ion chemistry over a 20-year period has been crucial to the advancement and understanding of these topics. SIFT-MS was initially developed for use in human breath analysis, and has shown great promise as a non-invasive tool for physiological monitoring and disease diagnosis. It has since shown potential for use across a wide variety of fields, particularly in the life sciences, such as agriculture and animal husbandry, environmental research and food technology. SIFT-MS has been popularised as a technology which is sold and marketed by Syft Technologies based in Christchurch, New Zealand. The SIFT technique, which is the basis of SIFT-MS, was conceived and developed in the 1970s at the University of Birmingham, England, by Nigel Adams and David Smith. Instrumentation In the selected ion flow tube mass spectrometer, SIFT-MS, ions are generated in a microwave plasma ion source, usually from a mixture of laboratory air and water vapor. From the formed plasma, a single ionic species is selected using a quadrupole mass filter to act as "precursor ions" (also frequently referred to as primary or reagent ions in SIFT-MS and other processes involving chemical ionization). In SIFT-MS analyses, H3O+, NO+ and O2+ are used as precursor ions, and these have been chosen because they are known not to react significantly with the major components of air (nitrogen, oxygen, etc.), but can react with many of the very low level (trace) gases. The selected precursor ions are injected into a flowing carrier gas (usually helium at a pressure of 1 Torr) via a Venturi orifice (~1 mm diameter) where they travel along the reaction flow tube by convection. Concurrently, the neutral analyte molecules of a sample vapor enter the flow tube, via a heated sampling tube, where they meet the precursor ions and may undergo chemical ionization, depending on their chemical properties, such as their proton affinity or ionization energy. The newly formed "product ions" flow into the mass spectrometer chamber, which contains a second quadrupole mass filter, and an electron multiplier detector, which are used to separate the ions by their mass-to-charge ratios (m/z) and measure the count rates of the ions in the desired m/z range. Analysis The concentrations of individual compounds can be derived largely using the count rates of the precursor and product ions, and the reaction rate coefficients, k. 
Exothermic proton transfer reactions with H3O+ are assumed to proceed at the collisional rate (see Collision theory), the coefficient for which, kc, is calculable using the method described by Su and Chesnavich, providing the polarizability and dipole moment are known for the reactant molecule. NO+ and O2+ reactions proceed at kc less frequently, and thus the reaction rates of the reactant molecule with these precursor ions must often be derived experimentally by comparing the decline in the count rates of each of the NO+ and O2+ precursor ions to that of H3O+ as the sample flow of reactant molecules is increased. The product ions and rate coefficients have been derived in this way for well over 200 volatile compounds, which can be found in the scientific literature. The instrument can be programmed either to scan across a range of masses to produce a mass spectrum (Full Scan, FS, mode), or to rapidly switch between only the m/z values of interest (Multiple Ion Monitoring, MIM, mode). Due to the different chemical properties of the aforementioned precursor ions (H3O+, NO+, and O2+), different FS mode spectra can be produced for a vapor sample, and these can give different information relating to the composition of the sample. Using this information, it is often possible to identify the trace compound(s) that are present. The MIM mode, on the other hand will usually employ a much longer dwell time on each ion, and as a result, accurate quantification is possible to the parts per billion (ppb) level. SIFT-MS utilises an extremely soft ionisation process which greatly simplifies the resulting spectra and thereby facilitates the analysis of complex mixtures of gases, such as human breath. Another very soft ionization technique is secondary electrospray ionization (SESI-MS). For example, even proton-transfer-reaction mass spectrometry (PTR-MS), another soft ionisation technology that uses the H3O+ reagent ion, has been shown to give considerably more product ion fragmentation than SIFT-MS. Another key feature of SIFT-MS is the upstream mass quadrupole, which allows the use of multiple precursor ions. The ability to use three precursor ions, H3O+, NO+ and O2+, to obtain three different spectra is extremely valuable because it allows the operator to analyse a much wider variety of compounds. An example of this is methane, which cannot be analysed using H3O+ as a precursor ion (because it has a proton affinity of 543.5kJ/mol, somewhat less than that of H2O), but can be analysed using O2+. Furthermore, the parallel use of three precursor ions may allow the operator to distinguish between two or more compounds that react to produce ions of the same mass-to-charge ratio in certain spectra. For example, dimethyl sulfide (C2H6S, 62 amu) accepts a proton when it reacts with H3O+ to generate C2H7S+ product ions which appear at m/z 63 in the resulting spectrum. This may conflict with other product ions, such as the association product from the reaction with carbon dioxide, H3O+CO2, and the single hydrate of the protonated acetaldehyde ion, C2H5O+(H2O), which also appear at m/z 63, and so it may be unidentifiable in certain samples. However dimethyl sulfide reacts with NO+ by charge transfer, to produce the ion C2H6S+, which appears at m/z 62 in resulting spectra, whereas carbon dioxide does not react with NO+, and acetaldehyde donates a hydride ion, giving a single product ion at m/z 43, C2H3O+, and so dimethyl sulfide can be easily distinguished. 
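As a rough illustration of the quantification principle described above, the following sketch computes a trace-gas number density from precursor- and product-ion count rates. It assumes a single precursor ion, a single product ion, a known reaction time and no diffusion or mass-discrimination corrections; the function name and the numerical values are illustrative assumptions rather than figures taken from any particular instrument or from the literature cited below.

# Simplified SIFT-MS quantification (illustrative sketch, not an instrument algorithm):
# for low conversion of the precursor, the analyte number density [A] follows from
#   [A] ~ (product count rate / precursor count rate) / (k * t_r)
# where k is the ion-molecule rate coefficient and t_r is the reaction time.
def trace_gas_number_density(product_cps, precursor_cps, k_cm3_per_s, reaction_time_s):
    """Return an approximate analyte number density in molecules per cm^3."""
    ion_signal_ratio = product_cps / precursor_cps
    return ion_signal_ratio / (k_cm3_per_s * reaction_time_s)

# Example with made-up numbers: k ~ 3e-9 cm^3 s^-1 (a typical collisional value),
# a 5 ms reaction time and a product/precursor count-rate ratio of 1e-3.
n_A = trace_gas_number_density(1.0e3, 1.0e6, 3.0e-9, 5.0e-3)
print(f"approximate number density: {n_A:.2e} molecules per cm^3")

In practice, instruments apply further corrections (for example for precursor-ion hydrates, differential diffusion and mass discrimination), so this expression should be read only as the first-order idea behind the calculation.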
Over recent years, advances in SIFT-MS technology have vastly increased the sensitivity of these devices such that the limits of detection now extend down to the single-digit-ppt level. References Literature "The selected ion flow tube (SIFT); A technique for studying ion-neutral reactions" Adams N.G., Smith D.; International Journal of Mass Spectrometry and Ion Physics 21 (1976) pp. 349–359. "Parametrization of the ion-polar molecule collision rate constant by trajectory calculations" Su T., Chesnavich W.J.; Journal of Chemical Physics 76 (1982) pp. 5183–5186. "Selected ion flow tube mass spectrometry (SIFT-MS) for on-line trace gas analysis" Smith D., Španěl P.; Mass Spectrometry Reviews 24 (2005) pp. 661–700. "Quantification of methane in humid air and exhaled breath using selected ion flow tube mass spectrometry" Dryahina K., Smith D., Španěl P.; Rapid Communications in Mass Spectrometry 24 (2010) pp. 1296–1304. Mass spectrometry
Selected-ion flow-tube mass spectrometry
[ "Physics", "Chemistry" ]
1,794
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
1,485,104
https://en.wikipedia.org/wiki/Chemical%20ionization
Chemical ionization (CI) is a soft ionization technique used in mass spectrometry. This was first introduced by Burnaby Munson and Frank H. Field in 1966. This technique is a branch of gaseous ion-molecule chemistry. Reagent gas molecules (often methane or ammonia) are ionized by electron ionization to form reagent ions, which subsequently react with analyte molecules in the gas phase to create analyte ions for analysis by mass spectrometry. Negative chemical ionization (NCI), charge-exchange chemical ionization, atmospheric-pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) are some of the common variants of the technique. CI mass spectrometry finds general application in the identification, structure elucidation and quantitation of organic compounds as well as some utility in biochemical analysis. Samples to be analyzed must be in vapour form, or else (in the case of liquids or solids), must be vapourized before introduction into the source. Principles of operation The chemical ionization process generally imparts less energy to an analyte molecule than does electron impact (EI) ionization, resulting in less fragmentation and usually a simpler spectrum. The amount of fragmentation, and therefore the amount of structural information produced by the process can be controlled to some degree by selection of the reagent ion. In addition to some characteristic fragment ion peaks, a CI spectrum usually has an identifiable protonated molecular ion peak [M+1]+, allowing determination of the molecular mass. CI is thus useful as an alternative technique in cases where EI produces excessive fragmentation of the analyte, causing the molecular-ion peak to be weak or completely absent. Instrumentation The CI source design for a mass spectrometer is very similar to that of the EI source. To facilitate the reactions between the ions and molecules, the chamber is kept relatively gas tight at a pressure of about 1 torr. Electrons are produced externally to the source volume (at a lower pressure of 10−4 torr or below) by heating a metal filament which is made of tungsten, rhenium, or iridium. The electrons are introduced through a small aperture in the source wall at energies 200–1000 eV so that they penetrate to at least the centre of the box. In contrast to EI, the magnet and the electron trap are not needed for CI, since the electrons do not travel to the end of the chamber. Many modern sources are dual or combination EI/CI sources and can be switched from EI mode to CI mode and back in seconds. Mechanism A CI experiment involves the use of gas phase acid-base reactions in the chamber. Some common reagent gases include: methane, ammonia, water and isobutane. Inside the ion source, the reagent gas is present in large excess compared to the analyte. Electrons entering the source will mainly ionize the reagent gas because it is in large excess compared to the analyte. The primary reagent ions then undergo secondary ion/molecule reactions (as below) to produce more stable reagent ions which ultimately collide and react with the lower concentration analyte molecules to form product ions. The collisions between reagent ions and analyte molecules occur at close to thermal energies, so that the energy available to fragment the analyte ions is limited to the exothermicity of the ion-molecule reaction. For a proton transfer reaction, this is just the difference in proton affinity between the neutral reagent molecule and the neutral analyte molecule. 
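To make the proton-affinity criterion concrete, the short sketch below checks whether proton transfer from a protonated reagent to an analyte is exothermic; the proton-affinity values are approximate literature figures and the reagent/analyte choices are assumptions made for illustration only.

# Proton transfer XH+ + M -> X + MH+ is (to first order) driven by the difference
# in proton affinity (PA) between the analyte M and the neutral reagent X.
# PA values below are approximate literature figures in kJ/mol (assumed for illustration).
PROTON_AFFINITY_KJ_MOL = {
    "H2O": 691.0,      # reagent neutral behind H3O+ chemistry
    "NH3": 853.6,      # reagent neutral behind NH4+ chemistry
    "CH4": 543.5,      # methane: PA below that of water, so H3O+ cannot protonate it
    "acetone": 812.0,  # a typical easily protonated analyte
}

def proton_transfer_exothermicity(reagent_neutral, analyte):
    """Exothermicity in kJ/mol; positive means protonation of the analyte can proceed."""
    return PROTON_AFFINITY_KJ_MOL[analyte] - PROTON_AFFINITY_KJ_MOL[reagent_neutral]

for analyte in ("CH4", "acetone"):
    delta = proton_transfer_exothermicity("H2O", analyte)
    verdict = "exothermic, proton transfer expected" if delta > 0 else "endothermic, no protonation"
    print(f"H3O+ + {analyte}: {delta:+.1f} kJ/mol ({verdict})")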
This limited exothermicity results in significantly less fragmentation than does 70 eV electron ionization (EI). The following reactions are possible with methane as the reagent gas. Primary ion formation: CH4 + e− → CH4+• + 2e−. Secondary reagent ions: CH4 + CH4+• → CH5+ + CH3•; CH4 + CH3+ → C2H5+ + H2. Product ion formation: M + CH5+ → CH4 + [M + H]+ (protonation); AH + CH3+ → CH4 + A+ (H− abstraction); M + C2H5+ → [M + C2H5]+ (adduct formation); A + CH4+• → CH4 + A+• (charge exchange). If ammonia is the reagent gas: NH3 + e− → NH3+• + 2e−; NH3 + NH3+• → NH4+ + NH2•; M + NH4+ → MH+ + NH3. For isobutane as the reagent gas: C3H7+ + C4H10 → C4H9+ + C3H8; M + C4H9+ → MH+ + C4H8. Self chemical ionization is possible if the reagent ion is an ionized form of the analyte. Advantages and limitations One of the main advantages of CI over EI is the reduced fragmentation as noted above, which, for more fragile molecules, results in a peak in the mass spectrum indicative of the molecular weight of the analyte. This proves to be a particular advantage for biological applications where EI often does not yield useful molecular ions in the spectrum. The spectra given by CI are simpler than EI spectra, and CI can be more sensitive than other ionization methods, at least in part due to the reduced fragmentation, which concentrates the ion signal in fewer and therefore more intense peaks. The extent of fragmentation can be somewhat controlled by proper selection of reagent gases. Moreover, CI is often coupled to chromatographic separation techniques, thereby improving its usefulness in identification of compounds. As with EI, the method is limited to compounds that can be vapourized in the ion source. The lower degree of fragmentation can be a disadvantage in that less structural information is provided. Additionally, the degree of fragmentation, and therefore the mass spectrum, can be sensitive to source conditions such as pressure, temperature, and the presence of impurities (such as water vapour) in the source. Because of this lack of reproducibility, libraries of CI spectra have not been generated for compound identification. Applications CI mass spectrometry is a useful tool in structure elucidation of organic compounds. This is possible with CI because formation of [M+1]+ eliminates a stable molecule, which can be used to guess the functional groups present. Besides that, CI facilitates the ability to detect the molecular ion peak, due to less extensive fragmentation. Chemical ionization can also be used to identify and quantify an analyte present in a sample, by coupling chromatographic separation techniques such as gas chromatography (GC), high performance liquid chromatography (HPLC) and capillary electrophoresis (CE) to CI. This allows selective ionization of an analyte from a mixture of compounds, where accurate and precise results can be obtained. Variants Negative chemical ionization Chemical ionization for gas phase analysis is either positive or negative. Almost all neutral analytes can form positive ions through the reactions described above. In order to see a response by negative chemical ionization (NCI, also NICI), the analyte must be capable of producing a negative ion (i.e., stabilizing a negative charge), for example by electron capture ionization. Because not all analytes can do this, using NCI provides a certain degree of selectivity that is not available with other, more universal ionization techniques (EI, PCI). 
NCI can be used for the analysis of compounds containing acidic groups or electronegative elements (especially halogens). Moreover, negative chemical ionization is more selective and demonstrates a higher sensitivity toward oxidizing agents and alkylating agents. Because of the high electronegativity of halogen atoms, NCI is a common choice for their analysis. This includes many groups of compounds, such as PCBs, pesticides, and fire retardants. Most of these compounds are environmental contaminants; thus much of the NCI analysis that takes place is done under the auspices of environmental analysis. In cases where very low limits of detection are needed, environmental toxic substances such as halogenated species, oxidizing agents and alkylating agents are frequently analyzed using an electron capture detector coupled to a gas chromatograph. Negative ions are formed by resonance capture of a near-thermal energy electron, dissociative capture of a low-energy electron, and via ion-molecule interactions such as proton transfer, charge transfer and hydride transfer. Compared to the other methods involving negative ion techniques, NCI is quite advantageous, as the reactivity of anions can be monitored in the absence of a solvent. Electron affinities and energies of low-lying valencies can be determined by this technique as well. Charge-exchange chemical ionization This is also similar to CI and the difference lies in the production of a radical cation with an odd number of electrons. The reagent gas molecules are bombarded with high-energy electrons and the product reagent gas ions abstract electrons from the analyte to form radical cations. The common reagent gases used for this technique are toluene, benzene, NO, Xe, Ar and He. Careful control over the selection of reagent gases, and consideration of the difference between the resonance energy of the reagent gas radical cation and the ionization energy of the analyte, can be used to control fragmentation. The reactions for charge-exchange chemical ionization (with helium as an example) are as follows: He + e− → He+• + 2e−; He+• + M → M+• + He. Atmospheric-pressure chemical ionization Chemical ionization in an atmospheric pressure electric discharge is called atmospheric pressure chemical ionization (APCI), which usually uses water as the reagent gas. An APCI source is composed of a liquid chromatography outlet nebulizing the eluent, a heated vaporizer tube, a corona discharge needle and a pinhole entrance to a 10−3 torr vacuum. The analyte is a gas or liquid spray, and ionization is accomplished using an atmospheric pressure corona discharge. This ionization method is often coupled with high performance liquid chromatography, where the mobile phase containing the eluting analyte is sprayed with high flow rates of nitrogen or helium and the aerosol spray is subjected to a corona discharge to create ions. It is applicable to relatively less polar and thermally less stable compounds. The difference between APCI and CI is that APCI functions under atmospheric pressure, where the frequency of collisions is higher. This enables the improvement in sensitivity and ionization efficiency. See also Electrospray ionization Proton-transfer-reaction mass spectrometry References Bibliography External links Using Amines as Chemical Ionization Reagents and Building Custom Manifold Ion source Mass spectrometry Scientific techniques
Chemical ionization
[ "Physics", "Chemistry" ]
2,329
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Ion source", "Mass spectrometry", "Matter" ]
1,488,320
https://en.wikipedia.org/wiki/No-communication%20theorem
In physics, the no-communication theorem (also referred to as the no-signaling principle) is a no-go theorem in quantum information theory. It asserts that during the measurement of an entangled quantum state, it is impossible for one observer to transmit information to another observer, regardless of their spatial separation. This conclusion preserves the principle of causality in quantum mechanics and ensures that information transfer does not violate special relativity by exceeding the speed of light. The theorem is significant because quantum entanglement creates correlations between distant events that might initially appear to enable faster-than-light communication. The no-communication theorem establishes conditions under which such transmission is impossible, thus resolving paradoxes like the Einstein-Podolsky-Rosen (EPR) paradox and addressing the violations of local realism observed in Bell's theorem. Specifically, it demonstrates that the failure of local realism does not imply the existence of "spooky action at a distance," a phrase originally coined by Einstein. Informal overview The no-communication theorem states that, within the context of quantum mechanics, it is not possible to transmit classical bits of information by means of carefully prepared mixed or pure states, whether entangled or not. The theorem gives only a sufficient condition: it states that if the Kraus matrices commute, then there can be no communication through the quantum entangled states, and this is applicable to all communication. From a relativity and quantum field perspective, faster-than-light or "instantaneous" communication is also disallowed. Being only a sufficient condition, there can be other reasons communication is not allowed. The basic premise entering into the theorem is that a quantum-mechanical system is prepared in an initial state with some entangled states, and that this initial state is describable as a mixed or pure state in a Hilbert space H. After a certain amount of time, the system is divided into two parts each of which contains some non-entangled states and half of the quantum entangled states, and the two parts become spatially distinct, A and B, sent to two distinct observers, Alice and Bob, who are free to perform quantum mechanical measurements on their portion of the total system (viz, A and B). The question is: is there any action that Alice can perform on A that would be detectable by Bob making an observation of B? The theorem replies 'no'. An important assumption going into the theorem is that neither Alice nor Bob is allowed, in any way, to affect the preparation of the initial state. If Alice were allowed to take part in the preparation of the initial state, it would be trivially easy for her to encode a message into it; thus neither Alice nor Bob participates in the preparation of the initial state. The theorem does not require that the initial state be somehow 'random' or 'balanced' or 'uniform': indeed, a third party preparing the initial state could easily encode messages in it, received by Alice and Bob. Simply, the theorem states that, given some initial state, prepared in some way, there is no action that Alice can take that would be detectable by Bob. The proof proceeds by defining how the total Hilbert space H can be split into two parts, HA and HB, describing the subspaces accessible to Alice and Bob. The total state of the system is described by a density matrix σ. 
The goal of the theorem is to prove that Bob cannot in any way distinguish the pre-measurement state σ from the post-measurement state P(σ). This is accomplished mathematically by comparing the trace of σ and the trace of P(σ), with the trace being taken over the subspace HA. Since the trace is only over a subspace, it is technically called a partial trace. Key to this step is that the (partial) trace adequately summarizes the system from Bob's point of view. That is, everything that Bob has access to, or could ever have access to, measure, or detect, is completely described by a partial trace over HA of the system σ. The fact that this trace never changes as Alice performs her measurements is the conclusion of the proof of the no-communication theorem. Formulation The proof of the theorem is commonly illustrated for the setup of Bell tests in which two observers Alice and Bob perform local observations on a common bipartite system, and uses the statistical machinery of quantum mechanics, namely density states and quantum operations. Alice and Bob perform measurements on system S whose underlying Hilbert space is the tensor product H = HA ⊗ HB. It is also assumed that everything is finite-dimensional to avoid convergence issues. The state of the composite system is given by a density operator on H. Any density operator σ on H is a sum of the form σ = Σi Ti ⊗ Si, where Ti and Si are operators on HA and HB respectively. For the following, it is not required to assume that Ti and Si are state projection operators: i.e. they need not necessarily be non-negative, nor have a trace of one. That is, σ can have a definition somewhat broader than that of a density matrix; the theorem still holds. Note that the theorem holds trivially for separable states. If the shared state σ is separable, it is clear that any local operation by Alice will leave Bob's system intact. Thus the point of the theorem is that no communication can be achieved via a shared entangled state. Alice performs a local measurement on her subsystem. In general, this is described by a quantum operation on the system state of the following kind: P(σ) = Σk (Vk ⊗ IB) σ (Vk† ⊗ IB), where the Vk are called Kraus matrices, which satisfy Σk Vk† Vk = IA. The identity factor IB in the expression means that Alice's measurement apparatus does not interact with Bob's subsystem. Supposing the combined system is prepared in state σ and assuming, for purposes of argument, a non-relativistic situation, immediately (with no time delay) after Alice performs her measurement, the relative state of Bob's system is given by the partial trace of the overall state with respect to Alice's system. In symbols, the relative state of Bob's system after Alice's operation is trA(P(σ)), where trA is the partial trace mapping with respect to Alice's system. One can directly calculate this state: trA(P(σ)) = Σi Σk tr(Vk Ti Vk†) Si = Σi tr(Ti Σk Vk† Vk) Si = Σi tr(Ti) Si = trA(σ). From this it is argued that, statistically, Bob cannot tell the difference between what Alice did and a random measurement (or whether she did anything at all). Some comments The no-communication theorem implies the no-cloning theorem, which states that quantum states cannot be (perfectly) copied. That is, cloning is a sufficient condition for the communication of classical information to occur. To see this, suppose that quantum states could be cloned. Assume parts of a maximally entangled Bell state are distributed to Alice and Bob. Alice could send bits to Bob in the following way: If Alice wishes to transmit a "0", she measures the spin of her electron in the z direction, collapsing Bob's state to either |z+⟩ or |z−⟩. To transmit "1", Alice does nothing to her qubit. 
Bob creates many copies of his electron's state, and measures the spin of each copy in the z direction. Bob will know that Alice has transmitted a "0" if all his measurements produce the same result; otherwise, his measurements will have outcomes |z+⟩ and |z−⟩ with equal probability. This would allow Alice and Bob to communicate classical bits between each other (possibly across space-like separations, violating causality). The version of the no-communication theorem discussed in this article assumes that the quantum system shared by Alice and Bob is a composite system, i.e. that its underlying Hilbert space is a tensor product whose first factor describes the part of the system that Alice can interact with and whose second factor describes the part of the system that Bob can interact with. In quantum field theory, this assumption can be replaced by the assumption that Alice and Bob are spacelike separated. This alternate version of the no-communication theorem shows that faster-than-light communication cannot be achieved using processes which obey the rules of quantum field theory. History In 1978, Philippe H. Eberhard's paper, Bell's Theorem and the Different Concepts of Locality, rigorously demonstrated the impossibility of faster-than-light communication through quantum systems. Eberhard introduced several mathematical concepts of locality and showed how quantum mechanics contradicts most of them while preserving causality. Further, in 1988, the paper Quantum Field Theory Cannot Provide Faster-Than-Light Communication by Eberhard and Ronald R. Ross analyzed how relativistic quantum field theory inherently forbids faster-than-light communication. This work elaborates on how misinterpretations of quantum field properties had led to claims of superluminal communication and pinpoints the mathematical principles that prevent it. Regarding communication, a quantum channel can always be used to transfer classical information by means of shared quantum states. In 2008 Matthew Hastings proved a counterexample where the minimum output entropy is not additive for all quantum channels. Therefore, by an equivalence result due to Peter Shor, the Holevo capacity is not just additive, but super-additive like the entropy, and by consequence there may be some quantum channels where you can transfer more than the classical capacity. Typically, overall communication happens at the same time via quantum and non-quantum channels, and in general time ordering and causality cannot be violated. On 24 August 2015, a team led by physicist Ronald Hanson from Delft University of Technology in the Netherlands uploaded their latest paper to the preprint website arXiv, reporting the first Bell experiment that simultaneously addressed both the detection loophole and the communication loophole. The research team used a clever technique known as "entanglement swapping," which combines the benefits of photons and matter particles. The final measurements showed coherence between the two electrons that exceeded the Bell limit, once again supporting the standard view of quantum mechanics and rejecting Einstein's hidden variable theory. Furthermore, since electrons are easily detectable, the detection loophole is no longer an issue, and the large distance between the two electrons also eliminates the communication loophole. 
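As a concrete numerical illustration of the theorem (separate from the historical results above, and assuming the NumPy library), the following sketch prepares a maximally entangled Bell pair, applies a projective z-basis measurement on Alice's qubit only, and verifies that Bob's reduced density matrix is unchanged, so the measurement is invisible to Bob.

import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a two-qubit density matrix (basis order |00>, |01>, |10>, |11>).
phi = np.zeros(4)
phi[0] = phi[3] = 1.0 / np.sqrt(2.0)
rho = np.outer(phi, phi.conj())

# Alice's z-basis measurement as Kraus operators V_k (x) I acting on the pair.
P0, P1, I2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0]), np.eye(2)
kraus = [np.kron(P0, I2), np.kron(P1, I2)]
rho_after = sum(K @ rho @ K.conj().T for K in kraus)

def reduced_state_of_bob(r):
    """Partial trace over Alice's qubit: rho_B[b, b'] = sum_a rho[(a, b), (a, b')]."""
    return np.einsum('abac->bc', r.reshape(2, 2, 2, 2))

print(reduced_state_of_bob(rho))        # identity/2: Bob's state before Alice measures
print(reduced_state_of_bob(rho_after))  # identity/2 again: unchanged by Alice's measurement
print(np.allclose(reduced_state_of_bob(rho), reduced_state_of_bob(rho_after)))  # True

The same check passes for any choice of Kraus operators acting only on Alice's side, which is precisely the content of the partial-trace calculation in the formulation above.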
See also No-broadcast theorem No-cloning theorem No-deleting theorem No-hiding theorem No-teleportation theorem References Quantum measurement Quantum information science Theorems in quantum mechanics Statistical mechanics theorems No-go theorems
No-communication theorem
[ "Physics", "Mathematics" ]
2,097
[ "Theorems in dynamical systems", "Theorems in quantum mechanics", "No-go theorems", "Equations of physics", "Quantum mechanics", "Statistical mechanics theorems", "Theorems in mathematical physics", "Quantum measurement", "Statistical mechanics", "Physics theorems" ]
2,135,870
https://en.wikipedia.org/wiki/Wu%E2%80%93Yang%20monopole
The Wu–Yang monopole was the first solution (found in 1968 by Tai Tsun Wu and Chen Ning Yang) to the Yang–Mills field equations. It describes a magnetic monopole which is pointlike and has a potential which behaves like 1/r everywhere. See also Meron Dyon Instanton Wu–Yang dictionary Notes References Gauge Fields, Classification and Equations of Motion, M. Carmeli, Kh. Huleilil and E. Leibowitz, World Scientific Publishing Gauge theories Magnetic monopoles
Wu–Yang monopole
[ "Physics", "Astronomy" ]
109
[ "Astronomical hypotheses", "Unsolved problems in physics", "Quantum mechanics", "Magnetic monopoles", "Quantum physics stubs" ]
2,137,509
https://en.wikipedia.org/wiki/Perfect%20fluid
In physics, a perfect fluid or ideal fluid is a fluid that can be completely characterized by its rest frame mass density ρm and isotropic pressure p. Usually, "perfect fluid" is reserved for relativistic models and "ideal fluid" for classical inviscid flow. Real fluids are "sticky" and contain (and conduct) heat. Perfect fluids are idealized models in which these possibilities are ignored. Specifically, perfect fluids have no shear stresses, viscosity, or heat conduction. A quark–gluon plasma and graphene are examples of nearly perfect fluids that can be studied in a laboratory. D'Alembert paradox In classical mechanics, ideal fluids are described by the Euler equations. Ideal fluids produce no drag according to d'Alembert's paradox. Relativistic formulation In space-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form Tμν = (ρm + p/c2) UμUν + p ημν, where U is the 4-velocity vector field of the fluid and ημν is the metric tensor of Minkowski spacetime. In time-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form Tμν = (ρm + p/c2) UμUν − p ημν, where U is the 4-velocity of the fluid and ημν is the metric tensor of Minkowski spacetime. This takes on a particularly simple form in the rest frame, Tμν = diag(ρe, p, p, p), where ρe = ρm c2 is the energy density and p is the pressure of the fluid. Perfect fluids admit a Lagrangian formulation, which allows the techniques used in field theory, in particular, quantization, to be applied to fluids. Perfect fluids are used in general relativity to model idealized distributions of matter, such as the interior of a star or an isotropic universe. In the latter case, the equation of state of the perfect fluid may be used in Friedmann–Lemaître–Robertson–Walker equations to describe the evolution of the universe. In general relativity, the expression for the stress–energy tensor of a perfect fluid is written as Tμν = (ρm + p/c2) UμUν + p gμν, where U is the 4-velocity vector field of the fluid and gμν is the inverse metric, written with a space-positive signature. See also Equation of state Ideal gas Fluid solutions in general relativity Potential flow References Further reading , (pbk.) Topical review. Fluid mechanics Superfluidity Physics
Perfect fluid
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
462
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Superfluidity", "Civil engineering", "Condensed matter physics", "Exotic matter", "Fluid mechanics", "Matter", "Fluid dynamics" ]
2,138,745
https://en.wikipedia.org/wiki/Jump%20and%20Smile
The Jump & Smile is a type of fairground ride which consists of gondolas arranged on a number of radial arms around a central axis. As the central axis rotates, the arms are lifted into the air using compressed air cylinders at pseudo-random intervals, providing an erratic jumping motion. Most versions of the ride have preloaded patterns in which the arms can move, leading to an eye-catching display. Notable Jump & Smile manufacturers include Sartori, Safeco, PWS, SBF Visa and Fabbri. Variants Standard A standard Jump & Smile has 12 arms, each holding one three-person gondola. Riders are secured by a simple lap bar. These rides have become extremely popular in Europe, particularly the models offered by Safeco, Sartori and PWS. Floorless A floorless Jump & Smile has 12 arms, each holding one two-person floorless gondola. Unlike the regular Jump & Smile, the arms are at a higher angle so that the gondolas have enough room not to strike the floor. The gondolas on these rides can also usually rotate freely. Riders are secured by over-the-shoulder restraints. Examples of these rides are the Fabbri Smashing Jump, the Sartori Roto Techno and the Safeco Hang-Jump. References Amusement rides
Jump and Smile
[ "Physics", "Technology" ]
266
[ "Physical systems", "Machines", "Amusement rides" ]
2,139,649
https://en.wikipedia.org/wiki/Moore%20space%20%28algebraic%20topology%29
In algebraic topology, a branch of mathematics, Moore space is the name given to a particular type of topological space that is the homology analogue of the Eilenberg–MacLane spaces of homotopy theory, in the sense that it has only one nonzero homology (rather than homotopy) group. The study of Moore spaces was initiated by John Coleman Moore in 1954. Formal definition Given an abelian group G and an integer n ≥ 1, let X be a CW complex such that Hn(X) ≅ G and H̃i(X) = 0 for i ≠ n, where Hn(X) denotes the n-th singular homology group of X and H̃i(X) is the i-th reduced homology group. Then X is said to be a Moore space. It is also sensible to require (as Moore did) that X be simply connected if n > 1. Examples The sphere Sn is a Moore space of Z for all n ≥ 1. The real projective plane RP2 is a Moore space of Z/2Z for n = 1. See also Eilenberg–MacLane space, the homotopy analog. Homology sphere References Hatcher, Allen. Algebraic topology, Cambridge University Press (2002). For further discussion of Moore spaces, see Chapter 2, Example 2.40. A free electronic version of this book is available on the author's homepage. Algebraic topology
Moore space (algebraic topology)
[ "Mathematics" ]
252
[ "Topology stubs", "Fields of abstract algebra", "Topology", "Algebraic topology" ]
2,139,794
https://en.wikipedia.org/wiki/Paratrooper%20%28ride%29
The Paratrooper, also known as the "Parachute Ride" or "Umbrella Ride", is a type of fairground ride. It is a ride where seats suspended below a wheel rotate at an angle. The seats are free to rock sideways and swing out under centrifugal force as the wheel rotates. Invariably, the seats on the Paratrooper ride have a round umbrella or other shaped canopy above them. In contrast to modern thrill rides, the Paratrooper is a ride suitable for almost all ages. Most Paratrooper rides require riders to be at least 36 inches (91.44 cm) tall to ride when accompanied by an adult, and over 48 inches (121.92 cm) to ride alone. Older Paratrooper rides have a rotating wheel which is permanently raised, which has the disadvantage that riders can only load two at a time as each seat is brought to hang vertically at the lowest point of the wheel. Some models have a lower platform that is slightly raised at the ends, which permits the loading of up to three seats at a time. Most of these rides were made by the manufacturing companies Bennett, Watkins or Hrubetz. The German manufacturer Heintz-Fahtze also made larger models of the Paratrooper under the name of the Twister. Modern Paratrooper rides use a hydraulic lifting piston to raise the wheel to its riding angle while spinning the seats. In its lowered position, all the seats hang vertically near the ground and can be loaded simultaneously. The above manufacturers also made these types, and the height requirements to ride them remain the same. Variations The Force 10 is a ride made by Tivoli Enterprises that features some of the same motion as the Paratrooper. The Star Trooper is a variant created by Dartron Industries that features seats facing both ways. The Star Trooper's initial design eventually evolved into the Cliffhanger, also made by Dartron Industries. The same seats are used in the Swift-O-Plane, and the height requirement is the same as for the Enterprise. In the 1980s, British amusement manufacturer David Ward developed the Super Trooper, whose wheel rises horizontally up a central column. Once at the top, the wheel slants up to 45 degrees in either direction. He built two 12-seat versions and a 10-seat version. In 2018, PWS Rides Ltd. acquired the plans from Ward to build a new version, with the first example due to be delivered in early 2019. References Amusement rides
Paratrooper (ride)
[ "Physics", "Technology" ]
508
[ "Physical systems", "Machines", "Amusement rides" ]
2,139,849
https://en.wikipedia.org/wiki/Mass%20ratio
In aerospace engineering, mass ratio is a measure of the efficiency of a rocket. It describes how much more massive the vehicle is with propellant than without; that is, the ratio of the rocket's wet mass (vehicle plus contents plus propellant) to its dry mass (vehicle plus contents). A more efficient rocket design requires less propellant to achieve a given goal, and would therefore have a lower mass ratio; however, for any given efficiency a higher mass ratio typically permits the vehicle to achieve higher delta-v. The mass ratio is a useful quantity for back-of-the-envelope rocketry calculations: it is an easy number to derive either from the velocity change Δv (given the exhaust velocity) or from the rocket and propellant masses, and therefore serves as a handy bridge between the two. It is also useful for getting an impression of the size of a rocket: while two rockets with mass fractions of, say, 92% and 95% may appear similar, the corresponding mass ratios of 12.5 and 20 clearly indicate that the latter system requires much more propellant. Typical multistage rockets have mass ratios in the range from 8 to 20. The Space Shuttle, for example, has a mass ratio around 16. Derivation The definition arises naturally from Tsiolkovsky's rocket equation: Δv = ve ln(m0/m1), where Δv is the desired change in the rocket's velocity, ve is the effective exhaust velocity (see specific impulse), m0 is the initial mass (rocket plus contents plus propellant), and m1 is the final mass (rocket plus contents). This equation can be rewritten in the following equivalent form: m0/m1 = e^(Δv/ve). The fraction on the left-hand side of this equation is the rocket's mass ratio by definition. This equation indicates that a Δv of n times the exhaust velocity requires a mass ratio of e^n. For instance, achieving a Δv of 2.5 times the exhaust velocity would require a mass ratio of e^2.5 (approximately 12.2). One could say that a "velocity ratio" of Δv/ve requires a mass ratio of e^(Δv/ve). Alternative definition Sutton defines the mass ratio inversely, as the ratio of final mass to initial mass (m1/m0). In this case, the values for the mass ratio are always less than 1. See also Rocket fuel Propellant mass fraction Payload fraction References Astrodynamics Mass Ratios
Mass ratio
[ "Physics", "Mathematics", "Engineering" ]
445
[ "Scalar physical quantities", "Astrodynamics", "Physical quantities", "Quantity", "Mass", "Ratios", "Size", "Arithmetic", "Aerospace engineering", "Wikipedia categories named after physical quantities", "Matter" ]
15,838,199
https://en.wikipedia.org/wiki/Thymosin%20beta-4
Thymosin beta-4 is a protein that in humans is encoded by the TMSB4X gene. Recommended INN (International Nonproprietary Name) for thymosin beta-4 is 'timbetasin', as published by the World Health Organization (WHO). The protein consists (in humans) of 43 amino acids (sequence: SDKPDMAEI EKFDKSKLKK TETQEKNPLP SKETIEQEKQ AGES) and has a molecular weight of 4921 g/mol. Thymosin-β4 is a major cellular constituent in many tissues. Its intracellular concentration may reach as high as 0.5 mM. Following Thymosin α1, β4 was the second of the biologically active peptides from Thymosin Fraction 5 to be completely sequenced and synthesized. Function This gene encodes an actin sequestering protein which plays a role in regulation of actin polymerization. The protein is also involved in cell proliferation, migration, and differentiation. This gene escapes X inactivation and has a homolog on chromosome Y (TMSB4Y). Biological activities of thymosin β4 Any concepts of the biological role of thymosin β4 must inevitably be coloured by the demonstration that total ablation of the thymosin β4 gene in the mouse allows apparently normal embryonic development of mice which are fertile as adults. Actin binding Thymosin β4 was initially perceived as a thymic hormone. However this changed when it was discovered that it forms a 1:1 complex with G (globular) actin, and is present at high concentration in a wide range of mammalian cell types. When appropriate, G-actin monomers polymerize to form F (filamentous) actin, which, together with other proteins that bind to actin, comprise cellular microfilaments. Formation by G-actin of the complex with β-thymosin (= "sequestration") opposes this. Due to its profusion in the cytosol and its ability to bind G-actin but not F-actin, thymosin β4 is regarded as the principal actin-sequestering protein in many cell types. Thymosin β4 functions like a buffer for monomeric actin as represented in the following reaction: F-actin ↔ G-actin + Thymosin β4 ↔ G-actin/Thymosin β4 Release of G-actin monomers from thymosin β4 occurs as part of the mechanism that drives actin polymerization in the normal function of the cytoskeleton in cell morphology and cell motility. The sequence LKKTET, which starts at residue 17 of the 43-aminoacid sequence of thymosin beta-4, and is strongly conserved between all β-thymosins, together with a similar sequence in WH2 domains, is frequently referred to as "the actin-binding motif" of these proteins, although modelling based on X-ray crystallography has shown that essentially the entire length of the β-thymosin sequence interacts with actin in the actin-thymosin complex. "Moonlighting" In addition to its intracellular role as the major actin-sequestering molecule in cells of many multicellular animals, thymosin β4 shows a remarkably diverse range of effects when present in the fluid surrounding animal tissue cells. Taken together, these effects suggest that thymosin has a general role in tissue regeneration. This has suggested a variety of possible therapeutic applications, and several have now been extended to animal models and human clinical trials. It is considered unlikely that thymosin β4 exerts all these effects via intracellular sequestration of G-actin. This would require its uptake by cells, and moreover, in most cases the cells affected already have substantial intracellular concentrations. 
The diverse activities related to tissue repair may depend on interactions with receptors quite distinct from actin and possessing extracellular ligand-binding domains. Such multi-tasking by, or "partner promiscuity" of, proteins has been referred to as protein moonlighting. Proteins such as thymosins which lack stable folded structure in aqueous solution, are known as intrinsically unstructured proteins (IUPs). Because IUPs acquire specific folded structures only on binding to their partner proteins, they offer special possibilities for interaction with multiple partners. A candidate extracellular receptor of high affinity for thymosin β4 is the β subunit of cell surface-located ATP synthase, which would allow extracellular thymosin to signal via a purinergic receptor. Some of the multiple activities of thymosin β4 unrelated to actin may be mediated by a tetrapeptide enzymically cleaved from its N-terminus, N-acetyl-ser-asp-lys-pro, brand names Seraspenide or Goralatide, best known as an inhibitor of the proliferation of haematopoietic (blood-cell precursor) stem cells of bone marrow. Tissue regeneration Work with cell cultures and experiments with animals have shown that administration of thymosin β4 can promote migration of cells, formation of blood vessels, maturation of stem cells, survival of various cell types and lowering of the production of pro-inflammatory cytokines. These multiple properties have provided the impetus for a worldwide series of on-going clinical trials of potential effectiveness of thymosin β4 in promoting repair of wounds in skin, cornea and heart. Such tissue-regenerating properties of thymosin β4 may ultimately contribute to repair of human heart muscle damaged by heart disease and heart attack. In mice, administration of thymosin β4 has been shown to stimulate formation of new heart muscle cells from otherwise inactive precursor cells present in the outer lining of adult hearts, to induce migration of these cells into heart muscle and recruit new blood vessels within the muscle. Anti-inflammatory role for sulfoxide In 1999 researchers in Glasgow University found that an oxidised derivative of thymosin β4 (the sulfoxide, in which an oxygen atom is added to the methionine near the N-terminus) exerted several potentially anti-inflammatory effects on neutrophil leucocytes. It promoted their dispersion from a focus, inhibited their response to a small peptide (F-Met-Leu-Phe) which attracts them to sites of bacterial infection and lowered their adhesion to endothelial cells. (Adhesion to endothelial cells of blood vessel walls is pre-requisite for these cells to leave the bloodstream and invade infected tissue). A possible anti-inflammatory role for the β4 sulfoxide was supported by the group's finding that it counteracted artificially-induced inflammation in mice. The group had first identified the thymosin sulfoxide as an active factor in culture fluid of cells responding to treatment with a steroid hormone, suggesting that its formation might form part of the mechanism by which steroids exert anti-inflammatory effects. Extracellular thymosin β4 would be readily oxidised to the sulfoxide in vivo at sites of inflammation, by the respiratory burst. Terminal deoxynucleotidyl transferase Thymosin β4 induces the activity of the enzyme terminal deoxynucleotidyl transferase in populations of thymocytes (thymus-derived lymphocytes). This suggests that the peptide may contribute to the maturation of these cells. 
Clinical significance Tβ4 has been studied in a number of clinical trials. In phase 2 trials with patients having pressure ulcers, venous pressure ulcers, and epidermolysis bullosa, Tβ4 accelerated the rate of repair. It was also found to be safe and well tolerated. In human clinical trials, Tβ4 improves the conditions of dry eye and neurotrophic keratopathy with effects lasting long after the end of treatment. Doping in sports Thymosin beta-4 is considered a performance enhancing substance and is banned in sports by the World Anti-Doping Agency due to its effect of aiding soft tissue recovery and enabling higher training loads. It was central to two controversies in Australia in the 2010s which saw a large proportion of the playing lists from two professional football clubs – the Cronulla-Sutherland Sharks of the National Rugby League and the Essendon Football Club of the Australian Football League – found guilty of doping and suspended from playing; in both cases, the players were administered thymosin beta-4 in a program organised by sports scientist Stephen Dank. Interactions TMSB4X has been shown to interact with ACTA1 and ACTG1. See also Beta thymosins Thymosin beta-4, Y-chromosomal Thymosins References Further reading Peptides
Thymosin beta-4
[ "Chemistry" ]
1,829
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
15,842,342
https://en.wikipedia.org/wiki/Laboratory%20diagnosis%20of%20viral%20infections
In the diagnostic laboratory, virus infections can be confirmed by a myriad of methods. Diagnostic virology has changed rapidly due to the advent of molecular techniques and increased clinical sensitivity of serological assays. Sampling A wide variety of samples can be used for virological testing. The type of sample sent to the laboratory often depends on the type of viral infection being diagnosed and the test required. Proper sampling technique is essential to avoid potential pre-analytical errors. For example, different types of samples must be collected in appropriate tubes to maintain the integrity of the sample and stored at appropriate temperatures (usually 4 °C) to preserve the virus and prevent bacterial or fungal growth. Sometimes multiple sites may also be sampled. Types of samples include the following: Nasopharyngeal swab Blood Skin Sputum, gargles and bronchial washings Urine Semen Faeces Cerebrospinal fluid Tissues (biopsies or post-mortem) Dried blood spots For example, a nasal mucus test may be done to diagnose rhinovirus. Virus isolation Viruses are often isolated from the initial patient sample. This allows the virus sample to be grown into larger quantities and allows a larger number of tests to be run on them. This is particularly important for samples that contain new or rare viruses for which diagnostic tests are not yet developed. Many viruses can be grown in cell culture in the lab. To do this, the virus sample is mixed with cells, a process called adsorption, after which the cells become infected and produce more copies of the virus. Although different viruses often only grow in certain types of cells, there are cells that support the growth of a large variety of viruses and are a good starting point, for example, the African monkey kidney cell line (Vero cells), human lung fibroblasts (MRC-5), and human epidermoid carcinoma cells (HEp-2). One means of determining whether the cells are successfully replicating the virus is to check for a change in cell morphology or for the presence of cell death using a microscope. Other viruses may require alternative methods for growth such as the inoculation of embryonated chicken eggs (e.g. avian influenza viruses) or the intracranial inoculation of virus using newborn mice (e.g. lyssaviruses). Nucleic acid based methods Molecular techniques are the most specific and sensitive diagnostic tests. They are capable of detecting either the whole viral genome or parts of the viral genome. In the past, nucleic acid tests were mainly used as secondary tests to confirm positive serological results. However, as they become cheaper and more automated, they are increasingly becoming the primary tool for diagnostics and can also be used for monitoring the treatment of virally infected individuals. Polymerase chain reaction Detection of viral RNA and DNA genomes can be performed using polymerase chain reaction. This technique makes many copies of the virus genome using virus-specific primers. Variations of PCR such as nested reverse transcriptase PCR and real time PCR can also be used to determine viral loads in patient serum. This is often used to monitor treatment success in HIV cases. Sequencing Sequencing is the only diagnostic method that will provide the full sequence of a virus genome. Hence, it provides the most information about very small differences between two viruses that would look the same using other diagnostic tests. Currently it is only used when this depth of information is required.
For example, sequencing is useful when specific mutations in the patient are tested for in order to determine antiviral therapy and susceptibility to infection. However, as the tests are getting cheaper, faster and more automated, sequencing will likely become the primary diagnostic tool in the future. Microscopy based methods Immunofluorescence or immunoperoxidase Immunofluorescence or immunoperoxidase assays are commonly used to detect whether a virus is present in a tissue sample. These tests are based on the principle that if the tissue is infected with a virus, an antibody specific to that virus will be able to bind to it. To do this, antibodies that are specific to different types of viruses are mixed with the tissue sample. Afterwards, the tissue is exposed to a specific wavelength of light or a chemical that allows the antibody to be visualized. These tests require specialized antibodies that are produced and purchased from commercial companies. These commercial antibodies are usually well characterized and are known to bind to only one specific type of virus. They are also conjugated to a special kind of tag that allows the antibody to be visualized in the lab, i.e. so that it will emit fluorescence or a color. Hence, immunofluorescence refers to the detection of a fluorescent antibody (immuno) and immunoperoxidase refers to the detection of a colored antibody (peroxidase produces a dark brown color). Electron microscopy Electron microscopy is a method that can take a picture of a whole virus and can reveal its shape and structure. It is not typically used as a routine diagnostic test as it requires a highly specialized type of sample preparation, microscope and technical expertise. However, electron microscopy is highly versatile due to its ability to analyze any type of sample and identify any type of virus. Therefore, it remains the gold standard for identifying viruses that do not show up on routine diagnostic tests or for which routine tests present conflicting results. Host antibody detection A person who has recently been infected by a virus will produce antibodies in their bloodstream that specifically recognize that virus. This is called humoral immunity. Two types of antibodies are important. The first, called IgM, is highly effective at neutralizing viruses but is only produced by the cells of the immune system for a few weeks. The second, called IgG, is produced indefinitely. Therefore, the presence of IgM in the blood of the host is used to test for acute infection, whereas IgG indicates an infection sometime in the past. Both types of antibodies are measured when tests for immunity are carried out. Antibody testing has become widely available. It can be done for individual viruses (e.g. using an ELISA assay) but automated panels that can screen for many viruses at once are becoming increasingly common. Hemagglutination assay Some viruses attach to molecules present on the surface of red blood cells, for example, influenza virus. A consequence of this is that – at certain concentrations – a viral suspension may bind together (agglutinate) the red blood cells, thus preventing them from settling out of suspension. See also Serology Molecular diagnostics References Diagnostic virology Laboratory medicine techniques Viral diseases
Laboratory diagnosis of viral infections
[ "Chemistry" ]
1,349
[ "Laboratory medicine techniques" ]
15,846,772
https://en.wikipedia.org/wiki/Nucleofection
Nucleofection is an electroporation-based transfection method which enables transfer of nucleic acids such as DNA and RNA into cells by applying a specific voltage and reagents. Nucleofection, also referred to as nucleofector technology, was invented by the biotechnology company Amaxa. "Nucleofector" and "nucleofection" are trademarks owned by Lonza Cologne AG, part of the Lonza Group. Applications Nucleofection is a method for transferring substrates into mammalian cells that have so far been considered difficult or even impossible to transfect. Examples of such substrates are nucleic acids, like the DNA of an isolated gene cloned into a plasmid, or small interfering RNA (siRNA) for knocking down expression of a specific endogenous gene. Primary cells, for example stem cells, especially fall into this category, although many other cell lines are also difficult to transfect. Primary cells are freshly isolated from body tissue and thus the cells are unchanged, closely resembling the in-vivo situation, and are therefore of particular relevance for medical research purposes. In contrast, cell lines have often been cultured for decades and may significantly differ from their origin. Mechanism Based on the physical method of electroporation, nucleofection uses a combination of electrical parameters, generated by a device called Nucleofector, with cell-type specific reagents. The substrate is transferred directly into the cell nucleus and the cytoplasm. In contrast, other commonly used non-viral transfection methods rely on cell division for the transfer of DNA into the nucleus. Thus, nucleofection provides the ability to transfect even non-dividing cells, such as neurons and resting blood cells. Before the introduction of the Nucleofector Technology, efficient gene transfer into primary cells had been restricted to the use of viral vectors, which typically involve disadvantages such as safety risks, lack of reliability, and high cost. The non-viral gene transfer methods available were not suitable for the efficient transfection of primary cells. Non-viral delivery methods may require cell division for completion of transfection, since the DNA enters the nucleus during breakdown of the nuclear envelope upon cell division or by a specific localization sequence. Optimal nucleofection conditions depend upon the individual cell type, not on the substrate being transfected. This means that identical conditions are used for the nucleofection of DNA, RNA, siRNAs, shRNAs, mRNAs and pre-mRNAs, BACs, peptides, morpholinos, PNA, or other biologically active molecules. See also Electroporation References Molecular biology Genetic engineering
Nucleofection
[ "Chemistry", "Engineering", "Biology" ]
549
[ "Biochemistry", "Biological engineering", "Genetic engineering", "Molecular biology" ]
15,848,842
https://en.wikipedia.org/wiki/Freddy%20II
Freddy (1969–1971) and Freddy II (1973–1976) were experimental robots built in the Department of Machine Intelligence and Perception (later Department of Artificial Intelligence, now part of the School of Informatics at the University of Edinburgh). Technology Technical innovations involving Freddy were at the forefront of the 70s robotics field. Freddy was one of the earliest robots to integrate vision, manipulation and intelligent systems, and the system was versatile and easy to retrain and reprogram for new tasks. The idea of moving the table instead of the arm simplified the construction. Freddy also used a method of recognising the parts visually by using graph matching on the detected features. The system used an innovative collection of high level procedures for programming the arm movements which could be reused for each new task. Lighthill controversy In the mid-1970s, there was controversy about the utility of pursuing a general purpose robotics programme in both the USA and the UK. A BBC TV programme in 1973, referred to as the "Lighthill Debate", pitched James Lighthill, who had written a critical report for the science and engineering research funding agencies in the UK, against Donald Michie from the University of Edinburgh and John McCarthy from Stanford University. The Edinburgh Freddy II and Stanford/SRI Shakey robots were used to illustrate the state-of-the-art at the time in intelligent robotics systems. Freddy I and II Freddy Mark I (1969–1971) was an experimental prototype, with 3 degrees-of-freedom created by a rotating platform driven by a pair of independent wheels. The other main components were a video camera and bump sensors connected to a computer. The computer moved the platform so that the camera could see and then recognise the objects. Freddy II (1973–1976) was a 5 degrees of freedom manipulator with a large vertical 'hand' that could move up and down, rotate about the vertical axis and rotate objects held in its gripper around one horizontal axis. Two remaining translational degrees of freedom were generated by a work surface that moved beneath the gripper. The gripper was a two finger pinch gripper. A video camera was added, and later a light stripe generator. The Freddy and Freddy II projects were initiated and overseen by Donald Michie. The mechanical hardware and analogue electronics were designed and built by Stephen Salter (who also pioneered renewable energy from waves (see Salter's Duck)), and the digital electronics and computer interfacing were designed by Harry Barrow and Gregan Crawford. The software was developed by a team led by Rod Burstall, Robin Popplestone and Harry Barrow which used the POP-2 programming language, one of the world's first functional programming languages. The computing hardware was an Elliot 4130 computer with 384KB (128K 24-bit words) RAM and a hard disk linked to a small Honeywell H316 computer with 16KB of RAM which directly performed sensing and control. Freddy was a versatile system which could be trained and reprogrammed to perform a new task in a day or two. The tasks included putting rings on pegs and assembling simple model toys consisting of wooden blocks of different shapes, a boat with a mast and a car with axles and wheels. Information about part locations was obtained using the video camera, and then matched to previously stored models of the parts.
It was soon realised in the Freddy project that the 'move here, do this, move there' style of robot behavior programming (actuator or joint level programming) was tedious and also did not allow the robot to cope with variations in part position, part shape and sensor noise. Consequently, the RAPT robot programming language was developed by Pat Ambler and Robin Popplestone, in which robot behavior was specified at the object level. This meant that robot goals were specified in terms of desired position relationships between the robot, objects and the scene, leaving the details of how to achieve the goals to the underlying software system. Although developed in the 1970s, RAPT is still considerably more advanced than most commercial robot programming languages. The team of people who contributed to the project were leaders in the field at the time and included Pat Ambler, Harry Barrow, Ilona Bellos, Chris Brown, Rod Burstall, Gregan Crawford, Jim Howe, Donald Michie, Robin Popplestone, Stephen Salter, Austin Tate and Ken Turner. Also of interest in the project was the use of a structured-light 3D scanner to obtain the 3D shape and position of the parts being manipulated. The Freddy II robot is currently on display at the Royal Museum in Edinburgh, Scotland, with a segment of the assembly video shown in a continuous loop. References External links Edinburgh's Artificial Intelligence Applications Institute page on Freddy for more information. Freddy II A video (167 Mb WMV) from 1973 of Freddy II in action assembling a model car and ship simultaneously. Harry Barrow is the narrator. Pat Ambler, Harry Barrow, and Robin Popplestone appear briefly in the video. historical overview of the Freddy project by Pat Ambler. Record of experiences Harry Barrow writes on interfacing Freddy I to a computer. Presentation slides: Freddy is mentioned in Aaron Sloman's slides (slide 23) (PDF) for the public symposium on 50 years of AI at the University of Bremen in June 2006. BBC Robotics Timeline list includes Freddy II. History of artificial intelligence History of computing in the United Kingdom Historical robots 1970s robots Robotic manipulators Robots of the United Kingdom Science and technology in Edinburgh University of Edinburgh School of Informatics 1973 robots
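To make the joint-level versus object-level distinction above concrete, here is a small, purely illustrative Python sketch; the relation names and the stub planner are hypothetical and are not actual RAPT syntax:

    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float
        y: float
        z: float

    # Joint/actuator-level style ("move here, do this, move there"):
    # every motion and gripper action is spelled out explicitly.
    def joint_level_program(move, gripper):
        move(Pose(120, 45, 10))
        gripper("close")
        move(Pose(300, 45, 10))
        gripper("open")

    # Object-level style (RAPT-like in spirit, hypothetical syntax): only the
    # desired spatial relationships between objects are stated; a planner
    # is left to work out the motions that achieve them.
    goal = [("fits", "ring", "peg"), ("against", "peg", "table_top")]

    def stub_planner(goal):
        # A real planner would search for motions satisfying each relation;
        # this stub just reports what it has been asked to achieve.
        return [f"achieve '{rel}' between {a} and {b}" for rel, a, b in goal]

    print(stub_planner(goal))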
Freddy II
[ "Technology" ]
1,135
[ "History of computing", "History of computing in the United Kingdom" ]
958,117
https://en.wikipedia.org/wiki/Nitrogen%20assimilation
Nitrogen assimilation is the formation of organic nitrogen compounds like amino acids from inorganic nitrogen compounds present in the environment. Organisms like plants, fungi and certain bacteria that cannot fix nitrogen gas (N2) depend on the ability to assimilate nitrate or ammonia for their needs. Other organisms, like animals, depend entirely on organic nitrogen from their food. Nitrogen assimilation in plants Plants absorb nitrogen from the soil in the form of nitrate (NO3−) and ammonium (NH4+). In aerobic soils where nitrification can occur, nitrate is usually the predominant form of available nitrogen that is absorbed. However, this is not always the case, as ammonia can predominate in grasslands and in flooded, anaerobic soils like rice paddies. Plant roots themselves can affect the abundance of various forms of nitrogen by changing the pH and secreting organic compounds or oxygen. This influences microbial activities like the inter-conversion of various nitrogen species, the release of ammonia from organic matter in the soil and the fixation of nitrogen by non-nodule-forming bacteria. Ammonium ions are absorbed by the plant via ammonia transporters. Nitrate is taken up by several nitrate transporters that use a proton gradient to power the transport. Nitrogen is transported from the root to the shoot via the xylem in the form of nitrate, dissolved ammonia and amino acids. Usually (but not always) most of the nitrate reduction is carried out in the shoots while the roots reduce only a small fraction of the absorbed nitrate to ammonia. Ammonia (both absorbed and synthesized) is incorporated into amino acids via the glutamine synthetase-glutamate synthase (GS-GOGAT) pathway. While nearly all the ammonia in the root is usually incorporated into amino acids at the root itself, plants may transport significant amounts of ammonium ions in the xylem to be fixed in the shoots. This may help avoid the transport of organic compounds down to the roots just to carry the nitrogen back as amino acids. Nitrate reduction is carried out in two steps. Nitrate is first reduced to nitrite (NO2−) in the cytosol by nitrate reductase using NADH or NADPH. Nitrite is then reduced to ammonia in the chloroplasts (plastids in roots) by a ferredoxin dependent nitrite reductase. In photosynthesizing tissues, it uses an isoform of ferredoxin (Fd1) that is reduced by PSI while in the root it uses a form of ferredoxin (Fd3) that has a less negative midpoint potential and can be reduced easily by NADPH. In non photosynthesizing tissues, NADPH is generated by glycolysis and the pentose phosphate pathway. In the chloroplasts, glutamine synthetase incorporates this ammonia as the amide group of glutamine using glutamate as a substrate. Glutamate synthase (Fd-GOGAT and NADH-GOGAT) transfers the amide group onto a 2-oxoglutarate molecule producing two glutamates. Further transaminations are carried out to make other amino acids (most commonly asparagine) from glutamine. While the enzyme glutamate dehydrogenase (GDH) does not play a direct role in the assimilation, it protects the mitochondrial functions during periods of high nitrogen metabolism and takes part in nitrogen remobilization. pH and Ionic balance during nitrogen assimilation Every nitrate ion reduced to ammonia produces one OH− ion. To maintain a pH balance, the plant must either excrete it into the surrounding medium or neutralize it with organic acids. This results in the medium around the plant's roots becoming alkaline when they take up nitrate.
To maintain ionic balance, every NO3− taken into the root must be accompanied by either the uptake of a cation or the excretion of an anion. Plants like tomatoes take up metal ions like K+, Na+, Ca2+ and Mg2+ to exactly match every nitrate taken up and store these as the salts of organic acids like malate and oxalate. Other plants like the soybean balance most of their NO3− intake with the excretion of OH− or HCO3−. Plants that reduce nitrates in the shoots and excrete alkali from their roots need to transport the alkali in an inert form from the shoots to the roots. To achieve this they synthesize malic acid in the leaves from neutral precursors like carbohydrates. The potassium ions brought to the leaves along with the nitrate in the xylem are then sent along with the malate to the roots via the phloem. In the roots, the malate is consumed. When malate is converted back to malic acid prior to use, an OH− is released and excreted. (RCOO− + H2O -> RCOOH +OH−) The potassium ions are then recirculated up the xylem with fresh nitrate. Thus the plants avoid having to absorb and store excess salts and also transport the OH−. Plants like castor reduce a lot of nitrate in the root itself, and excrete the resulting base. Some of the base produced in the shoots is transported to the roots as salts of organic acids while a small amount of the carboxylates are just stored in the shoot itself. Nitrogen use efficiency Nitrogen use efficiency (NUE) is the proportion of nitrogen present that a plant absorbs and uses. Improving nitrogen use efficiency and thus fertilizer efficiency is important to make agriculture more sustainable, by reducing pollution (fertilizer runoff) and production cost and increasing yield. Worldwide, crops generally have less than 50% NUE. Better fertilizers, improved crop management, selective breeding, and genetic engineering can increase NUE. Nitrogen use efficiency can be measured at various levels: the crop plant, the soil, by fertilizer input, by ecosystem productivity, etc. At the level of photosynthesis in leaves, it is termed photosynthetic nitrogen use efficiency (PNUE). References Assimilation Assimilation Metabolism Plant physiology
Nitrogen assimilation
[ "Chemistry", "Biology" ]
1,276
[ "Plant physiology", "Plants", "Nitrogen cycle", "Cellular processes", "Biochemistry", "Metabolism" ]
960,361
https://en.wikipedia.org/wiki/Transduction%20%28machine%20learning%29
In logic, statistical inference, and supervised learning, transduction or transductive inference is reasoning from observed, specific (training) cases to specific (test) cases. In contrast, induction is reasoning from observed training cases to general rules, which are then applied to the test cases. The distinction is most interesting in cases where the predictions of the transductive model are not achievable by any inductive model. Note that this is caused by transductive inference on different test sets producing mutually inconsistent predictions. Transduction was introduced in a computer science context by Vladimir Vapnik in the 1990s, motivated by his view that transduction is preferable to induction since, according to him, induction requires solving a more general problem (inferring a function) before solving a more specific problem (computing outputs for new cases): "When solving a problem of interest, do not solve a more general problem as an intermediate step. Try to get the answer that you really need but not a more general one." An example of learning which is not inductive would be in the case of binary classification, where the inputs tend to cluster in two groups. A large set of test inputs may help in finding the clusters, thus providing useful information about the classification labels. The same predictions would not be obtainable from a model which induces a function based only on the training cases. Some people may call this an example of the closely related semi-supervised learning, although Vapnik's motivation is quite different. The most well-known example of a case-based learning algorithm is the k-nearest neighbor algorithm, which is related to transductive learning algorithms. Another example of an algorithm in this category is the Transductive Support Vector Machine (TSVM). A third possible motivation of transduction arises through the need to approximate. If exact inference is computationally prohibitive, one may at least try to make sure that the approximations are good at the test inputs. In this case, the test inputs could come from an arbitrary distribution (not necessarily related to the distribution of the training inputs), which wouldn't be allowed in semi-supervised learning. An example of an algorithm falling in this category is the Bayesian Committee Machine (BCM). Historical Context The mode of inference from particulars to particulars, which Vapnik came to call transduction, was already distinguished from the mode of inference from particulars to generalizations in part III of the Cambridge philosopher and logician W.E. Johnson's 1924 textbook, Logic. In Johnson's work, the former mode was called 'eduction' and the latter was called 'induction'. Bruno de Finetti developed a purely subjective form of Bayesianism in which claims about objective chances could be translated into empirically respectable claims about subjective credences with respect to observables through exchangeability properties. An early statement of this view can be found in his 1937 La Prévision: ses Lois Logiques, ses Sources Subjectives and a mature statement in his 1970 Theory of Probability. Within de Finetti's subjective Bayesian framework, all inductive inference is ultimately inference from particulars to particulars. Example problem The following example problem contrasts some of the unique properties of transduction against induction. A collection of points is given, such that some of the points are labeled (A, B, or C), but most of the points are unlabeled (?).
The goal is to predict appropriate labels for all of the unlabeled points. The inductive approach to solving this problem is to use the labeled points to train a supervised learning algorithm, and then have it predict labels for all of the unlabeled points. With this problem, however, the supervised learning algorithm will only have five labeled points to use as a basis for building a predictive model. It will certainly struggle to build a model that captures the structure of this data. For example, if a nearest-neighbor algorithm is used, then the points near the middle will be labeled "A" or "C", even though it is apparent that they belong to the same cluster as the point labeled "B", compare semi-supervised learning. Transduction has the advantage of being able to consider all of the points, not just the labeled points, while performing the labeling task. In this case, transductive algorithms would label the unlabeled points according to the clusters to which they naturally belong. The points in the middle, therefore, would most likely be labeled "B", because they are packed very close to that cluster. An advantage of transduction is that it may be able to make better predictions with fewer labeled points, because it uses the natural breaks found in the unlabeled points. One disadvantage of transduction is that it builds no predictive model. If a previously unknown point is added to the set, the entire transductive algorithm would need to be repeated with all of the points in order to predict a label. This can be computationally expensive if the data is made available incrementally in a stream. Further, this might cause the predictions of some of the old points to change (which may be good or bad, depending on the application). A supervised learning algorithm, on the other hand, can label new points instantly, with very little computational cost. Transduction algorithms Transduction algorithms can be broadly divided into two categories: those that seek to assign discrete labels to unlabeled points, and those that seek to regress continuous labels for unlabeled points. Algorithms that seek to predict discrete labels tend to be derived by adding partial supervision to a clustering algorithm. Two classes of algorithms can be used: flat clustering and hierarchical clustering. The latter can be further subdivided into two categories: those that cluster by partitioning, and those that cluster by agglomerating. Algorithms that seek to predict continuous labels tend to be derived by adding partial supervision to a manifold learning algorithm. Partitioning transduction Partitioning transduction can be thought of as top-down transduction. It is a semi-supervised extension of partition-based clustering. It is typically performed as follows: Consider the set of all points to be one large partition. While any partition P contains two points with conflicting labels: Partition P into smaller partitions. For each partition P: Assign the same label to all of the points in P. Of course, any reasonable partitioning technique could be used with this algorithm. Max flow min cut partitioning schemes are very popular for this purpose. Agglomerative transduction Agglomerative transduction can be thought of as bottom-up transduction. It is a semi-supervised extension of agglomerative clustering. It is typically performed as follows: Compute the pair-wise distances, D, between all the points. Sort D in ascending order. Consider each point to be a cluster of size 1. 
For each pair of points {a,b} in D: If (a is unlabeled) or (b is unlabeled) or (a and b have the same label) Merge the two clusters that contain a and b. Label all points in the merged cluster with the same label. Manifold transduction Manifold-learning-based transduction is still a very young field of research. See also Epilogism References De Finetti, Bruno. "La prévision: ses lois logiques, ses sources subjectives." Annales de l'institut Henri Poincaré. Vol. 7. No. 1. 1937. de Finetti, Bruno (1970). Theory of Probability: A Critical Introductory Treatment. New York: John Wiley. W.E. Johnson Logic part III, CUP Archive, 1924. B. Russell. The Problems of Philosophy, Home University Library, 1912. . V. N. Vapnik. Statistical learning theory. New York: Wiley, 1998. (See pages 339-371) V. Tresp. A Bayesian committee machine, Neural Computation, 12, 2000, pdf. External links A Gammerman, V. Vovk, V. Vapnik (1998). "Learning by Transduction." An early explanation of transductive learning. "A Discussion of Semi-Supervised Learning and Transduction," Chapter 25 of Semi-Supervised Learning, Olivier Chapelle, Bernhard Schölkopf and Alexander Zien, eds. (2006). MIT Press. A discussion of the difference between SSL and transduction. Waffles is an open source C++ library of machine learning algorithms, including transduction algorithms, also Waffles. SVMlight is a general purpose SVM package that includes the transductive SVM option. Machine learning
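The agglomerative transduction pseudocode above translates almost directly into code. A minimal Python sketch follows; the function and variable names are illustrative and this is not from any standard library:

    import numpy as np

    def agglomerative_transduction(points, labels):
        """Bottom-up transduction: merge the closest clusters whose labels do not
        conflict, propagating any known label to the merged cluster."""
        points = [np.asarray(p, dtype=float) for p in points]
        labels = list(labels)                 # None marks an unlabeled point
        n = len(points)
        cluster = list(range(n))              # cluster id for each point (each starts as its own cluster)

        # Steps 1-2 of the pseudocode: pair-wise distances, sorted in ascending order
        pairs = sorted(
            (np.linalg.norm(points[a] - points[b]), a, b)
            for a in range(n) for b in range(a + 1, n)
        )

        # Final step: merge the clusters containing a and b if their labels do not conflict
        for _, a, b in pairs:
            la, lb = labels[a], labels[b]
            if la is None or lb is None or la == lb:
                merged_label = la if la is not None else lb
                ca, cb = cluster[a], cluster[b]
                for i in range(n):
                    if cluster[i] in (ca, cb):
                        cluster[i] = ca
                        if merged_label is not None:
                            labels[i] = merged_label
        return labels

    # Two well-separated clusters, one labeled point in each:
    pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
    print(agglomerative_transduction(pts, ["A", None, None, "B", None, None]))
    # -> ['A', 'A', 'A', 'B', 'B', 'B']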
Transduction (machine learning)
[ "Engineering" ]
1,789
[ "Artificial intelligence engineering", "Machine learning" ]
960,455
https://en.wikipedia.org/wiki/Yakub%20%28Nation%20of%20Islam%29
Yakub (also spelled Yacub or Yaqub) is a figure in the mythology of the Nation of Islam (NOI) and its offshoots. According to the NOI's doctrine, Yakub was a black Meccan scientist who lived 6,600 years ago and created the white race. According to the story, following his discovery of the law of attraction and repulsion, he gathered followers and began the creation of the white race through a form of selective breeding referred to as "grafting" on the island of Patmos; Yakub died at the age of 150, but his followers continued the process after his death. According to the NOI, the white race was created with an evil nature, and were destined to rule over black people for a period of 6,000 years through the practice of "tricknology", which ended in 1914. The story and idea of Yakub originated in the writings of the NOI's founder Wallace Fard Muhammad. Scholars have variously traced its origins in Fard's thought to the idea of the Yakubites propounded by the Moorish Science Temple, the Battle of Alarcos, or alternatively say it may have been created originally with little basis in any other tradition. Scholars have argued the tale is an example of a black theodicy, with similarities to gnosticism with Yakub as demiurge, as well as the story of Genesis. It has also been interpreted as a reversal of the contemporary racist ideas that asserted the inferiority of black people. The story has, throughout its history, caused disputes within the NOI. Under its current leader Louis Farrakhan, the NOI continues to assert that the story of Yakub is true, not a metaphor, and has been proven by modern science. Several other splinter groups and other black nationalist religious organizations, including the Nuwaubian Nation, the Five-Percent Nation and the United Nation of Islam, share a belief in Yakub. Summary Original version According to the story, at the start of human history, a variety of types of black people inhabited the moon; when a black "god-scientist" became frustrated that all those living on the moon did not speak one language, he blew up the moon. A piece of this destroyed moon became the Earth, which was then populated by a community of surviving, morally righteous black people, some of whom settled in the city of Mecca. Yakub was born a short distance outside the city, and was among the third of original black people who were discontented with life in this society. A member of the Meccan branch of the Tribe of Shabazz, Yakub acquired the nickname "big head", because of his unusually large head and arrogance. At the age of six, he discovered the law of attraction and repulsion by playing with magnets made of steel. He connected this to the rules of human attraction: the "unlike" people would attract, manipulate the original "like" people. By the age of 18, he had finished his education and had learned everything that Mecca's universities had to teach him, widely known as a successful scientist. He then discovered that the original black man contained both a "black germ" and a "brown germ", with the brown being the recessive one, and believed that if he could separate them by "grafting", he could graft the brown germ into a white germ. This insight led to a plan to create a new people, who, using tricks and lies, could rule the original black man and destroy them. He attracted a following but caused trouble, leading the Meccan authorities to exile him and his 59,999 followers. They then went to an isle in the Aegean Sea called Pelan, which Elijah Muhammad identified as modern-day Patmos. 
Yakub developed Christianity to fool the black people into supporting him and to trick them into not knowing their true history. Once there, he established a despotic regime, starting to breed out the black traits of his followers. This entailed breeding new children, with those who were too dark being killed at birth and their bodies being fed to wild animals or incinerated. Yakub died at the age of 150, but his followers carried on his work as he passed down his knowledge. After 600 years, the white race was created. All the races other than the black race were by-products of Yakub's work, as the "red, yellow and brown" races were created during the "bleaching" process, with the red germ coming out of the brown, the yellow coming from the red, and from the yellow the white. The brutal conditions of their creation determined the evil nature of the new race: "by lying to the black mother of the baby, this lie was born into the very nature of the white baby; and, murder for the black people was also born in them—or made by nature a liar and murderer". As a group of people distinct from the Original Asiatic Race, the white race are bereft of divinity, being intrinsically prone to lying, violence, and brutality. According to the Nation's teachings, Yakub's newly created white race sowed discord among the black race, and thus were exiled to live in the caves of Europe ("West Asia"). In this narrative, it was in Europe that the white race engaged in bestiality and degenerated, losing everything except their language. They were kept in Europe by guards. Elijah Muhammad also asserted that some of the new white race tried to become black, but failed. As a result, they became gorillas and other monkeys. To help the whites develop, the ruling Allah then sent prophets to them, the first of whom was Musa (Moses), who taught the whites to cook and wear clothes. Moses tried to civilize them, but eventually gave up and blew up 300 of the most troublesome white people with dynamite. According to the Nation, Jesus was also a prophet sent to try and civilize the white race. However, the whites had learned to use "tricknology"; a plan to use their trickery and lack of empathy and emotion to usurp power and enslave the black population, bringing the first slaves to America. According to NOI doctrine, Yakub's progeny were destined to rule for 6,000 years before the original black peoples of the world regained dominance, the end of which was the year 1914. Nuwaubian version An alternative version of the story was told by the Nuwaubian Nation, a black supremacist new religious movement run by Dwight York: this is set out in a roughly 1,700 page book called The Holy Tablets. In the Nuwaubian telling of the Yakub myth, 17 million years before the first of many "intergalactic battles", the ancestors of black people (given a variety of names, including Riziquians) were gods, but subservient to the "Supreme God". Riziquians lived in another galaxy on a planet known as "Rizk", which was located in the "Original Tri-Solar System" which featured a "moveable throne"/spaceship, Nibiru. In their telling the original protective atmospheric layer of this planet, necessary to protect from the UV rays of its three suns, had been destroyed by an evil being who was the leader of the fallen angels, Shaitan. Shaitan had been asked by the supreme god to move, either off the planet entirely or to a different location on it. He refused, and instead set off an atomic explosion "like an H-bomb", destroying part of the atmosphere. 
The scientists of the planet were able to repair it with gold, but there wasn't enough gold on the planet, necessitating excursions into space on the Nibiru to mine gold from planet Earth, where colonies were established. The Riziquians did not want to mine gold, believing it was beneath their status as angels. They spliced genes of Homo erectus with their own genomes, producing mankind to do it for them. Humans originally had various psychic abilities, but after wars and Cain and Abel, the gland responsible for these psychic powers was removed from the human brain by the Riziquians. Yakub was born with two brains (the Nuwaubian explanation for the size of his large head), making him a genius capable of gene-splicing experiments, which resulted in white people. After his experiments were finished, one of his brains exploded, resulting in his death. Origins of the story The story of Yakub originated in the writings of Wallace Fard Muhammad, the founder of the Nation of Islam, in his doctrinal Q&A pamphlet Lost Found Moslem Lesson No. 2 from the early 1930s. It was developed by his successor Elijah Muhammad in several writings, most fully in a chapter entitled "The Making of Devil" in his book Message to the Blackman in America. The story of Yakub includes Jews as part of a wider artificially created "white" race. In the Book of Genesis, biblical patriarch Jacob makes a deal with his uncle Laban to divide livestock amongst themselves. The black goats and sheep will belong to Laban, while spotted, speckled or brown goats will belong to Jacob. After Laban agrees, Jacob places wood "with white streaks" in front of the strongest animals during breeding so as to produce spotted offspring. He further uses selective breeding to ensure "the feebler would be Laban's, and the stronger Jacob's". Leaders in the early-20th century Eugenics movement like James Barr cited the Jacob story in their literature, often from an anti-Semitic point of view. Knight opines: "The prominence of Jacob as not only a controller of animal heredity but a selfish, scheming deceiver presents him as a natural candidate for the engineer of the white race". In speeches by Malcolm X, Yakub is identified completely with the Jacob of Genesis. Referring to the story of Jacob wrestling with the angel, Malcolm X states that Elijah Muhammad told him that "Jacob was Yacub, and the angel that Jacob wrestled with wasn't God, it was the government of the day". This was because Yakub was seeking funds for his expedition to Patmos, "so when it says Jacob wrestled with an angel, 'angel' is only used as a symbol to hide the one he was really wrestling with". However, Malcolm X also states that John of Patmos was also Yakub, and that the Book of Revelation refers to his deeds: "John was Yacub. John was out there getting ready to make a new race, he said, for the word of the Lord". Ernest Allen argues that "the Yakub myth may have been created out of whole cloth by Prophet Fard". Allen says the Yakub story could conceivably have been influenced by a real historical event during the struggle between Muslims and Christians for control of Spain. Muslim leader Abu Yusuf Yaqub al-Mansur defeated the Franks at the Battle of Alarcos (1195). After the battle, 40,000 European prisoners of war were taken to Morocco to labor on Yaqub's building projects. They were then set free and "allowed to form a valley settlement located somewhere between Fez and Marrakesh. 
On his deathbed Ya'qub lamented his decision to allow these Shibanis (as they came to be called) to form an enclave on Moroccan soil, thereby posing a potential threat to the stability of the Moorish empire". Yusuf Nuruddin says that a more direct source was the doctrine of the "Yacobites" or "Yakubites" propounded by Timothy Drew's Moorish Science Temple, to which Fard may have belonged before he founded the NOI. According to Drew, early pre-Columbian civilizations were founded by a West African Moor "named Yakub who landed on the Yucatan Peninsula", whose people evolved into "a race of scientific geniuses with large heads". Drew's followers said this was supported by the large heads of the Olmec statues, which they claimed reflected African features; Nuruddin argues this indicated that the Yakub myth was influenced by the Moorish Science Temple's theology. Role in the Nation of Islam The Yakub story attempts to rationalize "black suffering" through the lens of Islamic theologies, trying to give it a religious meaning and understanding. Even for those members who refused to take the story literally, it provided a useful metaphor for racial relations and oppression. Elijah Muhammad repeatedly referred to whites as "the devil". The Nation maintains that most white people are unaware of their true origins, but that such knowledge is held by senior white Freemasons. The doctrine is not present or substantiated in mainstream Islam. As a result, it has led to controversy: Malcolm X in his Autobiography notes that, in his travels in the Middle East, many Muslims reacted with shock upon hearing about the doctrine of Yakub. When Malcolm founded his own religion organization, Muslim Mosque, Inc., he did not carry over the concept of Yakub. Louis Farrakhan reinstated the original Nation of Islam, and has reasserted his belief in the literal truth of the story of Yakub. In a 1996 interview, Henry Louis Gates, Chairman of Harvard University's Afro-American Studies Department, asked him whether the story was a metaphor or literal. Farrakhan claimed that aspects of the story had been proven accurate by modern genetic science and insisted that "Personally, I believe that Yakub is not a mythical figure—he is a very real scientist. Not a big-head silly thing, as they would like to say". However, he did later cease speaking of the related "white devil" concept. Farrakhan's periodical The Final Call continues to publish articles asserting the truth of the story, arguing that modern science supports the accuracy of Elijah Muhammad's account of Yakub. The NOI splinter groups the Five-Percent Nation and the United Nation of Islam also believe in the Yakub doctrine. Commentary Harold Bloom in his book The American Religion argues that Yakub combines elements of the biblical God and the Gnostic concept of the Demiurge, saying that "Yakub has an irksome memorability as a crude but pungent Gnostic Demiurge". Nathaniel Deutsch also notes that Fard and Muhammad draw on the concept of the Demiurge, along with traditions of esotericism in Biblical interpretation, absorbing aspects of Biblical tales to the new narrative, such as the swords of the Muslim warriors keeping the "white devils" from Paradise, like the flaming sword of the angel protecting the Garden of Eden in Genesis. Yusuf Nuruddin also compared the Yakub story to the Genesis story, with the opposing group to the initial utopian society being comparable to the snake in the Garden of Eden. 
In his view the story of the later expulsion of Yakub was comparable to the expulsion of Adam and Eve, as well as the fall of man. Edward Curtis calls the story "a black theodicy: a story grounded in a mythological view of history that explained the fall of black civilization, the Middle Passage from Africa to the Americas, and the practice of Christian religion among slaves and their descendants". Stephen C. Finley also called it a theodicy. Several commentators state that the story, by associating blacks with ancient high civilizations and whites with cave-dwelling barbarians and gorillas, both uses and spectacularly reverses the populist and scientific racism of the era which identified Africans as primitive, or closer to apes than whites. This drew on earlier criticisms of white supremacist Nordicism, creating a mythic version of "attacks on AngloSaxon lineage and behavior that had been voiced by more mainstream black thinkers during the nineteenth century. [...] With these references the [NOI] Muslims replicated the images of European savagery in the Middle Ages that were so pervasive in nineteenth-century black racial thought". In popular culture The American author and playwright Amiri Baraka's play A Black Mass (1965) takes inspiration from the story of Yakub. In Baraka's version the experiment creates a single Frankenstein-like "white" monster who kills Jacoub and the other magician-scientists and bites a woman, transforming her in a vampire-like way into a white-devil mate for himself. From this monstrous couple the white race is descended. According to critic Melani McAlister, "the character of Yakub, now called Jacoub, is introduced as one of three 'Black Magicians' who together symbolize the black origin of all religions". McAlister argues that Baraka turns the Yakub story "into a reinterpretation of the Faust story and a simultaneous meditation on the role and function of art." saying that "As with Faust, Jacoub's individualism and egotism are his undoing, but his failings also signal the destruction of a community." He also compared his version of the story to Frankenstein, in its conflation of "the six hundred years of Elijah Muhammad's "history" into a single, terrible moment of the creation of a monster." According to Charise L. Cheney, the doctrine of Yakub has had a significant influence in rap culture, mentioning several rappers. She argues that the rapper Kam (a member of the NoI), in his 1995 song "Keep tha Peace", uses the Yakub doctrine in order to explain "the roots of black-on-black crime and gang violence in America's inner cities", noting the lyrics: She also notes Grand Puba's 1990 lyric, in which he announces that "his calling was to bring enlightenment to black people and an end to white domination" saying "Here comes the god to send the devil right back to his cave. […] We're gonna drop the bomb on the Yakub crew". Chuck D of Public Enemy also refers to the story in his song "Party for Your Right to Fight", referring to the Yakub story by attributing the deaths of African American radicals to the "grafted devils" conspiring against the "Black Asiatic Man". See also Xenu Notes References Works cited Primary sources Academic articles Books Alleged extraterrestrial beings Antisemitic tropes Anti-white racism in the United States Jacob Legendary progenitors National mysticism Nation of Islam Nuwaubianism Patmos Pseudohistory Scientific racism Theodicy
Yakub (Nation of Islam)
[ "Biology" ]
3,867
[ "Biology theories", "Obsolete biology theories", "Scientific racism" ]
960,581
https://en.wikipedia.org/wiki/Load-bearing%20wall
A load-bearing wall or bearing wall is a wall that is an active structural element of a building, which holds the weight of the elements above it, by conducting its weight to a foundation structure below it. Load-bearing walls are one of the earliest forms of construction. The development of the flying buttress in Gothic architecture allowed structures to maintain an open interior space, transferring more weight to the buttresses instead of to central bearing walls. In housing, load-bearing walls are most common in the light construction method known as "platform framing". At the birth of the skyscraper era, the concurrent rise of steel as a more suitable framing system first designed by William Le Baron Jenney, and the limitations of load-bearing construction in large buildings, led to a decline in the use of load-bearing walls in large-scale commercial structures. Description A load-bearing wall or bearing wall is a wall that is an active structural element of a building; that is, it bears the weight of the elements above said wall resting upon it, conducting that weight to a foundation structure. The materials most often used to construct load-bearing walls in large buildings are concrete, block, or brick. By contrast, a curtain wall provides no significant structural support beyond what is necessary to bear its own materials or conduct such loads to a bearing wall. History Load-bearing walls are one of the earliest forms of construction. The development of the flying buttress in Gothic architecture allowed structures to maintain an open interior space, transferring more weight to the buttresses instead of to central bearing walls. The Notre Dame Cathedral is an example of a load-bearing wall structure with flying buttresses. Application Depending on the type of building and the number of floors, load-bearing walls are gauged to the appropriate thickness to carry the weight above them. If this is not done, an outer wall could become unstable if the load exceeds the strength of the material used, potentially leading to the collapse of the structure. The primary function of this wall is to enclose or divide the space of the building to make it more functional and useful. It provides privacy, affords security, and gives protection against heat, cold, sun or rain. Housing In housing, load-bearing walls are most common in the light construction method known as "platform framing", and each load-bearing wall sits on a wall sill plate which is mated to the lowest base plate. The sills are bolted to the masonry or concrete foundation. The top plate or ceiling plate is the top of the wall, which sits just below the platform of the next floor (at the ceiling). The base plate or floor plate is the bottom attachment point for the wall studs. Using a top plate and a bottom plate, a wall can be constructed while it lies on its side, allowing for end-nailing of the studs between two plates, and then the finished wall can be tipped up vertically into place atop the wall sill; this not only improves accuracy and shortens construction time, but also produces a stronger wall. Skyscrapers Due to the immense weight of skyscrapers, the base and walls of the lower floors must be extremely strong. Pilings are used to anchor the building to the bedrock underground. For example, the Burj Khalifa, the world's tallest building as well as the world's tallest structure, uses specially treated and mixed reinforced concrete.
Over of concrete, weighing more than were used to construct the concrete and steel foundation, which features 192 piles, with each pile being 1.5 m diameter × 43 m long and buried more than deep. See also Column – in most larger, multi-storey buildings, vertical loads are primarily borne by columns / pillars instead of structural walls Tube frame structure – Some of the world's tallest skyscrapers use load-bearing outer frames – be it single tube (e.g. the old WTC Twin Towers), or bundled tube (e.g. the Willis Tower or the Burj Khalifa) References Structural system Types of wall
Load-bearing wall
[ "Technology", "Engineering" ]
819
[ "Structural system", "Types of wall", "Structural engineering", "Building engineering" ]
961,605
https://en.wikipedia.org/wiki/ACES%20%28computational%20chemistry%29
Aces II (Advanced Concepts in Electronic Structure Theory) is an ab initio computational chemistry package for performing high-level quantum chemical ab initio calculations. Its major strength is the accurate calculation of atomic and molecular energies as well as properties using many-body techniques such as many-body perturbation theory (MBPT) and, in particular coupled cluster techniques to treat electron correlation. The development of ACES II began in early 1990 in the group of Professor Rodney J. Bartlett at the Quantum Theory Project (QTP) of the University of Florida in Gainesville. There, the need for more efficient codes had been realized and the idea of writing an entirely new program package emerged. During 1990 and 1991 John F. Stanton, Jürgen Gauß, and John D. Watts, all of them at that time postdoctoral researchers in the Bartlett group, supported by a few students, wrote the backbone of what is now known as the ACES II program package. The only parts which were not new coding efforts were the integral packages (the MOLECULE package of J. Almlöf, the VPROP package of P.R. Taylor, and the integral derivative package ABACUS of T. Helgaker, P. Jorgensen J. Olsen, and H.J. Aa. Jensen). The latter was modified extensively for adaptation with Aces II, while the others remained very much in their original forms. Ultimately, two different versions of the program evolved. The first was maintained by the Bartlett group at the University of Florida, and the other (known as ACESII-MAB) was maintained by groups at the University of Texas, Universitaet Mainz in Germany, and ELTE in Budapest, Hungary. The latter is now called CFOUR. Aces III is a parallel implementation that was released in the fall of 2008. The effort led to definition of a new architecture for scalable parallel software called the super instruction architecture. The design and creation of software is divided into two parts: The algorithms are coded in a domain specific language called super instruction assembly language or SIAL, pronounced "sail" for easy communication. The SIAL programs are executed by a MPMD parallel virtual machine called the super instruction processor or SIP. The ACES III program consists of 580,000 lines of SIAL code of which 200,000 lines are comments, and 230,000 lines of C/C++ and Fortran of which 62,000 lines are comments. The latest version of the program was released on August 1, 2014. See also Quantum chemistry computer programs References ACES II Florida-Version Homepage ACES II Mainz-Austin-Budapest-Version Homepage (outdated) ACES III Homepage (outdated) CFOUR Homepage Computational chemistry software University of Florida
ACES (computational chemistry)
[ "Physics", "Chemistry" ]
551
[ "Computational chemistry software", "Chemistry software", "Theoretical chemistry stubs", "Quantum mechanics", "Computational chemistry stubs", "Computational chemistry", "Physical chemistry stubs", "Quantum physics stubs" ]
961,961
https://en.wikipedia.org/wiki/Gene%20gun
In genetic engineering, a gene gun or biolistic particle delivery system is a device used to deliver exogenous DNA (transgenes), RNA, or protein to cells. By coating particles of a heavy metal with a gene of interest and firing these micro-projectiles into cells using mechanical force, an integration of desired genetic information can be introduced into desired cells. The technique involved with such micro-projectile delivery of DNA is often referred to as biolistics, short for "biological ballistics". This device is able to transform almost any type of cell and is not limited to the transformation of the nucleus; it can also transform organelles, including plastids and mitochondria. Gene gun design The gene gun was originally a Crosman air pistol modified to fire dense tungsten particles. It was invented by John C Sanford, Ed Wolf, and Nelson Allen at Cornell University along with Ted Klein of DuPont between 1983 and 1986. The original target was onions (chosen for their large cell size), and the device was used to deliver particles coated with a marker gene which would relay a signal if proper insertion of the DNA transcript occurred. Genetic transformation was demonstrated upon observed expression of the marker gene within onion cells. The earliest custom manufactured gene guns (fabricated by Nelson Allen) used a 22 caliber nail gun cartridge to propel a polyethylene cylinder (bullet) down a 22 caliber Douglas barrel. A droplet of the tungsten powder coated with genetic material was placed onto the bullet and shot down into a Petri dish below. The bullet welded to the disk below the Petri plate, and the genetic material blasted into the sample with a doughnut effect involving devastation in the middle of the sample with a ring of good transformation around the periphery. The gun was connected to a vacuum pump and was placed under a vacuum while firing. The early design was put into limited production by Rumsey-Loomis (a local machine shop then at Mecklenburg Road in Ithaca, NY, USA). Biolistics, Inc sold DuPont the rights to manufacture and distribute an updated device with improvements including the use of helium as a non-explosive propellant and a multi-disk collision delivery mechanism to minimize damage to sample tissues. Other heavy metals such as gold and silver are also used to deliver genetic material with gold being favored due to lower cytotoxicity in comparison to tungsten projectile carriers. Biolistic construct design Biolistic transformation involves the integration of a functional fragment of DNA—known as a DNA construct—into target cells. A gene construct is a DNA cassette containing all required regulatory elements for proper expression within the target organism. While gene constructs may vary in their design depending on the desired outcome of the transformation procedure, all constructs typically contain a combination of a promoter sequence, a terminator sequence, the gene of interest, and a reporter gene. Promoter Promoters control the location and magnitude of gene expression and function as “the steering wheel and gas pedal” of a gene. Promoters precede the gene of interest in the DNA construct and can be changed through laboratory design to fine-tune transgene expression. The 35S promoter from Cauliflower mosaic virus is an example of a commonly used promoter that results in robust constitutive gene expression within plants. 
Terminator Terminator sequences are required for proper gene expression and are placed after the coding region of the gene of interest within the DNA construct. A common terminator for biolistic transformation is the NOS terminator derived from Agrobacterium tumefaciens. Due to the high frequency of use of this terminator in genetically engineered plants, strategies have been developed to detect its presence within the food supply to monitor for unauthorized GE crops. Reporter gene A gene encoding a selectable marker is a common element within DNA constructs and is used to select for properly transformed cells. The selectable marker chosen will depend on the species being transformed, but it will typically be a gene granting cells a detoxification capacity for certain herbicides or antibiotics such as kanamycin, hygromycin B, or glyphosate. Additional elements Optional components of a DNA construct include elements such as cre-lox sequences that allow for controlled removal of the construct from the target genome. Such elements are chosen by the construct developer to perform specialized functions alongside the main gene of interest. Application Gene guns are mostly used with plant cells. However, there is much potential use in humans and other animals as well. Plants The target of a gene gun is often a callus of undifferentiated plant cells or a group of immature embryos growing on gel medium in a Petri dish. After the DNA-coated gold particles have been delivered to the cells, the DNA is used as a template for transcription (transient expression) and sometimes it integrates into a plant chromosome ('stable' transformation). If the delivered DNA construct contains a selectable marker, then stably transformed cells can be selected and cultured using tissue culture methods. For example, if the delivered DNA construct contains a gene that confers resistance to an antibiotic or herbicide, then stably transformed cells may be selected by including that antibiotic or herbicide in the tissue culture media. Transformed cells can be treated with a series of plant hormones, such as auxins and gibberellins, and each may divide and differentiate into the organized, specialized, tissue cells of an entire plant. This capability of total regeneration is called totipotency. The new plant that originated from a successfully transformed cell may have new traits that are heritable. The use of the gene gun may be contrasted with the use of Agrobacterium tumefaciens and its Ti plasmid to insert DNA into plant cells. See transformation for different methods of transformation in different species. Humans and other animals Gene guns have also been used to deliver DNA vaccines. The delivery of plasmids into rat neurons, specifically DRG neurons, through the use of a gene gun is also used as a pharmacological precursor in studying the effects of neurodegenerative diseases such as Alzheimer's disease. The gene gun has become a common tool for labeling subsets of cells in cultured tissue. In addition to being able to transfect cells with DNA plasmids coding for fluorescent proteins, the gene gun can be adapted to deliver a wide variety of vital dyes to cells. Gene gun bombardment has also been used to transform Caenorhabditis elegans, as an alternative to microinjection. Advantages Biolistics has proven to be a versatile method of genetic modification and it is generally preferred to engineer transformation-resistant crops, such as cereals. Notably, Bt maize is a product of biolistics. 
Plastid transformation has also seen great success with particle bombardment when compared to other current techniques, such as Agrobacterium mediated transformation, which have difficulty targeting the vector to and stably expressing in the chloroplast. In addition, there are no reports of a chloroplast silencing a transgene inserted with a gene gun. Additionally, with only one firing of a gene gun, a skilled technician can generate two transformed organisms in certain species. This technology has even allowed for modification of specific tissues in situ, although this is likely to damage large numbers of cells and transform only some, rather than all, cells of the tissue. Limitations Biolistics introduces DNA randomly into the target cells. Thus the DNA may be transformed into whatever genomes are present in the cell, be they nuclear, mitochondrial, plasmid or any others, in any combination, though proper construct design may mitigate this. The delivery and integration of multiple templates of the DNA construct is a distinct possibility, resulting in potential variable expression levels and copy numbers of the inserted gene. This is due to the ability of the constructs to give and take genetic material from other constructs, causing some to carry no transgene and others to carry multiple copies; the number of copies inserted depends on both how many copies of the transgene an inserted construct has, and how many were inserted. Also, because eukaryotic constructs rely on illegitimate recombination—a process by which the transgene is integrated into the genome without similar genetic sequences—and not homologous recombination, they cannot be targeted to specific locations within the genome, unless the transgene is co-delivered with genome editing reagents. References Further reading External links John O'Brien presents...Gene Gun Barrels for more information about biolistics Molecular biology Molecular genetics Laboratory techniques Gene delivery 1983 introductions Nanotechnology
Gene gun
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
1,769
[ "Genetics techniques", "Materials science", "Molecular biology techniques", "Molecular genetics", "nan", "Molecular biology", "Biochemistry", "Nanotechnology", "Gene delivery" ]
962,035
https://en.wikipedia.org/wiki/Seismic%20refraction
Seismic refraction is a geophysical principle governed by Snell's Law of refraction. The seismic refraction method utilizes the refraction of seismic waves by rock or soil layers to characterize the subsurface geologic conditions and geologic structure. Seismic refraction is exploited in engineering geology, geotechnical engineering and exploration geophysics. Seismic refraction traverses (seismic lines) are performed using an array of seismographs or geophones and an energy source. The methods depend on the fact that seismic waves have differing velocities in different types of soil or rock. The waves are refracted when they cross the boundary between different types (or conditions) of soil or rock. The methods enable the general soil types and the approximate depth to strata boundaries, or to bedrock, to be determined. P-wave refraction P-wave refraction evaluates the compression wave generated by the seismic source located at a known distance from the array. The wave is generated by vertically striking a striker plate with a sledgehammer, shooting a seismic shotgun into the ground, or detonating an explosive charge in the ground. Since the compression wave is the fastest of the seismic waves, it is sometimes referred to as the primary wave and is usually more-readily identifiable within the seismic recording as compared to the other seismic waves. S-wave refraction S-wave refraction evaluates the shear wave generated by the seismic source located at a known distance from the array. The wave is generated by horizontally striking an object on the ground surface to induce the shear wave. Since the shear wave is the second fastest wave, it is sometimes referred to as the secondary wave. When compared to the compression wave, the shear wave is approximately one-half (but may vary significantly from this estimate) the velocity depending on the medium. Two horizontal layers ic0 - critical angle V0 - velocity of the first layer V1 - velocity of the second layer h0 - thickness of the first layer T01 - intercept Several horizontal layers Inversion methods The General Reciprocal method The Plus minus method Refraction inversion modeling (refraction tomography) Monte Carlo simulation Genetic algorithms Applications Seismic refraction has been successfully applied to tailings characterisation through P- and S-wave travel time tomographic inversions. See also Reflection seismology References US Army Corps of Engineers EM 1110-1-1802 Central Federal Lands Highway Division Exploration geophysics Geophysics Seismology
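The equations that originally accompanied the "Two horizontal layers" variable list did not survive here, but the standard two-layer head-wave relations that use exactly these symbols can be restated; the block below is a reconstruction from Snell's Law rather than a quotation of the original article.

```latex
\sin i_{c0} = \frac{V_0}{V_1}, \qquad
t(x) = \frac{x}{V_1} + \frac{2 h_0 \cos i_{c0}}{V_0}, \qquad
T_{01} = \frac{2 h_0 \cos i_{c0}}{V_0}
\;\Longrightarrow\;
h_0 = \frac{T_{01}\, V_0}{2 \cos i_{c0}} = \frac{T_{01}\, V_0 V_1}{2\sqrt{V_1^{2} - V_0^{2}}}
```

In practice, first-arrival times plotted against source–receiver offset x fall on a line of slope 1/V1 whose zero-offset intercept is T01, from which the thickness h0 of the first layer follows.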
Seismic refraction
[ "Physics" ]
492
[ "Applied and interdisciplinary physics", "Geophysics" ]
962,123
https://en.wikipedia.org/wiki/Superframe
In telecommunications, superframe (SF) is a T1 framing standard. In the 1970s it replaced the original T1/D1 framing scheme of the 1960s in which the framing bit simply alternated between 0 and 1. Superframe is sometimes called D4 Framing to avoid confusion with single-frequency signaling. It was first supported by the D2 channel bank, but it was first widely deployed with the D4 channel bank. In order to determine where each channel is located in the stream of data being received, each set of 24 channels is aligned in a frame. The frame is 192 bits long (8 * 24), and is terminated with a 193rd bit, the framing bit, which is used to find the end of the frame. In order for the framing bit to be located by receiving equipment, a predictable pattern is sent on this bit. Equipment will search for a bit which has the correct pattern, and will align its framing based on that bit. The pattern sent is 12 bits long, so every group of 12 frames is called a superframe. The pattern used in the 193rd bit is 100011 011100. Each channel sends two bits of call supervision data during each superframe using robbed-bit signaling during frames 6 and 12 of the superframe. More specifically, after the 6th and 12th bit in the superframe pattern, the least significant data bit of each channel (bit 8; T1 data is sent big-endian and uses 1-origin numbering) is replaced by a "channel-associated signalling" bit (bits A and B, respectively). Superframe remained in service in many places through the turn of the century, replaced by the improved extended superframe (ESF) of the 1980s in applications where its additional features were desired. Extended superframe In telecommunications, extended superframe (ESF) is a T1 framing standard. ESF is sometimes called D5 Framing because it was first used in the D5 channel bank, invented in the 1980s. It is preferred to its predecessor, superframe, because it includes a cyclic redundancy check (CRC) and 4000 bit/s channel capacity for a data link channel (used to pass out-of-band data between equipment.) It requires less frequent synchronization than the earlier superframe format, and provides on-line, real-time monitoring of circuit capability and operating condition. Structure An extended superframe is 24 frames long, and the framing bit of each frame is used in the following manner: All odd-numbered frames (1, 3, ..., 23) are used for the data link (totalling 4000 bits per second), Frames 2, 6, 10, 14, 18, and 22 are used to pass the CRC total of the previous extended superframe (all 4632 bits, framing and data), and Frames 4, 8, 12, 16, 20, and 24 are used to send the fixed framing pattern, 001011. The CRC is computed using the polynomial over all 24×193 = 4632 bits (framing and data) of the previous superframe, but with its framing bits forced to 1 for the purpose of CRC computation. The purpose of this small CRC is not to take any immediate action, but to keep statistics on the performance of the link. Like the predecessor superframe, every sixth frame's least-significant data bit can be used for robbed-bit signaling of call supervision state. However, there are four such bits (ABCD) per channel per extended superframe, rather than the two bits (AB) provided per superframe. (Specifically, the robbed bits follow framing bits 6, 12, 18 and 24.) Unlike the superframe, it is possible to avoid robbed-bit signalling and send call supervision over the data link instead. References Multiplexing Telephony signals Synchronization
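As a rough illustration of the alignment search described above (receiving equipment hunting for the one bit position that repeats the 12-bit pattern every 193 bits), the following Python sketch scans a raw bit sequence for superframe alignment. It is an illustrative toy, not code from any T1 framer; the input format (a plain list of 0/1 values) is an assumption made for the example. For the extended superframe, the check bits mentioned above carry a CRC-6; the generator polynomial usually documented for that check is x^6 + x + 1, although the text here elides it.

```python
# Illustrative sketch: locating superframe (SF/D4) alignment in a T1 bit stream.
# Assumes `bits` is a plain sequence of 0/1 integers (a hypothetical input format).

FRAME_LEN = 193                                      # 24 channels x 8 bits + 1 framing bit
SF_PATTERN = (1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0)    # framing-bit sequence "100011 011100"

def find_sf_alignment(bits, superframes_to_check=2):
    """Return the offset of a framing bit that starts the 12-bit pattern, else None."""
    needed = 12 * superframes_to_check
    for offset in range(12 * FRAME_LEN):
        last = offset + (needed - 1) * FRAME_LEN
        if last >= len(bits):
            break
        framing_bits = [bits[offset + i * FRAME_LEN] for i in range(needed)]
        if all(b == SF_PATTERN[i % 12] for i, b in enumerate(framing_bits)):
            return offset
    return None

# Synthetic example: 24 frames (two superframes), each 192 dummy channel bits
# followed by its framing bit, so alignment should be found at offset 192.
stream = []
for n in range(24):
    stream.extend([0] * 192)              # dummy channel bits
    stream.append(SF_PATTERN[n % 12])     # framing bit terminates the frame
assert find_sf_alignment(stream) == 192
```

Once alignment is found, the robbed signalling bits described above are simply the least significant bit of every channel in frames 6 and 12 of each superframe.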
Superframe
[ "Engineering" ]
790
[ "Telecommunications engineering", "Synchronization" ]
962,171
https://en.wikipedia.org/wiki/Stern%E2%80%93Gerlach%20experiment
In quantum physics, the Stern–Gerlach experiment demonstrated that the spatial orientation of angular momentum is quantized. Thus an atomic-scale system was shown to have intrinsically quantum properties. In the original experiment, silver atoms were sent through a spatially-varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment were deflected, owing to the magnetic field gradient, from a straight path. The screen revealed discrete points of accumulation, rather than a continuous distribution, owing to their quantized spin. Historically, this experiment was decisive in convincing physicists of the reality of angular-momentum quantization in all atomic-scale systems. After its conception by Otto Stern in 1921, the experiment was first successfully conducted with Walther Gerlach in early 1922. Description The Stern–Gerlach experiment involves sending silver atoms through an inhomogeneous magnetic field and observing their deflection. Silver atoms were evaporated using an electric furnace in a vacuum. Using thin slits, the atoms were guided into a flat beam and the beam sent through an inhomogeneous magnetic field before colliding with a metallic plate. The laws of classical physics predict that the collection of condensed silver atoms on the plate should form a thin solid line in the same shape as the original beam. However, the inhomogeneous magnetic field caused the beam to split in two separate directions, creating two lines on the metallic plate. The results show that particles possess an intrinsic angular momentum that is closely analogous to the angular momentum of a classically spinning object, but that takes only certain quantized values. Another important result is that only one component of a particle's spin can be measured at one time, meaning that the measurement of the spin along the z-axis destroys information about a particle's spin along the x and y axis. The experiment is normally conducted using electrically neutral particles such as silver atoms. This avoids the large deflection in the path of a charged particle moving through a magnetic field and allows spin-dependent effects to dominate. If the particle is treated as a classical spinning magnetic dipole, it will precess in a magnetic field because of the torque that the magnetic field exerts on the dipole (see torque-induced precession). If it moves through a homogeneous magnetic field, the forces exerted on opposite ends of the dipole cancel each other out and the trajectory of the particle is unaffected. However, if the magnetic field is inhomogeneous then the force on one end of the dipole will be slightly greater than the opposing force on the other end, so that there is a net force which deflects the particle's trajectory. If the particles were classical spinning objects, one would expect the distribution of their spin angular momentum vectors to be random and continuous. Each particle would be deflected by an amount proportional to the dot product of its magnetic moment with the external field gradient, producing some density distribution on the detector screen. Instead, the particles passing through the Stern–Gerlach apparatus are deflected either up or down by a specific amount. This was a measurement of the quantum observable now known as spin angular momentum, which demonstrated possible outcomes of a measurement where the observable has a discrete set of values or point spectrum. 
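The contrast described above between the classical prediction (a continuous smear of deflections, since the moment's z-component can take any value) and the observed quantized result (two discrete spots) can be illustrated with a toy Monte Carlo model. The following Python/NumPy sketch is purely illustrative; the unit magnetic moment and the proportionality constant between the moment's z-component and the deflection are arbitrary assumptions, not parameters of the 1922 apparatus.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000
mu = 1.0    # magnitude of the magnetic moment (arbitrary units)
k = 1.0     # deflection per unit of mu_z, lumping field gradient and geometry together

# Classical picture: moments point in random directions, so mu_z = mu*cos(theta)
# is continuously distributed and the deflections smear into a band.
cos_theta = rng.uniform(-1.0, 1.0, n)                 # isotropic orientations
classical_deflection = k * mu * cos_theta

# Quantized (spin-1/2) picture: only two outcomes along z, +mu or -mu,
# so the beam splits into two discrete spots.
quantum_deflection = k * mu * rng.choice([+1.0, -1.0], size=n)

print("classical range: %.2f to %.2f (continuous)"
      % (classical_deflection.min(), classical_deflection.max()))
print("quantum values:", sorted(set(quantum_deflection)))   # [-1.0, 1.0]
```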
Although some discrete quantum phenomena, such as atomic spectra, were observed much earlier, the Stern–Gerlach experiment allowed scientists to directly observe separation between discrete quantum states for the first time. Theoretically, quantum angular momentum of any kind has a discrete spectrum, which is sometimes briefly expressed as "angular momentum is quantized". Experiment using particles with +1/2 or −1/2 spin If the experiment is conducted using charged particles like electrons, there will be a Lorentz force that tends to bend the trajectory in a circle. This force can be cancelled by an electric field of appropriate magnitude oriented transverse to the charged particle's path. Electrons are spin-1/2 particles. These have only two possible spin angular momentum values measured along any axis, +ħ/2 or −ħ/2, a purely quantum mechanical phenomenon. Because its value is always the same, it is regarded as an intrinsic property of electrons, and is sometimes known as "intrinsic angular momentum" (to distinguish it from orbital angular momentum, which can vary and depends on the presence of other particles). If one measures the spin along a vertical axis, electrons are described as "spin up" or "spin down", based on the magnetic moment pointing up or down, respectively. To mathematically describe the experiment with spin-1/2 particles, it is easiest to use Dirac's bra–ket notation. As the particles pass through the Stern–Gerlach device, they are deflected either up or down, and observed by the detector which resolves to either spin up or spin down. These are described by the angular momentum quantum number j, which can take on one of the two possible allowed values, either +ħ/2 or −ħ/2. The act of observing (measuring) the momentum along the z axis corresponds to the z-axis angular momentum operator, often denoted J_z. In mathematical terms, the initial state of the particles is |ψ⟩ = c₁|j = +ħ/2⟩ + c₂|j = −ħ/2⟩, where the constants c₁ and c₂ are complex numbers. This initial state spin can point in any direction. The squares of the absolute values, |c₁|² and |c₂|², are respectively the probabilities for a system in the state |ψ⟩ to be found in |j = +ħ/2⟩ and |j = −ħ/2⟩ after the measurement along the z axis is made. The constants c₁ and c₂ must also be normalized in order that the probability of finding either one of the values be unity, that is we must ensure that |c₁|² + |c₂|² = 1. However, this information is not sufficient to determine the values of c₁ and c₂, because they are complex numbers. Therefore, the measurement yields only the squared magnitudes of the constants, which are interpreted as probabilities. 
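A minimal numerical illustration of the state, probabilities, and normalization just described (and of the sequential measurements taken up in the next section) can be written with NumPy. The particular coefficients below are arbitrary choices for the example; the sketch assumes the standard two-component representation of a spin-1/2 state and is not drawn from the original article.

```python
import numpy as np

# S_z eigenstates written in the {|z+>, |z->} basis.
z_plus  = np.array([1.0, 0.0], dtype=complex)
z_minus = np.array([0.0, 1.0], dtype=complex)

# An arbitrary initial state |psi> = c1|z+> + c2|z->, normalized so |c1|^2 + |c2|^2 = 1.
c1, c2 = 0.6, 0.8j
psi = c1 * z_plus + c2 * z_minus
assert np.isclose(np.vdot(psi, psi).real, 1.0)

p_up   = abs(np.vdot(z_plus,  psi)) ** 2   # probability of +hbar/2 along z -> 0.36
p_down = abs(np.vdot(z_minus, psi)) ** 2   # probability of -hbar/2 along z -> 0.64
print(p_up, p_down)

# Sequential apparatuses: take the z+ beam, measure along x (project onto |x+>),
# then measure along z again; half of that beam now comes out z-.
x_plus = (z_plus + z_minus) / np.sqrt(2)
after_x = x_plus * np.vdot(x_plus, z_plus)        # post-measurement (unnormalized) state
after_x = after_x / np.linalg.norm(after_x)
print(abs(np.vdot(z_plus,  after_x)) ** 2,        # 0.5
      abs(np.vdot(z_minus, after_x)) ** 2)        # 0.5
```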
This result is expected since all particles at this point are expected to have z+ spin, as only the z+ beam from the first apparatus entered the second apparatus. Experiment 2 The middle system shows what happens when a different S-G apparatus is placed at the exit of the z+ beam resulting of the first apparatus, the second apparatus measuring the deflection of the beams on the x axis instead of the z axis. The second apparatus produces x+ and x- outputs. Now classically we would expect to have one beam with the x characteristic oriented + and the z characteristic oriented +, and another with the x characteristic oriented - and the z characteristic oriented +. Experiment 3 The bottom system contradicts that expectation. The output of the third apparatus which measures the deflection on the z axis again shows an output of z- as well as z+. Given that the input to the second S-G apparatus consisted only of z+, it can be inferred that a S-G apparatus must be altering the states of the particles that pass through it. This experiment can be interpreted to exhibit the uncertainty principle: since the angular momentum cannot be measured on two perpendicular directions at the same time, the measurement of the angular momentum on the x direction destroys the previous determination of the angular momentum in the z direction. That's why the third apparatus measures renewed z+ and z- beams like the x measurement really made a clean slate of the z+ output. History The Stern–Gerlach experiment was conceived by Otto Stern in 1921 and performed by him and Walther Gerlach in Frankfurt in 1922. At the time of the experiment, the most prevalent model for describing the atom was the Bohr-Sommerfeld model, which described electrons as going around the positively charged nucleus only in certain discrete atomic orbitals or energy levels. Since the electron was quantized to be only in certain positions in space, the separation into distinct orbits was referred to as space quantization. The Stern–Gerlach experiment was meant to test the Bohr–Sommerfeld hypothesis that the direction of the angular momentum of a silver atom is quantized. The experiment was first performed with an electromagnet that allowed the non-uniform magnetic field to be turned on gradually from a null value. When the field was null, the silver atoms were deposited as a single band on the detecting glass slide. When the field was made stronger, the middle of the band began to widen and eventually to split into two, so that the glass-slide image looked like a lip-print, with an opening in the middle, and closure at either end. In the middle, where the magnetic field was strong enough to split the beam into two, statistically half of the silver atoms had been deflected by the non-uniformity of the field. Note that the experiment was performed several years before George Uhlenbeck and Samuel Goudsmit formulated their hypothesis about the existence of electron spin in 1925. Even though the result of the Stern−Gerlach experiment has later turned out to be in agreement with the predictions of quantum mechanics for a spin-1/2 particle, the experimental result was also consistent with the Bohr–Sommerfeld theory. In 1927, T.E. Phipps and J.B. Taylor reproduced the effect using hydrogen atoms in their ground state, thereby eliminating any doubts that may have been caused by the use of silver atoms. However, in 1926 the non-relativistic scalar Schrödinger equation had incorrectly predicted the magnetic moment of hydrogen to be zero in its ground state. 
To correct this problem Wolfgang Pauli considered a spin-1/2 version of the Schrödinger equation using the 3 Pauli matrices which now bear his name, which was later shown by Paul Dirac in 1928 to be a consequence of his relativistic Dirac equation. In the early 1930s Stern, together with Otto Robert Frisch and Immanuel Estermann improved the molecular beam apparatus sufficiently to measure the magnetic moment of the proton, a value nearly 2000 times smaller than the electron moment. In 1931, theoretical analysis by Gregory Breit and Isidor Isaac Rabi showed that this apparatus could be used to measure nuclear spin whenever the electronic configuration of the atom was known. The concept was applied by Rabi and Victor W. Cohen in 1934 to determine the spin of sodium atoms. In 1938 Rabi and coworkers inserted an oscillating magnetic field element into their apparatus, inventing nuclear magnetic resonance spectroscopy. By tuning the frequency of the oscillator to the frequency of the nuclear precessions they could selectively tune into each quantum level of the material under study. Rabi was awarded the Nobel Prize in 1944 for this work. Importance The Stern–Gerlach experiment was the first direct evidence of angular-momentum quantization in quantum mechanics, and it strongly influenced later developments in modern physics: In the decade that followed, scientists showed using similar techniques, that the nuclei of some atoms also have quantized angular momentum. It is the interaction of this nuclear angular momentum with the spin of the electron that is responsible for the hyperfine structure of the spectroscopic lines. Norman F. Ramsey later modified the Rabi apparatus to improve its sensitivity (using the separated oscillatory field method). In the early sixties, Ramsey, H. Mark Goldenberg, and Daniel Kleppner used a Stern–Gerlach system to produce a beam of polarized hydrogen as the source of energy for the hydrogen maser. This led to developing an extremely stable clock based on a hydrogen maser. From 1967 until 2019, the second was defined based on 9,192,631,770 Hz hyperfine transition of a cesium-133 atom; the atomic clock which is used to set this standard is an application of Ramsey's work. The Stern–Gerlach experiment has become a prototype for quantum measurement, demonstrating the observation of a discrete value (eigenvalue) of a physical property, previously assumed to be continuous. Entering the Stern–Gerlach magnet, the direction of the silver atom's magnetic moment is indefinite, but when the atom is registered at the screen, it is observed to be at either one spot or the other, and this outcome cannot be predicted in advance. Because the experiment illustrates the character of quantum measurements, The Feynman Lectures on Physics use idealized Stern–Gerlach apparatuses to explain the basic mathematics of quantum theory. See also Photon polarization Stern–Gerlach Medal German inventors and discoverers References Further reading External links Stern–Gerlach Experiment Java Applet Animation Stern–Gerlach Experiment Flash Model Detailed explanation of the Stern–Gerlach Experiment Animation, applications and research linked to the spin (Université Paris Sud) Wave Mechanics and Stern–Gerlach experiment at MIT OpenCourseWare Quantum measurement Foundational quantum physics Physics experiments Spintronics 1922 in science Articles containing video clips
Stern–Gerlach experiment
[ "Physics", "Materials_science" ]
2,814
[ "Physics experiments", "Spintronics", "Foundational quantum physics", "Quantum mechanics", "Quantum measurement", "Experimental physics", "Condensed matter physics" ]
962,174
https://en.wikipedia.org/wiki/Node%20of%20Ranvier
Nodes of Ranvier, also known as myelin-sheath gaps, occur along a myelinated axon where the axolemma is exposed to the extracellular space. Nodes of Ranvier are uninsulated axonal domains that are highly enriched in sodium and potassium ion channels complexed with cell adhesion molecules, allowing them to participate in the exchange of ions required to regenerate the action potential. Nerve conduction in myelinated axons is referred to as saltatory conduction due to the manner in which the action potential seems to "jump" from one node to the next along the axon. This results in faster conduction of the action potential. The nodes of Ranvier are present in both the peripheral and central nervous systems. Overview The nodes are primarily composed of sodium and potassium voltage-gated ion channels; CAMs such as neurofascin-186 and NrCAM; and cytoskeletal adaptor proteins such as ankyrin-G and spectrinβIV. Many vertebrate axons are surrounded by a myelin sheath, allowing rapid and efficient saltatory ("jumping") propagation of action potentials. The contacts between neurons and glial cells display a very high level of spatial and temporal organization in myelinated fibers. The myelinating glial cells - oligodendrocytes in the central nervous system (CNS), and Schwann cells in the peripheral nervous system (PNS) - are wrapped around the axon, leaving the axolemma relatively uncovered at the regularly spaced nodes of Ranvier. The internodal glial membranes are fused to form compact myelin, whereas the cytoplasm-filled paranodal loops of myelinating cells are spirally wrapped around the axon at both sides of the nodes. This organization demands a tight developmental control and the formation of a variety of specialized zones of contact between different areas of the myelinating cell membrane. Each node of Ranvier is flanked by paranodal regions where helicoidally wrapped glial loops are attached to the axonal membrane by a septate-like junction. The segment between nodes of Ranvier is termed the internode, and its outermost part that is in contact with paranodes is referred to as the juxtaparanodal region. The nodes are encapsulated by microvilli stemming from the outer aspect of the Schwann cell membrane in the PNS, or by perinodal extensions from astrocytes in the CNS. Structure The internodes are the myelin segments and the gaps between are referred to as nodes. The size and the spacing of the internodes vary with the fiber diameter in a curvilinear relationship that is optimized for maximal conduction velocity. The nodes span 1–2 μm in size, whereas the internodes can be up to (and occasionally even greater than) 1.5 millimetres long, depending on the axon diameter and fiber type. The structure of the node and the flanking paranodal regions are distinct from the internodes under the compact myelin sheath, but are very similar in CNS and PNS. The axon is exposed to the extra-cellular environment at the node and is constricted in its diameter. The decreased axon size reflects a higher packing density of neurofilaments in this region, which are less heavily phosphorylated and are transported more slowly. Vesicles and other organelles are also increased at the nodes, which suggests that there is a bottleneck of axonal transport in both directions as well as local axonal-glial signaling. 
When a longitudinal section is made through a myelinating Schwann cell at the node, three distinctive segments are represented: the stereotypic internode, the paranodal region, and the node itself. In the internodal region, the Schwann cell has an outer collar of cytoplasm, a compact myelin sheath, an inner collar of cytoplasm, and the axolemma. At the paranodal regions, the paranodal cytoplasm loops contact thickenings of the axolemma to form septate-like junctions. In the node alone, the axolemma is contacted by several Schwann microvilli and contains a dense cytoskeletal undercoating. Differences in the central and peripheral nervous systems Although freeze fracture studies have revealed that the nodal axolemma in both the CNS and PNS is enriched in intra-membranous particles (IMPs) compared to the internode, there are some structural differences reflecting their cellular constituents. In the PNS, specialized microvilli project from the outer collar of Schwann cells and come very close to the nodal axolemma of large fibers. The projections of the Schwann cells are perpendicular to the node and radiate from the central axons. However, in the CNS, one or more astrocytic processes come into close vicinity of the nodes. Researchers report that these processes stem from multi-functional astrocytes, as opposed to a population of astrocytes dedicated to contacting the node. On the other hand, in the PNS, the basal lamina that surrounds the Schwann cells is continuous across the node. A study suggests that in the CNS, nerve cells individually alter the size of the nodes to tune conduction speeds, leading node length to vary much more across different axons than within one. Composition The nodes of Ranvier contain Na+/Ca2+ exchangers and a high density of voltage-gated Na+ channels that generate action potentials. A sodium channel consists of a pore-forming α subunit and two accessory β subunits, which anchor the channel to extra-cellular and intra-cellular components. The nodes of Ranvier in the central and peripheral nervous systems mostly consist of αNaV1.6 and β1 subunits. The extra-cellular region of β subunits can associate with itself and other proteins, such as tenascin R and the cell-adhesion molecules neurofascin and contactin. Contactin is also present at nodes in the CNS and interaction with this molecule enhances the surface expression of Na+ channels. Ankyrin has been found to be bound to βIV spectrin, a spectrin isoform enriched at nodes of Ranvier and axon initial segments. The PNS nodes are surrounded by Schwann cell microvilli, which contain ERMs and EBP50 that may provide a connection to actin microfilaments. Several extracellular matrix proteins are enriched at nodes of Ranvier, including tenascin-R, Bral-1, and proteoglycan NG2, as well as phosphacan and versican V2. At CNS nodes, the axonal proteins also include contactin; however, different from the PNS, Schwann cell microvilli are replaced by astrocyte perinodal extensions. Molecular organization The molecular organization of the nodes corresponds to their specialized function in impulse propagation. The level of sodium channels in the node versus the internode suggests that the number of IMPs corresponds to sodium channels. Potassium channels are essentially absent in the nodal axolemma, whereas they are highly concentrated in the paranodal axolemma and Schwann cell membranes at the node. 
The exact function of potassium channels have not quite been revealed, but it is known that they may contribute to the rapid repolarization of the action potentials or play a vital role in buffering the potassium ions at the nodes. This highly asymmetric distribution of voltage-gated sodium and potassium channels is in striking contrast to their diffuse distribution in unmyelinated fibers. The filamentous network subjacent to the nodal membrane contains cytoskeletal proteins called spectrin and ankyrin. The high density of ankyrin at the nodes may be functionally significant because several of the proteins that are populated at the nodes share the ability to bind to ankyrin with extremely high affinity. All of these proteins, including ankyrin, are enriched in the initial segment of axons which suggests a functional relationship. Now the relationship of these molecular components to the clustering of sodium channels at the nodes is still not known. Although some cell-adhesion molecules have been reported to be present at the nodes inconsistently; however, a variety of other molecules are known to be highly populated at the glial membranes of the paranodal regions where they contribute to its organization and structural integrity. Development Myelination of nerve fibers The complex changes that the Schwann cell undergoes during the process of myelination of peripheral nerve fibers have been observed and studied by many. The initial envelopment of the axon occurs without interruption along the entire extent of the Schwann cell. This process is sequenced by the in-folding of the Schwann cell surface so that a double membrane of the opposing faces of the in-folded Schwann cell surface is formed. This membrane stretches and spirally wraps itself over and over as the in-folding of the Schwann cell surface continues. As a result, the increase in the thickness of the extension of the myelin sheath in its cross-sectional diameter is easily ascertained. It is also evident that each of the consecutive turns of the spiral increases in size along the length of the axon as the number of turns increase. However, it is not clear whether or not the increase in length of the myelin sheath can be accounted solely by the increase in length of axon covered by each successive turn of the spiral, as previously explained. At the junction of two Schwann cells along an axon, the directions of the lamellar overhang of the myelin endings are of opposite sense. This junction, adjacent of the Schwann cells, constitutes the region designated as the node of Ranvier. Early stages Researchers prove that in the developing CNS, Nav1.2 is initially expressed at all forming nodes of Ranvier. Upon maturation, nodal Nav1.2 is down-regulated and replaced by Nav1.6. Nav1.2 is also expressed during PNS node formation, which suggests that the switching of Nav-channel subtypes is a general phenomenon in the CNS and PNS. In this same investigation, it was shown that Nav1.6 and Nav1.2 colocalize at many nodes of Ranvier during early myelination. This also led to the suggestion that early clusters of Nav1.2 and Nav1.6 channels are destined to later become nodes of Ranvier. Neurofascin is also reported to be one of the first proteins to accumulate at newly forming nodes of Ranvier. They are also found to provide the nucleation site for attachment of ankyrin G, Nav channels, and other proteins. 
The recent identification of the Schwann cell microvilli protein gliomedin as the likely binding partner of axonal neurofascin brings forward substantial evidence for the importance of this protein in recruiting Nav channels to the nodes of Ranvier. Furthermore, Lambert et al. and Eshed et al. also indicates that neurofascin accumulates before Nav channels and is likely to have crucial roles in the earliest events associated with node of Ranvier formation. Thus, multiple mechanisms may exist and work synergistically to facilitate clustering of Nav channels at nodes of Ranvier. Nodal formation The first event appears to be the accumulation of cell adhesion molecules such as NF186 or NrCAM. The intra-cellular regions of these cell-adhesion molecules interact with ankyrin G, which serves as an anchor for sodium channels. In the PNS, this interaction has been elucidated. The Ig superfamily membrane protein NrCAM acts as a pioneer molecule in the formation of the nodes by recruiting ankyrin-G, a mediator protein in the connection of actin-spectrin cytoskeleton to the ion gated channels present at the node. At the same time, the periaxonal extension of the glial cell wraps around the axon, giving rise to the paranodal regions. This movement along the axon contributes significantly to the overall formation of the nodes of Ranvier by permitting heminodes formed at the edges of neighboring glial cells to fuse into complete nodes. Septate-like junctions form at the paranodes with the enrichment of NF155 in glial paranodal loops. Immediately following the early differentiation of the nodal and paranodal regions, potassium channels, Caspr2 and TAG1 accumulate in the juxta-paranodal regions. This accumulation coincides directly with the formation of compact myelin. In mature nodal regions, interactions with the intracellular proteins appear vital for the stability of all nodal regions. In the CNS, oligodendrocytes do not possess microvilli, but appear capable to initiate the clustering of some axonal proteins through secreted factors. The combined effects of such factors with the subsequent movements generated by the wrapping of oligodendrocyte periaxonal extension could account for the organization of CNS nodes of Ranvier. Function Action potential An action potential is a spike of both positive and negative ionic discharge that travels along the membrane of a cell. The creation and conduction of action potentials represents a fundamental means of communication in the nervous system. Action potentials represent rapid reversals in voltage across the plasma membrane of axons. These rapid reversals are mediated by voltage-gated ion channels found in the plasma membrane. The action potential travels from one location in the cell to another, but ion flow across the membrane occurs only at the nodes of Ranvier. As a result, the action potential signal jumps along the axon, from node to node, rather than propagating smoothly, as they do in axons that lack a myelin sheath. The clustering of voltage-gated sodium and potassium ion channels at the nodes permits this behavior. Saltatory conduction Since an axon can be unmyelinated or myelinated, the action potential has two methods to travel down the axon. These methods are referred to as continuous conduction for unmyelinated axons, and saltatory conduction for myelinated axons. Saltatory conduction is defined as an action potential moving in discrete jumps down a myelinated axon. 
This process is outlined as the charge passively spreading to the next node of Ranvier to depolarize it to threshold which will then trigger an action potential in this region which will then passively spread to the next node and so on. Saltatory conduction provides one advantage over conduction that occurs along an axon without myelin sheaths. This is that the increased speed afforded by this mode of conduction assures faster interaction between neurons. On the other hand, depending on the average firing rate of the neuron, calculations show that the energetic cost of maintaining the resting potential of oligodendrocytes can outweigh the energy savings of action potentials. So, axon myelination does not necessarily save energy. Formation regulation Paranode regulation via mitochondria accumulation Mitochondria and other membranous organelles are normally enriched in the PNP region of peripheral myelinated axons, especially those large caliber axons. The actual physiological role of this accumulation and factors that regulate it are not understood; however, it is known that mitochondria are usually present in areas of the cell that expresses a high energy demand. In these same regions, they are also understood to contain growth cones, synaptic terminals, and sites of action potential initiation and regeneration, such as the nodes of Ranvier. In the synaptic terminals, mitochondria produce the ATP needed to mobilize vesicles for neurotransmission. In the nodes of Ranvier, mitochondria serve as an important role in impulse conduction by producing the ATP that is essential to maintain the activity of energy-demanding ion pumps. Supporting this fact, about five times more mitochondria are present in the PNP axoplasm of large peripheral axons than in the corresponding internodal regions of these fibers. Nodal regulation Via αII-Spectrin Saltatory conduction in myelinated axons requires organization of the nodes of Ranvier, whereas voltage-gated sodium channels are highly populated. Studies show that αII-Spectrin, a component of the cytoskeleton is enriched at the nodes and paranodes at early stages and as the nodes mature, the expression of this molecule disappears. It is also proven that αII-Spectrin in the axonal cytoskeleton is absolutely vital for stabilizing sodium channel clusters and organizing the mature node of Ranvier. Possible regulation via the recognition molecule OMgp It has been shown previously that OMgp (oligodendrocyte myelin glycoprotein) clusters at nodes of Ranvier and may regulate paranodal architecture, node length and axonal sprouting at nodes. However, a follow-up study showed that the antibody used previously to identify OMgp at nodes crossreacts with another node-enriched component versican V2 and that OMgp is not required for the integrity of nodes and paranodes, arguing against the previously reported localization and proposed functions of OMgp at nodes. Clinical significance The proteins in these excitable domains of neuron when injured may result in cognitive disorders and various neuropathic ailments. History The myelin sheath of long nerves was discovered and named by German pathological anatomist Rudolf Virchow in 1854. French pathologist and anatomist Louis-Antoine Ranvier later discovered the nodes, or gaps, in the myelin sheath that now bear his name. Born in Lyon, Ranvier was one of the most prominent histologists of the late 19th century. Ranvier abandoned pathological studies in 1867 and became an assistant of physiologist Claude Bernard. 
He was the chairman of General Anatomy at the Collège de France in 1875. Ranvier discovered the nodes in 1878. Using staining techniques developed by Ludwig Mauthner, he noticed that myelinated axons were only stained at regular intervals, leading to the discovery of the nodes. Reportedly, he dismissed the idea of nodes in the CNS although their existence was proven later. His refined histological techniques and his work on both injured and normal nerve fibers became world-renowned. His observations on fiber nodes and the degeneration and regeneration of cut fibers had a great influence on Parisian neurology at the Salpêtrière. Soon afterwards, he discovered gaps in sheaths of nerve fibers, which were later called the Nodes of Ranvier. This discovery later led Ranvier to careful histological examination of myelin sheaths and Schwann cells. Additional images See also Internodal segment Schwann cell Oligodendrocyte Myelin References External links Cell Centered Database – Node of Ranvier – "PNS, nerve (LM, Medium)" Membrane biology Neurohistology Signal transduction
Node of Ranvier
[ "Chemistry", "Biology" ]
3,992
[ "Membrane biology", "Signal transduction", "Molecular biology", "Biochemistry", "Neurochemistry" ]
962,501
https://en.wikipedia.org/wiki/Floating%20car%20data
Floating car data (FCD) in traffic engineering and management is typically timestamped geo-localization and speed data directly collected by moving vehicles, in contrast to traditional traffic data collected at a fixed location by a stationary device or observer. In a physical interpretation context, FCD provides a Lagrangian description of the vehicle movements whereas stationary devices provide an Eulerian description. The participating vehicle acts itself consequently as a moving sensor using an onboard GPS receiver or cellular phone. The most common and widespread use of FCD is to determine the traffic speed on the road network. Based on these data, traffic congestion can be identified, travel times can be calculated, and traffic reports can be rapidly generated. In contrast to stationary devices such as traffic cameras, number plate recognition systems, and induction loops embedded in the roadway, no additional hardware on the road network is necessary. Floating cellular data Floating cellular data is one of the methods to collect floating car data. This method uses cellular network data (CDMA, GSM, UMTS, GPRS). No special devices/hardware are necessary: every switched-on mobile phone becomes a traffic probe and is as such an anonymous source of information. The location of the mobile phone is determined using (1) triangulation or (2) the hand-over data stored by the network operator. As GSM localisation is less accurate than GPS based systems, many phones must be tracked and complex algorithms used to extract high-quality data. For example, care must be taken not to misinterpret cellular phones on a high speed railway track near the road as incredibly fast journeys along the road. However, the more congestion, the more cars, the more phones and thus more probes. In metropolitan areas where traffic data are most needed the distance between cell sites is lower and thus precision increases. Advantages over GPS-based or conventional methods such as cameras or street embedded sensors include: No infrastructure or hardware in cars or along the road. It is much less expensive, offers more coverage of more streets, it is faster to set up (no work zones) and needs less maintenance. In 2007, GDOT demonstrated in Atlanta that such system can emulate very well road sensors data for section speeds. A 2007 study by GMU investigated the relationship between vehicle free flow speed and geometric variables on urban street segments using FCD. Vehicle re-identification Vehicle re-identification methods require sets of detectors mounted along the road. In this technique, a unique serial number for a device in the vehicle is detected at one location and then detected again (re-identified) further down the road. Travel times and speed are calculated by comparing the time at which a specific device is detected by pairs of sensors. This can be done using the MAC addresses from Bluetooth devices, or using the radio-frequency identification (RFID) serial numbers from Electronic Toll Collection (ETC) transponders (also called "toll tags"). The ETC transponders, which are uniquely identifiable, may be read not only at toll collection points (e.g. toll bridges) but also at many non-toll locations. This is used as a method to collect traffic flow data (which is anonymized) for the San Francisco Bay Area's 5-1-1 service. In New York City's Midtown in Motion program, its adaptive traffic control system also use RFID readers to track movement of E-ZPass tags as a means of monitoring traffic flow. 
The data is fed through the government-dedicated broadband wireless infrastructure to the traffic management center to be used in adaptive traffic control of the traffic lights. Global Positioning System A small number of cars (typically fleet vehicles such as courier services and taxi drivers) are equipped with a box that contains a GPS receiver. The data are then communicated with the service provider using the regular on-board radio unit or via cellular network data (more expensive). It is possible that FCD could be used as a surveillance method, although the companies deploying FCD systems give assurances that all data are anonymized in their systems, or kept sufficiently secure to prevent abuses. See also Traffic count References Advanced driver assistance systems Intelligent transportation systems Speed sensors Surveillance Transportation engineering
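The re-identification approach described above (matching the same anonymized device identifier at two detector sites and differencing the timestamps) can be sketched in a few lines of Python. The data format, identifiers, and sensor spacing below are hypothetical assumptions made for the illustration; they are not drawn from 5-1-1, Midtown in Motion, or any other deployed system.

```python
# Illustrative sketch of travel-time estimation by vehicle re-identification.
# Each detection is a (device_id, timestamp_seconds) pair; device_id might be a
# hashed Bluetooth MAC address or an anonymized toll-tag serial (hypothetical format).

def travel_times(upstream, downstream, max_gap=3600):
    """Match device IDs seen at both sites; return travel times in seconds."""
    first_seen = {}
    for device_id, t in upstream:
        first_seen.setdefault(device_id, t)       # keep earliest upstream detection
    times = []
    for device_id, t in downstream:
        t0 = first_seen.get(device_id)
        if t0 is not None and 0 < t - t0 <= max_gap:
            times.append(t - t0)
    return times

# Hypothetical data: three devices pass sensor A, two of them later reach sensor B.
sensor_a = [("dev1", 0), ("dev2", 10), ("dev3", 20)]
sensor_b = [("dev1", 120), ("dev3", 200)]

tt = travel_times(sensor_a, sensor_b)
segment_km = 2.0                                   # assumed spacing between the sensors
speeds_kmh = [segment_km / (t / 3600.0) for t in tt]
print(tt, [round(v) for v in speeds_kmh])          # [120, 180] [60, 40]
```

Averaging such segment speeds over many matched devices (and discarding implausible matches, for example devices travelling on a nearby rail line) gives the section speeds used in traffic reports.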
Floating car data
[ "Technology", "Engineering" ]
854
[ "Transport systems", "Measuring instruments", "Industrial engineering", "Information systems", "Transportation engineering", "Civil engineering", "Warning systems", "Speed sensors", "Intelligent transportation systems" ]
559,066
https://en.wikipedia.org/wiki/Cobra%20probe
A Cobra probe is a device for measuring the pressure and velocity components of a moving fluid. It is a multi-holed pressure probe with the rotational axis of the probe shaft coplanar with the measurement plane of the instrument. Because of this geometry, when the instrument is rotated around the shaft's axis, the measurement elements of the probe remain in the same location. The name cobra probe comes from the shape of the probe head, which gives it this property. Cobra probes come in three-, four-, and five-hole configurations: the first is used for two-dimensional flow measurement, the latter two for three-dimensional flow measurement. In the three-hole kind of instrument, two yaw-direction tubes are chamfered and silver-soldered symmetrically on the two sides of a pitot tube. It is otherwise similar to other kinds of yawmeters. In the four- and five-hole configurations, the central pitot tube is surrounded by three or four chamfered tubes, respectively. References Hydraulic engineering
Cobra probe
[ "Physics", "Engineering", "Environmental_science" ]
215
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
559,155
https://en.wikipedia.org/wiki/Metric%20modulation
In music, metric modulation is a change in pulse rate (tempo) and/or pulse grouping (subdivision) which is derived from a note value or grouping heard before the change. Examples of metric modulation may include changes in time signature across an unchanging tempo, but the concept applies more specifically to shifts from one time signature/tempo (metre) to another, wherein a note value from the first is made equivalent to a note value in the second, like a pivot or bridge. The term "modulation" invokes the analogous and more familiar term in analyses of tonal harmony, wherein a pitch or pitch interval serves as a bridge between two keys. In both terms, the pivoting value functions differently before and after the change, but sounds the same, and acts as an audible common element between them. Metric modulation was first described by Richard Franko Goldman while reviewing the Cello Sonata of Elliott Carter, who prefers to call it tempo modulation. Another synonymous term is proportional tempi. Determination of the new tempo The following formula illustrates how to determine the tempo before or after a metric modulation, or, alternatively, how many of the associated note values will be in each measure before or after the modulation: Thus if the two half notes in time at a tempo of quarter note = 84 are made equivalent with three half notes at a new tempo, that tempo will be: Example taken from Carter's Eight Etudes and a Fantasy for woodwind quartet (1950), Fantasy, mm. 16-17. Note that this tempo, quarter note = 126, is equal to dotted-quarter note = 84 (( = ) = ( = )). A tempo (or metric) modulation causes a change in the hierarchical relationship between the perceived beat subdivision and all potential subdivisions belonging to the new tempo. Benadon has explored some compositional uses of tempo modulations, such as tempo networks and beat subdivision spaces. Three challenges arise when performing metric modulations: Grouping notes of the same speed differently on each side of the barline, ex: (quintuplet =sextuplet ) with sixteenth notes before and after the barline Subdivision used on one side of the barline and not the other, ex: (triplet =) with triplets before and quarter notes after the barline Subdivision used on neither side of the barline but used to establish the modulation, ex: (quintuplet =) with quarter notes before and after the barline Examples of the use of metric modulation include Carter's Cello Sonata (1948), A Symphony of Three Orchestras (1976), and Björk's "Desired Constellation" (=). Beethoven used metric modulation in his Trio for 2 oboes & English horn, Op. 87, 1794. Score notation Metric modulations are generally notated as 'note value' = 'note value'. For example, = This notation is also normally followed by the new tempo in parentheses. Before the modern concept and notation of metric modulations composers used the terms doppio piu mosso and doppio piu lento for double and half-speed, and later markings such as: (Adagio)=(Allegro) indicating double speed, which would now be marked (=). The phrase l'istesso tempo was used for what may now be notated with metric modulation markings. For example: to (), will be marked l'istesso tempo, indicating the beat is the same speed. See also Tuplet References Sources Further reading Arlin, Mary I. (2000). "Metric Mutation and Modulation: The Nineteenth-Century Speculations of F.-J. Fétis". Journal of Music Theory 44, no. 2 (Fall): 261–322. Bernard, Jonathan W. (1988). "The Evolution of Elliott Carter's Rhythmic Practice". 
Perspectives of New Music 26, no. 2: (Summer): 164–203. Braus, Ira Lincoln (1994). "An Unwritten Metrical Modulation in Brahms's Intermezzo in E minor, op. 119, no. 2". Brahms Studies 1:161–169. Everett, Walter (2009). "Any Time at All: The Beatles' Free Phrase Rhythms". In The Cambridge Companion to the Beatles, edited by Kenneth Womack, 183–199. Cambridge and New York: Cambridge University Press. (cloth); (pbk). Reese, Kirsten (1999). "Ruhelos: Annäherung an Johanna Magdalena Beyer". MusikTexte: Zeitschrift für Neue Musik, nos. 81–82 (December) 6–15. External links , Conor Guilfoyle , Conor Guilfoyle Musical techniques Rhythm and meter
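The tempo relation that the missing formula above expresses can be reconstructed from the worked figures in the text (two half notes at quarter note = 84 made equivalent to three half notes, giving quarter note = 126). The LaTeX below is a sketch of that relation, not the article's original typesetting:

% The pivot note value keeps its absolute speed across the modulation, so the
% tempos are in the same ratio as the number of pivot values per measure.
\[
\text{new tempo} \;=\; \text{old tempo} \times
\frac{\text{number of pivot note values in the new measure}}
     {\text{number of pivot note values in the old measure}}
\]
% Worked example from the text: two half notes at quarter note = 84 are made
% equivalent to three half notes, so
\[
84 \times \frac{3}{2} = 126 .
\]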
Metric modulation
[ "Physics" ]
972
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
559,220
https://en.wikipedia.org/wiki/Captopril
Captopril, sold under the brand name Capoten among others, is an angiotensin-converting enzyme (ACE) inhibitor used for the treatment of hypertension and some types of congestive heart failure. Captopril was the first oral ACE inhibitor found for the treatment of hypertension. It does not cause fatigue as associated with beta-blockers. Due to the adverse drug event of causing hyperkalemia, as seen with most ACE Inhibitors, the medication is usually paired with a diuretic. Captopril was patented in 1976 and approved for medical use in 1980. Structure–activity relationship Captopril has an L-proline group which allows it to be more bioavailable in oral formulations. The thiol moiety within the molecule has been associated with two significant adverse effects: the hapten or immune response. This immune response, also known as agranulocytosis, can explain the adverse drug events which may be seen in captopril with the allergic response which includes hives, severe stomach pain, difficulty breathing, swelling of the face, lips, tongue or throat. In terms of interaction with the enzyme, the molecule's thiol moiety will attach to the binding site of the ACE enzyme. This will inhibit the port at which the angiotensin-1 molecule would normally bind, therefore inhibiting the downstream effects within the renin-angiotensin system. Medical uses Captopril's main uses are based on its vasodilation and inhibition of some renal function activities. These benefits are most clearly seen in: Hypertension Cardiac conditions such as congestive heart failure and after myocardial infarction Preservation of kidney function in diabetic nephropathy. Additionally, it has shown mood-elevating properties in some patients. This is consistent with the observation that animal screening models indicate putative antidepressant activity for this compound, although one study has been negative. Formal clinical trials in depressed patients have not been reported. It has also been investigated for use in the treatment of cancer. Captopril stereoisomers were also reported to inhibit some metallo-β-lactamases. Adverse effects Adverse effects of captopril include cough due to increase in the plasma levels of bradykinin, angioedema, agranulocytosis, proteinuria, hyperkalemia, taste alteration, teratogenicity, postural hypotension, acute renal failure, and leukopenia. Except for postural hypotension, which occurs due to the short and fast mode of action of captopril, most of the side effects mentioned are common for all ACE inhibitors. Among these, cough is the most common adverse effect. Hyperkalemia can occur, especially if used with other drugs which elevate potassium level in blood, such as potassium-sparing diuretics. Other side effects are: Itching Headache Tachycardia Chest pain Palpitations Dysgeusia Weakness The adverse drug reaction (ADR) profile of captopril is similar to other ACE inhibitors, with cough being the most common ADR. However, captopril is also commonly associated with rash and taste disturbances (metallic or loss of taste), which are attributed to the unique thiol moiety. Overdose ACE inhibitor overdose can be treated with naloxone. History In the late 1960s, John Vane of the Royal College of Surgeons of England was working on mechanisms by which the body regulates blood pressure. He was joined by Sérgio Henrique Ferreira of Brazil, who had been studying the venom of a Brazilian pit viper, the jararaca (Bothrops jararaca), and brought a sample of the viper's venom. 
Vane's team found that one of the venom's peptides selectively inhibited the action of angiotensin-converting enzyme (ACE), which was thought to function in blood pressure regulation; the snake venom functions by severely depressing blood pressure. During the 1970s, ACE was found to elevate blood pressure by controlling the release of water and salts from the kidneys. Captopril, an analog of the snake venom's ACE-inhibiting peptide, was first synthesized in 1975 by three researchers at the U.S. drug company E.R. Squibb & Sons Pharmaceuticals (now Bristol-Myers Squibb): Miguel Ondetti, Bernard Rubin, and David Cushman. Squibb filed for U.S. patent protection on the drug in February 1976, which was granted in September 1977, and captopril was approved for medical use in 1980. It was the first ACE inhibitor developed and was considered a breakthrough both because of its mechanism of action and also because of the development process. In the 1980s, Vane received the Nobel prize and was knighted for his work and Ferreira received the National Order of Scientific Merit from Brazil. The development of captopril was among the earliest successes of the revolutionary concept of structure-based drug design. The renin–angiotensin–aldosterone system had been extensively studied in the mid-20th century, and this system presented several opportune targets in the development of novel treatments for hypertension. The first two targets that were attempted were renin and ACE. Captopril was the culmination of efforts by Squibb's laboratories to develop an ACE inhibitor. Ondetti, Cushman, and colleagues built on work that had been done in the 1960s by a team of researchers led by John Vane at the Royal College of Surgeons of England. The first breakthrough was made by Kevin K.F. Ng in 1967, when he found the conversion of angiotensin I to angiotensin II took place in the pulmonary circulation instead of in the plasma. In contrast, Sergio Ferreira found bradykinin disappeared in its passage through the pulmonary circulation. The conversion of angiotensin I to angiotensin II and the inactivation of bradykinin were thought to be mediated by the same enzyme. In 1970, using bradykinin potentiating factor (BPF) provided by Sergio Ferreira, Ng and Vane found the conversion of angiotensin I to angiotensin II was inhibited during its passage through the pulmonary circulation. BPF was later found to be a peptide in the venom of a lancehead viper (Bothrops jararaca), which was a “collected-product inhibitor” of the converting enzyme. Captopril was developed from this peptide after it was found via QSAR-based modification that the terminal sulfhydryl moiety of the peptide provided a high potency of ACE inhibition. Captopril gained FDA approval on April 6, 1981. The drug became a generic medicine in the U.S. in February 1996, when the market exclusivity held by Bristol-Myers Squibb for captopril expired. Chemical synthesis A chemical synthesis of captopril by treatment of L-proline with (2S)-3-acetylthio-2-methylpropanoyl chloride under basic conditions (NaOH), followed by aminolysis of the protective acetyl group to unmask the drug's free thiol, is depicted in the figure at right. Procedure 2 taken out of patent US4105776. See examples 28, 29a and 36. Mechanism of action Captopril blocks the conversion of angiotensin I to angiotensin II and prevents the degradation of vasodilatory prostaglandins, thereby inhibiting vasoconstriction and promoting systemic vasodilation. 
Pharmacokinetics Unlike the majority of ACE inhibitors, captopril is not administered as a prodrug (the only other being lisinopril). About 70% of orally administered captopril is absorbed. Bioavailability is reduced by presence of food in stomach. It is partly metabolised and partly excreted unchanged in urine. Captopril also has a relatively poor pharmacokinetic profile. The short half-life necessitates dosing two or three times per day, which may reduce patient compliance. Captopril has a short half-life of 2–3 hours and a duration of action of 12–24 hours. See also Captopril challenge test Captopril suppression test References External links U.S. Patent 4,046,889 The story of the discovery of Captopril drugdesign.org ACE inhibitors Carboxamides Carboxylic acids Enantiopure drugs Pyrrolidines Thiols Drugs developed by Bristol Myers Squibb Daiichi Sankyo
Captopril
[ "Chemistry" ]
1,785
[ "Stereochemistry", "Thiols", "Functional groups", "Enantiopure drugs", "Carboxylic acids", "Organic compounds" ]
559,764
https://en.wikipedia.org/wiki/Roentgen%20equivalent%20man
The roentgen equivalent man (rem) is a CGS unit of equivalent dose, effective dose, and committed dose, which are dose measures used to estimate potential health effects of low levels of ionizing radiation on the human body. Quantities measured in rem are designed to represent the stochastic biological risk of ionizing radiation, which is primarily radiation-induced cancer. These quantities are derived from absorbed dose, which in the CGS system has the unit rad. There is no universally applicable conversion constant from rad to rem; the conversion depends on relative biological effectiveness (RBE). The rem has been defined since 1976 as equal to 0.01 sievert, which is the more commonly used SI unit outside the United States. Earlier definitions going back to 1945 were derived from the roentgen unit, which was named after Wilhelm Röntgen, a German scientist who discovered X-rays. The unit name is misleading, since 1 roentgen actually deposits about 0.96 rem in soft biological tissue, when all weighting factors equal unity. Older units of rem following other definitions are up to 17% smaller than the modern rem. Doses greater than 100 rem received over a short time period are likely to cause acute radiation syndrome (ARS), possibly leading to death within weeks if left untreated. Note that the quantities measured in rem were not designed to be correlated with ARS symptoms; the absorbed dose, measured in rad, is a better indicator of ARS. A rem is a large dose of radiation, so the millirem (mrem), which is one thousandth of a rem, is often used for the dosages commonly encountered, such as the amount of radiation received from medical x-rays and background sources. Usage The rem and millirem are the CGS units in widest use among the U.S. public, industry, and government. However, the SI unit, the sievert (Sv), is the normal unit outside the United States; it is increasingly encountered within the US in academic, scientific, and engineering environments and has now virtually replaced the rem there. The conventional unit for dose rate is mrem/h. Regulatory limits and chronic doses are often given in units of mrem/yr or rem/yr, where they are understood to represent the total amount of radiation allowed (or received) over the entire year. In many occupational scenarios, the hourly dose rate might fluctuate to levels thousands of times higher for a brief period of time without infringing on the annual total exposure limits. The annual conversions to a Julian year are: 1 mrem/h = 8,766 mrem/yr; 0.1141 mrem/h = 1,000 mrem/yr. The International Commission on Radiological Protection (ICRP) once adopted fixed conversions for occupational exposure, although these have not appeared in recent documents: 8 h = 1 day, 40 h = 1 week, 50 weeks = 1 yr. Therefore, for occupational exposures of that time period, 1 mrem/h = 2,000 mrem/yr and 0.5 mrem/h = 1,000 mrem/yr. The U.S. National Institute of Standards and Technology (NIST) strongly discourages Americans from expressing doses in rem, recommending the SI unit instead. The NIST recommends defining the rem in relation to the SI in every document where this unit is used. Health effects Ionizing radiation has deterministic and stochastic effects on human health. The deterministic effects that can lead to acute radiation syndrome only occur in the case of high doses (> ~10 rad or > 0.1 Gy) and high dose rates (> ~10 rad/h or > 0.1 Gy/h). 
A model of deterministic risk would require different weighting factors (not yet established) from those used in the calculation of equivalent and effective dose. To avoid confusion, deterministic effects are normally compared to absorbed dose in units of rad, not rem. Stochastic effects are those that occur randomly, such as radiation-induced cancer. The consensus of the nuclear industry, nuclear regulators, and governments is that the incidence of cancers caused by ionizing radiation can be modeled as increasing linearly with effective dose at a rate of 0.055% per rem (5.5%/Sv). Individual studies, alternate models, and earlier versions of the industry consensus have produced other risk estimates scattered around this consensus model. There is general agreement that the risk is much higher for infants and fetuses than adults, higher for the middle-aged than for seniors, and higher for women than for men, though there is no quantitative consensus about this. There is much less data, and much more controversy, regarding the possibility of cardiac and teratogenic effects, and the modelling of internal dose. The ICRP recommends limiting artificial irradiation of the public to an average of 100 mrem (1 mSv) of effective dose per year, not including medical and occupational exposures. For comparison, radiation levels inside the United States Capitol are 85 mrem/yr (0.85 mSv/yr), close to the regulatory limit, because of the uranium content of the granite structure. The NRC sets the annual total effective dose of full-body radiation, or total body radiation (TBR), allowed for radiation workers at 5,000 mrem (5 rem). History The concept of the rem first appeared in the literature in 1945 and was given its first definition in 1947. The definition was refined in 1950 as "that dose of any ionizing radiation which produces a relevant biological effect equal to that produced by one roentgen of high-voltage x-radiation." Using data available at the time, the rem was variously evaluated as 83, 93, or 95 erg/gram. Along with the introduction of the rad in 1953, the ICRP decided to continue the use of the rem. The US National Committee on Radiation Protection and Measurements noted in 1954 that this effectively implied an increase in the magnitude of the rem to match the rad (100 erg/gram). The ICRP introduced and then officially adopted the rem in 1962 as the unit of equivalent dose to measure the way different types of radiation distribute energy in tissue and began recommending values of relative biological effectiveness (RBE) for various types of radiation. In practice, the unit of rem was used to denote that an RBE factor had been applied to a number which was originally in units of rad or roentgen. The International Committee for Weights and Measures (CIPM) adopted the sievert in 1980 but never accepted the use of the rem. The NIST recognizes that this unit is outside the SI but temporarily accepts its use in the U.S. with the SI. The rem remains in widespread use as an industry standard in the U.S. The United States Nuclear Regulatory Commission still permits the use of the units curie, rad, and rem alongside SI units. Radiation-related quantities The following table shows radiation quantities in SI and non-SI units: See also Roentgen equivalent physical Banana equivalent dose Health threat from cosmic rays Orders of magnitude (radiation) References Units of radiation dose Radiation health effects Radiobiology Non-SI metric units Equivalent Equivalent units
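The dose-rate conversions and the linear risk coefficient quoted above can be combined into a small calculation. The function names below are illustrative; the constants simply restate the Julian-year (8,766 h) and historical occupational (2,000 h) conversions and the 0.055% per rem consensus coefficient given in the text.

# Sketch of the rem conversions quoted above; names are illustrative only.
JULIAN_YEAR_HOURS = 8766.0        # continuous exposure: 1 mrem/h -> 8,766 mrem/yr
OCCUPATIONAL_YEAR_HOURS = 2000.0  # historical ICRP convention: 8 h x 5 d x 50 wk
RISK_PER_REM = 0.00055            # consensus stochastic risk: 0.055% per rem

def annual_dose_mrem(rate_mrem_per_h, hours_per_year=JULIAN_YEAR_HOURS):
    """Convert a constant dose rate in mrem/h into an annual dose in mrem/yr."""
    return rate_mrem_per_h * hours_per_year

def excess_cancer_risk(dose_mrem):
    """Linear no-threshold estimate of stochastic risk for a dose given in mrem."""
    return (dose_mrem / 1000.0) * RISK_PER_REM  # mrem -> rem, then apply 0.055%/rem

print(annual_dose_mrem(1.0))                           # 8766.0 mrem/yr (Julian year)
print(annual_dose_mrem(0.5, OCCUPATIONAL_YEAR_HOURS))  # 1000.0 mrem/yr (occupational)
print(excess_cancer_risk(100.0))                       # 100 mrem -> 5.5e-05 (0.0055%)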
Roentgen equivalent man
[ "Chemistry", "Materials_science", "Mathematics", "Biology" ]
1,526
[ "Radiation health effects", "Equivalent quantities", "Units of measurement", "Non-SI metric units", "Radiobiology", "Quantity", "Units of radiation dose", "Equivalent units", "Radiation effects", "Radioactivity" ]
560,502
https://en.wikipedia.org/wiki/Chromosomal%20translocation
In genetics, chromosome translocation is a phenomenon that results in unusual rearrangement of chromosomes. This includes balanced and unbalanced translocation, with two main types: reciprocal, and Robertsonian translocation. Reciprocal translocation is a chromosome abnormality caused by exchange of parts between non-homologous chromosomes. Two detached fragments of two different chromosomes are switched. Robertsonian translocation occurs when two non-homologous chromosomes get attached, meaning that given two healthy pairs of chromosomes, one of each pair "sticks" and blends together homogeneously. A gene fusion may be created when the translocation joins two otherwise-separated genes. It is detected on cytogenetics or a karyotype of affected cells. Translocations can be balanced (in an even exchange of material with no genetic information extra or missing, and ideally full functionality) or unbalanced (where the exchange of chromosome material is unequal resulting in extra or missing genes). Reciprocal translocations Reciprocal translocations are usually an exchange of material between non-homologous chromosomes and occur in about 1 in 491 live births. Such translocations are usually harmless, as they do not result in a gain or loss of genetic material, though they may be detected in prenatal diagnosis. However, carriers of balanced reciprocal translocations may create gametes with unbalanced chromosome translocations during meiotic chromosomal segregation. This can lead to infertility, miscarriages or children with abnormalities. Genetic counseling and genetic testing are often offered to families that may carry a translocation. Most balanced translocation carriers are healthy and do not have any symptoms. It is important to distinguish between chromosomal translocations that occur in germ cells, due to errors in meiosis (i.e. during gametogenesis), and those that occur in somatic cells, due to errors in mitosis. The former results in a chromosomal abnormality featured in all cells of the offspring, as in translocation carriers. Somatic translocations, on the other hand, result in abnormalities featured only in the affected cell and its progenitors, as in chronic myelogenous leukemia with the Philadelphia chromosome translocation. Nonreciprocal translocation Nonreciprocal translocation involves the one-way transfer of genes from one chromosome to another nonhomologous chromosome. Robertsonian translocations Robertsonian translocation is a type of translocation caused by breaks at or near the centromeres of two acrocentric chromosomes. The reciprocal exchange of parts gives rise to one large metacentric chromosome and one extremely small chromosome that may be lost from the organism with little effect because it contains few genes. The resulting karyotype in humans leaves only 45 chromosomes, since two chromosomes have fused together. This has no direct effect on the phenotype, since the only genes on the short arms of acrocentrics are common to all of them and are present in variable copy number (nucleolar organiser genes). Robertsonian translocations have been seen involving all combinations of acrocentric chromosomes. The most common translocation in humans involves chromosomes 13 and 14 and is seen in about 0.97 / 1000 newborns. Carriers of Robertsonian translocations are not associated with any phenotypic abnormalities, but there is a risk of unbalanced gametes that lead to miscarriages or abnormal offspring. 
For example, carriers of Robertsonian translocations involving chromosome 21 have a higher risk of having a child with Down syndrome. This is known as a 'translocation Downs'. This is due to a mis-segregation (nondisjunction) during gametogenesis. The mother has a higher (10%) risk of transmission than the father (1%). Robertsonian translocations involving chromosome 14 also carry a slight risk of uniparental disomy 14 due to trisomy rescue. Role in disease Some human diseases caused by translocations are: Cancer: Several forms of cancer are caused by acquired translocations (as opposed to those present from conception); this has been described mainly in leukemia (acute myelogenous leukemia and chronic myelogenous leukemia). Translocations have also been described in solid malignancies such as Ewing's sarcoma. Infertility: One of the would-be parents carries a balanced translocation, where the parent is asymptomatic but conceived fetuses are not viable. Down syndrome is caused in a minority (5% or less) of cases by a Robertsonian translocation of the chromosome 21 long arm onto the long arm of chromosome 14. Chromosomal translocations between the sex chromosomes can also result in a number of genetic conditions, such as XX male syndrome: caused by a translocation of the SRY gene from the Y to the X chromosome By chromosome Denotation The International System for Human Cytogenetic Nomenclature (ISCN) is used to denote a translocation between chromosomes. The designation t(A;B)(p1;q2) is used to denote a translocation between chromosome A and chromosome B. The information in the second set of parentheses, when given, gives the precise location within the chromosome for chromosomes A and B respectively—with p indicating the short arm of the chromosome, q indicating the long arm, and the numbers after p or q refers to regions, bands and sub-bands seen when staining the chromosome with a staining dye. See also the definition of a genetic locus. The translocation is the mechanism that can cause a gene to move from one linkage group to another. Examples of translocations on human chromosomes History In 1938, Karl Sax, at the Harvard University Biological Laboratories, published a paper entitled "Chromosome Aberrations Induced by X-rays", which demonstrated that radiation could induce major genetic changes by affecting chromosomal translocations. The paper is thought to mark the beginning of the field of radiation cytology, and led him to be called "the father of radiation cytology". DNA double-strand break repair The initiating event in the formation of a translocation is generally a double-strand break in chromosomal DNA. A type of DNA repair that has a major role in generating chromosomal translocations is the non-homologous end joining pathway. When this pathway functions appropriately it restores a DNA double-strand break by reconnecting the originally broken ends, but when it acts inappropriately it may join ends incorrectly resulting in genomic rearrangements including translocations. In order for the illegitimate joining of broken ends to occur, the exchange partners DNAs need to be physically close to each other in the 3D genome. See also Accipitridae Aneuploidy Chromosome abnormalities DbCRID Fusion gene Pseudodiploid Takifugu rubripes References External links Chromosomal abnormalities Cytogenetics Modification of genetic information
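The ISCN designation described above, t(A;B)(p1;q2), is regular enough that its basic form can be parsed mechanically. The pattern and names below are an illustrative sketch covering only this simple reciprocal shape, not the full ISCN grammar.

# Illustrative parser for simple reciprocal-translocation designations of the
# form t(A;B)(p1;q2) described above; it does not cover the full ISCN syntax.
import re

TRANSLOCATION = re.compile(
    r"t\((?P<chr_a>[0-9XY]+);(?P<chr_b>[0-9XY]+)\)"         # the two chromosomes
    r"\((?P<break_a>[pq][\d.]+);(?P<break_b>[pq][\d.]+)\)"   # arm + band of each breakpoint
)

def parse_translocation(designation):
    """Return the chromosomes and breakpoints of a t(A;B)(p1;q2) designation."""
    match = TRANSLOCATION.fullmatch(designation.replace(" ", ""))
    if match is None:
        raise ValueError(f"not a simple t(A;B)(p1;q2) designation: {designation!r}")
    return {
        "chromosomes": (match["chr_a"], match["chr_b"]),
        "breakpoints": (match["break_a"], match["break_b"]),
    }

# The Philadelphia chromosome translocation mentioned above is commonly written
# t(9;22)(q34;q11); it is used here purely as a format example.
print(parse_translocation("t(9;22)(q34;q11)"))
# {'chromosomes': ('9', '22'), 'breakpoints': ('q34', 'q11')}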
Chromosomal translocation
[ "Biology" ]
1,468
[ "Modification of genetic information", "Molecular genetics" ]
561,585
https://en.wikipedia.org/wiki/Master%20theorem%20%28analysis%20of%20algorithms%29
In the analysis of algorithms, the master theorem for divide-and-conquer recurrences provides an asymptotic analysis for many recurrence relations that occur in the analysis of divide-and-conquer algorithms. The approach was first presented by Jon Bentley, Dorothea Blostein (née Haken), and James B. Saxe in 1980, where it was described as a "unifying method" for solving such recurrences. The name "master theorem" was popularized by the widely used algorithms textbook Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein. Not all recurrence relations can be solved by this theorem; its generalizations include the Akra–Bazzi method. Introduction Consider a problem that can be solved using a recursive algorithm such as the following: procedure p(input x of size n): if n < some constant k: Solve x directly without recursion else: Create a subproblems of x, each having size n/b Call procedure p recursively on each subproblem Combine the results from the subproblems The above algorithm divides the problem into a number of subproblems recursively, each subproblem being of size . Its solution tree has a node for each recursive call, with the children of that node being the other calls made from that call. The leaves of the tree are the base cases of the recursion, the subproblems (of size less than k) that do not recurse. The above example would have child nodes at each non-leaf node. Each node does an amount of work that corresponds to the size of the subproblem passed to that instance of the recursive call and given by . The total amount of work done by the entire algorithm is the sum of the work performed by all the nodes in the tree. The runtime of an algorithm such as the above on an input of size , usually denoted , can be expressed by the recurrence relation where is the time to create the subproblems and combine their results in the above procedure. This equation can be successively substituted into itself and expanded to obtain an expression for the total amount of work done. The master theorem allows many recurrence relations of this form to be converted to Θ-notation directly, without doing an expansion of the recursive relation. Generic form The master theorem always yields asymptotically tight bounds to recurrences from divide and conquer algorithms that partition an input into smaller subproblems of equal sizes, solve the subproblems recursively, and then combine the subproblem solutions to give a solution to the original problem. The time for such an algorithm can be expressed by adding the work that they perform at the top level of their recursion (to divide the problems into subproblems and then combine the subproblem solutions) together with the time made in the recursive calls of the algorithm. If denotes the total time for the algorithm on an input of size , and denotes the amount of time taken at the top level of the recurrence then the time can be expressed by a recurrence relation that takes the form: Here is the size of an input problem, is the number of subproblems in the recursion, and is the factor by which the subproblem size is reduced in each recursive call (). Crucially, and must not depend on . The theorem below also assumes that, as a base case for the recurrence, when is less than some bound , the smallest input size that will lead to a recursive call. Recurrences of this form often satisfy one of the three following regimes, based on how the work to split/recombine the problem relates to the critical exponent . 
(The table below uses standard big O notation). A useful extension of Case 2 handles all values of : Examples Case 1 example As one can see from the formula above: , so , where Next, we see if we satisfy the case 1 condition: . It follows from the first case of the master theorem that (This result is confirmed by the exact solution of the recurrence relation, which is , assuming ). Case 2 example As we can see in the formula above the variables get the following values: where Next, we see if we satisfy the case 2 condition: , and therefore, c and are equal So it follows from the second case of the master theorem: Thus the given recurrence relation was in . (This result is confirmed by the exact solution of the recurrence relation, which is , assuming ). Case 3 example As we can see in the formula above the variables get the following values: , where Next, we see if we satisfy the case 3 condition: , and therefore, yes, The regularity condition also holds: , choosing So it follows from the third case of the master theorem: Thus the given recurrence relation was in , that complies with the of the original formula. (This result is confirmed by the exact solution of the recurrence relation, which is , assuming .) Inadmissible equations The following equations cannot be solved using the master theorem: a is not a constant; the number of subproblems should be fixed non-polynomial difference between and (see below; extended version applies) , which is the combination time, is not positive case 3 but regularity violation. In the second inadmissible example above, the difference between and can be expressed with the ratio . It is clear that for any constant . Therefore, the difference is not polynomial and the basic form of the Master Theorem does not apply. The extended form (case 2b) does apply, giving the solution . Application to common algorithms See also Akra–Bazzi method Asymptotic complexity Notes References Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw–Hill, 2001. . Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90. Michael T. Goodrich and Roberto Tamassia. Algorithm Design: Foundation, Analysis, and Internet Examples. Wiley, 2002. . The master theorem (including the version of Case 2 included here, which is stronger than the one from CLRS) is on pp. 268–270. Asymptotic analysis Theorems in computational complexity theory Recurrence relations Analysis of algorithms
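For the common case where the driving function is a plain polynomial, f(n) = Θ(n^c), the three regimes described above reduce to comparing c with the critical exponent log_b a. The sketch below handles only that polynomial case (so the non-polynomial-difference caveat from the inadmissible examples does not arise); the function name is illustrative.

# Sketch of the master theorem for T(n) = a*T(n/b) + Theta(n^c), i.e. the
# common case of a polynomial driving function. Only then is the comparison
# with the critical exponent log_b(a) this simple.
import math

def master_theorem_bound(a, b, c):
    """Return a string describing Theta(T(n)) for T(n) = a*T(n/b) + Theta(n^c)."""
    if a < 1 or b <= 1:
        raise ValueError("requires a >= 1 subproblems and a shrink factor b > 1")
    crit = math.log(a, b)  # the critical exponent log_b(a)
    if math.isclose(c, crit):
        return f"Theta(n^{c:g} * log n)"   # case 2: work is balanced across levels
    if c < crit:
        return f"Theta(n^{crit:g})"        # case 1: the leaves dominate
    return f"Theta(n^{c:g})"               # case 3: the root dominates; regularity
                                           # holds since a*(n/b)^c = (a/b^c)*n^c with a/b^c < 1

print(master_theorem_bound(8, 2, 2))  # 8 half-size subproblems, n^2 work -> Theta(n^3)
print(master_theorem_bound(2, 2, 1))  # merge-sort shape                  -> Theta(n^1 * log n)
print(master_theorem_bound(2, 2, 2))  #                                   -> Theta(n^2)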
Master theorem (analysis of algorithms)
[ "Mathematics" ]
1,355
[ "Mathematical analysis", "Recurrence relations", "Theorems in discrete mathematics", "Mathematical relations", "Asymptotic analysis", "Theorems in computational complexity theory" ]
561,590
https://en.wikipedia.org/wiki/Positive%20pressure
Positive pressure is a pressure within a system that is greater than the pressure of the environment surrounding that system. Consequently, if there is any leak from the positively pressured system, the contents will egress into the surrounding environment. This is in contrast to a negative pressure room, where air is sucked in. Positive pressure is also used to ensure there is no ingress of the environment into a supposedly closed system. A typical example of the use of positive pressure is a habitat located in an area where flammable gases may be present, such as on an oil platform, or a laboratory cleanroom. This kind of positive pressure is also used in operating theaters and in vitro fertilisation (IVF) labs. Hospitals may have positive pressure rooms for patients with compromised immune systems. Air will flow out of the room instead of in, so that any airborne microorganisms (e.g., bacteria) that may infect the patient are kept away. This process is also important in human and chick development: positive pressure, created by the closure of the anterior and posterior neuropores of the neural tube during neurulation, is a requirement of brain development. Amphibians respire by this process, using positive pressure to inflate their lungs. Historic utility Industrial utility Positive pressure systems are commonly used in industry to ventilate confined spaces containing dust, fumes, pollutants and/or high temperatures. Clinical utility Many hospitals are equipped with negative and positive pressure rooms just for the purposes described. Negative pressure rooms are used to help keep airborne pathogens (e.g. aerosolized COVID-19 and active TB) from escaping into surrounding areas, thereby preventing their spread outside the room. Positive pressure rooms are used for immunocompromised persons (e.g. neutropenic patients), whereby air of controlled quality is sent into the room to prevent random (and potentially polluted) air from entering. The CDC recommends a positive pressure differential of at least 2.5 Pa between the positively pressured room and the adjoining hallway. See also Filtered air positive pressure Negative pressure (disambiguation) Negative room pressure Overpressure (CBRN protection) Plenum chamber Positive pressure enclosure References Classical mechanics Gas technologies Pressure
Positive pressure
[ "Physics" ]
476
[ "Scalar physical quantities", "Mechanical quantities", "Physical quantities", "Pressure", "Classical mechanics", "Mechanics", "Wikipedia categories named after physical quantities" ]
562,067
https://en.wikipedia.org/wiki/Brillouin%20zone
In mathematics and solid state physics, the first Brillouin zone (named after Léon Brillouin) is a uniquely defined primitive cell in reciprocal space. In the same way the Bravais lattice is divided up into Wigner–Seitz cells in the real lattice, the reciprocal lattice is broken up into Brillouin zones. The boundaries of this cell are given by planes related to points on the reciprocal lattice. The importance of the Brillouin zone stems from the description of waves in a periodic medium given by Bloch's theorem, in which it is found that the solutions can be completely characterized by their behavior in a single Brillouin zone. The first Brillouin zone is the locus of points in reciprocal space that are closer to the origin of the reciprocal lattice than they are to any other reciprocal lattice points (see the derivation of the Wigner–Seitz cell). Another definition is as the set of points in k-space that can be reached from the origin without crossing any Bragg plane. Equivalently, this is the Voronoi cell around the origin of the reciprocal lattice. There are also second, third, etc., Brillouin zones, corresponding to a sequence of disjoint regions (all with the same volume) at increasing distances from the origin, but these are used less frequently. As a result, the first Brillouin zone is often called simply the Brillouin zone. In general, the n-th Brillouin zone consists of the set of points that can be reached from the origin by crossing exactly n − 1 distinct Bragg planes. A related concept is that of the irreducible Brillouin zone, which is the first Brillouin zone reduced by all of the symmetries in the point group of the lattice (point group of the crystal). The concept of a Brillouin zone was developed by Léon Brillouin (1889–1969), a French physicist. Within the Brillouin zone, a constant-energy surface represents the loci of all the -points (that is, all the electron momentum values) that have the same energy. Fermi surface is a special constant-energy surface that separates the unfilled orbitals from the filled ones at zero kelvin. Critical points Several points of high symmetry are of special interest – these are called critical points. Other lattices have different types of high-symmetry points. They can be found in the illustrations below. See also Fundamental pair of periods Fundamental domain References Bibliography External links Brillouin Zone simple lattice diagrams by Thayer Watkins Brillouin Zone 3d lattice diagrams by Technion. DoITPoMS Teaching and Learning Package – "Brillouin Zones" Aflowlib.org consortium database (Duke University) AFLOW Standardization of VASP/QUANTUM ESPRESSO input files (Duke University) Crystallography Electronic band structures Vibrational spectroscopy
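The defining property quoted above (a point of reciprocal space belongs to the first Brillouin zone when it is at least as close to the origin as to any other reciprocal-lattice point) translates directly into a numerical membership test. The sketch below checks only a few shells of neighbouring lattice points, which suffices for the first zone; the simple-cubic lattice vectors are an illustrative example, not tied to any particular material.

# Numerical test of the first-Brillouin-zone definition given above: k lies in
# the first zone if no nonzero reciprocal-lattice point is strictly closer to k
# than the origin is (i.e. the Voronoi cell of the origin).
import itertools
import numpy as np

def in_first_brillouin_zone(k, reciprocal_vectors, shells=2, tol=1e-12):
    """k: 3-vector; reciprocal_vectors: rows are the primitive vectors b1, b2, b3."""
    k = np.asarray(k, dtype=float)
    b = np.asarray(reciprocal_vectors, dtype=float)
    dist_to_origin = np.linalg.norm(k)
    for n in itertools.product(range(-shells, shells + 1), repeat=3):
        if n == (0, 0, 0):
            continue
        g = np.array(n) @ b  # a nonzero reciprocal-lattice point
        if np.linalg.norm(k - g) < dist_to_origin - tol:
            return False  # some other lattice point is strictly closer than the origin
    return True

# Illustrative simple-cubic reciprocal lattice with |b_i| = 2*pi (lattice constant 1):
b = 2 * np.pi * np.eye(3)
print(in_first_brillouin_zone([0.1, 0.0, 0.0], b))  # True: well inside the zone
print(in_first_brillouin_zone([4.0, 0.0, 0.0], b))  # False: beyond the zone face at pi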
Brillouin zone
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
583
[ "Electron", "Spectrum (physical sciences)", "Materials science", "Spectroscopy", "Crystallography", "Electronic band structures", "Condensed matter physics", "Vibrational spectroscopy" ]