Columns: id (int64, 39 to 79M), url (string, 32–168 chars), text (string, 7–145k chars), source (string, 2–105 chars), categories (list, 1–6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0–27 items)
17,872,730
https://en.wikipedia.org/wiki/Ergodic%20Ramsey%20theory
Ergodic Ramsey theory is a branch of mathematics where problems motivated by additive combinatorics are proven using ergodic theory. History Ergodic Ramsey theory arose shortly after Endre Szemerédi's proof that a set of positive upper density contains arbitrarily long arithmetic progressions, when Hillel Furstenberg gave a new proof of this theorem using ergodic theory. It has since produced combinatorial results, some of which have yet to be obtained by other means, and has also given a deeper understanding of the structure of measure-preserving dynamical systems. Szemerédi's theorem Szemerédi's theorem is a result in arithmetic combinatorics, concerning arithmetic progressions in subsets of the integers. In 1936, Erdős and Turán conjectured that every set of integers A with positive natural density contains a k-term arithmetic progression for every k. This conjecture, which became Szemerédi's theorem, generalizes the statement of van der Waerden's theorem. Hillel Furstenberg proved the theorem using ergodic principles in 1977. See also IP set Piecewise syndetic set Ramsey theory Syndetic set Thick set References Ergodic Methods in Additive Combinatorics Vitaly Bergelson (1996) Ergodic Ramsey Theory -an update Sources Ergodic theory Ramsey theory
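The two results named above can be stated precisely; the following is the standard formulation in LaTeX notation (paraphrased, not quoted from this article's references). Furstenberg's multiple recurrence theorem: for every measure-preserving system (X, B, μ, T), every measurable set A with μ(A) > 0, and every integer k ≥ 1,
\[
  \liminf_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N}
    \mu\left(A \cap T^{-n}A \cap T^{-2n}A \cap \cdots \cap T^{-(k-1)n}A\right) > 0 .
\]
Via the Furstenberg correspondence principle, this statement is equivalent to Szemerédi's theorem: every set of integers with positive upper density contains a k-term arithmetic progression for every k.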
Ergodic Ramsey theory
[ "Mathematics" ]
280
[ "Dynamical systems", "Ergodic theory", "Ramsey theory", "Combinatorics" ]
12,251,858
https://en.wikipedia.org/wiki/2-Cyanoguanidine
2-Cyanoguanidine is a nitrile derived from guanidine. It is a dimer of cyanamide, from which it can be prepared. 2-Cyanoguanidine is a colourless solid that is soluble in water, acetone, and alcohol, but not in nonpolar organic solvents. Production and use 2-Cyanoguanidine is produced by treating cyanamide with base. It is produced in soil by decomposition of cyanamide. A variety of useful compounds are produced from 2-cyanoguanidine, including guanidines and melamine. For example, acetoguanamine and benzoguanamine are prepared by condensation of cyanoguanidine with the corresponding nitrile: (H2N)2C=NCN + RCN → (CNH2)2(CR)N3 Cyanoguanidine is also used as a slow-release fertilizer. Formerly, it was used as a fuel in some explosives. It is used in the adhesive industry as a curing agent for epoxy resins. Chemistry Two tautomeric forms exist, differing in the protonation and bonding of the nitrogen to which the nitrile group is attached. 2-Cyanoguanidine can also exist in a zwitterionic form via a formal acid–base reaction among the nitrogens. Loss of ammonia (NH3) from the zwitterionic form, followed by deprotonation of the remaining central nitrogen atom, gives the dicyanamide anion, [N(CN)2]−. Drugs 2-Cyanoguanidine finds use in the synthesis of the following list of agents: Amanozine Azapropazone Buformin Chlorazanil Chlorproguanil Clociguanil Cloguanamil Cloquanamil Cycloguanil Dametralast Diallylmelamine (DAM) Guanazole Irsogladine Metformin Methylphenobarbital Moroxydine Oxonazine Phenformin Phenylbiguanide NSC-127755 Triazinate References External links OECD document Entry at chemicalland21.com Guanidines Cyanamides Cyanoguanidines
2-Cyanoguanidine
[ "Chemistry" ]
480
[ "Guanidines", "Cyanamides", "Functional groups" ]
12,251,941
https://en.wikipedia.org/wiki/C4H9NO3
The molecular formula C4H9NO3 may refer to: Allothreonine γ-Amino-β-hydroxybutyric acid Butyl nitrate Homoserine Threonine Molecular formulas
C4H9NO3
[ "Physics", "Chemistry" ]
58
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
12,259,899
https://en.wikipedia.org/wiki/Pressure-correction%20method
Pressure-correction method is a class of methods used in computational fluid dynamics for numerically solving the Navier–Stokes equations, normally for incompressible flows. Common properties The equations solved in this approach arise from the implicit time integration of the incompressible Navier–Stokes equations. Due to the non-linearity of the convective term in the momentum equation, this problem is solved with a nested-loop approach. So-called global or inner iterations represent the real time-steps and are used to update the velocity and pressure variables based on a linearized system and the boundary conditions; in addition, there is an outer loop for updating the coefficients of the linearized system. The outer iterations comprise two steps: Solve the momentum equation for a provisional velocity based on the velocity and pressure of the previous outer loop. Plug the newly obtained velocity into the continuity equation to obtain a correction. The velocity correction obtained from the second equation (which, for incompressible flow, is the non-divergence criterion or continuity equation) is computed by first calculating a residual resulting from the spurious mass flux, and then using this mass imbalance to obtain a new pressure value. The pressure value that one attempts to compute is such that, when plugged into the momentum equations, a divergence-free velocity field results. The mass imbalance is often also used for control of the outer loop. The name of this class of methods stems from the fact that the correction of the velocity field is computed through the pressure field. The discretization is typically done with either the finite element method or the finite volume method. With the latter, one might also encounter the dual mesh, i.e. the computation grid obtained from connecting the centers of the cells that the initial subdivision of the computation domain into finite elements yielded. Implicit split-update procedures Another approach, typically used in FEM, is the following. The aim of the correction step is to ensure conservation of mass. In continuous form, conservation of mass for compressible substances is expressed by ∂ρ/∂t + ∇·(ρv) = 0, which, using the relation dρ = dp/c², can be written as (1/c²) ∂p/∂t + ∇·(ρv) = 0, where c² is the square of the "speed of sound". For low Mach numbers and incompressible media, c² is assumed to be infinite, which is the reason the above continuity equation reduces to ∇·v = 0. The way of obtaining a velocity field satisfying this constraint is to compute a pressure which, when substituted into the momentum equation, leads to the desired correction of a preliminarily computed intermediate velocity. Applying the divergence operator to the compressible momentum equation yields a Poisson-type equation for the pressure, which then provides the governing equation for pressure computation. The idea of pressure-correction also exists in the case of variable density and high Mach numbers, although in this case there is a real physical meaning behind the coupling of dynamic pressure and velocity as arising from the continuity equation. Pressure is, with compressibility, still an additional variable that can be eliminated with algebraic operations, but its variability is not a pure artifice as in the incompressible case, and the methods for its computation differ significantly from those for incompressible flow. References M. Thomadakis, M. Leschziner: A PRESSURE-CORRECTION METHOD FOR THE SOLUTION OF INCOMPRESSIBLE VISCOUS FLOWS ON UNSTRUCTURED GRIDS, Int. Journal for Numerical Meth. in Fluids, Vol. 22, 1996 A. Meister, J.
Struckmeier: Hyperbolic Partial Differential Equations, 1st Edition, Vieweg, 2002 External links ISNaS – incompressible flow solver Application of Temperature and/or Pressure Correction Factors in Gas Measurement Fluid dynamics Computational fluid dynamics
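To make the pressure-correction step described above concrete, here is a minimal Python sketch (assuming NumPy) of a single correction on a doubly periodic 2D grid. The grid size, density, time step and provisional velocity field are made-up illustrative values, and the spectral Poisson solve is just one convenient choice for this toy setting, not the method used by any of the references above.

import numpy as np

n, L = 64, 2.0 * np.pi            # grid points per direction, domain size
dx = L / n
rho, dt = 1.0, 0.01               # assumed constant density and time step

x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

# Provisional (not divergence-free) velocity, standing in for the result of a momentum solve.
u_star = np.cos(X) * np.sin(Y)
v_star = 0.5 * np.sin(X) * np.cos(Y)

def ddx(f, axis):
    # Central difference on the periodic grid.
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * dx)

def divergence(u, v):
    return ddx(u, 0) + ddx(v, 1)

# Step 1: mass imbalance (residual) of the provisional field.
residual = divergence(u_star, v_star)

# Step 2: solve the pressure-correction Poisson equation laplacian(p') = (rho/dt) * div(u*).
# The "modified wavenumber" below is the exact Fourier symbol of the central-difference
# divergence-of-gradient operator, so the correction removes the discrete divergence exactly.
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = (np.sin(KX * dx) ** 2 + np.sin(KY * dx) ** 2) / dx**2
singular = k2 < 1e-12                 # mean mode (and Nyquist-only modes)
k2[singular] = 1.0
p_hat = np.fft.fft2(rho / dt * residual) / (-k2)
p_hat[singular] = 0.0                 # the pressure correction is defined only up to a constant
p_prime = np.real(np.fft.ifft2(p_hat))

# Step 3: correct the velocity with the gradient of the pressure correction.
u = u_star - dt / rho * ddx(p_prime, 0)
v = v_star - dt / rho * ddx(p_prime, 1)

print("max |div| before correction:", np.abs(residual).max())
print("max |div| after  correction:", np.abs(divergence(u, v)).max())

In a full SIMPLE-like solver this correction would sit inside the outer loop described above, with the momentum equation re-solved and the coefficients re-linearized between corrections.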
Pressure-correction method
[ "Physics", "Chemistry", "Engineering" ]
715
[ "Computational fluid dynamics", "Chemical engineering", "Computational physics", "Piping", "Fluid dynamics" ]
12,260,613
https://en.wikipedia.org/wiki/Synchrotron-Light%20for%20Experimental%20Science%20and%20Applications%20in%20the%20Middle%20East
The Synchrotron-Light for Experimental Science and Applications in the Middle East (SESAME) is an independent laboratory located in Allan in the Balqa governorate of Jordan, created under the auspices of UNESCO on 30 May 2002. The project is aimed at promoting peace between Middle Eastern countries; Jordan was chosen as the location for the laboratory, as it was then the only country that maintained diplomatic relations with all the other founding members: Bahrain, Cyprus, Egypt, Iran, Israel, Pakistan, the Palestinian Authority, and Turkey. The idea of creating a joint Arab–Israeli scientific collaboration goes back to the 1980s and took a more concrete form in discussions at CERN in 1993. The project was launched in 1999 and the groundbreaking ceremony was held on 6 January 2003. Construction work began the following July, and the facility was finally inaugurated on 16 May 2017 under the patronage and presence of King Abdullah II. The construction of the project cost around $98 million, with $5 million donated each by Jordan, Israel, Turkey, Iran, and the European Union. The rest was donated by CERN from existing equipment. Jordan became the greatest contributor to the project by donating land and building construction costs, and by pledging to build a $7 million solar power plant, which will make SESAME the first accelerator in the world to be powered by renewable energy. The annual operational cost of $6 million is pledged by the members according to the size of their economies. The facility is the only synchrotron radiation facility in the Middle East and is one of around 60 in the world. The president of the SESAME Council is Rolf Heuer. He was preceded by Christopher Llewellyn Smith (2008-2017) and Herwig Schopper (2004-2008). All three were previously directors-general of CERN. Khaled Toukan, the chairman of the Jordan Atomic Energy Commission, is the current director and former vice-president of SESAME. Background Synchrotron light (also referred to as synchrotron radiation) is radiation that is emitted when charged particles moving at speeds near the speed of light are forced to change direction by a magnetic field. It is the brightest artificial source of X-rays, allowing for the detailed study of molecular structures. When synchrotrons were first developed, their primary purpose was to accelerate particles for the study of the nucleus. Today, there are almost 60 synchrotron light sources around the world dedicated to exploiting the special qualities of synchrotron light, which allow it to be used across a wide range of applications, from condensed matter physics to structural biology, environmental science and cultural heritage. History The need for a large-scale scientific project to bring the Middle East back into the scientific community as well as promote peace and foster international collaboration has been recognised for almost 40 years. In his speech at the 1979 Nobel Prize banquet, Pakistani physicist Mohammad Abdus Salam stated that we should "strive to provide equal opportunities to all so that they can engage in the creation of Physics and science for the benefit of all mankind". In his paper presented at the Symposium on the "Future Outlook of the Arabian Gulf University", on 11 May 1983, in Bahrain, titled The Gulf University and Science in the Arab-Islamic Commonwealth, Abdus Salam proposed the founding of a Super Gulf University and an international laboratory in material sciences in Bahrain.
Such a laboratory was proposed for the University of Jeddah, to emphasise science and technology transfer in the material sciences, including a laboratory with a synchrotron radiation light source. Ultimately, the proposal did not come through, possibly because it had the sponsorship of a single university rather than a consortium of universities. In 1997, Herman Winick and Gustav-Adolf Voss suggested building a light source in the Middle East using components from the soon-to-be decommissioned BESSY I facility in Berlin, during two seminars organized in 1997 in Italy and in 1998 in Sweden by Tord Ekelöf with the CERN-based Middle East Scientific Co-operation (MESC) group headed by Sergio Fubini. Winick was credited with the idea of moving the machine to the Middle East during discussions about the future of the machine. He explained "(his) main motivation is to help create a project in which people can work constructively and collectively." This proposal was adopted and pursued by MESC. The German government agreed to donate the necessary equipment at the request of Fubini and Herwig Schopper. In the belief that the only chance of realizing such a project lay in following the example of CERN, the plan was brought to the attention of Federico Mayor, then Director-General of UNESCO, who organized the Consultative Meeting on a Middle East Synchrotron Light Facility, at UNESCO headquarters in Paris in June 1999. The meeting resulted in the launching of the project and the establishment of an International Interim Council under the chairmanship of Herwig Schopper. In May 2002, the executive board and Director General of UNESCO unanimously approved the establishment of the Centre under UNESCO auspices, through resolution 31C/Resolution 19. The groundbreaking ceremony for SESAME took place at Al-Balqa' Applied University in Jordan, on 6 January 2003. SESAME used offices at the UNESCO Office in Amman until the completion of the building in 2008. In April 2004, the centre formally came into existence when the required number of Members had informed UNESCO of their decision to join, and a permanent Council was established. The founding members were Bahrain, Egypt, Israel, Jordan, Pakistan, and Turkey. The current Members are Cyprus, Egypt, Iran, Israel, Jordan, Pakistan, the Palestinian Authority, and Turkey. The Observers are Brazil, Canada, China, the European Union, France, Germany, Greece, Italy, Japan, Kuwait, Portugal, Russia, Spain, Switzerland, Sweden, the United Kingdom, and the United States. SESAME was officially opened on 16 May 2017 by King Abdullah II of Jordan. Early Criticism The German and French Ambassadors to UNESCO complained that Kōichirō Matsuura, then Director General of UNESCO, had not followed UNESCO protocol while making this decision. Schopper explained the difficult circumstances the project was facing and they withdrew their complaints. Matsuura did not need formal approval to provide the required funds because the Japanese government had given him a budget to be used at his discretion when he was appointed Director General. SESAME faced a further setback when the German government was asked to withdraw the authorization of export of BESSY I after public criticism was raised because some scientists claimed that it was possible to produce nuclear materials for atomic bombs with SESAME. Schopper was invited to a televised discussion with Professor Dr. Reinhard Brandt, one of the scientists who made the critical claims.
The objections were eventually resolved, as Schopper explained that although some plutonium could have been produced, it would not have been a sufficient amount to develop a bomb. The BESSY I components were eventually shipped from Berlin to Hamburg and then to the Zarqa Free Zone in Jordan, where they were held by the Jordanian government until SESAME was formally founded and the building was ready to accept the components. Location Before UNESCO could formally approve SESAME, the issue of finding a host country and a site had to be resolved. The Interim Council agreed on a set of criteria which had to be satisfied by the host country and site. The lab had to be accessible, geographically and politically, to scientists from all over the world, and the host state had to be strongly committed to the project and had to provide, free of charge, the land on which the lab would be based, as well as the building itself and the technical infrastructure (roads, water, electricity). Seven Members (Armenia, Egypt, Iran, Jordan, Oman, Palestine and Turkey) proposed 12 sites. UNESCO Assistant Director General for Natural Sciences, Maurizio Iaccarino, and Schopper visited Egypt, Israel, Jordan and Palestine in September 1999. The Armenian, Iranian and Turkish proposals were explored at Interim Council Meetings. Although Egypt expressed strong interest in the project, a long procedure which involved going through many authorities was necessary before the project could be presented to the Prime Minister, and the proposal was ultimately deemed unfit. The Palestinian National Authority, although interested in the project, did not have the financial capacity to meet the Interim Council's criteria. For political reasons, Israel could not provide a site accessible to all scientists. Additionally, Israel was already heavily involved in the ESRF laboratory at Grenoble, and was contractually bound to provide considerable funds. Furthermore, biologists did not see how they would benefit from SESAME since they already had access to other laboratories across the world. Armenia offered to host SESAME in the building of its Synchrotron Laboratory at Erevan, since its accelerator was outdated. The proposal was strengthened with the backing of wealthy Iraqi-born Armenian-American businessman Kevork Hovnanian. However, it was later realized that several alterations to the building were necessary to make it a viable site for SESAME. Iran, considered a rogue state at the time, though interested in the project, could not guarantee access to scientists from all countries, and so the proposal was ultimately unsuccessful. Approving Jordan In Jordan, Adnan Badran, deputy director of UNESCO from 1992 to 1998, organised a meeting with representatives from universities and other organisations. No government members could be met, and no commitment was obtained. In a last-ditch effort to save the project, Schopper contacted his former student Isa Khubeis, then vice-president of Al-Balqa Applied University. Khubeis invited Iaccarino and Schopper to dinner along with Khaled Toukan, President of Al-Balqa University, and Prince Ghazi Bin Muhammad, who was chairman of the Board of Governors of the university and a close advisor of King Abdullah II. Schopper explained the situation to Prince Ghazi, who arranged a meeting with King Abdullah for the following day. King Abdullah formally committed Jordan to the project during the meeting in a signed letter addressed to the Director General of UNESCO.
After long discussions and a series of votes, Jordan was formally approved to be the host of the Centre at the third meeting of the SESAME Interim Council in June 2000. Egypt and Iran withdrew their proposals before the final round of voting. The decision was ratified by 9 votes in favour and 1 abstention. Jordan was seen as an appropriate location for the project because it was the only country at the time to have maintained diplomatic relations with all other founding members: Bahrain, Cyprus, Egypt, Iran, Israel, Pakistan, the Palestinian Authority, and Turkey. Cost The project cost around $98 million, with $5 million donated each by Jordan, Israel, Turkey, Iran and the European Union. The rest was in-kind donations of equipment by CERN, and land by Jordan (the largest contributor to the project). Jordan also contributed the building construction costs, and a $7 million solar power plant, making SESAME the first accelerator in the world powered solely by renewable energy. The annual operational cost of $6 million is split between the members, according to the sizes of their economies. Funding As well as the drawn-out process in deciding which country should host SESAME, the project came across several other difficulties on its path to completion. Possibly the largest issue was its funding. Because the major components of the laboratory from the decommissioned BESSY I experiment, originally valued at $60 million, were being donated by the German government, funding on that front was not an issue. However, the German government stipulated that the cost of dismantling, including documentation, packing and transport, had to be provided by SESAME. The cost was an estimated $600,000, and had to be guaranteed before the end of 1999 because the BESSY building had been promised to the Max Planck Society. Schopper had been informed of this condition only a few hours before the Interim Council meeting, and asking for voluntary contributions would have been ineffective because most delegates at the meeting would not have had the authority to make financial decisions. After a discussion in the Interim Council, the United States State Department, Sweden and Russia agreed to provide $200,000. Schopper saw only one possible option to save the project. He asked UNESCO Director General Koichiro Matsuura to arrange an emergency meeting. They had lunch together, and Schopper asked Matsuura to fund the missing $400,000 immediately. Matsuura agreed to the request and the Interim Council Members were informed after the lunch. Because the BESSY components were used only as an injector system, the construction of a new main ring was still needed for SESAME. The estimated cost of the ring was $10 million, so additional sources of funding were required. On 23 July 2001 a formal proposal supported by the German and French Ministers of Research, and later the Commissioner for Research Philippe Busquin, was submitted to European Commissioner Chris Patten. In October 2001 Commissioner Patten's chef de cabinet, Anthony Cary, informed Schopper that an independent evaluation by a panel of international experts was needed. The Techno-Economic Feasibility Study was conducted under the guidance of Professor Guy Le Lay of the University of Marseille. The report concluded that the project was promising and would "effectively stimulate scientific activity and cooperation in the Middle East".
However, in August 2003, Commissioner Patten stated that "the Commission is not in a position at this stage to provide Community funding to SESAME". In a subsequent meeting arranged between the Jordanian government, SESAME representatives and the EU Commission, the main point of contention was the project's energy level. It was claimed that a competitive facility needed a higher energy level. A compromise was reached that the machine should start at 2 GeV, with 2.5 GeV available at a later stage. This would have increased the cost of the ring by another $2 million. The issue was eventually resolved through negotiations started by Director General Rolf Heuer between CERN and the Commissioner for Research and Innovation, Carlos Moedas. About $5 million was approved for CERN to be used for the construction of the magnets of the SESAME main ring. The Sergio Fubini Guesthouse that was inaugurated in December 2019 was funded by the Government of Italy, represented by the Ministry for Education, University and Research through INFN. Design The machine works in four stages. Microtron The microtron accelerates electrons to the energy of 22.5 MeV, and injects them into the booster. It was fully operational in November 2011. Booster synchrotron The booster synchrotron receives electrons from the microtron, and accelerates them to 800 MeV, for injection into the storage ring. The booster was created with parts from the German synchrotron facility BESSY, which was decommissioned in 1999. Storage ring The storage ring accelerates electrons to 2.5 GeV, and keeps them circulating for as long as two hours. As the electrons go around the storage ring, they emit x-rays. Lost energy is replaced as the beam travels through radio frequency cavities along the ring. Beamlines X-rays from the storage ring are directed to beamlines, where research experiments are performed. BASEMA (Beamline for Absorption Spectroscopy for Environmental and Material Applications), a beamline for X-ray absorption fine structure (XAFS) and X-ray fluorescence (XRF) spectroscopy, the 'day-one' beamline to be ready in March/April 2017 EMIRA (ElectroMagnetic Infrared RAdiation) for IR (Infrared Spectromicroscopy), the second 'day-one' beamline, expected to start operations in April/May 2017 SUSAM (SESAME USers Application for Materials Science) or Materials Science (MS), to be completed at the end of the third quarter of 2017 MX (Macromolecular Crystallography), the beamline to be completed in 2019 Soft X-ray Beamline SAXS/WAXS (Small Angle and Wide Angle X-ray Scattering) Tomography Beamline Initial beams Although the current facility has space for seven light beams, only two beams were operational when the facility opened in 2017. The first beam is an X-ray beam that will be used to study pollution in the Jordan Valley, among other things, while the second provides infrared radiation for a microscope used to study biological tissue, including cancer cells. The rest are planned for later, with the third beam, an X-ray source used for crystallography, slated for late 2017. Deaths and delays Dr. Masoud Alimohammadi and Dr. Majid Shahriari, two Iranian members of SESAME, were killed in two separate terrorist attacks, for which an Iranian prosecutor in 2010 accused the Israeli Mossad. The roof of the laboratory collapsed during the 2013 Middle East cold snap due to heavy snowfall, which led to delays.
See also Jordan Research and Training Reactor Science and technology in Jordan The African Light Source (AfLS) References External links Facebook page News articles New Middle Eastern Particle Accelerator’s Motto is “Science for Peace”, By Elisa Oddone on Thu, 21 Jun 2018, pbs.com. A Light for Science, and Cooperation, in the Middle East, 8 May 2017, The New York Times. How Jordan's particle accelerator is bringing together Middle East enemies. Synchrotron-Light for Experimental Science Applications in the Middle East Synchrotron radiation facilities International research institutes Research institutes in Jordan 2017 establishments in Jordan Science and technology in Jordan Institutes associated with CERN
Synchrotron-Light for Experimental Science and Applications in the Middle East
[ "Materials_science" ]
3,536
[ "Materials testing", "Synchrotron radiation facilities" ]
12,261,271
https://en.wikipedia.org/wiki/Drainage%20research
Drainage research is the study of agricultural drainage systems and their effects to arrive at optimal system design. Overview Agricultural land drainage has agricultural, environmental, hydrological, engineering, economic, social and socio-political aspects (Figure 1). All these aspects can be the subject of drainage research. The aim (objective, target) of agricultural land drainage is optimized agricultural production related to: reclamation of agricultural land conservation of agricultural land optimization of crop yield crop diversification cropping intensification optimization of farm operations Systems analysis The role of targets, criterion, environmental, and hydrological factors is illustrated in Figure 2. In this figure, criterion factors are factors that are influenced by drainage on the one hand and that influence agricultural performance on the other. An example of a criterion factor is the depth of the water table: a drainage system influences this depth; the relation between drainage system design and depth of water table is mainly physical and can be described by drainage equations, in which the drainage requirements are to be found from a water balance. The depth of the water table as a criterion factor needs to be translated into a criterion index to be given a numerical value that represents the behavior of the water table on the one hand and that can be related to the target (e.g. crop production) on the other hand. The relation between criterion index and target can often be optimized, the maximum value providing the ultimate aim while the corresponding value of the criterion index can be used as an agricultural drainage criterion in the design procedure. Crop response processes The underlying processes in the optimization (as in the insert of Figure 2) are manifold. The processes can be grouped into mutually dependent soil physical, soil chemical/biological, and hydrological processes (Figure 3): The soil physical processes include soil aeration, soil structure, soil stability, and soil temperature. The chemical processes include soil salinity, soil acidity and soil alkalinity. The hydrological processes include evaporation, runoff, and soil salinity. Examples of these processes can be found in the literature. Field data In drainage research the collection and analysis of field data is important. In dealing with field data one must expect considerable random variation owing to the large number of natural processes involved and the large variability of plant and soil properties and hydrological conditions. An example of a relation between crop yield and depth of water table subject to random natural variation is shown in the attached graph, which was made with the SegReg program (see segmented regression). When analysing field data with random variation, a proper application of statistical principles, such as regression and frequency analysis, is necessary. Soil salinity control In irrigated lands, subsurface drainage may be required to leach the salts brought into the soil with the irrigation water to prevent soil salination. Agro-hydro-salinity and leaching models like SaltMod may be helpful to determine the drainage requirement. References External links Website on Land Drainage Yard Drainage Solutions Hydrology Drainage Soil science Environmental soil science
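As an illustration of the kind of segmented-regression analysis mentioned in the Field data section above, the following Python sketch (assuming NumPy) fits a continuous two-segment line to made-up yield versus water-table-depth data by grid-searching the breakpoint. It is not the SegReg program itself, and all numbers are invented purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
depth = rng.uniform(0.2, 2.0, 80)                                # water-table depth (m), invented
true_yield = np.where(depth < 0.8, 4.0 + 3.0 * depth, 6.4)       # yield rises, then plateaus
crop_yield = true_yield + rng.normal(0.0, 0.3, depth.size)       # add random field variation

def fit_with_breakpoint(c):
    # Least-squares fit of yield = b0 + b1*depth + b2*max(0, depth - c),
    # a continuous two-segment ("broken") line with its breakpoint at c.
    X = np.column_stack([np.ones_like(depth), depth, np.maximum(0.0, depth - c)])
    coef, *_ = np.linalg.lstsq(X, crop_yield, rcond=None)
    resid = crop_yield - X @ coef
    return coef, float(resid @ resid)

candidates = np.linspace(0.4, 1.6, 61)                           # breakpoints to try
best_c = min(candidates, key=lambda c: fit_with_breakpoint(c)[1])
coef, sse = fit_with_breakpoint(best_c)
print(f"breakpoint ≈ {best_c:.2f} m, coefficients = {coef}, SSE = {sse:.2f}")

The fitted breakpoint plays the role of the criterion index discussed above: the depth beyond which further drainage no longer improves the target (crop yield).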
Drainage research
[ "Chemistry", "Engineering", "Environmental_science" ]
602
[ "Hydrology", "Environmental soil science", "Environmental engineering" ]
11,314,351
https://en.wikipedia.org/wiki/Digital%20polymerase%20chain%20reaction
Digital polymerase chain reaction (digital PCR, DigitalPCR, dPCR, or dePCR) is a biotechnological refinement of conventional polymerase chain reaction methods that can be used to directly quantify and clonally amplify nucleic acid strands including DNA, cDNA, or RNA. The key difference between dPCR and qPCR lies in the method of measuring nucleic acid amounts, with the former being a more precise method than PCR, though also more prone to error in the hands of inexperienced users. PCR carries out one reaction per single sample. dPCR also carries out a single reaction within a sample; however, the sample is separated into a large number of partitions and the reaction is carried out in each partition individually. This separation allows a more reliable collection and sensitive measurement of nucleic acid amounts. The method has been demonstrated as useful for studying variations in gene sequences, such as copy number variants and point mutations. Principles The polymerase chain reaction method is used to quantify nucleic acids by amplifying a nucleic acid molecule with the enzyme DNA polymerase. Conventional PCR is based on the theory that amplification is exponential. Therefore, nucleic acids may be quantified by comparing the number of amplification cycles and amount of PCR end-product to those of a reference sample. However, many factors complicate this calculation, creating uncertainties and inaccuracies. These factors include the following: initial amplification cycles may not be exponential; PCR amplification eventually plateaus after an uncertain number of cycles; and low initial concentrations of target nucleic acid molecules may not amplify to detectable levels. However, the most significant limitation of PCR is that PCR amplification efficiency in a sample of interest may be different from that of reference samples. Instead of performing one reaction per well, dPCR involves partitioning the PCR solution into tens of thousands of nano-liter sized droplets, where a separate PCR reaction takes place in each one. A PCR solution is made similarly to a TaqMan assay, which consists of template DNA (or RNA), fluorophore-quencher probes, primers, and a PCR master mix, which contains DNA polymerase, dNTPs, MgCl2, and reaction buffers at optimal concentrations. Several different methods can be used to partition samples, including microwell plates, capillaries, oil emulsion, and arrays of miniaturized chambers with nucleic acid binding surfaces. The PCR solution is partitioned into smaller units, each with the necessary components for amplification. The partitioned units are then subjected to thermocycling so that each unit may independently undergo PCR amplification. After multiple PCR amplification cycles, the samples are checked for fluorescence with a binary readout of “0” or “1”. The fraction of fluorescing droplets is recorded. The partitioning of the sample allows one to estimate the number of different molecules by assuming that the molecule population follows the Poisson distribution, thus accounting for the possibility of multiple target molecules inhabiting a single droplet. Using Poisson's law of small numbers, the distribution of target molecules within the sample can be accurately approximated, allowing for a quantification of the target strand in the PCR product. This model simply predicts that as the number of samples containing at least one target molecule increases, the probability of samples containing more than one target molecule increases.
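As a concrete illustration of the Poisson correction just described, here is a minimal Python sketch; the partition count, positive count and partition volume are made-up values for a hypothetical run, not data from any real instrument.

import math

total_partitions = 20_000            # hypothetical number of partitions in one well
positive_partitions = 4_200          # partitions that fluoresced (readout "1")
partition_volume_nl = 0.85           # assumed partition volume in nanolitres

p = positive_partitions / total_partitions

# Poisson correction: a positive partition may have held more than one target
# molecule, so the mean number of copies per partition is -ln(1 - p), not p.
lam = -math.log(1.0 - p)

copies_loaded = lam * total_partitions
copies_per_ul = lam / (partition_volume_nl * 1e-3)   # convert nL to µL

print(f"fraction positive        = {p:.3f}")
print(f"copies per partition     = {lam:.4f}")
print(f"estimated copies loaded  = {copies_loaded:.0f}")
print(f"estimated concentration  = {copies_per_ul:.0f} copies/µL of reaction")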
In conventional PCR, the number of PCR amplification cycles is proportional to the starting copy number. Contrary to a common belief that dPCR provides absolute quantification, digital PCR uses statistical power to provide relative quantification. For example, if Sample A, when assayed in 1 million partitions, gives one positive reaction, it does not mean that Sample A has one starting molecule. The benefits of dPCR include increased precision through massive sample partitioning, which ensures reliable, reproducible measurements of the desired DNA sequence. Error rates are larger when detecting small fold-change differences with basic PCR, whereas dPCR can detect smaller fold-change differences in a DNA sequence with lower error rates. The technique also requires a smaller volume of reagent, which lowers experiment cost. In addition, dPCR is highly quantitative as it does not rely on the relative fluorescence of the solution to determine the amount of amplified target DNA. Comparison between dPCR and Real-Time PCR (qPCR) dPCR measures the actual number of molecules (target DNA) as each molecule is in one droplet, thus making it a discrete “digital” measurement. It provides absolute quantification because dPCR measures the positive fraction of samples, which is the number of droplets that are fluorescing due to proper amplification. This positive fraction accurately indicates the initial amount of template nucleic acid. Similarly, qPCR utilizes fluorescence; however, it measures the intensity of fluorescence at specific times (generally after every amplification cycle) to determine the relative amount of target molecule (DNA), but cannot specify the exact amount without constructing a standard curve using different amounts of a defined standard. It gives the cycle threshold (Ct), and the difference in Ct values is used to calculate the amount of initial nucleic acid. As such, qPCR is an analog measurement, which may not be as precise due to the extrapolation required to attain a measurement. dPCR measures the amount of DNA after amplification is complete and then determines the fraction of positive partitions. This is representative of an endpoint measurement as it requires the observation of the data after the experiment is completed. In contrast, qPCR records the relative fluorescence of the DNA at specific points during the amplification process, which requires stops in the experimental process. This “real-time” aspect of qPCR may theoretically affect results due to the stopping of the experiment. In practice, however, most qPCR thermal cyclers read each sample's fluorescence very quickly at the end of the annealing/extension step before proceeding to the next melting step, so this hypothetical concern is not relevant for the vast majority of researchers. dPCR measures the amplification by measuring the products of end-point PCR cycling and is therefore less susceptible to artifacts arising from impaired amplification efficiencies due to the presence of PCR inhibitors or primer–template mismatch. Real-time Digital PCR (rdPCR) combines the methodologies of digital PCR (dPCR) and quantitative PCR (qPCR), integrating the precision of dPCR with the real-time analysis capabilities of qPCR.
This integration aims to provide enhanced sensitivity, specificity, and the ability for absolute quantification of nucleic acid sequences, contributing to the quantification of genetic material in scientific and clinical research. qPCR is unable to distinguish differences in gene expression or copy number variations that are smaller than twofold. On the other hand, dPCR has a higher precision and has been shown to detect differences of less than 30% in gene expression, distinguish between copy number variations that differ by only 1 copy, and identify alleles that occur at frequencies less than 0.1%. Applications Digital PCR has many applications in basic research, clinical diagnostics and environmental testing. Its uses include pathogen detection and digestive health analysis; liquid biopsy for cancer monitoring, organ transplant rejection monitoring and non-invasive prenatal testing for serious genetic abnormalities; copy number variation analysis, single gene expression analysis, rare sequence detection, gene expression profiling and single-cell analysis; the detection of DNA contaminants in bioprocessing, the validation of gene edits and detection of specific methylation changes in DNA as biomarkers of cancer, as well as plasmid copy number determination in bacterial populations. dPCR is also frequently used as an orthogonal method to confirm rare mutations detected through next-generation sequencing (NGS) and to validate NGS libraries. Absolute quantification dPCR enables the absolute and reproducible quantification of target nucleic acids at single-molecule resolution. Unlike analogue quantitative PCR (qPCR), however, absolute quantification with dPCR does not require a standard curve. dPCR also has a greater tolerance for inhibitor substances and PCR assays that amplify inefficiently as compared to qPCR. dPCR can quantify, for example, the presence of specific sequences from contaminating genetically modified organisms in foodstuffs, viral load in the blood, PBMCs, serum samples, chorionic villi tissues, biomarkers of neurodegenerative disease in cerebral spinal fluid, and fecal contamination in drinking water. Copy number variation An alteration in copy number state with respect to a single-copy reference locus is referred to as a “copy number variation” (CNV) if it appears in germline cells, or a copy number alteration (CNA) if it appears in somatic cells. A CNV or CNA could be due to a deletion or amplification of a locus with respect to the number of copies of the reference locus present in the cell, and together, they are major contributors to variability in the human genome. They have been associated with cancers; neurological, psychiatric, and autoimmune diseases; and adverse drug reactions. However, it is difficult to measure these allelic variations with high precision using other methods such as qPCR, thus making phenotypic and disease associations with altered CNV status challenging. The large number of “digitized,” endpoint measurements made possible by sample partitioning enables dPCR to resolve small differences in copy number with better accuracy and precision when compared to other methods such as SNP-based microarrays or qPCR. qPCR is limited in its ability to precisely quantify gene amplifications in several diseases, including Crohn’s disease, HIV-1 infection, and obesity. dPCR was designed to measure the concentration of a nucleic acid target in copies per unit volume of the sample. 
When operating in dilute reactions where less than ~10% of the partitions contain a desired target (referred to as “limiting dilution”), copy number can be estimated by comparing the number of fluorescent droplets arising from a target CNV with the number of fluorescent droplets arising from an invariant single-copy reference locus. In fact, both at these lower target concentrations and at higher ones where multiple copies of the same target can co-localize to a single partition, Poisson statistics are used to correct for these multiple occupancies to give a more accurate value for each target’s concentration. Digital PCR has been used to uncover both germline and somatic variation in gene copy number between humans and to study the link between amplification of HER2 (ERBB2) and breast cancer progression. Rare mutation and rare allele detection Partitioning in digital PCR increases sensitivity and allows for detection of rare events, especially single nucleotide variants (SNVs), by isolating or greatly diminishing the target biomarker signal from potentially competing background. These events can be organized into two classes: rare mutation detection and rare sequence detection. Rare mutation detection Rare mutation detection occurs when a biomarker exists within a background of a highly abundant counterpart that differs by only a single nucleotide variant (SNV). Digital PCR has been shown to be capable of detecting mutant DNA in the presence of a 200,000-fold excess of wild type background, which is 2,000 times more sensitive than achievable with conventional qPCR. Rare sequence detection Digital PCR can detect rare sequences such as HIV DNA in patients with HIV, and DNA from fecal bacteria in ocean and other water samples for assessing water quality. dPCR can detect sequences as rare as 1 in every 1,250,000 cells. Liquid biopsy dPCR’s ability to detect rare mutations may be of particular benefit in the clinic through the use of the liquid biopsy, a generally noninvasive strategy for detecting and monitoring disease via bodily fluids. Researchers have used liquid biopsy to monitor tumor load, treatment response and disease progression in cancer patients by measuring rare mutations in circulating tumor DNA (ctDNA) in a variety of biological fluids from patients including blood, urine and cerebrospinal fluid. Early detection of ctDNA (as in molecular relapse) may lead to earlier administration of an immunotherapy or a targeted therapy specific for the patient’s mutation signature, potentially improving chances of the treatment’s effectiveness rather than waiting for clinical relapse before altering treatment. Liquid biopsies can have turnaround times of a few days, compared to two to four weeks or longer for tissue-based tests. This reduced time to results has been used by physicians to expedite treatments tailored to biopsy data. In 2016, a prospective trial using dPCR at the Dana-Farber Cancer Institute authenticated the clinical benefit of liquid biopsy as a predictive diagnostic tool for patients with non-small-cell lung cancer. The application of liquid biopsy tests have also been studied in patients with breast, colorectal, gynecologic, and bladder cancers to monitor both the disease load and the tumor’s response to treatment. Gene expression and RNA quantification Gene expression and RNA quantification studies have benefited from the increased precision and absolute quantification of dPCR. 
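Before turning to gene expression (continued below), here is a minimal Python sketch of the copy-number arithmetic described at the start of this passage; the droplet counts are made up, the reference locus is assumed to be present at two copies per genome, and this only illustrates the Poisson-ratio calculation, not any vendor's analysis pipeline.

import math

def copies_per_partition(positives, total):
    # Poisson-corrected mean number of target copies per partition.
    return -math.log(1.0 - positives / total)

total = 18_500                                        # hypothetical accepted droplets in one well
lam_target = copies_per_partition(6_100, total)       # hypothetical target-locus channel
lam_reference = copies_per_partition(4_050, total)    # hypothetical single-copy reference channel

# The reference locus is assumed diploid (2 copies per genome).
copy_number = 2.0 * lam_target / lam_reference
print(f"estimated copy number of the target locus ≈ {copy_number:.2f}")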
RNA quantification can be accomplished via RT-PCR, wherein RNA is reverse-transcribed into cDNA in the partitioned reaction itself, and the number of RNA molecules originating from each transcript (or allelic transcript) is quantified via dPCR. One can often achieve greater sensitivity and precision by using dPCR rather than qPCR to quantify RNA molecules in part because it does not require use of a standard curve for quantification. dPCR is also more resilient to PCR inhibitors for the quantification of RNA than qPCR. dPCR can detect and quantify more individual target species per detection channel than qPCR by virtue of being able to distinguish targets based on their differential fluorescence amplitude or by the use of distinctive color combinations for their detection. As an example of this, a 2-channel dPCR system has been used to detect in a single well the expression of four different splice variants of human telomerase reverse transcriptase, a protein that is more active in most tumor cells than in healthy cells. Alternative uses for partitioning Using the dynamic partitioning capabilities employed in dPCR, improved NGS sequencing can be achieved by partitioning of complex PCR reactions prior to amplification to give more uniform amplification across many distinct amplicons for NGS analysis. Additionally, the improved specificity of complex PCR amplification reactions in droplets has been shown to greatly reduce the number of iterations required to select for high affinity aptamers in the SELEX method. Partitioning can also allow for more robust measurements of telomerase activity from cell lysates. dPCR’s dynamic partitioning capabilities can also be used to partition thousands of nuclei or whole cells into individual droplets to facilitate library preparation for a single cell assay for transposase-accessible chromatin using sequencing (scATAC-seq). Droplet digital PCR Droplet Digital PCR (ddPCR) is a method of dPCR in which a 20 microliter sample reaction including assay primers and either Taqman probes or an intercalating dye, is divided into ~20,000 nanoliter-sized oil droplets through a water-oil emulsion technique, thermocycled to endpoint in a 96-well PCR plate, and fluorescence amplitude read for all droplets in each sample well in a droplet flow cytometer. Chip-based digital PCR Chip-based Digital PCR (dPCR) is also a method of dPCR in which the reaction mix (also when used in qPCR) is divided into ~10,000 to ~45,000 partitions on a chip, then amplified using an endpoint PCR thermocycling machine, and is read using a high-powered camera reader with fluorescence filter (HEX, FAM, Cy5, Cy5.5 and Texas Red) for all partitions on each chip. History dPCR rose out of an approach first published in 1988 by Cetus Corporation when researchers showed that a single copy of the β-globin gene could be detected and amplified by PCR. This was achieved by diluting DNA samples from a normal human cell line with DNA from a mutant line having a homozygous deletion of the β-globin gene, until it was no longer present in the reaction. In 1989, Peter Simmonds, AJ Brown et al. used this concept to quantify a molecule for the first time. Alex Morley and Pamela Sykes formally established the method as a quantitative technique in 1992. In 1999, Bert Vogelstein and Kenneth Kinzler coined the term “digital PCR” and showed that the technique could be used to find rare cancer mutations. 
However, dPCR was difficult to perform; it was labor-intensive, required a lot of training to do properly, and was difficult to do in large quantities. In 2003, Kinzler and Vogelstein continued to refine dPCR and created an improved method that they called BEAMing technology, an acronym for “beads, emulsion, amplification and magnetics.” The new protocol used emulsion to compartmentalize amplification reactions in a single tube. This change made it possible for scientists to scale the method to thousands of reactions in a single run. Companies developing commercial dPCR systems have integrated technologies like automated partitioning of samples, digital counting of nucleic acid targets, and increasing droplet count that can help the process be more efficient. In recent years, scientists have developed and commercialized dPCR-based diagnostics for several conditions, including non-small cell lung cancer and Down’s Syndrome. The first dPCR system for clinical use was CE-marked in 2017 and cleared by the US Food and Drug Administration in 2019, for diagnosing chronic myeloid leukemia. References External links Digital PCR Protocol High Throughput, Nanoliter Quantitative PCR PCR's next frontier Molecular biology Polymerase chain reaction Laboratory techniques
Digital polymerase chain reaction
[ "Chemistry", "Biology" ]
3,877
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction", "nan", "Molecular biology", "Biochemistry" ]
11,316,209
https://en.wikipedia.org/wiki/Bottle%20dynamo
A bottle dynamo or sidewall dynamo is a small electrical generator for bicycles employed to power a bicycle's lights. The traditional bottle dynamo is not actually a dynamo at all (which creates DC power), but a low-power magneto that generates AC. Newer models can include a rectifier to create DC output to charge batteries for electronic devices including cellphones or GPS receivers. Named after their resemblance to bottles, these generators are also called sidewall dynamos because they operate using a roller placed on the sidewall of a bicycle tire. When the bicycle is in motion and the dynamo roller is engaged, electricity is generated as the tire spins the roller. Two other dynamo systems used on bicycles are hub dynamos and bottom bracket dynamos. Advantages over hub dynamos No extra resistance when disengaged When engaged, a dynamo requires the bicycle rider to exert more effort to maintain a given speed than would otherwise be necessary when the dynamo is not present or disengaged. Bottle dynamos can be completely disengaged when they are not in use, whereas a hub dynamo will always have added drag (though it may be so low as to be irrelevant or unnoticeable to the rider, and it is reduced significantly when lights are not being powered by the hub). Easy retrofitting A bottle dynamo may be more feasible than a hub dynamo to add to an existing bicycle, as it does not require a replacement or rebuilt wheel. Price A bottle dynamo is generally cheaper than a hub dynamo, but not always. Disadvantages over hub dynamos Slippage In wet conditions, the roller on a bottle dynamo can slip against the surface of a tire, which interrupts or reduces the amount of electricity generated. This can cause the lights to go out completely or intermittently. Hub dynamos do not need traction and are sealed from the elements. Increased resistance Bottle dynamos typically create more drag than hub dynamos. However, when they are properly adjusted, the drag may be so low as to be trivial, and there is no resistance when the bottle dynamo is disengaged. Tire wear Because bottle dynamos rub against the sidewall of a tire to generate electricity, they cause added wear on the side of the tire. Hub dynamos do not. Noise Bottle dynamos make an easily audible mechanical humming or whirring sound when engaged. Hub dynamos are silent. Switching Bottle dynamos must be physically repositioned to engage them, to turn on the lamps. Hub dynamos are switched on electrically. Hub dynamos can be engaged automatically by using electronic ambient light detection, providing zero-effort activation. Positioning Bottle dynamos must be carefully adjusted to touch the sidewall at the correct angle, height and pressure. Bottle dynamos can be knocked out of position if the bike falls, or if the mounting screws are too loose. A badly positioned bottle dynamo will make more noise and drag, slip more easily, and can in the worst case fall into the spokes. Some dynamo mounts have tabs to try to prevent the latter. See also Bicycle dynamo Hub dynamo Third-brush dynamo List of bicycle parts References Bicycle lighting
Bottle dynamo
[ "Physics", "Technology" ]
653
[ "Physical systems", "Electrical generators", "Machines" ]
11,318,393
https://en.wikipedia.org/wiki/Halogeton%20sativus
Halogeton sativus is a species of flowering plant in the family Amaranthaceae. It is native to Spain, Morocco and Algeria. Rich in salt, in the past it was cultivated to produce soda ash for glass-makers. References Amaranthaceae Halophytes
Halogeton sativus
[ "Chemistry" ]
59
[ "Halophytes", "Salts" ]
11,319,263
https://en.wikipedia.org/wiki/Field%20magnet
Field magnet refers to a magnet used to produce a magnetic field in a device. It may be a permanent magnet or an electromagnet. When the field magnet is an electromagnet, it is referred to as a field coil. Although the term usually refers to magnets used in motors and generators, it may refer to magnets used in any of the following devices: Alternator Cathode ray tube Dynamo Electric motor Electrical generator Fusion reactor Loudspeaker Maglev trains Magnetic Separator Magneto Mass spectrometer Metal detector MRI scanner Particle accelerator Read/write head Relay Solenoid Stepping switch Tape head See also Rotor (electric) Stator Electromagnetism Types of magnets
Field magnet
[ "Physics" ]
144
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions" ]
11,320,637
https://en.wikipedia.org/wiki/Dimension%20function
In mathematics, the notion of an (exact) dimension function (also known as a gauge function) is a tool in the study of fractals and other subsets of metric spaces. Dimension functions are a generalisation of the simple "diameter to the dimension" power law used in the construction of s-dimensional Hausdorff measure. Motivation: s-dimensional Hausdorff measure Consider a metric space (X, d) and a subset E of X. Given a number s ≥ 0, the s-dimensional Hausdorff measure of E, denoted μs(E), is defined by μs(E) = limδ→0 μδs(E), where μδs(E) = inf { Σi diam(Ci)s : E ⊆ ∪i Ci, diam(Ci) ≤ δ }. Here μδs(E) can be thought of as an approximation to the "true" s-dimensional area/volume of E given by calculating the minimal s-dimensional area/volume of a covering of E by sets of diameter at most δ. As a function of increasing s, μs(E) is non-increasing. In fact, for all values of s, except possibly one, μs(E) is either 0 or +∞; this exceptional value is called the Hausdorff dimension of E, here denoted dimH(E). Intuitively speaking, μs(E) = +∞ for s < dimH(E) for the same reason as the 1-dimensional linear length of a 2-dimensional disc in the Euclidean plane is +∞; likewise, μs(E) = 0 for s > dimH(E) for the same reason as the 3-dimensional volume of a disc in the Euclidean plane is zero. The idea of a dimension function is to use different functions of diameter than just diam(C)s for some s, and to look for the same property of the Hausdorff measure being finite and non-zero. Definition Let (X, d) be a metric space and E ⊆ X. Let h : [0, +∞) → [0, +∞] be a function. Define μh(E) by μh(E) = limδ→0 μδh(E), where μδh(E) = inf { Σi h(diam(Ci)) : E ⊆ ∪i Ci, diam(Ci) ≤ δ }. Then h is called an (exact) dimension function (or gauge function) for E if μh(E) is finite and strictly positive. There are many conventions as to the properties that h should have: Rogers (1998), for example, requires that h should be monotonically increasing for t ≥ 0, strictly positive for t > 0, and continuous on the right for all t ≥ 0. Packing dimension Packing dimension is constructed in a very similar way to Hausdorff dimension, except that one "packs" E from inside with pairwise disjoint balls of diameter at most δ. Just as before, one can consider functions h : [0, +∞) → [0, +∞] more general than h(δ) = δs and call h an exact dimension function for E if the h-packing measure of E is finite and strictly positive. Example Almost surely, a sample path X of Brownian motion in the Euclidean plane has Hausdorff dimension equal to 2, but the 2-dimensional Hausdorff measure μ2(X) is zero. The exact dimension function h is given by the logarithmic correction h(r) = r2 log(1/r) log log log(1/r), i.e., with probability one, 0 < μh(X) < +∞ for a Brownian path X in R2. For Brownian motion in Euclidean n-space Rn with n ≥ 3, the exact dimension function is h(r) = r2 log log(1/r). References Dimension theory Fractals Metric geometry
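For readability, the same construction can be written out in standard LaTeX notation (a restatement of the definitions above, not an addition to them):
\[
  \mu_h(E) = \lim_{\delta \to 0} \mu_h^{\delta}(E),
  \qquad
  \mu_h^{\delta}(E) = \inf \Bigl\{ \sum_{i} h\bigl(\operatorname{diam} C_i\bigr) \;:\; E \subseteq \bigcup_{i} C_i,\ \operatorname{diam} C_i \le \delta \Bigr\}.
\]
Taking h(t) = t^s recovers the s-dimensional Hausdorff measure, and h is an exact dimension function for E precisely when 0 < \mu_h(E) < +\infty.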
Dimension function
[ "Mathematics" ]
698
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Fractals", "Mathematical relations" ]
11,320,943
https://en.wikipedia.org/wiki/Extensively%20drug-resistant%20tuberculosis
Extensively drug-resistant tuberculosis (XDR-TB) is a form of tuberculosis caused by bacteria that are resistant to some of the most effective anti-TB drugs. XDR-TB strains have arisen after the mismanagement of individuals with multidrug-resistant TB (MDR-TB). Almost one in four people in the world is infected with TB bacteria. Only when the bacteria become active do people become ill with TB. Bacteria become active as a result of anything that can reduce the person's immunity, such as HIV, advancing age, or some medical conditions. TB can usually be treated with a course of four standard, or first-line, anti-TB drugs (isoniazid, rifampin, pyrazinamide, and ethambutol). If these drugs are misused or mismanaged, multidrug-resistant TB (MDR-TB) can develop. MDR-TB takes longer to treat with second-line drugs (e.g., amikacin, kanamycin, or capreomycin), which are more expensive and have more side-effects. XDR-TB can develop when these second-line drugs are also misused or mismanaged and become ineffective. The World Health Organization (WHO) defines XDR-TB as MDR-TB that is resistant to at least one fluoroquinolone and a second-line injectable drug (amikacin, capreomycin, or kanamycin). XDR-TB raises concerns of a future TB epidemic with restricted treatment options, and jeopardizes the major gains made in TB control and progress on reducing TB deaths among people living with HIV/AIDS. It is therefore vital that TB control be managed properly and new tools developed to prevent, treat and diagnose the disease. The true scale of XDR-TB is unknown as many countries lack the necessary equipment and capacity to accurately diagnose it. By June 2008, 49 countries had confirmed cases of XDR-TB. By the end of 2017, 127 WHO Member States reported a total of 10,800 cases of XDR-TB, and 8.5% of cases of MDR-TB in 2017 were estimated to have been XDR-TB. In August 2019, the Food and Drug Administration (FDA) approved the use of pretomanid in combination with bedaquiline and linezolid for treating a limited and specific population of adult patients with extensively drug-resistant, treatment-intolerant or nonresponsive multidrug-resistant pulmonary TB. Symptoms and signs Symptoms of XDR-TB are no different from those of ordinary or drug-susceptible TB: a cough with thick, cloudy mucus (or sputum), sometimes with blood, for more than two weeks; fever, chills, and night sweats; fatigue and muscle weakness; weight loss; and in some cases shortness of breath and chest pain. A person with these symptoms does not necessarily have XDR-TB, but they should see a physician for diagnosis and a treatment plan. TB patients who are taking treatment and whose symptoms do not improve after a few weeks should inform their clinician or nurse. Transmission Like other forms of TB, XDR-TB is spread through the air. When a person with infectious TB coughs, sneezes, talks or spits, they propel TB germs, known as bacilli, into the air. XDR-TB cannot be spread by kissing, sharing food or drinks, or shaking someone's hand. The bacteria have the ability to stay in the air for several hours, and a person needs only to inhale a small number of them to be infected. People infected with TB bacilli will not necessarily become sick with the disease. The immune system "walls off" the TB bacilli which, protected by a thick waxy coat, can lie dormant for years.
The spread of TB bacteria depends on factors such as the number and concentration of infectious people in any one place together with the presence of people with a higher risk of being infected (such as those with HIV/AIDS). The risk of becoming infected increases with the longer the time that a previously uninfected person spends in the same room as the infectious case. The risk of spread increases where there is a high concentration of TB bacteria, such as can occur in closed environments like overcrowded houses, hospitals or prisons. The risk will be further increased if ventilation is poor, but reduced when mechanical filters are used. The risk of spread will be reduced and eventually eliminated if infectious patients receive proper treatment. Diagnosis Successful diagnosis of XDR-TB depends on the patient's access to quality health-care services. If TB bacteria are found in the sputum, the diagnosis of TB can be made in a day or two, but this finding will not be able to distinguish between drug-susceptible and drug-resistant TB. To evaluate drug susceptibility, the bacteria need to be cultivated and tested in a suitable laboratory. Final diagnosis in this way for TB, and especially for XDR-TB, may take from 6 to 16 weeks. The original method used to test for MDR-TB and XDR-TB was the Drug Susceptibility Testing (DST). DST is capable of determining how well four primary antitubercular drugs inhibit the growth of Mycobacterium tuberculosis. The four primary antitubercular drugs are Isoniazid, Rifampin, Ethambutol and Pyrazinamide. Drug Susceptibility testing is done by making a Lowenstein-Jensen medium plate and spreading the bacteria on the plate. Disks containing one of the four primary drugs are added to the plate. After weeks of allowing the bacteria to grow the plate is checked for clear areas around the disk. If there is a clear area, the drug has killed the bacteria and most likely the bacteria are not resistant to that drug. As Mycobacterium tuberculosis evolved new strains of resistant bacteria were being found such as XDR-TB. The problem was that primary DST was not suitable for testing bacteria strains that were extensively drug resistant. This problem was starting to be fixed when drug susceptibility tests started including not just the four primary drugs, but secondary drugs. This secondary test is known as Bactec MGIT 960 System. Although Bactec MGIT 960 System was accurate it was still slow at determining the level of resistance. Diagnosis of MDR and XDR-TB in children is challenging. With an increasing number of cases being reported worldwide there is a great need for better diagnostic tools available for pediatric patients. In recent years drug resistant tuberculosis testing has shown a lot of progress. Some studies have found an in-house assay that could rapidly detect resistance to drugs involved in the definition of XDR-TB directly from smear-positive specimens. The assay is called Reverse Line Blot Hybridization Assay also known as RLBH. The study showed that the results of RLBH were as accurate as other drug susceptibility tests, but at the same time did not take weeks to get results. RLBH testing only took three days to determine how resistant the strain of bacteria was. The current research has shown progress in the testing of drug resistance. 
A recent study found that a research technique known as direct nitrate reductase assay (D-NRA) showed good accuracy for the rapid and simultaneous detection of resistance to isoniazid (INH), rifampicin (RIF), kanamycin (KAN) and ofloxacin (OFL). D-NRA results were obtained in 16.9 days, comparatively less time than other drug susceptibility testing. At the same time the study noted that D-NRA is a low-cost technology, easy to set up in clinical laboratories and suitable for DST of M. tuberculosis in all smear-positive samples. Prevention Countries aim to prevent XDR-TB by ensuring that the work of their national TB control programmes, and of all practitioners working with people with TB, is carried out according to the International Standards for TB Care. These emphasize providing proper diagnosis and treatment to all TB patients, including those with drug-resistant TB; assuring regular, timely supplies of all anti-TB drugs; proper management of anti-TB drugs and providing support to patients to maximize adherence to prescribed regimens; caring for XDR-TB cases in a centre with proper ventilation, and minimizing contact with other patients, particularly those with HIV, especially in the early stages before treatment has had a chance to reduce the infectiousness. An effective disease control infrastructure is also necessary for the prevention of XDR tuberculosis, along with increased funding for research and strengthened laboratory facilities. Immediate detection through drug susceptibility testing is vital when trying to stop the spread of XDR tuberculosis. BCG vaccine The BCG vaccine prevents severe forms of TB in children, such as TB meningitis. It would be expected that BCG would have the same effect in preventing severe forms of TB in children, even if they were exposed to XDR-TB. The vaccine has been shown to be less effective at preventing the most common strains of TB and in blocking TB in adults. The effect of BCG against XDR-TB would therefore likely be very limited. Treatment The principles of treatment for MDR-TB and for XDR-TB are the same. Second-line drugs are more toxic than the standard anti-TB regimen and can cause a range of serious side-effects including hepatitis, depression, hallucinations, and deafness. Patients are often hospitalized for long periods, in isolation. In addition, second-line drugs are extremely expensive compared with the cost of drugs for standard TB treatment. XDR-TB is associated with a much higher mortality rate than MDR-TB, because of a reduced number of effective treatment options. A 2008 study in the Tomsk oblast of Russia reported that 14 out of 29 (48.3%) patients with XDR-TB successfully completed treatment. In 2018, the WHO reported that the treatment success rate for XDR-TB was 34% for the 2015 cohort, compared to 55% for MDR/RR-TB (2015 cohort), 77% for HIV-associated TB (2016 cohort), and 82% for TB (2016 cohort). A 2018 meta-analysis of 12,030 patients from 25 countries in 50 studies demonstrated that treatment success increases and mortality decreases when treatment includes bedaquiline, later-generation fluoroquinolones, and linezolid. One regimen for XDR-TB, called Nix-TB, a combination of pretomanid, bedaquiline, and linezolid, has shown promise in early clinical trials. Successful outcomes depend on a number of factors including the extent of the drug resistance, the severity of the disease and whether the patient's immune system is compromised. 
It also depends on access to laboratories that can provide early and accurate diagnosis so that effective treatment is provided as soon as possible. Effective treatment requires that all six classes of second-line drugs be available to clinicians who have special expertise in treating such cases. Enforced quarantine Carriers who refuse to wear a mask in public have been indefinitely involuntarily committed to regular jails, and cut off from contacting the world. Some have run away from the US, complaining of abuse. Epidemiology Studies have found that men have a higher risk of getting XDR-TB than women. One study showed that the male to female ratio was more than threefold, with statistical relevance (P<0.05) Studies done on the effect of age and XDR-TB have revealed that individuals who are 65 and up are less likely to get XDR-TB. A study in Japan found that XDR-TB patients are more likely to be younger. XDR-TB and HIV/AIDS TB is one of the most common infections in people living with HIV/AIDS. In places where XDR-TB is most common, people living with HIV are at greater risk of becoming infected with XDR-TB, compared with people without HIV, because of their weakened immunity. If there are a lot of HIV-infected people in these places, then there will be a strong link between XDR-TB and HIV. Fortunately, in most of the places with high rates of HIV, XDR-TB is not yet widespread. For this reason, the majority of people with HIV who develop TB will have drug-susceptible or ordinary TB, and can be treated with standard first-line anti-TB drugs. For those with HIV infection, treatment with antiretroviral drugs will likely reduce the risk of becoming infected with XDR-TB, just as it does with ordinary TB. A research study titled "TB Prevalence Survey and Evaluation of Access to TB Care in HIV-Infected and Uninfected TB Patients in Asembo and Gem, Western Kenya", says that HIV/AIDS is fueling large increases in TB incidence in Africa, and a large proportion of cases are not diagnosed. History XDR-TB is defined as TB that has developed resistance to at least rifampicin and isoniazid (resistance to these first line anti-TB drugs defines multidrug-resistant tuberculosis, or MDR-TB), as well as to any member of the quinolone family and at least one of the following second-line anti-TB injectable drugs: kanamycin, capreomycin, or amikacin. This definition of XDR-TB was agreed by the World Health Organization (WHO) Global Task Force on XDR-TB in October 2006. The earlier definition of XDR-TB as MDR-TB that is also resistant to three or more of the six classes of second-line drugs, is no longer used, but may be referred to in older publications. South African epidemic XDR-TB was first widely publicised following the report of an outbreak in South Africa in 2006. 53 patients in a rural hospital in Tugela Ferry were found to have XDR-TB of whom 52 died. The median survival from sputum specimen collection to death was only 16 days and that the majority of patients had never previously received treatment for tuberculosis suggesting that they had been newly infected by XDR-TB strains, and that resistance did not develop during treatment. This was the first epidemic for which the acronym XDR-TB was used, and although TB strains that fulfill the current definition have been identified retrospectively, this was the largest group of linked cases ever found. Since the initial report in September 2006, cases have now been reported in most provinces in South Africa. 
As of 16 March 2007, there were 314 cases reported, with 215 deaths. It is clear that the spread of this strain of TB is closely associated with a high prevalence of HIV and poor infection control; in other countries where XDR-TB strains have arisen, drug resistance has arisen from mismanagement of cases or poor patient compliance with drug treatment rather than being transmitted from person to person. It has since become clear that the problem has been around for much longer than health department officials have suggested, and is far more extensive. See also Multi-drug-resistant tuberculosis (MDR-TB) Totally drug-resistant tuberculosis (TDR-TB) Tuberculosis Tuberculosis treatment References External links World Health Organization Stop TB Department Stop TB Partnership The Global Plan to Stop TB Advocacy to Control TB Internationally - ACTION International Standards of TB Care Video: Drug-Resistant TB in Russia July 24, 2007, Woodrow Wilson Center event featuring Salmaan Keshavjee and Murray Feshbach XDRTB.org: Spread the Story. Stop the Disease. (photo documentary of XDR-TB by James Nachtwey) British Red Cross helps combat TB The Strange, Isolated Life of a Tuberculosis Patient in the 21st Century Population Services International Antibiotic-resistant bacteria Tuberculosis Infraspecific bacteria taxa
Extensively drug-resistant tuberculosis
[ "Biology" ]
3,315
[ "Bacteria", "Antibiotic-resistant bacteria" ]
11,321,017
https://en.wikipedia.org/wiki/Multidrug-resistant%20tuberculosis
Multidrug-resistant tuberculosis (MDR-TB) is a form of tuberculosis (TB) infection caused by bacteria that are resistant to treatment with at least two of the most powerful first-line anti-TB medications (drugs): isoniazid and rifampicin. Some forms of TB are also resistant to second-line medications, and are called extensively drug-resistant TB (XDR-TB). Tuberculosis is caused by infection with the bacterium Mycobacterium tuberculosis. Almost one in four people in the world are infected with TB bacteria. Only when the bacteria become active do people become ill with TB. Bacteria become active as a result of anything that can reduce the person's immunity, such as HIV, advancing age, diabetes or other immunocompromising illnesses. TB can usually be treated with a course of four standard, or first-line, anti-TB drugs (i.e., isoniazid, rifampicin, pyrazinamide and ethambutol). However, beginning with the first antibiotic treatment for TB in 1943, some strains of the TB bacteria developed resistance to the standard drugs through genetic changes (see mechanisms.) Currently the majority of multidrug-resistant cases of TB are due to one strain of TB bacteria called the Beijing lineage. This process accelerates if incorrect or inadequate treatments are used, leading to the development and spread of multidrug-resistant TB (MDR-TB). Incorrect or inadequate treatment may be due to use of the wrong medications, use of only one medication (standard treatment is at least two drugs), or not taking medication consistently or for the full treatment period (treatment is required for several months). Treatment of MDR-TB requires second-line drugs (i.e., fluoroquinolones, aminoglycosides, and others), which in general are less effective, more toxic and much more expensive than first-line drugs. Treatment schedules for MDR-TB involving fluoroquinolones and aminoglycosides can run for two years, compared to the six months of first-line drug treatment, and cost over US$100,000. If these second-line drugs are prescribed or taken incorrectly, further resistance can develop leading to XDR-TB. Resistant strains of TB are already present in the population, so MDR-TB can be directly transmitted from an infected person to an uninfected person. In this case a previously untreated person develops a new case of MDR-TB. This is known as primary MDR-TB, and is responsible for up to 75% of cases. Acquired MDR-TB develops when a person with a non-resistant strain of TB is treated inadequately, resulting in the development of antibiotic resistance in the TB bacteria infecting them. These people can in turn infect other people with MDR-TB. MDR-TB caused an estimated 600,000 new TB cases and 240,000 deaths in 2016 and MDR-TB accounts for 4.1% of all new TB cases and 19% of previously treated cases worldwide. Globally, most MDR-TB cases occur in South America, Southern Africa, India, China, and the former Soviet Union. Treatment of MDR-TB requires treatment with second-line drugs, usually four or more anti-TB drugs for a minimum of 6 months, and possibly extending for 18–24 months if rifampin resistance has been identified in the specific strain of TB with which the patient has been infected. Under ideal program conditions, MDR-TB cure rates can approach 70%. 
Origin Researchers hypothesize that an ancestor of Mycobacterium tuberculosis originated in the East African region approximately 3 million years ago, with modern strains mutating and arising 20,000 years ago; archaeologists have confirmed this with skeletal analysis of Egyptian remains. As migration out of East Africa increased, so did the spread of the disease, starting in Asia and then spreading towards the West and South America. Multidrug-resistant tuberculosis has a variety of causes, but resistance is usually due to treatment failure, drug combinations, coinfections, prior use of anti-TB medications, inadequate absorption of medication, underlying disease, and noncompliance with anti-TB drugs. Mechanism of drug resistance The TB bacterium has natural defenses against some drugs, and can acquire drug resistance through genetic mutations. The bacterium does not have the ability to transfer genes for resistance between organisms through plasmids (see horizontal transfer). Some mechanisms of drug resistance include: Cell wall: The cell wall of M. tuberculosis (TB) contains complex lipid molecules which act as a barrier to stop drugs from entering the cell; in order to lessen its vulnerability, M. tuberculosis can also stop medications from penetrating its cells. RIF resistance is linked to numerous genes and proteins that are involved in the formation of cell walls. Maintaining the M. tuberculosis cell wall is a major function of the PE11 protein, and it is hypothesized that upregulating the production of PE11 protein can decrease the quantity of antibiotics that enter M. tuberculosis. The expression of M. tuberculosis PE11 protein in M. smegmatis can generate raised resistance levels to several antibiotics, including RIF. Drug modifying & inactivating enzymes: The TB genome codes for enzymes (proteins) that inactivate drug molecules. These enzymes usually phosphorylate, acetylate, or adenylate drug compounds. Drug efflux systems: The TB cell contains molecular systems that actively pump drug molecules out of the cell. Mutations: Spontaneous mutations in the TB genome can alter proteins which are the targets of drugs, making the bacteria drug-resistant. One example is a mutation in the rpoB gene, which encodes the beta subunit of the bacterium's RNA polymerase enzyme. In non-resistant TB, rifampin binds the beta subunit of RNA polymerase and disrupts transcription elongation. Mutation in the rpoB gene changes the sequence of amino acids and the eventual conformation, or arrangement, of the beta subunit. In this case, rifampin can no longer bind or prevent transcription, and the bacterium is resistant. Other mutations make the bacterium resistant to other drugs. For example, there are many mutations that confer resistance to isoniazid (INH), including in the genes katG, inhA, ahpC and others. Amino acid replacements in the NADH binding site of InhA apparently result in INH resistance by preventing the inhibition of mycolic acid biosynthesis, which the bacterium uses in its cell wall. Mutations in the katG gene make the enzyme catalase peroxidase unable to convert INH to its biologically active form. Hence, INH is ineffective and the bacterium is resistant. The discovery of new molecular targets is essential to overcome drug-resistance problems. In some TB bacteria, the acquisition of these mutations can be explained by other mutations in the DNA recombination, recognition and repair machinery. 
Mutations in these genes allow the bacteria to have a higher overall mutation rate and to accumulate mutations that cause drug resistance more quickly. Extensively drug-resistant TB MDR-TB can become resistant to the major second-line TB drug groups: fluoroquinolones (moxifloxacin, ofloxacin) and injectable aminoglycoside or polypeptide drugs (amikacin, capreomycin, kanamycin). When MDR-TB is resistant to at least one drug from each group, it is classified as extensively drug-resistant tuberculosis (XDR-TB). WHO has revised the definitions of pre-XDR-TB and XDR-TB in 2021 as following: Pre-XDR-TB: TB caused by Mycobacterium tuberculosis (M. tuberculosis) strains that fulfill the definition of MDR/RR-TB and which are also resistant to any fluoroquinolone. XDR-TB: TB caused by Mycobacterium tuberculosis (M. tuberculosis) strains that fulfill the definition of MDR/RR-TB and which are also resistant to any fluoroquinolone and at least one additional Group A drug. The Group A drugs are currently levofloxacin or moxifloxacin, bedaquiline and linezolid, therefore XDR-TB is MDR/RR-TB that is resistant to a fluoroquinolone and at least one of bedaquiline or linezolid (or both). In a study of MDR-TB patients from 2005 to 2008 in various countries, 43.7% had resistance to at least one second-line drug. About 9% of MDR-TB cases are resistant to a drug from both classes and classified as XDR-TB. In the past 10 years TB strains have emerged in Italy, Iran, India, and South Africa which are resistant to all available first and second line TB drugs, classified as totally drug-resistant tuberculosis, though there is some controversy over this term. Increasing levels of resistance in TB strains threaten to complicate the current global public health approaches to TB control. New drugs are being developed to treat extensively resistant forms but major improvements in detection, diagnosis, and treatment will be needed. There have been reports of totally drug-resistant tuberculosis, but such strains of TB are not recognized by the WHO. Prevention There are several ways that drug resistance to TB, and drug resistance in general, can be prevented: Rapid diagnosis & treatment of TB: One of the greatest risk factors for drug-resistant TB is problems in treatment and diagnosis, especially in developing countries. If TB is identified and treated soon, drug resistance can be avoided. Completion of treatment: Previous treatment of TB is an indicator of MDR TB. If the patient does not complete their antibiotic treatment, or if the physician does not prescribe the proper antibiotic regimen, resistance can develop. Also, drugs that are of poor quality or less in quantity, especially in developing countries, contribute to MDR TB. Identifying and diagnosing patients with HIV/AIDS as soon as possible. They lack the immunity to fight the TB infection and are at great risk of developing drug resistance. Identifying contacts who could have contracted TB: family members, people in close contact, etc. Research: Much research and funding is needed in the diagnosis, prevention and treatment of TB and MDR TB. "Opponents of a universal tuberculosis treatment, reasoning from misguided notions of cost-effectiveness, fail to acknowledge that MDRTB is not a disease of poor people in distant places. The disease is infectious and airborne. Treating only one group of patients looks inexpensive in the short run, but will prove disastrous for all in the long run." 
Paul Farmer DOTS-Plus Community-based treatment programs such as DOTS-Plus, a MDR-TB-specialized treatment using the popular Directly Observed Therapy – Short Course (DOTS) initiative, have shown considerable success in the world. In these locales, these programs have proven to be a good option for proper treatment of MDR-TB in poor, rural areas. A successful example has been in Lima, Peru, where the program has seen cure rates of over 80%. However, the DOTS program administered in the Republic of Georgia uses passive case finding. This means that the system depends on patients coming to health care providers, without conducting compulsory screenings. As medical anthropologists like Erin Koch have shown, this form of implementation does not suit all cultural structures. They urge that the DOTS protocol be constantly reformed in the context of local practices, forms of knowledge and everyday life. Treatment Usually, multidrug-resistant tuberculosis can be cured with long treatments of second-line drugs, but these are more expensive than first-line drugs and have more adverse effects. The treatment and prognosis of MDR-TB are much more akin to those for cancer than to those for infection. MDR-TB has a mortality rate of about 15% with treatment, which further depends on a number of factors, including: How many drugs the organism is resistant to (the fewer the better) How many drugs the patient is given (patients treated with five or more drugs do better) The expertise and experience of the physician responsible How co-operative the patient is with treatment (treatment is arduous and long, and requires persistence and determination on the part of the patient) Whether the patient is HIV-positive or not (HIV co-infection is associated with increased mortality). The majority of patients with multidrug-resistant tuberculosis do not receive treatment, as they are found in underdeveloped countries or in poverty. Denial of treatment remains a difficult human rights issue, as the high cost of second-line medications often precludes those who cannot afford therapy. A study of cost-effective strategies for tuberculosis control supported three major policies. First, the treatment of smear-positive cases in DOTS programs must be the foundation of any tuberculosis control approach, and should be a basic practice for all control programs. Second, there is a powerful economic case for treating smear-negative and extra-pulmonary cases in DOTS programs along with treating smear-negative and extra-pulmonary cases in DOTS programs as a new WHO "STOP TB" approach and the second global plan for tuberculosis control. Last but not least, the study shows that a significant scaling-up of all interventions is needed in the next 10 years if the millennium development goal and related goals for tuberculosis control are to be achieved. If the case detection rate can be improved, this will guarantee that people who gain access to treatment facilities are covered and that coverage is widely distributed to people who do not now have access. In general, treatment courses are measured in months to years; MDR-TB may require surgery, and death rates remain high despite optimal treatment. However, good outcomes for patients are still possible. The treatment of MDR-TB must be undertaken by physicians experienced in the treatment of MDR-TB. Mortality and morbidity in patients treated in non-specialist centers are significantly higher than those of patients treated in specialist centers. 
Treatment of MDR-TB must be done on the basis of sensitivity testing: it is impossible to treat such patients without this information. When treating a patient with suspected MDR-TB, pending the result of laboratory sensitivity testing, the patient could be started on SHREZ (Streptomycin+ isonicotinyl Hydrazine+ Rifampicin+Ethambutol+ pyraZinamide) and moxifloxacin with cycloserine. There is evidence that previous therapy with a drug for more than a month is associated with diminished efficacy of that drug regardless of in vitro tests indicating susceptibility. Hence, a detailed knowledge of the treatment history of each patient is essential. In addition to the obvious risks (i.e., known exposure to a patient with MDR-TB), risk factors for MDR-TB include HIV infection, previous incarceration, failed TB treatment, failure to respond to standard TB treatment, and relapse following standard TB treatment. A gene probe for rpoB is available in some countries. This serves as a useful marker for MDR-TB, because isolated RMP resistance is rare (except when patients have a history of being treated with rifampicin alone). If the results of a gene probe (rpoB) are known to be positive, then it is reasonable to omit RMP and to use SHEZ+MXF+cycloserine. The reason for maintaining the patient on INH is that INH is so potent in treating TB that it is foolish to omit it until there is microbiological proof that it is ineffective (even though isoniazid resistance so commonly occurs with rifampicin resistance). For treatment of RR- and MDT-TB, WHO treatment guidelines are as follows: "a regimen with at least five effective TB medicines during the intensive phase is recommended, including pyrazinamide and four core second-line TB medicines – one chosen from Group A, one from Group B, and at least two from Group C3 (conditional recommendation, very low certainty in the evidence). If the minimum number of effective TB medicines cannot be composed as given above, an agent from Group D2 and other agents from Group D3 may be added to bring the total to five. It is recommended that the regimen be further strengthened with high-dose isoniazid and/or ethambutol (conditional recommendation, very low certainty in the evidence)." Medicines recommended are the following: Group A: Fluoroquinolones (levofloxacin, moxifloxicin), linezolid, bedaquiline Group B: Clofazimine, cycloserine/terizidone Group C: Other core second-line agents (ethambutol, delamanid, pyrazinamide, imipenem-cilastatin/meropenem, amikacin/streptomycin, ethionamide/prothionamide, p-aminosalicylic acid) For patients with RR-TB or MDR-TB, "not previously treated with second-line drugs and in whom resistance to fluoroquinolones and second-line injectable agents was excluded or is considered highly unlikely, a shorter MDR-TB regimen of 9–12 months may be used instead of the longer regimens (conditional recommendation, very low certainty in the evidence)." In general, resistance to one drug within a class means resistance to all drugs within that class, but a notable exception is rifabutin: Rifampicin-resistance does not always mean rifabutin-resistance, and the laboratory should be asked to test for it. It is possible to use only one drug within each drug class. If it is difficult finding five drugs to treat then the clinician can request that high-level INH-resistance be looked for. If the strain has only low-level INH-resistance (resistance at 0.2 mg/L INH, but sensitive at 1.0 mg/L INH), then high dose INH can be used as part of the regimen. 
When counting drugs, PZA and interferon count as zero; that is to say, when adding PZA to a four-drug regimen, another drug must be chosen to make five. It is not possible to use more than one injectable (STM, capreomycin or amikacin), because the toxic effect of these drugs is additive: If possible, the aminoglycoside should be given daily for a minimum of three months (and perhaps thrice weekly thereafter). Ciprofloxacin should not be used in the treatment of tuberculosis if other fluoroquinolones are available. As of 2008, Cochrane reports that trials of other fluoroquinolones are ongoing. While Rifampin is an effective drug, lack of adherence has led to relapse. This is why the use of various first-line drugs, along with developing new drugs that are specific towards drug-resistant strains, is essential. There are a number of new anti-TB medications that are currently in the developmental stage that are directed to treat drug resistant strains; a few of these drugs are PA-824 (now pretomanid), OPC-67683 (now delamanid), and R207910 (now bedaquiline), all of which are in Phase II of development. Pretomanid and delamanid are both in the nitroimidazole class and have mechanisms involving bioactive reductive activation. Bedaquiline is a diarylquinoline that has a different mechanism; this drug directly inhibits energy production, so this drug may be a better option because it may not require as long of a treatment course as other drugs. When it is not possible to find five drugs from the lists above; the drugs imipenem, co-amoxiclav, clofazimine, prochlorperazine, metronidazole have been used in desperation, though it is not certain whether they are effective at all. There is no intermittent regimen validated for use in MDR-TB, but clinical experience is that giving injectable drugs for five days a week (because there is no-one available to give the drug at weekends) does not seem to result in inferior results. Directly observed therapy helps to improve outcomes in MDR-TB and should be considered an integral part of the treatment of MDR-TB. Patients with MDR-TB should be isolated in negative-pressure rooms, if possible. Patients with MDR-TB should not be accommodated on the same ward as immunosuppressed patients (HIV-infected patients, or patients on immunosuppressive drugs). Careful monitoring of compliance with treatment is crucial to the management of MDR-TB (and some physicians insist on hospitalisation if only for this reason). Some physicians will insist that these patients remain isolated until their sputum is smear-negative, or even culture-negative (which may take many months, or even years). Keeping these patients in hospital for weeks (or months) on end may be a practical or physical impossibility, and the final decision depends on the clinical judgement of the physician treating that patient. The attending physician should make full use of therapeutic drug monitoring (in particular, of the aminoglycosides) both to monitor compliance and to avoid toxic effects. Response to treatment must be obtained by repeated sputum cultures (monthly if possible). Some supplements may be useful as adjuncts in the treatment of tuberculosis, but, for the purposes of counting drugs for MDR-TB, they count as zero (if four drugs are already in the regimen, it may be beneficial to add arginine or vitamin D or both, but another drug will be needed to make five). Supplements include: arginine (peanuts are a good source), vitamin D, Dzherelo, V5 Immunitor. On 28 December 2012, the U.S. 
Food and Drug Administration (FDA) approved bedaquiline (marketed as Sirturo by Johnson & Johnson) to treat multidrug-resistant tuberculosis, the first new treatment in 40 years. Sirturo is to be used in a combination therapy for patients who have failed standard treatment and have no other options. Sirturo is an adenosine triphosphate synthase (ATP synthase) inhibitor. The resurgence of tuberculosis in the United States, the advent of HIV-related tuberculosis, and the development of strains of TB resistant to the first-line therapies developed in recent decades serve to reinforce the thesis that Mycobacterium tuberculosis, the causative organism, makes its own preferential option for the poor. The simple truth is that almost all tuberculosis deaths result from a lack of access to existing effective therapy. Treatment success rates remain unacceptably low globally with variation between regions. 2016 data published by the WHO reported treatment success rates of multidrug-resistant TB globally. For those started on treatment for multidrug-resistant TB 56% successfully completed treatment, either treatment course completion or eradication of disease; 15% of those died while in treatment; 15% were lost to follow-up; 8% had treatment failure and there was no data on the remaining 6%. Treatment success rate was highest in the World Health Organization Mediterranean region at 65%. Treatment success rates were lower than 50% in Ukraine, Mozambique, Indonesia and India. Areas with poor TB surveillance infrastructure had higher rates of loss to follow-up of treatment. 57 countries reported outcomes for patients started on extreme-drug resistant TB, this included 9258 patients. 39% completed treatment successfully, 26% of patients died and treatment failed for 18%. 84% of the extreme drug resistant cohort was made up of only three countries; India, Russian Federation and Ukraine. Shorter treatment regimes for MDR-TB have been found to be beneficial having higher treatment success rates. Surgery In cases of extremely resistant disease, surgery to remove infection portions of the lung is, in general, the final option. Early surgical treatments beginning in the 19th century include inducing lung collapse, as standing tissue heals faster than tissue in use, called artificial pneumothorax. Shrinking the lung cavity, thoracoplasty, to fill void space caused by tuberculosis damage was done by either removing ribs, raising the diaphragm, or implanting fluids or solid materials into lung cavity as a less invasive alternative to artificial pneumothorax. These treatments fell out of favor with the invention anti-tuberculosis drugs in the mid-20th century and have not seen a revival with MDR-TB, except for thoracoplasty done with implanted muscle tissue. Surgically removing portions of the lung, called lung resectioning, was a mostly theoretical possibility until the improved surgical tools and techniques of the mid-20th century. As of 2016, surgery is typically performed after 6–8 months of unsuccessful anti-TB treatment by other means. Surgical treatment has a high success rate, upwards of 80%, but a similarly high failure rate of upwards of 10% including the risk of death. Surgery is first focused on stabilizing cavities, or "destroyed lung", caused by the disease, followed by the removal of tuberculomas, and then the removal of fluid and pus build up. 
Tuberculosis and lung cancer can coexist in patients as a possible complication, however the surgical therapies are similar as lung cancer surgery has its roots in aforementioned tuberculosis treatments. Epidemiology Cases of MDR tuberculosis have been reported in every country surveyed. MDR-TB most commonly develops in the course of TB treatment, and is most commonly due to doctors giving inappropriate treatment, or patients missing doses or failing to complete their treatment. Because MDR tuberculosis is an airborne pathogen, persons with active, pulmonary tuberculosis caused by a multidrug-resistant strain can transmit the disease if they are alive and coughing. TB strains are often less fit and less transmissible, and outbreaks occur more readily in people with weakened immune systems (e.g., patients with HIV). Outbreaks among non-immunocompromised healthy people do occur, but are less common. As of 2013, 3.7% of new tuberculosis cases have MDR-TB. Levels are much higher in those previously treated for tuberculosis – about 20%. WHO estimates that there were about 0.5 million new MDR-TB cases in the world in 2011. About 60% of these cases occurred in Brazil, China, India, the Russian Federation and South Africa alone. In Moldova, the crumbling health system has led to the rise of MDR-TB. In 2013, the Mexico–United States border was noted to be "a very hot region for drug resistant TB", though the number of cases remained small. A study in Los Angeles, California, found that only 6% of cases of MDR-TB were clustered. Likewise, the appearance of high rates of MDR-TB in New York City in the early 1990s was associated with the explosion of AIDS in that area. In New York City, a report issued by city health authorities states that fully 80 percent of all MDR-TB cases could be traced back to prisons and homeless shelters. When patients have MDR-TB, they require longer periods of treatment. Several of the less powerful second-line drugs, which are required to treat MDR-TB, are also more toxic, with side effects such as nausea, abdominal pain, and even psychosis. The Partners in Health team had treated patients in Peru who were sick with strains that were resistant to ten and even twelve drugs. Most such patients require adjuvant surgery for any hope of a cure. Somalia MDR-TB is widespread in Somalia, where 8.7% of newly discovered TB cases are resistant to Rifampicin and Isoniazid, in patients which were treated previously the share was 47%. Refugees from Somalia brought an until then unknown variant of MDR tuberculosis with them to Europe. A few number of cases in four different countries were considered by the European Centre for Disease Prevention and Control to pose no risk to the native population. Russian prisons One of the so-called "hot-spots" of drug-resistant tuberculosis is within the Russian prison system. Infectious disease researchers Nachega & Chaisson report that 10% of the one million prisoners within the system have active TB. One of their studies found that 75% of newly diagnosed inmates with TB are resistant to at least one drug; 40% of new cases are multidrug-resistant. In 1997, TB accounted for almost half of all Russian prison deaths, and as Bobrik et al. point out in their public health study, the 90% reduction in TB incidence contributed to a consequential fall in the prisoner death rate in the years following 1997. Baussano et al. 
articulate that concerning statistics like these are especially worrisome because spikes in TB incidence in prisons are linked to corresponding outbreaks in surrounding communities. Additionally, rising rates of incarceration, especially in Central Asian and Eastern European countries like Russia, have been correlated with higher TB rates in civilian populations. Even as the DOTS program is expanded throughout Russian prisons, researchers such as Shin et al. have noted that wide-scale interventions have not had their desired effect, especially with regard to the spread of drug-resistant strains of TB. Contributing factors There are several elements of the Russian prison system that enable the spread of MDR-TB and heighten its severity. Overcrowding in prisons is especially conducive to the spread of tuberculosis; an inmate in a prison hospital has (on average) 3 meters of personal space, and an inmate in a correctional colony has 2 meters. Specialized hospitals and treatment facilities within the prison system, known as TB colonies, are intended to isolate infected prisoners to prevent transmission; however, as Ruddy et al. demonstrate, there are not enough of these colonies to sufficiently protect staff and other inmates. Additionally, many cells lack adequate ventilation, which increases likelihood of transmission. Bobrik et al. have also noted food shortages within prisons, which deprive inmates of the nutrition necessary for healthy functioning. Comorbidity of HIV within prison populations has also been shown to worsen health outcomes. Nachega & Chaisson articulate that while HIV-infected prisoners are not more susceptible MDR-TB infection, they are more likely to progress to serious clinical illness if infected. According to Stern, HIV infection is 75 times more prevalent in Russian prison populations than in the civilian population. Therefore, prison inmates are both more likely to become infected with MDR-TB initially and to experience severe symptoms because of previous exposure to HIV. Shin et al. emphasize another factor in MDR-TB prevalence in Russian prisons: alcohol and substance use. Ruddy et al. showed that risk for MDR-TB is three times higher among recreational drug users than non-users. Shin et al.'s study demonstrated that alcohol usage was linked to poorer outcomes in MDR-TB treatment; they also noted that a majority of subjects within their study (many of whom regularly used alcohol) were nevertheless cured by their aggressive treatment regimen. Non-compliance with treatment plans is often cited as a contributor to MDR-TB transmission and mortality. Indeed, of the 80 newly released TB-infected inmates in Fry et al.'s study, 73.8% did not report visiting a community dispensary for further treatment. Ruddy et al. cite release from facilities as one of the main causes of interruption in prisoner's TB treatment, in addition to non-compliance within the prison and upon reintegration into civilian life. Fry et al.'s study also listed side effects of TB treatment medications (especially in HIV positive individuals), financial worries, housing insecurities, family problems, and fear of arrest as factors that prevented some prisoners from properly adhering to TB treatment. They also note that some researchers have argued that the short-term gains TB-positive prisoners receive, such as better food or work exclusion, may dis-incentivize becoming cured. In their World Health Organization article, Gelmanova et al. 
posit that non-adherence to TB treatment indirectly contributes to bacterial resistance. Although ineffective or inconsistent treatment does not "create" resistant strains, mutations within the high bacterial load in non-adherent prisoners can cause resistance. Nachega & Chaisson argue that inadequate TB control programs are the strongest driver of MDR-TB incidence. They note that prevalence of MDR-TB is 2.5 times higher in areas of poorly controlled TB. Russian-based therapy (i.e., not DOTS) has been criticized by Kimerling et al. as "inadequate" in properly controlling TB incidence and transmission. Bobrik et al. note that treatment for MDR-TB is equally inconsistent; the second-line drugs used to treat the prisoners lack specific treatment guidelines, infrastructure, training, or follow-up protocols for prisoners reentering civilian life. Policy impacts As Ruddy et al. note, Russia's early 2000s penal reforms could greatly reduce the number of inmates inside prison facilities and thus increase the number of ex-convicts integrated into civilian populations. Because the incidence of MDR-TB is strongly predicted by past imprisonment, the health of Russian society will be greatly impacted by this change. Formerly incarcerated Russians will re-enter civilian life and remain within that sphere; as they live as civilians, they will infect others with the contagions they were exposed to in prison. Researcher Vivian Stern argues that the risk of transmission from prison populations to the general public calls for an integration of prison healthcare and national health services to better control both TB and MDR-TB. While second-line drugs necessary for treating MDR-TB are arguably more expensive than a typical regimen of DOTS therapy, infectious disease specialist Paul Farmer posits that the outcome of leaving infected prisoners untreated could cause a massive outbreak of MDR-TB in civilian populations, thereby inflicting a heavy toll on society. Additionally, as MDR-TB spreads, the threat of the emergence of totally-drug-resistant TB becomes increasingly apparent. See also 2007 tuberculosis scare Drug resistance MRSA Vancomycin-resistant enterococcus (VRE) Totally drug-resistant tuberculosis (TDR-TB) Medicines Patent Pool References Notes Further reading External links Video: Drug-Resistant TB in Russia 24 July 2007, Woodrow Wilson Center event featuring Salmaan Keshavjee and Murray Feshbach MDR-TB (DOTS Plus) protocol followed under RNTCP in India (PDF) "The Strange, Isolated Life of a Tuberculosis Patient in the 21st Century", Buzzfeed Antibiotic-resistant bacteria Pharmaceuticals policy Tuberculosis
Multidrug-resistant tuberculosis
[ "Biology" ]
7,215
[ "Bacteria", "Antibiotic-resistant bacteria" ]
11,321,984
https://en.wikipedia.org/wiki/Rensch%27s%20rule
Rensch's rule is a biological rule on allometrics, concerning the relationship between the extent of sexual size dimorphism and which sex is larger. Across species within a lineage, size dimorphism increases with increasing body size when the male is the larger sex, and decreases with increasing average body size when the female is the larger sex. The rule was proposed by the evolutionary biologist Bernhard Rensch in 1950. After controlling for confounding factors such as evolutionary history, an increase in average body size makes the difference in body size larger if the species has larger males, and smaller if it has larger females. Some studies propose that this is due to sexual bimaturism, which causes male traits to diverge faster and develop for a longer period of time. The correlation between sexual size dimorphism and body size is hypothesized to be a result of an increase in male-male competition in larger species, a result of limited environmental resources, fuelling aggression between males over access to breeding territories and mating partners. Phylogenetic lineages that appear to follow this rule include primates, pinnipeds, and artiodactyls. This rule has rarely been tested on parasites. A 2019 study showed that ectoparasitic philopterid and menoponid lice comply with it, while ricinid lice exhibit a reversed pattern. References Animal size Biological rules Sexual dimorphism
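One common way the allometric pattern above is quantified (not spelled out in the article itself) is to fit a log–log regression of male size on female size across species, with a slope greater than 1 read as consistent with Rensch's rule. The following is a minimal sketch of that calculation in Python; the five species values are hypothetical, and the ordinary least-squares fit stands in for the reduced major axis and phylogenetically controlled methods normally used in published tests.
import numpy as np

# Hypothetical mean body sizes (e.g., grams) for five species in one lineage.
male_size = np.array([12.0, 20.0, 35.0, 60.0, 100.0])
female_size = np.array([10.0, 15.0, 24.0, 38.0, 58.0])

# Sexual size dimorphism expressed as the male/female ratio per species.
ssd = male_size / female_size

# Allometric test: slope of log(male size) against log(female size).
# A slope above 1.0 means dimorphism grows with body size, the pattern
# predicted by Rensch's rule for male-larger lineages.
slope, intercept = np.polyfit(np.log10(female_size), np.log10(male_size), 1)

print("SSD ratios:", np.round(ssd, 2))
print("log-log slope:", round(slope, 2))
In this toy data the male/female ratio rises from 1.2 to about 1.7 as average body size increases, and the fitted slope comes out above 1, illustrating the male-biased case the rule describes.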
Rensch's rule
[ "Physics", "Biology" ]
296
[ "Sexual dimorphism", "Sex", "Organism size", "Biological rules", "Asymmetry", "Animal size", "Symmetry" ]
11,322,015
https://en.wikipedia.org/wiki/IPython
IPython (Interactive Python) is a command shell for interactive computing in multiple programming languages, originally developed for the Python programming language, that offers introspection, rich media, shell syntax, tab completion, and history. IPython provides the following features: Interactive shells (terminal and Qt-based). A browser-based notebook interface with support for code, text, mathematical expressions, inline plots and other media. Support for interactive data visualization and use of GUI toolkits. Flexible, embeddable interpreters to load into one's own projects. Tools for parallel computing. IPython is a NumFOCUS fiscally sponsored project. Parallel computing IPython is based on an architecture that provides parallel and distributed computing. IPython enables parallel applications to be developed, executed, debugged and monitored interactively, hence the I (Interactive) in IPython. This architecture abstracts out parallelism, enabling IPython to support many different styles of parallelism including: Single program, multiple data (SPMD) parallelism Multiple program, multiple data (MPMD) parallelism Message passing using MPI Task parallelism Data parallelism Combinations of these approaches Custom user-defined approaches With the release of IPython 4.0, the parallel computing capabilities were made optional and released under the ipyparallel Python package. Most of the capabilities of ipyparallel are now also covered by more mature libraries such as Dask. IPython frequently draws from SciPy stack libraries like NumPy and SciPy, often installed alongside one of many Scientific Python distributions. IPython provides integration with some libraries of the SciPy stack, notably matplotlib, producing inline graphs when used with the Jupyter notebook. Python libraries can implement IPython-specific hooks to customize rich object display. SymPy, for example, renders mathematical expressions as LaTeX when used within an IPython context, and Pandas dataframes use an HTML representation. Other features IPython allows non-blocking interaction with Tkinter, PyGTK, PyQt/PySide and wxPython (the standard Python shell only allows interaction with Tkinter). IPython can interactively manage parallel computing clusters using asynchronous status callbacks and/or MPI. IPython can also be used as a system shell replacement. Its default behavior is largely similar to Unix shells, but it allows customization and the flexibility of executing code in a live Python environment. End of Python 2 support The IPython 5.x (long-term support) series is the last version of IPython to support Python 2. The IPython project pledged not to support Python 2 beyond 2020, being one of the first projects to join the Python 3 Statement; the 6.x series is compatible only with Python 3. It is still possible, though, to run an IPython kernel and a Jupyter Notebook server on different Python versions, allowing users to access Python 2 from newer versions of Jupyter. Project Jupyter In 2014, IPython creator Fernando Pérez announced a spin-off project from IPython called Project Jupyter. IPython continued to exist as a Python shell and kernel for Jupyter, but the notebook interface and other language-agnostic parts of IPython were moved under the Jupyter name. Jupyter is language agnostic and its name is a reference to core programming languages supported by Jupyter, which are Julia, Python, and R. 
Jupyter Notebook (formerly IPython Notebook) is a web-based interactive computational environment for creating, executing, and visualizing Jupyter notebooks. It is similar to the notebook interface of other programs such as Maple, Mathematica, and SageMath, a computational interface style that originated with Mathematica in the 1980s. It supports execution environments (aka kernels) in dozens of languages. By default Jupyter Notebook ships with the IPython kernel, but there are over 100 Jupyter kernels as of May 2018. In the media IPython has been mentioned in the popular computing press and other popular media, and it has a presence at scientific conferences. For scientific and engineering work, it is often presented as a companion tool to matplotlib. Grants and awards Beginning 1 January 2013, the Alfred P. Sloan Foundation announced that it would support IPython development for two years. On 23 March 2013, Fernando Perez was awarded the Free Software Foundation Advancement of Free Software award for IPython. In August 2013, Microsoft made a donation of $100,000 to sponsor IPython's continued development. In January 2014, it won the Jolt Productivity Award from Dr. Dobb's in the category of coding tools. In July 2015, the project won a funding of $6 million from Gordon and Betty Moore Foundation, Alfred P. Sloan Foundation and Leona M. and Harry B. Helmsley Charitable Trust. In May 2018, it was awarded the 2017 ACM Software System Award. It is the largest team to have won the award. See also Python (programming language) Electronic lab notebook SageMath Project Jupyter References External links Inline graphs Project Jupyter Command shells Free mathematics software Free software programmed in Python Notebook interface Python (programming language) development tools Software using the BSD license
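As a concrete illustration of the rich display hooks mentioned above, here is a minimal sketch; the Fraction class is a made-up example rather than part of IPython, but the _repr_html_ and _repr_latex_ method names are the hooks IPython and Jupyter actually look for when deciding how to render an object.
class Fraction:
    """Toy value object that advertises rich representations to IPython."""
    def __init__(self, num, den):
        self.num, self.den = num, den

    def __repr__(self):
        # Plain-text fallback used by the terminal IPython shell.
        return f"{self.num}/{self.den}"

    def _repr_html_(self):
        # Picked up by the Jupyter notebook for HTML output.
        return f"<sup>{self.num}</sup>&frasl;<sub>{self.den}</sub>"

    def _repr_latex_(self):
        # Picked up where LaTeX/MathJax rendering is available.
        return rf"$\frac{{{self.num}}}{{{self.den}}}$"

# Evaluating Fraction(3, 4) as the last expression in a notebook cell makes
# Jupyter choose the richest representation available (HTML or LaTeX), while
# the terminal shell falls back to the plain __repr__.
Fraction(3, 4)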
IPython
[ "Mathematics" ]
1,105
[ "Free mathematics software", "Mathematical software" ]
11,324,348
https://en.wikipedia.org/wiki/Fourcault%20process
The Fourcault process is a method of manufacturing plate glass. First developed in Belgium by (1862–1919) during the early 1900s, the process was used globally. Fourcault is an example of a "vertical draw" process, in that the glass is drawn against gravity in an upward direction. Gravity forces influence parts of the process. Process The Fourcault process requires a "pit" or drawing area and an assembly of machines to draw up the ribbon of glass while performing actions upon it that ensure desired quality and process yields. Today most glass manufacture has a "hot end" where the products are made. Fourcault is no exception. The action in Fourcault happens "at the draw", or area where the glass is taken from a liquid state into the start of the process needed to make it into flat glass. At the bottom of the draw is the "pit" or place where the molten glass is sufficiently cooled to be close to forming temperature. The cooling process uses a device known as a "canal", a box-shaped structure which conveys the glass from the refining area to the pit. The canal links the pit with the "refining" area, a section of the glass furnace that removes gas bubbles and other sources of imperfection. Since refining requires much higher temperatures to release gas bubbles than those required to form the glass it is not possible to draw directly from the refining area, hence the need for canals. Forming The Fourcault process uses a ceramic die to shape fused (or molten) glass into a ribbon of rectangular cross section. The die, known as a debiteuse, floats in the molten glass inside of the pit to a prescribed depth which pushes a part of the molten glass slightly above the top surface of the die. A slot is cut through the center of the debiteuse, which is shaped to produce the best quality of glass. The debiteuse is the starting point of the vertical draw, where the glass begins to change from a hot syrupy mass into useful flat glass. We will call the glass from the point of the debiteuse until it is cut a "ribbon". The base of the ribbon is shielded from heat radiation from the fused glass so that it continues to hold the shape imparted to it by the debiteuse. This cooling preserves the rectangular cross section of the drawn glass by cooling the ribbon glass below the temperature where it would collapse into a column or break back into the melted glass. It is especially important to shield the outside edges of the ribbon from heat so that they are firmer and will hold the rest of the ribbon in a proper shape. In some cases manufacturers will allow the edges to form thicker "bulbs", which are removed after final cutting. Immediately after being drawn the ribbon is cooled using mechanical coolers so that it maintains its rectangular shape in two dimensions, but assumes a ribbon like structure that extends down into the Debi and upwards into a drawing assembly. This mechanical cooling allows the ribbon to hold its integrity. In the author's experience the mechanical coolers used water, contained in specially shaped radiators, to remove heat radiated by the ribbon. Sometimes a mild vacuum is applied to the ribbon in this early part of the process since mechanical cooling can induce air currents which impact upon surface quality. Quenching Some manufacturers also will apply sulfur dioxide gas during the draw in order to change the chemistry of the glass on the surface. By changing the chemistry it is possible to affect the surface characteristics of the glass, improving its quality and durability. 
Glass rollers hold the ribbon throughout various parts of the process, supporting its weight and continuing the drawing process. The process continues as the ribbon is drawn upwards into a chimney like structure, where it is quenched or rapidly cooled. When the ribbon reaches the end of the process it is scored, or cut, and then removed for further processing in discrete sheets of flat glass. The "bulb edges" are recycled as cullet (flawed glass which is remelted) or were resold for shelving or displays. Sometimes flawed parts of the sheets were removed, leaving behind decent quality flat glass. Operations Time, speed and spacing of the various phases of the process are critical factors in the Fourcault process. Fourcault process machine operators require experience in order to judge placement of the die, location of various parts of the process, and rates of draw. These must be balanced against glass quality and the age of the draw. As the draw continues the glass in the pit grows cooler and cooler, eventually leading to failures or diminished quality. The draw must be stopped, the pit must be "heated back" and then the process can continue anew. Glass chemistry has a huge impact on the process since it controls the melting, forming and annealing temperatures, liquidus temperature (point where various chemicals that make up the glass start to crystallize out of the glass) and rates of change of characteristics of the glass itself. Occasionally the ribbon will break or crack, leading to failure of the drawing process. Such breaks, known as "checks" can be alleviated by using proper operating parameters. Sometimes an expedient measure, using a portable source of heat, can be used to make the checks migrate to the edge of the ribbon where they disappear. The author has even seen crude torches made of wood which can migrate the checks. The resultant product is a form of flat glass which is suitable for lesser quality uses. Due to process instabilities Fourcault process glass can have waves, seeds (small gas bubbles) or stones (undissolved materials). This distorts the image seen through the glass. Fourcault glass is still being made as an architectural glass for historical restoration of buildings. In terms of economics and product quality the Fourcault process has been supplanted in many countries by the Pilkington developed "Float" process. The Float process lets the molten glass settle on top of a pool of liquid tin, so that gravity creates a flat sheet. Due to various chemistry and physical aspects of window glass the Pilkington Float process produces a vastly superior product. References External links Process and portrait on a 1955 stamp of Belgium Glass production
Fourcault process
[ "Materials_science", "Engineering" ]
1,264
[ "Glass engineering and science", "Glass production" ]
11,324,792
https://en.wikipedia.org/wiki/Tensor-hom%20adjunction
In mathematics, the tensor-hom adjunction is the statement that the tensor product functor − ⊗ X and the hom-functor Hom(X, −) form an adjoint pair: Hom(Y ⊗ X, Z) ≅ Hom(Y, Hom(X, Z)). This is made more precise below. The order of terms in the phrase "tensor-hom adjunction" reflects their relationship: tensor is the left adjoint, while hom is the right adjoint. General statement Say R and S are (possibly noncommutative) rings, and consider the right module categories Mod-R and Mod-S (an analogous statement holds for left modules). Fix an (R,S)-bimodule X and define functors F : Mod-R → Mod-S and G : Mod-S → Mod-R as follows: F(Y) = Y ⊗_R X for Y in Mod-R, and G(Z) = Hom_S(X, Z) for Z in Mod-S. Then F is left adjoint to G. This means there is a natural isomorphism Hom_S(Y ⊗_R X, Z) ≅ Hom_R(Y, Hom_S(X, Z)). This is actually an isomorphism of abelian groups. More precisely, if Y is an (A,R)-bimodule and Z is a (B,S)-bimodule, then this is an isomorphism of (B,A)-bimodules. This is one of the motivating examples of the structure in a closed bicategory. Counit and unit Like all adjunctions, the tensor-hom adjunction can be described by its counit and unit natural transformations. Using the notation from the previous section, the counit ε : FG → 1 has components given by evaluation: for f ⊗ x in Hom_S(X, Z) ⊗_R X, ε_Z(f ⊗ x) = f(x). The components of the unit η : 1 → GF are defined as follows: for y in Y, η_Y(y) is the right S-module homomorphism X → Y ⊗_R X given by η_Y(y)(x) = y ⊗ x. The counit and unit equations can now be explicitly verified. For Y in Mod-R, ε_{F(Y)} ∘ F(η_Y) is given on simple tensors y ⊗ x of F(Y) = Y ⊗_R X by ε_{F(Y)}(η_Y(y) ⊗ x) = η_Y(y)(x) = y ⊗ x, so it is the identity. Likewise, for Z in Mod-S and f in G(Z) = Hom_S(X, Z), η_{G(Z)}(f) is the right S-module homomorphism defined by x ↦ f ⊗ x, and therefore G(ε_Z)(η_{G(Z)}(f))(x) = ε_Z(f ⊗ x) = f(x), so G(ε_Z) ∘ η_{G(Z)} is also the identity. The Ext and Tor functors The Hom functor commutes with arbitrary limits, while the tensor product functor commutes with arbitrary colimits that exist in their domain category. However, in general, Hom(X, −) fails to commute with colimits, and − ⊗ X fails to commute with limits; this failure occurs even among finite limits or colimits. This failure to preserve short exact sequences motivates the definition of the Ext functor and the Tor functor. In arithmetic We can illustrate the tensor-hom adjunction in the category of functions of finite sets. Given a set B, its Hom functor Hom(B, −) takes any set C to the set of functions from B to C. The isomorphism class of this set of functions is the natural number c^b (writing a, b, c for the cardinalities of A, B, C). Similarly, the tensor product functor − × B takes a set A to its cartesian product with B. Its isomorphism class is thus the natural number ab. This allows us to interpret the isomorphism of hom-sets Hom(A × B, C) ≅ Hom(A, Hom(B, C)) that universally characterizes the tensor-hom adjunction as the categorification of the remarkably basic law of exponents c^(ab) = (c^b)^a. See also Currying Eckmann–Hilton duality Ext functor Tor functor Change of rings References Adjoint functors Commutative algebra
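The exponent-law reading of the adjunction above can be checked concretely for small finite sets. The following Python sketch is illustrative and not part of the article; it enumerates function sets explicitly and confirms that Hom(A × B, C) and Hom(A, Hom(B, C)) have the same size, namely c^(ab) = (c^b)^a.

from itertools import product

def functions(domain, codomain):
    # All functions domain -> codomain, each encoded as a dict
    domain, codomain = list(domain), list(codomain)
    return [dict(zip(domain, values)) for values in product(codomain, repeat=len(domain))]

A, B, C = [0, 1], ["x", "y", "z"], [True, False]

# Hom(A x B, C): functions out of the cartesian product
uncurried = functions(list(product(A, B)), C)
# Hom(A, Hom(B, C)): functions into a function set; inner functions encoded as frozensets of pairs
inner = [frozenset(f.items()) for f in functions(B, C)]
curried = functions(A, inner)

# |C|^(|A||B|) == (|C|^|B|)^|A|, the law of exponents c^(ab) = (c^b)^a
assert len(uncurried) == len(curried) == len(C) ** (len(A) * len(B))
print(len(uncurried), len(curried))  # both 64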
Tensor-hom adjunction
[ "Mathematics" ]
562
[ "Fields of abstract algebra", "Commutative algebra" ]
14,972,851
https://en.wikipedia.org/wiki/HCN%20channel
Hyperpolarization-activated cyclic nucleotide–gated (HCN) channels are integral membrane proteins that serve as nonselective voltage-gated cation channels in the plasma membranes of heart and brain cells. HCN channels are sometimes referred to as pacemaker channels because they help to generate rhythmic activity within groups of heart and brain cells. HCN channels are activated by membrane hyperpolarization, are permeable to Na+ and K+, and are constitutively open at voltages near the resting membrane potential. HCN channels are encoded by four genes (HCN1, 2, 3, 4) and are widely expressed throughout the heart and the central nervous system. The current through HCN channels, designated If or Ih, plays a key role in the control of cardiac and neuronal rhythmicity and is called the pacemaker current or "funny" current. Expression of single isoforms in heterologous systems such as human embryonic kidney (HEK) cells, Chinese hamster ovary (CHO) cells and Xenopus oocytes yields homotetrameric channels able to generate ion currents with properties similar to those of the native If/Ih current, but with quantitative differences in the voltage dependence, activation/deactivation kinetics and sensitivity to the nucleotide cyclic AMP (cAMP): HCN1 channels have a more positive threshold for activation, faster activation kinetics, and a lower sensitivity to cAMP, while HCN4 channels gate slowly and are strongly sensitive to cAMP. HCN2 and HCN3 have intermediate properties. Structure Hyperpolarization-activated and cyclic nucleotide–gated (HCN) channels belong to the superfamily of voltage-gated K+ (Kv) and cyclic nucleotide–gated (CNG) channels. HCN channels are thought to consist of four either identical or non-identical subunits that are integrally embedded in the cell membrane to create an ion-conducting pore. Each subunit comprises six membrane-spanning (S1–6) domains which include a putative voltage sensor (S4) and a pore region between S5 and S6 carrying the GYG triplet signature of K+-permeable channels, and a cyclic nucleotide-binding domain (CNBD) in the C-terminus. HCN isoforms are highly conserved in their core transmembrane regions and cyclic nucleotide binding domain (80–90% identical), but diverge in their amino- and carboxy-terminal cytoplasmic regions. HCN channels are regulated by both intracellular and extracellular molecules, but most importantly, by cyclic nucleotides (cAMP, cGMP, cCMP). Binding of cyclic nucleotides lowers the threshold potential of HCN channels, thus activating them. cAMP is a primary agonist of HCN2, while cGMP and cCMP may also bind to it. All three, however, are potent agonists. Cardiac function HCN4 is the main isoform expressed in the sinoatrial node, but low levels of HCN1 and HCN2 have also been reported. The current through HCN channels, called the pacemaker current (If), plays a key role in the generation and modulation of cardiac rhythmicity, as these channels are responsible for the spontaneous depolarization in pacemaker action potentials in the heart. HCN4 isoforms are regulated by cCMP and cAMP, and these molecules are agonists at If. Function in the nervous system All four HCN subunits are expressed in the brain. In addition to their proposed roles in pacemaking rhythmic or oscillatory activity, HCN channels may control the way that neurons respond to synaptic input. Initial studies suggest roles for HCN channels in sour taste, coordinated motor behavior and aspects of learning and memory. 
Clinically, there is evidence that HCN channels play roles in epilepsy and neuropathic pain. HCN channels have been shown to be important for activity-dependent mechanisms of olfactory sensory neuron growth. HCN1 and 2 channels have been found in dorsal root ganglia, basal ganglia, and the dendrites of neurons in the hippocampus. Human cortical neurons have been found to have a particularly high amount of HCN1 channel expression in all layers. Studies of HCN channel trafficking along dendrites in the hippocampus of rats have shown that HCN channels are quickly shuttled to the surface in response to neural activity. HCN channels have also been observed in the retrotrapezoid nucleus (RTN), a respiratory control center that responds to chemical signals such as CO2. When HCN is inhibited, serotonin fails to stimulate chemoreceptors in the RTN. This illustrates a connection between HCN channels and respiratory regulation. Due to the complex nature of HCN channel regulation, as well as the complex interactions between multiple ion channels, HCN channels are fine-tuned to respond to certain thresholds and agonists. This complexity is believed to affect neural plasticity. History The HCN channel was first identified in the heart in 1976 by Noma and Irisawa and characterized by Brown, DiFrancesco and Weiss. See also Cyclic nucleotide-gated ion channel References External links Electrophysiology Ion channels Neurochemistry Integral membrane proteins
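The voltage dependence of HCN gating described above is often summarized with a Boltzmann activation curve, with the channel opening as the membrane is hyperpolarized. The following Python sketch is a generic illustration; the half-activation voltage and slope are placeholder values, not data from the article.

import math

def hcn_open_probability(v_mV, v_half_mV=-90.0, slope_mV=8.0):
    # Steady-state open probability of a hyperpolarization-activated channel:
    # approaches 1 as the membrane potential becomes more negative than v_half
    return 1.0 / (1.0 + math.exp((v_mV - v_half_mV) / slope_mV))

for v in (-130, -110, -90, -70, -50):
    print(f"{v:5d} mV -> P_open = {hcn_open_probability(v):.2f}")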
HCN channel
[ "Chemistry", "Biology" ]
1,110
[ "Biochemistry", "Neurochemistry", "Ion channels" ]
14,979,971
https://en.wikipedia.org/wiki/Quantum%20lithography
Quantum lithography is a type of photolithography which exploits non-classical properties of photons, such as quantum entanglement, in order to achieve superior performance over ordinary classical lithography. Quantum lithography is closely related to the fields of quantum imaging, quantum metrology, and quantum sensing. The effect exploits the quantum mechanical state of light called the NOON state. Quantum lithography was invented in Jonathan P. Dowling's group at JPL, and has been studied by a number of groups. Of particular importance, quantum lithography can beat the classical Rayleigh criterion for the diffraction limit. Classical photolithography has an optical imaging resolution that is limited by the wavelength of the light used. For example, in the use of photolithography to mass-produce computer chips, it is desirable to produce smaller and smaller features on the chip, which classically requires moving to smaller and smaller wavelengths (ultraviolet and x-ray), which entails much greater cost to produce the optical imaging systems at these extremely short optical wavelengths. Quantum lithography exploits quantum entanglement between specially prepared photons in the NOON state, together with special photoresists that display multi-photon absorption processes, to achieve finer resolution without the requirement of shorter wavelengths. For example, a beam of red photons, entangled 50 at a time in the NOON state, would have the same resolving power as a beam of x-ray photons. The field of quantum lithography is in its infancy, and although experimental proofs of principle have been carried out using the Hong–Ou–Mandel effect, it is considered a promising technology. References External links American Institute of Physics Introduction to Quantum Lithography New York Times Science News Quantum information science Lithography (microfabrication)
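The resolution enhancement can be illustrated with the standard N-photon interference pattern, whose fringes oscillate N times faster than the single-photon pattern, so the effective fringe spacing shrinks by a factor of N. The following Python sketch is illustrative only and is not taken from the article.

import math

def fringe(x_nm, wavelength_nm, n_photons):
    # N-photon (NOON-state) interference fringe ~ 1 + cos(N*k*x): the effective
    # fringe spacing is wavelength / (2*N) instead of wavelength / 2
    k = 2 * math.pi / wavelength_nm
    return 0.5 * (1 + math.cos(n_photons * k * x_nm))

wavelength = 700.0  # red light, in nm
for n in (1, 5, 50):
    spacing = wavelength / (2 * n)
    print(f"N = {n:3d}: fringe spacing = {spacing:.1f} nm, fringe(0) = {fringe(0.0, wavelength, n):.1f}")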
Quantum lithography
[ "Materials_science" ]
372
[ "Nanotechnology", "Microtechnology", "Lithography (microfabrication)" ]
14,983,028
https://en.wikipedia.org/wiki/Secondary%20flow
In fluid dynamics, flow can be decomposed into primary flow plus secondary flow, a relatively weaker flow pattern superimposed on the stronger primary flow pattern. The primary flow is often chosen to be an exact solution to simplified or approximated governing equations, such as potential flow around a wing or geostrophic current or wind on the rotating Earth. In that case, the secondary flow usefully spotlights the effects of complicated real-world terms neglected in those approximated equations. For instance, the consequences of viscosity are spotlighted by secondary flow in the viscous boundary layer, resolving the tea leaf paradox. As another example, if the primary flow is taken to be a balanced flow approximation with net force equated to zero, then the secondary circulation helps spotlight acceleration due to the mild imbalance of forces. A smallness assumption about secondary flow also facilitates linearization. In engineering, secondary flow also identifies an additional flow path. Examples of secondary flows Wind near ground level The basic principles of physics and the Coriolis effect define an approximate geostrophic wind or gradient wind, balanced flows that are parallel to the isobars. Measurements of wind speed and direction at heights well above ground level confirm that wind matches these approximations quite well. However, nearer the Earth's surface, the wind speed is less than predicted by the barometric pressure gradient, and the wind direction is partly across the isobars rather than parallel to them. This flow of air across the isobars is a secondary flow, a difference from the primary flow, which is parallel to the isobars. Interference by surface roughness elements such as terrain, waves, trees and buildings causes drag on the wind and prevents the air from accelerating to the speed necessary to achieve balanced flow. As a result, the wind direction near ground level is partly parallel to the isobars in the region, and partly across the isobars in the direction from higher pressure to lower pressure. As a result of the slower wind speed at the Earth's surface, in a region of low pressure the barometric pressure is usually significantly higher at the surface than would be expected, given the barometric pressure at mid altitudes, due to Bernoulli's principle. Hence, the secondary flow toward the center of a region of low pressure is also drawn upward by the significantly lower pressure at mid altitudes. This slow, widespread ascent of the air in a region of low pressure can cause widespread cloud and rain if the air is of sufficiently high relative humidity. In a region of high pressure (an anticyclone) the secondary flow includes a slow, widespread descent of air from mid altitudes toward ground level, and then outward across the isobars. This descent causes a reduction in relative humidity and explains why regions of high pressure usually experience cloud-free skies for many days. Tropical cyclones The flow around a tropical cyclone is often well approximated as parallel to circular isobars, such as in a vortex. A strong pressure gradient draws air toward the center of the cyclone, a centripetal force nearly balanced by Coriolis and centrifugal forces in gradient wind balance. The viscous secondary flow near the Earth's surface converges toward the center of the cyclone, ascending in the eyewall to satisfy mass continuity. 
As the secondary flow is drawn upward the air cools as its pressure falls, causing extremely heavy rainfall and releasing latent heat which is an important driver of the storm's energy budget. Tornadoes and dust devils Tornadoes and dust devils display localised vortex flow. Their fluid motion is similar to tropical cyclones but on a much smaller scale so that the Coriolis effect is not significant. The primary flow is circular around the vertical axis of the tornado or dust devil. As with all vortex flow, the speed of the flow is fastest at the core of the vortex. In accordance with Bernoulli's principle where the wind speed is fastest the air pressure is lowest; and where the wind speed is slowest the air pressure is highest. Consequently, near the center of the tornado or dust devil the air pressure is low. There is a pressure gradient toward the center of the vortex. This gradient, coupled with the slower speed of the air near the earth's surface, causes a secondary flow toward the center of the tornado or dust devil, rather than in a purely circular pattern. The slower speed of the air at the surface prevents the air pressure from falling as low as would normally be expected from the air pressure at greater heights. This is compatible with Bernoulli's principle. The secondary flow is toward the center of the tornado or dust devil, and is then drawn upward by the significantly lower pressure several thousands of feet above the surface in the case of a tornado, or several hundred feet in the case of a dust devil. Tornadoes can be very destructive and the secondary flow can cause debris to be swept into a central location and carried to low altitudes. Dust devils can be seen by the dust stirred up at ground level, swept up by the secondary flow and concentrated in a central location. The accumulation of dust then accompanies the secondary flow upward into the region of intense low pressure that exists outside the influence of the ground. Circular flow in a bowl or cup When water in a circular bowl or cup is moving in circular motion the water displays free-vortex flow – the water at the center of the bowl or cup spins at relatively high speed, and the water at the perimeter spins more slowly. The water is a little deeper at the perimeter and a little more shallow at the center, and the surface of the water is not flat but displays the characteristic depression toward the axis of the spinning fluid. At any elevation within the water the pressure is a little greater near the perimeter of the bowl or cup where the water is a little deeper, than near the center. The water pressure is a little greater where the water speed is a little slower, and the pressure is a little less where the speed is faster, and this is consistent with Bernoulli's principle. There is a pressure gradient from the perimeter of the bowl or cup toward the center. This pressure gradient provides the centripetal force necessary for the circular motion of each parcel of water. The pressure gradient also accounts for a secondary flow of the boundary layer in the water flowing across the floor of the bowl or cup. The slower speed of the water in the boundary layer is unable to balance the pressure gradient. The boundary layer spirals inward toward the axis of circulation of the water. On reaching the center the secondary flow is then upward toward the surface, progressively mixing with the primary flow. Near the surface there may also be a slow secondary flow outward toward the perimeter. 
The secondary flow along the floor of the bowl or cup can be seen by sprinkling heavy particles such as sugar, sand, rice or tea leaves into the water and then setting the water in circular motion by stirring with a hand or spoon. The boundary layer spirals inward and sweeps the heavier solids into a neat pile in the center of the bowl or cup. With water circulating in a bowl or cup, the primary flow is purely circular and might be expected to fling heavy particles outward to the perimeter. Instead, heavy particles can be seen to congregate in the center as a result of the secondary flow along the floor. River bends Water flowing through a bend in a river must follow curved streamlines to remain within the banks of the river. The water surface is slightly higher near the concave bank than near the convex bank. (The "concave bank" has the greater radius. The "convex bank" has the smaller radius.) As a result, at any elevation within the river, water pressure is slightly higher near the concave bank than near the convex bank. A pressure gradient results from the concave bank toward the other bank. The centripetal force necessary for the curved path of each parcel of water is provided by this pressure gradient. The primary flow around the bend approximates a free vortex – fastest speed where the radius of curvature of the stream itself is smallest and slowest speed where the radius is largest. The higher pressure near the concave (outer) bank is accompanied by slower water speed, and the lower pressure near the convex bank is accompanied by faster water speed, and all this is consistent with Bernoulli's principle. A secondary flow is produced in the boundary layer along the floor of the river bed. The boundary layer is not moving fast enough to balance the pressure gradient and so its path is partly downstream and partly across the stream from the concave bank toward the convex bank, driven by the pressure gradient. The secondary flow is then upward toward the surface where it mixes with the primary flow or moves slowly across the surface, back toward the concave bank. This motion is called helicoidal flow. On the floor of the river bed the secondary flow sweeps sand, silt and gravel across the river and deposits the solids near the convex bank, in similar fashion to sugar or tea leaves being swept toward the center of a bowl or cup as described above. This process can lead to accentuation or creation of D-shaped islands, and to meanders through the creation of cut banks and opposing point bars, which in turn may result in an oxbow lake. The convex (inner) bank of river bends tends to be shallow and made up of sand, silt and fine gravel; the concave (outer) bank tends to be steep and elevated due to heavy erosion. Turbomachinery Different definitions have been put forward for secondary flow in turbomachinery, such as "Secondary flow in broad terms means flow at right angles to intended primary flow". Secondary flows occur in the main, or primary, flowpath in turbomachinery compressors and turbines (see also the unrelated use of the term for flow in the secondary air system of a gas turbine engine). They are always present when a wall boundary layer is turned through an angle by a curved surface. They are a source of total pressure loss and limit the efficiency that can be achieved for the compressor or turbine. Modelling the flow enables blade, vane and end-wall surfaces to be shaped to reduce the losses. 
Secondary flows occur throughout the impeller in a centrifugal compressor but are less marked in axial compressors due to shorter passage lengths. Flow turning is low in axial compressors but boundary layers are thick on the annulus walls which gives significant secondary flows. Flow turning in turbine blading and vanes is high and generates strong secondary flow. Secondary flows also occur in pumps for liquids and include inlet prerotation, or intake vorticity, tip clearance flow (tip leakage), flow separation when operating away from the design condition, and secondary vorticity. The following, from Dixon, shows the secondary flow generated by flow turning in an axial compressor blade or stator passage. Consider flow with an approach velocity c1. The velocity profile will be non-uniform due to friction between the annulus wall and the fluid. The vorticity of this boundary layer is normal to the approach velocity and of magnitude ω1 = dc1/dz, where z is the distance to the wall. As the vorticity of each blade onto each other will be of opposite directions, a secondary vorticity will be generated. If the deflection angle, e, between the guide vanes is small, the magnitude of the secondary vorticity is represented as ωs = 2e (dc1/dz). This secondary flow will be the integrated effect of the distribution of secondary vorticity along the blade length. Gas turbine engines Gas turbine engines have a power-producing primary airflow passing through the compressor. They also have a substantial (25% of core flow in a Pratt & Whitney PW2000) secondary flow obtained from the primary flow and which is pumped from the compressor and used by the secondary air system. Like the secondary flow in turbomachinery this secondary flow is also a loss to the power-producing capability of the engine. Air-breathing propulsion systems Thrust-producing flow which passes through an engine's thermal cycle is called primary airflow. Using only the cycle flow was relatively short-lived, as in the turbojet engine. Airflow through a propeller or a turbomachine fan is called secondary flow and is not part of the thermal cycle. This use of secondary flow reduces losses and increases the overall efficiency of the propulsion system. The secondary flow may be many times that through the engine. Supersonic air-breathing propulsion systems During the 1960s cruising at speeds between Mach 2 and 3 was pursued for commercial and military aircraft. Concorde, North American XB-70 and Lockheed SR-71 used ejector-type supersonic nozzles which had a secondary flow obtained from the inlet upstream of the engine compressor. The secondary flow was used to purge the engine compartment, cool the engine case, cool the ejector nozzle and cushion the primary expansion. The secondary flow was ejected by the pumping action of the primary gas flow through the engine nozzle and the ram pressure in the inlet. See also Notes References Dixon, S.L. (1978), Fluid Mechanics and Thermodynamics of Turbomachinery pp 181–184, Third edition, Pergamon Press Ltd, UK External links Coupled CFD and Thermal Steady State Analysis of Steam Turbine Secondary Flow Path Fluid dynamics Meteorological phenomena Tropical cyclone meteorology
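The boundary-layer vorticity and small-deflection secondary vorticity expressions above can be evaluated numerically for an assumed velocity profile. The following Python sketch is illustrative only; the one-seventh power-law profile, the deflection angle and the numerical values are assumptions, not values taken from Dixon or the article.

import numpy as np

delta = 0.02        # annulus-wall boundary-layer thickness, m (assumed)
c1_free = 150.0     # free-stream approach velocity, m/s (assumed)
epsilon = np.radians(20.0)  # deflection angle e, radians (assumed)

z = np.linspace(1e-4, delta, 200)          # distance from the wall
c1 = c1_free * (z / delta) ** (1.0 / 7.0)  # assumed approach-velocity profile c1(z)

dc1_dz = np.gradient(c1, z)        # boundary-layer vorticity magnitude, dc1/dz
omega_s = 2.0 * epsilon * dc1_dz   # small-deflection secondary vorticity estimate

print(f"dc1/dz at z = delta/10:            {np.interp(delta/10, z, dc1_dz):.0f} 1/s")
print(f"secondary vorticity estimate there: {np.interp(delta/10, z, omega_s):.0f} 1/s")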
Secondary flow
[ "Physics", "Chemistry", "Engineering" ]
2,702
[ "Physical phenomena", "Earth phenomena", "Chemical engineering", "Meteorological phenomena", "Piping", "Fluid dynamics" ]
13,873,874
https://en.wikipedia.org/wiki/Zeta%20potential%20titration
Zeta potential titration is a titration of heterogeneous systems, for example colloids and emulsions. Solids in such systems have a very high surface area. This type of titration is used to study the zeta potential of these surfaces under different conditions. Details of the definition of zeta potential and of measuring techniques can be found in the International Standard. Iso-electric Point The iso-electric point is one property determined by such titrations. The iso-electric point is the pH value at which the zeta potential is approximately zero. At a pH near the iso-electric point (± 2 pH units), colloids are usually unstable; the particles tend to coagulate or flocculate. Such titrations use acids or bases as titration reagents. Tables of iso-electric points for different materials are available. For example, titrations of concentrated dispersions of alumina (4% v/v) and rutile (7% v/v) show that the iso-electric point of alumina is around pH 9.3, whereas for rutile it is around pH 4. Alumina is unstable in the pH range from 7 to 11. Rutile is unstable in the pH range from 2 to 6. Surfactants and Stabilization Another purpose of this titration is determination of the optimum dose of surfactant for achieving stabilization or flocculation of a heterogeneous system. Measurement In a zeta potential titration, the zeta potential is the indicator. Measurement of the zeta potential can be performed using microelectrophoresis, electrophoretic light scattering, or electroacoustic phenomena. The last method makes it possible to perform titrations in concentrated systems, with no dilution. References Further reading Kosmulski M. (2009). Surface Charging and Points of Zero Charge. CRC Press; 1st edition (Hardcover). Chemical mixtures Colloidal chemistry Condensed matter physics Soft matter Titration
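Since the iso-electric point described above is simply the pH at which the measured zeta potential crosses zero, it can be estimated from titration data by interpolation. The following Python sketch is illustrative; the pH and zeta values are made-up placeholders, not measurements from the article.

def isoelectric_point(ph_values, zeta_mV):
    # Estimate the iso-electric point by linear interpolation at the first
    # sign change of the zeta potential along the titration curve
    points = list(zip(ph_values, zeta_mV))
    for (ph1, z1), (ph2, z2) in zip(points, points[1:]):
        if z1 == 0:
            return ph1
        if z1 * z2 < 0:  # sign change between the two points
            return ph1 + (ph2 - ph1) * (0 - z1) / (z2 - z1)
    return None  # no zero crossing found

# Hypothetical titration data for an alumina-like dispersion
ph = [4, 5, 6, 7, 8, 9, 10, 11]
zeta = [45, 40, 33, 25, 15, 4, -12, -25]
print(f"Estimated iso-electric point: pH {isoelectric_point(ph, zeta):.1f}")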
Zeta potential titration
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
417
[ "Colloidal chemistry", "Titration", "Instrumental analysis", "Soft matter", "Phases of matter", "Materials science", "Colloids", "Surface science", "Chemical mixtures", "Condensed matter physics", "nan", "Matter" ]
13,878,482
https://en.wikipedia.org/wiki/Glass-to-metal%20seal
Glass-to-metal seals are a type of mechanical seal which joins glass and metal surfaces. They are very important elements in the construction of vacuum tubes, electric discharge tubes, incandescent light bulbs, glass-encapsulated semiconductor diodes, reed switches, glass windows in metal cases, and metal or ceramic packages of electronic components. Properly done, such a seal is hermetic (capable of supporting a vacuum) and can provide good electrical insulation or special optical properties (e.g. for UV lamps). To achieve such a seal, two properties must hold: the molten glass must be capable of wetting the metal, in order to form a tight bond, and the thermal expansion of the glass and metal must be closely matched so that the seal remains solid as the assembly cools. Consider, for example, a metal wire sealed into a glass bulb: the metal-glass contact can break if the coefficients of thermal expansion (CTE) are not well matched. If the CTE of the metal is larger than the CTE of the glass, the seal is likely to break on cooling: as the temperature falls, the metal wire shrinks more than the glass does, putting a strong tensile force on the glass, which finally leads to breakage. On the other hand, if the CTE of the glass is larger than the CTE of the metal wire, the seal will tighten upon cooling, since a compressive force is applied to the glass. Because of all the requirements that need to be fulfilled, and the strong need to match the CTE of both materials, only a few companies offer specialty glass for glass-metal sealing, such as SCHOTT AG and Morgan Advanced Materials. Glass-to-metal bonds Glass and metal can bond together by purely mechanical means, which usually gives weaker joints, or by chemical interaction, where the oxide layer on the metal surface forms a strong bond with the glass (typical glass is itself composed of about 73% silicon dioxide, SiO2). Acid-base reactions are the main cause of glass-metal interaction when metal oxides are present on the metal surface. After complete dissolution of the surface oxides into the glass, further progress of the interaction depends on the oxygen activity at the interface. The oxygen activity can be increased by diffusion of molecular oxygen through defects such as cracks. Also, reduction of the thermodynamically less stable components in the glass (releasing oxygen ions) can increase the oxygen activity at the interface. In other words, redox reactions are the main cause of glass-metal interaction in the absence of metal oxides on the metal surface. To achieve a vacuum-tight seal, the seal must not contain bubbles. The bubbles are most commonly created by gases escaping the metal at high temperature; degassing the metal before sealing is therefore important, especially for nickel and iron and their alloys. This is achieved by heating the metal in vacuum, or sometimes in a hydrogen atmosphere, or in some cases even in air, at temperatures above those used during the sealing process. Oxidizing the metal surface also reduces gas evolution. Most of the evolved gas is produced due to the presence of carbon impurities in the metals; these can be removed by heating in hydrogen. The glass-oxide bond is stronger than the glass-metal bond. The oxide forms a layer on the metal surface, with the proportion of oxygen changing from zero in the metal to the stoichiometry of the oxide and the glass itself. 
A too-thick oxide layer tends to be porous on the surface and mechanically weak, flaking, compromising the bond strength and creating possible leakage paths along the metal-oxide interface. Proper thickness of the oxide layer is therefore critical. Copper Metallic copper does not bond well to glass. Copper(I) oxide, however, is wetted by molten glass and partially dissolves in it, forming a strong bond. The oxide also bonds well to the underlying metal. But copper(II) oxide causes weak joints that may leak and its formation must be prevented. For bonding copper to glass, the surface needs to be properly oxidized. The oxide layer is to have the right thickness; too little oxide would not provide enough material for the glass to anchor to, too much oxide would cause the oxide layer to fail, and in both cases the joint would be weak and possibly non-hermetic. To improve the bonding to glass, the oxide layer should be borated; this is achieved by e.g. dipping the hot part into a concentrated solution of borax and then heating it again for certain time. This treatment stabilizes the oxide layer by forming a thin protective layer of sodium borate on its surface, so the oxide does not grow too thick during subsequent handling and joining. The layer should have uniform deep red to purple sheen. The boron oxide from the borated layer diffuses into glass and lowers its melting point. The oxidation occurs by oxygen diffusing through the molten borate layer and forming copper(I) oxide, while formation of copper(II) oxide is inhibited. The copper-to-glass seal should look brilliant red, almost scarlet; pink, sherry and honey colors are also acceptable. Too thin an oxide layer appears light, up to the color of metallic copper, while too thick oxide looks too dark. Oxygen-free copper has to be used if the metal comes in contact with hydrogen (e.g. in a hydrogen-filled tube or during handling in the flame). Normally, copper contains small inclusions of copper(I) oxide. Hydrogen diffuses through the metal and reacts with the oxide, reducing it to copper and yielding water. The water molecules however can not diffuse through the metal, are trapped in the location of the inclusion, and cause embrittlement. As copper(I) oxide bonds well to the glass, it is often used for combined glass-metal devices. The ductility of copper can be used for compensation of the thermal expansion mismatch in e.g. the knife-edge seals. For wire feed throughs, dumet wire – nickel-iron alloy plated with copper – is frequently used. Its maximum diameter is however limited to about 0.5 mm due to its thermal expansion. Copper can be sealed to glass without the oxide layer, but the resulting joint is less strong. Platinum Platinum has similar thermal expansion as glass and is well-wetted with molten glass. It however does not form oxides, so its bond strength is lower. The seal has metallic color and limited strength. Gold Like platinum, gold does not form oxides that could assist in bonding. Glass-gold bonds are therefore metallic in color and weak. Gold tends to be used for glass-metal seals only rarely. Special compositions of soda-lime glasses that match the thermal expansion of gold, containing tungsten trioxide and oxides of lanthanum, aluminum and zirconium, exist. Silver Silver forms a thin layer of silver oxide on its surface. This layer dissolves in molten glass and forms silver silicate, facilitating a strong bond. Nickel Nickel can bond with glass either as a metal, or via the nickel(II) oxide layer. 
The metal joint has metallic color and inferior strength. The oxide-layer joint has characteristic green-grey color. Nickel plating can be used in similar way as copper plating, to facilitate better bonding with the underlying metal. Iron Iron is only rarely used for feedthroughs, but frequently gets coated with vitreous enamel, where the interface is also a glass-metal bond. The bond strength is also governed by the character of the oxide layer on its surface. A presence of cobalt in the glass leads to a chemical reaction between the metallic iron and cobalt oxide, yielding iron oxide dissolved in glass and cobalt alloying with the iron and forming dendrites, growing into the glass and improving the bond strength. Iron can not be directly sealed to lead glass, as it reacts with the lead oxide and reduces it to metallic lead. For sealing to lead glasses, it has to be copper-plated or an intermediate lead-free glass has to be used. Iron is prone to creating gas bubbles in glass due to the residual carbon impurities; these can be removed by heating in wet hydrogen. Plating with copper, nickel or chromium is also advised. Chromium Chromium is a highly reactive metal present in many iron alloys. Chromium may react with glass, reducing the silicon and forming crystals of chromium silicide growing into the glass and anchoring together the metal and glass, improving the bond strength. Kovar Kovar, an iron-nickel-cobalt alloy, has low thermal expansion similar to high-borosilicate glass and is frequently used for glass-metal seals especially for the application in x-ray tubes or glass lasers. It can bond to glass via the intermediate oxide layer of nickel(II) oxide and cobalt(II) oxide; the proportion of iron oxide is low due to its reduction with cobalt. The bond strength is highly dependent on the oxide layer thickness and character. The presence of cobalt makes the oxide layer easier to melt and dissolve in the molten glass. A grey, grey-blue or grey-brown color indicates a good seal. A metallic color indicates lack of oxide, while black color indicates overly oxidized metal, in both cases leading to a weak joint. Molybdenum Molybdenum bonds to the glass via the intermediate layer of molybdenum(IV) oxide. Due to its low thermal expansion coefficient, matched to glass, molybdenum, like tungsten, is often used for glass-metal bonds especially in conjunction with aluminium-silicate glass. Its high electrical conductivity makes it superior over nickel-cobalt-iron alloys. It is favored by the lighting industry as feedthroughs for lightbulbs and other devices. Molybdenum oxidizes much faster than tungsten and quickly develops a thick oxide layer that does not adhere well, its oxidation should be therefore limited to just yellowish or at most blue-green color. The oxide is volatile and evaporates as a white smoke above 700 °C; excess oxide can be removed by heating in inert gas (argon) at 1000 °C. Molybdenum strips are used instead of wires where higher currents (and higher cross-sections of the conductors) are needed. Tungsten Tungsten bonds to the glass via the intermediate layer of tungsten(VI) oxide. A properly formed bond has characteristic coppery/orange/brown-yellow color in lithium-free glasses; in lithium-containing glasses the bond is blue due to formation of lithium tungstate. Due to its low thermal expansion coefficient, matched to glass, tungsten is frequently used for glass-metal bonds. 
Tungsten forms satisfying bonds with glasses with similar thermal expansion coefficient such as high-borosilicate glass. The surface of both the metal and glass should be smooth, without scratches. Tungsten has the lowest expansion coefficient of metals and the highest melting point. Stainless steel 304 Stainless steel forms bonds with glass via an intermediate layer of chromium(III) oxide and iron(III) oxide. Further reactions of chromium, forming chromium silicide dendrites, are possible. The thermal expansion coefficient of steel is however fairly different from the glass; like with copper, this can be alleviated by using knife-edge (Houskeeper) seals. Zirconium Zirconium wire can be sealed to glass with just little treatment – rubbing with abrasive paper and short heating in flame. Zirconium is used in applications demanding chemical resistance or lack of magnetism. Titanium Titanium, like zirconium, can be sealed to some glasses with just little treatment. Indium Indium and some of its alloys can be used as a solder capable of wetting glass, ceramics, and metals and joining them together. Indium has low melting point and is very soft; the softness allows it to deform plastically and absorb the stresses from thermal expansion mismatches. Due to its very low vapor pressure, indium finds use in glass-metal seals used in vacuum technology and cryogenic applications. Gallium Gallium is a soft metal with melting point at 30 °C. It readily wets glasses and most metals and can be used for seals that can be assembled/disassembled by just slight heating. It can be used as a liquid seal up to high temperatures or even at lower temperatures when alloyed with other metals (e.g. as galinstan). Mercury Mercury is a metal liquid at normal temperature and does not wet glass. It was used as the earliest glass-to-metal seal and is still in use for liquid seals for e.g. rotary shafts. Mercury seal The first technological use of a glass-to-metal seal was the encapsulation of the vacuum in the barometer by Torricelli. The liquid mercury wets the glass and thus provides for a vacuum tight seal. Liquid mercury was also used to seal the metal leads of early mercury arc lamps into the fused silica bulbs. A less toxic and more expensive alternative to mercury is gallium. Mercury and gallium seals can be used for vacuum-sealing rotary shafts. Platinum wire seal The next step was to use thin platinum wire. Platinum is easily wetted by glass and has a similar coefficient of thermal expansion as typical soda-lime and lead glass. It is also easy to work with because of its non-oxidibility and high melting point. This type of seal was used in scientific equipment throughout the 19th century and also in the early incandescent lamps and radio tubes. Dumet wire seal In 1911 the Dumet-wire seal was invented which is still the common practice to seal copper leads through soda-lime or lead glass. If copper is properly oxidised before it is wetted by molten glass a vacuum tight seal of good mechanical strength can be obtained. After copper is oxidized, it is often dipped in a borax solution, as borating the copper helps prevents over-oxidation when reintroduced to a flame. Simple copper wire is not usable because its CTE is much higher than that of the glass. Thus, on cooling a strong tensile force acts on the glass-to-metal interface and it breaks. Glass and glass-to-metal interfaces are especially sensitive to tensile stress. 
Dumet-wire is a copper-clad wire (25% of copper by weight) with a core of nickel-iron alloy 42 (42% of nickel by weight). The low CTE of the core makes it possible to produce a wire with a radial CTE lower than the linear CTE of the glass, so that the glass-to-metal interface is under a low compression stress. It is not possible to adjust the axial thermal expansion of the wire as well. Because of the much higher mechanical strength of the nickel-iron core compared to the copper, the axial CTE of the wire is about the same as that of the core. Therefore, a shear stress builds up which is limited to a safe value by the low tensile strength of the copper. This is also the reason why Dumet is only useful for wire diameters lower than about 0.5 mm. In a typical Dumet seal through the base of a vacuum tube a short piece of Dumet-wire is butt welded to a nickel wire at one end and a copper wire at the other end. When the base is pressed from lead glass, the Dumet-wire and a short part of the nickel and the copper wire are enclosed in the glass. Then the nickel wire and the glass around the Dumet-wire are heated by a gas flame and the glass seals to the Dumet-wire. The nickel and copper do not seal vacuum-tight to the glass but are mechanically supported. The butt welding also avoids problems with gas leakages at the interface between the core wire and the copper. Copper tube seal Another possibility to avoid a strong tensile stress when sealing copper through glass is the use of a thin-walled copper tube instead of a solid wire. Here a shear stress builds up in the glass-to-metal interface which is limited by the low tensile strength of the copper combined with a low tensile stress. The copper tube is insensitive to high electric current compared to a Dumet seal because on heating the tensile stress converts into a compression stress which is again limited by the tensile strength of the copper. Also, it is possible to lead an additional solid copper wire through the copper tube. In a later variant, only a short section of the copper tube has a thin wall and the copper tube is hindered from shrinking on cooling by a ceramic tube inside the copper tube. If large parts of copper are to be fitted to glass, like the water-cooled copper anode of a high-power radio transmitter tube or an x-ray tube, historically the Houskeeper knife-edge seal is used. Here the end of a copper tube is machined to a sharp knife edge, invented by O. Kruh in 1917. In the method described by W.G. Houskeeper the outside or the inside of the copper tube right to the knife edge is wetted with glass and connected to the glass tube. In later descriptions the knife edge is just wetted several millimeters deep with glass, usually deeper on the inside, and then connected to the glass tube. If copper is sealed to glass, it is an advantage to obtain a thin, bright red, Cu2O-containing layer between copper and glass. This is done by borating. Following W.J. Scott, a copper-plated tungsten wire is immersed for about 30 s in chromic acid and then washed thoroughly in running tap water. Then it is dipped into a saturated solution of borax and heated to bright red heat in the oxidizing part of a gas flame, possibly followed by quenching in water and drying. Another method is to oxidize the copper slightly in a gas flame and then to dip it into borax solution and let it dry. The surface of the borated copper is black when hot and turns to dark wine red on cooling. 
It is also possible to make a bright seal between copper and glass where it is possible to see the blank copper surface through the glass, but this gives less adherence than the seal with the red Cu2O-containing layer. If glass is melted on copper in a reducing hydrogen atmosphere the seal is extremely weak. If copper is to be heated in a hydrogen-containing atmosphere, e.g. a gas flame, it needs to be oxygen-free to prevent hydrogen embrittlement. Copper which is meant to be used as an electrical conductor is not necessarily oxygen-free and contains particles of copper(I) oxide which react with hydrogen that diffuses into the copper, forming water which cannot diffuse out of the copper and thus causes embrittlement. The copper usually used in vacuum applications is of the very pure OFHC (oxygen-free high-conductivity) quality, which is both free of oxygen and free of deoxidising additives which might evaporate at high temperature in vacuum. Copper disc seal In the copper disc seal, as proposed by W.G. Houskeeper, the end of a glass tube is closed by a round copper disc. An additional ring of glass on the opposite side of the disc increases the possible thickness of the disc to more than 0.3 mm. Best mechanical strength is obtained if both sides of the disc are fused to the same type of glass tube and both tubes are under vacuum. The disc seal is of special practical interest because it is a simple method to make a seal to low expansion borosilicate glass without the need of special tools or materials. The keys to success are proper borating, heating of the joint to a temperature as close to the melting point of the copper as possible, and slowing down the cooling, at least by packing the assembly into glass wool while it is still red hot. Matched seal In a matched seal the thermal expansion of metal and glass is matched. Copper-plated tungsten wire can be used to seal through borosilicate glass with a low coefficient of thermal expansion which is matched by tungsten. The tungsten is electrolytically copper plated and heated in a hydrogen atmosphere to fill cracks in the tungsten and to get a proper surface to easily seal to glass. The borosilicate glass of usual laboratory glassware has a lower coefficient of thermal expansion than tungsten, thus it is necessary to use an intermediate sealing glass to get a stress-free seal. There are combinations of glass and iron-nickel-cobalt alloys (Kovar) where even the non-linearity of the thermal expansion is matched. These alloys can be directly sealed to glass, but then the oxidation is critical. Also, their low electrical conductivity is a disadvantage. Thus, they are often gold plated. It is also possible to use silver plating, but then an additional gold layer is necessary as an oxygen diffusion barrier to prevent the formation of iron oxide. While there are Fe-Ni alloys which match the thermal expansion of tungsten at room temperature, they are not useful to seal to glass because their thermal expansion increases too strongly at higher temperatures. Reed switches use a matched seal between an iron-nickel alloy (NiFe 52) and a matched glass. The glass of reed switches is usually green due to its iron content, because the sealing of reed switches is done by heating with infrared radiation and this glass shows a high absorption in the near infrared. The electrical connections of high-pressure sodium vapour lamps, the light yellow lamps for street lighting, are made of niobium alloyed with 1% of zirconium. 
Historically, some television cathode ray tubes were made by using ferric steel for the funnel and glass matched in expansion to ferric steel. The steel plate used had a diffusion layer enriched with chromium at the surface made by heating the steel together with chromium oxide in a HCl-containing atmosphere. In contrast to copper, pure iron does not bond strongly to silicate glass. Also, technical iron contains some carbon which forms bubbles of CO when it is sealed to glass under oxidizing conditions. Both are a major source of problems for the technical enamel coating of steel and make direct seals between iron and glass unsuitable for high vacuum applications. The oxide layer formed on chromium-containing steel can seal vacuum tight to glass and the chromium strongly reacts with carbon. Silver-plated iron was used in early microwave tubes. It is possible to make matched seals between copper or austenitic steel and glass, but silicate glass with that high thermal expansion is especially fragile and has a low chemical durability. Molybdenum foil seal Another widely used method to seal through glass with low coefficient of thermal expansion is the use of strips of thin molybdenum foil. This can be done with matched coefficients of thermal expansion. Then the edges of the strip also have to be knife sharp. The disadvantage here is that the tip of the edge which is a local point of high tensile stress reaches through the wall of the glass container. This can lead to low gas leakages. In the tube to tube knife edge seal the edge is either outside, inside, or buried into the glass wall. Compression seal Another possibility of seal construction is the compression seal. This type of glass-to-metal seal can be used to feed through the wall of a metal container. Here the wire is usually matched to the glass which is inside of the bore of a strong metal part with higher coefficient of thermal expansion. Compression seals can withstand extremely high pressures and physical stress such as mechanical and thermal shock. Silver chloride Silver chloride, which melts at 457 C bonds to glass, metals and other materials and has been used for vacuum seals. Even if it can be a convenient way to seal metal into glass it will not be a true glass to metal seal but rather a combination of a glass to silver chloride and a silver chloride to metal bond; an inorganic alternative to wax or glue bonds. Design aspects Also the mechanical design of a glass-to-metal seal has an important influence on the reliability of the seal. In practical glass-to-metal seals cracks usually start at the edge of the interface between glass and metal either inside or outside the glass container. If the metal and the surrounding glass are symmetric the crack propagates in an angle away from the axis. So, if the glass envelope of the metal wire extends far enough from the wall of the container the crack will not go through the wall of the container but it will reach the surface on the same side where it started and the seal will not leak despite the crack. Another important aspect is the wetting of the metal by the glass. If the thermal expansion of the metal is higher than the thermal expansion of the glass like with the Houskeeper seal, a high contact angle (bad wetting) means that there is a high tensile stress in the surface of the glass near the metal. Such seals usually break inside the glass and leave a thin cover of glass on the metal. 
If the contact angle is low (good wetting) the surface of the glass is everywhere under compression stress, like an enamel coating. Ordinary soda-lime glass does not flow on copper at temperatures below the melting point of the copper and, thus, does not give a low contact angle. The solution is to cover the copper with a solder glass which has a low melting point and does flow on copper, and then to press the soft soda-lime glass onto the copper. The solder glass must have a coefficient of thermal expansion which is equal to or a little lower than that of the soda-lime glass. Classically, high-lead-containing glasses are used, but it is also possible to substitute these with lead-free multi-component glasses. See also Hermetic seal Notes References External links Glass-To-Metal Hermetic Sealing Seals (mechanical) Industrial processes Glass applications Glass engineering and science Glass compositions
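The importance of CTE matching discussed throughout the article can be quantified with a first-order estimate of the thermal mismatch strain and the resulting stress in the glass. The following Python sketch is a rough illustration under simplifying assumptions (uniform biaxial stress, no stress relaxation above the glass transition); the material values are typical ballpark figures, assumed for the example rather than taken from the article.

def mismatch_stress_MPa(alpha_metal, alpha_glass, delta_T_K, E_glass_GPa=70.0, poisson=0.22):
    # First-order biaxial stress induced in the glass by a CTE mismatch on
    # cooling through delta_T (positive = tension in the glass)
    mismatch_strain = (alpha_metal - alpha_glass) * delta_T_K
    return mismatch_strain * E_glass_GPa * 1e3 / (1.0 - poisson)  # MPa

# Assumed CTE values in 1/K (illustrative, not from the article)
kovar, tungsten, copper = 5.5e-6, 4.5e-6, 17e-6
sealing_glass = 5.0e-6  # a Kovar-type sealing borosilicate (assumed)
for name, alpha in [("Kovar", kovar), ("tungsten", tungsten), ("copper", copper)]:
    sigma = mismatch_stress_MPa(alpha, sealing_glass, delta_T_K=400)
    print(f"{name:9s}: ~{sigma:6.0f} MPa in the glass after a 400 K cool-down")

The large value obtained for unmatched copper illustrates why the ductile knife-edge and tube seals described above are needed when copper is used.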
Glass-to-metal seal
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
5,334
[ "Seals (mechanical)", "Glass chemistry", "Glass engineering and science", "Glass compositions", "Materials science", "Materials", "Matter" ]
13,883,867
https://en.wikipedia.org/wiki/Lithium%20burning
Lithium burning is a nucleosynthetic process in which lithium is depleted in a star. Lithium is generally present in brown dwarfs and not in older low-mass stars. Stars, which by definition must achieve the high temperature (2.5 × 10⁶ K) necessary for fusing hydrogen, rapidly deplete their lithium. Lithium-7 Burning of the most abundant isotope of lithium, lithium-7, occurs by a collision of lithium-7 and a proton producing beryllium-8, which promptly decays into two helium-4 nuclei. The temperature necessary for this reaction is just below the temperature necessary for hydrogen fusion. Convection in low-mass stars ensures that lithium in the whole volume of the star is depleted. Therefore, the presence of the lithium line in a candidate brown dwarf's spectrum is a strong indicator that it is indeed substellar. Lithium-6 From a study of lithium abundances in 53 T Tauri stars, it has been found that lithium depletion varies strongly with size, suggesting that lithium burning by the P-P chain, during the last highly convective and unstable stages of the later pre–main-sequence phase of the Hayashi contraction, may be one of the main sources of energy for T Tauri stars. Rapid rotation tends to improve mixing and increase the transport of lithium into deeper layers where it is destroyed. T Tauri stars generally increase their rotation rates as they age, through contraction and spin-up, as they conserve angular momentum. This causes an increased rate of lithium loss with age. Lithium burning will also increase with higher temperatures and mass, and will last for at most a little over 100 million years. The P-P chain for lithium burning is as follows:
¹H + ⁶Li → ⁷Be
⁷Be + e⁻ → ⁷Li + νe
¹H + ⁷Li → ⁸Be
⁸Be → 2 ⁴He + energy
It will not occur in stars less than sixty times the mass of Jupiter. In this way, the rate of lithium depletion can be used to calculate the age of the star. Lithium test The use of lithium to distinguish candidate brown dwarfs from low-mass stars is commonly referred to as the lithium test. Heavier stars like the Sun can retain lithium in their outer atmospheres, which never get hot enough for lithium depletion, but those are distinguishable from brown dwarfs by their size. Brown dwarfs at the high end of their mass range (60–75 MJ) can be hot enough to deplete their lithium when they are young. Dwarfs of mass greater than 65 MJ can burn off their lithium by the time they are half a billion years old; thus, this test is not perfect. See also Cosmological lithium problem Dilithium Halo nucleus Isotopes of lithium Lithium References Nuclear fusion Lithium
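The lithium-7 burning step described above (a proton colliding with lithium-7 to give beryllium-8, which decays to two helium-4 nuclei) can be sanity-checked for conservation of charge and mass number. The following Python sketch is illustrative only; it simply encodes each species as a (Z, A) pair and verifies the bookkeeping, ignoring energetics and leptons.

# Each species encoded as (charge Z, mass number A); photons and neutrinos omitted
SPECIES = {"p": (1, 1), "7Li": (3, 7), "8Be": (4, 8), "4He": (2, 4)}

def totals(names):
    z = sum(SPECIES[n][0] for n in names)
    a = sum(SPECIES[n][1] for n in names)
    return z, a

# 7Li + p -> 8Be, followed by 8Be -> 2 4He, as described in the article
steps = [(["7Li", "p"], ["8Be"]), (["8Be"], ["4He", "4He"])]
for reactants, products in steps:
    assert totals(reactants) == totals(products), (reactants, products)
    print(" + ".join(reactants), "->", " + ".join(products), "conserves (Z, A) =", totals(reactants))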
Lithium burning
[ "Physics", "Chemistry" ]
665
[ "Nuclear fusion", "Nuclear physics" ]
13,884,766
https://en.wikipedia.org/wiki/Integrated%20master%20plan
In the United States Department of Defense, the Integrated Master Plan (IMP) and the Integrated Master Schedule (IMS) are important program management tools that provide significant assistance in the planning and scheduling of work efforts in large and complex materiel acquisitions. The IMP is an event-driven plan that documents the significant accomplishments necessary to complete the work and ties each accomplishment to a key program event. The IMP is expanded to a time-based IMS to produce a networked and multi-layered schedule showing all detailed tasks required to accomplish the work effort contained in the IMP. The IMS flows directly from the IMP and supplements it with additional levels of detail; both then form the foundations to implement an Earned Value Management System. The IMP is a bilateral agreement between the Government and a contractor on what defines the "event-driven" program. The IMP documents the key events, accomplishments, and the evaluation "criteria" in the development, production and/or modification of a military system; moreover, the IMS provides sequential events and key decision points (generally meetings) to assess program progress. Usually the IMP is a contractual document. Supporting the IMP is the IMS, which is made up of "tasks" depicting the work effort needed to complete the "criteria". It is a detailed time-driven plan for program execution that helps to ensure on-time delivery dates are achieved, and that tracking and status tools are used during program execution. These tools must show progress, interrelationships and dependencies. In civic planning or urban planning, the term Integrated Master Plan is used at the levels of city development, county, and state or province to refer to a document integrating diverse aspects of a public works project. Purpose and Objectives The primary purpose of the IMP (and the supporting detailed schedules of the IMS) is their use by the U.S. Government and Contractor acquisition team as the day-to-day tools for planning, executing, and tracking program technical, schedule, and cost status, including risk mitigation efforts. The IMP provides a better structure than either the Work Breakdown Structure (WBS) or Organizational Breakdown Structure (OBS) for measuring actual integrated master schedule (IMS) progress. The primary objective of the IMP is a single plan that establishes the program or project fundamentals. It provides a hierarchical, event-based plan that contains: events; significant accomplishments; and entry and exit criteria; however, it does not include any dates or durations. Using the IMP provides sufficient definition for explaining program progress and completion tracking, as well as providing effective communication of the program/project content and the "What and How" of the program. Rationale The IMP is a collection of milestones (called "events") that form the process architecture of the program. This means the sequence of events must always result in a deliverable product or service. 
While delivering products or services is relatively straightforward in some instances (i.e., list the tasks to be done, arrange them in the proper sequence, and execute to this "plan"), in other cases, problems often arise: (i) the description of "complete" is often missing for intermediate activities; (ii) program partners, integration activities, and subcontractors all have unknown or possibly unknowable impacts on the program; and (iii) as products or services are delivered the maturity of the program changes (e.g., quality and functionality expectations, as well as other attributes); the maturity provided by defining "complete" serves as an insurance policy against problems encountered later in the program. Often, it's easier to define the IMP by stating what it is not. The IMP is NOT BASED on calendar dates, and therefore it is not schedule oriented; each event is completed when its supporting accomplishments are completed, and this completion is evidenced by the satisfaction of the criteria supporting each of the accomplishments. Furthermore, many of the IMP events are fixed by customer-defined milestones (e.g., Preliminary or Critical Design Review, Production Delivery, etc.) while intermediate events are defined by the Supplier (e.g., integration and test, software build releases, Test Readiness Review, etc.). The critical IMP attribute is its focus on events, when compared to effort or task focused planning. The event focus asks and answers the question "what does done look like?" rather than "what work has been done?" Certainly work must be done to complete a task, but a focus solely on the work hides the more important metric of "are we meeting our commitments?" While meeting commitments is critical, it's important to first define the criteria used for judging if the commitments are being met. This is where Significant Accomplishments (SA) and their Accomplishment Criteria (AC) become important. It is important to meet commitments, but recognizing when the commitment has been met is even more important. Attributes and Characteristics The IMP provides Program Traceability by expanding and complying with the program's Statement of Objectives (SOO), Technical Performance Requirements (TPRs), the Contract Work Breakdown Structure (CWBS), and the Contract Statement of Work (CSOW)—all of which are based on the Customer's WBS to form the basis of the IMS and all cost reporting. The IMP implements a measurable and trackable program structure to accomplish integrated product development, integrate the functional program activities, and incorporate functional, lower-level and subcontractor IMPs. The IMP provides a framework for independent evaluation of Program Maturity by allowing insight into the overall effort with a level-of-detail that is consistent with levied risk and complexity metrics. It uses the methodology of decomposing events into a logical series of accomplishments having measurable criteria to demonstrate the completion and/or quality of accomplishments. Requirements Flowdown A Government customer tasks a Supplier to prepare and implement an IMP that is linked with the IMS and integrated with the EVMS. The IMP lists the contract requirements documents (e.g., Systems Requirements Document and Technical Requirements Document (i.e., the system specification or similar document)) as well as the IMP events corresponding to development and/or production activities required by the contract. 
The IMP should include significant accomplishments encompassing all steps necessary to satisfy all contract objectives and requirements, manage all significant risks, and facilitate Government insight for each event. Significant accomplishments shall be networked to show their logical relationships and that they flow logically from one to another. The IMP, IMS, and EVMS products will usually include the prime contractor, subcontractor, and major vendor activities and products. Evaluation of an IMS When evaluating a proposed IMS, the user should focus on realistic task durations, predecessor/successor relationships, and identification of critical path tasks with viable risk mitigation and contingency plans. An IMS summarized at too high a level may result in obscuring critical execution elements, and contributing to failure of the EVMS to report progress. A high-level IMS may fail to show related risk management approaches being used, which can result in long duration tasks and artificial linkages masking the true critical path. In general, the IMP is a top-down planning tool and the IMS as the bottom-up execution tool. The IMS is a scheduling tool for management control of program progression, not for cost collection purposes. An IMS would seek general consistency and a standardized approach to project planning, scheduling and analysis. It may use guides such as the PASEG Generally Accepted Schedule Principles (GASP) as guidance to improve execution and enable EVMS. Relationship to other Documents The IMP/IMS are related to the product-based Work Breakdown Structure (WBS) as defined in MIL-STD-881, by giving a second type of view on the effort, for different audiences or to provide a combination which gives better overall understanding. Linkage between the IMP/IMS and WBS is done by referencing the WBS numbering whenever the PE (Program Event), SA (Significant Accomplishment), or AC (Accomplishment Criteria) involves a deliverable product. Reporting Formats The IMP is often called out as a contract data deliverable on United States Department of Defense materiel acquisitions, as well as other U.S. Government procurements. Formats for these deliverables are covered in Data Item Descriptions (DIDs) that define the data content, format, and data usages. Recently, the DoD cancelled the DID (DI-MISC-81183A) that jointly addressed both the IMP and the IMS. The replacement documents include DI-MGMT-81650 (Integrated Master Schedule), DI-MGMT-81334A (Contract Work Breakdown Structure) and DI-MGMT-81466 (Contract Performance Report). In addition DFARS 252.242–7001 and 252.242–7002 provide guidance for integrating IMP/IMS with Earned Value Management. References Military of the United States standards Schedule (project management) Systems engineering Military terminology of the United States Procurement Urban planning
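As an illustration of the event–accomplishment–criteria hierarchy described above, the sketch below models a fragment of a purely hypothetical IMP in Python. All names and entries are invented for illustration, and, in keeping with the IMP's event-driven character, the structure deliberately carries no dates or durations.

```python
from dataclasses import dataclass, field

@dataclass
class AccomplishmentCriterion:
    """Objective evidence that a significant accomplishment is complete."""
    description: str
    satisfied: bool = False

@dataclass
class SignificantAccomplishment:
    description: str
    criteria: list[AccomplishmentCriterion] = field(default_factory=list)

    def complete(self) -> bool:
        return all(c.satisfied for c in self.criteria)

@dataclass
class ProgramEvent:
    """A key program event; note the deliberate absence of dates or durations."""
    name: str
    accomplishments: list[SignificantAccomplishment] = field(default_factory=list)

    def complete(self) -> bool:
        return all(sa.complete() for sa in self.accomplishments)

# Hypothetical fragment of an IMP
pdr = ProgramEvent(
    name="Preliminary Design Review",
    accomplishments=[
        SignificantAccomplishment(
            description="Preliminary design documented",
            criteria=[
                AccomplishmentCriterion("Design description released"),
                AccomplishmentCriterion("Interface definitions baselined"),
            ],
        )
    ],
)
print(pdr.complete())  # False until every criterion is satisfied
```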
Integrated master plan
[ "Physics", "Engineering" ]
1,832
[ "Systems engineering", "Physical quantities", "Time", "Urban planning", "Schedule (project management)", "Spacetime", "Architecture" ]
13,885,985
https://en.wikipedia.org/wiki/Computron%20tube
The Computron was an electron tube designed to perform the parallel addition and multiplication of digital numbers. It was conceived by Richard L. Snyder, Jr., Jan A. Rajchman, Paul Rudnick and the digital computer group at the laboratories of the Radio Corporation of America under the direction of Vladimir Zworykin. Development began in 1941 under contract OEM-sr-591 to Division 7 of the National Defense Research Committee of the United States Office of Research and Development. The numerical function of the Computron was to solve the equation where A, B, C, and D are 14 bit inputs and S is a 28 bit output. This function was key to the RCA attempt to produce a non-analog computer based fire-control system for use in artillery aiming during WWII. A simple way to describe the physically complex Computron is to begin with a cathode ray tube structure in the form of a right-circular cylinder with a central vertical cathode structure. The cylinder is composed of 14 discrete planes, each plane having 14 individual radial outward projecting beams. Each of the 196 individual beams is steered by multiple deflection plates toward its two targets. Some deflection plates are connected to circuitry external to the Computron and are the data inputs. The balance of the plates are connected to internal targets and are the partial sums and products from other stages within the tube. Some of the targets are connected to circuitry outside the tube and represent the result. The electronic function of the Computron design incorporated steered, rather than gated, multiple electron beams. Additionally, the Computron was based on the ability of a secondary electron emission target, under electron bombardment, to assume the potential of the nearest collector electrode. The Additron Tube design by Josef Kates gated electron beams of a fixed trajectory with several control grids which either passed or blocked a current. The Computron was a complex cathode ray tube while the Additron was a triode with multiple grids and targets. A subsection of the Computron was prototyped and tested and the concept validated but the building of an entire device was never attempted. A United States Patent was filed 30 July 1943 and granted 22 July 1947 for the Computron. Modern implications The Computron design was an early attempt to produce not only a vacuum tube integrated circuit for both size and reliability (lifetime) issues, but to minimize external electrical connections between active elements. The goal of integration is not merely to reduce external signal connections into and out of a package by including multiple active devices in one package, as in the Loewe 3NF tube. It is to merge the functions of the active devices for a technical synergy. A modern example would be the multiple-emitter transistor of transistor–transistor logic integrated circuits Another modern construct anticipated by the Computron is the barrel shifter circuit which is used in many numeric computation style microprocessors. Damning praise The Computron was an idea born of the necessity of war research. It was to be a key element of the electronic digital computer which had yet to be built. But the project was begun to increase the accuracy of artillery in battle, not to advance the state of the embryonic electronic computer. Its fate was well described in a letter to Dr. Paul E. 
Klopsteg, Head of NDRC Division 17, dated 6 February 1943 which concludes: ...As I said above, our entire Division is exceedingly reluctant to see a development which is scientifically so beautiful and so promising dropped at this point, though cold reason tells us that we cannot justify the expenditure of additional Government funds on the basis of Fire Control at this time. Sincerely yours, Harold L. Hazen Chief, Division 7 Patents References Early computers Digital electronics Computer arithmetic Vacuum tubes
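The equation the article refers to is not reproduced in the text above; a function of the form S = A·B + C + D is one plausible reading that is consistent with the stated word sizes (the product of two 14-bit numbers needs 28 bits, and adding two more 14-bit terms still fits). The short check below verifies only that arithmetic consistency and should not be read as a statement of the Computron's actual specification.

```python
# Sanity check of the presumed function S = A*B + C + D for 14-bit unsigned
# inputs: does the largest possible result still fit in a 28-bit word?
MAX14 = (1 << 14) - 1          # largest 14-bit unsigned value (16383)

s_max = MAX14 * MAX14 + MAX14 + MAX14
print(f"largest S = {s_max}")
print(f"fits in 28 bits: {s_max < (1 << 28)}")   # True: s_max == 2**28 - 1
print(f"bit length of S: {s_max.bit_length()}")  # 28
```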
Computron tube
[ "Physics", "Mathematics", "Engineering" ]
789
[ "Digital electronics", "Vacuum tubes", "Vacuum", "Computer arithmetic", "Electronic engineering", "Arithmetic", "Matter" ]
3,227,838
https://en.wikipedia.org/wiki/Dowd%E2%80%93Beckwith%20ring-expansion%20reaction
The Dowd–Beckwith ring-expansion reaction is an organic reaction in which a cyclic carbonyl (typically a β-keto ester) is expanded by up to 4 carbons in a free radical ring expansion reaction through an α-haloalkyl substituent. The radical initiator system is based on azobisisobutyronitrile and tributyltin hydride. The cyclic β-keto ester can be obtained through a Dieckmann condensation. The original reaction consisted of a nucleophilic aliphatic substitution of the enolate of ethyl cyclohexanone-2-carboxylate with 1,4-diiodobutane and sodium hydride followed by ring expansion to ethyl cyclodecanone-6-carboxylate. A side-reaction is organic reduction of the iodoalkane. Reaction mechanism The reaction mechanism involves a bicyclic intermediate. The reaction is initiated by thermal decomposition of AIBN. The resulting radicals abstract hydrogen from tributyltin hydride to give a tributyltin radical, which in turn abstracts the halogen atom to form an alkyl radical. This radical attacks the carbonyl group to form an intermediate bicyclic ketyl. This intermediate then rearranges with ring expansion to a new carbon radical species, which abstracts a hydrogen atom from tributyltin hydride, propagating the radical chain. Scope A side reaction accompanying this ring expansion is organic reduction of the halo alkane to a saturated alkyl group. One study shows that the success depends critically on the accessibility of the carbonyl group. Deuterium experiments also show the presence of a 1,5 hydride shift. The reaction of the alkyl radical with the ester carbonyl group is also a possibility but has an unfavorable activation energy. References A new tributyltin hydride-based rearrangement of bromomethyl β-keto esters. A synthetically useful ring expansion to γ-keto esters Paul Dowd, Soo Chang Choi; J. Am. Chem. Soc.; 1987; 109(11); 3493–3494. Free radical ring expansion by three and four carbons Paul Dowd, Soo Chang Choi; J. Am. Chem. Soc.; 1987; 109(21); 6548–6549. Rearrangement of suitably constituted aryl, alkyl, or vinyl radicals by acyl or cyano group migration Athelstan L. J. Beckwith, D. M. O'Shea, Steven W. Westwood; J. Am. Chem. Soc.; 1988; 110(8); 2565–2575. Three-Carbon Dowd–Beckwith Ring Expansion Reaction versus Intramolecular 1,5-Hydrogen Transfer Reaction: A Theoretical Study Diego Ardura and Tomás L. Sordo J. Org. Chem.; 2005; 70(23) pp. 9417–9423; (Article) Rearrangement reactions Name reactions Ring expansion reactions
Dowd–Beckwith ring-expansion reaction
[ "Chemistry" ]
666
[ "Name reactions", "Ring expansion reactions", "Rearrangement reactions", "Organic reactions" ]
3,228,801
https://en.wikipedia.org/wiki/Optoelectric%20nuclear%20battery
An optoelectric nuclear battery (also radiophotovoltaic device, radioluminescent nuclear battery or radioisotope photovoltaic generator) is a type of nuclear battery in which nuclear energy is converted into light, which is then used to generate electrical energy. This is accomplished by letting the ionizing radiation emitted by the radioactive isotopes hit a luminescent material (scintillator or phosphor), which in turn emits photons that generate electricity upon striking a photovoltaic cell. The technology was developed by researchers of the Kurchatov Institute in Moscow. Description A beta emitter such as technetium-99 or strontium-90 is suspended in a gas or liquid containing luminescent gas molecules of the excimer type, constituting a "dust plasma". This permits a nearly lossless emission of beta electrons from the emitting dust particles. The electrons then excite the gases whose excimer line is selected for the conversion of the radioactivity into a surrounding photovoltaic layer such that a theoretical lightweight, low-pressure, high-efficiency battery can be realized. (In practice, existing designs are heavy and involve high pressure.) These nuclides are relatively low-cost radioactive waste from nuclear power reactors. The diameter of the dust particles is so small (a few micrometers) that the electrons from the beta decay leave the dust particles nearly without loss. The surrounding weakly ionized plasma consists of gases or gas mixtures (such as krypton, argon, and xenon) with excimer lines such that a considerable amount of the energy of the beta electrons is converted into this light. The surrounding walls contain photovoltaic layers with wide forbidden zones, such as diamond, which convert the optical energy generated from the radiation into electrical energy. A German patent provides a description of an optoelectric nuclear battery, which would consist of an excimer of argon, xenon, or krypton (or a mixture of two or three of them) in a pressure vessel with an internal mirrored surface, finely-ground radioisotope, and an intermittent ultrasonic stirrer, illuminating a photocell with a bandgap tuned for the excimer. When the beta-emitting nuclides (e.g., krypton-85 or argon-39) emit beta particles, they excite their own electrons in the narrow excimer band at a minimum of thermal losses, so that this radiation is converted in a high-bandgap photovoltaic layer (e.g., in p-n diamond) very efficiently into electricity. The electric power per weight, compared with existing radionuclide batteries, can then be increased by a factor 10 to 50 or more. If the pressure vessel is made from carbon fiber/epoxy, the power-to-weight ratio is said to be comparable to an air-breathing engine with fuel tanks. The advantage of this design is that precision electrode assemblies are not needed, and most beta particles escape the finely-divided bulk material to contribute to the battery's net power. Disadvantages High price of the radionuclides. High-pressure (up to 10 MPa or 100 bar) heavy containment vessel. A failure of containment would release high-pressure jets of finely-divided radioisotopes, forming an effective dirty bomb. The inherent risk of failure is likely to limit this device to space-based applications, where the finely-divided radioisotope source is only removed from a safe transport medium and placed in the high-pressure gas after the device has left Earth orbit. 
As a DIY project A simple betaphotovoltaic nuclear battery can be constructed from readily available tritium vials (tritium-filled glass tubes coated with a radioluminescent phosphor) and solar cells. One design featuring fourteen 22.5 mm × 3 mm tritium vials produced 1.23 microwatts at a maximum power point of 1.6 volts. Another design combined the battery with a capacitor to power a pocket calculator for up to one minute at a time. See also Nuclear battery Betavoltaic device Radioisotopic thermoelectric generator Radioisotope piezoelectric generator List of battery types References Polymers, Phosphors, and Voltaics for Radioisotope Microbatteries, by Kenneth E. Bower (Editor), et al. US Patent 7,482,533 Nuclear-cored battery Nuclear technology Battery types
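For a rough sense of scale, the figures quoted for the tritium-vial design above (1.23 microwatts at a 1.6 volt maximum power point) imply the current and daily energy computed below; the calculation is ours, not taken from the cited source.

```python
power_w = 1.23e-6     # reported output at the maximum power point
voltage_v = 1.6       # reported maximum-power-point voltage

current_a = power_w / voltage_v
energy_per_day_j = power_w * 24 * 3600

print(f"current at max power point: {current_a * 1e6:.2f} uA")        # ~0.77 uA
print(f"energy per day:             {energy_per_day_j * 1e3:.1f} mJ")  # ~106 mJ
```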
Optoelectric nuclear battery
[ "Physics" ]
952
[ "Nuclear technology", "Nuclear physics" ]
3,229,108
https://en.wikipedia.org/wiki/Favorskii%20rearrangement
The Favorskii rearrangement is principally a rearrangement of cyclopropanones and α-halo ketones that leads to carboxylic acid derivatives. In the case of cyclic α-halo ketones, the Favorskii rearrangement constitutes a ring contraction. This rearrangement takes place in the presence of a base, sometimes hydroxide, to yield a carboxylic acid, but usually either an alkoxide base or an amine to yield an ester or an amide, respectively. α,α'-Dihaloketones eliminate HX under the reaction conditions to give α,β-unsaturated carbonyl compounds. Note that trihalomethyl ketone substrates will result in haloform and carboxylate formation via the haloform reaction instead. History The reaction is named for the Russian chemist Alexei Yevgrafovich Favorskii. Reaction mechanism The reaction mechanism is thought to involve the formation of an enolate on the side of the ketone away from the chlorine atom. This enolate cyclizes to a cyclopropanone intermediate which is then attacked by the hydroxide nucleophile. Its formation can otherwise be viewed as a 2-electron electrocyclization of a 1,3-dipole, which can be captured in Diels Alder reactions. The cyclopropanone intermediate is opened to yield the more stable carbanion, which is quickly protonated. The second step has also been proposed to be stepwise process, with chloride anion leaving first to produce a zwitterionic oxyallyl cation before a disrotatory electrocyclic ring closure takes place to afford the cyclopropanone intermediate. Usage of alkoxide anions such as sodium methoxide, instead of sodium hydroxide, yields the ring-contracted ester product. When enolate formation is impossible, the Favorskii rearrangement takes place by an alternate mechanism, in which addition to hydroxide to the ketone takes place, followed by concerted collapse of the tetrahedral intermediate and migration of the neighboring carbon with displacement of the halide. This is sometimes known as the pseudo-Favorskii rearrangement or quazi-Favorskii rearrangement, although previous to labeling studies, it was thought that all Favorskii rearrangements proceeded through this mechanism. Wallach degradation In the related Wallach degradation (Otto Wallach, 1918) not one but two halogen atoms flank the ketone resulting in a new contracted ketone after oxidation and decarboxylation Photo-Favorskii reaction The reaction type also exists as a photochemical reaction. The photo-Favorskii reaction has been used in the photochemical unlocking of certain phosphates (for instance those of ATP) protected by p-hydroxyphenacyl groups. The deprotection proceeds through a triplet diradical (3) and a dione spiro intermediate (4) although the latter has thus far eluded detection. See also Syntheses of cubane proceed by Favorskii rearrangements: Trimethylenemethane cycloaddition, which can proceed via a similar mechanism Homo-Favorskii rearrangement, the rearrangement of β-halo ketones and cyclobutanones Wolff rearrangement - can react similarly References Further reading Rearrangement reactions Name reactions Ring contraction reactions
Favorskii rearrangement
[ "Chemistry" ]
716
[ "Name reactions", "Ring contraction reactions", "Rearrangement reactions", "Organic reactions" ]
3,231,202
https://en.wikipedia.org/wiki/Pan-STARRS
The Panoramic Survey Telescope and Rapid Response System (Pan-STARRS1; obs. code: F51 and Pan-STARRS2 obs. code: F52) located at Haleakala Observatory, Hawaii, US, consists of astronomical cameras, telescopes and a computing facility that is surveying the sky for moving or variable objects on a continual basis, and also producing accurate astrometry and photometry of already-detected objects. In January 2019 the second Pan-STARRS data release was announced. At 1.6 petabytes, it is the largest volume of astronomical data ever released. Description The Pan-STARRS Project is a collaboration between the University of Hawaiʻi Institute for Astronomy, MIT Lincoln Laboratory, Maui High Performance Computing Center and Science Applications International Corporation. Telescope construction was funded by the U.S. Air Force. By detecting differences from previous observations of the same areas of the sky, Pan-STARRS is discovering many new asteroids, comets, variable stars, supernovae and other celestial objects. Its primary mission is now to detect Near-Earth Objects that threaten impact events and it is expected to create a database of all objects visible from Hawaii (three-quarters of the entire sky) down to apparent magnitude 24. Construction of Pan-STARRS was funded in large part by the U.S. Air Force Research Laboratory. Additional funding to complete Pan-STARRS2 came from the NASA Near Earth Object Observation Program, which also supplies most of the funding to operate the telescopes. The Pan-STARRS NEO survey searches all the sky north of declination −47.5. The first Pan-STARRS telescope (PS1) is located at the summit of Haleakalā on Maui, Hawaii, and went online on 6 December 2008 under the administration of the University of Hawaiʻi. PS1 began full-time science observations on 13 May 2010 and the PS1 Science Mission ran until March 2014. Operations were funded by the PS1 Science Consortium, PS1SC, a consortium including the Max Planck Society in Germany, National Central University in Taiwan, Edinburgh, Durham and Queen's Belfast Universities in the UK, and Johns Hopkins and Harvard Universities in the United States and the Las Cumbres Observatory Global Telescope Network. Consortium observations for the all sky (as visible from Hawaii) survey were completed in April 2014. Having completed PS1, the Pan-STARRS Project focused on building Pan-STARRS 2 (PS2), for which first light was achieved in 2013, with full science operations scheduled for 2014 and then the full array of four telescopes, sometimes called PS4. Completing the array of four telescopes is estimated at a total cost of US$100 million for the entire array. As of mid-2014, Pan-STARRS 2 was in the process of being commissioned. In the wake of substantial funding problems, no clear timeline existed for additional telescopes beyond the second. In March 2018, Pan-STARRS 2 was credited by the Minor Planet Center for the discovery of the potentially hazardous Apollo asteroid , its first minor-planet discovery made at Haleakala on 13 May 2015. Instruments Pan-STARRS currently (2018) consists of two 1.8-m Ritchey–Chrétien telescopes located at Haleakala in Hawaii. The initial telescope, PS1, saw first light using a low-resolution camera in June 2006. The telescope has a 3° field of view, which is extremely large for telescopes of this size, and is equipped with what was the largest digital camera ever built, recording almost 1.4 billion pixels per image. 
The focal plane has 60 separately mounted close packed CCDs arranged in an 8 × 8 array. The corner positions are not populated, as the optics do not illuminate the corners. Each CCD device, called an Orthogonal Transfer Array (OTA), has 4800 × 4800 pixels, separated into 64 cells, each of 600 × 600 pixels. This gigapixel camera or 'GPC' saw first light on 22 August 2007, imaging the Andromeda Galaxy. After initial technical difficulties that were later mostly solved, PS1 began full operation on 13 May 2010. Nick Kaiser, principal investigator of the Pan-STARRS project, summed it up, saying, "PS1 has been taking science-quality data for six months, but now we are doing it dusk-to-dawn every night." The PS1 images, however, remain slightly less sharp than initially planned, which significantly affects some scientific uses of the data. Each image requires about 2 gigabytes of storage and exposure times will be 30 to 60 seconds (enough to record objects down to apparent magnitude 22), with an additional minute or so used for computer processing. Since images are taken on a continuous basis, about 10 terabytes of data are acquired by PS1 every night. Comparing against a database of known unvarying objects compiled from earlier observations will yield objects of interest: anything that has changed brightness and/or position for any reason. As of June 30, 2010, University of Hawaiʻi in Honolulu received an $8.4 million contract modification under the PanSTARRS multi-year program to develop and deploy a telescope data management system for the project. The very large field of view of the telescopes and the relatively short exposure times enable approximately 6000 square degrees of sky to be imaged every night. The entire sky is 4π steradians, or 4π × (180/π)2 ≈ 41,253.0 square degrees, of which about 30,000 square degrees are visible from Hawaii, which means that the entire sky can be imaged in a period of 40 hours (or about 10 hours per night on four days). Given the need to avoid times when the Moon is bright, this means that an area equivalent to the entire sky will be surveyed four times a month, which is entirely unprecedented. By the end of its initial three-year mission in April 2014, PS1 had imaged the sky 12 times in each of 5 filters ('g', 'r', 'i', 'z', and 'y'). Filters 'g', 'r', and 'i' have the bandpasses of the Sloan Digital Sky Survey (SDSS) filters. (Midpoints and bandwidths at half maximum are 464 nm and 128 nm, 658 nm and 138 nm, and 806 nm and 149 nm, respectively.) The'z' filter has the SDSS midpoint (900 nm), but its longwave cutoff avoids water absorptions bands beginning at 930 nm. The shortwave cutoff of the 'y' filter is set by the water absorption bands that end around 960 nm. The longwave cutoff band is currently at 1030 nm to avoid the worst of the detector sensitivity to temperature variations. Science Pan-STARRS is currently mostly funded by a grant from the NASA Near Earth Object Observations program. It therefore spends 90% of its observing time in dedicated searches for Near Earth Objects. Systematically surveying the entire sky on a continuous basis is an unprecedented project and is expected to produce a dramatically larger number of discoveries of various types of celestial objects. For instance, the current leading asteroid discovery survey, the Mount Lemmon Survey, reaches an apparent magnitude of 22 V. Pan-STARRS will go about one magnitude fainter and cover the entire sky visible from Hawaii. 
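The whole-sky figure used above can be checked with one line of arithmetic; the snippet below simply reproduces the conversion from steradians to square degrees quoted in the text and relates it to the stated nightly coverage.

```python
import math

# Whole sky in square degrees: 4*pi steradians, 1 sr = (180/pi)^2 square degrees
whole_sky_deg2 = 4 * math.pi * (180.0 / math.pi) ** 2
print(f"whole sky: {whole_sky_deg2:.1f} square degrees")      # ~41253.0

# Fraction of the full sphere covered by ~6000 square degrees in one night
print(f"one night: {6000 / whole_sky_deg2:.1%} of the sphere")
```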
The ongoing survey will also complement the efforts to map the infrared sky by the NASA WISE orbital telescope, with the results of one survey complementing and extending the other. The second data release, Pan-STARRS DR2, announced in January 2019, is the largest volume of astronomical data ever released. At over 1.6 petabytes of images, it is equivalent to 30,000 times the text content of Wikipedia. The data reside in the Mikulski Archive for Space Telescopes (MAST). Military limitations (until end 2011) According to Defense Industry Daily, significant limitations were put on the PS1 survey to avoid recording sensitive objects. Streak detection software (known as "Magic") was used to censor pixels containing information about satellites in the image. Early versions of this software were immature, leaving a fill factor of 68% of the full field of view (which figure includes gaps between the detectors), but by March 2010 this had improved to 76%, a small reduction from the approximately 80% available. At the end of 2011, the USAF completely eliminated the masking requirement (for all images, past and future). Thus, with the exception of a few non-functioning OTA cells, the entire field of view can be used. Solar System In addition to the large number of expected discoveries in the asteroid belt, Pan-STARRS is expected to detect at least 100,000 Jupiter trojans (compared to 2900 known as of end-2008); at least 20,000 Kuiper belt objects (compared to 800 known as of mid-2005); thousands of trojan asteroids of Saturn, Uranus, and Neptune (currently eight Neptune trojans are known, none for Saturn, and one for Uranus); and large numbers of centaurs and comets. Apart from dramatically adding to the number of known Solar System objects, Pan-STARRS will remove or mitigate the observational bias inherent in many current surveys. For instance, among currently known objects there is a bias favoring low orbital inclination, and thus an object such as escaped detection until recently despite its bright apparent magnitude of 17, which is not much fainter than Pluto. Also, among currently known comets, there is a bias favoring those with short perihelion distances. Reducing the effects of this observational bias will enable a more complete picture of Solar System dynamics. For instance, it is expected that the number of Jupiter trojans larger than 1 km may in fact roughly match the number of asteroid-belt objects, although the currently known population of the latter is several orders of magnitude larger. Pan-STARRS data will elegantly complement the WISE (infrared) survey. WISE infrared images will permit an estimate of size for asteroids and trojan objects tracked over longer periods of time by Pan-STARRS. In 2017, Pan-STARRS detected the first known interstellar object, 1I/2017 U1 'Oumuamua, passing through the Solar System. During the formation of a planetary system, it is thought that a very large number of objects are ejected due to gravitational interactions with planets (as many as 1013 such objects in the case of the Solar System). Objects ejected from planetary systems of other stars might plausibly be throughout the Milky Way and some may pass through the Solar System. Pan-STARRS may detect collisions involving small asteroids. These are quite rare and none have yet been observed, but with a dramatic increase in the number of asteroids discovered it is expected from statistical considerations that some collision events may be observed. 
In November 2019, a review of images from Pan-STARRS revealed that the telescope had captured the disintegration of asteroid P/2016 G1. The asteroid was struck by a smaller object, and gradually fell apart. Astronomers speculate that the object that struck the asteroid may have massed only , traveling at . Beyond the Solar System It is expected that Pan-STARRS will discover an extremely large number of variable stars, including such stars in other nearby galaxies; this may lead to the discovery of previously unknown dwarf galaxies. In discovering numerous Cepheid variables and eclipsing binary stars, it will help determine distances to nearby galaxies with greater precision. It is expected to discover many Type Ia supernovae in other galaxies, which are important in studying the effects of dark energy, and also optical afterglows of gamma ray bursts. Because very young stars (such as T Tauri stars) are usually variable, Pan-STARRS should discover many of these and improve our understanding of them. It is also expected that Pan-STARRS may discover many extrasolar planets by observing their transits across their parent stars, as well as gravitational microlensing events. Pan-STARRS will also measure proper motion and parallax and should thereby discover many brown dwarfs, white dwarfs, and other nearby faint objects, and it should be able to conduct a complete census of all stars within 100 parsecs of the Sun. Prior proper motion and parallax surveys often did not detect faint objects such as the recently discovered Teegarden's star, which are too faint for projects such as Hipparcos. Also, by identifying stars with large parallax but very small proper motion for follow-up radial velocity measurements, Pan-STARRS may even be able to permit the detection of hypothetical Nemesis-type objects if these actually exist. Selected discoveries See also C/2014 G3 Vera C. Rubin Observatory List of near-Earth object observation projects Zwicky Transient Facility Notes References External links PS1 Science Consortium web site The Pan-STARRS1 data archive home page Project Pan-STARRS and the Outer Solar System New telescope will hunt dangerous asteroids. NS 2006 World's biggest digital camera to join asteroid search Is there a Planet X? Early warning of dangerous asteroids and comets Optical telescopes Astronomical surveys Near-Earth object tracking
Pan-STARRS
[ "Astronomy" ]
2,695
[ "Astronomical surveys", "Works about astronomy", "Astronomical objects" ]
3,231,627
https://en.wikipedia.org/wiki/Lagrangian%20Grassmannian
In mathematics, the Lagrangian Grassmannian is the smooth manifold of Lagrangian subspaces of a real symplectic vector space V. Its dimension is n(n + 1)/2 (where the dimension of V is 2n). It may be identified with the homogeneous space U(n)/O(n), where U(n) is the unitary group and O(n) the orthogonal group. Following Vladimir Arnold it is denoted by Λ(n). The Lagrangian Grassmannian is a submanifold of the ordinary Grassmannian of V. A complex Lagrangian Grassmannian is the complex homogeneous manifold of Lagrangian subspaces of a complex symplectic vector space V of dimension 2n. It may be identified with the homogeneous space Sp(n)/U(n), of complex dimension n(n + 1)/2, where Sp(n) is the compact symplectic group. As a homogeneous space To see that the Lagrangian Grassmannian Λ(n) can be identified with U(n)/O(n), note that Cⁿ is a 2n-dimensional real vector space, with the imaginary part of its usual inner product making it into a symplectic vector space. The Lagrangian subspaces of Cⁿ are then the real subspaces of real dimension n on which the imaginary part of the inner product vanishes. An example is Rⁿ. The unitary group U(n) acts transitively on the set of these subspaces, and the stabilizer of Rⁿ is the orthogonal group O(n). It follows from the theory of homogeneous spaces that Λ(n) is isomorphic to U(n)/O(n) as a homogeneous space of U(n). Topology The stable topology of the Lagrangian Grassmannian and complex Lagrangian Grassmannian is completely understood, as these spaces appear in the Bott periodicity theorem: their stable homotopy groups are exactly the homotopy groups of the stable orthogonal group, up to a shift in indexing (dimension). In particular, the fundamental group of Λ(n) is infinite cyclic. Its first homology group is therefore also infinite cyclic, as is its first cohomology group, with a distinguished generator given by the square of the determinant of a unitary matrix, as a mapping to the unit circle. Arnold showed that this leads to a description of the Maslov index, introduced by V. P. Maslov. For a Lagrangian submanifold M of V, in fact, there is a mapping M → Λ(n) which classifies its tangent space at each point (cf. Gauss map). The Maslov index is the pullback via this mapping, in the first cohomology group of M, of the distinguished generator of the first cohomology group of Λ(n). Maslov index A path of symplectomorphisms of a symplectic vector space may be assigned a Maslov index, named after V. P. Maslov; it will be an integer if the path is a loop, and a half-integer in general. If this path arises from trivializing the symplectic vector bundle over a periodic orbit of a Hamiltonian vector field on a symplectic manifold or the Reeb vector field on a contact manifold, it is known as the Conley–Zehnder index. It computes the spectral flow of the Cauchy–Riemann-type operators that arise in Floer homology. It appeared originally in the study of the WKB approximation and appears frequently in the study of quantization, quantum chaos trace formulas, and in symplectic geometry and topology. It can be described as above in terms of a Maslov index for linear Lagrangian submanifolds. References V. I. Arnold, Characteristic class entering in quantization conditions, Funktsional'nyi Analiz i Ego Prilozheniya, 1967, 1,1, 1-14. V. P. Maslov, Théorie des perturbations et méthodes asymptotiques. 1972 Assorted source material relating to the Maslov index. Symplectic geometry Topology of homogeneous spaces Mathematical quantization
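Since the dimension count above follows directly from the homogeneous-space description, a short derivation may be worth spelling out; this is standard material rather than something stated in the article itself.

```latex
% Dimension of the Lagrangian Grassmannian \Lambda(n) \cong U(n)/O(n):
%   \dim U(n) = n^2, \qquad \dim O(n) = \tfrac{n(n-1)}{2}
\dim \Lambda(n) \;=\; \dim U(n) - \dim O(n)
             \;=\; n^2 - \frac{n(n-1)}{2}
             \;=\; \frac{n(n+1)}{2}.
```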
Lagrangian Grassmannian
[ "Physics" ]
800
[ "Mathematical quantization", "Quantum mechanics" ]
3,231,876
https://en.wikipedia.org/wiki/Pneumatic%20artificial%20muscles
Pneumatic artificial muscles (PAMs) are contractile or extensional devices operated by pressurized air filling a pneumatic bladder. In an approximation of human muscles, PAMs are usually grouped in pairs: one agonist and one antagonist. PAMs were first developed (under the name of McKibben Artificial Muscles) in the 1950s for use in artificial limbs. The Bridgestone rubber company (Japan) commercialized the idea in the 1980s under the name of Rubbertuators. The retraction strength of the PAM is limited by the sum total strength of individual fibers in the woven shell. The exertion distance is limited by the tightness of the weave; a very loose weave allows greater bulging, which further twists individual fibers in the weave. One example of a complex configuration of air muscles is the Shadow Dexterous Hand developed by the Shadow Robot Company, which also sells a range of muscles for integration into other projects/systems. Advantages PAMs are very lightweight because their main element is a thin membrane. This allows them to be directly connected to the structure they power, which is an advantage when considering the replacement of a defective muscle. If a defective muscle has to be substituted, its location will always be known and its substitution becomes easier. This is an important characteristic, since the membrane is connected to rigid endpoints, which introduces tension concentrations and therefore possible membrane ruptures. Another advantage of PAMs is their inherent compliant behavior: when a force is exerted on the PAM, it "gives in", without increasing the force in the actuation. This is an important feature when the PAM is used as an actuator in a robot that interacts with a human, or when delicate operations have to be carried out. In PAMs the force is not only dependent on pressure but also on their state of inflation. This is one of their major drawbacks: the mathematical model that describes a PAM's behavior is non-linear, which makes PAMs much harder to control precisely than conventional pneumatic cylinder actuators. The relationship between force and extension in PAMs mirrors what is seen in the length-tension relationship in biological muscle systems. The compressibility of the gas is also an advantage since it adds compliance. As with other pneumatic systems PAM actuators usually need electric valves and a compressed air generator. The loose-weave nature of the outer fiber shell also enables PAMs to be flexible and to mimic biological systems. If the surface fibers are very badly damaged and become unevenly distributed leaving a gap, the internal bladder may inflate through the gap and rupture. As with all pneumatic systems it is important that they are not operated when damaged. Hydraulic operation Although the technology is primarily pneumatically (gas) operated, there is nothing that prevents the technology from also being hydraulically (liquid) operated. Using an incompressible fluid increases system rigidity and reduces compliant behavior. In 2017, such a device was presented by Bridgestone and the Tokyo Institute of Technology, with a claimed strength-to-weight ratio five to ten times higher than for conventional electric motors and hydraulic cylinders. 
See also Artificial muscle Electroactive polymer Exoskeleton Notes External links Pneumatic Artificial Muscles: actuators for robotics and automation Bas Overvelde's ballooning muscles Pneumatic artificial muscles Biped robot powered by pneumatic artificial muscles Soft Robot Manipulators with McKibben muscles Air Muscles from Images Company Air Muscles from Shadow Robots Robotics hardware Pneumatic actuators
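To make the non-linear force behavior mentioned above concrete, the sketch below implements a commonly cited idealized static model of a McKibben-type PAM (often attributed to Chou and Hannaford). It assumes inextensible braid fibers, a perfectly cylindrical shape, and no friction or end effects; the specific dimensions in the example are made up for illustration and do not come from the article.

```python
import math

def mckibben_force(pressure_pa: float, braid_angle_deg: float, d0_m: float) -> float:
    """Idealized static contraction force of a McKibben muscle.

    pressure_pa     gauge pressure inside the bladder (Pa)
    braid_angle_deg braid angle measured from the muscle's long axis (degrees)
    d0_m            theoretical diameter the braid would reach at a 90 degree angle (m)

    Model: F = (pi * d0^2 * P / 4) * (3*cos(theta)^2 - 1)
    The force falls to zero near theta = 54.7 degrees, the braid locking angle.
    """
    theta = math.radians(braid_angle_deg)
    return (math.pi * d0_m**2 * pressure_pa / 4.0) * (3.0 * math.cos(theta)**2 - 1.0)

# Hypothetical example: 300 kPa, 20 mm theoretical diameter, braid angle swept
for angle in (20, 30, 40, 50, 54.7):
    f = mckibben_force(300e3, angle, 0.020)
    print(f"theta = {angle:5.1f} deg  ->  F = {f:7.1f} N")
```

The cos²θ term is what makes the force–contraction relation non-linear: as the muscle shortens, the braid angle grows toward the locking angle and the available force drops off steeply.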
Pneumatic artificial muscles
[ "Engineering" ]
721
[ "Robotics hardware", "Robotics engineering" ]
3,232,652
https://en.wikipedia.org/wiki/Pressure%20experiment
Pressure experiments are experiments performed at pressures lower or higher than atmospheric pressure, called low-pressure experiments and high-pressure experiments, respectively. Pressure experiments are necessary because substances behave differently at different pressures. For example, water boils at a lower temperature at lower pressures. The equipment used for pressure experiments depends on whether the pressure is to be increased or decreased and by how much. A vacuum pump is used to remove the air from a vacuum vessel for low-pressure experiments. High pressures can be created with a piston-cylinder apparatus, up to () and . The piston is shifted with hydraulics, decreasing the volume inside the confining cylinder and increasing the pressure. For higher pressures, up to , a multi-anvil cell is used and for even higher pressures the diamond anvil cell. The diamond anvil cell is used to create extremely high pressures, as much as a million atmospheres (), though only over a small area. The current record is , but the sample size is confined to the order of tens of micrometres (). References See also Orders of magnitude (pressure) Physics experiments
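As a small illustration of the opening example (water boiling at a lower temperature under reduced pressure), the sketch below estimates the boiling point of water at a given pressure by inverting the Antoine vapour-pressure equation. The Antoine constants used are commonly tabulated values for water, valid roughly between 1 and 100 °C and quoted from memory rather than from the article, so treat the output as approximate.

```python
import math

# Antoine equation for water (P in mmHg, T in degrees C), commonly tabulated constants:
#   log10(P) = A - B / (C + T)
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(pressure_mmhg: float) -> float:
    """Temperature at which water's vapour pressure equals the given pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

print(f"1.0 atm (760 mmHg): {boiling_point_c(760):.1f} C")   # ~100 C
print(f"0.5 atm (380 mmHg): {boiling_point_c(380):.1f} C")   # ~82 C
print(f"0.1 atm ( 76 mmHg): {boiling_point_c(76):.1f} C")    # ~46 C
```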
Pressure experiment
[ "Physics" ]
228
[ "Experimental physics", "Physics experiments" ]
3,232,803
https://en.wikipedia.org/wiki/RNA%20polymerase%20III
In eukaryote cells, RNA polymerase III (also called Pol III) is a protein that transcribes DNA to synthesize 5S ribosomal RNA, tRNA, and other small RNAs. The genes transcribed by RNA Pol III fall in the category of "housekeeping" genes whose expression is required in all cell types and most environmental conditions. Therefore, the regulation of Pol III transcription is primarily tied to the regulation of cell growth and the cell cycle and thus requires fewer regulatory proteins than RNA polymerase II. Under stress conditions, however, the protein Maf1 represses Pol III activity. Rapamycin is another Pol III inhibitor via its direct target TOR. Transcription The process of transcription (by any polymerase) involves three main stages: Initiation, requiring the construction of the RNA polymerase complex on the gene's promoter Elongation, the synthesis of the RNA transcript Termination, the finishing of RNA transcription, and disassembly of the RNA polymerase complex Initiation Pol III is unusual (compared to Pol II) by requiring no control sequences upstream of the gene, instead normally relying on internal control sequences - sequences within the transcribed section of the gene (although upstream sequences are occasionally seen, e.g. U6 snRNA gene has an upstream TATA box as seen in Pol II Promoters). There are three classes of Pol III initiation, corresponding to 5S rRNA, tRNA, and U6 snRNA initiation. In all cases, the process starts with transcription factors binding to control sequences and ends with TFIIIB (Transcription Factor for polymerase III B) being recruited to the complex and assembling Pol III. TFIIIB consists of three subunits: TATA binding protein (TBP), a TFIIB-related factor (BRF1, or BRF2 for transcription of a subset of Pol III-transcribed genes in vertebrates), and a B-double-prime (BDP1) unit. The overall architecture bears similarities to that of Pol II. Class I Typical stages in 5S rRNA (also termed class I) gene initiation: TFIIIA (Transcription Factor for polymerase III A) binds to the intragenic (lying within the transcribed DNA sequence) 5S rRNA control sequence, the C Block (also termed box C). TFIIIA serves as a platform that replaces the A and B Blocks for positioning TFIIIC in an orientation with respect to the start site of transcription that is equivalent to what is observed for tRNA genes. Once TFIIIC is bound to the TFIIIA-DNA complex, the assembly of TFIIIB proceeds as described for tRNA transcription. Class II Typical stages in a tRNA (also termed class II) gene initiation: TFIIIC (Transcription Factor for polymerase III C) binds to two intragenic (lying within the transcribed DNA sequence) control sequences, the A and B Blocks (also termed box A and box B). TFIIIC acts as an assembly factor that positions TFIIIB to bind to DNA at a site centered approximately 26 base pairs upstream of the start site of transcription. TFIIIB is the transcription factor that assembles Pol III at the start site of transcription. Once TFIIIB is bound to DNA, TFIIIC is no longer required. TFIIIB also plays an essential role in promoter opening. Class III Typical stages in a U6 snRNA (also termed class III) gene initiation (documented in vertebrates only): SNAPc (SNRNA Activating Protein complex; subunits: 1, 2, 3, 4, 5) (also termed PBP and PTF) binds to the PSE (Proximal Sequence Element) centered approximately 55 base pairs upstream of the start site of transcription. 
This assembly is greatly stimulated by the Pol II transcription factors Oct1 and STAF that bind to an enhancer-like DSE (Distal Sequence Element) at least 200 base pairs upstream of the start site of transcription. These factors and promoter elements are shared between Pol II and Pol III transcription of snRNA genes. SNAPc acts to assemble TFIIIB at a TATA box centered 26 base pairs upstream of the start site of transcription. It is the presence of a TATA box that specifies that the snRNA gene is transcribed by Pol III rather than Pol II. The TFIIIB for U6 snRNA transcription contains a smaller Brf1 paralogue, Brf2. TFIIIB is the transcription factor that assembles Pol III at the start site of transcription. Sequence conservation predicts that TFIIIB containing Brf2 also plays a role in promoter opening. Elongation TFIIIB remains bound to DNA following the initiation of transcription by Pol III, unlike bacterial σ factors and most of the basal transcription factors for Pol II transcription. This leads to a high rate of transcriptional reinitiation of Pol III-transcribed genes. One study conducted on Saccharomyces cerevisiae found the average rate of chain elongation was 21 to 22 nucleotides per second, with the fastest being 29 nucleotides per second. These rates were comparable to elongation rates of RNA polymerase II found by an in vivo study conducted on Drosophila. The analysis of the individual steps of RNA chain elongation depicted that adding U and A to U-terminated RNA chains was slow. Termination Polymerase III terminates transcription at small polyUs stretch (5-6). In eukaryotes, a hairpin loop is not required, but may enhance termination efficiency in humans. In Saccharomyces cerevisiae, it was found that termination of transcription occurred in the sequence T7GT6 and was progressive. The presence of transcripts with five, six, and seven U residues and the slow readthrough of the T7 stretch suggest that the incorporation of a single G into the RNA chain served to reset elongation rates either entirely or substantially. Transcribed RNAs The types of RNAs transcribed from RNA polymerase III include: Transfer RNAs 5S ribosomal RNA U6 spliceosomal RNA RNase P and RNase MRP RNA 7SL RNA (the RNA component of the signal recognition particle) Vault RNAs Y RNA SINEs (short interspersed repetitive elements) 7SK RNA Several microRNAs Several small nucleolar RNAs Several gene regulatory antisense RNAs Role in DNA repair RNA polymerase III appears to be essential for homologous recombinational repair of DNA double-strand breaks. RNA polymerase III catalyzes the formation of a transient RNA-DNA hybrid at double strand breaks, an essential intermediate step in homologous recombination mediated double-strand break repair. This step protects the 3’ overhanging DNA strand from degradation. After the transient RNA-DNA hybrid intermediate is formed, the RNA strand is replaced by the RAD51 protein, which then catalyzes the ssDNA invasion step of homologous recombination. See also RNA polymerase References Gene expression Proteins
RNA polymerase III
[ "Chemistry", "Biology" ]
1,463
[ "Biomolecules by chemical classification", "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Proteins" ]
3,233,543
https://en.wikipedia.org/wiki/Phosphoinositide%20phospholipase%20C
Phosphoinositide phospholipase C (PLC, EC 3.1.4.11, triphosphoinositide phosphodiesterase, phosphoinositidase C, 1-phosphatidylinositol-4,5-bisphosphate phosphodiesterase, monophosphatidylinositol phosphodiesterase, phosphatidylinositol phospholipase C, PI-PLC, 1-phosphatidyl-D-myo-inositol-4,5-bisphosphate inositoltrisphosphohydrolase; systematic name 1-phosphatidyl-1D-myo-inositol-4,5-bisphosphate inositoltrisphosphohydrolase) is a family of eukaryotic intracellular enzymes that play an important role in signal transduction processes. These enzymes belong to a larger superfamily of Phospholipase C. Other families of phospholipase C enzymes have been identified in bacteria and trypanosomes. Phospholipases C are phosphodiesterases. Phospholipase Cs participate in phosphatidylinositol 4,5-bisphosphate (PIP2) metabolism and lipid signaling pathways in a calcium-dependent manner. At present, the family consists of six sub-families comprising a total of 13 separate isoforms that differ in their mode of activation, expression levels, catalytic regulation, cellular localization, membrane binding avidity and tissue distribution. All are capable of catalyzing the hydrolysis of PIP2 into two important second messenger molecules, which go on to alter cell responses such as proliferation, differentiation, apoptosis, cytoskeleton remodeling, vesicular trafficking, ion channel conductance, endocrine function and neurotransmission. Reaction and catalytic mechanism All family members are capable of catalyzing the hydrolysis of PIP2, a phosphatidylinositol at the inner leaflet of the plasma membrane into the two second messengers, inositol trisphosphate (IP3) and diacylglycerol (DAG). The chemical reaction may be expressed as: 1-phosphatidyl-1D-myo-inositol 4,5-bisphosphate + H2O 1D-myo-inositol 1,4,5-trisphosphate + diacylglycerol PLCs catalyze the reaction in two sequential steps. The first reaction is a phosphotransferase step that involves an intramolecular attack between the hydroxyl group at the 2' position on the inositol ring and the adjacent phosphate group resulting in a cyclic IP3 intermediate. At this point, DAG is generated. However, in the second phosphodiesterase step, the cyclic intermediate is held within the active site long enough to be attacked by a molecule of water, resulting in a final acyclic IP3 product. It should be mentioned that bacterial forms of the enzyme, which contain only the catalytic lipase domain, produce cyclic intermediates exclusively, whereas the mammalian isoforms generate predominantly the acyclic product. However, it is possible to alter experimental conditions (e.g., temperature, pH) in vitro such that some mammalian isoforms will alter the degree to which they produce mixtures of cyclic/acyclic products along with DAG. This catalytic process is tightly regulated by reversible phosphorylation of different phosphoinositides and their affinity for different regulatory proteins. Cell location Phosphoinositide phospholipase C performs its catalytic function at the plasma membrane where the substrate PIP2 is present. This membrane docking is mediated mostly by lipid-binding domains (e.g. PH domain and C2 domain) that display affinity for different phospholipid components of the plasma membrane. It is important to note that research has also discovered that, in addition to the plasma membrane, phosphoinositide phospholipase C also exists within other sub-cellular regions such as the cytoplasm and nucleus of the cell. 
At present, it is unclear exactly what the definitive roles for these enzymes in these cellular compartments are, particularly the nucleus. Function Phospholipase C performs a catalytic mechanism, depleting PIP2 and generating inositol trisphosphate (IP3) and diacylglycerol (DAG). Depletion of PIP2 inactivates numerous effector molecules in the plasma membrane, most notably PIP2 dependent channels and transporters responsible for setting the cell's membrane potential. The hydrolytic products also go on to modulate the activity of downstream proteins important for cellular signaling. IP3 is soluble, and diffuses through the cytoplasm and interacts with IP3 receptors on the endoplasmic reticulum, causing the release of calcium and raising the level of intracellular calcium. Further reading: Function of calcium in humans DAG remains within the inner leaflet of the plasma membrane due to its hydrophobic character, where it recruits protein kinase C (PKC), which becomes activated in conjunction with binding calcium ions. This results in a host of cellular responses through stimulation of calcium-sensitive proteins such as Calmodulin. Further reading: Function of protein kinase C Domain structure In terms of domain organization, all family members possess homologous X and Y catalytic domains in the form of a distorted Triose Phosphate Isomerase (TIM) barrel with a highly disordered, charged, and flexible intervening linker region. Likewise, all isoforms possess four EF hand domains, and a single C2 domain that flank the X and Y catalytic core. An N-terminal PH domain is present in every family except for the sperm-specific ζ isoform. SH2 (phosphotyrosine binding) and SH3 (proline-rich-binding) domains are found only in the γ form (specifically within the linker region), and only the ε form contains both guanine nucleotide exchange factor (GEF) and RA (Ras Associating) domains. The β subfamily is distinguished from the others by the presence of a long C-terminal extension immediately downstream of the C2 domain, which is required for activation by Gαq subunits, and which plays a role in plasma membrane binding and nuclear localization. Isoenzymes and activation The phospholipase C family consists of 13 isoenzymes split between six subfamilies, PLC-δ (1,3 & 4), -β(1-4), -γ(1,2), -ε, -ζ, and the recently discovered -η(1,2) isoform. Depending on the specific subfamily in question, activation can be highly variable. Activation by either Gαq or Gβγ G-protein subunits (making it part of a G protein-coupled receptor signal transduction pathway) or by transmembrane receptors with intrinsic or associated tyrosine kinase activity has been reported. In addition, members of the Ras superfamily of small GTPases (namely the Ras and Rho subfamilies) have also been implicated. It should also be mentioned that all forms of phospholipase C require calcium for activation, many of them possessing multiple calcium contact sites in the catalytic region. The only isoform that is known to be inactive at basal intracellular calcium levels is the δ subfamily of enzymes suggesting that they function as calcium amplifiers that become activated downstream of other PLC family members. PLC-β PLC-β(1-4) (120-155kDa) are activated by Gαq subunits through their C2 domain and long C-terminal extension. Gβγ subunits are known to activate the β2 and β3 isozymes only; however, this occurs through the PH domain and/or through interactions with the catalytic domain. The exact mechanism still requires further investigation. 
The PH domain of β2 and β3 plays a dual role, much like PLC-δ1, by binding to the plasma membrane, as well as being a site of interaction for the catalytic activator. However, PLC-β binds to the lipid surface independent of PIP2 with all isozymes preferring phosphoinositol-3-phosphate or neutral membranes. Members of the Rho GTPase family (e.g., Rac1, Rac2, Rac3, and cdc42) have been implicated in their activation by binding to an alternate site on the N-terminal PH domain followed by subsequent recruitment to the plasma membrane. A crystal structure of Rac1 bound to the PH domain of PLCβ2 has been solved. Like PLC-δ1, many PLC-β isoforms (in particular, PLC-β1) have been found to take up residence in the nuclear compartment. A basic amino acid region within the enzyme's long C-terminal tail appears to function as a Nuclear Localization Signal for import into the nucleus. PLC-β1 seems to play unspecified roles in cellular proliferation and differentiation. PLC-γ PLC-γ (120-155kDa) is activated by receptor and non-receptor tyrosine kinases due to the presence of two SH2 and a single SH3 domain situated between a split PH domain within the linker region. Although this particular isoform does not contain classic nuclear export or localization sequences, it has been found within the nucleus of certain cell lines. There are two main isoforms of PLCγ expressed in human specimens, PLC-γ1 and PLC-γ2. PLC-γ2 PLC-γ2 plays a major role in BCR signal transduction. Absence of this enzyme in knockout specimens severely inhibits the development of B cells because the same signaling pathways necessary for antigen mediated B cell activation are necessary for B cell development from CLPs. In B cell signaling, PI 3-kinase is recruited to the BCR early in the signal transduction pathway. PI-3K phosphorylates PIP2 (Phosphatidylinositol 4,5-bisphosphate) into PIP3 (Phosphatidylinositol 3,4,5-trisphosphate). The increase in concentration of PIP3 recruits PLC-γ2 to the BCR complex which binds to BLNK on the BCR scaffold and membrane PIP3. PLC-γ2 is then phosphorylated by Syk on one site and Btk on two sites. PLC-γ2 then competes with PI-3K for PIP2 which it hydrolyzes into IP3 (inositol 1,4,5-trisphosphate), which ultimately raises intercellular calcium, and diacylglycerol (DAG), which activates portions of the PKC family. Because PLC-γ2 competes for PIP2 with the original signaling molecule PI3K, it serves as a negative feedback mechanism. PLC-δ The PLC-δ subfamily consists of three family members, δ1, 2, and 3. PLC-δ1 (85kDa) is the most well understood of the three. The enzyme is activated by high calcium levels generated by other PLC family members, and therefore functions as a calcium amplifier within the cell. Binding of its substrate PIP2 to the N-terminal PH domain is highly specific and functions to promote activation of the catalytic core. In addition, this specificity helps tether the enzyme tightly to the plasma membrane in order to access substrate through ionic interactions between the phosphate groups of PIP2 and charged residues in the PH domain. While the catalytic core does possess a weak affinity for PIP2, the C2 domain has been shown to mediate calcium-dependent phospholipid binding as well. In this model, the PH and C2 domains operate in concert as a "tether and fix" apparatus necessary for processive catalysis by the enzyme. PLC-δ1 also possesses a classical leucine-rich nuclear export signal (NES) in its EF hand motif, as well as a Nuclear localization signal within its linker region. 
These two elements combined allow PLC-δ1 to actively translocate into and out of the nucleus. However, its function in the nucleus remains unclear. The widely expressed PLC-δ1 isoform is the best-characterized phospholipase family member, as it was the first to have high-resolution X-ray crystal structures available for analysis. In terms of domain architecture, all of the enzymes are built upon a common PLC-δ backbone, wherein each family displays similarities, as well as obvious distinctions, that contribute to unique regulatory properties within the cell. Because it is the only family found expressed in lower eukaryotic organisms such as yeast and slime molds, it is considered the prototypical PLC isoform. The other family members most likely evolved from PLC-δ as their domain architecture and mechanism of activation were expanded. Although a full crystal structure has not been obtained, high-resolution X-ray crystallography has yielded the molecular structure of the N-terminal PH domain complexed with its product IP3, as well as the remainder of the enzyme with the PH domain ablated. These structures have provided researchers with the necessary information to begin speculating about other family members such as PLCβ2. Other PLC families PLC-ε (230–260 kDa) is activated by Ras and Rho GTPases. PLC-ζ (75 kDa) is thought to play an important role in vertebrate fertilization by producing intracellular calcium oscillations important for the start of embryonic development. However, the mechanism of activation still remains unclear. This isoform is also capable of entering the early-formed pronucleus after fertilization, which seems to coincide with the cessation of calcium mobilization. It, like PLC-δ1 and PLC-β, possesses nuclear export and localization sequences. PLC-η has been implicated in neuronal functioning. Human proteins in this family PLCB1; PLCB2; PLCB3; PLCB4; PLCD1; PLCD3; PLCD4; PLCE1; PLCG1; PLCG2; PLCH1; PLCH2; PLCL1; PLCL2; PLCZ1 See also Clostridium perfringens alpha toxin Lipid signaling PH domain, found in some phospholipases C Phospholipase Zinc-dependent phospholipase C, a different family of phospholipase C References EC 3.1.4 Peripheral membrane proteins Enzymes of known structure Signal transduction Protein families Enzymes Calcium enzymes Hydrolases Calcium signaling Cell signaling G protein-coupled receptors Cell biology de:Phospholipase C es:Fosfolipasa C fr:Phospholipase C he:פוספוליפאז C ru:Фосфолипаза C
Phosphoinositide phospholipase C
[ "Chemistry", "Biology" ]
3,186
[ "Cell biology", "Protein classification", "Signal transduction", "G protein-coupled receptors", "Biochemistry", "Protein families", "Neurochemistry", "Calcium signaling" ]
9,144,207
https://en.wikipedia.org/wiki/Milman%E2%80%93Pettis%20theorem
In mathematics, the Milman–Pettis theorem states that every uniformly convex Banach space is reflexive. The theorem was proved independently by D. Milman (1938) and B. J. Pettis (1939). S. Kakutani gave a different proof in 1939, and John R. Ringrose published a shorter proof in 1959. Mahlon M. Day (1941) gave examples of reflexive Banach spaces which are not isomorphic to any uniformly convex space. References S. Kakutani, Weak topologies and regularity of Banach spaces, Proc. Imp. Acad. Tokyo 15 (1939), 169–173. D. Milman, On some criteria for the regularity of spaces of type (B), C. R. (Doklady) Acad. Sci. U.R.S.S, 20 (1938), 243–246. B. J. Pettis, A proof that every uniformly convex space is reflexive, Duke Math. J. 5 (1939), 249–253. J. R. Ringrose, A note on uniformly convex spaces, J. London Math. Soc. 34 (1959), 92. Banach spaces Theorems in functional analysis fr:Théorème de Milman-Pettis
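In symbols, using the standard textbook definitions: a Banach space X is uniformly convex if for every ε with 0 < ε ≤ 2 there exists δ > 0 such that ‖x‖ ≤ 1, ‖y‖ ≤ 1 and ‖x − y‖ ≥ ε together imply ‖(x + y)/2‖ ≤ 1 − δ; X is reflexive if the canonical embedding J : X → X∗∗, given by (Jx)(f) = f(x) for f ∈ X∗, is surjective. The Milman–Pettis theorem states that the first property implies the second. Day's examples cited above show that the converse fails, since a reflexive space need not be isomorphic to any uniformly convex space.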
Milman–Pettis theorem
[ "Mathematics" ]
266
[ "Theorems in mathematical analysis", "Theorems in functional analysis" ]
9,148,277
https://en.wikipedia.org/wiki/Mathematical%20descriptions%20of%20the%20electromagnetic%20field
There are various mathematical descriptions of the electromagnetic field that are used in the study of electromagnetism, one of the four fundamental interactions of nature. In this article, several approaches are discussed; in each of them the equations are expressed in terms of electric and magnetic fields, potentials, and charges with currents. Vector field approach The most common description of the electromagnetic field uses two three-dimensional vector fields called the electric field and the magnetic field. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as E(x, y, z, t) (electric field) and B(x, y, z, t) (magnetic field). If only the electric field (E) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (B) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations. Maxwell's equations in the vector field approach The behaviour of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by the Maxwell–Heaviside equations: {| class="toccolours collapsible" style="background-color:#ECFCF4; padding:6; cellpadding=6;text-align:left;border:2px solid #50C878" |- |text-align="center" colspan="2"|Maxwell's equations (vector fields) |- | ∇ ⋅ E = ρ/ε0 || Gauss's law |- | ∇ ⋅ B = 0 || Gauss's law for magnetism |- | ∇ × E = −∂B/∂t || Faraday's law of induction |- | ∇ × B = μ0J + μ0ε0 ∂E/∂t || Ampère–Maxwell law |} where ρ is the charge density, which can (and often does) depend on time and position, ε0 is the electric constant, μ0 is the magnetic constant, and J is the current per unit area (the current density), also a function of time and position. The equations take this form with the International System of Quantities. When dealing with only nondispersive isotropic linear materials, Maxwell's equations are often modified to ignore bound charges by replacing the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. For some materials that have more complex responses to electromagnetic fields, these properties can be represented by tensors, with time-dependence related to the material's ability to respond to rapid field changes (dispersion (optics), Green–Kubo relations), and possibly also field dependencies representing nonlinear and/or nonlocal material responses to large amplitude fields (nonlinear optics). Potential field approach Many times in the use and calculation of electric and magnetic fields, the approach used first computes an associated potential: the electric potential, φ, for the electric field, and the magnetic vector potential, A, for the magnetic field. The electric potential is a scalar field, while the magnetic potential is a vector field. This is why sometimes the electric potential is called the scalar potential and the magnetic potential is called the vector potential. These potentials can be used to find their associated fields as follows: B = ∇ × A and E = −∇φ − ∂A/∂t. Maxwell's equations in potential formulation These relations can be substituted into Maxwell's equations to express the latter in terms of the potentials. Faraday's law and Gauss's law for magnetism (the homogeneous equations) turn out to be identically true for any potentials.
This is because of the way the fields are expressed as gradients and curls of the scalar and vector potentials. The homogeneous equations in terms of these potentials involve the divergence of the curl and the curl of the gradient , which are always zero. The other two of Maxwell's equations (the inhomogeneous equations) are the ones that describe the dynamics in the potential formulation. These equations taken together are as powerful and complete as Maxwell's equations. Moreover, the problem has been reduced somewhat, as the electric and magnetic fields together had six components to solve for. In the potential formulation, there are only four components: the electric potential and the three components of the vector potential. However, the equations are messier than Maxwell's equations using the electric and magnetic fields. Gauge freedom These equations can be simplified by taking advantage of the fact that the electric and magnetic fields are physically meaningful quantities that can be measured; the potentials are not. There is a freedom to constrain the form of the potentials provided that this does not affect the resultant electric and magnetic fields, called gauge freedom. Specifically for these equations, for any choice of a twice-differentiable scalar function of position and time λ, if is a solution for a given system, then so is another potential given by: This freedom can be used to simplify the potential formulation. Either of two such scalar functions is typically chosen: the Coulomb gauge and the Lorenz gauge. Coulomb gauge The Coulomb gauge is chosen in such a way that , which corresponds to the case of magnetostatics. In terms of λ, this means that it must satisfy the equation This choice of function results in the following formulation of Maxwell's equations: Several features about Maxwell's equations in the Coulomb gauge are as follows. Firstly, solving for the electric potential is very easy, as the equation is a version of Poisson's equation. Secondly, solving for the magnetic vector potential is particularly difficult. This is the big disadvantage of this gauge. The third thing to note, and something that is not immediately obvious, is that the electric potential changes instantly everywhere in response to a change in conditions in one locality. For instance, if a charge is moved in New York at 1 pm local time, then a hypothetical observer in Australia who could measure the electric potential directly would measure a change in the potential at 1 pm New York time. This seemingly violates causality in special relativity, i.e. the impossibility of information, signals, or anything travelling faster than the speed of light. The resolution to this apparent problem lies in the fact that, as previously stated, no observers can measure the potentials; they measure the electric and magnetic fields. So, the combination of ∇φ and ∂A/∂t used in determining the electric field restores the speed limit imposed by special relativity for the electric field, making all observable quantities consistent with relativity. Lorenz gauge condition A gauge that is often used is the Lorenz gauge condition. In this, the scalar function λ is chosen such that meaning that λ must satisfy the equation The Lorenz gauge results in the following form of Maxwell's equations: The operator is called the d'Alembertian (some authors denote this by only the square ). 
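For concreteness, in SI units the Lorenz condition reads ∇ ⋅ A + (1/c²) ∂φ/∂t = 0, and with it the two inhomogeneous Maxwell equations become □φ = ρ/ε0 and □A = μ0 J, where □ = (1/c²) ∂²/∂t² − ∇² is the operator referred to above (one common sign convention; some authors define □ with the opposite overall sign).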
These equations are inhomogeneous versions of the wave equation, with the terms on the right side of the equation serving as the source functions for the wave. As with any wave equation, these equations lead to two types of solution: advanced potentials (which are related to the configuration of the sources at future points in time), and retarded potentials (which are related to the past configurations of the sources); the former are usually disregarded where the field is to analyzed from a causality perspective. As pointed out above, the Lorenz gauge is no more valid than any other gauge since the potentials cannot be directly measured, however the Lorenz gauge has the advantage of the equations being Lorentz invariant. Extension to quantum electrodynamics Canonical quantization of the electromagnetic fields proceeds by elevating the scalar and vector potentials; φ(x), A(x), from fields to field operators. Substituting into the previous Lorenz gauge equations gives: Here, J and ρ are the current and charge density of the matter field. If the matter field is taken so as to describe the interaction of electromagnetic fields with the Dirac electron given by the four-component Dirac spinor field ψ, the current and charge densities have form: where α are the first three Dirac matrices. Using this, we can re-write Maxwell's equations as: which is the form used in quantum electrodynamics. Geometric algebra formulations Analogous to the tensor formulation, two objects, one for the electromagnetic field and one for the current density, are introduced. In geometric algebra (GA) these are multivectors, which sometimes follow Ricci calculus. Algebra of physical space In the Algebra of physical space (APS), also known as the Clifford algebra , the field and current are represented by multivectors. The field multivector, known as the Riemann–Silberstein vector, is and the four-current multivector is using an orthonormal basis . Similarly, the unit pseudoscalar is , due to the fact that the basis used is orthonormal. These basis vectors share the algebra of the Pauli matrices, but are usually not equated with them, as they are different objects with different interpretations. After defining the derivative Maxwell's equations are reduced to the single equation In three dimensions, the derivative has a special structure allowing the introduction of a cross product: from which it is easily seen that Gauss's law is the scalar part, the Ampère–Maxwell law is the vector part, Faraday's law is the pseudovector part, and Gauss's law for magnetism is the pseudoscalar part of the equation. After expanding and rearranging, this can be written as Spacetime algebra We can identify APS as a subalgebra of the spacetime algebra (STA) , defining and . The s have the same algebraic properties of the gamma matrices but their matrix representation is not needed. The derivative is now The Riemann–Silberstein becomes a bivector and the charge and current density become a vector Owing to the identity Maxwell's equations reduce to the single equation Differential forms approach In what follows, cgs-Gaussian units, not SI units are used. (To convert to SI, see here.) By Einstein notation, we implicitly take the sum over all values of the indices that can vary within the dimension. Field 2-form In free space, where and are constant everywhere, Maxwell's equations simplify considerably once the language of differential geometry and differential forms is used. 
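For orientation, and up to unit-dependent constant factors that vary between the conventions used in this article, the entire system condenses in this language to the pair dF = 0 and d⋆F = J, just as it condenses to the single equation ∇F = J in the spacetime algebra above; here F is the field 2-form defined next.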
The electric and magnetic fields are now jointly described by a 2-form F in a 4-dimensional spacetime manifold. The Faraday tensor (electromagnetic tensor) can be written as a 2-form in Minkowski space with metric signature as which is the exterior derivative of the electromagnetic four-potential The source free equations can be written by the action of the exterior derivative on this 2-form. But for the equations with source terms (Gauss's law and the Ampère-Maxwell equation), the Hodge dual of this 2-form is needed. The Hodge star operator takes a p-form to a ()-form, where n is the number of dimensions. Here, it takes the 2-form (F) and gives another 2-form (in four dimensions, ). For the basis cotangent vectors, the Hodge dual is given as (see ) and so on. Using these relations, the dual of the Faraday 2-form is the Maxwell tensor, Current 3-form, dual current 1-form Here, the 3-form J is called the electric current form or current 3-form: That F is a closed form, and the exterior derivative of its Hodge dual is the current 3-form, express Maxwell's equations: Here d denotes the exterior derivative – a natural coordinate- and metric-independent differential operator acting on forms, and the (dual) Hodge star operator is a linear transformation from the space of 2-forms to the space of (4 − 2)-forms defined by the metric in Minkowski space (in four dimensions even by any metric conformal to this metric). The fields are in natural units where . Since d2 = 0, the 3-form J satisfies the conservation of current (continuity equation): The current 3-form can be integrated over a 3-dimensional space-time region. The physical interpretation of this integral is the charge in that region if it is spacelike, or the amount of charge that flows through a surface in a certain amount of time if that region is a spacelike surface cross a timelike interval. As the exterior derivative is defined on any manifold, the differential form version of the Bianchi identity makes sense for any 4-dimensional manifold, whereas the source equation is defined if the manifold is oriented and has a Lorentz metric. In particular the differential form version of the Maxwell equations are a convenient and intuitive formulation of the Maxwell equations in general relativity. Note: In much of the literature, the notations and are switched, so that is a 1-form called the current and is a 3-form called the dual current. Linear macroscopic influence of matter In a linear, macroscopic theory, the influence of matter on the electromagnetic field is described through more general linear transformation in the space of 2-forms. We call the constitutive transformation. The role of this transformation is comparable to the Hodge duality transformation. The Maxwell equations in the presence of matter then become: where the current 3-form J still satisfies the continuity equation . When the fields are expressed as linear combinations (of exterior products) of basis forms θi, the constitutive relation takes the form where the field coefficient functions and the constitutive coefficients are anticommutative for swapping of each one's indices. In particular, the Hodge star operator that was used in the above case is obtained by taking in terms of tensor index notation with respect to a (not necessarily orthonormal) basis in a tangent space and its dual basis in , having the gram metric matrix and its inverse matrix , and is the Levi-Civita symbol with . 
Up to scaling, this is the only invariant tensor of this type that can be defined with the metric. In this formulation, electromagnetism generalises immediately to any 4-dimensional oriented manifold or with small adaptations any manifold. Alternative metric signature In the particle physicist's sign convention for the metric signature , the potential 1-form is The Faraday curvature 2-form becomes and the Maxwell tensor becomes The current 3-form J is and the corresponding dual 1-form is The current norm is now positive and equals with the canonical volume form . Curved spacetime Traditional formulation Matter and energy generate curvature of spacetime. This is the subject of general relativity. Curvature of spacetime affects electrodynamics. An electromagnetic field having energy and momentum also generates curvature in spacetime. Maxwell's equations in curved spacetime can be obtained by replacing the derivatives in the equations in flat spacetime with covariant derivatives. (Whether this is the appropriate generalization requires separate investigation.) The sourced and source-free equations become (cgs-Gaussian units): and Here, is a Christoffel symbol that characterizes the curvature of spacetime and ∇α is the covariant derivative. Formulation in terms of differential forms The formulation of the Maxwell equations in terms of differential forms can be used without change in general relativity. The equivalence of the more traditional general relativistic formulation using the covariant derivative with the differential form formulation can be seen as follows. Choose local coordinates xα that gives a basis of 1-forms dxα in every point of the open set where the coordinates are defined. Using this basis and cgs-Gaussian units we define The antisymmetric field tensor Fαβ, corresponding to the field 2-form F The current-vector infinitesimal 3-form J The epsilon tensor contracted with the differential 3-form produces 6 times the number of terms required. Here g is as usual the determinant of the matrix representing the metric tensor, gαβ. A small computation that uses the symmetry of the Christoffel symbols (i.e., the torsion-freeness of the Levi-Civita connection) and the covariant constantness of the Hodge star operator then shows that in this coordinate neighborhood we have: the Bianchi identity the source equation the continuity equation Classical electrodynamics as the curvature of a line bundle An elegant and intuitive way to formulate Maxwell's equations is to use complex line bundles or a principal U(1)-bundle, on the fibers of which U(1) acts regularly. The principal U(1)-connection ∇ on the line bundle has a curvature F = ∇2, which is a two-form that automatically satisfies and can be interpreted as a field strength. If the line bundle is trivial with flat reference connection d we can write and with A the 1-form composed of the electric potential and the magnetic vector potential. In quantum mechanics, the connection itself is used to define the dynamics of the system. This formulation allows a natural description of the Aharonov–Bohm effect. In this experiment, a static magnetic field runs through a long magnetic wire (e.g., an iron wire magnetized longitudinally). Outside of this wire the magnetic induction is zero, in contrast to the vector potential, which essentially depends on the magnetic flux through the cross-section of the wire and does not vanish outside. 
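Quantitatively, and as a standard result quoted here only for illustration: for any closed curve C encircling the wire, the line integral ∮C A ⋅ dl equals ΦB, the magnetic flux through the wire's cross-section, so an electron transported around C picks up the relative quantum phase eΦB/ħ even though it never enters a region where B is non-zero.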
Since there is no electric field either, the Maxwell tensor throughout the space-time region outside the tube, during the experiment. This means by definition that the connection ∇ is flat there. In mentioned Aharonov–Bohm effect, however, the connection depends on the magnetic field through the tube since the holonomy along a non-contractible curve encircling the tube is the magnetic flux through the tube in the proper units. This can be detected quantum-mechanically with a double-slit electron diffraction experiment on an electron wave traveling around the tube. The holonomy corresponds to an extra phase shift, which leads to a shift in the diffraction pattern. Discussion and other approaches Following are the reasons for using each of such formulations. Potential formulation In advanced classical mechanics it is often useful, and in quantum mechanics frequently essential, to express Maxwell's equations in a potential formulation involving the electric potential (also called scalar potential) φ, and the magnetic potential (a vector potential) A. For example, the analysis of radio antennas makes full use of Maxwell's vector and scalar potentials to separate the variables, a common technique used in formulating the solutions of differential equations. The potentials can be introduced by using the Poincaré lemma on the homogeneous equations to solve them in a universal way (this assumes that we consider a topologically simple, e.g. contractible space). The potentials are defined as in the table above. Alternatively, these equations define E and B in terms of the electric and magnetic potentials that then satisfy the homogeneous equations for E and B as identities. Substitution gives the non-homogeneous Maxwell equations in potential form. Many different choices of A and φ are consistent with given observable electric and magnetic fields E and B, so the potentials seem to contain more, (classically) unobservable information. The non uniqueness of the potentials is well understood, however. For every scalar function of position and time , the potentials can be changed by a gauge transformation as without changing the electric and magnetic field. Two pairs of gauge transformed potentials and are called gauge equivalent, and the freedom to select any pair of potentials in its gauge equivalence class is called gauge freedom. Again by the Poincaré lemma (and under its assumptions), gauge freedom is the only source of indeterminacy, so the field formulation is equivalent to the potential formulation if we consider the potential equations as equations for gauge equivalence classes. The potential equations can be simplified using a procedure called gauge fixing. Since the potentials are only defined up to gauge equivalence, we are free to impose additional equations on the potentials, as long as for every pair of potentials there is a gauge equivalent pair that satisfies the additional equations (i.e. if the gauge fixing equations define a slice to the gauge action). The gauge-fixed potentials still have a gauge freedom under all gauge transformations that leave the gauge fixing equations invariant. Inspection of the potential equations suggests two natural choices. In the Coulomb gauge, we impose , which is mostly used in the case of magneto statics when we can neglect the term. In the Lorenz gauge (named after the Dane Ludvig Lorenz), we impose The Lorenz gauge condition has the advantage of being Lorentz invariant and leading to Lorentz-invariant equations for the potentials. 
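Concretely, if the potentials are assembled into the four-potential Aμ = (φ/c, A) and the sources into the four-current Jμ = (cρ, J), the Lorenz condition becomes ∂μAμ = 0 and the potential equations take the manifestly Lorentz-invariant form □Aμ = μ0 Jμ (SI units, metric signature (+,−,−,−)). This standard compact form is quoted here for orientation and anticipates the covariant approach described next.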
Manifestly covariant (tensor) approach Maxwell's equations are exactly consistent with special relativity—i.e., if they are valid in one inertial reference frame, then they are automatically valid in every other inertial reference frame. In fact, Maxwell's equations were crucial in the historical development of special relativity. However, in the usual formulation of Maxwell's equations, their consistency with special relativity is not obvious; it can only be proven by a laborious calculation. For example, consider a conductor moving in the field of a magnet. In the frame of the magnet, that conductor experiences a magnetic force. But in the frame of a conductor moving relative to the magnet, the conductor experiences a force due to an electric field. The motion is exactly consistent in these two different reference frames, but it mathematically arises in quite different ways. For this reason and others, it is often useful to rewrite Maxwell's equations in a way that is "manifestly covariant"—i.e. obviously consistent with special relativity, even with just a glance at the equations—using covariant and contravariant four-vectors and tensors. This can be done using the EM tensor F, or the 4-potential A, with the 4-current J. Differential forms approach Gauss's law for magnetism and the Faraday–Maxwell law can be grouped together since the equations are homogeneous, and be seen as geometric identities expressing the field F (a 2-form), which can be derived from the 4-potential A. Gauss's law for electricity and the Ampere–Maxwell law could be seen as the dynamical equations of motion of the fields, obtained via the Lagrangian principle of least action, from the "interaction term" AJ (introduced through gauge covariant derivatives), coupling the field to matter. For the field formulation of Maxwell's equations in terms of a principle of extremal action, see electromagnetic tensor. Often, the time derivative in the Faraday–Maxwell equation motivates calling this equation "dynamical", which is somewhat misleading in the sense of the preceding analysis. This is rather an artifact of breaking relativistic covariance by choosing a preferred time direction. To have physical degrees of freedom propagated by these field equations, one must include a kinetic term for A, and take into account the non-physical degrees of freedom that can be removed by gauge transformation . See also gauge fixing and Faddeev–Popov ghosts. Geometric calculus approach This formulation uses the algebra that spacetime generates through the introduction of a distributive, associative (but not commutative) product called the geometric product. Elements and operations of the algebra can generally be associated with geometric meaning. The members of the algebra may be decomposed by grade (as in the formalism of differential forms) and the (geometric) product of a vector with a k-vector decomposes into a -vector and a -vector. The -vector component can be identified with the inner product and the -vector component with the outer product. It is of algebraic convenience that the geometric product is invertible, while the inner and outer products are not. As such, powerful techniques such as Green's functions can be used. The derivatives that appear in Maxwell's equations are vectors and electromagnetic fields are represented by the Faraday bivector F. 
This formulation is as general as that of differential forms for manifolds with a metric tensor, as then these are naturally identified with r-forms and there are corresponding operations. Maxwell's equations reduce to one equation in this formalism. This equation can be separated into parts as is done above for comparative reasons. See also Ricci calculus Electromagnetic wave equation Speed of light Electric constant Magnetic constant Free space Near and far field Electromagnetic field Electromagnetic radiation Quantum electrodynamics List of electromagnetism equations Notes References (with worked problems in Warnick, Russer 2006 ) Electromagnetism Mathematical physics
Mathematical descriptions of the electromagnetic field
[ "Physics", "Mathematics" ]
5,158
[ "Electromagnetism", "Physical phenomena", "Applied mathematics", "Theoretical physics", "Fundamental interactions", "Mathematical physics" ]
9,861,379
https://en.wikipedia.org/wiki/Potentiometric%20titration
In analytical chemistry, potentiometric titration is a technique similar to direct titration of a redox reaction. It is a useful means of characterizing an acid. No indicator is used; instead the electric potential is measured across the analyte, typically an electrolyte solution. To do this, two electrodes are used: an indicator electrode (such as a glass electrode or a metal-ion indicator electrode) and a reference electrode. Reference electrodes generally used are hydrogen electrodes, calomel electrodes, and silver chloride electrodes. The indicator electrode forms an electrochemical half-cell with the ions of interest in the test solution. The reference electrode forms the other half-cell. The overall electric potential is calculated as Ecell = Eind − Eref + Esol, where Esol is the potential drop over the test solution between the two electrodes. Ecell is recorded at intervals as the titrant is added. A graph of potential against volume added can be drawn, and the end point of the reaction is taken as the midpoint of the jump in voltage. Ecell depends on the concentration of the ions of interest with which the indicator electrode is in contact. For example, the electrode reaction may be the reduction of a metal ion, M^n+ + n e− → M. As the concentration of M^n+ changes, Ecell changes correspondingly. Thus potentiometric titration involves measurement of Ecell as the titrant is added. Types of potentiometric titration include acid–base titration (total alkalinity and total acidity), redox titration (HI/HY and cerate), precipitation titration (halides), and complexometric titration (free EDTA and Antical #5). History The first potentiometric titration was carried out in 1893 by Robert Behrend at Ostwald's Institute in Leipzig. He titrated a mercurous nitrate solution with potassium chloride, potassium bromide, and potassium iodide. He used a mercury electrode along with a mercury/mercurous nitrate reference electrode. He found that in a cell composed of mercurous nitrate and mercurous nitrate/mercury, the initial voltage is 0. If potassium chloride is added to the mercurous nitrate on one side, mercury(I) chloride is precipitated. This decreases the osmotic pressure of mercury(I) ions on that side and creates a potential difference. This potential difference increases slowly as additional potassium chloride is added, but then increases more rapidly. He found the greatest potential difference is achieved once all of the mercurous nitrate has been precipitated. This was used to discern end points of titrations. Wilhelm Böttger then developed the tool of potentiometric titration while working at Ostwald's Institute. He used potentiometric titration to observe the differences in titration between strong and weak acids, as well as the behavior of polybasic acids. He introduced the idea of using potentiometric titrations for acids and bases that could not be titrated in conjunction with a colorimetric indicator. Potentiometric titrations were first used for redox titrations by Crotogino. He titrated halide ions with potassium permanganate using a shiny platinum electrode and a calomel electrode. He said that if an oxidizing agent is added to a reducing solution then the equilibrium between the reducing substance and reaction product will shift towards the reaction product. This changes the potential very slowly until the amount of reducing substance becomes very small. A large change in potential will occur then once a small addition of the titrating solution is added, as the final amounts of reducing agent are removed and the potential corresponds solely to the oxidizing agent.
This large increase in potential difference signifies the endpoint of the reaction. See also Chronoamperometry Electroanalytical methods References Titration
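As an illustration of how the end point is located in practice, the short sketch below finds the steepest point of a recorded titration curve by taking the first derivative of potential with respect to titrant volume. The volume and potential values are invented for the example, and the first-derivative method shown is only one of several common treatments (first derivative, second derivative, Gran plot).

# Locate the end point of a potentiometric titration as the volume at which
# dE/dV is largest (the inflection of the S-shaped curve).
# The volume/potential pairs below are invented, purely illustrative numbers.

volumes = [0.0, 2.0, 4.0, 6.0, 8.0, 9.0, 9.5, 10.0, 10.5, 11.0, 12.0, 14.0]            # mL of titrant
potentials = [0.20, 0.22, 0.24, 0.27, 0.31, 0.35, 0.40, 0.58, 0.72, 0.76, 0.79, 0.81]  # V vs. reference

# First derivative dE/dV estimated at the midpoint of each interval.
midpoints, slopes = [], []
for i in range(len(volumes) - 1):
    dv = volumes[i + 1] - volumes[i]
    de = potentials[i + 1] - potentials[i]
    midpoints.append(volumes[i] + dv / 2.0)
    slopes.append(de / dv)

# The end point is taken as the midpoint volume where the slope peaks.
steepest = max(range(len(slopes)), key=lambda i: slopes[i])
print(f"estimated end point: {midpoints[steepest]:.2f} mL "
      f"(max dE/dV = {slopes[steepest]:.2f} V/mL)")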
Potentiometric titration
[ "Chemistry" ]
779
[ "Instrumental analysis", "Titration" ]
1,681,984
https://en.wikipedia.org/wiki/Molecular%20sieve
A molecular sieve is a material with pores (voids or holes), having uniform size comparable to that of individual molecules, linking the interior of the solid to its exterior. These materials embody the molecular sieve effect, the preferential sieving of molecules larger than the pores. Many kinds of materials exhibit some molecular sieves, but zeolites dominate the field. Zeolites are almost always aluminosilicates, or variants where some or all of the Si or Al centers are replaced by similarly charged elements. Sieving process The diameters of the pores that comprise molecular sieves are similar in size to small molecules. Large molecules cannot enter or be adsorbed, while smaller molecules can. As a mixture of molecules migrates through the stationary bed of porous, semi-solid substance referred to as a sieve (or matrix), the components of the highest molecular weight (which are unable to pass into the molecular pores) leave the bed first, followed by successively smaller molecules. Most of molecular sieves are aluminosilicates (zeolites) with Si/Al molar ratio less than 2, but there are also examples of activated charcoal and silica gel. The pore diameter of a molecular sieve is measured in ångströms (Å) or nanometres (nm). According to IUPAC notation, microporous materials have pore diameters of less than 2 nm (20 Å) and macroporous materials have pore diameters of greater than 50 nm (500 Å); the mesoporous category thus lies in the middle with pore diameters between 2 and 50 nm (20–500 Å). The sieving properties of molecular sieves are classified as microporous (3-10 Å pores) including zeolite A, LTA, and FAU. Some clays, active carbon, and porous glass meet this criterion. mesoporous materials (<2 nm pores) macroporous materials (2–50 nm pores), e.g., in the form of Silicon dioxide (used to make silica gel): 24 Å (2.4 nm) Applications Some molecular sieves are used in size-exclusion chromatography, a separation technique that sorts molecules based on their size. Another important use is as a desiccant. They are often utilized in the petrochemical industry for drying gas streams. For example, in the liquid natural gas (LNG) industry, the water content of the gas needs to be reduced to less than 1 ppmv to prevent blockages caused by ice or methane clathrate. Laboratory use In the laboratory, molecular sieves are used to dry solvent. "Sieves" have proven to be superior to traditional drying techniques, which often employ aggressive desiccants. Under the term zeolites, molecular sieves are used for a wide range of catalytic applications. They catalyze isomerisation, alkylation, and epoxidation, and are used in large scale industrial processes, including hydrocracking and fluid catalytic cracking. They are also used in the filtration of air supplies for breathing apparatus, for example those used by scuba divers and firefighters. In such applications, air is supplied by an air compressor and is passed through a cartridge filter which, depending on the application, is filled with molecular sieve and/or activated carbon, finally being used to charge breathing air tanks. Such filtration can remove particulates and compressor exhaust products from the breathing air supply. FDA approval The U.S. FDA has as of April 1, 2012, approved sodium aluminosilicate for direct contact with consumable items under 21 CFR 182.2727. 
Prior to this approval the European Union had used molecular sieves with pharmaceuticals and independent testing suggested that molecular sieves meet all government requirements but the industry had been unwilling to fund the expensive testing required for government approval. Regeneration Methods for regeneration of molecular sieves include pressure change (as in oxygen concentrators), heating and purging with a carrier gas (as when used in ethanol dehydration), or heating under high vacuum. Regeneration temperatures range from to depending on molecular sieve type. In contrast, silica gel can be regenerated by heating it in a regular oven to for two hours. However, some types of silica gel will "pop" when exposed to enough water. This is caused by breakage of the silica spheres when contacting the water. Adsorption capabilities 3A Approximate chemical formula: ((K2O) (Na2O)) • Al2O3• 2 SiO2 • 9/2 H2O Silica-alumina ratio: SiO2/ Al2O3≈2 Production 3A molecular sieves are produced by cation exchange of potassium for sodium in 4A molecular sieves (See below) Usage 3A molecular sieves do not adsorb molecules with diameters are larger than 3 Å. The characteristics of these molecular sieves include fast adsorption speed, frequent regeneration ability, good crushing resistance and pollution resistance. These features can improve both the efficiency and lifetime of the sieve. 3A molecular sieves are the necessary desiccant in petroleum and chemical industries for refining oil, polymerization, and chemical gas-liquid depth drying. 3A molecular sieves are used to dry a range of materials, such as ethanol, air, refrigerants, natural gas and unsaturated hydrocarbons. The latter include cracking gas, acetylene, ethylene, propylene and butadiene. 3A molecular sieves are stored at room temperature, with a relative humidity not more than 90%. They are sealed under reduced pressure, being kept away from water, acids and alkalis. 4A Chemical formula: Na2O•Al2O3•2SiO2•9/2H2O Silicon-aluminium ratio: 1:1 (SiO2/ Al2O3≈2) Production For the production of 4A sieve, typically aqueous solutions of sodium silicate and sodium aluminate are combined at 80 °C. The product is "activated" by "heating" at 400 °C 4A sieves serve as the precursor to 3A and 5A sieves through cation exchange of sodium for potassium (for 3A) or calcium (for 5A) Uses The main use of zeolitic molecular sieves is in laundry detergents. In 2001, an estimated 1200 kilotons of zeolite A were produced for this purpose, which entails water softening. 4A molecular sieves are widely used to dry laboratory solvents. They can absorb water and other species with a critical diameter less than 4 Å such as NH3, H2S, SO2, CO2, C2H5OH, C2H6, and C2H4. Some molecular sieves are used to assist detergents as they can produce demineralized water through calcium ion exchange, remove and prevent the deposition of dirt. They are widely used to replace phosphorus. The 4A molecular sieve plays a major role to replace sodium tripolyphosphate as detergent auxiliary in order to mitigate the environmental impact of the detergent. It also can be used as a soap forming agent and in toothpaste. Other purposes The metallurgical industry: separating agent, separation, extraction of brine potassium, rubidium, caesium, etc. Petrochemical industry, catalyst, desiccant, adsorbent Agriculture: soil conditioner Medicine: load silver zeolite antibacterial agent. Morphology of molecular sieves Molecular sieves are available in diverse shape and sizes. 
Spherical beads have an advantage over other shapes, as they offer a lower pressure drop and are mechanically robust. See also Lime (mineral) Zeolite Notes References External links Sieves Put A Lid On Greenhouse Gas Molecular Sieve Safety About Molecular Sieves Filters Desiccants Vacuum Chemical process engineering Industrial gases Gas technologies
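To make the size-exclusion idea concrete, the sketch below checks which molecules a given sieve grade can take up by comparing the nominal pore size with approximate kinetic diameters. The diameters and the simple smaller-than-the-pore rule are rough, illustrative values only, since real uptake also depends on polarity, shape, and temperature.

# Rough illustration of molecular sieving: a molecule is assumed to be adsorbed
# only if its kinetic diameter is smaller than the sieve's nominal pore size.
# Pore sizes and kinetic diameters below are approximate, for illustration only.

pore_size_angstrom = {"3A": 3.0, "4A": 4.0, "5A": 5.0}
kinetic_diameter_angstrom = {
    "water": 2.6,
    "carbon dioxide": 3.3,
    "nitrogen": 3.6,
    "n-butane": 4.3,
}

def adsorbed(sieve: str, molecule: str) -> bool:
    """Return True if the molecule is small enough to enter the sieve's pores."""
    return kinetic_diameter_angstrom[molecule] < pore_size_angstrom[sieve]

for sieve in ("3A", "4A", "5A"):
    taken_up = [m for m in kinetic_diameter_angstrom if adsorbed(sieve, m)]
    print(sieve, "adsorbs:", ", ".join(taken_up))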
Molecular sieve
[ "Physics", "Chemistry", "Engineering" ]
1,651
[ "Chemical equipment", "Chemical engineering", "Vacuum", "Filters", "Desiccants", "Materials", "Industrial gases", "Filtration", "Chemical process engineering", "Matter" ]
1,682,105
https://en.wikipedia.org/wiki/Indicator%20%28distance%20amplifying%20instrument%29
In various contexts of science, technology, and manufacturing (such as machining, fabricating, and additive manufacturing), an indicator is any of various instruments used to accurately measure small distances and angles, and amplify them to make them more obvious. The name comes from the concept of indicating to the user that which their naked eye cannot discern; such as the presence, or exact quantity, of some small distance (for example, a small height difference between two flat surfaces, a slight lack of concentricity between two cylinders, or other small physical deviations). The classic mechanical version, called a dial indicator, provides a dial display similar to a clock face with clock hands; the hands point to graduations in circular scales on the dial which represent the distance of the probe tip from a zero setting. The internal works of a mechanical dial indicator are similar to the precision clockworks of a mechanical wristwatch, employing a rack and pinion gear to read the probe position, instead of a pendulum escapement to read time. The side of the indicator probe shaft is cut with teeth to provide the rack gear. When the probe moves, the rack gear drives a pinion gear to rotate, spinning the indicator "clock" hand. Springs preload the gear mechanism to minimize the backlash error in the reading. Precise quality of the gear forms and bearing freedom determines the repeatable precision of measurement achieved. Since the mechanisms are necessarily delicate, rugged framework construction is required to perform reliably in harsh applications such as machine tool metalworking operations, similar to how wristwatches are ruggedized. Other types of indicator include mechanical devices with cantilevered pointers and electronic devices with digital displays. Electronic versions employ an optical or capacitive grating to detect microscopic steps in the position of the probe. Indicators may be used to check the variation in tolerance during the inspection process of a machined part, measure the deflection of a beam or ring under laboratory conditions, as well as many other situations where a small measurement needs to be registered or indicated. Dial indicators typically measure ranges from 0.25 mm to 300 mm (0.015in to 12.0in), with graduations of 0.001 mm to 0.01 mm (metric) or 0.00005in to 0.001in (imperial/customary). Various names are used for indicators of different types and purposes, including dial gauge, clock, probe indicator, pointer, test indicator, dial test indicator, drop indicator, plunger indicator, and others. General classification There are several variables in dial indicators: Analog versus digital/electronic readout (most are analog) Dial size. Typically referred to be American Gauge Design Specification (AGD): {| class="wikitable" |- ! AGD !! Diameter range (in) !! Diameter range (mm) |- | 0 || 1- || 25-35 |- | 1 || -2 || 35-50 |- | 2 || 2- || 50-60 |- | 3 || -3 || 60-75 |- | 4 || 3- || 76-95 |} Accuracy Range of travel Number of dial revolutions Dial style: balanced (e.g., −15 to 0 to +15) or continuous (e.g., 0 to 30) Graduation style: positive numbers (clockwise) or negative numbers (counterclockwise) Revolution counters, which show the number of revolutions of the principal needle. Principles Indicators inherently provide relative measurement only. But given that suitable references are used (for example, gauge blocks), they often allow a practical equivalent of absolute measure, with periodic recalibration against the references. 
However, the user must know how to use them properly and understand how in some situations, their measurements will still be relative rather than absolute because of factors such as cosine error (discussed later). Applications In a quality environment to check for consistency and accuracy in the manufacturing process. On the workshop floor to initially set up or calibrate a machine, prior to a production run. By toolmakers (such as moldmakers) in the process of manufacturing precision tooling. In metal engineering workshops, where a typical application is the centering of a lathe's workpiece in a four jaw chuck. The dial indicator is used to indicate the run-out (the misalignment between the workpiece's axis of rotational symmetry and the axis of rotation of the spindle) of the workpiece, with the ultimate aim of reducing it to a suitably small range using small chuck jaw adjustments. In areas other than manufacturing where accurate measurements need to be recorded (e.g., physics). To check for lateral run-out when affixing a new rotor to an automotive disc brake. Lateral run-out (lack of perpendicularity between the disc surface and the shaft axis, caused by deformations or more frequently by a lack of proper cleaning of the mounting surface of hub. This run-out can produce brake pedal pulsations, vibration of the vehicle when brakes are applied and can induce uneven wear of the disc. The lateral run-out can be caused by uneven torque, damaged studs, or a burr or rust between the hub and rotor. This variation can be tested with a dial indicator, and most times the variation can be more or less cancelled by reinstalling the disc in other position, so that the tolerances of both the hub and the disc tend to cancel each other. To reduce the run-out, the disc is mounted and torqued to half the specified torque (as there is no wheel to distribute stresses) then a dial Indicator is placed against the braking surface and the face of the dial is centered, the disc is slowly rotated by hand and the maximum deviation is noted. If the maximum run-out is within the maximum allowed run-out specified in the manual, the disc can be installed at that position, but if the technician wants to minimize the total lateral run-out, other around the clock positions can be tried. Excessive run-out can rapidly ruin the disc if it exceeds the specified tolerance (typically up to but most discs can attain less than or less if installed at the optimum position). Probe indicator Probe indicators typically consist of a graduated dial and needle driven by a clockwork (thus the clock terminology) to record the minor increments, with a smaller embedded clock face and needle to record the number of needle rotations on the main dial. The dial has fine gradations for precise measurement. The spring-loaded probe (or plunger) moves perpendicularly to the object being tested by either retracting or extending from the indicator's body. The dial face can be rotated to any position, this is used to orient the face towards the user as well as set the zero point, there will also be some means of incorporating limit indicators (the two metallic tabs visible in the right image, at 90 and 10 respectively), these limit tabs may be rotated around the dial face to any required position. There may also be a lever arm available that will allow the indicator's probe to be retracted easily. Mounting the indicator may be done several ways. Many indicators have a mounting lug with a hole for a bolt as part of the back plate. 
Alternatively, the device can be held by the cylindrical stem that guides the plunger using a collet or special clamp, which is the method generally used by tools designed to integrate an indicator as a primary component, such as thickness gauges and comparators. Common outside diameters for the stem are 3/8 inch and 8 mm, though there are other diameters made. Another option that a few manufacturers include is dovetail mounts compatible with those on dial test indicators. Dial test indicator A dial test indicator, also known as a lever arm test indicator or finger indicator, has a smaller measuring range than a standard dial indicator. A test indicator measures the deflection of the arm, the probe does not retract but swings in an arc around its hinge point. The lever may be interchanged for length or ball diameter, and permits measurements to be taken in narrow grooves and small bores where the body of a probe type may not reach. The model shown is bidirectional, some types may have to be switched via a side lever to be able to measure in the opposite direction. These indicators actually measure angular displacement and not linear displacement; linear distance is correlated to the angular displacement based on the correlating variables. If the cause of movement is perpendicular to the finger, the linear displacement error is acceptably small within the display range of the dial. However, this error starts to become noticeable when this cause is as much as 10° off the ideal 90°. This is called cosine error, because the indicator is only registering the cosine of the movement, whereas the user likely is interested in the net movement vector. Cosine error is discussed in more detail below. Contact points of test indicators most often come with a standard spherical tip of 1, 2, or 3 mm diameter. Many are of steel (alloy tool steel or HSS); higher-end models are of carbides (such as tungsten carbide) for greater wear resistance. Other materials are available for contact points depending on application, such as ruby (high wear resistance) or Teflon or PVC (to avoid scratching the workpiece). These are more expensive and are not always available as OEM options, but they are extremely useful in applications that demand them. Modern dial test indicators are usually mounted using either an integrated stem (on the right of the image) or by a special clamp that grabs a dovetail on the indicator body. Some instruments may use special holders. Test indicator Prior to modern geared dial mechanisms, test indicators using a single lever or systems of levers were common. The range and precision of these devices were generally inferior to modern dial type units, with a range of 10/1000 inch to 30/1000 inch, and precision of 1/1000 inch being typical. One common single lever test indicator was the Starrett (No. 64), and those using systems of levers for amplification were made by companies such as Starrett (No. 564) and Lufkin (No. 199A), as well as smaller companies like Ideal Tool Co. Devices that could be used as either a lever test indicator or a plunger type were also manufactured by Koch. Digital indicator With the advent of electronics, the clock face (dial) has been replaced in some indicators with digital displays (usually LCDs) and the clockwork has been replaced by linear encoders. Digital indicators have some advantages over their analog predecessors. Many models of digital indicator can record and transmit the data electronically to a computer, through an interface such as RS-232 or USB. 
This facilitates statistical process control (SPC), because a computer can record the measurement results in a tabular dataset (such as a database table or spreadsheet) and interpret them (by performing statistical analysis on them). This obviates manual recording of long columns of numbers, which not only reduces the risk of the operator introducing errors (such as digit transpositions) but also greatly improves the productivity of the process by freeing the human from time-consuming data recording and copying tasks. Another advantage is that they can be switched between metric and inch units with the press of a button, thus obviating a separate unit conversion step of entering into a calculator or web browser and then recording the results. Contact point (tip) types Plunger (drop) indicator tips On drop indicators, the tip of the probe usually may be interchanged with a range of shapes and sizes depending on the application. The tips typically are attached with either a #4-48 or an M2.5 screw thread. Spherical tips are often used to give point contact. Cylindrical and flat tips are also used as the need arises. Needle-shaped tips allow the tip to enter a small hole or slot. Accessory sets of tips are sold separately and inexpensively, so that even indicators that have no set of tips may be augmented with a new set. Dial test indicator tips Dial test indicators, whose tips swing in an arc rather than plunging linearly, usually have spherical tips. This shape gives point contact, allowing for consistent measurements as the tip moves through its arc (via consistent offset distance from ball surface to center point, regardless of ball contact angle with the measured surface). Several spherical diameters are commercially offered; 1mm, 2mm, and 3mm are the standard sizes. Despite the advantage just mentioned (regarding contact angle irrelevance) of the ball (sphere) itself, the contact angle of the lever overall does matter. On most DTIs it must be parallel (0°, 180°) to the surface being measured in order for the measurement to be truly accurate, that is, for the magnitude of the dial reading to reflect the true tip movement distance without cosine error. In other words, the path of the tip's movement must coincide with the vector that is being measured; otherwise, only the cosine of the vector is being measured (yielding the error called cosine error). In such cases the indicator may still be useful, but an offset (multiplier or correction factor) must be applied to achieve a correct measurement (where the measurement is absolute rather than merely comparative). (This fact applies to the angle between the lever and the part, not to the angle between the lever and the DTI body, which is adjustable on most DTIs.) The same principle is also employed with CMM touch trigger probes (TTPs), where the machine (when used correctly) adjusts its ball-offset compensation to account for any difference between the approach vector and the surface vector. Some DTIs (such as the Interapid line and its competitors) are made with a built-in allowance such that a 12° tip angle (between the lever and the surface being measured) is the angle that corresponds to zero cosine error. This is a great convenience to the user because of the practicality of having the ball being clear of the indicator body such that the unit may pass over a surface. 
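The size of the cosine error is easy to quantify. The sketch below applies the correction commonly published for lever-type indicators, multiplying the observed reading by the cosine of the angle between the stylus and the measured surface, with 0° taken as the no-error attitude; for indicators built around a non-zero design angle, such as the 12° case described above, the manufacturer's own correction table should be used instead. The numbers are illustrative only.

import math

def corrected_reading(observed: float, angle_deg: float) -> float:
    """Apply the usual cosine-error correction to a lever-type indicator reading.

    observed  -- value shown on the dial (any length unit)
    angle_deg -- angle between the stylus and the measured surface,
                 measured from the indicator's intended (parallel) attitude
    """
    return observed * math.cos(math.radians(angle_deg))

# Example: a 0.010 mm reading taken with the stylus progressively further off attitude.
for angle in (0, 10, 20, 30):
    print(f"{angle:2d} deg off ideal -> true movement ~ {corrected_reading(0.010, angle):.4f} mm")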
Changing the tip of a DTI is not as simple an affair as changing the tip of a drop indicator, because the tip, being a lever, has its length precisely matched to the clockwork inside the indicator, so that the length of the arc of its extremity's movement has a known ratio to the gears that drive the dial's needle. Thus to add a longer or shorter tip requires a correction factor to be multiplied with the dial reading in order to yield a true distance reading. DTI tips are often threaded for interchange (like drop indicator tips), with small flats to accept a spanner; but the intent regarding user-serviceable tip change is limited only to the tips that originally came with the indicator, because of the above-mentioned importance of the length. Typically a DTI comes with only a few tips, such as a small-ball tip and a large-ball tip. Neither of the above considerations (cosine error or lever length error) matters if the dial reading is being used only comparatively (rather than absolutely). But the avoidance of mistakes of the comparative-versus-absolute-confounding type rests with the knowledge and attention of the user, rather than with the instrument itself, and thus repairers of DTIs usually will not certify the accuracy of a DTI that cannot offer an accurate absolute measurement—even if it is perfectly good for comparative use alone. Such a DTI could still be certified (and labeled) for comparative use only, but because risk of user error is involved, gauge calibration rules in machine shops either demand a "comparative use only" label (if the users can be trusted to understand and follow it) or demand that the indicator be removed from service (if not). See also Indicator diagram, a pressure-volume diagram measured on a piston engine References Further reading External links Practical metal and woodworking applications Simulator - dial indicator in millimeter with graduation of 0.01mm Dimensional instruments Metalworking measuring instruments
Indicator (distance amplifying instrument)
[ "Physics", "Mathematics" ]
3,330
[ "Quantity", "Dimensional instruments", "Physical quantities", "Size" ]
1,683,711
https://en.wikipedia.org/wiki/Hermetic%20seal
A hermetic seal is any type of sealing that makes a given object airtight (preventing the passage of air, oxygen, or other gases). The term originally applied to airtight glass containers but, as technology advanced, it applied to a larger category of materials, including metals, rubber, and plastics. Hermetic seals are essential to the correct and safe functionality of many electronic and healthcare products. Used technically, it is stated in conjunction with a specific test method and conditions of use. Colloquially, the exact requirements of such a seal varies with the application. Etymology The word hermetic comes from the Greek god Hermes. A hermetic seal comes from alchemy in the tradition of Hermeticism. The legendary Hermes Trismegistus supposedly invented the process of making a glass tube airtight using a secret seal. Uses Some kinds of packaging must maintain a seal against the flow of gases, for example, packaging for some foods, pharmaceuticals, chemicals, and consumer goods. The term can describe the result of some food preservation practices, such as vacuum packing and canning. Packaging materials include glass, aluminum cans, metal foils, and gas-impermeable plastics. Some buildings designed with sustainable architecture principles may use airtight technologies to conserve energy. Green buildings may include windows that combine triple-pane insulated glazing with argon or krypton gas to reduce thermal conductivity and increase efficiency. In landscape and exterior construction projects, airtight seals may be used to protect general services and landscape lighting electrical connections/splices. Airtight implies both waterproof and vapor-proof. Applications for hermetic sealing include semiconductor electronics, thermostats, optical devices, MEMS, and switches. Electrical or electronic parts may be hermetic sealed to secure against water vapor and foreign bodies to maintain proper functioning and reliability. Hermetic sealing for airtight conditions is used in archiving significant historical items. In 1951, The U.S. Constitution, U.S. Declaration of Independence, and U.S. Bill of Rights were hermetically sealed with helium gas in glass cases housed in the U.S. National Archives in Washington, D.C.. In 2003, they were moved to new glass cases hermetically sealed with argon. In the funeral industry, some caskets and burial vaults are hermetically sealed by a rubber seal and being locked. Types of epoxy hermetic seals Typical epoxy resins have pendant hydroxyl (-OH) groups along their chain that can form bonds or strong polar attractions to oxide or hydroxyl surfaces. Most inorganic surfaces—i.e., metals, minerals, glasses, ceramics—have polarity so they have high surface energy. The important factor in determining good adhesive strength is whether the surface energy of the substrate is close to or higher than the surface energy of the cured adhesive. Certain epoxy resins and their processes can create a hermetic bond to copper, brass, stainless steel, specialty alloys, plastic, or epoxy itself with similar coefficients of thermal expansion, and are used in the manufacture of hermetic electrical and fiber optic hermetic seals. Epoxy-based seals can increase signal density within a feedthrough design compared to other technologies with minimal spacing requirements between electrical conductors. 
Epoxy hermetic seal designs can be used in hermetic seal applications for low or high vacuum or pressures, effectively sealing gases or fluids including helium gas to very low helium leak rates similar to glass or ceramic. Hermetic epoxy seals also offer the design flexibility of sealing either copper alloy wires or pins instead of the much less electrically conductive Kovar pin materials required in glass or ceramic hermetic seals. With a typical operating temperature range of −70 °C to +125 °C or 150 °C, epoxy hermetic seals are more limited in comparison to glass or ceramic seals, although some hermetic epoxy designs are capable of withstanding 200 °C. Types of glass-to-metal hermetic seals When the glass and the metal being hermetically sealed have the same coefficient of thermal expansion, a "matched seal" derives its strength from the bond between the glass and the metal's oxide. This type of glass-to-metal hermetic seal is generally used for low-intensity applications such as in light bulb bases. "Compression seals" occur when the glass and the metal have different coefficients of thermal expansion such that the metal compresses around the solidified glass as it cools. Compression seals can withstand very high pressure and are used in a variety of industrial applications. Compared to epoxy hermetic seals, glass-to-metal seals can be operated at much higher temperatures (250 °C for compression seals, 450 °C for matched seals). The material selection is, however, more limited due to thermal expansion constraints. The sealing process is performed at roughly 1000 °C in an inert or reducing atmosphere to prevent discoloration of the parts. Ceramic-to-metal hermetic seals Co-fired ceramic seals are an alternative to glass. Ceramic seals exceed the design limits of glass-to-metal seals due to superior hermetic performance in high-stress environments requiring a robust seal. Choosing between glass and ceramic depends on the application, weight, thermal solution, and material requirements. Glassware sealing Sealing solids Glass taper joints can be sealed hermetically with PTFE sealing rings (high-vacuum tight, with air leakage rates of 10⁻⁶ mbar·L/s and below), o-rings (optionally encapsulated o-rings), or PTFE sleeves, sometimes used instead of grease, which can dissolve and contaminate the contents. PTFE tape, PTFE resin string, and wax are other alternatives that are finding widespread use, but require a little care when winding onto the joint to ensure a good seal is produced. Grease A thin layer of grease made for this application can be applied to the ground glass surfaces to be connected, and the inner joint is inserted into the outer joint such that the ground glass surfaces of each are next to each other to make the connection. In addition to making a leak-tight connection, the grease lets the two joints be separated more easily later. A potential drawback of such grease is that if used on laboratory glassware for a long time in high-temperature applications (such as for continuous distillation), the grease may eventually contaminate the chemicals. Also, reagents may react with the grease, especially under vacuum. For these reasons, it is advisable to apply a light ring of grease at the fat end of the taper and not its tip, to keep it from going inside the glassware. If the grease smears over the entire taper surface on mating, too much was used.
Using greases specifically designed for this purpose is also a good idea: they are often better at sealing under vacuum; they are thicker, and so less likely to flow out of the taper; they only become fluid at higher temperatures than Vaseline (a common substitute); and they are more chemically inert than other substitutes. Cleaning Ground glass joints are translucent when clean and free of debris. Solvents, reaction mixtures, and old grease appear as transparent spots. Grease can be removed by wiping with an appropriate solvent; ethers, methylene chloride, ethyl acetate, or hexanes work well for silicone- and hydrocarbon-based greases. Fluoroether-based greases are quite impervious to organic solvents. Most chemists simply wipe them off as much as possible. Some fluorinated solvents can remove fluoroether greases, but are costlier than common laboratory solvents. Testing Standard test methods are available for measuring the moisture vapor transmission rate, oxygen transmission rate, etc. of packaging materials. Completed packages, however, involve heat seals, joints, and closures that often reduce the effective barrier of the package. For example, the glass of a glass bottle may have an effective total barrier, but the screw cap closure and the closure liner might not. See also Building insulation Glass-to-metal seal Heat sealer Moisture vapor transmission rate Permeation Notes Seals (mechanical) Hermes Trismegistus
Hermetic seal
[ "Physics" ]
1,687
[ "Seals (mechanical)", "Materials", "Matter" ]
1,683,751
https://en.wikipedia.org/wiki/1-Butyl-3-methylimidazolium%20tetrachloroferrate
1-Butyl-3-methylimidazolium tetrachloroferrate is a magnetic ionic liquid. It can be obtained from 1-butyl-3-methylimidazolium chloride and ferric chloride. It has quite low water solubility. Due to the presence of the high-spin [FeCl4]− anion, the liquid is paramagnetic, and a magnetic susceptibility of 40.6 × 10⁻⁶ emu·g⁻¹ has been reported. A simple small neodymium magnet suffices to attract the liquid in a test tube. References Magnetism Ionic liquids Imidazolium compounds Ferrates Iron(III) compounds Iron complexes Chlorometallates
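As a rough illustration of why the [FeCl4]− anion makes the liquid paramagnetic, the short calculation below converts the reported mass susceptibility into an effective magnetic moment and compares it with the spin-only value for high-spin Fe(III). This is a back-of-the-envelope sketch, not a result from the article: it assumes simple Curie-law behavior at room temperature and uses an approximate molar mass computed for [C8H15N2][FeCl4].

```python
# Illustrative calculation (not from the source article): convert the reported
# mass susceptibility to an effective moment per formula unit, assuming
# Curie-law behavior at room temperature.
import math

chi_mass = 40.6e-6      # emu/g, mass susceptibility quoted above
molar_mass = 336.9      # g/mol, approximate value computed for [C8H15N2][FeCl4]
temperature = 298.0     # K, assumed room temperature

chi_molar = chi_mass * molar_mass                       # emu/mol
mu_eff = 2.828 * math.sqrt(chi_molar * temperature)     # effective moment in Bohr magnetons

n_unpaired = 5                                          # high-spin Fe(III), d5
spin_only = math.sqrt(n_unpaired * (n_unpaired + 2))    # spin-only moment, sqrt(n(n+2))

print(f"mu_eff ≈ {mu_eff:.2f} μB")                      # ≈ 5.7 μB
print(f"spin-only value ≈ {spin_only:.2f} μB")          # ≈ 5.92 μB
```

Under these assumptions the result, roughly 5.7 μB, is close to the 5.92 μB expected for five unpaired electrons, consistent with a high-spin d5 assignment for the iron center.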
1-Butyl-3-methylimidazolium tetrachloroferrate
[ "Chemistry" ]
148
[ "Salts", "Organic compounds", "Ferrates", "Organic compound stubs", "Organic chemistry stubs" ]
1,684,561
https://en.wikipedia.org/wiki/Method%20of%20loci
The method of loci is a strategy for memory enhancement, which uses visualizations of familiar spatial environments in order to enhance the recall of information. The method of loci is also known as the memory journey, memory palace, journey method, memory spaces, or mind palace technique. This method is a mnemonic device adopted in ancient Roman and Greek rhetorical treatises (in the anonymous Rhetorica ad Herennium, Cicero's De Oratore, and Quintilian's Institutio Oratoria). Many memory contest champions report using this technique to recall faces, digits, and lists of words. It is the term most often found in specialised works on psychology, neurobiology, and memory, though it was used in the same general way at least as early as the first half of the nineteenth century in works on rhetoric, logic, and philosophy. John O'Keefe and Lynn Nadel refer to:... "the method of loci", an imaginal technique known to the ancient Greeks and Romans and described by Yates (1966) in her book The Art of Memory as well as by Luria (1969). In this technique the subject memorizes the layout of some building, or the arrangement of shops on a street, or any geographical entity which is composed of a number of discrete loci. When desiring to remember a set of items the subject 'walks' through these loci in their imagination and commits an item to each one by forming an image between the item and any feature of that locus. Retrieval of items is achieved by 'walking' through the loci, allowing the latter to activate the desired items. The efficacy of this technique has been well established (Ross and Lawrence 1968, Crovitz 1969, 1971, Briggs, Hawkins and Crovitz 1970, Lea 1975), as is the minimal interference seen with its use. The items to be remembered in this mnemonic system are mentally associated with specific physical locations. The method relies on memorized spatial relationships to establish order and recollect memorial content. It is also known as the "Journey Method", used for storing lists of related items, or the "Roman Room" technique, which is most effective for storing unrelated information. Contemporary usage Many effective memorisers today resort to the "method of loci" to some degree. Contemporary memory competition, in particular the World Memory Championship, was initiated in 1991 and the first United States championship was held in 1997. Part of the competition requires committing to memory and recalling a sequence of digits, two-digit numbers, alphabetic letters, or playing cards. In a simple method of doing this, contestants, using various strategies well before competing, commit to long-term memory a unique vivid image associated with each item. They have also committed to long-term memory a familiar route with firmly established stop-points or loci. Then in the competition they need only deposit the image that they have associated with each item at the loci. To recall, they retrace the route, "stop" at each locus, and "observe" the image. They then translate this back to the associated item. For example, Ed Cooke, a Grand Master of Memory, describes to Josh Foer in his book Moonwalking with Einstein how he uses the method of loci. First, he describes a very familiar location where he can clearly remember many different smaller locations like his sink in his childhood home or his dog's bed. Cooke also advises that the more outlandish and vulgar the symbol used to memorize the material, the more likely it will stick. Memory champions elaborate on this by combining images. 
Eight-time World Memory Champion Dominic O'Brien uses this technique. The 2006 World Memory Champion, Clemens Mayer, used a 300-point-long journey through his house for his world record in "number half marathon", memorising 1,040 random digits in a half-hour. An anonymous individual has used the method of loci to memorise pi to over 65,536 (2¹⁶) digits. The technique is taught as a metacognitive technique in learning-to-learn courses. It is generally applied to encoding the key ideas of a subject. Two approaches are: (1) link the key ideas of a subject and then deep-learn those key ideas in relation to each other, and (2) think through the key ideas of a subject in depth, re-arrange the ideas in relation to an argument, then link the ideas to loci in good order. The method of loci has also been shown to help those with depression remember positive, self-affirming memories. A study at the University of Maryland evaluated participants' ability to accurately recall two sets of familiar faces, once using a traditional desktop display and once using a head-mounted display. The study was designed to utilize the method of loci technique, with virtual environments resembling memory palaces. The study found an 8.8% recall improvement in favor of the head-mounted display, in part due to participants being able to employ their vestibular and proprioceptive sensations. Method The Rhetorica ad Herennium and most other sources recommend that the method of loci should be integrated with other forms of elaborative encoding (i.e., adding visual, auditory, or other details) to strengthen memory. However, due to the strength of spatial memory, simply mentally placing objects in real or imagined locations without further elaboration can be effective for simple associations. A variation of the "method of loci" involves creating imaginary locations (houses, palaces, roads, and cities) to which the same procedure is applied. It is accepted that there is a greater cost involved in the initial setup, but thereafter the performance is in line with the standard loci method. The purported advantage is to create towns and cities that each represent a topic or an area of study, thus offering an efficient filing of the information and an easy path for the regular review necessary for long-term memory storage. Something that is likely a reference to the "method of loci" techniques survives to this day in the common English phrases "in the first place", "in the second place", and so forth. The technique is also used for second-language vocabulary learning, as polyglot Timothy Doner described in his 2014 TED talk. Applicability of the term The designation is not used with strict consistency. In some cases it refers broadly to what is otherwise known as the art of memory, the origins of which are related, according to tradition, in the story of Simonides of Ceos and the collapsing banquet hall. For example, after relating the story of how Simonides relied on remembered seating arrangements to call to mind the faces of recently deceased guests, Stephen M. Kosslyn remarks "[t]his insight led to the development of a technique the Greeks called the method of loci, which is a systematic way of improving one's memory by using imagery." Skoyles and Sagan indicate that "an ancient technique of memorization called Method of Loci, by which memories are referenced directly onto spatial maps" originated with the story of Simonides.
Referring to mnemonic methods, Verlee Williams mentions, "One such strategy is the 'loci' method, which was developed by Simonides, a Greek poet of the fifth and sixth centuries BC." Loftus cites the foundation story of Simonides (more or less taken from Frances Yates) and describes some of the most basic aspects of the use of space in the art of memory. She states, "This particular mnemonic technique has come to be called the "method of loci". While place or position certainly figured prominently in ancient mnemonic techniques, no designation equivalent to "method of loci" was used exclusively to refer to mnemonic schemes relying upon space for organization. In other cases the designation is generally consistent, but more specific: "The Method of Loci is a Mnemonic Device involving the creation of a Visual Map of one's house." This term can be misleading: the ancient principles and techniques of the art of memory, hastily glossed in some of the works, cited above, depended equally upon images and places. The designator "method of loci" does not convey the equal weight placed on both elements. Training in the art or arts of memory as a whole, as attested in classical antiquity, was far more inclusive and comprehensive in the treatment of this subject. Spatial mnemonics and specific brain activation Brain scans of "superior memorizers", 90% of whom use the method of loci technique, have shown that it involves activation of regions of the brain involved in spatial awareness, such as the medial parietal cortex, retrosplenial cortex, and the right posterior hippocampus. The medial parietal cortex is most associated with encoding and retrieving of information. Patients who have medial parietal cortex damage have trouble linking landmarks with certain locations; many of these patients are unable to give or follow directions and often get lost. The retrosplenial cortex is also linked to memory and navigation. In one study on the effects of selective granular retrosplenial cortex lesions in rats, the researcher found that damage to the retrosplenial cortex led to impaired spatial learning abilities. Rats with damage to this area failed to recall which areas of the maze they had already visited, rarely explored different arms of the maze, almost never recalled the maze in future trials, and took longer to reach the end of the maze, as compared to rats with a fully working retrosplenial cortex. In a classic study in cognitive neuroscience, O'Keefe and Nadel proposed "that the hippocampus is the core of a neural memory system providing an objective spatial framework within which the items and events of an organism's experience are located and interrelated." This theory has generated considerable debate and further experiment. It has been noted that "[t]he hippocampus underpins our ability to navigate, to form and recollect memories, and to imagine future experiences. How activity across millions of hippocampal neurons supports these functions is a fundamental question in neuroscience, wherein the size, sparseness, and organization of the hippocampal neural code are debated." In a more recent study, memory champions during resting periods did not exhibit specific regional brain differences, but distributed functional brain network connectivity changes compared to control subjects. When volunteers trained use of the method of loci for six weeks, the training-induced changes in brain connectivity were similar to the brain network organization that distinguished memory champions from controls. 
Fictional portrayals Fictional portrayals of the method of loci extend as far back as ancient Greek myths. In the novels Hannibal (1999) and Hannibal Rising (2006), by Thomas Harris, a detailed description of Hannibal Lecter's memory palace is provided: "We catch up to him as the swift slippers of his mind pass from the foyer into the Great Hall of the Seasons. The palace is built according to the rules discovered by Simonides of Ceos and elaborated by Cicero four hundred years later; it is airy, high-ceilinged, furnished with objects and tableaux that are vivid, striking, sometimes shocking and absurd, and often beautiful. The displays are well spaced and well lighted like those of a great museum. [...] On the floor before the painting is this tableau, life-sized in painted marble. A parade in Arlington National Cemetery led by Jesus, thirty-three, driving a '27 Model-T Ford truck, a "Tin Lizzie", with J. Edgar Hoover standing in the truck bed wearing a tutu and waving to an unseen crowd. Marching behind him is Clarice Starling carrying a .308 Enfield rifle at shoulder arms." In the first episode of Bordertown (2016), detective Kari Sorjonen explains the memory palace concept, and, throughout the series, he marks rectangles with tape on his basement floor where he stands to imagine himself at various significant loci in a case, organized into memory palaces. The television series The Mentalist, which premiered in late 2008, mentions memory palaces on multiple occasions. The main character Patrick Jane claims to use a memory palace to memorise cards and gamble successfully. In the eleventh episode of season two, Jane teaches his colleague Wayne Rigsby how to construct a memory palace, explaining that they are good for memorising large chunks of information at a time. In "The Reunion Job", Episode 2 of Season 3 of the television show Leverage, the criminal team must "hack" the Roman Room of a tech giant, as he has created a memory palace out of his senior year of high school to remember his passwords. In the 2003 film Dreamcatcher, the character Jonesy has an elaborate memory palace which plays a major role in the plot and is shown several times in the film, depicted as a physical building that Jonesy walks through as a way to represent him accessing the memories. In the BBC television series Sherlock, which premiered in 2010, the title character uses mind palaces to remember various things throughout the show. In Hilary Mantel's 2009 novel Wolf Hall, the fictionalized version of Thomas Cromwell describes "memory palace" techniques and his use of them. In the 2017 medical drama The Good Doctor, series protagonist Shaun Murphy uses the method of loci to figure out various medical diagnoses. In the 2020 video game The Sinking City, the main character Charles Reed is a detective who keeps points of interest in a mind palace menu. In the 2020 video game Twin Mirror, the main character Sam Higgs uses the mind palace at various points in the game to relive memories and investigate. In the 2023 video game Alan Wake II, FBI Agent Saga Anderson uses an adapted version, which she calls the "Mind Place", throughout the story to review cases and associated evidence. See also Catherine of Siena's "inner cell" Mental image Spatial memory Citations General and cited references Dann, Jack (1995). The Memory Cathedral: A Secret History of Leonardo da Vinci. Bantam Books. ISBN 0553378570. Dresler, Martin, et al. "Mnemonic Training Reshapes Brain Networks to Support Superior Memory", Neuron, 8 March 2017.
Cognitive training Conceptual models Dialectic Learning methods Mnemonics Rhetoric Spatial cognition
Method of loci
[ "Physics" ]
2,933
[ "Spacetime", "Space", "Spatial cognition" ]
1,684,586
https://en.wikipedia.org/wiki/Schools%20Interoperability%20Framework
The Schools Interoperability Framework, Systems Interoperability Framework (UK), or SIF, is a data-sharing open specification for academic institutions from kindergarten through workforce. This specification is being used primarily in the United States, Canada, the UK, Australia, and New Zealand; however, it is increasingly being implemented in India and elsewhere. The specification comprises two parts: an XML specification for modeling educational data, which is specific to the educational locale (such as North America, Australia or the UK), and a service-oriented architecture (SOA) based on both direct and brokered RESTful models for sharing that data between institutions, which is international and shared between the locales. SIF is not a product, but an industry initiative that enables diverse applications to interact and share data. SIF was estimated to have been used in more than 48 US states and 6 countries, supporting five million students. The specification was started and maintained by its specification body, the Schools Interoperability Framework Association, renamed the Access For Learning Community (A4L) in 2015. History Traditionally, the standalone applications used by public school districts have the limitation of data isolation; that is, it is difficult to access and share their data. This often results in redundant data entry, data integrity problems, and inefficient or incomplete reporting. In such cases, a student's information can appear in multiple places but may not be identical, for example, or decision makers may be working with incomplete or inaccurate information. Many district and site technology coordinators also experience an increase in technical support problems from maintaining numerous proprietary systems. SIF was created to solve these issues. The Schools Interoperability Framework (SIF) began as an initiative initially championed chiefly by Microsoft to create "a blueprint for educational software interoperability and data access." It was designed to be an initiative drawing upon the strengths of the leading vendors in the K-12 market to enable schools' IT professionals to build, manage and upgrade their systems. It was endorsed by close to 20 leading K-12 vendors of student information, library, transportation, food service applications and more. The first pilot sites began in the summer of 1999, and the first SIF-based products began to show up in 2000. In the beginning it was not clear which approach would become the national standard in the United States. Both SIF and EDI were vying for the position in 2000, but SIF began taking the lead in 2002 or so. In 2000, the National School Boards Association held a panel discussion on the topic of SIF during its annual meeting. In 2007, Becta championed the adoption of SIF as a national standard for schools data interchange in the United Kingdom. In 2008 it was announced that in the UK the standard would become known as the "Systems Interoperability Framework". This reflects the intention in the UK to develop SIF to be used in other organizations beyond just schools. Members The SIF specification is supported by the A4L community. A4L members collaborate on a variety of technical solutions and standards which include but are not limited to the Schools Interoperability Framework. Members include districts, states, vendors, non-profits, and various government agencies. Criticism SIF has all the pains and challenges that come with any SOA specification and data model.
When specifications are built by consensus, not everyone is always satisfied, and the end product is not always perfect. Also, given all the moving parts involved in modeling the entire K-12 enterprise, the specification has many possible points of failure. This is not particular to SIF but to any record-level, automated system moving standardized data from one source to another in a heterogeneous environment. Out-of-the-box interoperability and ease of use and implementation were part of a 12-18 month focus from 2007 through 2009. How SIF Works SIF 2.x relied on using a broker called a Zone Integration Server (ZIS) to manage communication between applications. SIF 3.x and SIF 2.8+ allow for both brokered and direct communication between applications. Brokered Rather than have each application vendor try to set up a separate connection to every other application, SIF has defined the set of rules and definitions to share data within a "SIF Zone" (or Environment), which is a logical grouping of applications in which software application agents communicate with each other through a central communication point. Zones are managed by an enterprise data broker sometimes called a Zone Integration Server (ZIS). A single ZIS can manage multiple Zones. However, the current infrastructure specification supports RESTful connections directly between applications and/or through a brokered environment. Data travels between applications as a series of standardized messages, queries, and events written in XML or JSON and sent using Internet protocols. The SIF specification defines such events and the "choreography" that allows data to move back and forth between the applications. Direct Direct SIF allows one application to communicate directly with another via simple REST calls to PUT, POST, GET, or DELETE resources (an illustrative sketch of such an exchange is given below). This is ideal for simple environments with two or maybe three players where complex choreographies are not necessary. It is easier to implement than a brokered environment in two- or three-node situations. Interface Code SIF Agents are pieces of software that exist either internal to an application or installed next to it. The SIF Agents function as extensions of each application and serve as the intermediary between the software application and the SIF Zone. In brokered environments, the broker keeps track of the Agents registered in the environment and its Zones and manages transactions between Agents, enabling them to provide data and respond to requests. The broker controls all access, routing, and security within the system. Standardization of the behavior of the Agents and the broker means that SIF can add standard functionality to a Zone by simply adding SIF-enabled applications over time. Vertical interoperability "Vertical interoperability" is a situation in which SIF agents at different levels of an organization communicate using a SIF Zone. Vertical interoperability involves data collection from multiple agents (upward) or publishing of information to multiple agents (downward). For example, a state-level data warehouse may listen for changes in district-level data warehouses and update its database accordingly. Or a state entity may wish to publish teacher certification data to districts. The three pieces of the SIF specification that deal directly with vertical interoperability are the Student Locator object, the Vertical Reporting object, and the Data Warehouse object.
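As a purely illustrative sketch of the direct exchange style described above, the fragment below shows how one consuming application might fetch and create records from a providing application over plain REST. The endpoint path, resource name, and payload fields are hypothetical placeholders introduced here for illustration; they are not taken from the SIF specification, and a real implementation would follow the data model, authentication, and choreography rules of the relevant SIF infrastructure and locale.

```python
# Hypothetical sketch of a "direct" RESTful exchange between two SIF-style
# applications. URLs, resource names, and payload fields are placeholders.
import requests

BASE_URL = "https://sis.example.org/api"   # assumed provider endpoint
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

def get_students(session: requests.Session) -> list:
    """GET a collection resource from the provider application."""
    response = session.get(f"{BASE_URL}/students", headers=HEADERS, timeout=10)
    response.raise_for_status()
    return response.json()

def create_student(session: requests.Session, record: dict) -> dict:
    """POST a new object to the provider application."""
    response = session.post(f"{BASE_URL}/students", json=record, headers=HEADERS, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    with requests.Session() as session:
        # Field names below are illustrative only.
        new_record = {"localId": "12345", "givenName": "Alex", "familyName": "Smith"}
        created = create_student(session, new_record)
        print("created:", created)
        print("all students:", get_students(session))
```

In a brokered deployment, the same kinds of payloads would instead be routed through the ZIS (or other broker), which handles registration, access control, and event delivery as described above.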
A good example of this is the Century Consultants SIS Agent working with the Pearson SLF Agent, sending student data to the state agency and receiving Student Testing Identifiers in return. SIF in relation to other standards SIF was designed before REST, SOAP, namespaces, and web service standards were as mature as they are today. As a result, it has a robust SOA that is more vetted than the current SOAP specifications but does not use the SOAP or WS standards. The 2.0 SIF Web Services specification began the process of joining these two worlds, and the 3.0 Infrastructure specification completes the transformation to a SOA specification using modern tools. The 2.0 Web Services specification allows for more generalized XML messaging structures typically found in enterprise messaging systems that use the concept of an enterprise service bus. Web service standards are also designed to support secure public interfaces, and XML appliances can make the setup and configuration easier. The SIF 2.0 Web Services specification allows for the use of Web Services to communicate in and out of the Zone. The 3.0 Infrastructure allows any data payload to be moved across it and is designed around RESTful design patterns. It allows both brokered and direct exchanges in a RESTful manner utilizing either XML or JSON payloads. CEDS Starting with SIF 3.0, the SIF Specification relies entirely (unless impossible or not practical) on the Common Education Data Standards (CEDS) for its controlled vocabulary and element definitions. This allows it to transport CEDS over the wire and be compatible with other CEDS-compliant data sets. LISS (Australia) A similar standard, LISS, supports vendor integration 'within' a school site. This overcomes some limitations where a school has elected to use a Zone Integration Server (not a requirement in SIF 3.x implementations). LISS (the Lightweight Interoperability Standard for Schools) connects primarily smaller, 'local' modules, such as timetabling, roll call, reporting or others, to the main admin system on a given school site. LISS works either across the web, or via a local network, and has a simpler format. Other Standards SIFA is also working closely with the Postsecondary Electronic Standards Council (PESC), SCORM, and other standards organizations. Versions In August 2013 the SIF Association announced the release of the SIF Implementation Specification 3.0. The SIF Implementation Specification (North America) 3.0 is made up of a globally utilized reference infrastructure and a North American data model focusing on supporting the Common Education Data Standards (CEDS) initiative. The new 3.0 infrastructure allows the transport of various data models, including those from the other global SIF communities as well as data from the numerous "alphabet soup" data initiatives that are populating the education landscape. In essence, education can now utilize "one wire with one plug" rather than never-ending proprietary APIs and "one-off" connections. The specification fully supports RESTful Web Services and SOAP-based protocols. The Australian 3.4 Data Model specification came out in the fall of 2016, along with a 3.1.2 release of the Global SIF Infrastructure. The version 2.8 specification is the last 2.x version of SIF. Most of the SIF implementations in the United States and abroad are 2.x deployments.
The A4L Community has just released a new version of the SIF Specification called "Unity" that will use the best objects from the 3.x specification and the foundation of the 2.8 specification, and will be able to run on either the 3.x infrastructure or the 2.x infrastructure. This is a boon to the thousands of districts and many states using the SIF 2 infrastructure and allows a clean migration path to utilizing more modern RESTful architectures if desired. SIF Express The SIF 3.2 Release includes the SIF XPress Roster and the SIF XPress Student Record Exchange (SRE). These are the result of work being done by various members of the association (vendors, agencies, regional centers) on a more easily adopted, easier-to-implement subset of the specification that handles the roster and basic use cases. Privacy The Access for Learning community has recently started taking strong leadership in the education privacy space globally. The association has created and supports an organization called the Student Data Privacy Consortium, or SDPC, and is working closely with national Australian privacy efforts. See also Access For Learning Community Enterprise application integration Machine-readable document Open Knowledge Initiative SCORM Standard data model Shibboleth (Internet2) Web services References External links Official website of the Access For Learning (A4L) community About the Postsecondary Electronic Standards Council (PESC) Educational technology standards Interoperability
Schools Interoperability Framework
[ "Engineering" ]
2,298
[ "Telecommunications engineering", "Interoperability" ]
1,684,758
https://en.wikipedia.org/wiki/Nielsen%E2%80%93Thurston%20classification
In mathematics, Thurston's classification theorem characterizes homeomorphisms of a compact orientable surface. William Thurston's theorem completes the work initiated by Jakob Nielsen. Given a homeomorphism f : S → S, there is a map g isotopic to f such that at least one of the following holds: g is periodic, i.e. some power of g is the identity; g preserves some finite union of disjoint simple closed curves on S (in this case, g is called reducible); or g is pseudo-Anosov. The case where S is a torus (i.e., a surface whose genus is one) is handled separately (see torus bundle) and was known before Thurston's work. If the genus of S is two or greater, then S is naturally hyperbolic, and the tools of Teichmüller theory become useful. In what follows, we assume S has genus at least two, as this is the case Thurston considered. (Note, however, that the cases where S has boundary or is not orientable are definitely still of interest.) The three types in this classification are not mutually exclusive, though a pseudo-Anosov homeomorphism is never periodic or reducible. A reducible homeomorphism g can be further analyzed by cutting the surface along the preserved union of disjoint simple closed curves Γ. Each of the resulting compact surfaces with boundary is acted upon by some power (i.e. iterated composition) of g, and the classification can again be applied to this homeomorphism. The mapping class group for surfaces of higher genus Thurston's classification applies to homeomorphisms of orientable surfaces of genus ≥ 2, but the type of a homeomorphism only depends on its associated element of the mapping class group Mod(S). In fact, the proof of the classification theorem leads to a canonical representative of each mapping class with good geometric properties. For example: When g is periodic, there is an element of its mapping class that is an isometry of a hyperbolic structure on S. When g is pseudo-Anosov, there is an element of its mapping class that preserves a pair of transverse singular foliations of S, stretching the leaves of one (the unstable foliation) while contracting the leaves of the other (the stable foliation). Mapping tori Thurston's original motivation for developing this classification was to find geometric structures on mapping tori of the type predicted by the Geometrization conjecture. The mapping torus Mg of a homeomorphism g of a surface S is the 3-manifold obtained from S × [0,1] by gluing S × {0} to S × {1} using g. If S has genus at least two, the geometric structure of Mg is related to the type of g in the classification as follows: If g is periodic, then Mg has an H² × R structure; If g is reducible, then Mg has incompressible tori, and should be cut along these tori to yield pieces that each have geometric structures (the JSJ decomposition); If g is pseudo-Anosov, then Mg has a hyperbolic (i.e. H³) structure. The first two cases are comparatively easy, while the existence of a hyperbolic structure on the mapping torus of a pseudo-Anosov homeomorphism is a deep and difficult theorem (also due to Thurston). The hyperbolic 3-manifolds that arise in this way are called fibered because they are surface bundles over the circle, and these manifolds are treated separately in the proof of Thurston's geometrization theorem for Haken manifolds. Fibered hyperbolic 3-manifolds have a number of interesting and pathological properties; for example, Cannon and Thurston showed that the surface subgroup of the arising Kleinian group has a limit set which is a sphere-filling curve.
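For reference before the discussion of fixed points below, the pseudo-Anosov condition used throughout this article can be stated compactly. The formulation below is a standard one; the notation (the measured foliations and the stretch factor λ) is introduced here for illustration rather than quoted from this article:

```latex
% g is pseudo-Anosov if there exist transverse measured singular foliations
% (F^u, mu^u) and (F^s, mu^s) on S and a stretch factor lambda > 1 such that
g \cdot (\mathcal{F}^{u}, \mu^{u}) = (\mathcal{F}^{u}, \lambda\,\mu^{u}),
\qquad
g \cdot (\mathcal{F}^{s}, \mu^{s}) = (\mathcal{F}^{s}, \lambda^{-1}\mu^{s}).
```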
Fixed point classification The three types of surface homeomorphisms are also related to the dynamics of the mapping class group Mod(S) on the Teichmüller space T(S). Thurston introduced a compactification of T(S) that is homeomorphic to a closed ball, and to which the action of Mod(S) extends naturally. The type of an element g of the mapping class group in the Thurston classification is related to its fixed points when acting on the compactification of T(S): If g is periodic, then there is a fixed point within T(S); this point corresponds to a hyperbolic structure on S whose isometry group contains an element isotopic to g; If g is pseudo-Anosov, then g has no fixed points in T(S) but has a pair of fixed points on the Thurston boundary; these fixed points correspond to the stable and unstable foliations of S preserved by g. For some reducible mapping classes g, there is a single fixed point on the Thurston boundary; an example is a multi-twist along a pants decomposition Γ. In this case the fixed point of g on the Thurston boundary corresponds to Γ. This is reminiscent of the classification of hyperbolic isometries into elliptic, parabolic, and hyperbolic types (which have fixed point structures similar to the periodic, reducible, and pseudo-Anosov types listed above). See also Train track map References Travaux de Thurston sur les surfaces, Astérisque, 66-67, Soc. Math. France, Paris, 1979 Geometric topology Homeomorphisms Surfaces Theorems in topology
Nielsen–Thurston classification
[ "Mathematics" ]
1,177
[ "Mathematical theorems", "Homeomorphisms", "Theorems in topology", "Geometric topology", "Topology", "Mathematical problems" ]
1,685,554
https://en.wikipedia.org/wiki/Wilkinson%27s%20catalyst
Wilkinson's catalyst (chloridotris(triphenylphosphine)rhodium(I)) is a coordination complex of rhodium with the formula [RhCl(PPh3)3], where 'Ph' denotes a phenyl group. It is a red-brown colored solid that is soluble in hydrocarbon solvents such as benzene, and more so in tetrahydrofuran or chlorinated solvents such as dichloromethane. The compound is widely used as a catalyst for hydrogenation of alkenes. It is named after chemist and Nobel laureate Sir Geoffrey Wilkinson, who first popularized its use. Historically, Wilkinson's catalyst has been a paradigm in catalytic studies, leading to several advances in the field such as the implementation of some of the first heteronuclear magnetic resonance studies for its structural elucidation in solution (³¹P), parahydrogen-induced polarization spectroscopy to determine the nature of transient reactive species, and one of the first detailed kinetic investigations, by Halpern, to elucidate the mechanism. Furthermore, the catalytic and organometallic studies on Wilkinson's catalyst also played a significant role in the subsequent development of cationic Rh- and Ru-based asymmetric hydrogenation transfer catalysts, which set the foundations for modern asymmetric catalysis. Structure and basic properties According to single crystal X-ray diffraction, the compound adopts a slightly distorted square planar structure. In analyzing the bonding, it is a complex of Rh(I), a d⁸ transition metal ion. From the perspective of the 18-electron rule, the four ligands each provide two electrons, for a total of 16 electrons. As such the compound is coordinatively unsaturated, i.e. susceptible to binding substrates (alkenes and H2). In contrast, IrCl(PPh3)3 undergoes cyclometallation to give HIrCl(PPh3)2(PPh2C6H4), a coordinatively saturated Ir(III) complex that is not catalytically active. Synthesis Wilkinson's catalyst is usually obtained by treating rhodium(III) chloride hydrate with an excess of triphenylphosphine in refluxing ethanol. Triphenylphosphine serves as both a ligand and a two-electron reducing agent that oxidizes itself from oxidation state (III) to (V). In the synthesis, three equivalents of triphenylphosphine become ligands in the product, while the fourth reduces rhodium(III) to rhodium(I). RhCl3(H2O)3 + 4 PPh3 → RhCl(PPh3)3 + OPPh3 + 2 HCl + 2 H2O Catalytic applications Wilkinson's catalyst is best known for catalyzing the hydrogenation of olefins with molecular hydrogen. The mechanism of this reaction involves the initial dissociation of one or two triphenylphosphine ligands to give 14- or 12-electron complexes, respectively, followed by oxidative addition of H2 to the metal. Subsequent π-complexation of alkene, migratory insertion (intramolecular hydride transfer or olefin insertion), and reductive elimination complete the formation of the alkane product (a simplified version of this cycle is sketched at the end of this section). In terms of their rates of hydrogenation, the degree of substitution on the olefin substrate is the key factor, since the rate-limiting step in the mechanism is the insertion into the olefin, which is limited by the severe steric hindrance around the metal center. In practice, terminal and disubstituted alkenes are good substrates, but more hindered alkenes are slower to hydrogenate. The hydrogenation of alkynes is troublesome to control since alkynes tend to be reduced to alkanes, via the intermediacy of the cis-alkene. Ethylene reacts with Wilkinson's catalyst to give RhCl(C2H4)(PPh3)2, but it is not a substrate for hydrogenation.
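As a worked illustration of the mechanism summarized above, the simplified catalytic cycle can be written out stepwise for a generic terminal alkene RCH=CH2. This is a textbook-style sketch under the usual simplifying assumptions (solvent coordination and the 14-/12-electron distinction are omitted); it is not a scheme reproduced from this article:
RhCl(PPh3)3 ⇌ RhCl(PPh3)2 + PPh3 (phosphine dissociation)
RhCl(PPh3)2 + H2 → RhCl(H)2(PPh3)2 (oxidative addition)
RhCl(H)2(PPh3)2 + RCH=CH2 → RhCl(H)2(RCH=CH2)(PPh3)2 (alkene coordination)
RhCl(H)2(RCH=CH2)(PPh3)2 → RhCl(H)(CH2CH2R)(PPh3)2 (migratory insertion)
RhCl(H)(CH2CH2R)(PPh3)2 → RhCl(PPh3)2 + RCH2CH3 (reductive elimination, regenerating the active species)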
Related catalytic processes Wilkinson's catalyst also catalyzes many other hydrofunctionalization reactions including hydroacylation, hydroboration, and hydrosilylation of alkenes. Hydroborations have been studied with catecholborane and pinacolborane. It is also active for the hydrosilylation of alkenes. In the presence of strong base and hydrogen, Wilkinson's catalyst forms reactive Rh(I) species with superior catalytic activities on the hydrogenation of internal alkynes and functionalized tri-substituted alkenes. Reactions RhCl(PPh3)3 reacts with carbon monoxide to give bis(triphenylphosphine)rhodium carbonyl chloride, trans-RhCl(CO)(PPh3)2. The same complex arises from the decarbonylation of aldehydes: RhCl(PPh3)3 + RCHO → RhCl(CO)(PPh3)2 + RH + PPh3 Upon stirring in benzene solution, RhCl(PPh3)3 converts to the poorly soluble red-colored dimer [RhCl(PPh3)2]2. This conversion further demonstrates the lability of the triphenylphosphine ligands. In the presence of base, H2, and additional triphenylphosphine, Wilkinson's complex converts to hydridotetrakis(triphenylphosphine)rhodium(I), HRh(PPh3)4. This 18e complex is also an active hydrogenation catalyst. See also Rhodium-catalyzed hydrogenation References Rhodium(I) compounds Catalysts Homogeneous catalysis Triphenylphosphine complexes Coordination complexes Hydrogenation catalysts
Wilkinson's catalyst
[ "Chemistry" ]
1,241
[ "Catalysis", "Catalysts", "Hydrogenation catalysts", "Coordination complexes", "Homogeneous catalysis", "Coordination chemistry", "Hydrogenation", "Chemical kinetics" ]
2,344,516
https://en.wikipedia.org/wiki/NF-%CE%BAB
Nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) is a family of transcription factor protein complexes that controls transcription of DNA, cytokine production and cell survival. NF-κB is found in almost all animal cell types and is involved in cellular responses to stimuli such as stress, cytokines, free radicals, heavy metals, ultraviolet irradiation, oxidized LDL, and bacterial or viral antigens. NF-κB plays a key role in regulating the immune response to infection. Incorrect regulation of NF-κB has been linked to cancer, inflammatory and autoimmune diseases, septic shock, viral infection, and improper immune development. NF-κB has also been implicated in processes of synaptic plasticity and memory. Discovery NF-κB was discovered by Ranjan Sen in the lab of Nobel laureate David Baltimore via its interaction with an 11-base pair sequence in the immunoglobulin light-chain enhancer in B cells. Later work by Alexander Poltorak and Bruno Lemaitre in mice and Drosophila fruit flies established Toll-like receptors as universally conserved activators of NF-κB signalling. These works ultimately contributed to the awarding of the Nobel Prize to Bruce Beutler and Jules A. Hoffmann, who were the principal investigators of those studies. Structure All proteins of the NF-κB family share a Rel homology domain in their N-terminus. A subfamily of NF-κB proteins, including RelA, RelB, and c-Rel, have a transactivation domain in their C-termini. In contrast, the NF-κB1 and NF-κB2 proteins are synthesized as large precursors, p105 and p100, which undergo processing to generate the mature p50 and p52 subunits, respectively. The processing of p105 and p100 is mediated by the ubiquitin/proteasome pathway and involves selective degradation of their C-terminal region containing ankyrin repeats. Whereas the generation of p52 from p100 is a tightly regulated process, p50 is produced from constitutive processing of p105. The p50 and p52 proteins have no intrinsic ability to activate transcription and thus have been proposed to act as transcriptional repressors when binding κB elements as homodimers. Indeed, this confounds the interpretation of p105-knockout studies, where the genetic manipulation is removing an IκB (full-length p105) and a likely repressor (p50 homodimers) in addition to a transcriptional activator (the RelA-p50 heterodimer). Members NF-κB family members share structural homology with the retroviral oncoprotein v-Rel, resulting in their classification as NF-κB/Rel proteins. There are five proteins in the mammalian NF-κB family: NF-κB1 (p105/p50), NF-κB2 (p100/p52), RelA (p65), RelB, and c-Rel. The NF-κB/Rel proteins can be divided into two classes, which share general structural features: the NF-κB class (NF-κB1 and NF-κB2), whose members are synthesized as the precursors p105 and p100 and lack transactivation domains, and the Rel class (RelA, RelB, and c-Rel), whose members contain C-terminal transactivation domains. In humans, the five family members are encoded by the genes NFKB1, NFKB2, RELA, RELB, and REL, respectively. Species distribution and evolution In addition to mammals, NF-κB is found in a number of simple animals as well. These include cnidarians (such as sea anemones, coral and hydra), porifera (sponges), single-celled eukaryotes including Capsaspora owczarzaki and choanoflagellates, and insects (such as moths, mosquitoes and fruitflies). The sequencing of the genomes of the mosquitoes A. aegypti and A. gambiae, and the fruitfly D. melanogaster has allowed comparative genetic and evolutionary studies on NF-κB. In those insect species, activation of NF-κB is triggered by the Toll pathway (which evolved independently in insects and mammals) and by the Imd (immune deficiency) pathway.
Signaling Effect of activation NF-κB is crucial in regulating cellular responses because it belongs to the category of "rapid-acting" primary transcription factors, i.e., transcription factors that are present in cells in an inactive state and do not require new protein synthesis in order to become activated (other members of this family include transcription factors such as c-Jun, STATs, and nuclear hormone receptors). This allows NF-κB to be a first responder to harmful cellular stimuli. Known inducers of NF-κB activity are highly variable and include reactive oxygen species (ROS), tumor necrosis factor alpha (TNFα), interleukin 1-beta (IL-1β), bacterial lipopolysaccharides (LPS), isoproterenol, cocaine, endothelin-1 and ionizing radiation. NF-κB suppression of tumor necrosis factor cytotoxicity (apoptosis) is due to induction of antioxidant enzymes and sustained suppression of c-Jun N-terminal kinases (JNKs). Receptor activator of NF-κB (RANK), which is a type of TNFR, is a central activator of NF-κB. Osteoprotegerin (OPG), which is a decoy receptor homolog for RANK ligand (RANKL), inhibits RANK by binding to RANKL, and, thus, osteoprotegerin is tightly involved in regulating NF-κB activation. Many bacterial products and stimulation of a wide variety of cell-surface receptors lead to NF-κB activation and fairly rapid changes in gene expression. The identification of Toll-like receptors (TLRs) as specific pattern recognition molecules and the finding that stimulation of TLRs leads to activation of NF-κB improved our understanding of how different pathogens activate NF-κB. For example, studies have identified TLR4 as the receptor for the LPS component of Gram-negative bacteria. TLRs are key regulators of both innate and adaptive immune responses. Unlike RelA, RelB, and c-Rel, the p50 and p52 NF-κB subunits do not contain transactivation domains in their C terminal halves. Nevertheless, the p50 and p52 NF-κB members play critical roles in modulating the specificity of NF-κB function. Although homodimers of p50 and p52 are, in general, repressors of κB site transcription, both p50 and p52 participate in target gene transactivation by forming heterodimers with RelA, RelB, or c-Rel. In addition, p50 and p52 homodimers also bind to the nuclear protein Bcl-3, and such complexes can function as transcriptional activators. Inhibition In unstimulated cells, the NF-κB dimers are sequestered in the cytoplasm by a family of inhibitors, called IκBs (Inhibitor of κB), which are proteins that contain multiple copies of a sequence called ankyrin repeats. By virtue of their ankyrin repeat domains, the IκB proteins mask the nuclear localization signals (NLS) of NF-κB proteins and keep them sequestered in an inactive state in the cytoplasm. IκBs are a family of related proteins that have an N-terminal regulatory domain, followed by six or more ankyrin repeats and a PEST domain near their C terminus. Although the IκB family consists of IκBα, IκBβ, IκBε, and Bcl-3, the best-studied and major IκB protein is IκBα. Due to the presence of ankyrin repeats in their C-terminal halves, p105 and p100 also function as IκB proteins. The c-terminal half of p100, that is often referred to as IκBδ, also functions as an inhibitor. IκBδ degradation in response to developmental stimuli, such as those transduced through LTβR, potentiate NF-κB dimer activation in a NIK dependent non-canonical pathway. Activation process (canonical/classical) Activation of the NF-κB is initiated by the signal-induced degradation of IκB proteins. 
This occurs primarily via activation of a kinase called the IκB kinase (IKK). IKK is composed of a heterodimer of the catalytic IKKα and IKKβ subunits and a "master" regulatory protein termed NEMO (NF-κB essential modulator) or IKKγ. When activated by signals, usually coming from the outside of the cell, the IκB kinase phosphorylates two serine residues located in an IκB regulatory domain. When phosphorylated on these serines (e.g., serines 32 and 36 in human IκBα), the IκB proteins are modified by a process called ubiquitination, which then leads them to be degraded by a cell structure called the proteasome. With the degradation of IκB, the NF-κB complex is then freed to enter the nucleus where it can 'turn on' the expression of specific genes that have DNA-binding sites for NF-κB nearby. The activation of these genes by NF-κB then leads to the given physiological response, for example, an inflammatory or immune response, a cell survival response, or cellular proliferation. Translocation of NF-κB to nucleus can be detected immunocytochemically and measured by laser scanning cytometry. NF-κB turns on expression of its own repressor, IκBα. The newly synthesized IκBα then re-inhibits NF-κB and, thus, forms an auto feedback loop, which results in oscillating levels of NF-κB activity. In addition, several viruses, including the AIDS virus HIV, have binding sites for NF-κB that controls the expression of viral genes, which in turn contribute to viral replication or viral pathogenicity. In the case of HIV-1, activation of NF-κB may, at least in part, be involved in activation of the virus from a latent, inactive state. YopP is a factor secreted by Yersinia pestis, the causative agent of plague, that prevents the ubiquitination of IκB. This causes this pathogen to effectively inhibit the NF-κB pathway and thus block the immune response of a human infected with Yersinia. Inhibitors of NF-κB activity Concerning known protein inhibitors of NF-κB activity, one of them is IFRD1, which represses the activity of NF-κB p65 by enhancing the HDAC-mediated deacetylation of the p65 subunit at lysine 310, by favoring the recruitment of HDAC3 to p65. In fact IFRD1 forms trimolecular complexes with p65 and HDAC3. The NAD-dependent protein deacetylase and longevity factor SIRT1 inhibits NF-κB gene expression by deacetylating the RelA/p65 subunit of NF-κB at lysine 310. Non-canonical/alternate pathway A select set of cell-differentiating or developmental stimuli, such as lymphotoxin β-receptor (LTβR), BAFF or RANKL, activate the non-canonical NF-κB pathway to induce NF-κB/RelB:p52 dimer in the nucleus. In this pathway, activation of the NF-κB inducing kinase (NIK) upon receptor ligation led to the phosphorylation and subsequent proteasomal processing of the NF-κB2 precursor protein p100 into mature p52 subunit in an IKK1/IKKa dependent manner. Then p52 dimerizes with RelB to appear as a nuclear RelB:p52 DNA binding activity. RelB:p52 regulates the expression of homeostatic lymphokines, which instructs lymphoid organogenesis and lymphocyte trafficking in the secondary lymphoid organs. In contrast to the canonical signaling that relies on NEMO-IKK2 mediated degradation of IκBα, -β, -ε, non-canonical signaling depends on NIK mediated processing of p100 into p52. Given their distinct regulations, these two pathways were thought to be independent of each other. 
However, it was found that syntheses of the constituents of the non-canonical pathway, viz RelB and p52, are controlled by canonical IKK2-IκB-RelA:p50 signaling. Moreover, generation of the canonical and non-canonical dimers, viz RelA:p50 and RelB:p52, within the cellular milieu are mechanistically interlinked. These analyses suggest that an integrated NF-κB system network underlies activation of both RelA and RelB containing dimer and that a malfunctioning canonical pathway will lead to an aberrant cellular response also through the non-canonical pathway. Most intriguingly, a recent study identified that TNF-induced canonical signalling subverts non-canonical RelB:p52 activity in the inflamed lymphoid tissues limiting lymphocyte ingress. Mechanistically, TNF inactivated NIK in LTβR‐stimulated cells and induced the synthesis of Nfkb2 mRNA encoding p100; these together potently accumulated unprocessed p100, which attenuated the RelB activity. A role of p100/Nfkb2 in dictating lymphocyte ingress in the inflamed lymphoid tissue may have broad physiological implications. In addition to its traditional role in lymphoid organogenesis, the non-canonical NF-κB pathway also directly reinforces inflammatory immune responses to microbial pathogens by modulating canonical NF-κB signalling. It was shown that p100/Nfkb2 mediates stimulus-selective and cell-type-specific crosstalk between the two NF-κB pathways and that Nfkb2-mediated crosstalk protects mice from gut pathogens. On the other hand, a lack of p100-mediated regulations repositions RelB under the control of TNF-induced canonical signalling. In fact, mutational inactivation of p100/Nfkb2 in multiple myeloma enabled TNF to induce a long-lasting RelB activity, which imparted resistance in myeloma cells to chemotherapeutic drug. In immunity NF-κB is a major transcription factor that regulates genes responsible for both the innate and adaptive immune response. Upon activation of either the T- or B-cell receptor, NF-κB becomes activated through distinct signaling components. Upon ligation of the T-cell receptor, protein kinase Lck is recruited and phosphorylates the ITAMs of the CD3 cytoplasmic tail. ZAP70 is then recruited to the phosphorylated ITAMs and helps recruit LAT and PLC-γ, which causes activation of PKC. Through a cascade of phosphorylation events, the kinase complex is activated and NF-κB is able to enter the nucleus to upregulate genes involved in T-cell development, maturation, and proliferation. In the nervous system In addition to roles in mediating cell survival, studies by Mark Mattson and others have shown that NF-κB has diverse functions in the nervous system including roles in plasticity, learning, and memory. In addition to stimuli that activate NF-κB in other tissues, NF-κB in the nervous system can be activated by Growth Factors (BDNF, NGF) and synaptic transmission such as glutamate. These activators of NF-κB in the nervous system all converge upon the IKK complex and the canonical pathway. Recently there has been a great deal of interest in the role of NF-κB in the nervous system. Current studies suggest that NF-κB is important for learning and memory in multiple organisms including crabs, fruit flies, and mice. NF-κB may regulate learning and memory in part by modulating synaptic plasticity, synapse function, as well as by regulating the growth of dendrites and dendritic spines. 
Genes that have NF-κB binding sites are shown to have increased expression following learning, suggesting that the transcriptional targets of NF-κB in the nervous system are important for plasticity. Many NF-κB target genes that may be important for plasticity and learning include growth factors (BDNF, NGF) cytokines (TNF-alpha, TNFR) and kinases (PKAc). Despite the functional evidence for a role for Rel-family transcription factors in the nervous system, it is still not clear that the neurological effects of NF-κB reflect transcriptional activation in neurons. Most manipulations and assays are performed in the mixed-cell environments found in vivo, in "neuronal" cell cultures that contain significant numbers of glia, or in tumor-derived "neuronal" cell lines. When transfections or other manipulations have been targeted specifically at neurons, the endpoints measured are typically electrophysiology or other parameters far removed from gene transcription. Careful tests of NF-κB-dependent transcription in highly purified cultures of neurons generally show little to no NF-κB activity. Some of the reports of NF-κB in neurons appear to have been an artifact of antibody nonspecificity. Of course, artifacts of cell culture—e.g., removal of neurons from the influence of glia—could create spurious results as well. But this has been addressed in at least two co-culture approaches. Moerman et al. used a coculture format whereby neurons and glia could be separated after treatment for EMSA analysis, and they found that the NF-κB induced by glutamatergic stimuli was restricted to glia (and, intriguingly, only glia that had been in the presence of neurons for 48 hours). The same investigators explored the issue in another approach, utilizing neurons from an NF-κB reporter transgenic mouse cultured with wild-type glia; glutamatergic stimuli again failed to activate in neurons. Some of the DNA-binding activity noted under certain conditions (particularly that reported as constitutive) appears to result from Sp3 and Sp4 binding to a subset of κB enhancer sequences in neurons. This activity is actually inhibited by glutamate and other conditions that elevate intraneuronal calcium. In the final analysis, the role of NF-κB in neurons remains opaque due to the difficulty of measuring transcription in cells that are simultaneously identified for type. Certainly, learning and memory could be influenced by transcriptional changes in astrocytes and other glial elements. And it should be considered that there could be mechanistic effects of NF-κB aside from direct transactivation of genes. Clinical significance Cancers NF-κB is widely used by eukaryotic cells as a regulator of genes that control cell proliferation and cell survival. As such, many different types of human tumors have misregulated NF-κB: that is, NF-κB is constitutively active. Active NF-κB turns on the expression of genes that keep the cell proliferating and protect the cell from conditions that would otherwise cause it to die via apoptosis. In cancer, proteins that control NF-κB signaling are mutated or aberrantly expressed, leading to defective coordination between the malignant cell and the rest of the organism. This is evident both in metastasis, as well as in the inefficient eradication of the tumor by the immune system. Normal cells can die when removed from the tissue they belong to, or when their genome cannot operate in harmony with tissue function: these events depend on feedback regulation of NF-κB, and fail in cancer. 
Defects in NF-κB results in increased susceptibility to apoptosis leading to increased cell death. This is because NF-κB regulates anti-apoptotic genes especially the TRAF1 and TRAF2 and therefore abrogates the activities of the caspase family of enzymes, which are central to most apoptotic processes. In tumor cells, NF-κB activity is enhanced, as for example, in 41% of nasopharyngeal carcinoma, colorectal cancer, prostate cancer and pancreatic tumors. This is either due to mutations in genes encoding the NF-κB transcription factors themselves or in genes that control NF-κB activity (such as IκB genes); in addition, some tumor cells secrete factors that cause NF-κB to become active. Blocking NF-κB can cause tumor cells to stop proliferating, to die, or to become more sensitive to the action of anti-tumor agents. Thus, NF-κB is the subject of much active research among pharmaceutical companies as a target for anti-cancer therapy. However, even though convincing experimental data have identified NF-κB as a critical promoter of tumorigenesis, which creates a solid rationale for the development of antitumor therapy that is based upon suppression of NF-κB activity, caution should be exercised when considering anti-NF-κB activity as a broad therapeutic strategy in cancer treatment as data has also shown that NF-κB activity enhances tumor cell sensitivity to apoptosis and senescence. In addition, it has been shown that canonical NF-κB is a Fas transcription activator and the alternative NF-κB is a Fas transcription repressor. Therefore, NF-κB promotes Fas-mediated apoptosis in cancer cells, and thus inhibition of NF-κB may suppress Fas-mediated apoptosis to impair host immune cell-mediated tumor suppression. Inflammation Because NF-κB controls many genes involved in inflammation, it is not surprising that NF-κB is found to be chronically active in many inflammatory diseases, such as inflammatory bowel disease, arthritis, sepsis, gastritis, asthma, atherosclerosis and others. It is important to note though, that elevation of some NF-κB activators, such as osteoprotegerin (OPG), are associated with elevated mortality, especially from cardiovascular diseases. Elevated NF-κB has also been associated with schizophrenia. Recently, NF-κB activation has been suggested as a possible molecular mechanism for the catabolic effects of cigarette smoke in skeletal muscle and sarcopenia. Research has shown that during inflammation the function of a cell depends on signals it activates in response to contact with adjacent cells and to combinations of hormones, especially cytokines that act on it through specific receptors. A cell's phenotype within a tissue develops through mutual stimulation of feedback signals that coordinate its function with other cells; this is especially evident during reprogramming of cell function when a tissue is exposed to inflammation, because cells alter their phenotype, and gradually express combinations of genes that prepare the tissue for regeneration after the cause of inflammation is removed. Particularly important are feedback responses that develop between tissue resident cells, and circulating cells of the immune system. Fidelity of feedback responses between diverse cell types and the immune system depends on the integrity of mechanisms that limit the range of genes activated by NF-κB, allowing only expression of genes which contribute to an effective immune response and subsequently, a complete restoration of tissue function after resolution of inflammation. 
In cancer, mechanisms that regulate gene expression in response to inflammatory stimuli are altered to the point that a cell ceases to link its survival with the mechanisms that coordinate its phenotype and its function with the rest of the tissue. This is often evident in severely compromised regulation of NF-κB activity, which allows cancer cells to express abnormal cohorts of NF-κB target genes. This results in not only the cancer cells functioning abnormally: cells of surrounding tissue alter their function and cease to support the organism exclusively. Additionally, several types of cells in the microenvironment of cancer may change their phenotypes to support cancer growth. Inflammation, therefore, is a process that tests the fidelity of tissue components because the process that leads to tissue regeneration requires coordination of gene expression between diverse cell types. NEMO NEMO deficiency syndrome is a rare genetic condition relating to a fault in IKBKG that in turn activates NF-κB. It mostly affects males and has a highly variable set of symptoms and prognoses. Aging and obesity NF-κB is increasingly expressed with obesity and aging, resulting in reduced levels of the anti-inflammatory, pro-autophagy, anti-insulin resistance protein sirtuin 1. NF-κB increases the levels of the microRNA miR-34a, which inhibits nicotinamide adenine dinucleotide (NAD) synthesis by binding to its promoter region, resulting in lower levels of sirtuin 1. NF-κB and interleukin 1 alpha mutually induce each other in senescent cells in a positive feedback loop causing the production of senescence-associated secretory phenotype (SASP) factors. NF-κB and the NAD-degrading enzyme CD38 also mutually induce each other. NF-κB is a central component of the cellular response to damage. NF-κB is activated in a variety of cell types that undergo normal or accelerated aging. Genetic or pharmacologic inhibition of NF-κB activation can delay the onset of numerous aging related symptoms and pathologies. This effect may be explained, in part, by the finding that reduction of NF-κB reduces the production of mitochondria-derived reactive oxygen species that can damage DNA. Addiction NF-κB is one of several induced transcriptional targets of ΔFosB which facilitates the development and maintenance of an addiction to a stimulus. In the caudate putamen, NF-κB induction is associated with increases in locomotion, whereas in the nucleus accumbens, NF-κB induction enhances the positive reinforcing effect of a drug through reward sensitization. Non-drug inhibitors Many natural products (including anti-oxidants) that have been promoted to have anti-cancer and anti-inflammatory activity have also been shown to inhibit NF-κB. There is a controversial US patent (US patent 6,410,516) that applies to the discovery and use of agents that can block NF-κB for therapeutic purposes. This patent is involved in several lawsuits, including Ariad v. Lilly. Recent work by Karin, Ben-Neriah and others has highlighted the importance of the connection between NF-κB, inflammation, and cancer, and underscored the value of therapies that regulate the activity of NF-κB. Extracts from a number of herbs and dietary plants are efficient inhibitors of NF-κB activation in vitro. Nobiletin, a flavonoid isolated from citrus peels, has been shown to inhibit the NF-κB signaling pathway in mice. The circumsporozoite protein of Plasmodium falciparum has been shown to be an inhibitor of NF-κB. 
Likewise, various withanolides of Withania somnifera (Ashwagandha) have been found to have inhibiting effects on NF-κB through inhibition of proteasome-mediated ubiquitin degradation of IκBα. As a drug target Aberrant activation of NF-κB is frequently observed in many cancers. Moreover, suppression of NF-κB limits the proliferation of cancer cells. In addition, NF-κB is a key player in the inflammatory response. Hence, methods of inhibiting NF-κB signaling have potential therapeutic application in cancer and inflammatory diseases. Both the canonical and non-canonical NF-κB pathways require proteasomal degradation of regulatory pathway components for NF-κB signalling to occur. The proteasome inhibitor bortezomib broadly blocks this activity and is approved for the treatment of NF-κB-driven mantle cell lymphoma and multiple myeloma. The discovery that activation of NF-κB nuclear translocation can be separated from the elevation of oxidant stress gives a promising avenue of development for strategies targeting NF-κB inhibition. The drug denosumab acts to raise bone mineral density and reduce fracture rates in many patient sub-groups by inhibiting RANKL. RANKL acts through its receptor RANK, which in turn promotes NF-κB; RANKL normally works by enabling the differentiation of osteoclasts from monocytes. Disulfiram, olmesartan and dithiocarbamates can inhibit the NF-κB signaling cascade. Efforts to develop direct NF-κB inhibitors have emerged with compounds such as (-)-DHMEQ, PBS-1086, IT-603 and IT-901. (-)-DHMEQ and PBS-1086 are irreversible binders to NF-κB, while IT-603 and IT-901 are reversible binders. DHMEQ covalently binds to Cys 38 of p65. Anatabine's anti-inflammatory effects are claimed to result from modulation of NF-κB activity. However, the studies purporting its benefit use abnormally high doses in the millimolar range (similar to the extracellular potassium concentration), which are unlikely to be achieved in humans. BAY 11-7082 has also been identified as a drug that can inhibit the NF-κB signaling cascade. It is capable of preventing the phosphorylation of IKK-α in an irreversible manner such that there is down-regulation of NF-κB activation. It has been shown that administration of BAY 11-7082 rescued renal functionality in diabetic Sprague-Dawley rats by suppressing NF-κB-regulated oxidative stress. Research has shown that the N-acylethanolamine palmitoylethanolamide is capable of PPAR-mediated inhibition of NF-κB. The biological target of iguratimod, a drug marketed to treat rheumatoid arthritis in Japan and China, was unknown as of 2015, but the primary mechanism of action appeared to be preventing NF-κB activation. See also IKK2 RELA TNF receptor superfamily Imd pathway Notes References External links Delta0 Aging-related proteins Protein complexes Programmed cell death Transcription factors
NF-κB
[ "Chemistry", "Biology" ]
6,530
[ "Gene expression", "Aging-related proteins", "Signal transduction", "Senescence", "Induced stem cells", "Programmed cell death", "Transcription factors" ]
2,346,823
https://en.wikipedia.org/wiki/De%20Bruijn%20graph
In graph theory, an n-dimensional De Bruijn graph of m symbols is a directed graph representing overlaps between sequences of symbols. It has m^n vertices, consisting of all possible length-n sequences of the given symbols; the same symbol may appear multiple times in a sequence. For a set of m symbols S = {s1, ..., sm}, the set of vertices is V = S^n, the set of all length-n sequences over S. If one of the vertices can be expressed as another vertex by shifting all its symbols by one place to the left and adding a new symbol at the end of this vertex, then the latter has a directed edge to the former vertex. Thus the set of arcs (that is, directed edges) is E = {((v1, v2, ..., vn), (v2, ..., vn, s)) : v1, ..., vn, s ∈ S}. Although De Bruijn graphs are named after Nicolaas Govert de Bruijn, they were invented independently by both de Bruijn and I. J. Good. Much earlier, Camille Flye Sainte-Marie implicitly used their properties. Properties If n = 1, then the condition for any two vertices forming an edge holds vacuously, and hence all the vertices are connected, forming a total of m^2 edges. Each vertex has exactly m incoming and m outgoing edges. Each n-dimensional De Bruijn graph is the line digraph of the (n − 1)-dimensional De Bruijn graph with the same set of symbols. Each De Bruijn graph is Eulerian and Hamiltonian. The Euler cycles and Hamiltonian cycles of these graphs (equivalent to each other via the line graph construction) are De Bruijn sequences. The line graph construction of the three smallest binary De Bruijn graphs is depicted below. As can be seen in the illustration, each vertex of the n-dimensional De Bruijn graph corresponds to an edge of the (n − 1)-dimensional De Bruijn graph, and each edge in the n-dimensional De Bruijn graph corresponds to a two-edge path in the (n − 1)-dimensional De Bruijn graph. Dynamical systems Binary De Bruijn graphs can be drawn in such a way that they resemble objects from the theory of dynamical systems, such as the Lorenz attractor: This analogy can be made rigorous: the n-dimensional m-symbol De Bruijn graph is a model of the Bernoulli map x ↦ mx mod 1. The Bernoulli map (also called the 2x mod 1 map for m = 2) is an ergodic dynamical system, which can be understood to be a single shift of an m-adic number. The trajectories of this dynamical system correspond to walks in the De Bruijn graph, where the correspondence is given by mapping each real x in the interval [0, 1) to the vertex corresponding to the first n digits in the base-m representation of x. Equivalently, walks in the De Bruijn graph correspond to trajectories in a one-sided subshift of finite type. Embeddings resembling this one can be used to show that the binary De Bruijn graphs have queue number 2 and that they have book thickness at most 5. Uses Some grid network topologies are De Bruijn graphs. The distributed hash table protocol Koorde uses a De Bruijn graph. In bioinformatics, De Bruijn graphs are used for de novo assembly of sequencing reads into a genome. See also De Bruijn torus Hamming graph Kautz graph References External links Tutorial on using De Bruijn Graphs in Bioinformatics by Homolog.us Dynamical systems Automata (computation) Parametric families of graphs Directed graphs
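As a concrete illustration of the definition and the counts stated above (m^n vertices, m^(n+1) edges, in- and out-degree m), here is a small self-contained Python sketch; the construction is the generic textbook one, not code from any particular library.

# Build the n-dimensional De Bruijn graph over m symbols and check the stated counts.
from itertools import product

def de_bruijn_graph(symbols, n):
    vertices = [''.join(w) for w in product(symbols, repeat=n)]
    # Edge u -> v whenever v is obtained by dropping the first symbol of u
    # and appending one new symbol at the end (the words overlap in n-1 places).
    edges = [(u, u[1:] + s) for u in vertices for s in symbols]
    return vertices, edges

symbols, n = '01', 3                       # binary alphabet, m = 2
V, E = de_bruijn_graph(symbols, n)
assert len(V) == len(symbols) ** n         # m**n vertices
assert len(E) == len(symbols) ** (n + 1)   # m**(n+1) edges
out_deg = {u: 0 for u in V}
in_deg = {u: 0 for u in V}
for u, v in E:
    out_deg[u] += 1
    in_deg[v] += 1
assert all(d == len(symbols) for d in out_deg.values())  # out-degree m
assert all(d == len(symbols) for d in in_deg.values())   # in-degree m
print(len(V), "vertices,", len(E), "edges")

For symbols '01' and n = 3 it prints 8 vertices and 16 edges, and every vertex has in- and out-degree 2.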
De Bruijn graph
[ "Physics", "Mathematics" ]
674
[ "Mechanics", "Dynamical systems" ]
2,347,000
https://en.wikipedia.org/wiki/Ancient%20Mesopotamian%20units%20of%20measurement
Ancient Mesopotamian units of measurement originated in the loosely organized city-states of Early Dynastic Sumer. Each city, kingdom and trade guild had its own standards until the formation of the Akkadian Empire when Sargon of Akkad issued a common standard. This standard was improved by Naram-Sin, but fell into disuse after the Akkadian Empire dissolved. The standard of Naram-Sin was readopted in the Ur III period by the Nanše Hymn which reduced a plethora of multiple standards to a few agreed upon common groupings. Successors to Sumerian civilization including the Babylonians, Assyrians, and Persians continued to use these groupings. Akkado-Sumerian metrology has been reconstructed by applying statistical methods to compare Sumerian architecture, architectural plans, and issued official standards such as Statue B of Gudea and the bronze cubit of Nippur. Archaic system The systems that would later become the classical standard for Mesopotamia were developed in parallel with writing in Sumer during Late Uruk Period (c. 3500–3000). Studies of protocuneiform indicate twelve separate counting systems used in Uruk IV-III. Seven of these were also used in the contemporary Proto-Elamite writing system. The bisexagesimal systems went out of use after the Early Dynastic I/II period. Sexagesimal System S used to count slaves, animals, fish, wooden objects, stone objects, containers. Sexagesimal System S' used to count dead animals, certain types of beer Bisexagesimal System B used to count cereal, bread, fish, milk products Bisexagesimal System B* used to count rations GAN2 System G used to count field measurement ŠE system Š used to count barley by volume ŠE system Š' used to count malt by volume ŠE system Š" used to count wheat by volume ŠE System Š* used to count barley groats EN System E used to count weight U4 System U used to count calendrics DUGb System Db used to count milk by volume DUGc System Db used to count beer by volume In Early Dynastic Sumer (–2300 BCE) metrology and mathematics were indistinguishable and treated as a single scribal discipline. The idea of an abstract number did not yet exist, thus all quantities were written as metrological symbols and never as numerals followed by a unit symbol. For example there was a symbol for one-sheep and another for one-day but no symbol for one. About 600 of these metrological symbols exist, for this reason archaic Sumerian metrology is complex and not fully understood. In general however, length, volume, and mass are derived from a theoretical standard cube, called 'gur (also spelled kor in some literature)', filled with barley, wheat, water, or oil. However, because of the different specific gravities of these substances combined with dual numerical bases (sexagesimal or decimal), multiple sizes of the gur-cube were used without consensus. The different gur-cubes are related by proportion, based on the water gur-cube, according to four basic coefficients and their cubic roots. These coefficients are given as: Komma = correction when planning rations with a 360-day year Leimma = conversion from decimal to a sexagesimal number system Diesis = Euboic = One official government standard of measurement of the archaic system was the Cubit of Nippur (2650 BCE). It is a Euboic Mana + 1 Diesis (432 grams). This standard is the main reference used by archaeologists to reconstruct the system. 
Classical system A major improvement came in 2150 BCE during the Akkadian Empire under the reign of Naram-Sin when the competing systems were unified by a single official standard, the royal gur-cube. His reform is considered the first standardized system of measure in Mesopotamia. The royal gur-cube (Cuneiform: LU2.GAL.GUR, ; Akkadian: šarru kurru) was a theoretical cuboid of water approximately 6 m × 6 m × 0.5 m from which all other units could be derived. The Neo-Sumerians continued use of the royal gur-cube as indicated by the Letter of Nanse issued in 2000 BCE by Gudea. Use of the same standard continued through the Neo-Babylonian Empire, Neo-Assyrian Empire, and Achaemenid Empire. Length Units of length are prefixed by the logogram DU () a convention of the archaic period counting system from which it was evolved. Basic length was used in architecture and field division. Distance units were geodectic as distinguished from non-geodectic basic length units. Sumerian geodesy divided latitude into seven zones between equator and pole. Area The GAN2 system G counting system evolved into area measurements. A special unit measuring brick quantity by area was called the brick-garden (Cuneiform: SIG.SAR ; Sumerian: šeg12-sar; Akkadian: libittu-mūšaru) which held 720 bricks. Capacity or volume Capacity was measured by either the ŠE system Š for dry capacity or the ŠE system Š* for wet capacity. A sila was about 1 liter. Mass or weight Mass was measured by the EN system E Values below are an average of weight artifacts from Ur and Nippur. The ± value represents 1 standard deviation. All values have been rounded to second digit of the standard deviation. Time In the Archaic System time notation was written in the U4 System U. Multiple lunisolar calendars existed; however the civil calendar from the holy city of Nippur (Ur III period) was adopted by Babylon as their civil calendar. The calendar of Nippur dates to 3500 BCE and was itself based on older astronomical knowledge of an uncertain origin. The main astronomical cycles used to construct the calendar were the month, year, and day. Relationship to other metrologies The Classical Mesopotamian system formed the basis for Elamite, Hebrew, Urartian, Hurrian, Hittite, Ugaritic, Phoenician, Babylonian, Assyrian, Persian, Arabic, and Islamic metrologies. The Classical Mesopotamian System also has a proportional relationship, by virtue of standardized commerce, to Bronze Age Harappan and Egyptian metrologies. See also Assyrian lion weights Babylonian mathematics Historical weights and measures Weights and measures References Citations Bibliography Further reading External links An online calculator Sumerian art and architecture Babylonia Sumer Measurement Mesopotamian Mesopotamian Mesopotamian Mesopotamian
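As a quick sanity check on the royal gur-cube described above (a water cuboid of roughly 6 m × 6 m × 0.5 m) and the statement that a sila was about 1 liter, the short Python sketch below does the unit arithmetic; it is an order-of-magnitude illustration only, since the exact classical subdivisions are not reproduced in this text.

# Rough arithmetic for the royal gur-cube: ~6 m x 6 m x 0.5 m of water.
length_m, width_m, height_m = 6.0, 6.0, 0.5
volume_m3 = length_m * width_m * height_m          # 18 cubic metres
volume_litres = volume_m3 * 1000.0                 # 1 m^3 = 1000 L
sila_per_litre = 1.0                               # "a sila was about 1 liter"
print(f"{volume_m3} m^3  ~  {volume_litres * sila_per_litre:.0f} sila (order of magnitude only)")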
Ancient Mesopotamian units of measurement
[ "Mathematics" ]
1,363
[ "Obsolete units of measurement", "Quantity", "Systems of units", "Units of measurement" ]
2,347,254
https://en.wikipedia.org/wiki/Cygnus%20A
Cygnus A (3C 405) is a radio galaxy, one of the strongest radio sources in the sky. A concentrated radio source in Cygnus was discovered by Grote Reber in 1939. In 1946 Stanley Hey and his colleague James Phillips identified that the source scintillated rapidly, and must therefore be a compact object. In 1951, Cygnus A, along with Cassiopeia A, and Puppis A were the first "radio stars" identified with an optical source. Of these, Cygnus A became the first radio galaxy, the other two being nebulae inside the Milky Way. In 1953 Roger Jennison and M K Das Gupta showed it to be a double source. Like all radio galaxies, it contains an active galactic nucleus. The supermassive black hole at the core has a mass of . Images of the galaxy in the radio portion of the electromagnetic spectrum show two jets protruding in opposite directions from the galaxy's center. These jets extend many times the width of the portion of the host galaxy which emits radiation at visible wavelengths. At the ends of the jets are two lobes with "hot spots" of more intense radiation at their edges. These hot spots are formed when material from the jets collides with the surrounding intergalactic medium. In 2016, a radio transient was discovered 460 parsecs away from the center of Cygnus A. Between 1989 and 2016, the object, cospatial with a previously-known infrared source, exhibited at least an eightfold increase in radio flux density, with comparable luminosity to the brightest known supernova. Due to the lack of measurements in the intervening years, the rate of brightening is unknown, but the object has remained at a relatively constant flux density since its discovery. The data are consistent with a second supermassive black hole orbiting the primary object, with the secondary having undergone a rapid accretion rate increase. The inferred orbital timescale is of the same order as the activity of the primary source, suggesting the secondary may be perturbing the primary and causing the outflows. See also List of galaxies List of nearest galaxies References Further reading External links Information about Cygnus A from SIMBAD. Hubble Uncovers a Hidden Quasar in a Nearby Galaxy (Cygnus A) A primitive transit recording (7 July 1998) of Cygnus A by English radio amateur Geoff Royle G4FAS 405.0 Cygnus (constellation) Radio galaxies Quasars Astronomical objects discovered in 1939
Cygnus A
[ "Astronomy" ]
519
[ "Cygnus (constellation)", "Constellations" ]
2,347,765
https://en.wikipedia.org/wiki/WiCell
WiCell Research Institute is a scientific research institute in Madison, Wisconsin that focuses on stem cell research. Independently governed and supported as a 501(c)(3) organization, WiCell operates as an affiliate of the Wisconsin Alumni Research Foundation and works to advance stem cell research at the University of Wisconsin–Madison and beyond. History Established in 1998 to develop stem cell technology, WiCell Research Institute is a nonprofit organization that creates and distributes human pluripotent stem cell lines worldwide. WiCell also provides cytogenetic and technical services, establishes scientific protocols and supports basic research on the UW-Madison campus. WiCell serves as home to the Wisconsin International Stem Cell Bank. This stem cell repository stores, characterizes and provides access to stem cell lines for use in research and clinical development. The cell bank originally stored the first five human Embryonic stem cell lines derived by Dr. James Thomson of UW–Madison. It currently houses human embryonic stem cell lines, induced pluripotent stem cell lines, clinical grade cell lines developed in accordance with Good Manufacturing Practices (GMP) and differentiated cell lines including neural progenitor cells. To support continued progress in the field and help unlock the therapeutic potential of stem cells, in 2005 WiCell began providing cytogenetic services and quality control testing services. These services allow scientists to identify genetic abnormalities in cells or changes in stem cell colonies that might affect research results. Organization Chartered with a mission to support scientific investigation and research at UW–Madison, WiCell collaborates with faculty members and provides support with stem cell research projects. The institute established its cytogenetic laboratory to meet the growing needs of academic and commercial researchers to monitor genetic stability in stem cell cultures. Facilities WiCell maintains its stem cell banking facilities, testing and quality assurance laboratories and scientific team in UW–Madison’s University Research Park. UW–Madison faculty members use the institute’s laboratory space to conduct research, improve stem cell culture techniques and develop materials used in stem cell research. To ensure the therapeutic relevance of its cell lines, WiCell banks clinical grade cells under GMP guidelines. The organization works cooperatively with Waisman Biomanufacturing, a provider of cGMP manufacturing services for materials and therapeutics qualified for human clinical trials. Selected technologies The following technologies employed by WiCell allow scientists to conduct stem cell research with greater assurance of reproducible results, an expectation for publication in peer-reviewed journals. 
Scientific tools and services include: G-banded karyotyping, a baseline genomic screen that helps monitor genetic stability as cell colonies grow; Spectral karyotyping, which may be used as an adjunct to g-banded karyotyping and helps define complex rearrangements as well as genetic marker chromosomes; CGH, SNP, and CGH + SNP Microarrays, which detect genomic gains and losses as well as copy number changes; Fluorescence in situ hybridization (FISH), a test used to confirm findings and screen for microdeletions or duplications of known targets; fastFISH, a rapid screen for large numbers of clones and a cost-effective tool for monitoring aneuploidy; and Short Tandem Repeat Analysis, which may be used to monitor the identity of a cell line and confirm the relationship of induced pluripotent stem cells to their parent. External links WiCell Research Institute Wisconsin Alumni Research Foundation Waisman Biomanufacturing List of cell lines distributed by WiCell (from Cellosaurus) Biotechnology Laboratories in the United States Research institutes in Wisconsin Stem cell research Stem cell researchers Stem cells
WiCell
[ "Chemistry", "Biology" ]
748
[ "Stem cell research", "Stem cell researchers", "Biotechnology", "Translational medicine", "Tissue engineering", "nan" ]
75,820
https://en.wikipedia.org/wiki/Cork%20%28material%29
Cork is an impermeable buoyant material. It is the phellem layer of bark tissue which is harvested for commercial use primarily from Quercus suber (the cork oak), which is native to southwest Europe and northwest Africa. Cork is composed of suberin, a hydrophobic substance. Because of its impermeable, buoyant, elastic, and fire retardant properties, it is used in a variety of products, the most common of which is wine stoppers. The montado landscape of Portugal produces approximately half of the cork harvested annually worldwide, with Corticeira Amorim being the leading company in the industry. Cork was examined microscopically by Robert Hooke, which led to his discovery and naming of the cell. Cork composition varies depending on geographic origin, climate and soil conditions, genetic origin, tree dimensions, age (virgin or reproduction), and growth conditions. However, in general, cork is made up of suberin (average of about 40%), lignin (22%), polysaccharides (cellulose and hemicellulose) (18%), extractables (15%) and others. History Cork is a natural material used by humans for over 5,000 years. It is a material whose applications have been known since antiquity, especially in floating devices and as stopper for beverages, mainly wine, whose market, from the early twentieth century, had a massive expansion, particularly due to the development of several cork-based agglomerates. In China, Egypt, Babylon, and Persia from about 3000 BC, cork was already used for sealing containers, fishing equipment, and domestic applications. In ancient Greece (1600 to 1100 years BC) cork was used in footwear, to manufacture a type of sandals attached to the foot by straps, generally leather and with a sole in cork or leather. In the second century AD, a Greek physician, Dioscorides, noted several medical applications of cork, mainly for hair loss treatment. Nowadays, the majority of people know cork for its use as stoppers in wine bottles. The innovation of using cork as stopper can be traced back to the late 17th century, attributed to Dom Pierre Pérignon. Cork stoppers were adopted in 1729 by Ruinart and in 1973 by Moët et Chandon. Structure Cork presents a characteristic cellular structure in which the cells have usually a pentagonal or hexagonal shape. The cellular wall consists of a thin, lignin-rich middle lamella (internal primary wall), a thick secondary wall made up from alternating suberin and wax lamella, and a thin tertiary wall of polysaccharides. Some studies suggest that the secondary wall is lignified, and therefore, may not consist exclusively of suberin and waxes. The cells of cork are filled with a gas mixture similar to air, making them behave as authentic "pads," which contributes to the capability of cork to recover after compression. Sources There are about 2,200,000 hectares of cork oak (Quercus suber) forest in the Mediterranean basin, the native area of the species. The most extensively managed habitats are in Portugal (34%) and in Spain (27%). Annual production is about 300,000 tons; 49.6% from Portugal, 30.5% from Spain, 5.8% from Morocco, 4.9% from Algeria, 3.5% from Tunisia, 3.1% from Italy, and 2.6% from France. Once the trees are about 25 years old the cork is traditionally stripped from the trunks every nine years, with the first two harvests generally producing lower quality cork (male cork or virgin cork). The trees live for about 300 years. The cork industry is generally regarded as environmentally friendly. 
Cork production is generally considered sustainable because the cork tree is not cut down to obtain cork; only the bark is stripped to harvest the cork. The tree continues to live and grow. The sustainability of production and the easy recycling of cork products and by-products are two of its most distinctive aspects. Cork oak forests also prevent desertification and are a particular habitat in the Iberian Peninsula and the refuge of various endangered species. Carbon footprint studies conducted by Corticeira Amorim, Oeneo Bouchage of France and the Cork Supply Group of Portugal concluded that cork is the most environmentally friendly wine stopper in comparison to other alternatives. Corticeira Amorim's study, in particular ("Analysis of the life cycle of Cork, Aluminum and Plastic Wine Closures"), was developed by PricewaterhouseCoopers, according to ISO 14040. Results concluded that, concerning the emission of greenhouse gases, each plastic stopper releases 10 times more CO2, and each aluminium screw cap 26 times more CO2, than does a cork stopper. For example, to produce 1,000 cork stoppers 1.5 kg of CO2 are emitted, but to produce the same number of plastic stoppers 14 kg of CO2 are emitted, and for the same number of aluminium screw caps 37 kg of CO2 are emitted. The Chinese cork oak is native to East Asia and is cultivated to a limited extent in China; the cork produced is considered inferior to that of Q. suber and is used to produce agglomerated cork products. The so-called "cork trees" (Phellodendron) are unrelated to the cork oak; they have corky bark, but it is not thick enough for cork production. Harvesting Cork is extracted only from early May to late August, when the cork can be separated from the tree without causing permanent damage. When the tree reaches 25–30 years of age and about 24 in (60 cm) in circumference, the cork can be removed for the first time. However, this first harvest almost always produces poor-quality or virgin cork (Portuguese ; Spanish or ). The workers who specialize in removing the cork are known as extractors. An extractor uses a very sharp axe to make two types of cuts on the tree: one horizontal cut around the plant, called a crown or necklace, at a height of about two to three times the circumference of the tree, and several vertical cuts called rulers or openings. This is the most delicate phase of the work because, even though cutting the cork requires significant force, the extractor must not damage the underlying phellogen or the tree will be harmed. To free the cork from the tree, the extractor pushes the handle of the axe into the rulers. A good extractor needs to use a firm but precise touch in order to free a large amount of cork without damaging the product or tree. These freed portions of the cork are called planks. The planks are usually carried off by hand since cork forests are rarely accessible to vehicles. The cork is stacked in piles in the forest or in yards at a factory and traditionally left to dry, after which it can be loaded onto a truck and shipped to a processor. Bark from initial harvests can be used to make flooring, shoes, insulation and other industrial products. Subsequent extractions usually occur at intervals of nine years, though it can take up to thirteen for the cork to reach an acceptable size. If the product is of high quality it is known as gentle cork (Portuguese , but also only if it is the second time; Spanish , also restricted to the second time), and, ideally, is used to make stoppers for wine and champagne bottles.
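Two of the figures quoted above lend themselves to quick arithmetic: the life-cycle emissions per 1,000 closures, and the harvesting rhythm (first stripping at about 25 years, then roughly every nine years over a lifespan of about 300 years). The Python sketch below simply restates those numbers per stopper and per tree; the lifespan and interval come from the text, and everything else is plain unit conversion.

# Per-stopper CO2 figures implied by the quoted life-cycle numbers
# (kg of CO2 emitted per 1,000 closures produced).
emissions_kg_per_1000 = {"cork": 1.5, "plastic": 14.0, "aluminium screw cap": 37.0}
for closure, kg in emissions_kg_per_1000.items():
    grams_per_stopper = kg * 1000.0 / 1000.0   # kg per 1,000 closures -> g per closure
    print(f"{closure}: about {grams_per_stopper:.1f} g CO2 per stopper")

# Rough number of harvests over a cork oak's working life:
# first stripping at ~25 years, then every ~9 years, lifespan ~300 years.
first_harvest_age, interval_years, lifespan_years = 25, 9, 300
harvests = 1 + (lifespan_years - first_harvest_age) // interval_years
print(f"~{harvests} harvests over a {lifespan_years}-year lifespan")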
Properties and uses Cork's elasticity combined with its near-impermeability makes it suitable as a material for bottle stoppers, especially for wine bottles. Cork stoppers represent about 60% of all cork based production. Cork has an almost zero Poisson's ratio, which means the radius of a cork does not change significantly when squeezed or pulled. Cork is an excellent gasket material. Some carburetor float bowl gaskets are made of cork, for example. Cork is also an essential element in the production of badminton shuttlecocks. Cork's bubble-form structure and natural fire retardant make it suitable for acoustic and thermal insulation in house walls, floors, ceilings, and facades. The by-product of more lucrative stopper production, corkboard, is gaining popularity as a non-allergenic, easy-to-handle and safe alternative to petrochemical-based insulation products. Cork is also used to make vinyl record slipmats, due to its ability to not attract dust. They also dampen static and vibrations. Sheets of cork, also often the by-product of stopper production, are used to make bulletin boards as well as floor and wall tiles. Cork's low density makes it a suitable material for fishing floats and buoys, as well as handles for fishing rods (as an alternative to neoprene). Granules of cork can also be mixed into concrete. The composites made by mixing cork granules and cement have lower thermal conductivity, lower density, and good energy absorption. Some of the property ranges of the composites are density (400–1500 kg/m3), compressive strength (1–26 MPa), and flexural strength (0.5–4.0 MPa). Use in wine bottling As late as the mid-17th century, French vintners did not use cork stoppers, using instead oil-soaked rags stuffed into the necks of bottles. Wine corks can be made of either a single piece of cork, or composed of particles, as in champagne corks; corks made of granular particles are called "agglomerated corks". Natural cork closures are used for about 80% of the 20 billion bottles of wine produced each year. After a decline in use as wine-stoppers due to the increase in the use of synthetic alternatives, cork wine-stoppers are making a comeback and currently represent approximately 60% of wine-stoppers in 2016. Because of the cellular structure of cork, it is easily compressed upon insertion into a bottle and will expand to form a tight seal. The interior diameter of the neck of glass bottles tends to be inconsistent, making this ability to seal through variable contraction and expansion an important attribute. However, unavoidable natural flaws, channels, and cracks in the bark make the cork itself highly inconsistent. In a 2005 closure study, 45% of corks showed gas leakage during pressure testing both from the sides of the cork as well as through the cork body itself. Since the mid-1990s, a number of wine brands have switched to alternative wine closures such as plastic stoppers, screw caps, or other closures. During 1972 more than half of the Australian bottled wine went bad due to corking. A great deal of anger and suspicion was directed at Portuguese and Spanish cork suppliers who were suspected of deliberately supplying bad cork to non-EEC wine makers to help prevent cheap imports. Cheaper wine makers developed the aluminium "Stelvin" cap with a polypropylene stopper wad. More expensive wines and carbonated varieties continued to use cork, although much closer attention was paid to the quality. 
Even so, some high premium makers prefer the Stelvin as it is a guarantee that the wine will be good even after many decades of ageing. Some consumers may have conceptions about screw caps being representative of lower quality wines, due to their cheaper price; however, in Australia, for example, much of the non-sparkling wine production now uses these Stelvin caps as a cork alternative, although some have recently switched back to cork citing issues using screw caps. The alternatives to cork have both advantages and disadvantages. For example, screwtops are generally considered to offer a trichloroanisole (TCA) free seal, but they also reduce the oxygen transfer rate between the bottle and the atmosphere to almost zero, which can lead to a reduction in the quality of the wine. TCA is the main documented cause of cork taint in wine. However, some in the wine industry say natural cork stoppers are important because they allow oxygen to interact with wine for proper aging, and are best suited for wines purchased with the intent to age. Stoppers which resemble natural cork very closely can be made by isolating the suberin component of the cork from the undesirable lignin, mixing it with the same substance used for contact lenses and an adhesive, and molding it into a standardized product, free of TCA or other undesirable substances. Composite corks with real cork veneers are used in cheaper wines. Celebrated Australian wine writer and critic James Halliday has written that since a cork placed inside the neck of a wine bottle is 350-year-old technology, it is logical to explore other more modern and precise methods of keeping wine safe. The study "Analysis of the life cycle of Cork, Aluminum and Plastic Wine Closures," conducted by PricewaterhouseCoopers and commissioned by a major cork manufacturer, Amorim, concluded that cork is the most environmentally responsible stopper, in a one-year life cycle analysis comparison with plastic stoppers and aluminum screw caps. Other uses On 28 November 2007, the Portuguese national postal service CTT issued the world's first postage stamp made of cork. In musical instruments, particularly woodwind instruments, where it is used to fasten together segments of the instrument, making the seams airtight. Low quality conducting baton handles are also often made out of cork. In shoes, especially those using welt construction to improve climate control and comfort. Because it is impermeable and moisture-resistant, cork is often used as an alternative to leather in handbags, wallets, and other fashion items. To make bricks for the outer walls of houses, as in Portugal's pavilion at Expo 2000. As the core of both baseballs and cricket balls. A corked bat is made by replacing the interior of a baseball bat with cork – a practice known as "corking". It was historically a method of cheating at baseball; the efficacy of the practice is now discredited. In various forms, in spacecraft heat shields and fairings. In the paper pick-up mechanisms in inkjet and laser printers. To make later-model pith helmets. Hung from hats to keep insects away. (See cork hat) As a core material in sandwich composite construction. As the friction lining material of an automatic transmission clutch, as designed in certain mopeds. Alternative of wood or aluminium in automotive interiors. Cork slabs are sometimes used by orchid growers as a natural mounting material. Cork paddles are used by glass blowers to manipulate and shape hot molten glass. 
Many racing bicycles have their handlebars wrapped in cork-based tape manufactured in a variety of colors. To make architectural models. See also Bung Cork Boat (vessel) Cork borer Cork cambium Corkscrew Cricket ball Notes References Margarida Pi i Contallé. 2006. Laboratory head in Manuel Serra Hongos y micotoxinas en tapones de corcho. Propuesta de límites micológicos aceptables Cork production corkfacts.com Instituto de Promoción del Corcho, Extremadura iprocor.org Analysis of the life cycle of Cork, Aluminium and Plastic Wine Closures Henley, Paul, BBC.com (18 September 2008). "Urging vintners to put a cork in it". PricewaterhouseCoopers/ECOBILAN (October 2008). Analysis of the life cycle of Cork, Aluminium and Plastic Wine Closures Cork - Forest in a Bottle. 2008. External links Cork Quality Council Book review: To cork or not to cork Material Properties Data: Cork Cork Recycling Initiative. 2017. Wine packaging and storage Non-timber forest products Materials Plant anatomy Plant morphology
Cork (material)
[ "Physics", "Biology" ]
3,226
[ "Plant morphology", "Materials", "Plants", "Matter" ]
75,873
https://en.wikipedia.org/wiki/Nernst%20equation
In electrochemistry, the Nernst equation is a chemical thermodynamical relationship that permits the calculation of the reduction potential of a reaction (half-cell or full cell reaction) from the standard electrode potential, absolute temperature, the number of electrons involved in the redox reaction, and activities (often approximated by concentrations) of the chemical species undergoing reduction and oxidation respectively. It was named after Walther Nernst, a German physical chemist who formulated the equation. Expression General form with chemical activities When an oxidizer (Ox) accepts a number z of electrons (e−) to be converted into its reduced form (Red), the half-reaction is expressed as: Ox + ze- -> Red The reaction quotient (Qr), also often called the ion activity product (IAP), is the ratio between the chemical activities (a) of the reduced form (the reductant, aRed) and the oxidized form (the oxidant, aOx). The chemical activity of a dissolved species corresponds to its true thermodynamic concentration taking into account the electrical interactions between all ions present in solution at elevated concentrations. For a given dissolved species, its chemical activity (a) is the product of its activity coefficient (γ) by its molar (mol/L solution), or molal (mol/kg water), concentration (C): a = γ C. So, if the concentrations (C, also denoted here below with square brackets [ ]) of all the dissolved species of interest are sufficiently low and their activity coefficients are close to unity, their chemical activities can be approximated by their concentrations, as commonly done when simplifying, or idealizing, a reaction for didactic purposes: Qr = aRed / aOx ≈ [Red] / [Ox]. At chemical equilibrium, the ratio of the activity of the reaction product (aRed) to the reagent activity (aOx) is equal to the equilibrium constant K of the half-reaction: K = aRed / aOx. Standard thermodynamics also says that the actual Gibbs free energy ΔG is related to the free energy change under standard state ΔG° by the relationship ΔG = ΔG° + RT ln Qr, where Qr is the reaction quotient and R is the universal ideal gas constant. The cell potential E associated with the electrochemical reaction is defined as the decrease in Gibbs free energy per coulomb of charge transferred, which leads to the relationship ΔG = −zFE. The constant F (the Faraday constant) is a unit conversion factor F = NAe, where NA is the Avogadro constant and e is the fundamental electron charge. This immediately leads to the Nernst equation, which for an electrochemical half-cell is Ered = E°red − (RT / zF) ln Qr = E°red − (RT / zF) ln(aRed / aOx). For a complete electrochemical reaction (full cell), the equation can be written as Ecell = E°cell − (RT / zF) ln Qr where: Ered is the half-cell reduction potential at the temperature of interest, E°red is the standard half-cell reduction potential, Ecell is the cell potential (electromotive force) at the temperature of interest, E°cell is the standard cell potential, R is the universal ideal gas constant: R = 8.314 J K−1 mol−1, T is the temperature in kelvins, z is the number of electrons transferred in the cell reaction or half-reaction, F is the Faraday constant, the magnitude of charge (in coulombs) per mole of electrons: F = 96485 C mol−1, Qr is the reaction quotient of the cell reaction, and a is the chemical activity for the relevant species, where aRed is the activity of the reduced form and aOx is the activity of the oxidized form. Thermal voltage At room temperature (25 °C), the thermal voltage VT = RT / F is approximately 25.693 mV. The Nernst equation is frequently expressed in terms of base-10 logarithms (i.e., common logarithms) rather than natural logarithms, in which case it is written E = E° − (λVT / z) log10 Qr, where λ = ln(10) ≈ 2.3026 and λVT ≈ 0.05916 Volt.
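A short numerical companion to the expressions above: the Python sketch below reproduces the thermal voltage and the ≈59.16 mV-per-decade factor from R, T and F, and then evaluates the half-cell form for a hypothetical one-electron couple (the standard potential and activities in the example are invented for illustration).

# Numerical check of the constants quoted above plus a worked half-cell example.
import math

R = 8.314462618        # J mol^-1 K^-1
F = 96485.33212        # C mol^-1
T = 298.15             # K (25 degrees C)

VT = R * T / F                                        # thermal voltage
print(f"thermal voltage VT = {VT*1000:.3f} mV")       # ~25.693 mV
print(f"ln(10)*VT         = {math.log(10)*VT*1000:.2f} mV per decade")  # ~59.16 mV

def nernst_half_cell(E_standard, z, a_red, a_ox, T=298.15):
    """E_red = E0_red - (RT / zF) * ln(a_red / a_ox)."""
    return E_standard - (R * T) / (z * F) * math.log(a_red / a_ox)

# Hypothetical one-electron couple with E0 = +0.77 V and a 10:1 reduced:oxidized activity ratio.
print(f"E = {nernst_half_cell(0.77, 1, a_red=0.10, a_ox=0.01):.3f} V")

With a ten-to-one activity ratio of reduced to oxidized species, the potential sits one decade's worth, about 59 mV, below the standard value.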
Form with activity coefficients and concentrations Similarly to equilibrium constants, activities are always measured with respect to the standard state (1 mol/L for solutes, 1 atm for gases, and T = 298.15 K, i.e., 25 °C or 77 °F). The chemical activity of a species , , is related to the measured concentration via the relationship , where is the activity coefficient of the species . Because activity coefficients tend to unity at low concentrations, or are unknown or difficult to determine at medium and high concentrations, activities in the Nernst equation are frequently replaced by simple concentrations and then, formal standard reduction potentials used. Taking into account the activity coefficients () the Nernst equation becomes: Where the first term including the activity coefficients () is denoted and called the formal standard reduction potential, so that can be directly expressed as a function of and the concentrations in the simplest form of the Nernst equation: Formal standard reduction potential When wishing to use simple concentrations in place of activities, but that the activity coefficients are far from unity and can no longer be neglected and are unknown or too difficult to determine, it can be convenient to introduce the notion of the "so-called" standard formal reduction potential () which is related to the standard reduction potential as follows: So that the Nernst equation for the half-cell reaction can be correctly formally written in terms of concentrations as: and likewise for the full cell expression. According to Wenzel (2020), a formal reduction potential is the reduction potential that applies to a half reaction under a set of specified conditions such as, e.g., pH, ionic strength, or the concentration of complexing agents. The formal reduction potential is often a more convenient, but conditional, form of the standard reduction potential, taking into account activity coefficients and specific conditions characteristics of the reaction medium. Therefore, its value is a conditional value, i.e., that it depends on the experimental conditions and because the ionic strength affects the activity coefficients, will vary from medium to medium. Several definitions of the formal reduction potential can be found in the literature, depending on the pursued objective and the experimental constraints imposed by the studied system. The general definition of refers to its value determined when . A more particular case is when is also determined at pH 7, as e.g. for redox reactions important in biochemistry or biological systems. Determination of the formal standard reduction potential when 1 The formal standard reduction potential can be defined as the measured reduction potential of the half-reaction at unity concentration ratio of the oxidized and reduced species (i.e., when 1) under given conditions. Indeed: as, , when , , when , because , and that the term is included in . The formal reduction potential makes possible to more simply work with molar (mol/L, M) or molal (mol/kg , m) concentrations in place of activities. Because molar and molal concentrations were once referred as formal concentrations, it could explain the origin of the adjective formal in the expression formal potential. The formal potential is thus the reversible potential of an electrode at equilibrium immersed in a solution where reactants and products are at unit concentration. If any small incremental change of potential causes a change in the direction of the reaction, i.e. 
from reduction to oxidation or vice versa, the system is close to equilibrium, reversible and is at its formal potential. When the formal potential is measured under standard conditions (i.e. the activity of each dissolved species is 1 mol/L, T = 298.15 K = 25 °C = 77 °F, = 1 bar) it becomes de facto a standard potential. According to Brown and Swift (1949): "A formal potential is defined as the potential of a half-cell, measured against the standard hydrogen electrode, when the total concentration of each oxidation state is one formal". In this case, as for the standard reduction potentials, the concentrations of dissolved species remain equal to one molar (M) or one molal (m), and so are said to be one formal (F). So, expressing the concentration in molarity (1 mol/L): The term formal concentration (F) is now largely ignored in the current literature and can be commonly assimilated to molar concentration (M), or molality (m) in case of thermodynamic calculations. The formal potential is also found halfway between the two peaks in a cyclic voltammogram, where at this point the concentration of Ox (the oxidized species) and Red (the reduced species) at the electrode surface are equal. The activity coefficients and are included in the formal potential , and because they depend on experimental conditions such as temperature, ionic strength, and pH, cannot be referred as an immutable standard potential but needs to be systematically determined for each specific set of experimental conditions. Formal reduction potentials are applied to simplify calculations of a considered system under given conditions and measurements interpretation. The experimental conditions in which they are determined and their relationship to the standard reduction potentials must be clearly described to avoid to confuse them with standard reduction potentials. Formal standard reduction potential at pH 7 Formal standard reduction potentials () are also commonly used in biochemistry and cell biology for referring to standard reduction potentials measured at pH 7, a value closer to the pH of most physiological and intracellular fluids than the standard state pH of 0. The advantage is to defining a more appropriate redox scale better corresponding to real conditions than the standard state. Formal standard reduction potentials () allow to more easily estimate if a redox reaction supposed to occur in a metabolic process or to fuel microbial activity under some conditions is feasible or not. While, standard reduction potentials always refer to the standard hydrogen electrode (SHE), with [] = 1 M corresponding to a pH 0, and fixed arbitrarily to zero by convention, it is no longer the case at a pH of 7. Then, the reduction potential of a hydrogen electrode operating at pH 7 is -0.413 V with respect to the standard hydrogen electrode (SHE). Expression of the Nernst equation as a function of pH The and pH of a solution are related by the Nernst equation as commonly represented by a Pourbaix diagram . explicitly denotes expressed versus the standard hydrogen electrode (SHE). For a half cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side): The half-cell standard reduction potential is given by where is the standard Gibbs free energy change, is the number of electrons involved, and is the Faraday's constant. The Nernst equation relates pH and as follows:   where curly brackets indicate activities, and exponents are shown in the conventional manner. 
This equation is the equation of a straight line for as a function of pH with a slope of volt (pH has no units). This equation predicts lower at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for the reduction of H+ into H2. is then often noted as to indicate that it refers to the standard hydrogen electrode (SHE) whose = 0 by convention under standard conditions (T = 298.15 K = 25 °C = 77 F, Pgas = 1 atm (1.013 bar), concentrations = 1 M and thus pH = 0). Main factors affecting the formal standard reduction potentials The main factor affecting the formal reduction potentials in biochemical or biological processes is most often the pH. To determine approximate values of formal reduction potentials, neglecting in a first approach changes in activity coefficients due to ionic strength, the Nernst equation has to be applied taking care to first express the relationship as a function of pH. The second factor to be considered are the values of the concentrations taken into account in the Nernst equation. To define a formal reduction potential for a biochemical reaction, the pH value, the concentrations values and the hypotheses made on the activity coefficients must always be explicitly indicated. When using, or comparing, several formal reduction potentials they must also be internally consistent. Problems may occur when mixing different sources of data using different conventions or approximations (i.e., with different underlying hypotheses). When working at the frontier between inorganic and biological processes (e.g., when comparing abiotic and biotic processes in geochemistry when microbial activity could also be at work in the system), care must be taken not to inadvertently directly mix standard reduction potentials versus SHE (pH = 0) with formal reduction potentials (pH = 7). Definitions must be clearly expressed and carefully controlled, especially if the sources of data are different and arise from different fields (e.g., picking and mixing data from classical electrochemistry and microbiology textbooks without paying attention to the different conventions on which they are based). Examples with a Pourbaix diagram To illustrate the dependency of the reduction potential on pH, one can simply consider the two oxido-reduction equilibria determining the water stability domain in a Pourbaix diagram . When water is submitted to electrolysis by applying a sufficient difference of electrical potential between two electrodes immersed in water, hydrogen is produced at the cathode (reduction of water protons) while oxygen is formed at the anode (oxidation of water oxygen atoms). The same may occur if a reductant stronger than hydrogen (e.g., metallic Na) or an oxidant stronger than oxygen (e.g., F2) enters in contact with water and reacts with it. 
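Before the worked water-stability example that follows, it may help to see the slope numerically. For a reduction half-reaction involving h protons and z electrons, the pH-dependent term in the Nernst equation gives a straight line of slope −(h/z) × 0.05916 V per pH unit at 25 °C; the Python sketch below evaluates this for a few proton-to-electron ratios (the stoichiometries other than 2/2 and 4/4 are generic illustrations, not taken from the text) and then tabulates the two water-stability lines discussed next.

# Slope of Eh versus pH for a reduction half-reaction with h protons and z electrons:
# dEh/dpH = -(h/z) * ln(10) * RT/F, i.e. -59.16 mV per pH unit when h = z.
import math

R, F, T = 8.314462618, 96485.33212, 298.15
mV_per_pH = math.log(10) * R * T / F * 1000.0          # ~59.16 mV

for h, z in [(2, 2), (4, 4), (1, 2), (8, 5)]:
    print(f"h={h}, z={z}: slope = {-(h / z) * mV_per_pH:.2f} mV per pH unit")

# The two water-stability lines treated in the following passage (volts vs SHE):
for pH in (0, 7, 14):
    lower = 0.000 - 0.05916 * pH            # H+/H2 line
    upper = 1.229 - 0.05916 * pH            # O2/H2O line
    print(f"pH {pH:2d}: H2 line {lower:+.3f} V, O2 line {upper:+.3f} V")

At pH 7 this reproduces the −0.414 V and +0.815 V values derived in the next passage.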
In such a diagram (the simplest possible version of a Pourbaix diagram), the water stability domain (grey surface) is delimited in terms of redox potential by two inclined red dashed lines: Lower stability line, with hydrogen gas evolution due to proton reduction at very low Eh: 2 H+ + 2 e− → H2 (cathode: reduction). Higher stability line, with oxygen gas evolution due to the oxidation of water oxygen at very high Eh: 2 H2O → O2 + 4 H+ + 4 e− (anode: oxidation). When solving the Nernst equation for each corresponding reduction reaction (the water oxidation reaction producing oxygen needs to be reverted), both equations have a similar form because the number of protons and the number of electrons involved within a reaction are the same and their ratio is one (2/2 for H2 and 4/4 for O2 respectively), so it simplifies when solving the Nernst equation expressed as a function of pH. The result can be numerically expressed as follows: Eh = −0.05916 pH for the H+/H2 couple, and Eh = 1.229 − 0.05916 pH for the O2/H2O couple. Note that the slopes of the two water stability domain upper and lower lines are the same (−59.16 mV/pH unit), so they are parallel on a Pourbaix diagram. As the slopes are negative, at high pH both hydrogen and oxygen evolution require a much lower reduction potential than at low pH. For the reduction of H+ into H2 the above-mentioned relationship becomes Eh = −0.05916 pH, because by convention E°h = 0 V for the standard hydrogen electrode (SHE: pH = 0). So, at pH = 7, Eh = −0.414 V for the reduction of protons. For the reduction of O2 into 2 H2O the above-mentioned relationship becomes Eh = 1.229 − 0.05916 pH, because E°h = +1.229 V with respect to the standard hydrogen electrode (SHE: pH = 0). So, at pH = 7, Eh = +0.815 V for the reduction of oxygen. The offset of −414 mV in Eh is the same for both reduction reactions because they share the same linear relationship as a function of pH and the slopes of their lines are the same. This can be directly verified on a Pourbaix diagram. For other reduction reactions, the value of the formal reduction potential at a pH of 7, commonly referred to for biochemical reactions, also depends on the slope of the corresponding line in a Pourbaix diagram, i.e. on the ratio of the number of H+ to the number of e− involved in the reduction reaction, and thus on the stoichiometry of the half-reaction. The determination of the formal reduction potential at pH = 7 for a given biochemical half-reaction thus requires calculating it with the corresponding Nernst equation as a function of pH. One cannot simply apply an offset of −414 mV to the Eh value (SHE) when the H+/e− ratio differs from 1. Applications in biology Beside important redox reactions in biochemistry and microbiology, the Nernst equation is also used in physiology for calculating the electric potential of a cell membrane with respect to one type of ion. It can be linked to the acid dissociation constant. Nernst potential The Nernst equation has a physiological application when used to calculate the potential of an ion of charge z across a membrane. This potential is determined using the concentration of the ion both inside and outside the cell: E = (RT / zF) ln([ion outside cell] / [ion inside cell]). When the membrane is in thermodynamic equilibrium (i.e., no net flux of ions), and if the cell is permeable to only one ion, then the membrane potential must be equal to the Nernst potential for that ion.
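To put numbers on the membrane-potential use of the equation just described (and on the multi-ion Goldman form treated in the next section), the Python sketch below computes a single-ion Nernst potential and a two-ion Goldman potential at body temperature. The permeability ratio and the ion concentrations are typical textbook-style values chosen purely for illustration, not measurements quoted in this article.

# Worked example: single-ion Nernst potential and two-ion Goldman potential.
import math

R, F = 8.314462618, 96485.33212
T = 310.0                      # ~37 degrees C (mammalian body temperature)

def nernst_potential(z, conc_out, conc_in):
    """E_ion = (RT / zF) * ln([ion]_out / [ion]_in)."""
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

def goldman_two_ion(P_K, K_out, K_in, P_Na, Na_out, Na_in):
    """Em = (RT / F) * ln((P_K*[K]out + P_Na*[Na]out) / (P_K*[K]in + P_Na*[Na]in))."""
    return (R * T) / F * math.log((P_K * K_out + P_Na * Na_out) /
                                  (P_K * K_in + P_Na * Na_in))

# Illustrative concentrations in mmol/L (only their ratios matter here).
K_out, K_in = 5.0, 140.0
Na_out, Na_in = 145.0, 10.0
print(f"E_K  = {nernst_potential(+1, K_out, K_in)*1000:.1f} mV")
print(f"E_Na = {nernst_potential(+1, Na_out, Na_in)*1000:.1f} mV")
print(f"Em (P_K : P_Na = 1 : 0.05) = "
      f"{goldman_two_ion(1.0, K_out, K_in, 0.05, Na_out, Na_in)*1000:.1f} mV")

The two single-ion potentials bracket the mixed Goldman value, which lies closer to the potassium potential because potassium is taken here as the more permeant ion.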
Goldman equation When the membrane is permeable to more than one ion, as is inevitably the case, the resting potential can be determined from the Goldman equation, which is a solution of the Goldman–Hodgkin–Katz (GHK) flux equation under the constraint that the total current density driven by electrochemical force is zero: Em = (RT/F) ln[(Σ Pcation [cation]out + Σ Panion [anion]in) / (Σ Pcation [cation]in + Σ Panion [anion]out)], where Em is the membrane potential (in volts, equivalent to joules per coulomb), Pion is the permeability for that ion (in meters per second), [ion]out is the extracellular concentration of that ion (in moles per cubic meter, to match the other SI units, though the units strictly don't matter, as the ion concentration terms become a dimensionless ratio), [ion]in is the intracellular concentration of that ion (in moles per cubic meter), R is the ideal gas constant (joules per kelvin per mole), T is the temperature in kelvins, F is the Faraday constant (coulombs per mole). The potential across the cell membrane that exactly opposes net diffusion of a particular ion through the membrane is called the Nernst potential for that ion. As seen above, the magnitude of the Nernst potential is determined by the ratio of the concentrations of that specific ion on the two sides of the membrane. The greater this ratio, the greater the tendency for the ion to diffuse in one direction, and therefore the greater the Nernst potential required to prevent the diffusion. A similar expression exists that includes r (the absolute value of the transport ratio). This takes transporters with unequal exchanges into account. See: sodium-potassium pump where the transport ratio would be 2/3, so r equals 1.5 in the formula below. The reason why we insert a factor r = 1.5 here is that the current density by electrochemical force Je.c.(Na+) + Je.c.(K+) is no longer zero, but rather Je.c.(Na+) + 1.5Je.c.(K+) = 0 (as for both ions the flux by electrochemical force is compensated by that by the pump, i.e. Je.c. = −Jpump), altering the constraints for applying the GHK equation. The other variables are the same as above. The following example includes two ions: potassium (K+) and sodium (Na+). Chloride is assumed to be in equilibrium: Em = (RT/F) ln[(r PK [K+]out + PNa [Na+]out) / (r PK [K+]in + PNa [Na+]in)]. When chloride (Cl−) is taken into account, Em = (RT/F) ln[(r PK [K+]out + PNa [Na+]out + PCl [Cl−]in) / (r PK [K+]in + PNa [Na+]in + PCl [Cl−]out)]. Derivation Using Boltzmann factor For simplicity, we will consider a solution of redox-active molecules that undergo a one-electron reversible reaction and that have a standard potential of zero, and in which the activities are well represented by the concentrations (i.e. unit activity coefficient). The chemical potential μc of this solution is the difference between the energy barriers for taking electrons from and for giving electrons to the working electrode that is setting the solution's electrochemical potential. The ratio of oxidized to reduced molecules, [Ox]/[Red], is equivalent to the probability of being oxidized (giving electrons) over the probability of being reduced (taking electrons), which we can write in terms of the Boltzmann factor for these processes: [Ox]/[Red] = exp(−Egive/kT) / exp(−Etake/kT) = exp(μc/kT). Taking the natural logarithm of both sides gives μc = kT ln([Ox]/[Red]). If μc ≠ 0 at [Ox]/[Red] = 1, we need to add in this additional constant: μc = μc° + kT ln([Ox]/[Red]). Dividing the equation by e to convert from chemical potentials to electrode potentials, and remembering that kT/e = RT/F, we obtain the Nernst equation for the one-electron process Ox + e− → Red: E = E° + (kT/e) ln([Ox]/[Red]) = E° − (RT/F) ln([Red]/[Ox]). Using thermodynamics (chemical potential) Quantities here are given per molecule, not per mole, and so the Boltzmann constant k and the electron charge e are used instead of the gas constant R and Faraday's constant F. To convert to the molar quantities given in most chemistry textbooks, it is simply necessary to multiply by the Avogadro constant: R = kNA and F = eNA.
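A quick numerical check of this conversion, and of the thermal voltage kT/e = RT/F used in the one-electron Nernst equation above, is sketched below; it only uses standard physical constants and is not specific to any particular cell.

```python
# Check that k*N_A = R, e*N_A = F, and that kT/e = RT/F (the "thermal voltage").
k  = 1.380649e-23      # J/K, Boltzmann constant
e  = 1.602176634e-19   # C, elementary charge
NA = 6.02214076e23     # 1/mol, Avogadro constant

R = k * NA             # ~8.314 J/(mol*K), gas constant
F = e * NA             # ~96485 C/mol, Faraday constant

T = 298.15             # K
print(f"R = {R:.4f} J/(mol K), F = {F:.1f} C/mol")
print(f"kT/e = {k*T/e*1000:.2f} mV = RT/F = {R*T/F*1000:.2f} mV at 25 C")
# Both print ~25.69 mV; multiplied by ln(10) this gives the 59.16 mV/decade Nernst slope.
```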
The entropy of a molecule is defined as S = k ln Ω, where Ω is the number of states available to the molecule. The number of states must vary linearly with the volume V of the system (here an idealized system is considered for better understanding, so that activities are posited very close to the true concentrations). Fundamental statistical proof of the mentioned linearity goes beyond the scope of this section, but to see that this is true it is simpler to consider the usual isothermal process for an ideal gas, where the change of entropy ΔS = nR ln(V2/V1) takes place. It follows from the definition of entropy and from the condition of constant temperature and quantity of gas that the change in the number of states must be proportional to the relative change in volume V2/V1. In this sense there is no difference in the statistical properties of ideal gas atoms compared with the dissolved species of a solution with activity coefficients equaling one: particles freely "hang around" filling the provided volume, which is inversely proportional to the concentration c, so we can also write the entropy as S = k ln (constant × V) = −k ln (constant × c). The change in entropy from some state 1 to another state 2 is therefore ΔS = S2 − S1 = −k ln(c2/c1), so that the entropy of state 2 is S2 = S1 − k ln(c2/c1). If state 1 is at standard conditions, in which c1 is unity (e.g., 1 atm or 1 M), it will merely cancel the units of c2. We can, therefore, write the entropy of an arbitrary molecule A as S(A) = S°(A) − k ln[A], where S°(A) is the entropy at standard conditions and [A] denotes the concentration of A. The change in entropy for a reaction is then given by ΔSrxn = ΔS°rxn − k ln Qr per molecule of reaction (multiplying by the Avogadro constant gives ΔS°rxn − R ln Qr per mole). We define the ratio in the last term as the reaction quotient: Qr = (Π aj^νj) / (Π ai^νi), where the numerator is a product of reaction product activities, aj, each raised to the power of a stoichiometric coefficient, νj, and the denominator is a similar product of reactant activities. All activities refer to a time t. Under certain circumstances (see chemical equilibrium) each activity term such as a(A) may be replaced by a concentration term, [A]. In an electrochemical cell, the cell potential E is the chemical potential available from redox reactions. E is related to the Gibbs free energy change ΔG only by a constant: ΔG = −nFE per mole (or ΔG = −neE per molecule), where n is the number of electrons transferred and F is the Faraday constant. There is a negative sign because a spontaneous reaction has a negative Gibbs free energy ΔG and a positive potential E. The Gibbs free energy is related to the entropy by G = H − TS, where H is the enthalpy and T is the temperature of the system. Using these relations, we can now write the change in Gibbs free energy, ΔG = ΔH − TΔS = ΔG° + RT ln Qr, and the cell potential, E = E° − (RT/nF) ln Qr. This is the more general form of the Nernst equation. For the redox reaction Ox + n e− → Red, Qr = [Red]/[Ox], and we have: E = E° − (RT/nF) ln([Red]/[Ox]) = E° + (RT/nF) ln([Ox]/[Red]). The cell potential at standard temperature and pressure (STP), E°, is often replaced by the formal potential E°′, which includes the activity coefficients of the dissolved species under given experimental conditions (T, P, ionic strength, pH, and complexing agents) and is the potential that is actually measured in an electrochemical cell. Relation to the chemical equilibrium The standard Gibbs free energy ΔG° is related to the equilibrium constant K as follows: ΔG° = −RT ln K. At the same time, ΔG° is also equal to the product of the total charge (nF) transferred during the reaction and the cell potential (E°cell): ΔG° = −nFE°cell. The sign is negative, because the considered system performs the work and thus releases energy. So, −nFE°cell = −RT ln K, and therefore: E°cell = (RT/nF) ln K. Starting from the Nernst equation, one can also demonstrate the same relationship in the reverse way.
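Numerically, this last relation lets one estimate an equilibrium constant from a standard cell potential; a small sketch follows before the same relationship is rederived in the reverse direction below. It uses the familiar Daniell cell value of about +1.10 V with n = 2 purely as an illustration.

```python
import math

R = 8.314462618   # J/(mol*K)
F = 96485.33212   # C/mol
T = 298.15        # K

def equilibrium_constant(e_standard: float, n: int) -> float:
    """K = exp(n*F*E0 / (R*T)), rearranged from E0cell = (RT/nF) ln K."""
    return math.exp(n * F * e_standard / (R * T))

# Illustrative example: Daniell cell, E0 ~ +1.10 V with n = 2 electrons.
print(f"K ~ {equilibrium_constant(1.10, 2):.2e}")   # on the order of 10^37
```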
At chemical equilibrium, or thermodynamic equilibrium, the cell potential E = 0 and the reaction quotient (Qr) attains the special value known as the equilibrium constant (K): Qr = K. Therefore, 0 = E° − (RT/nF) ln K, or at standard state, E° = (RT/nF) ln K. We have thus related the standard electrode potential and the equilibrium constant of a redox reaction. Limitations In dilute solutions, the Nernst equation can be expressed directly in terms of concentrations (since activity coefficients are close to unity). But at higher concentrations, the true activities of the ions must be used. This complicates the use of the Nernst equation, since estimation of non-ideal activities of ions generally requires experimental measurements. The Nernst equation also only applies when there is no net current flow through the electrode. The activity of ions at the electrode surface changes when there is current flow, and there are additional overpotential and resistive loss terms which contribute to the measured potential. At very low concentrations of the potential-determining ions, the potential predicted by the Nernst equation approaches ±∞. This is physically meaningless because, under such conditions, the exchange current density becomes very low, and there may be no thermodynamic equilibrium necessary for the Nernst equation to hold. The electrode is called unpoised in such a case. Other effects tend to take control of the electrochemical behavior of the system, like the involvement of the solvated electron in electricity transfer and electrode equilibria, as analyzed by Alexander Frumkin and B. Damaskin, Sergio Trasatti, etc. Time dependence of the potential The expression of time dependence has been established by Karaoglanoff. Significance in other scientific fields The Nernst equation has been involved in the scientific controversy about cold fusion. Fleischmann and Pons, claiming that cold fusion could exist, calculated that a palladium cathode immersed in a heavy water electrolysis cell could achieve up to 10^27 atmospheres of pressure inside the crystal lattice of the metal of the cathode, enough pressure to cause spontaneous nuclear fusion. In reality, only 10,000–20,000 atmospheres were achieved. The American physicist John R. Huizenga claimed their original calculation was affected by a misinterpretation of the Nernst equation. He cited a paper about Pd–Zr alloys. The Nernst equation allows the calculation of the extent of reaction between two redox systems and can be used, for example, to assess whether a particular reaction will go to completion or not. At chemical equilibrium, the electromotive forces (emf) of the two half cells are equal. This allows the equilibrium constant of the reaction to be calculated and hence the extent of the reaction. See also Concentration cell Dependency of reduction potential on pH Electrode potential Galvanic cell Goldman equation Membrane potential Nernst–Planck equation Pourbaix diagram Reduction potential Solvated electron Standard electrode potential Standard electrode potential (data page) Standard apparent reduction potentials in biochemistry at pH 7 (data page) References External links Nernst/Goldman Equation Simulator Nernst Equation Calculator Interactive Nernst/Goldman Java Applet DoITPoMS Teaching and Learning Package- "The Nernst Equation and Pourbaix Diagrams" Walther Nernst Electrochemical equations Eponymous equations of physics
Nernst equation
[ "Physics", "Chemistry", "Mathematics" ]
5,580
[ "Equations of physics", "Mathematical objects", "Eponymous equations of physics", "Equations", "Electrochemistry", "Electrochemical equations" ]
76,408
https://en.wikipedia.org/wiki/Transverse%20wave
In physics, a transverse wave is a wave that oscillates perpendicularly to the direction of the wave's advance. In contrast, a longitudinal wave travels in the direction of its oscillations. All waves move energy from place to place without transporting the matter in the transmission medium if there is one. Electromagnetic waves are transverse without requiring a medium. The designation “transverse” indicates the direction of the wave is perpendicular to the displacement of the particles of the medium through which it passes, or in the case of EM waves, the oscillation is perpendicular to the direction of the wave. A simple example is given by the waves that can be created on a horizontal length of string by anchoring one end and moving the other end up and down. Another example is the waves that are created on the membrane of a drum. The waves propagate in directions that are parallel to the membrane plane, but each point in the membrane itself gets displaced up and down, perpendicular to that plane. Light is another example of a transverse wave, where the oscillations are the electric and magnetic fields, which point at right angles to the ideal light rays that describe the direction of propagation. Transverse waves commonly occur in elastic solids due to the shear stress generated; the oscillations in this case are the displacement of the solid particles away from their relaxed position, in directions perpendicular to the propagation of the wave. These displacements correspond to a local shear deformation of the material. Hence a transverse wave of this nature is called a shear wave. Since fluids cannot resist shear forces while at rest, propagation of transverse waves inside the bulk of fluids is not possible. In seismology, shear waves are also called secondary waves or S-waves. Transverse waves are contrasted with longitudinal waves, where the oscillations occur in the direction of the wave. The standard example of a longitudinal wave is a sound wave or "pressure wave" in gases, liquids, or solids, whose oscillations cause compression and expansion of the material through which the wave is propagating. Pressure waves are called "primary waves", or "P-waves" in geophysics. Water waves involve both longitudinal and transverse motions. Mathematical formulation Mathematically, the simplest kind of transverse wave is a plane linearly polarized sinusoidal one. "Plane" here means that the direction of propagation is unchanging and the same over the whole medium; "linearly polarized" means that the direction of displacement too is unchanging and the same over the whole medium; and the magnitude of the displacement is a sinusoidal function only of time and of position along the direction of propagation. The motion of such a wave can be expressed mathematically as follows. Let be the direction of propagation (a vector with unit length), and any reference point in the medium. Let be the direction of the oscillations (another unit-length vector perpendicular to d). The displacement of a particle at any point of the medium and any time t (seconds) will be where A is the wave's amplitude or strength, T is its period, v is the speed of propagation, and is its phase at t = 0 seconds at . All these parameters are real numbers. The symbol "•" denotes the inner product of two vectors. By this equation, the wave travels in the direction and the oscillations occur back and forth along the direction . The wave is said to be linearly polarized in the direction . 
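A small numerical sketch of the displacement just described is given below. It evaluates one standard way of writing that displacement, S(p, t) = A u sin(2π(t − (p − o)·d/v)/T + φ), for a wave travelling along x and oscillating along y; the particular parameter values are arbitrary illustrations, not values taken from the article.

```python
import numpy as np

def transverse_displacement(p, t, d, u, o, A=1.0, T=2.0, v=3.0, phi=0.0):
    """Displacement vector S(p, t) = A*u*sin(2*pi*(t - (p - o).d / v)/T + phi)
    of a plane, linearly polarized sinusoidal wave travelling along unit vector d
    and oscillating along unit vector u (perpendicular to d)."""
    d, u, o, p = map(np.asarray, (d, u, o, p))
    phase = 2.0 * np.pi * (t - np.dot(p - o, d) / v) / T + phi
    return A * np.sin(phase) * u

# Illustrative use: wave travelling along x, oscillating along y.
point = [1.5, 0.0, 0.0]
print(transverse_displacement(point, t=0.25, d=[1, 0, 0], u=[0, 1, 0], o=[0, 0, 0]))
```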
An observer that looks at a fixed point will see the particle there move in a simple harmonic (sinusoidal) motion with period T seconds, with maximum particle displacement A in each sense; that is, with a frequency of f = 1/T full oscillation cycles every second. A snapshot of all particles at a fixed time t will show the same displacement for all particles on each plane perpendicular to , with the displacements in successive planes forming a sinusoidal pattern, with each full cycle extending along by the wavelength λ = v T = v/f. The whole pattern moves in the direction with speed V. The same equation describes a plane linearly polarized sinusoidal light wave, except that the "displacement" S(, t) is the electric field at point and time t. (The magnetic field will be described by the same equation, but with a "displacement" direction that is perpendicular to both and , and a different amplitude.) Superposition principle In a homogeneous linear medium, complex oscillations (vibrations in a material or light flows) can be described as the superposition of many simple sinusoidal waves, either transverse or longitudinal. The vibrations of a violin string create standing waves, for example, which can be analyzed as the sum of many transverse waves of different frequencies moving in opposite directions to each other, that displace the string either up or down or left to right. The antinodes of the waves align in a superposition . Circular polarization If the medium is linear and allows multiple independent displacement directions for the same travel direction , we can choose two mutually perpendicular directions of polarization, and express any wave linearly polarized in any other direction as a linear combination (mixing) of those two waves. By combining two waves with same frequency, velocity, and direction of travel, but with different phases and independent displacement directions, one obtains a circularly or elliptically polarized wave. In such a wave the particles describe circular or elliptical trajectories, instead of moving back and forth. It may help understanding to revisit the thought experiment with a taut string mentioned above. Notice that you can also launch waves on the string by moving your hand to the right and left instead of up and down. This is an important point. There are two independent (orthogonal) directions that the waves can move. (This is true for any two directions at right angles, up and down and right and left are chosen for clarity.) Any waves launched by moving your hand in a straight line are linearly polarized waves. But now imagine moving your hand in a circle. Your motion will launch a spiral wave on the string. You are moving your hand simultaneously both up and down and side to side. The maxima of the side to side motion occur a quarter wavelength (or a quarter of a way around the circle, that is 90 degrees or π/2 radians) from the maxima of the up and down motion. At any point along the string, the displacement of the string will describe the same circle as your hand, but delayed by the propagation speed of the wave. Notice also that you can choose to move your hand in a clockwise circle or a counter-clockwise circle. These alternate circular motions produce right and left circularly polarized waves. To the extent your circle is imperfect, a regular motion will describe an ellipse, and produce elliptically polarized waves. 
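The construction described above can be checked numerically: two orthogonal, equal-amplitude oscillations a quarter period out of phase trace a circle, while unequal amplitudes give an ellipse. The following sketch, with an arbitrary amplitude and period, illustrates this.

```python
import numpy as np

# Sketch: combine two orthogonal, equal-amplitude linear oscillations that are
# 90 degrees (pi/2) out of phase and check that each point traces a circle.
A, T = 1.0, 2.0
t = np.linspace(0.0, T, 200)

y = A * np.sin(2 * np.pi * t / T)               # up-down component
z = A * np.sin(2 * np.pi * t / T + np.pi / 2)   # side-to-side, quarter period ahead

radius = np.hypot(y, z)
print(f"radius stays at {radius.min():.3f} .. {radius.max():.3f} (a circle of radius A)")

# With unequal amplitudes (e.g. 1.0 and 0.5) the same construction traces an ellipse.
```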
At the extreme of eccentricity your ellipse will become a straight line, producing linear polarization along the major axis of the ellipse. An elliptical motion can always be decomposed into two orthogonal linear motions of unequal amplitude and 90 degrees out of phase, with circular polarization being the special case where the two linear motions have the same amplitude. Power in a transverse wave in string (Let the linear mass density of the string be μ.) The kinetic energy of a mass element in a transverse wave is given by: In one wavelength, kinetic energy Using Hooke's law the potential energy in mass element And the potential energy for one wavelength So, total energy in one wavelength Therefore average power is See also Longitudinal wave Luminiferous aether – the postulated medium for light waves; accepting that light was a transverse wave prompted a search for evidence of this physical medium Shear wave splitting Sinusoidal plane-wave solutions of the electromagnetic wave equation Transverse mode Elastography Shear-wave elasticity imaging References External links Interactive simulation of transverse wave Wave types explained with high speed film and animations Transverse and Longitudinal Waves Introductory module on these waves at Connexions Wave mechanics Acoustics Waves Polarization (waves)
Transverse wave
[ "Physics" ]
1,654
[ "Physical phenomena", "Classical mechanics", "Acoustics", "Astrophysics", "Waves", "Motion (physics)", "Wave mechanics", "Polarization (waves)" ]
76,416
https://en.wikipedia.org/wiki/Maximum%20power%20transfer%20theorem
In electrical engineering, the maximum power transfer theorem states that, to obtain maximum external power from a power source with internal resistance, the resistance of the load must equal the resistance of the source as viewed from its output terminals. Moritz von Jacobi published the maximum power (transfer) theorem around 1840; it is also referred to as "Jacobi's law". The theorem results in maximum power transfer from the power source to the load, but not maximum efficiency of useful power out of total power consumed. If the load resistance is made larger than the source resistance, then efficiency increases (since a higher percentage of the source power is transferred to the load), but the magnitude of the load power decreases (since the total circuit resistance increases). If the load resistance is made smaller than the source resistance, then efficiency decreases (since most of the power ends up being dissipated in the source). Although the total power dissipated increases (due to a lower total resistance), the amount dissipated in the load decreases. The theorem states how to choose (so as to maximize power transfer) the load resistance, once the source resistance is given. It is a common misconception to apply the theorem in the opposite scenario. It does not say how to choose the source resistance for a given load resistance. In fact, the source resistance that maximizes power transfer from a voltage source is always zero (the hypothetical ideal voltage source), regardless of the value of the load resistance. The theorem can be extended to alternating current circuits that include reactance, and states that maximum power transfer occurs when the load impedance is equal to the complex conjugate of the source impedance. The mathematics of the theorem also applies to other physical interactions, such as: mechanical collisions between two objects, the sharing of charge between two capacitors, liquid flow between two cylinders, the transmission and reflection of light at the boundary between two media. Maximizing power transfer versus power efficiency The theorem was originally misunderstood (notably by Joule) to imply that a system consisting of an electric motor driven by a battery could not be more than 50% efficient, since the power dissipated as heat in the battery would always be equal to the power delivered to the motor when the impedances were matched. In 1880 this assumption was shown to be false by either Edison or his colleague Francis Robbins Upton, who realized that maximum efficiency was not the same as maximum power transfer. To achieve maximum efficiency, the resistance of the source (whether a battery or a dynamo) could be (or should be) made as close to zero as possible. Using this new understanding, they obtained an efficiency of about 90%, and proved that the electric motor was a practical alternative to the heat engine. The efficiency η is the ratio of the power dissipated by the load resistance R_L to the total power dissipated by the circuit (which includes the voltage source's resistance of R_S as well as R_L): η = P_L / (P_L + P_S) = R_L / (R_L + R_S). Consider three particular cases (note that voltage sources must have some resistance): If R_L/R_S → 0, then η → 0. Efficiency approaches 0% if the load resistance approaches zero (a short circuit), since all power is consumed in the source and no power is consumed in the short. If R_L = R_S, then η = 1/2. Efficiency is only 50% if the load resistance equals the source resistance (which is the condition of maximum power transfer).
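The trade-off between delivered power and efficiency can be tabulated with a short sketch; the source values below (10 V, 1 Ω) are arbitrary illustrative choices, not values from the text.

```python
# Sketch of the trade-off between delivered power and efficiency for a source
# with (illustrative) V_s = 10 V and R_s = 1 ohm driving a load R_l.
V_S, R_S = 10.0, 1.0

def load_power(r_l: float) -> float:
    """Power dissipated in the load: P = V^2 * R_l / (R_s + R_l)^2."""
    return V_S**2 * r_l / (R_S + r_l)**2

def efficiency(r_l: float) -> float:
    """Fraction of the total dissipated power that ends up in the load."""
    return r_l / (R_S + r_l)

for r_l in (0.1, 0.5, 1.0, 2.0, 10.0, 100.0):
    print(f"R_l = {r_l:6.1f}  P_load = {load_power(r_l):6.3f} W  "
          f"efficiency = {efficiency(r_l):5.1%}")
# Delivered power peaks at R_l = R_s (25 W here, at 50% efficiency), while
# efficiency keeps rising toward 100% as R_l grows and the power falls off.
```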
If R_L/R_S → ∞ (or R_S = 0), then η → 1. Efficiency approaches 100% if the load resistance approaches infinity (though the total power level tends towards zero) or if the source resistance approaches zero. Using a large R_L/R_S ratio is called impedance bridging. Impedance matching A related concept is reflectionless impedance matching. In radio frequency transmission lines, and other electronics, there is often a requirement to match the source impedance (at the transmitter) to the load impedance (such as an antenna) to avoid reflections in the transmission line. Calculus-based proof for purely resistive circuits In the simplified model of powering a load with resistance R_L by a source with voltage V and source resistance R_S, then by Ohm's law the resulting current I is simply the source voltage divided by the total circuit resistance: I = V / (R_S + R_L). The power P_L dissipated in the load is the square of the current multiplied by the resistance: P_L = I² R_L = V² R_L / (R_S + R_L)². The value of R_L for which this expression is a maximum could be calculated by differentiating it, but it is easier to calculate the value of R_L for which the denominator (R_S + R_L)²/R_L is a minimum. The result will be the same in either case. Differentiating the denominator with respect to R_L: d/dR_L [(R_S + R_L)²/R_L] = (R_L² − R_S²)/R_L². For a maximum or minimum, the first derivative is zero, so R_L² − R_S² = 0, or R_L = ±R_S. In practical resistive circuits, R_S and R_L are both positive, so the positive sign in the above is the correct solution. To find out whether this solution is a minimum or a maximum, the denominator expression is differentiated again: d²/dR_L² [(R_S + R_L)²/R_L] = 2R_S²/R_L³. This is always positive for positive values of R_S and R_L, showing that the denominator is a minimum, and the power is therefore a maximum, when R_L = R_S. The above proof assumes a fixed source resistance R_S. When the source resistance can be varied, power transferred to the load can be increased by reducing R_S. For example, a 100 volt source with an R_S of 10 Ω will deliver 250 watts of power to a 10 Ω load; reducing R_S to 0 Ω increases the power delivered to 1000 watts. Note that this shows that maximum power transfer can also be interpreted as the load voltage being equal to one-half of the Thevenin voltage equivalent of the source. In reactive circuits The power transfer theorem also applies when the source and/or load are not purely resistive. A refinement of the maximum power theorem says that any reactive components of source and load should be of equal magnitude but opposite sign. (See below for a derivation.) This means that the source and load impedances should be complex conjugates of each other. In the case of purely resistive circuits, the two concepts are identical. Physically realizable sources and loads are not usually purely resistive, having some inductive or capacitive components, and so practical applications of this theorem, under the name of complex conjugate impedance matching, do, in fact, exist. If the source is totally inductive (capacitive), then a totally capacitive (inductive) load, in the absence of resistive losses, would receive 100% of the energy from the source but send it back after a quarter cycle. The resultant circuit is nothing other than a resonant LC circuit in which the energy continues to oscillate to and fro. This oscillation is called reactive power. Power factor correction (where an inductive reactance is used to "balance out" a capacitive one), is essentially the same idea as complex conjugate impedance matching although it is done for entirely different reasons. For a fixed reactive source, the maximum power theorem maximizes the real power (P) delivered to the load by complex conjugate matching the load to the source.
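The complex conjugate matching condition can be spot-checked numerically. The sketch below assumes an arbitrary source impedance of 3 + 4j Ω and searches a small grid of load impedances for the one receiving the most average power; the grid and values are purely illustrative, not a proof.

```python
import itertools

# Numeric check (not a proof) that the load receiving maximum average power from a
# source with assumed impedance Z_s = 3 + 4j ohms is the complex conjugate 3 - 4j.
V_S = 10.0          # peak source voltage (V)
Z_S = complex(3, 4) # assumed source impedance (ohm)

def avg_load_power(r_l: float, x_l: float) -> float:
    """Average power in the load: 0.5 * |I|^2 * R_l with I = V_s / (Z_s + Z_l)."""
    i_mag = abs(V_S / (Z_S + complex(r_l, x_l)))
    return 0.5 * i_mag**2 * r_l

candidates = itertools.product([1, 2, 3, 4, 5], [-6, -5, -4, -3, 0, 4])
best = max(candidates, key=lambda rx: avg_load_power(*rx))
print(f"best (R_l, X_l) on the grid: {best}")   # expected: (3, -4)
```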
For a fixed reactive load, power factor correction minimizes the apparent power (S) (and unnecessary current) conducted by the transmission lines, while maintaining the same amount of real power transfer. This is done by adding a reactance to the load to balance out the load's own reactance, changing the reactive load impedance into a resistive load impedance. Proof In this diagram, AC power is being transferred from the source, with phasor magnitude of voltage (positive peak voltage) and fixed source impedance (S for source), to a load with impedance (L for load), resulting in a (positive) magnitude of the current phasor . This magnitude results from dividing the magnitude of the source voltage by the magnitude of the total circuit impedance: The average power dissipated in the load is the square of the current multiplied by the resistive portion (the real part) of the load impedance : where and denote the resistances, that is the real parts, and and denote the reactances, that is the imaginary parts, of respectively the source and load impedances and . To determine, for a given source, the voltage and the impedance the value of the load impedance for which this expression for the power yields a maximum, one first finds, for each fixed positive value of , the value of the reactive term for which the denominator: is a minimum. Since reactances can be negative, this is achieved by adapting the load reactance to: This reduces the above equation to: and it remains to find the value of which maximizes this expression. This problem has the same form as in the purely resistive case, and the maximizing condition therefore is The two maximizing conditions: describe the complex conjugate of the source impedance, denoted by and thus can be concisely combined to: See also Maximum power point tracking Notes References H.W. Jackson (1959) Introduction to Electronic Circuits, Prentice-Hall. External links Conjugate matching versus reflectionless matching (PDF) taken from Electromagnetic Waves and Antennas The Spark Transmitter. 2. Maximising Power, part 1. Circuit theorems Electrical engineering
Maximum power transfer theorem
[ "Physics", "Engineering" ]
1,825
[ "Electrical engineering", "Equations of physics", "Circuit theorems", "Physics theorems" ]
77,473
https://en.wikipedia.org/wiki/Flerovium
Flerovium is a synthetic chemical element; it has symbol Fl and atomic number 114. It is an extremely radioactive, superheavy element, named after the Flerov Laboratory of Nuclear Reactions of the Joint Institute for Nuclear Research in Dubna, Russia, where the element was discovered in 1999. The lab's name, in turn, honours Russian physicist Georgy Flyorov ( in Cyrillic, hence the transliteration of "yo" to "e"). IUPAC adopted the name on 30 May 2012. The name and symbol had previously been proposed for element 102 (nobelium) but were not accepted by IUPAC at that time. It is a transactinide in the p-block of the periodic table. It is in period 7 and is the heaviest known member of the carbon group. Initial chemical studies in 2007–2008 indicated that flerovium was unexpectedly volatile for a group 14 element. More recent results show that flerovium's reaction with gold is similar to that of copernicium, showing it is very volatile and may even be gaseous at standard temperature and pressure. Nonetheless it also seems to show some metallic properties, consistent with it being the heavier homologue of lead. Very little is known about flerovium, as it can only be produced one atom at a time, either through direct synthesis or through radioactive decay of even heavier elements, and all known isotopes are short-lived. Six isotopes of flerovium are known, ranging in mass number between 284 and 289; the most stable of these, , has a half-life of ~2.1 seconds, but the unconfirmed may have a longer half-life of 19 seconds, which would be one of the longest half-lives of any nuclide in these farthest reaches of the periodic table. Flerovium is predicted to be near the centre of the theorized island of stability, and it is expected that heavier flerovium isotopes, especially the possibly magic , may have even longer half-lives. Introduction History Pre-discovery In the late 1940s to early 1960s, the early days of making heavier and heavier transuranic elements, it was predicted that since such elements did not occur naturally, they would have shorter and shorter spontaneous fission half-lives, until they stopped existing altogether around element 108 (now called hassium). Initial work in synthesizing the heavier actinides seemed to confirm this. But the nuclear shell model, introduced in 1949 and extensively developed in the late 1960s by William Myers and Władysław Świątecki, stated that protons and neutrons form shells within a nucleus, analogous to electron shells. Noble gases are unreactive due to a full electron shell; similarly, it was theorized that elements with full nuclear shells – those having "magic" numbers of protons or neutrons – would be stabilized against decay. A doubly magic isotope, with magic numbers of both protons and neutrons, would be especially stabilized. Heiner Meldner calculated in 1965 that the next doubly magic isotope after was with 114 protons and 184 neutrons, which would be the centre of an "island of stability". This island of stability, supposedly from copernicium (Z = 112) to oganesson (Z = 118), would come after a long "sea of instability" from mendelevium (Z = 101) to roentgenium (Z = 111), and the flerovium isotopes in it were speculated in 1966 to have half-lives over 108 years. These early predictions fascinated researchers, and led to the first attempt to make flerovium, in 1968 with the reaction . 
No flerovium atoms were detected; this was thought to be because the compound nucleus only has 174 neutrons instead of the supposed magic 184, and this would have significant impact on the reaction cross section (yield) and half-lives of nuclei produced. It was then 30 more years before flerovium was first made. Later work suggests the islands of stability around hassium and flerovium occur because these nuclei are respectively deformed and oblate, which make them resistant to spontaneous fission, and that the true island of stability for spherical nuclei occurs at around unbibium-306 (122 protons, 184 neutrons). In the 1970s and 1980s, theoretical studies debated whether element 114 would be a more volatile metal like lead, or an inert gas. First signs The first sign of flerovium was found in December 1998 by a team of scientists at Joint Institute for Nuclear Research (JINR), Dubna, Russia, led by Yuri Oganessian, who bombarded a target of plutonium-244 with accelerated nuclei of calcium-48: + → * → + 2 This reaction had been tried before, without success; for this 1998 attempt, JINR had upgraded all of its equipment to detect and separate the produced atoms better and bombard the target more intensely. One atom of flerovium, alpha decaying with lifetime 30.4 s, was detected. The decay energy measured was 9.71 MeV, giving an expected half-life of 2–23 s. This observation was assigned to and was published in January 1999. The experiment was later repeated, but an isotope with these decay properties was never observed again, so the exact identity of this activity is unknown. It may have been due to the isomer , but because the presence of a whole series of longer-lived isomers in its decay chain would be rather doubtful, the most likely assignment of this chain is to the 2n channel leading to and electron capture to . This fits well with the systematics and trends of flerovium isotopes, and is consistent with the low beam energy chosen for that experiment, though further confirmation would be desirable via synthesis of in a 248Cm(48Ca,2n) reaction, which would alpha decay to . The RIKEN team reported possible synthesis of isotopes and in 2016 in a 248Cm(48Ca,2n) reaction, but the alpha decay of was missed, alpha decay of to was observed instead of electron capture to , and the assignment to instead of was not certain. Glenn T. Seaborg, a scientist at Lawrence Berkeley National Laboratory who had been involved in work to make such superheavy elements, had said in December 1997 that "one of his longest-lasting and most cherished dreams was to see one of these magic elements"; he was told of the synthesis of flerovium by his colleague Albert Ghiorso soon after its publication in 1999. Ghiorso later recalled: Seaborg died two months later, on 25 February 1999. In March 1999, the same team replaced the target with to make other flerovium isotopes. Two atoms of flerovium were produced as a result, each alpha-decaying with a half-life of 5.5 s. They were assigned as . This activity has not been seen again either, and it is unclear what nucleus was produced. It is possible that it was an isomer 287mFl or from electron capture by 287Fl, leading to 287Nh and 283Rg. Confirmed discovery The now-confirmed discovery of flerovium was made in June 1999 when the Dubna team repeated the first reaction from 1998. This time, two atoms of flerovium were produced; they alpha decayed with half-life 2.6 s, different from the 1998 result. 
This activity was initially assigned to 288Fl in error, due to the confusion regarding the previous observations that were assumed to come from 289Fl. Further work in December 2002 finally allowed a positive reassignment of the June 1999 atoms to 289Fl. In May 2009, the Joint Working Party (JWP) of IUPAC published a report on the discovery of copernicium in which they acknowledged discovery of the isotope 283Cn. This implied the discovery of flerovium, from the acknowledgement of the data for the synthesis of 287Fl and 291Lv, which decay to 283Cn. The discovery of flerovium-286 and -287 was confirmed in January 2009 at Berkeley. This was followed by confirmation of flerovium-288 and -289 in July 2009 at Gesellschaft für Schwerionenforschung (GSI) in Germany. In 2011, IUPAC evaluated the Dubna team's 1999–2007 experiments. They found the early data inconclusive, but accepted the results of 2004–2007 as flerovium, and the element was officially recognized as having been discovered. Isotopes While the method of chemical characterization of a daughter was successful for flerovium and livermorium, and the simpler structure of even–even nuclei made confirmation of oganesson (Z = 118) straightforward, there have been difficulties in establishing the congruence of decay chains from isotopes with odd protons, odd neutrons, or both. To get around this problem with hot fusion, the decay chains from which terminate in spontaneous fission instead of connecting to known nuclei as cold fusion allows, experiments were done in Dubna in 2015 to produce lighter isotopes of flerovium by reaction of 48Ca with 239Pu and 240Pu, particularly 283Fl, 284Fl, and 285Fl; the last had previously been characterized in the 242Pu(48Ca,5n)285Fl reaction at Lawrence Berkeley National Laboratory in 2010. 285Fl was more clearly characterized, while the new isotope 284Fl was found to undergo immediate spontaneous fission, and 283Fl was not observed. This lightest isotope may yet conceivably be produced in the cold fusion reaction 208Pb(76Ge,n)283Fl, which the team at RIKEN in Japan at one point considered investigating: this reaction is expected to have a higher cross-section of 200 fb than the "world record" low of 30 fb for 209Bi(70Zn,n)278Nh, the reaction which RIKEN used for the official discovery of element 113 (nihonium). Alternatively, it might be produced in future as a great-granddaughter of 295120, reachable in the 249Cf(50Ti,4n) reaction. The reaction 239Pu+48Ca has also been suggested as a means to produce 282Fl and 283Fl in the 5n and 4n channels respectively, but so far only the 3n channel leading to 284Fl has been observed. The Dubna team repeated their investigation of the 240Pu+48Ca reaction in 2017, observing three new consistent decay chains of 285Fl, another decay chain from this nuclide that may pass through some isomeric states in its daughters, a chain that could be assigned to 287Fl (likely from 242Pu impurities in the target), and some spontaneous fissions of which some could be from 284Fl, though other interpretations including side reactions involving evaporation of charged particles are also possible. The alpha decay of 284Fl to spontaneously fissioning 280Cn was finally observed by the Dubna team in 2024. Naming Per Mendeleev's nomenclature for unnamed and undiscovered elements, flerovium is sometimes called eka-lead. 
In 1979, IUPAC published recommendations according to which the element was to be called ununquadium (symbol Uuq), a systematic element name as a placeholder, until the discovery of the element is confirmed and a permanent name is decided on. Most scientists in the field called it "element 114", with the symbol of E114, (114) or 114. Per IUPAC recommendations, the discoverer(s) of a new element has the right to suggest a name. After IUPAC recognized the discovery of flerovium and livermorium on 1 June 2011, IUPAC asked the discovery team at JINR to suggest permanent names for the two elements. The Dubna team chose the name flerovium (symbol Fl), after Russia's Flerov Laboratory of Nuclear Reactions (FLNR), named after Soviet physicist Georgy Flyorov (also spelled Flerov); earlier reports claim the element name was directly proposed to honour Flyorov. In accordance with the proposal received from the discoverers, IUPAC officially named flerovium after Flerov Laboratory of Nuclear Reactions, not after Flyorov himself. Flyorov is known for writing to Joseph Stalin in April 1942 and pointing out the silence in scientific journals in the field of nuclear fission in the United States, Great Britain, and Germany. Flyorov deduced that this research must have become classified information in those countries. Flyorov's work and urgings led to the development of the USSR's own atomic bomb project. Flyorov is also known for the discovery of spontaneous fission with Konstantin Petrzhak. The naming ceremony for flerovium and livermorium was held on 24 October 2012 in Moscow. In a 2015 interview with Oganessian, the host, in preparation to ask a question, said, "You said you had dreamed to name [an element] after your teacher Georgy Flyorov." Without letting the host finish, Oganessian repeatedly said, "I did." Predicted properties Very few properties of flerovium or its compounds have been measured; due to its extremely limited and expensive production and the fact that it decays very quickly. A few singular properties have been measured, but for the most part, properties of flerovium remain unknown and only predictions are available. Nuclear stability and isotopes The basis of the chemical periodicity in the periodic table is the electron shell closure at each noble gas (atomic numbers 2, 10, 18, 36, 54, 86, and 118): as any further electrons must enter a new shell with higher energy, closed-shell electron configurations are markedly more stable, hence the inertness of noble gases. Protons and neutrons are also known to form closed nuclear shells, so the same happens at nucleon shell closures, which happen at specific nucleon numbers often dubbed "magic numbers". The known magic numbers are 2, 8, 20, 28, 50, and 82 for protons and neutrons; also 126 for neutrons. Nuclei with magic proton and neutron numbers, such as helium-4, oxygen-16, calcium-48, and lead-208, are "doubly magic" and are very stable. This stability is very important for superheavy elements: with no stabilization, half-lives would be expected by exponential extrapolation to be nanoseconds at darmstadtium (element 110), because the ever-increasing electrostatic repulsion between protons overcomes the limited-range strong nuclear force that holds nuclei together. The next closed nucleon shells (magic numbers) are thought to denote the centre of the long-sought island of stability, where half-lives to alpha decay and spontaneous fission lengthen again. 
Initially, by analogy with neutron magic number 126, the next proton shell was also expected at element 126, too far beyond the synthesis capabilities of the mid-20th century to get much theoretical attention. In 1966, new values for the potential and spin–orbit interaction in this region of the periodic table contradicted this and predicted that the next proton shell would instead be at element 114, and that nuclei in this region would be relatively stable against spontaneous fission. The expected closed neutron shells in this region were at neutron number 184 or 196, making 298Fl and 310Fl candidates for being doubly magic. 1972 estimates predicted a half-life of around 1 year for 298Fl, which was expected to be near an island of stability centered near 294Ds (with a half-life around 1010 years, comparable to 232Th). After making the first isotopes of elements 112–118 at the turn of the 21st century, it was found that these neutron-deficient isotopes were stabilized against fission. In 2008 it was thus hypothesized that the stabilization against fission of these nuclides was due to their oblate nuclei, and that a region of oblate nuclei was centred on 288Fl. Also, new theoretical models showed that the expected energy gap between the proton orbitals 2f7/2 (filled at element 114) and 2f5/2 (filled at element 120) was smaller than expected, so element 114 no longer appeared to be a stable spherical closed nuclear shell. The next doubly magic nucleus is now expected to be around 306Ubb, but this nuclide's expected short half-life and low production cross section make its synthesis challenging. Still, the island of stability is expected to exist in this region, and nearer its centre (which has not been approached closely enough yet) some nuclides, such as 291Mc and its alpha- and beta-decay daughters, may be found to decay by positron emission or electron capture and thus move into the centre of the island. Due to the expected high fission barriers, any nucleus in this island of stability would decay exclusively by alpha decay and perhaps some electron capture and beta decay, both of which would bring the nuclei closer to the beta-stability line where the island is expected to be. Electron capture is needed to reach the island, which is problematic because it is not certain that electron capture is a major decay mode in this region of the chart of nuclides. Experiments were done in 2000–2004 at Flerov Laboratory of Nuclear Reactions in Dubna studying the fission properties of the compound nucleus 292Fl by bombarding 244Pu with accelerated 48Ca ions. A compound nucleus is a loose combination of nucleons that have not yet arranged themselves into nuclear shells. It has no internal structure and is held together only by the collision forces between the two nuclei. Results showed how such nuclei fission mainly by expelling doubly magic or nearly doubly magic fragments such as 40Ca, 132Sn, 208Pb, or 209Bi. It was also found that 48Ca and 58Fe projectiles had a similar yield for the fusion-fission pathway, suggesting possible future use of 58Fe projectiles in making superheavy elements. It has also been suggested that a neutron-rich flerovium isotope can be formed by quasifission (partial fusion followed by fission) of a massive nucleus. 
Recently it has been shown that multi-nucleon transfer reactions in collisions of actinide nuclei (such as uranium and curium) might be used to make neutron-rich superheavy nuclei in the island of stability, though production of neutron-rich nobelium or seaborgium is more likely. Theoretical estimates of alpha decay half-lives of flerovium isotopes, support the experimental data. The fission-survived isotope 298Fl, long expected to be doubly magic, is predicted to have alpha decay half-life ~17 days. Making 298Fl directly by a fusion–evaporation pathway is currently impossible: no known combination of target and stable projectile can give 184 neutrons for the compound nucleus, and radioactive projectiles such as 50Ca (half-life 14 s) cannot yet be used in the needed quantity and intensity. One possibility for making the theorized long-lived nuclei of copernicium (291Cn and 293Cn) and flerovium near the middle of the island, is using even heavier targets such as 250Cm, 249Bk, 251Cf, and 254Es, that when fused with 48Ca would yield isotopes such as 291Mc and 291Fl (as decay products of 299Uue, 295Ts, and 295Lv), which may have just enough neutrons to alpha decay to nuclides close enough to the centre of the island to possibly undergo electron capture and move inward to the centre. However, reaction cross sections would be small and little is yet known about the decay properties of superheavies near the beta-stability line. This may be the current best hope to synthesize nuclei in the island of stability, but it is speculative and may or may not work in practice. Another possibility is to use controlled nuclear explosions to get the high neutron flux needed to make macroscopic amounts of such isotopes. This would mimic the r-process where the actinides were first produced in nature and the gap of instability after polonium bypassed, as it would bypass the gaps of instability at 258–260Fm and at mass number 275 (atomic numbers 104 to 108). Some such isotopes (especially 291Cn and 293Cn) may even have been synthesized in nature, but would decay far too quickly (with half-lives of only thousands of years) and be produced in far too small quantities (~10−12 the abundance of lead) to be detectable today outside cosmic rays. Atomic and physical Flerovium is in group 14 in the periodic table, below carbon, silicon, germanium, tin, and lead. Every previous group 14 element has 4 electrons in its valence shell, hence valence electron configuration ns2np2. For flerovium, the trend will continue and the valence electron configuration is predicted as 7s27p2; flerovium will be similar to its lighter congeners in many ways. Differences are likely to arise; a large contributor is spin–orbit (SO) interaction—mutual interaction between the electrons' motion and spin. It is especially strong in superheavy elements, because the electrons move faster than in lighter atoms, at speeds comparable to the speed of light. For flerovium, it lowers the 7s and the 7p electron energy levels (stabilizing the corresponding electrons), but two of the 7p electron energy levels are stabilized more than the other four. The stabilization of the 7s electrons is called the inert pair effect, and the effect "tearing" the 7p subshell into the more and less stabilized parts is called subshell splitting. Computational chemists see the split as a change of the second (azimuthal) quantum number from 1 to and for the more stabilized and less stabilized parts of the 7p subshell, respectively. 
For many theoretical purposes, the valence electron configuration may be represented to reflect the 7p subshell split as 7s7p. These effects cause flerovium's chemistry to be somewhat different from that of its lighter neighbours. Because the spin–orbit splitting of the 7p subshell is very large in flerovium, and both of flerovium's filled orbitals in the 7th shell are stabilized relativistically; the valence electron configuration of flerovium may be considered to have a completely filled shell. Its first ionization energy of should be the second-highest in group 14. The 6d electron levels are also destabilized, leading to some early speculations that they may be chemically active, though newer work suggests this is unlikely. Because the first ionization energy is higher than in silicon and germanium, though still lower than in carbon, it has been suggested that flerovium could be classed as a metalloid. Flerovium's closed-shell electron configuration means metallic bonding in metallic flerovium is weaker than in the elements before and after; so flerovium is expected to have a low boiling point, and has recently been suggested to be possibly a gaseous metal, similar to predictions for copernicium, which also has a closed-shell electron configuration. Flerovium's melting and boiling points were predicted in the 1970s to be around 70 and 150 °C, significantly lower than for the lighter group 14 elements (lead has 327 and 1749 °C), and continuing the trend of decreasing boiling points down the group. Earlier studies predicted a boiling point of ~1000 °C or 2840 °C, but this is now considered unlikely because of the expected weak metallic bonding and that group trends would expect flerovium to have low sublimation enthalpy. Preliminary 2021 calculations predicted that flerovium should have melting point −73 °C (lower than mercury at −39 °C and copernicium, predicted 10 ± 11 °C) and boiling point 107 °C, which would make it a liquid metal. Like mercury, radon, and copernicium, but not lead and oganesson (eka-radon), flerovium is calculated to have no electron affinity. A 2010 study published calculations predicting a hexagonal close-packed crystal structure for flerovium due to spin–orbit coupling effects, and a density of 9.928 g/cm3, though this was noted to be probably slightly too low. Newer calculations published in 2017 expected flerovium to crystallize in face-centred cubic crystal structure like its lighter congener lead, and calculations published in 2022 predicted a density of 11.4 ± 0.3 g/cm3, similar to lead (11.34 g/cm3). These calculations found that the face-centred cubic and hexagonal close-packed structures should have nearly the same energy, a phenomenon reminiscent of the noble gases. These calculations predict that hexagonal close-packed flerovium should be a semiconductor, with a band gap of 0.8 ± 0.3 eV. (Copernicium is also predicted to be a semiconductor.) These calculations predict that the cohesive energy of flerovium should be around −0.5 ± 0.1 eV; this is similar to that predicted for oganesson (−0.45 eV), larger than that predicted for copernicium (−0.38 eV), but smaller than that of mercury (−0.79 eV). The melting point was calculated as 284 ± 50 K (11 ± 50 °C), so that flerovium is probably a liquid at room temperature, although the boiling point was not determined. The electron of a hydrogen-like flerovium ion (Fl113+; remove all but one electron) is expected to move so fast that its mass is 1.79 times that of a stationary electron, due to relativistic effects. 
(The figures for hydrogen-like lead and tin are expected to be 1.25 and 1.073 respectively.) Flerovium would form weaker metal–metal bonds than lead and would be adsorbed less on surfaces. Chemical Flerovium is the heaviest known member of group 14, below lead, and is projected to be the second member of the 7p series of elements. Nihonium and flerovium are expected to form a very short subperiod corresponding to the filling of the 7p1/2 orbital, coming between the filling of the 6d5/2 and 7p3/2 subshells. Their chemical behaviour is expected to be very distinctive: nihonium's homology to thallium has been called "doubtful" by computational chemists, while flerovium's to lead has been called only "formal". The first five group 14 members show a +4 oxidation state and the latter members have increasingly prominent +2 chemistry due to onset of the inert pair effect. For tin, the +2 and +4 states are similar in stability, and lead(II) is the most stable of all the chemically well-understood +2 oxidation states in group 14. The 7s orbitals are very highly stabilized in flerovium, so a very large sp3 orbital hybridization is needed to achieve a +4 oxidation state, so flerovium is expected to be even more stable than lead in its strongly predominant +2 oxidation state and its +4 oxidation state should be highly unstable. For example, the dioxide (FlO2) is expected to be highly unstable to decomposition into its constituent elements (and would not be formed by direct reaction of flerovium with oxygen), and flerovane (FlH4), which should have Fl–H bond lengths of 1.787 Å and would be the heaviest homologue of methane (the lighter compounds include silane, germane and stannane), is predicted to be more thermodynamically unstable than plumbane, spontaneously decomposing to flerovium(II) hydride (FlH2) and H2. The tetrafluoride FlF4 would have bonding mostly due to sd hybridizations rather than sp3 hybridizations, and its decomposition to the difluoride and fluorine gas would be exothermic. The other tetrahalides (for example, FlCl4 is destabilized by about 400 kJ/mol) decompose similarly. The corresponding polyfluoride anion should be unstable to hydrolysis in aqueous solution, and flerovium(II) polyhalide anions such as and are predicted to form preferentially in solutions. The sd hybridizations were suggested in early calculations, as flerovium's 7s and 6d electrons share about the same energy, which would allow a volatile hexafluoride to form, but later calculations do not confirm this possibility. In general, spin–orbit contraction of the 7p1/2 orbital should lead to smaller bond lengths and larger bond angles: this has been theoretically confirmed in FlH2. Still, even FlH2 should be relativistically destabilized by 2.6 eV to below Fl+H2; the large spin–orbit effects also break down the usual singlet–triplet divide in the group 14 dihydrides. FlF2 and FlCl2 are predicted to be more stable than FlH2. Due to relativistic stabilization of flerovium's 7s27p valence electron configuration, the 0 oxidation state should also be more stable for flerovium than for lead, as the 7p1/2 electrons begin to also have a mild inert pair effect: this stabilization of the neutral state may bring about some similarities between the behavior of flerovium and the noble gas radon. Due to flerovium's expected relative inertness, diatomic compounds FlH and FlF should have lower energies of dissociation than the corresponding lead compounds PbH and PbF. 
Flerovium(IV) should be even more electronegative than lead(IV); lead(IV) has electronegativity 2.33 on the Pauling scale, though the lead(II) value is only 1.87. Flerovium could be a noble metal. Flerovium(II) should be more stable than lead(II), and halides FlX+, FlX2, , and (X = Cl, Br, I) are expected to form readily. The fluorides would undergo strong hydrolysis in aqueous solution. All flerovium dihalides are expected to be stable; the difluoride being water-soluble. Spin–orbit effects would destabilize the dihydride (FlH2) by almost . In aqueous solution, the oxyanion flerovite () would also form, analogous to plumbite. Flerovium(II) sulfate (FlSO4) and sulfide (FlS) should be very insoluble in water, and flerovium(II) acetate (Fl(C2H3O2)2) and nitrate (Fl(NO3)2) should be quite water-soluble. The standard electrode potential for reduction of Fl2+ ion to metallic flerovium is estimated to be around +0.9 V, confirming the increased stability of flerovium in the neutral state. In general, due to relativistic stabilization of the 7p1/2 spinor, Fl2+ is expected to have properties intermediate between those of Hg2+ or Cd2+ and its lighter congener Pb2+. Experimental chemistry Flerovium is currently the last element whose chemistry has been experimentally investigated, though studies so far are not conclusive. Two experiments were done in April–May 2007 in a joint FLNR-PSI collaboration to study copernicium chemistry. The first experiment used the reaction 242Pu(48Ca,3n)287Fl; and the second, 244Pu(48Ca,4n)288Fl: these reactions give short-lived flerovium isotopes whose copernicium daughters would then be studied. Adsorption properties of the resultant atoms on a gold surface were compared to those of radon, as it was then expected that copernicium's full-shell electron configuration would lead to noble-gas like behavior. Noble gases interact with metal surfaces very weakly, which is uncharacteristic of metals. The first experiment found 3 atoms of 283Cn but seemingly also 1 atom of 287Fl. This was a surprise; transport time for the product atoms is ~2 s, so the flerovium should have decayed to copernicium before adsorption. In the second reaction, 2 atoms of 288Fl and possibly 1 of 289Fl were seen. Two of the three atoms showed adsorption characteristics associated with a volatile, noble-gas-like element, which has been suggested but is not predicted by more recent calculations. These experiments gave independent confirmation for the discovery of copernicium, flerovium, and livermorium via comparison with published decay data. Further experiments in 2008 to confirm this important result detected 1 atom of 289Fl, and supported previous data showing flerovium had a noble-gas-like interaction with gold. Empirical support for a noble-gas-like flerovium soon weakened. In 2009 and 2010, the FLNR-PSI collaboration synthesized more flerovium to follow up their 2007 and 2008 studies. In particular, the first three flerovium atoms made in the 2010 study suggested again a noble-gas-like character, but the complete set taken together resulted in a more ambiguous interpretation, unusual for a metal in the carbon group but not fully like a noble gas in character. In their paper, the scientists refrained from calling flerovium's chemical properties "close to those of noble gases", as had previously been done in the 2008 study. 
Flerovium's volatility was again measured through interactions with a gold surface; the measurements provided indications that the volatility of flerovium was comparable to that of mercury, astatine, and the simultaneously investigated copernicium, which had been shown in the study to be a very volatile noble metal, conforming to its being the heaviest known group 12 element. Still, it was pointed out that this volatile behavior was not expected for a usual group 14 metal. In experiments in 2012 at GSI, flerovium's chemistry was found to be more metallic than noble-gas-like. Jens Volker Kratz and Christoph Düllmann specifically named copernicium and flerovium as being in a new category of "volatile metals"; Kratz even speculated that they might be gases at standard temperature and pressure. These "volatile metals", as a category, were expected to fall between normal metals and noble gases in terms of adsorption properties. Contrary to the 2009 and 2010 results, it was shown in the 2012 experiments that the interactions of flerovium and copernicium respectively with gold were about equal. Further studies showed that flerovium was more reactive than copernicium, in contradiction to previous experiments and predictions. In a 2014 paper detailing the experimental results of the chemical characterization of flerovium, the GSI group wrote: "[flerovium] is the least reactive element in the group, but still a metal." Nevertheless, in a 2016 conference about the chemistry and physics of heavy and superheavy elements, Alexander Yakushev and Robert Eichler, two scientists who had been active at GSI and FLNR in determining flerovium's chemistry, still urged caution based on the inconsistencies of the various experiments previously listed, noting that the question of whether flerovium was a metal or a noble gas was still open with the known evidence: one study suggested a weak noble-gas-like interaction between flerovium and gold, while the other suggested a stronger metallic interaction. The longer-lived isotope 289Fl has been considered of interest for future radiochemical studies. Experiments published in 2022 suggest that flerovium is a metal, exhibiting lower reactivity towards gold than mercury, but higher reactivity than radon. The experiments could not identify if the adsorption was due to elemental flerovium (considered more likely), or if it was due to a flerovium compound such as FlO that was more reactive towards gold than elemental flerovium, but both scenarios involve flerovium forming chemical bonds. See also Island of stability: Flerovium–Unbinilium–Unbihexium Isotopes of flerovium Extended periodic table Notes References Bibliography The NUBASE2016 table of nuclear and decay properties External links CERN Courier – First postcard from the island of nuclear stability CERN Courier – Second postcard from the island of stability Chemical elements Synthetic elements
Flerovium
[ "Physics", "Chemistry" ]
7,744
[ "Matter", "Chemical elements", "Synthetic materials", "Synthetic elements", "Atoms", "Radioactivity" ]
78,130
https://en.wikipedia.org/wiki/Max-flow%20min-cut%20theorem
In computer science and optimization theory, the max-flow min-cut theorem states that in a flow network, the maximum amount of flow passing from the source to the sink is equal to the total weight of the edges in a minimum cut, i.e., the smallest total weight of the edges which if removed would disconnect the source from the sink. This is a special case of the duality theorem for linear programs and can be used to derive Menger's theorem and the Kőnig–Egerváry theorem. Definitions and statement The theorem equates two quantities: the maximum flow through a network, and the minimum capacity of a cut of the network. To state the theorem, each of these notions must first be defined. Network A network consists of a finite directed graph G = (V, E), where V denotes the finite set of vertices and E is the set of directed edges; a source s ∈ V and a sink t ∈ V; a capacity function, which is a mapping c : E → R+, with the capacity of an edge (u,v) ∈ E written c(u,v). It represents the maximum amount of flow that can pass through an edge. Flows A flow through a network is a mapping f : E → R+, written f(u,v) for (u,v) ∈ E, subject to the following two constraints: Capacity Constraint: For every edge (u,v) ∈ E, f(u,v) ≤ c(u,v). Conservation of Flows: For each vertex v apart from s and t (i.e. the source and sink, respectively), the following equality holds: Σ f(u,v) over all edges (u,v) into v = Σ f(v,w) over all edges (v,w) out of v. A flow can be visualized as a physical flow of a fluid through the network, following the direction of each edge. The capacity constraint then says that the volume flowing through each edge per unit time is less than or equal to the maximum capacity of the edge, and the conservation constraint says that the amount that flows into each vertex equals the amount flowing out of each vertex, apart from the source and sink vertices. The value of a flow is defined by |f| = Σ f(s,v), summed over all edges (s,v) leaving the source, where s is as above the source and t is the sink of the network. In the fluid analogy, it represents the amount of fluid entering the network at the source. Because of the conservation axiom for flows, this is the same as the amount of flow leaving the network at the sink. The maximum flow problem asks for the largest flow on a given network. Maximum Flow Problem. Maximize |f|, that is, to route as much flow as possible from s to t. Cuts The other half of the max-flow min-cut theorem refers to a different aspect of a network: the collection of cuts. An s-t cut C = (S, T) is a partition of V such that s ∈ S and t ∈ T. That is, an s-t cut is a division of the vertices of the network into two parts, with the source in one part and the sink in the other. The cut-set XC of a cut C is the set of edges that connect the source part of the cut to the sink part: XC = {(u,v) ∈ E : u ∈ S, v ∈ T}. Thus, if all the edges in the cut-set of C are removed, then no positive flow is possible, because there is no path in the resulting graph from the source to the sink. The capacity of an s-t cut is the sum of the capacities of the edges in its cut-set: c(S,T) = Σ c(u,v) d(u,v), summed over all edges (u,v) ∈ E, where d(u,v) = 1 if u ∈ S and v ∈ T, and d(u,v) = 0 otherwise. There are typically many cuts in a graph, but cuts with smaller weights are often more difficult to find. Minimum s-t Cut Problem. Minimize c(S,T), that is, determine S and T such that the capacity of the s-t cut is minimal. Main theorem In the above situation, one can prove that the value of any flow through a network is less than or equal to the capacity of any s-t cut, and that furthermore a flow with maximal value and a cut with minimal capacity exist. The main theorem links the maximum flow value with the minimum cut capacity of the network. Max-flow min-cut theorem. The maximum value of an s-t flow is equal to the minimum capacity over all s-t cuts. Example The figure on the right shows a flow in a network. 
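Before examining the figure in detail, the relationship between the two problems can also be checked computationally. The sketch below (a minimal illustration in Python; the five-edge network and its capacities are invented and are not the network of the figure) finds a maximum flow by repeatedly augmenting along shortest residual paths, the Edmonds–Karp refinement of Ford–Fulkerson, and then reads off a minimum cut as the set of vertices still reachable from the source in the residual graph; the two values come out equal, as the theorem asserts.

```python
from collections import deque

def max_flow_min_cut(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.

    capacity: dict mapping (u, v) -> capacity of the directed edge u -> v.
    Returns (flow_value, source_side), where source_side is the set of
    vertices reachable from s in the final residual graph; the edges that
    leave source_side form a minimum cut.
    """
    residual = {}
    nodes = set()
    for (u, v), c in capacity.items():
        residual[(u, v)] = residual.get((u, v), 0) + c
        residual.setdefault((v, u), 0)       # reverse edge, initially empty
        nodes.update((u, v))

    def bfs():
        """Return a shortest s-t path with positive residual capacity, or None."""
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    if v == t:
                        path = [t]
                        while parent[path[-1]] is not None:
                            path.append(parent[path[-1]])
                        return list(reversed(path))
                    queue.append(v)
        return None

    flow = 0
    path = bfs()
    while path is not None:
        # Bottleneck residual capacity along the augmenting path.
        bottleneck = min(residual[(u, v)] for u, v in zip(path, path[1:]))
        for u, v in zip(path, path[1:]):
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck
        path = bfs()

    # Vertices still reachable from s in the residual graph: the source side
    # of a minimum cut (the same construction appears in the proof section).
    reachable = {s}
    stack = [s]
    while stack:
        u = stack.pop()
        for v in nodes:
            if v not in reachable and residual.get((u, v), 0) > 0:
                reachable.add(v)
                stack.append(v)
    return flow, reachable

# A small illustrative network (capacities chosen arbitrarily).
capacity = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1,
            ('a', 't'): 2, ('b', 't'): 3}
value, source_side = max_flow_min_cut(capacity, 's', 't')
cut_edges = [(u, v) for (u, v) in capacity
             if u in source_side and v not in source_side]
cut_capacity = sum(capacity[e] for e in cut_edges)
print(value, cut_capacity)   # both are 5 for this network
```

For the network defined at the bottom of the sketch, both printed numbers are 5: the flow saturates the two edges leaving the source, and those same two edges form a minimum cut.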
The numerical annotation on each arrow, in the form f/c, indicates the flow (f) and the capacity (c) of the arrow. The flows emanating from the source total five (2+3=5), as do the flows into the sink (2+3=5), establishing that the flow's value is 5. One s-t cut with value 5 is given by S={s,p} and T={o, q, r, t}. The capacities of the edges that cross this cut are 3 and 2, giving a cut capacity of 3+2=5. (The arrow from o to p is not considered, as it points from T back to S.) The value of the flow is equal to the capacity of the cut, showing that the flow is a maximal flow and the cut is a minimal cut. Note that the flow through each of the two arrows that connect S to T is at full capacity; this is always the case: a minimal cut represents a 'bottleneck' of the system. Linear program formulation The max-flow problem and min-cut problem can be formulated as two primal-dual linear programs. The max-flow LP is straightforward: maximize Σ f(s,v), summed over all edges (s,v) leaving the source, subject to f(u,v) ≤ c(u,v) for every edge (u,v) ∈ E, the conservation equality at every vertex other than s and t, and f(u,v) ≥ 0. The dual LP is obtained using the algorithm described in dual linear program: the variables and sign constraints of the dual correspond to the constraints of the primal, and the constraints of the dual correspond to the variables and sign constraints of the primal. It reads: minimize Σ c(u,v) d(u,v), summed over all edges (u,v) ∈ E, subject to d(u,v) − z(u) + z(v) ≥ 0 for every edge (u,v) with u ≠ s and v ≠ t, d(s,v) + z(v) ≥ 1 for every edge (s,v) leaving the source, d(u,t) − z(u) ≥ 0 for every edge (u,t) entering the sink, and d(u,v) ≥ 0, z(u) ≥ 0. The resulting LP requires some explanation. The interpretation of the variables in the min-cut LP is: d(u,v) is 1 if the edge (u,v) is counted in the cut and 0 otherwise, while z(u) is 1 if the vertex u is on the source side S of the cut and 0 if it is on the sink side T. The minimization objective sums the capacity over all the edges that are contained in the cut. The constraints guarantee that the variables indeed represent a legal cut: The constraints d(u,v) − z(u) + z(v) ≥ 0 (equivalent to d(u,v) ≥ z(u) − z(v)) guarantee that, for non-terminal nodes u,v, if u is in S and v is in T, then the edge (u,v) is counted in the cut (d(u,v) ≥ 1). The constraints d(s,v) + z(v) ≥ 1 (equivalent to d(s,v) ≥ 1 − z(v)) guarantee that, if v is in T, then the edge (s,v) is counted in the cut (since s is by definition in S). The constraints d(u,t) − z(u) ≥ 0 (equivalent to d(u,t) ≥ z(u)) guarantee that, if u is in S, then the edge (u,t) is counted in the cut (since t is by definition in T). Note that, since this is a minimization problem, we do not have to guarantee that an edge is not in the cut; we only have to guarantee that each edge that should be in the cut is summed in the objective function. The equality in the max-flow min-cut theorem follows from the strong duality theorem in linear programming, which states that if the primal program has an optimal solution, x*, then the dual program also has an optimal solution, y*, such that the optimal values formed by the two solutions are equal. Application Cederbaum's maximum flow theorem The maximum flow problem can be formulated as the maximization of the electrical current through a network composed of nonlinear resistive elements. In this formulation, the limit of the current between the input terminals of the electrical network as the input voltage approaches infinity is equal to the weight of the minimum-weight cut set. Generalized max-flow min-cut theorem In addition to edge capacity, consider there is capacity at each vertex, that is, a mapping c : V → R+, denoted by c(v), such that the flow has to satisfy not only the capacity constraint and the conservation of flows, but also the vertex capacity constraint: the total flow into v, summed over all edges (u,v) entering v, is at most c(v) for each vertex v other than s and t. In other words, the amount of flow passing through a vertex cannot exceed its capacity. Define an s-t cut to be the set of vertices and edges such that for any path from s to t, the path contains a member of the cut. In this case, the capacity of the cut is the sum of the capacity of each edge and vertex in it. 
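Returning briefly to the linear-programming formulation above, the primal max-flow program is small enough to hand to a generic LP solver. The sketch below assumes SciPy is available and uses an invented five-edge network; by strong duality the optimal value it reports also equals the minimum cut capacity of that network.

```python
import numpy as np
from scipy.optimize import linprog

# A small illustrative network: edge list with capacities (made up).
edges = [('s', 'a', 3), ('s', 'b', 2), ('a', 'b', 1), ('a', 't', 2), ('b', 't', 3)]
index = {(u, v): k for k, (u, v, _) in enumerate(edges)}

# Objective: maximize total flow leaving s, i.e. minimize its negative.
obj = np.zeros(len(edges))
for (u, v), k in index.items():
    if u == 's':
        obj[k] = -1.0

# Flow conservation at the non-terminal vertices a and b: inflow - outflow = 0.
A_eq = np.zeros((2, len(edges)))
for row, vertex in enumerate(['a', 'b']):
    for (u, v), k in index.items():
        if v == vertex:
            A_eq[row, k] += 1.0   # edge into the vertex
        if u == vertex:
            A_eq[row, k] -= 1.0   # edge out of the vertex
b_eq = np.zeros(2)

# Capacity constraints become simple variable bounds 0 <= f_e <= c_e.
bounds = [(0, cap) for (_, _, cap) in edges]

result = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method='highs')
print(-result.fun)   # optimal flow value; 5.0 for this network
```

Here the capacity constraints are expressed as variable bounds, so only the conservation equalities need an explicit constraint matrix.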
In this new definition, the generalized max-flow min-cut theorem states that the maximum value of an s-t flow is equal to the minimum capacity of an s-t cut in the new sense. Menger's theorem In the undirected edge-disjoint paths problem, we are given an undirected graph G = (V, E) and two vertices s and t, and we have to find the maximum number of edge-disjoint s-t paths in G. Menger's theorem states that the maximum number of edge-disjoint s-t paths in an undirected graph is equal to the minimum number of edges in an s-t cut-set. Project selection problem In the project selection problem, there are n projects and m machines. Each project i yields revenue r(i) and each machine j costs q(j) to purchase. We want to select a subset of the projects, and purchase a subset of the machines, to maximize the total profit (revenue of the selected projects minus cost of the purchased machines). We must obey the following constraint: each project specifies a set of machines which must be purchased if the project is selected. (Each machine, once purchased, can be used by any selected project.) To solve the problem, let X be the set of projects not selected and Y be the set of machines purchased; then the problem can be formulated as maximizing Σ r(i) over all projects i, minus Σ r(i) over i ∈ X, minus Σ q(j) over j ∈ Y. Since the first term does not depend on the choice of X and Y, this maximization problem can be formulated as a minimization problem instead, that is, minimizing Σ r(i) over i ∈ X plus Σ q(j) over j ∈ Y. The above minimization problem can then be formulated as a minimum-cut problem by constructing a network, where the source is connected to each project i with capacity r(i), and each machine j is connected to the sink with capacity q(j). An edge with infinite capacity is added from project i to machine j if project i requires machine j. The s-t cut-set represents the projects and machines in X and Y respectively. By the max-flow min-cut theorem, one can solve the problem as a maximum flow problem. The figure on the right gives a network formulation of such a project selection problem: the minimum capacity of an s-t cut is 250 and the sum of the revenue of each project is 450; therefore the maximum profit g is 450 − 250 = 200, obtained by selecting the projects that remain on the source side of the cut. The idea here is to 'flow' each project's profits through the 'pipes' of its machines. If we cannot fill the pipe from a machine, the machine's return is less than its cost, and the min cut algorithm will find it cheaper to cut the project's profit edge instead of the machine's cost edge. Image segmentation problem In the image segmentation problem, there are n pixels. Each pixel i can be assigned a foreground value f(i) or a background value b(i). There is a penalty of p(i,j) if pixels i and j are adjacent and have different assignments. The problem is to assign pixels to foreground or background such that the sum of their values minus the penalties is maximum. Let F be the set of pixels assigned to foreground and B be the set of pixels assigned to background; then the problem can be formulated as maximizing Σ f(i) over i ∈ F plus Σ b(i) over i ∈ B, minus Σ p(i,j) over adjacent pairs with i ∈ F and j ∈ B. This maximization problem can be formulated as a minimization problem instead, that is, minimizing Σ f(i) over i ∈ B plus Σ b(i) over i ∈ F, plus Σ p(i,j) over adjacent pairs with different assignments (the sum of all pixel values is a constant and can be ignored). The above minimization problem can be formulated as a minimum-cut problem by constructing a network where the source (orange node) is connected to each pixel i with capacity f(i), and each pixel i is connected to the sink (purple node) with capacity b(i). Two edges (i, j) and (j, i), each with capacity p(i,j), are added between adjacent pixels. The s-t cut-set then represents the pixels assigned to the foreground in F and the pixels assigned to the background in B. 
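The project selection reduction described above is short enough to state directly in code. The following sketch assumes the NetworkX library is available and uses invented revenues, costs, and requirements; edges added without a capacity attribute are treated by NetworkX's flow algorithms as having infinite capacity, which plays the role of the infinite-capacity project-to-machine edges.

```python
import networkx as nx

# Invented instance: project revenues, machine costs, and requirements.
revenue = {'p1': 100, 'p2': 200, 'p3': 150}
cost = {'m1': 120, 'm2': 80}
requires = {'p1': ['m1'], 'p2': ['m1', 'm2'], 'p3': ['m2']}

G = nx.DiGraph()
for proj, r in revenue.items():
    G.add_edge('source', proj, capacity=r)   # cutting this edge = skipping the project
for mach, q in cost.items():
    G.add_edge(mach, 'sink', capacity=q)     # cutting this edge = buying the machine
for proj, machines in requires.items():
    for mach in machines:
        G.add_edge(proj, mach)               # no capacity attribute = infinite capacity

cut_value, (source_side, sink_side) = nx.minimum_cut(G, 'source', 'sink')
selected = [p for p in revenue if p in source_side]
purchased = [m for m in cost if m in source_side]
profit = sum(revenue.values()) - cut_value
print(selected, purchased, profit)
```

The projects and machines that end up on the source side of the minimum cut are exactly the ones to select and purchase; the profit equals the total revenue minus the cut value.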
History An account of the discovery of the theorem was given by Ford and Fulkerson in 1962: "Determining a maximal steady state flow from one point to another in a network subject to capacity limitations on arcs ... was posed to the authors in the spring of 1955 by T.E. Harris, who, in conjunction with General F. S. Ross (Ret.) had formulated a simplified model of railway traffic flow, and pinpointed this particular problem as the central one suggested by the model. It was not long after this until the main result, Theorem 5.1, which we call the max-flow min-cut theorem, was conjectured and established. A number of proofs have since appeared." Proof Let G = (V, E) be a network (directed graph) with s and t being the source and the sink of G respectively. Consider the flow f computed for G by the Ford–Fulkerson algorithm. In the residual graph Gf obtained for G (after the final flow assignment by the Ford–Fulkerson algorithm), define two subsets of vertices as follows: A: the set of vertices reachable from s in Gf; Ac: the set of remaining vertices, i.e. V − A. Claim. value(f) = c(A, Ac), where the capacity of an s-t cut is defined by c(S, T) = Σ c(u,v), summed over all edges (u,v) ∈ E with u ∈ S and v ∈ T. Now, we know that for any such subset of vertices A, value(f) equals the total flow on edges leaving A minus the total flow on edges entering A. Therefore, for value(f) = c(A, Ac) we need: All outgoing edges from the cut must be fully saturated. All incoming edges to the cut must have zero flow. To prove the above claim we consider two cases: Suppose that in G there exists an outgoing edge (x, y), x ∈ A, y ∈ Ac, such that it is not saturated, i.e., f(x, y) < c(x, y). This implies that there exists a forward edge from x to y in Gf, therefore there exists a path from s to y in Gf, which is a contradiction, since y would then be reachable from s and would belong to A. Hence, any outgoing edge (x, y) is fully saturated. Suppose that in G there exists an incoming edge (y, x), y ∈ Ac, x ∈ A, such that it carries some non-zero flow, i.e., f(y, x) > 0. This implies that there exists a backward edge from x to y in Gf, therefore there exists a path from s to y in Gf, which is again a contradiction. Hence, any incoming edge (y, x) must have zero flow. Both of the above statements prove that the capacity of the cut obtained in the above described manner is equal to the flow obtained in the network. Also, the flow was obtained by the Ford–Fulkerson algorithm, so it is the max-flow of the network as well. Also, since any flow in the network is always less than or equal to the capacity of every cut possible in a network, the above described cut is also the min-cut which obtains the max-flow. A corollary from this proof is that the maximum flow through any set of edges in a cut of a graph is equal to the minimum capacity of all previous cuts. See also Approximate max-flow min-cut theorem Edmonds–Karp algorithm Flow network Ford–Fulkerson algorithm GNRS conjecture Linear programming Maximum flow Menger's theorem Minimum cut References Combinatorial optimization Theorems in graph theory Network flow problem
Max-flow min-cut theorem
[ "Mathematics" ]
2,869
[ "Theorems in graph theory", "Theorems in discrete mathematics" ]
78,172
https://en.wikipedia.org/wiki/International%20Geophysical%20Year
The International Geophysical Year (IGY; ), also referred to as the third International Polar Year, was an international scientific project that lasted from 1 July 1957 to 31 December 1958. It marked the end of a long period during the Cold War when scientific interchange between East and West had been seriously interrupted. Sixty-seven countries participated in IGY projects, although one notable exception was the mainland People's Republic of China, which was protesting against the participation of the Republic of China (Taiwan). East and West agreed to nominate the Belgian Marcel Nicolet as secretary general of the associated international organization. The IGY encompassed fourteen Earth science disciplines: aurora, airglow, cosmic rays, geomagnetism, gravity, ionospheric physics, longitude and latitude determinations (precision mapping), meteorology, oceanography, nuclear radiation, glaciology, seismology, rockets and satellites, and solar activity. The timing of the IGY was particularly suited for studying some of these phenomena, since it covered the peak of solar cycle 19. Both the Soviet Union and the U.S. launched artificial satellites for this event; the Soviet Union's Sputnik 1, launched on October 4, 1957, was the first successful artificial satellite. Other significant achievements of the IGY included the discovery of the Van Allen radiation belts by Explorer 1 and mid-ocean submarine ridges, an important confirmation of plate-tectonic theory. Also, many international bases were established in Antarctica, many of which have been maintained to the present day, including at the south pole. History The origin of the International Geophysical Year can be traced to the International Polar Years held in 1882–1883, then in 1932–1933 and most recently from March 2007 to March 2009. On 5 April 1950, multiple scientists (including Lloyd Berkner, Sydney Chapman, S. Fred Singer, and Harry Vestine) met in James Van Allen's living room and suggested that the time was ripe to have a worldwide Geophysical Year instead of a Polar Year, especially considering recent advances in rocketry, radar, and computing. Berkner and Chapman proposed to the International Council of Scientific Unions that an International Geophysical Year (IGY) be planned for 1957–58, coinciding with an approaching period of maximum solar activity. In 1952, the IGY was announced. Joseph Stalin's death in 1953 opened the way for international collaboration with the Soviet Union. In 1952 the Comité Speciale de l'Année Géophysique Internationale (CSAGI), a special committee of the ICSU, was established to coordinate the International Geophysical Year (IGY) under president Sydney Chapman, a British geophysicist. Events On 29 July 1955, James C. Hagerty, president Dwight D. Eisenhower's press secretary, announced that the United States intended to launch "small Earth circling satellites" between 1 July 1957 and 31 December 1958 as part of the United States contribution to the International Geophysical Year (IGY). Project Vanguard would be managed by the Naval Research Laboratory and to be based on developing sounding rockets, which had the advantage that they were primarily used for non-military scientific experiments. Four days later, at the Sixth Congress of International Astronautical Federation in Copenhagen, scientist Leonid I. Sedov spoke to international reporters at the Soviet embassy and announced his country's intention to launch a satellite in the "near future". 
To the surprise of many, the USSR launched Sputnik 1 as the first artificial Earth satellite on 4 October 1957. After several failed Vanguard launches, Wernher von Braun and his team convinced President Dwight D. Eisenhower to use one of their US Army missiles for the Explorer program (there was not yet an inhibition about using military rockets to get into space). On 8 November 1957, the US Secretary of Defense instructed the US Army to use a modified Jupiter-C rocket to launch a satellite. The US achieved this goal only four months later with Explorer 1, on 1 February 1958, but after Sputnik 2 on 3 November 1957, making Explorer 1 the third artificial Earth satellite. Vanguard 1 became the fourth, launched on 17 March 1958. The Soviet launches would be followed by considerable political consequences, one of which was the creation of the US space agency NASA on 29 July 1958. The British–American survey of the Atlantic, carried out between September 1954 and July 1959, discovered the full length of the mid-Atlantic ridges (plate tectonics); it was a major discovery during the IGY. World Data Centers Although the 1932 Polar Year accomplished many of its goals, it fell short on others because of the advance of World War II. In fact, because of the war, much of the data collected and scientific analyses completed during the 1932 Polar Year were lost forever, something that was particularly troubling to the IGY organizing committee. The committee resolved that "all observational data shall be available to scientists and scientific institutions in all countries." They felt that without the free exchange of data across international borders, there would be no point in having an IGY. In April 1957, just three months before the IGY began, scientists representing the various disciplines of the IGY established the World Data Center system. The United States hosted World Data Center "A" and the Soviet Union hosted World Data Center "B". World Data Center "C" was subdivided among countries in Western Europe, Australia, and Japan. NOAA hosted seven of the fifteen World Data Centers in the United States. Each World Data Center would eventually archive a complete set of IGY data to deter losses prevalent during the International Polar Year of 1932. Each World Data Center was equipped to handle many different data formats, including computer punch cards and tape—the original computer media. In addition, each host country agreed to abide by the organizing committee's resolution that there should be a free and open exchange of data among nations. ICSU-WDS goals are to preserve quality-assured scientific data and information, to facilitate open access, and promote the adoption of standards. ICSU World Data System created in 2008 superseded the World Data Centers (WDCs) and Federation of Astronomical and Geophysical data analysis Services (FAGS) created by ICSU to manage data generated by the International Geophysical Year. Antarctica The IGY triggered an 18-month year of Antarctic science. The International Council of Scientific Unions, a parent body, broadened the proposals from polar studies to geophysical research. More than 70 existing national scientific organizations then formed IGY committees, and participated in the cooperative effort. Australia established its first permanent base on the Antarctic continent at Mawson in 1954. It is now the longest continuously operating station south of the Antarctic Circle. Davis was added in 1957, in the Vestfold Hills, 400 miles (640 km) east of Mawson. 
The wintering parties for the IGY numbered 29 at Mawson and 4 at Davis, all male. (Both stations now have 16 to 18 winterers, including both sexes.) As a part of the IGY activities, a two-man camp was installed beside Taylor Glacier, 60 miles (97 km) west of Mawson. Its principal purpose was to enable parallactic photography of the aurora australis (thus locating it in space), but it also permitted studies of Emperor penguins in the adjacent rookery. Two years later, Australia took over the running of Wilkes, a station built for the IGY by the United States. When Wilkes rapidly deteriorated from snow and ice accumulation, plans were made to build Casey Station, known as Repstat ("replacement station"). Opened in 1969, Repstat was replaced by present-day Casey station in 1988. Halley Research Station was founded in 1956 for the IGY by an expedition from the (British) Royal Society. The bay where the expedition set up their base was named Halley Bay, after the astronomer Edmond Halley. Showa Station, the first Japanese base in Antarctica, was set up in January 1957, supported by the ice breaker Sōya. When the ship returned a year later, it became beset offshore (stuck in the sea-ice). It was eventually freed with the assistance of the US icebreaker Burton Island but could not resupply the station. The 1957 winterers were retrieved by helicopter, but bad weather prevented going back for the station's 15 sled dogs, which were left chained up. When the ship returned a year later, two of the dogs, Taro and Jiro, were still alive. They had escaped the dogline and survived by killing Adélie penguins in a nearby rookery (which were preserved by the low temperature). The two dogs became instant national heroes in Japan. A Japanese movie about this story was made in 1983, Antarctica. France contributed Dumont d'Urville Station and Charcot Station in Adélie Land. As a forerunner expedition, the ship Commandant Charcot of the French Navy spent nine months of 1949/50 at the coast of Adelie Land. The first French station, Port Martin, was completed 9 April 1950, but destroyed by fire the night of 22 to 23 January 1952. Belgium established the King Baudouin Base in 1958. The expedition was led by Gaston de Gerlache, son of Adrien de Gerlache who had led the 1897–1899 Belgian Antarctic Expedition. In December 1958, four team members were stranded several hundred kilometers inland when one of the skis on their light aircraft broke on landing. After a ten-day ordeal, they were rescued by an IL-14 aircraft after a flight of 1,940 miles (3,100 km) from the Soviet base, Mirny Station. The Amundsen–Scott South Pole Station was erected as the first permanent structure at the South Pole in January 1957. It survived intact for 53 years, but was slowly buried in the ice (as all structures there eventually sink into the icy crust), until it was demolished in December 2010 for safety reasons. Arctic Ice Skate 2 was a floating research station constructed and staffed by U.S. scientists. It mapped the bottom of the Arctic Ocean. Zeke Langdon was a meteorologist on the project. Ice Skate 2 was planned to be staffed in 6 month shifts, but due to soft ice surfaces for landing some crew members were stationed for much longer. At one point they lost all communications with anyone over their radios for one month except the expedition on the North Pole. At another point the ice sheet broke up and their fuel tanks started floating away from the base. 
They had to put pans under the plane engines as soon as they landed, since any oil spots would go straight through the ice in the intense sunshine. Their only casualty was a man who got too close to the propeller with the oil pan. Norbert Untersteiner was the project leader for Drifting Station Alpha and in 2008 produced and narrated a documentary about the project for the National Snow and Ice Data Center. Participating countries Sixty-seven countries took part in the IGY; the participants included Kenya and Uganda. Legacy In the end, the IGY was a resounding success, and it led to advancements that live on today. For example, the work of the IGY led directly to the Antarctic Treaty, which called for the use of Antarctica for peaceful purposes and cooperative scientific research. Since then, international cooperation has led to protecting the Antarctic environment, preserving historic sites, and conserving the animals and plants. Today, 41 nations have signed the Treaty and international collaborative research continues. The ICSU World Data System (WDS) was created by the 29th General Assembly of the International Council for Science (ICSU) and builds on the 50-year legacy of the former ICSU World Data Centres (WDCs) and former Federation of Astronomical and Geophysical data-analysis Services (FAGS). This World Data System hosts the repositories for data collected during the IGY. Seven of the 15 World Data Centers in the United States are co-located at NOAA National Data Centers or at NOAA affiliates. These ICSU Data Centers not only preserve historical data, but also promote research and ongoing data collection. The fourth International Polar Year, in 2007–2008, focused on climate change and its effects on the polar environment. Sixty countries participated in this effort and it included studies in the Arctic and Antarctic. In popular culture "I.G.Y. (What a Beautiful World)" is a track on Donald Fagen's 1982 album, The Nightfly. The song is sung from an optimistic viewpoint during the IGY, and features references to then-futuristic concepts, such as solar power (first used in 1958), Spandex (invented in 1959), space travel for entertainment, and an undersea international high-speed rail. The song peaked at #26 on the Billboard Hot 100 on 27 November – 11 December 1982 and was nominated for a Grammy Award for song of the year. The IGY is featured prominently in a 1957–1958 run of Pogo comic strips by Walt Kelly. The characters in the strip refer to the scientific initiative as the "G.O. Fizzickle Year". During this run, the characters try to make their own contributions to scientific endeavours, such as putting a flea on the moon. Compilations of the strips were published by Simon & Schuster SC in 1958 as G.O. Fizzickle Pogo and later Pogo's Will Be That Was in 1979. The run was also included in Pogo: The Complete Daily & Sunday Comic Strips Vol. 5: Out of This World at Home published by Fantagraphics in 2018. Jazz saxophonist and composer Gil Mellé recorded a "Dedicatory Piece to the Geo-Physical Year of 1957" for his album Primitive Modern, released by Prestige Records. The IGY was featured in a cartoon by Russell Brockbank in Punch in November 1956. It shows the three main superpowers UK, USA and USSR at the South Pole, each with a gathering of penguins which they are trying to educate with "culture". 
The penguins in the British camp are being bored with Francis Bacon; in the American camp they are happily playing baseball, while the Russian camp resembles a gulag, with barbed-wire fences and the penguins are made to march and perform military maneuvers. The Alistair MacLean novel Night Without End takes place in and around an IGY research station in Greenland. The IGY features in two episodes of the 1960–61 season of the documentary television series Expedition!: "The Frozen Continent" and "Man's First Winter at the South Pole". See also International Biological Program International Year of Planet Earth List of Antarctic expeditions Baker-Nunn satellite tracking camera Operation Moonwatch Operation Phototrack Sulphur Mountain Cosmic Ray Station References and sources References Sources University of Saskatchewan Archives History of ionosondes, at the U.K.'s Rutherford Appleton Laboratory History of arctic exploration James Van Allen, From High School to the Beginning of the Space Era: A Biographical Sketch by George Ludwig Fraser, Ronald. (1957). Once Round the Sun: The Story of the International Geophysical Year, 1957–58. London, England: Hodder and Stroughton Limited. Sullivan, Walter. (1961). Assault on the Unknown: The International Geophysical Year. New York, New York: McGraw-Hill Book Company. Wilson, J. Tuzo. (1961). IGY: The Year of the New Moons. New York, New York: Alfred A. Knopf, Inc. External links Documents regarding the International Geophysical Year, Dwight D. Eisenhower Presidential Library "IGY On the Ice", produced by Barbara Bogaev, Soundprint. 2011 radio documentary with John C. Behrendt, Tony Gowan, Phil Smith, and Charlie Bentley. The Papers of Robert L. Long Jr. at Dartmouth College Library 1957 in Antarctica 1957 in science 1958 in Antarctica 1958 in science Geophysics History of Earth science United Nations observances Science events History of the Ross Dependency
International Geophysical Year
[ "Physics" ]
3,257
[ "Applied and interdisciplinary physics", "Geophysics" ]
78,214
https://en.wikipedia.org/wiki/Permaculture
Permaculture is an approach to land management and settlement design that adopts arrangements observed in flourishing natural ecosystems. It includes a set of design principles derived using whole-systems thinking. It applies these principles in fields such as regenerative agriculture, town planning, rewilding, and community resilience. The term was coined in 1978 by Bill Mollison and David Holmgren, who formulated the concept in opposition to modern industrialized methods, instead adopting a more traditional or "natural" approach to agriculture. Permaculture has been criticised as being poorly defined and unscientific. Critics have pushed for less reliance on anecdote and extrapolation from ecological first principles, in favor of peer-reviewed research to substantiate productivity claims and to clarify methodology. Peter Harper from the Centre for Alternative Technology suggests that most of what passes for permaculture has no relevance to real problems. Defenders of permaculture reply that researchers have concluded it to be a "sustainable alternative to conventional agriculture", that it "strongly" enhances carbon stocks, soil quality, and biodiversity, making it "an effective tool to promote sustainable agriculture, ensure sustainable production patterns, combat climate change and halt and reverse land degradation and biodiversity loss". They further point out that most of permaculture’s most common methods, such as agroforestry, polycultures, and water harvesting features, are also backed by peer-reviewed research. Background History In 1911, Franklin Hiram King wrote Farmers of Forty Centuries: Or Permanent Agriculture in China, Korea and Japan, describing farming practices of East Asia designed for "permanent agriculture". In 1929, Joseph Russell Smith appended King's term as the subtitle for Tree Crops: A Permanent Agriculture, which he wrote in response to widespread deforestation, plow agriculture, and erosion in the eastern mountains and hill regions of the United States. He proposed the planting of tree fruits and nuts as human and animal food crops that could stabilize watersheds and restore soil health. Smith saw the world as an inter-related whole and suggested mixed systems of trees with understory crops. This book inspired individuals such as Toyohiko Kagawa who pioneered forest farming in Japan in the 1930s. Another pioneer, George Washington Carver, advocated for practices now common in permaculture, including the use of crop rotation to restore nitrogen to the soil and repair damaged farmland, in his work at the Tuskegee Institute between 1896 and his death in 1947. In his 1964 book Water for Every Farm, the Australian agronomist and engineer P. A. Yeomans advanced a definition of permanent agriculture as one that can be sustained indefinitely. Yeomans introduced both an observation-based approach to land use in Australia in the 1940s and in the 1950s the Keyline Design as a way of managing the supply and distribution of water in semi-arid regions. Other early influences include Stewart Brand's works, Ruth Stout and Esther Deans, who pioneered no-dig gardening, and Masanobu Fukuoka who, in the late 1930s in Japan, began advocating no-till orchards and gardens and natural farming. 
In the late 1960s, Bill Mollison, senior lecturer in Environmental Psychology at the University of Tasmania, and David Holmgren, graduate student at the then Tasmanian College of Advanced Education, started developing ideas about stable agricultural systems on the southern Australian island of Tasmania. Their recognition of the unsustainable nature of modern industrialized methods and their inspiration from Tasmanian Aboriginal and other traditional practices were critical to their formulation of permaculture. In their view, industrialized methods were highly dependent on non-renewable resources, and were additionally poisoning land and water, reducing biodiversity, and removing billions of tons of topsoil from previously fertile landscapes. They responded with permaculture. This term was first made public with the publication of their 1978 book Permaculture One. Following the publication of Permaculture One, Mollison responded to widespread enthusiasm for the work by traveling and teaching a three-week program that became known as the Permaculture Design Course. It addressed the application of permaculture design to growing in major climatic and soil conditions, to the use of renewable energy and natural building methods, and to "invisible structures" of human society. He found ready audiences in Australia, New Zealand, the USA, Britain, and Europe, and from 1985 also reached the Indian subcontinent and southern Africa. By the early 1980s, the concept had broadened from agricultural systems towards sustainable human habitats, and at the 1st Intl. Permaculture Convergence, a gathering of graduates of the PDC held in Australia, the curriculum was formalized and its format shortened to two weeks. After Permaculture One, Mollison further refined and developed the ideas while designing hundreds of properties. This led to the 1988 publication of his global reference work, Permaculture: A Designers' Manual. Mollison encouraged graduates to become teachers and set up their own institutes and demonstration sites. Critics suggest that this success weakened permaculture's social aspirations of moving away from industrial social forms. They argue that the self-help model (akin to franchising) has had the effect of creating market-focused social relationships that the originators initially opposed. Foundational ethics The ethics on which permaculture builds are: "Care of the Earth: Provision for all life systems to continue and multiply". "Care of people: Provision for people to access those resources necessary for their existence". "Setting limits to population and consumption: By governing our own needs, we can set resources aside to further the above principles". Mollison's 1988 formulation of the third ethic was restated by Holmgren in 2002 as "Set limits to consumption and reproduction, and redistribute surplus" and is elsewhere condensed to "share the surplus". Permaculture emphasizes patterns of landscape, function, and species assemblies. It determines where these elements should be placed so they can provide maximum benefit to the local environment. Permaculture maximizes synergy of the final design. The focus of permaculture, therefore, is not on individual elements, but rather on the relationships among them. The aim is for the whole to become greater than the sum of its parts, minimizing waste, human labour, and energy input, and maximizing benefits through synergy. 
Permaculture design is founded in replicating or imitating natural patterns found in ecosystems because these solutions have emerged through evolution over thousands of years and have proven to be effective. As a result, the implementation of permaculture design will vary widely depending on the region of the Earth it is located in. Because permaculture's implementation is so localized and place specific, scientific literature for the field is lacking or not always applicable. Design principles derive from the science of systems ecology and the study of pre-industrial examples of sustainable land use. A core theme of permaculture is the idea of "people care". Seeking prosperity begins within a local community or culture that can apply the tenets of permaculture to sustain an environment that supports them and vice versa. This is in contrast to typical modern industrialized societies, where locality and generational knowledge is often overlooked in the pursuit of wealth or other forms of societal leverage. Theory Design principles Holmgren articulated twelve permaculture design principles in his Permaculture: Principles and Pathways Beyond Sustainability: Observe and interact: Take time to engage with nature to design solutions that suit a particular situation. Catch and store energy: Develop systems that collect resources at peak abundance for use in times of need. Obtain a yield: Emphasize projects that generate meaningful rewards. Apply self-regulation and accept feedback: Discourage inappropriate activity to ensure that systems function well. Use and value renewable resources and services: Make the best use of nature's abundance: reduce consumption and dependence on non-renewable resources. Produce no waste: Value and employ all available resources: waste nothing. Design from patterns to details: Observe patterns in nature and society and use them to inform designs, later adding details. Integrate rather than segregate: Proper designs allow relationships to develop between design elements, allowing them to work together to support each other. Use small and slow solutions: Small and slow systems are easier to maintain, make better use of local resources, and produce more sustainable outcomes. Use and value diversity: Diversity reduces system-level vulnerability to threats and fully exploits its environment. Use edges and value the marginal: The border between things is where the most interesting events take place. These are often the system's most valuable, diverse, and productive elements. Creatively use and respond to change: A positive impact on inevitable change comes from careful observation, followed by well-timed intervention. Guilds A guild is a mutually beneficial group of species that form a part of the larger ecosystem. Within a guild each species of insect or plant provides a unique set of diverse services that work in harmony. Plants may be grown for food production, drawing nutrients from deep in the soil through tap roots, balancing nitrogen levels in the soil (legumes), for attracting beneficial insects to the garden, and repelling undesirable insects or pests. There are several types of guilds, such as community function guilds, mutual support guilds, and resource partitioning guilds. Community function guilds group species based on a specific function or niche that they fill in the garden. Examples of this type of guild include plants that attract a particular beneficial insect or plants that restore nitrogen to the soil. 
These types of guilds are aimed at solving specific problems which may arise in a garden, such as infestations of harmful insects and poor nutrition in the soil. Establishment guilds are commonly used when working to establish target species (the primary vegetables, fruits, herbs, etc. you want to be established in your garden) with the support of pioneer species (plants that will help the target species succeed). For example, in temperate climates, plants such as comfrey (as a weed barrier and dynamic accumulator), lupine (as a nitrogen fixer), and daffodil (as a gopher deterrent) can together form a guild for a fruit tree. As the tree matures, the support plants will likely eventually be shaded out and can be used as compost. Mature guilds form once your target species are established. For example, if the tree layer of your landscape closes its canopy, sun-loving support plants will be shaded out and die. Shade loving medicinal herbs such as ginseng, Black Cohosh, and goldenseal can be planted as an understory. Mutual support guilds group species together that are complementary by working together and supporting each other. This guild may include a plant that fixes nitrogen, a plant that hosts insects that are predators to pests, and another plant that attracts pollinators. Resource partitioning guilds group species based on their abilities to share essential resources with one another through a process of niche differentiation. An example of this type of guild includes placing a fibrous- or shallow-rooted plant next to a tap-rooted plant so that they draw from different levels of soil nutrients. Zones Zones intelligently organize design elements in a human environment based on the frequency of human use and plant or animal needs. Frequently manipulated or harvested elements of the design are located close to the house in zones 1 and 2. Manipulated elements located further away are used less frequently. Zones are numbered from 0 to 5 based on positioning. Zone 0 The house, or home center. Here permaculture principles aim to reduce energy and water needs harnessing natural resources such as sunlight, to create a harmonious, sustainable environment in which to live and work. Zone 0 is an informal designation, not specifically defined in Mollison's book. Zone 1 The zone nearest to the house, the location for those elements in the system that require frequent attention, or that need to be visited often, such as salad crops, herb plants, soft fruit like strawberries or raspberries, greenhouse and cold frames, propagation area, worm compost bin for kitchen waste, etc. Raised beds are often used in Zone 1 in urban areas. Zone 2 This area is used for siting perennial plants that require less frequent maintenance, such as occasional weed control or pruning, including currant bushes and orchards, pumpkins, sweet potato, etc. Also, a good place for beehives, larger-scale composting bins, etc. Zone 3 The area where main crops are grown, both for domestic use and for trade purposes. After establishment, care and maintenance required are fairly minimal (provided mulches and similar things are used), such as watering or weed control maybe once a week. Zone 4 A semi-wild area, mainly used for forage and collecting wild plants as well as production of timber for construction or firewood. Zone 5 A wilderness area. Humans do not intervene in zone 5 apart from observing natural ecosystems and cycles. This zone hosts a natural reserve of bacteria, molds, and insects that can aid the zones above it. 
Edge effect The edge effect in ecology is the increased diversity that results when two habitats meet. Permaculturists argue that these places can be highly productive. An example of this is a coast. Where land and sea meet is a rich area that meets a disproportionate percentage of human and animal needs. This idea is reflected in permacultural designs by using spirals in herb gardens, or creating ponds that have wavy undulating shorelines rather than a simple circle or oval (thereby increasing the amount of edge for a given area). On the other hand, in a keyhole bed, edges are minimized to avoid wasting space and effort. Common practices Hügelkultur Hügelkultur is the practice of burying wood to increase soil water retention. The porous structure of wood acts like a sponge when decomposing underground. During the rainy season, sufficient buried wood can absorb enough water to sustain crops through the dry season. This technique is a traditional practice that has been developed over centuries in Europe and has been recently adopted by permaculturalists. The Hügelkultur technique can be implemented through building mounds on the ground as well as in raised garden beds. In raised beds, the practice "imitates natural nutrient cycling found in wood decomposition and the high water-holding capacities of organic detritus, while also improving bed structure and drainage properties." This is done by placing wood material (e.g. logs and sticks) in the bottom of the bed before piling organic soil and compost on top. A study comparing the water retention capacities of Hügel raised beds to non-Hügel beds determined that Hügel beds are both lower maintenance and more efficient in the long term by requiring less irrigation. Sheet mulching Mulch is a protective cover placed over soil. Mulch material includes leaves, cardboard, and wood chips. These absorb rain, reduce evaporation, provide nutrients, increase soil organic matter, create habitat for soil organisms, suppress weed growth and seed germination, moderate diurnal temperature swings, protect against frost, and reduce erosion. Sheet mulching or lasagna gardening is a gardening technique that attempts to mimic the leaf cover that is found on forest floors. No-till gardening Edward Faulkner's 1943 book Plowman's Folly, King's 1946 pamphlet "Is Digging Necessary?", A. Guest's 1948 book "Gardening without Digging", and Fukuoka's "Do Nothing Farming" all advocated forms of no-till or no-dig gardening. No-till gardening seeks to minimise disturbance to the soil community so as to maintain soil structure and organic matter. Cropping practices Low-effort permaculture favours perennial crops which do not require tilling and planting every year. Annual crops inevitably require more cultivation. They can be incorporated into permaculture by using traditional techniques such as crop rotation, intercropping, and companion planting so that pests and weeds of individual annual crop species do not build up, and minerals used by specific crop plants do not become successively depleted. Companion planting aims to make use of beneficial interactions between species of cultivated plants. Such interactions include pest control, pollination, providing habitat for beneficial insects, and maximizing use of space; all of these may help to increase productivity. Rainwater harvesting Rainwater harvesting is the accumulation and storage of rainwater for reuse before it runs off or reaches the aquifer. 
It has been used to provide drinking water, water for livestock, and water for irrigation, as well as other typical uses. Rainwater collected from the roofs of houses and local institutions can make an important contribution to the availability of drinking water. It can supplement the water table and increase urban greenery. Water collected from the ground, sometimes from areas which are specially prepared for this purpose, is called stormwater harvesting. Greywater is wastewater generated from domestic activities such as laundry, dishwashing, and bathing, which can be recycled for uses such as landscape irrigation and constructed wetlands. Greywater is largely sterile, but not potable (drinkable). Keyline design is a technique for maximizing the beneficial use of water resources. It was developed in Australia by farmer and engineer P. A. Yeomans. Keyline refers to a contour line extending in both directions from a keypoint. Plowing above and below the keyline provides a watercourse that directs water away from a purely downhill course to reduce erosion and encourage infiltration. It is used in designing drainage systems. Compost production Vermicomposting is a common practice in permaculture. The practice involves using earthworms, such as red wigglers, to break down green and brown waste. The worms produce worm castings, which can be used to organically fertilize the garden. Worms are also introduced to garden beds, helping to aerate the soil and improve water retention. Worms may multiply quickly if provided conditions are ideal. For example, a permaculture farm in Cuba began with 9 tiger worms in 2001 and 15 years later had a population of over 500,000. The worm castings are particularly useful as part of a seed starting mix and regular fertilizer. Worm castings are reportedly more successful than conventional compost for seed starting. Sewage or blackwater contains human or animal waste. It can be composted, producing biogas and manure. Human waste can be sourced from a composting toilet, outhouse or dry bog (rather than a plumbed toilet). Economising on space Space can be saved in permaculture gardens with techniques such as herb spirals which group plants closely together. A herb spiral, invented by Mollison, is a round cairn of stones packed with earth at the base and sand higher up; sometimes there is a small pond on the south side (in the northern hemisphere). The result is a series of microclimate zones, wetter at the base, drier at the top, warmer and sunnier on the south side, cooler and drier to the north. Each herb is planted in the zone best suited to it. Domesticated animals Domesticated animals are often incorporated into site design. Activities that contribute to the system include: foraging to cycle nutrients, clearing fallen fruit, weed maintenance, spreading seeds, and pest maintenance. Nutrients are cycled by animals, transformed from their less digestible form (such as grass or twigs) into more nutrient-dense manure. Multiple animals can contribute, including cows, goats, chickens, geese, turkey, rabbits, and worms. An example is chickens who can be used to scratch over the soil, thus breaking down the topsoil and using fecal matter as manure. Factors such as timing and habits are critical. For example, animals require much more daily attention than plants. 
Fruit trees Masanobu Fukuoka experimented with no-pruning methods on his family farm in Japan, finding that trees which were never pruned could grow well, whereas previously-pruned trees often died when allowed to grow without further pruning. He felt that this reflected the Tao-philosophy of Wú wéi, meaning no action against nature or "do-nothing" farming. He claimed yields comparable to intensive arboriculture with pruning and chemical fertilisation. Applications Agroforestry Agroforestry uses the interactive benefits from combining trees and shrubs with crops or livestock. It combines agricultural and forestry technologies to create more diverse, productive, profitable, healthy and sustainable land-use systems. Trees or shrubs are intentionally used within agricultural systems, or non-timber forest products are cultured in forest settings. Forest gardens Forest gardens or food forests are permaculture systems designed to mimic natural forests. Forest gardens incorporate processes and relationships that the designers understand to be valuable in natural ecosystems. A mature forest ecosystem is organised into layers with constituents such as trees, understory, ground cover, soil, fungi, insects, and other animals. Because plants grow to different heights, a diverse community of organisms can occupy a relatively small space, each at a different layer. Rhizosphere: Root layers within the soil. The major components of this layer are the soil and the organisms that live within it such as plant roots and rhizomes (including root crops such as potatoes and other edible tubers), fungi, insects, nematodes, and earthworms. Soil surface/groundcover: Overlaps with the herbaceous layer and the groundcover layer; however, plants in this layer grow much closer to the ground, densely fill bare patches, and typically can tolerate some foot traffic. Cover crops retain soil and lessen erosion, along with green manures that add nutrients and organic matter, especially nitrogen. Herbaceous layer: Plants that die back to the ground every winter, if cold enough. No woody stems. Many beneficial plants such as culinary and medicinal herbs are in this layer, whether annuals, biennials, or perennials. Shrub layer: woody perennials of limited height. Includes most berry bushes. Understory layer: trees that flourish under the canopy. The canopy: the tallest trees. Large trees dominate, but typically do not saturate the area, i.e., some patches are devoid of trees. Vertical layer: climbers or vines, such as runner beans and lima beans (vine varieties). Suburban and urban permaculture The fundamental element of suburban and urban permaculture is the efficient utilization of space. Wildfire journal suggests using methods such as the keyhole garden, which require little space. Neighbors can collaborate to increase the scale of transformation, using sites such as recreation centers, neighborhood associations, city programs, faith groups, and schools. Columbia, an ecovillage in Portland, Oregon, consisting of 37 apartment condominiums, influenced its neighbors to implement permaculture principles, including in front-yard gardens. Suburban permaculture sites such as one in Eugene, Oregon, include rainwater catchment, edible landscaping, removing paved driveways, turning a garage into living space, and converting a south-side patio into a passive solar space. Vacant lot farms are community-managed farm sites, but are often seen by authorities as temporary rather than permanent. 
For example, Los Angeles' South Central Farm (1994–2006), one of the largest urban gardens in the United States, was bulldozed with approval from property owner Ralph Horowitz, despite community protest. The possibilities and challenges for suburban or urban permaculture vary with the built environment around the world. For example, land is used more ecologically in Jaisalmer, India than in American planned cities such as Los Angeles: Marine systems Permaculture derives its origin from agriculture, although the same principles, especially its foundational ethics, can also be applied to mariculture, particularly seaweed farming. In Marine Permaculture, artificial upwelling of cold, deep ocean water is induced. When an attachment substrate is provided in association with such an upwelling, and kelp sporophytes are present, a kelp forest ecosystem can be established (since kelp needs the cool temperatures and abundant dissolved macronutrients present in such an environment). Microalgae proliferate as well. Marine forest habitat is beneficial for many fish species, and the kelp is a renewable resource for food, animal feed, medicines and various other commercial products. It is also a powerful tool for carbon fixation. The upwelling can be powered by renewable energy on location. Vertical mixing has been reduced due to ocean stratification effects associated with climate change. Reduced vertical mixing and marine heatwaves have decimated seaweed ecosystems in many areas. Marine permaculture mitigates this by restoring some vertical mixing and preserves these important ecosystems. By preserving and regenerating habitat offshore on a platform, marine permaculture employs natural processes to regenerate marine life. Grazing Grazing is blamed for much destruction. However, when grazing is modeled after nature, it can have the opposite effect. Cell grazing is a system of grazing in which herds or flocks are regularly and systematically moved to fresh range with the intent to maximize forage quality and quantity. Sepp Holzer and Joel Salatin have shown how grazing can start ecological succession or prepare ground for planting. Allan Savory's holistic management technique has been likened to "a permaculture approach to rangeland management". One variation is conservation grazing, where the primary purpose of the animals is to benefit the environment and the animals are not necessarily used for meat, milk or fiber. Sheep can replace lawn mowers. Goats and sheep can eat invasive plants. Natural building Natural building involves using a range of building systems and materials that apply permaculture principles. The focus is on durability and the use of minimally processed, plentiful, or renewable resources, as well as those that, while recycled or salvaged, produce healthy living environments and maintain indoor air quality. For example, cement, a common building material, emits carbon dioxide and is harmful to the environment while natural building works with the environment, using materials that are biodegradable, such as cob, adobe, rammed earth (unburnt clay), and straw bale (which insulates as well as modern synthetic materials). Issues Intellectual property Trademark and copyright disputes surround the word permaculture. Mollison's books claimed on the copyright page, "The contents of this book and the word PERMACULTURE are copyright." Eventually Mollison acknowledged that he was mistaken and that no copyright protection existed. 
In 2000, Mollison's U.S.-based Permaculture Institute sought a service mark for the word permaculture when used in educational services such as conducting classes, seminars, or workshops. The service mark would have allowed Mollison and his two institutes to set enforceable guidelines regarding how permaculture could be taught and who could teach it, particularly with relation to the PDC, despite the fact that he had been certifying teachers since 1993. This attempt failed and was abandoned in 2001. Mollison's application for trademarks in Australia for the terms "Permaculture Design Course" and "Permaculture Design" was withdrawn in 2003. In 2009 he sought a trademark for "Permaculture: A Designers' Manual" and "Introduction to Permaculture", the names of two of his books. These applications were withdrawn in 2011. Australia has never authorized a trademark for the word permaculture. Methodology Permaculture has been criticised as being poorly defined and unscientific. Critics have pushed for less reliance on anecdote and extrapolation from ecological first principles in favor of peer-reviewed research to substantiate productivity claims and to clarify methodology. Peter Harper from the Centre for Alternative Technology suggests that most of what passes for permaculture is irrelevant to real problems. Harper notes that British organic farmers are "embarrassed or openly derisive" of permaculture, while the permaculture expert Robert Kourik found the supposed advantages of "less- or no-work gardening, bountiful yields, and the soft fuzzy glow of knowing that the garden will ... live on without you" were often illusory. Harper found "many permacultures" are based on ideas ranging from practical farming techniques to "bullshit ... no more than charming cultural graces." Defenders respond that permaculture is not yet a mainstream scientific tradition and lacks the resources of mainstream industrial agriculture. Rafter Ferguson and Sarah Lovell point out that permaculturalists rarely engage with mainstream research in agroecology, agroforestry, or ecological engineering, and claim that mainstream science has an elitist or pro-corporate bias. Julius Krebs and Sonja Bach argue in Sustainability that there is "scientific evidence for all twelve [of Holmgren's] principles". In 2017, Ferguson and Lovell presented sociological and demographic data from 36 self-described American permaculture farms. The farms were well diversified, with a median effective number of enterprises per farm of 3.6 (out of a maximum of 6 in the analysis method used). Business strategies included small mixed farms, integrated producers of perennial and animal crops, mixes of production and services, livestock, and service-based businesses. Median household income ($38,750) was less than either the national median household income ($51,017) or the national median farm household income ($68,680). A 2019 study by Hirschfeld and Van Acker found that adopting permaculture consistently encouraged the cultivation of perennials, crop diversity, landscape heterogeneity, and nature conservation. They discovered that grass-roots adopters were "remarkably consistent" in their implementation of permaculture, leading them to conclude that the movement could exert influence over positive agroecological transitions. 
In 2024, Reiff and colleagues stated that permaculture is a "sustainable alternative to conventional agriculture", and that it "strongly" enhances carbon stocks, soil quality, and biodiversity, making it "an effective tool to promote sustainable agriculture, ensure sustainable production patterns, combat climate change and halt and reverse land degradation and biodiversity loss." They point out that most of permaculture's commonest methods, such as agroforestry, polycultures, and water harvesting features, are backed by peer-reviewed research. See also References Sources The first systematic review of the permaculture literature, from the perspective of agroecology. Jacke, Dave with Eric Toensmeier. Edible Forest Gardens. Volume I: Ecological Vision and Theory for Temperate-Climate Permaculture, Volume II: Ecological Design and Practice for Temperate-Climate Permaculture. Edible Forest Gardens (US) 2005 Loofs, Mona. Permaculture, Ecology and Agriculture: An investigation into Permaculture theory and practice using two case studies in northern New South Wales Honours thesis, Human Ecology Program, Department of Geography, Australian National University 1993 Macnamara, Looby. People and Permaculture: caring and designing for ourselves, each other and the planet. Permanent Publications (UK) (2012) Odum, H. T., Jorgensen, S.E. and Brown, M.T. 'Energy hierarchy and transformity in the universe', in Ecological Modelling, 178, pp. 17–28 (2004). Paull, J. "Permanent Agriculture: Precursor to Organic Farming", Journal of Bio-Dynamics Tasmania, no.83, pp. 19–21, 2006. Organic eprints. Shepard, Mark: Restoration Agriculture – Redesigning Agriculture in Nature's Image, Acres US, 2013. Woodrow, Linda. The Permaculture Home Garden. Penguin Books (Australia). Yeomans, P.A. Water for Every Farm: A practical irrigation plan for every Australian property, KG Murray, Sydney, NSW, Australia (1973). External links Ethics and principles of permaculture (Holmgren's) Permaculture Commons – collection of permaculture material under free licenses The 15 pamphlets based on the 1981 Permaculture Design Course given by Bill Mollison (co-founder of permaculture) all in 1 PDF file The Permaculture Research Institute – Permaculture Forums, Courses, Information, News and Worldwide Reports The Worldwide Permaculture Network – Database of permaculture people and projects worldwide Australian inventions Environmental design Environmental social science concepts Horticulture Landscape architecture Rural community development Sustainable agriculture Sustainable design Sustainable food system Sustainable gardening Systems ecology
Permaculture
[ "Engineering", "Environmental_science" ]
6,748
[ "Environmental design", "Systems ecology", "Landscape architecture", "Environmental social science concepts", "Design", "Environmental social science", "Architecture" ]
78,439
https://en.wikipedia.org/wiki/Dobson%20unit
The Dobson unit (DU) is a unit of measurement of the amount of a trace gas in a vertical column through the Earth's atmosphere. It originated in, and continues to be primarily used for, the study of atmospheric ozone, whose total column amount, usually termed "total ozone", and sometimes "column abundance", is dominated by the high concentrations of ozone in the stratospheric ozone layer. The Dobson unit is defined as the thickness (in units of 10 μm) of that layer of pure gas which would be formed by the total column amount at standard conditions for temperature and pressure (STP). This is sometimes referred to as a 'milli-atmo-centimeter'. A typical column amount of 300 DU of atmospheric ozone therefore would form a 3 mm layer of pure gas at the surface of the Earth if its temperature and pressure conformed to STP. The Dobson unit is named after Gordon Dobson, a researcher at the University of Oxford who in the 1920s built the first instrument to measure total ozone from the ground, making use of a double prism monochromator to measure the differential absorption of different bands of solar ultraviolet radiation by the ozone layer. This instrument, called the Dobson ozone spectrophotometer, has formed the backbone of the global network for monitoring atmospheric ozone and was the source of the discovery in 1984 of the Antarctic ozone hole. Ozone NASA uses a baseline value of 220 DU for ozone. This was chosen as the starting point for observations of the Antarctic ozone hole, since values of less than 220 Dobson units were not found before 1979. Also, from direct measurements over Antarctica, a column ozone level of less than 220 Dobson units is a result of the ozone loss from chlorine and bromine compounds. Sulfur dioxide In addition, Dobson units are often used to describe total column densities of sulfur dioxide, which occurs in the atmosphere in small amounts due to the combustion of fossil fuels, from biological processes releasing dimethyl sulfide, or by natural combustion such as forest fires. Large amounts of sulfur dioxide may be released into the atmosphere as well by volcanic eruptions. The Dobson unit is used to describe total column amounts of sulfur dioxide because it appeared in the early days of ozone remote sensing on ultraviolet satellite instruments (such as TOMS). Derivation The Dobson unit arises from the ideal gas law, $PV = nRT$, where P and V are pressure and volume respectively, n is the number of moles of gas, R is the gas constant (8.314 J/(mol·K)), and T is the temperature in kelvins (K). The number density of air is the number of molecules or atoms per unit volume, $n_{\text{air}} = n N_A / V$, and when this is substituted into the ideal gas law, the number density of air is found from the pressure, temperature and gas constant: $n_{\text{air}} = N_A P / (R T)$. The number density (molecules per volume) of air at standard temperature and pressure (T = 273 K and P = 101325 Pa) is, by using this equation and the unit conversion of joules to pascal cubic metres (1 J = 1 Pa·m³), $n_{\text{air}} \approx 2.69 \times 10^{25}$ molecules/m³. A Dobson unit is the total amount of a trace gas per unit area. In atmospheric sciences, this is referred to as a column density. To convert molecules per cubic metre (a volume density) to molecules per square metre or square centimetre (an area density), the number density is integrated over the height of the column. Per the definition of Dobson units, 1 DU = 0.01 mm of trace gas when compressed down to sea level at standard temperature and pressure; the arithmetic can be checked with the short numerical sketch below.
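The following Python sketch reproduces the numbers in the derivation above. It is only an illustrative cross-check: the gas constant, STP values and layer thickness are those quoted in the text, while Avogadro's number is the standard value rather than a figure from this article's sources.

R = 8.314          # gas constant, J/(mol*K), as quoted above
N_A = 6.022e23     # Avogadro's number, molecules per mole (standard value)
T = 273.0          # standard temperature, K
P = 101325.0       # standard pressure, Pa

# Ideal gas law: number density = N_A * P / (R * T); 1 J = 1 Pa*m^3, so units cancel.
n_air = N_A * P / (R * T)
print(f"number density at STP: {n_air:.3e} molecules/m^3")   # about 2.69e25

# 1 DU is a layer 0.01 mm (1e-5 m) thick of the pure gas at STP,
# so the column density is the number density times that thickness.
one_DU = n_air * 1e-5
print(f"1 DU = {one_DU:.3e} molecules/m^2")        # about 2.69e20
print(f"1 DU = {one_DU * 1e-4:.3e} molecules/cm^2")  # about 2.69e16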
So integrating the number density of air from 0 to 0.01 mm ($10^{-5}$ m) gives the column density equal to 1 DU: $1\ \text{DU} = (2.69 \times 10^{25}\ \text{m}^{-3}) \times (10^{-5}\ \text{m})$. Thus the value of 1 DU is $2.69 \times 10^{20}$ molecules per square metre, or equivalently about $2.69 \times 10^{16}$ molecules per square centimetre. References Ozone Atmospheric chemistry Units of measurement
Dobson unit
[ "Chemistry", "Mathematics" ]
778
[ "Quantity", "Oxidizing agents", "Ozone", "nan", "Units of measurement" ]
78,448
https://en.wikipedia.org/wiki/Downwelling
Downwelling is the downward movement of a fluid parcel and its properties (e.g., salinity, temperature, pH) within a larger fluid. It is closely related to upwelling, the upward movement of fluid. While downwelling is most commonly used to describe an oceanic process, it's also used to describe a variety of Earth phenomena. This includes mantle dynamics, air movement, and movement in freshwater systems (e.g., large lakes). This article will focus on oceanic downwelling and its important implications for ocean circulation and biogeochemical cycles. Two primary mechanisms transport water downward: buoyancy forcing and wind-driven Ekman transport (i.e., Ekman pumping). Downwelling has important implications for marine life. Surface water generally has a lower nutrient content compared to deep water due to primary production using nutrients in the photic zone. Surface water is, however, high in oxygen compared to the deep ocean due to photosynthesis and air-sea gas exchange. When water is moved downwards, oxygen is pumped below the surface, where it is used by decaying organisms. Downwelling events are accompanied by low primary production in the surface ocean due to a lack of nutrient supply from below. Mechanisms Buoyancy Buoyancy-forced downwelling, often termed convection, is the deepening of a water parcel due to a change in the density of that parcel. Density changes in the surface ocean are primarily the result of evaporation, precipitation, heating, cooling, or the introduction and mixing of an alternate water or salinity source, such as river input or brine rejection. Notably, convection is the driving force behind global thermohaline circulation. For a water parcel to move downward, the density of that parcel must increase; therefore, evaporation, cooling, and brine rejection are the processes that control buoyancy-forced downwelling. Wind-driven Ekman transport Ekman transport is the net mass transport of the ocean surface resulting from wind stress and the Coriolis force. As wind blows across the ocean surface, it causes a frictional force that drags the uppermost surface water along with it. Due to the Earth's rotation, these surface currents develop at 45° to the wind direction. However, compounding frictional forces cause the net transport across the Ekman layer to be 90° to the right of wind stress in the Northern Hemisphere and 90° to the left in the Southern Hemisphere. Ekman transport piles up water between the trade winds and westerlies in subtropical gyres, or near the shore during coastal downwelling. The increased mass of surface water creates high-pressure zones that push water downward. It can also create long convergence zones during sustained winds to create Langmuir circulation. Buoyancy-forced downwelling Buoyancy is lost through cooling, evaporation, and brine rejection through sea ice formation. Buoyancy loss occurs on many spatial and temporal scales. In the open ocean, there are regions where cooling and mixed layer deepening occurs at night, and the ocean re-stratifies during the day. On annual cycles, widespread cooling begins in the fall, and convective mixed layer deepening can reach hundreds of meters into the ocean interior. In comparison, the wind-driven mixed layer depth is limited to 150 m. Large evaporation events can cause convection; however, latent heat loss associated with evaporation is usually dominant and in the winter, this process drives Mediterranean Sea deep water formation. 
In select locations – the Greenland Sea, Labrador Sea, Weddell Sea, and Ross Sea – deep convection (>1000 m) ventilates (oxygenates) most of the deep water of the global ocean and drives the thermohaline circulation. Wind-forced downwelling Subtropical gyres Subtropical gyres act on the largest scale at which downwelling is observed. Winds to the north and south of each ocean basin blow opposite each other such that Ekman transport moves water toward the basin's center. This movement piles up water, creating a high-pressure zone in the center of the gyre and low pressure on the borders, and deepens the mixed layer. The water in this zone would diffuse outward if the planet weren't spinning. However, because of the Coriolis force, the water rotates clockwise in the Northern Hemisphere and counterclockwise in the southern, creating a gyre. While it spins, the rotating high-pressure zone forces water downward, resulting in downwelling. Typical downwelling rates associated with ocean gyres are on the order of tens of meters per year. Coastal downwelling Coastal downwelling occurs when winds blow parallel to the shore. With such winds, Ekman transport directs water movement towards or directly away from the shore. If Ekman transport moves water towards the shore, the shoreline acts as a barrier causing surface water to pile up onshore. The piled-up water is forced downwards, pumping warm, nutrient-poor, oxygenated water below the mixed layer. Langmuir circulation Langmuir circulation develops from the wind, which, through Ekman transport, creates alternating zones of convergence and divergence at the ocean surface. In convergent zones, marked by long strips of floating debris accumulation such as the Great Pacific Garbage Patch, coherent vortices develop that transport surface waters to the base of the mixed layer. Also, direct wind stirring and current shear at the base of the mixed layer can create instabilities and turbulence that further mix properties within and at the base. Association with other ocean features Eddies Mesoscale (10–100's of km) and submesoscale (<1–10 km) eddies are ubiquitous features of the upper ocean. Eddies have either a cyclonic (cold-core) or anticyclonic (warm-core) rotation. Warm-core eddies are characterized by anticyclonic rotation that directs surface waters inward, creating high sea surface temperature and height. The high central hydrostatic pressure maintained by this rotation causes the downwelling of water and the depression of isopycnals (surfaces of constant density; see Eddy pumping) at scales of hundreds of meters per year. The typical result is a deeper surface layer of warm water often characterized by low primary production. Warm-core eddies play multiple important roles in biogeochemical cycling and air-sea interactions. For example, these eddies are seen to decrease ice formation in the Southern Ocean due to their high sea surface temperatures. It has also been observed that air-sea fluxes of carbon dioxide decrease at the center of these eddies and that temperature was the leading cause of this inhibited flux. Warm-core eddies transport oxygen into the ocean interior (below the photic zone), which supports respiration. Although compounds such as oxygen are transported into the deep ocean, there is an observed decrease in carbon export in warm-core eddies due to intensified stratification at their center. Such stratification inhibits the mixing of nutrient-rich waters to the surface where they could fuel primary production.
In this case, since primary production stays low, carbon export potential remains low. Fronts and filaments Ocean fronts are formed by the horizontal convergence of dissimilar water masses. They can develop at regions of freshwater input marked by horizontal density gradients due to salinity and temperature differences, or by the stretching and elongation of rotating flows. Submesoscale fronts and filaments are formed by ocean current interactions and flow instabilities. They are regions that connect the surface layer and the ocean interior. These regions are characterized by horizontal buoyancy gradients at scales of less than 10 km, caused by sloping isopycnals. Two primary mechanisms transport surface waters to depth: the adiabatic tilting and relaxation of these isopycnals, and along-isopycnal flow or subduction. These mechanisms can transport surface properties, such as heat, below the mixed layer and assist in carbon sequestration through the biological pump. Numerical models predict vertical velocities at submesoscale fronts on the order of 100 m/day. However, vertical velocities over 1000 m/day have been observed using ocean floats. These observations are rare because ship-based sensors do not have sufficient accuracy to measure vertical velocities. Variability Downwelling trends differ between latitudes and can be associated with variations in wind strength and changing seasons. In some areas, coastal downwelling is a seasonal event pushing nutrient-depleted waters towards the shore. The relaxation or reversal of upwelling-favorable winds creates periods of downwelling as waters pile up along the coast. Temperature differences and wind patterns are seasonal in temperate latitudes, creating highly variable upwelling and downwelling conditions. For example, in fall and winter along the Pacific Northwest coast in the United States, southerly winds in the Gulf of Alaska and California Current system create downwelling-favorable conditions, transporting offshore water from the south and west towards the coast. These downwelling events tend to last for days, can be associated with winter storms, and contribute to the low levels of primary production observed during fall and winter. In contrast, the "spring transition" at the end of the downwelling season and the beginning of the upwelling season is marked by the presence of cold, nutrient-rich, upwelled water at the coast, which stimulates high levels of primary production. In contrast to seasonally variable temperate regions, downwelling is relatively steady at the poles as cold air decreases the temperature of salty water transported by gyres from the tropics. During the neutral and La Niña phases of the El Niño Southern Oscillation (ENSO), steady easterly trade winds in equatorial regions can cause water to pile up in the western Pacific. A weakening of these trade winds can create downwelling Kelvin waves, which propagate eastward along the equator toward the eastern Pacific. A series of Kelvin waves associated with anomalously warm sea surface temperatures in the eastern Pacific can be a predecessor to an El Niño event. During the El Niño phase of ENSO, the disruption of trade winds causes ocean water to pile up off the western coast of South America. This shift is associated with a decrease in upwelling and may enhance coastal downwelling. Effects on ocean biogeochemistry Biogeochemical cycling related to downwelling is constrained by the location and frequency at which this process occurs.
The majority of downwelling, as described above, occurs in polar regions as deep and bottom water formation or in the center of subtropical gyres. Bottom and deep water formation in the Southern Ocean (Weddell Sea) and North Atlantic Ocean (Greenland, Labrador, Norwegian, and Mediterranean Seas) is a major contributor towards the removal and sequestration of anthropogenic carbon dioxide, dissolved organic carbon (DOC), and dissolved oxygen. Dissolved gas solubility is greater in cold water allowing for increased gas concentrations. The Southern Ocean alone has been shown to be the most important high-latitude region controlling pre-industrial atmospheric carbon dioxide by general circulation model simulations. Circulation of water into the Antarctic deep-water formation region is one of the main factors drawing carbon dioxide into the surface oceans. The other is the biological pump, which is typically limited by iron in the Southern Ocean in areas with high nutrients and low chlorophyll (HNLC). DOC can become entrained during bottom and deep water formation which is a large portion of biogenic carbon export. It is thought that the export of DOC is up to 30% of the biogenic carbon that makes it into the deep ocean. The intensity of the DOC flux to depth relies on the strength of winter convection, which also affects the microbial food web, causing variations in the DOC exported to depth. Dissolved oxygen is also downwelled at bottom and deep water formation sites, contributing to elevated dissolved oxygen concentrations below 1000 meters. Subtropical gyres are typically limited in macro and micro nutrients such as nitrogen, phosphorus, and iron; resulting in picophytoplankton communities that have low nutrient requirements. This is in part due to consistent downwelling, which transports nutrients away from the photic zone. These oligotrophic areas are thought to be sustained by rapid nutrient cycling which could leave little carbon remaining that could be sequestered. The dynamics of picophytoplankton's role in carbon cycling in subtropical gyres is poorly understood and is being actively researched. Areas with the highest primary productivity play significant roles in biogeochemical cycling of carbon and nitrogen. Downwelling can either alleviate or induce anoxic conditions, depending on the initial conditions and location. Sustained periods of upwelling can cause deoxygenation which is relieved by a downwelling event transporting dissolved oxygen back down to depths. Anoxic conditions can also result from persistent downwelling after an algal bloom of high-biomass dinoflagellates. The accumulation of dinoflagellates and other forms of biomass nearshore due to downwelling will eventually cause nutrient depletion and mortality of organisms. As the biomass decays, oxygen becomes depleted by heterotrophic bacteria, inducing anoxic conditions. References External links Wind-Driven Surface Currents: Upwelling and Downwelling Background Oceanography Meteorological phenomena
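To give a sense of scale for the wind-driven Ekman transport described earlier in this article, the following Python sketch uses the standard bulk relation in which the volume transport per unit width equals the wind stress divided by the product of water density and the Coriolis parameter. The relation is textbook oceanography rather than a formula given in this article, and the wind stress and latitude are illustrative assumptions.

import math

omega = 7.292e-5   # Earth's rotation rate, rad/s
rho = 1025.0       # seawater density, kg/m^3
tau = 0.1          # wind stress, N/m^2 (a moderate wind; assumed example value)
lat = 45.0         # latitude in degrees (assumed example value)

f = 2 * omega * math.sin(math.radians(lat))   # Coriolis parameter, 1/s
transport = tau / (rho * f)                   # m^2/s, i.e. m^3/s per metre of coastline

print(f"Coriolis parameter: {f:.2e} 1/s")
print(f"Ekman volume transport: {transport:.2f} m^3/s per metre of coastline")
# The transport is directed 90 degrees to the right of the wind in the Northern
# Hemisphere; where it converges (against a coast or in a gyre interior) the
# piled-up water is forced downward.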
Downwelling
[ "Physics", "Environmental_science" ]
2,759
[ "Physical phenomena", "Earth phenomena", "Applied and interdisciplinary physics", "Hydrology", "Oceanography", "Meteorological phenomena" ]
1,105,823
https://en.wikipedia.org/wiki/Degaussing
Degaussing, or deperming, is the process of decreasing or eliminating a remnant magnetic field. It is named after the gauss, a unit of magnetism, which in turn was named after Carl Friedrich Gauss. Due to magnetic hysteresis, it is generally not possible to reduce a magnetic field completely to zero, so degaussing typically induces a very small "known" field referred to as bias. Degaussing was originally applied to reduce ships' magnetic signatures during World War II. Degaussing is also used to reduce magnetic fields in tape recorders and cathode-ray tube displays, and to destroy data held on magnetic storage. Ships' hulls The term was first used by then-Commander Charles F. Goodeve, Royal Canadian Naval Volunteer Reserve, during World War II while trying to counter the German magnetic naval mines that were wreaking havoc on the British fleet. The mines detected the increase in the magnetic field when the steel in a ship concentrated the Earth's magnetic field over it. Admiralty scientists, including Goodeve, developed a number of systems to induce a small "N-pole up" field into the ship to offset this effect, meaning that the net field was the same as the background. Since the Germans used the gauss as the unit of the strength of the magnetic field in their mines' triggers (not yet a standard measure), Goodeve referred to the various processes to counter the mines as degaussing. The term became a common word. The original method of degaussing was to install electromagnetic coils into the ships, known as coiling. In addition to being able to bias the ship continually, coiling also allowed the bias field to be reversed in the southern hemisphere, where the mines were set to detect "N-pole down" fields. British ships, notably cruisers and battleships, were well protected by about 1943. Installing such special equipment was, however, far too expensive and difficult to service all ships that would need it, so the navy developed an alternative called wiping, which Goodeve also devised. In this procedure, a large electrical cable with a pulse of about 2000 amperes flowing through it was dragged upwards on the side of the ship, starting at the waterline. For submarines, the current came from the vessels' own propulsion batteries. This induced the proper field into the ship in the form of a slight bias. It was originally thought that the pounding of the sea and the ship's engines would slowly randomize this field, but in testing, this was found not to be a real problem. A more serious problem was later realized: as a ship travels through Earth's magnetic field, it will slowly pick up that field, counteracting the effects of the degaussing. From then on captains were instructed to change direction as often as possible to avoid this problem. Nevertheless, the bias did wear off eventually, and ships had to be degaussed on a schedule. Smaller ships continued to use wiping through the war. To aid the Dunkirk evacuation, the British wiped 400 ships in four days. During World War II, the United States Navy commissioned a specialized class of degaussing ships that were capable of performing this function. One of them, USS Deperm (ADG-10), was named after the procedure. After the war, the capabilities of the magnetic fuzes were greatly improved, by detecting not the field itself, but changes in it. This meant a degaussed ship with a magnetic hot spot would still set off the mine. 
Additionally, the precise orientation of the field was also measured, something a simple bias field could not remove, at least not for all points on the ship. A series of ever-increasingly complex coils were introduced to offset these fuze improvements, with modern systems including no fewer than three separate sets of coils to cancel the field in all axes. Degaussing range The effectiveness of ships' degaussing was monitored by shore-based degaussing ranges (or degaussing stations, magnetic ranges) installed beside shipping channels outside ports. The vessel under test passed at a steady speed over loops on the seabed that were monitored from buildings on the shore. The installation was used both to establish the magnetic characteristics of a hull to establish the correct value of degaussing equipment to be installed, or as a "spot check" on vessels to confirm that degaussing equipment was performing correctly. Some stations had active coils that provided magnetic treatment, offering to un-equipped ships some limited protection against future encounters with magnetic mines. High-temperature superconductivity The US Navy tested, in April 2009, a prototype of its High-Temperature Superconducting Degaussing Coil System, referred to as "HTS Degaussing". The system works by encircling the vessel with superconducting ceramic cables whose purpose is to neutralize the ship's magnetic signature, as in the legacy copper systems. The main advantage of the HTS Degaussing Coil system is greatly reduced weight (sometimes by as much as 80%) and increased efficiency. A ferrous-metal-hulled ship or submarine, by its very nature, develops a magnetic signature as it travels, due to a magneto-mechanical interaction with Earth's magnetic field. It also picks up the magnetic orientation of the Earth's magnetic field where it is built. This signature can be exploited by magnetic mines or facilitate the detection of a submarine by ships or aircraft with magnetic anomaly detection (MAD) equipment. Navies use the deperming procedure, in conjunction with degaussing, as a countermeasure against this. Specialized deperming facilities, such as the United States Navy's Lambert's Point Deperming Station at Naval Station Norfolk, or Pacific Fleet Submarine Drive-In Magnetic Silencing Facility (MSF) at Joint Base Pearl Harbor–Hickam, are used to perform the procedure. During a close-wrap magnetic treatment, heavy-gauge copper cables encircle the hull and superstructure of the vessel, and high electrical currents (up to 4000 amperes) are pulsed through the cables. This has the effect of "resetting" the ship's magnetic signature to the ambient level after flashing its hull with electricity. It is also possible to assign a specific signature that is best suited to the particular area of the world in which the ship will operate. In drive-in magnetic silencing facilities, all cables are either hung above, below and on the sides, or concealed within the structural elements of facilities. Deperming is "permanent". It is only done once unless major repairs or structural modifications are done to the ship. Early experiments With the introduction of iron ships, the adverse effect of the metal hull on steering compasses was noted. It was also observed that lightning strikes had a significant effect on compass deviation, identified in some extreme cases as being caused by the reversal of the ship's magnetic signature. 
In 1866, Evan Hopkins of London registered a patent for a process "to depolarise iron vessels and leave them thenceforth free from any compass-disturbing influence whatever". The technique was described as follows: "For this purpose he employed a number of Grove's batteries and electromagnets. The latter were to be passed along the plates till the desired end had been obtained... the process must not be overdone for fear of re-polarising in the opposite direction." The invention was, however, reported to be "incapable of being carried to a successful issue", and "quickly died a natural death". Color cathode-ray tubes Color CRT displays, the technology underlying many television and computer monitors before the early 2010s, require degaussing. Many CRT displays use a shadow mask (a perforated metal screen) near the front of the tube to ensure that each electron beam hits the corresponding phosphors of the correct color. If this plate becomes magnetized (e.g. if someone sweeps a magnet on the screen or places loudspeakers nearby), it imparts an undesired deflection to the electron beams and the displayed image becomes distorted and discolored. To minimize this, CRTs have a copper or aluminum coil wrapped around the front of the display, known as the degaussing coil. Monitors without an internal coil can be degaussed using an external handheld version. Internal degaussing coils in CRTs are generally much weaker than external degaussing coils, since a better degaussing coil takes up more space. A degauss circuit induces an oscillating magnetic field with a decreasing amplitude which leaves the shadow mask with a reduced residual magnetization. Many televisions and monitors automatically degauss their picture tube when switched on, before an image is displayed. The high current surge that takes place during this automatic degauss is the cause of an audible "thunk", a loud hum or some clicking noises, which can be heard (and felt) when televisions and CRT computer monitors are switched on, due to the capacitors discharging and injecting current into the coil. Visually, this causes the image to shake dramatically for a short period of time. A degauss option is also usually available for manual selection in the operations menu in such appliances. In most commercial equipment the AC current surge to the degaussing coil is regulated by a simple positive temperature coefficient (PTC) thermistor device, which initially has a low resistance, allowing a high current, but quickly changes to a high resistance, allowing minimal current, due to self-heating of the thermistor. Such devices are designed for a one-off transition from cold to hot at power up; "experimenting" with the degauss effect by repeatedly switching the device on and off may cause this component to fail. The effect will also be weaker, since the PTC will not have had time to cool off. Magnetic data storage media Data is stored in the magnetic media, such as hard drives, floppy disks, and magnetic tape, by making very small areas called magnetic domains change their magnetic alignment to be in the direction of an applied magnetic field. This phenomenon occurs in much the same way a compass needle points in the direction of the Earth's magnetic field. Degaussing, commonly called erasure, leaves the domains in random patterns with no preference to orientation, thereby rendering previous data unrecoverable. There are some domains whose magnetic alignment is not randomized after degaussing. 
The information these domains represent is commonly called magnetic remanence or remanent magnetization. Proper degaussing will ensure there is insufficient magnetic remanence to reconstruct the data. Erasure via degaussing may be accomplished in two ways: in AC erasure, the medium is degaussed by applying an alternating field that is reduced in amplitude over time from an initial high value (i.e., AC powered); in DC erasure, the medium is saturated by applying a unidirectional field (i.e., DC powered or by employing a permanent magnet). A degausser is a device that can generate a magnetic field for degaussing magnetic storage media. The magnetic field needed for degaussing magnetic data storage media is a powerful one that normal magnets cannot easily achieve and maintain. Irreversible damage to some media types Many forms of generic magnetic storage media can be reused after degaussing, including reel-to-reel audio tape, VHS videocassettes, and floppy disks. These older media types are simply a raw medium which are overwritten with fresh new patterns, created by fixed-alignment read/write heads. For certain forms of computer data storage, however, such as modern hard disk drives and some tape drives, degaussing renders the magnetic media completely unusable and damages the storage system. This is due to the devices having an infinitely variable read/write head positioning mechanism which relies on special servo control data (e.g. Gray Code) that is meant to be permanently recorded onto the magnetic media. This servo data is written onto the media a single time at the factory using special-purpose servo writing hardware. The servo patterns are normally never overwritten by the device for any reason and are used to precisely position the read/write heads over data tracks on the media, to compensate for sudden jarring device movements, thermal expansion, or changes in orientation. Degaussing indiscriminately removes not only the stored data but also the servo control data, and without the servo data the device is no longer able to determine where data is to be read or written on the magnetic medium. The servo data must be rewritten to become usable again; with modern hard drives, this is generally not possible without manufacturer-specific and often model-specific service equipment. Tape recorders In tape recorders such as reel-to-reel and compact cassette audio tape recorders, remnant magnetic fields will over time gather on metal parts such as guide posts tape heads. These are points that come into contact with the magnetic tape. The remnant fields can cause an increase in audible background noise during playback. Cheap, handheld consumer degaussers can significantly reduce this effect. Types of degaussers Degaussers range in size from small ones used in offices for erasing magnetic data storage devices to industrial-size degaussers for use on piping, ships, submarines, and other large-sized items, equipment to vehicles. Rating and categorizing degaussers depends on the strength of the magnetic field the degausser generates, the method of generating a magnetic field in the degausser, the type of operations the degausser is suitable for, the working rate of the degausser based on whether it is a high volume degausser or a low volume degausser, and mobility of the degausser among others. From these criteria of rating and categorization, there are thus electromagnetic degaussers, permanent magnet degaussers as the main types of degaussers. 
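The decaying alternating field used in AC erasure, and in the CRT degauss circuits described earlier, can be sketched numerically. The following Python snippet is only an illustration of the principle; the oscillation frequency, duration and decay rate are assumed example values, not specifications of any real degausser.

import math

frequency = 50.0   # oscillation frequency of the coil current, Hz (assumed)
decay = 5.0        # exponential decay rate of the amplitude, 1/s (assumed)

def degauss_field(t, peak=1.0):
    """Relative field strength at time t: a sinusoid with a shrinking envelope."""
    return peak * math.exp(-decay * t) * math.sin(2 * math.pi * frequency * t)

# A sample of the instantaneous field early in the cycle, while it is still strong:
print(f"field at t = 0.005 s: {degauss_field(0.005):+.3f}")

# Show the envelope collapsing toward zero over one second; by the end the
# remaining field is far too weak to realign the magnetic domains.
for t in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"t = {t:.1f} s  peak field remaining = {math.exp(-decay * t):.3f}")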
Electromagnetic degaussers An electromagnetic degausser passes an electrical charge through a degaussing coil to generate a magnetic field. Sub-types of electromagnetic degaussers are several such as Rotating Coil Degaussers and Pulse Demagnetization Technology degaussers since the technologies used in the degaussers are often developed and patented by respective manufacturing companies such as Verity Systems and Maurer Magnetic among others, so that the degausser is suitable for its intended use. Electromagnetic degaussers generate strong magnetic fields, and have a high rate of work. Rotating coil degausser Performance of a degaussing machine is the major determinant of the effectiveness of degaussing magnetic data storage media. Effectiveness does not improve when the media passes through the same degaussing magnetic field more than once. Rotating the media by 90 degrees improves effectiveness of degaussing the media. One magnetic media degaussers’ manufacturer, Verity Systems, has used this principle in a rotating coil technique they developed. Their rotating coil degausser passes the magnetic data storage media being erased through a magnetic field generated using two coils in the degaussing machine with the media on a variable-speed conveyor belt. The two coils generating a magnetic field are rotating; with one coil positioned above the media and the other coil positioned below the media. Pulse degaussing Pulse degaussing technology involves the cyclic application of electric current for a fraction of a second to the coil being used to generate a magnetic field in the degausser. The process starts with the maximum voltage applied and held for only a fraction of a second to avoid overheating the coil, and then the voltages applied in subsequent seconds are reduced in sequence at varying differences until no current is applied to the coil. Pulse degaussing saves on energy costs, produces high magnetic field strength, is suitable for degaussing large assemblies, and is reliable due to zero-error degaussing achievement. Permanent magnet degausser Permanent magnet degaussers use magnets made using rare earth materials. They do not require electricity for their operation. Permanent magnet degaussers require adequate shielding of the magnetic field they constantly have to prevent unintended degaussing. The need for shielding usually results in permanent magnet degaussers being bulky. When small-sized, permanent magnet degaussers are suited for use as mobile degaussers. See also Data remanence References External links Guide to degaussing TVs https://degaussing-101.com/what-is-degaussing/ Computer storage media Magnetic data storage Magnetic hysteresis British inventions
Degaussing
[ "Physics", "Materials_science" ]
3,433
[ "Physical phenomena", "Hysteresis", "Magnetic hysteresis" ]
1,105,907
https://en.wikipedia.org/wiki/Spark-gap%20transmitter
A spark-gap transmitter is an obsolete type of radio transmitter which generates radio waves by means of an electric spark. Spark-gap transmitters were the first type of radio transmitter, and were the main type used during the wireless telegraphy or "spark" era, the first three decades of radio, from 1887 to the end of World War I. German physicist Heinrich Hertz built the first experimental spark-gap transmitters in 1887, with which he proved the existence of radio waves and studied their properties. A fundamental limitation of spark-gap transmitters is that they generate a series of brief transient pulses of radio waves called damped waves; they are unable to produce the continuous waves used to carry audio (sound) in modern AM or FM radio transmission. So spark-gap transmitters could not transmit audio, and instead transmitted information by radiotelegraphy; the operator switched the transmitter on and off with a telegraph key, creating pulses of radio waves to spell out text messages in Morse code. The first practical spark gap transmitters and receivers for radiotelegraphy communication were developed by Guglielmo Marconi around 1896. One of the first uses for spark-gap transmitters was on ships, to communicate with shore and broadcast a distress call if the ship was sinking. They played a crucial role in maritime rescues such as the 1912 RMS Titanic disaster. After World War I, vacuum tube transmitters were developed, which were less expensive and produced continuous waves which had a greater range, produced less interference, and could also carry audio, making spark transmitters obsolete by 1920. The radio signals produced by spark-gap transmitters are electrically "noisy"; they have a wide bandwidth, creating radio frequency interference (RFI) that can disrupt other radio transmissions. This type of radio emission has been prohibited by international law since 1934. Theory of operation Electromagnetic waves are radiated by electric charges when they are accelerated. Radio waves, electromagnetic waves of radio frequency, can be generated by time-varying electric currents, consisting of electrons flowing through a conductor which suddenly change their velocity, thus accelerating. An electrically charged capacitance discharged through an electric spark across a spark gap between two conductors was the first device known which could generate radio waves. The spark itself doesn't produce the radio waves, it merely serves as a fast acting switch to excite resonant radio frequency oscillating electric currents in the conductors of the attached circuit. The conductors radiate the energy in this oscillating current as radio waves. Due to the inherent inductance of circuit conductors, the discharge of a capacitor through a low enough resistance (such as a spark) is oscillatory; the charge flows rapidly back and forth through the spark gap for a brief period, charging the conductors on each side alternately positive and negative, until the oscillations die away. A practical spark gap transmitter consists of these parts: A high-voltage transformer, to transform the low-voltage electricity from the power source, a battery or electric outlet, to a high enough voltage (from a few kilovolts to 75-100 kilovolts in powerful transmitters) to jump across the spark gap. The transformer charges the capacitor. In low-power transmitters powered by batteries this was usually an induction coil (Ruhmkorff coil). 
One or more resonant circuits (tuned circuits or tank circuits) which create radio frequency electrical oscillations when excited by the spark. A resonant circuit consists of a capacitor (in early days a type called a Leyden jar) which stores high-voltage electricity from the transformer, and a coil of wire called an inductor or tuning coil, connected together. The values of the capacitance and inductance determine the frequency of the radio waves produced. The earliest spark-gap transmitters before 1897 did not have a resonant circuit; the antenna performed this function, acting as a resonator. However, this meant that the electromagnetic energy produced by the transmitter was dissipated across a wide band, thereby limiting its effective range to a few kilometers at most. Most spark transmitters had two resonant circuits coupled together with an air core transformer called a resonant transformer or oscillation transformer. This was called an inductively-coupled transmitter. The spark gap and capacitor connected to the primary winding of the transformer made one resonant circuit, which generated the oscillating current. The oscillating current in the primary winding created an oscillating magnetic field that induced current in the secondary winding. The antenna and ground were connected to the secondary winding. The capacitance of the antenna resonated with the secondary winding to make a second resonant circuit. The two resonant circuits were tuned to the same resonant frequency. The advantage of this circuit was that the oscillating current persisted in the antenna circuit even after the spark stopped, creating long, ringing, lightly damped waves, in which the energy was concentrated in a narrower bandwidth, creating less interference to other transmitters. A spark gap which acts as a voltage-controlled switch in the resonant circuit, discharging the capacitor through the coil. An antenna, a metal conductor such as an elevated wire, that radiates the power in the oscillating electric currents from the resonant circuit into space as radio waves. A telegraph key to switch the transmitter on and off to communicate messages by Morse code Operation cycle The transmitter works in a rapid repeating cycle in which the capacitor is charged to a high voltage by the transformer and discharged through the coil by a spark across the spark gap. The impulsive spark excites the resonant circuit to "ring" like a bell, producing a brief oscillating current which is radiated as electromagnetic waves by the antenna. The transmitter repeats this cycle at a rapid rate, so the spark appeared continuous, and the radio signal sounded like a whine or buzz in a radio receiver. The cycle begins when current from the transformer charges up the capacitor, storing positive electric charge on one of its plates and negative charge on the other. While the capacitor is charging the spark gap is in its nonconductive state, preventing the charge from escaping through the coil. When the voltage on the capacitor reaches the breakdown voltage of the spark gap, the air in the gap ionizes, starting an electric spark, reducing its resistance to a very low level (usually less than one ohm). This closes the circuit between the capacitor and the coil. The charge on the capacitor discharges as a current through the coil and spark gap. 
Due to the inductance of the coil, when the capacitor voltage reaches zero the current doesn't stop but keeps flowing, charging the capacitor plates with an opposite polarity, until the charge is stored in the capacitor again, on the opposite plates. Then the process repeats, with the charge flowing in the opposite direction through the coil. This continues, resulting in oscillating currents flowing rapidly back and forth between the plates of the capacitor through the coil and spark gap. The resonant circuit is connected to the antenna, so these oscillating currents also flow in the antenna, charging and discharging it. The current creates an oscillating magnetic field around the antenna, while the voltage creates an oscillating electric field. These oscillating fields radiate away from the antenna into space as an electromagnetic wave; a radio wave. The energy in the resonant circuit is limited to the amount of energy originally stored in the capacitor. The radiated radio waves, along with the heat generated by the spark, use up this energy, causing the oscillations to decrease quickly in amplitude to zero. When the oscillating electric current in the primary circuit has decreased to a point where it is insufficient to keep the air in the spark gap ionized, the spark stops, opening the resonant circuit, and stopping the oscillations. In a transmitter with two resonant circuits, the oscillations in the secondary circuit and antenna may continue some time after the spark has terminated. Then the transformer begins charging the capacitor again, and the whole cycle repeats. The cycle is very rapid, taking less than a millisecond. With each spark, this cycle produces a radio signal consisting of an oscillating sinusoidal wave that increases rapidly to a high amplitude and decreases exponentially to zero, called a damped wave. The frequency of the oscillations, which is the frequency of the emitted radio waves, is equal to the resonant frequency of the resonant circuit, determined by the capacitance of the capacitor and the inductance of the coil: $f = \frac{1}{2\pi\sqrt{LC}}$, where L is the inductance in henries, C is the capacitance in farads, and f is the frequency in hertz. The transmitter repeats this cycle rapidly, so the output is a repeating string of damped waves. This is equivalent to a radio signal amplitude modulated with a steady frequency, so it could be demodulated in a radio receiver by a rectifying AM detector, such as the crystal detector or Fleming valve used during the wireless telegraphy era. The frequency of repetition (spark rate) is in the audio range, typically 50 to 1000 sparks per second, so in a receiver's earphones the signal sounds like a steady tone, whine, or buzz. In order to transmit information with this signal, the operator turns the transmitter on and off rapidly by tapping on a switch called a telegraph key in the primary circuit of the transformer, producing sequences of short (dot) and long (dash) strings of damped waves, to spell out messages in Morse code. As long as the key is pressed the spark gap fires repetitively, creating a string of pulses of radio waves, so in a receiver the keypress sounds like a buzz; the entire Morse code message sounds like a sequence of buzzes separated by pauses. In low-power transmitters the key directly breaks the primary circuit of the supply transformer, while in high-power transmitters the key operates a heavy duty relay that breaks the primary circuit.
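As a rough illustration of the resonant-frequency relationship and the damped wave described above, the following Python sketch computes the frequency for one set of example component values and samples a single damped pulse. The inductance, capacitance and decay rate are arbitrary illustrative assumptions, not values taken from any historical transmitter.

import math

# Resonant frequency of the tank circuit (Thomson formula f = 1/(2*pi*sqrt(L*C))).
L = 20e-6   # inductance of the tuning coil, henries (assumed example value)
C = 2e-9    # capacitance of the capacitor, farads (assumed example value)
f = 1.0 / (2 * math.pi * math.sqrt(L * C))
print(f"resonant frequency: {f / 1e3:.0f} kHz")   # roughly 800 kHz for these values

# One damped wave: a sinusoid at frequency f whose amplitude decays
# exponentially after the spark excites the circuit.
decay = 2e5   # amplitude decay rate, 1/s (assumed)
damped_wave = [math.exp(-decay * t) * math.sin(2 * math.pi * f * t)
               for t in (n * 5e-8 for n in range(400))]   # 20 microseconds, 50 ns steps
print("peak of pulse:", round(max(damped_wave), 3))
print("amplitude near end of pulse:", round(abs(damped_wave[-1]), 5))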
Charging circuit and spark rate The circuit which charges the capacitors, along with the spark gap itself, determines the spark rate of the transmitter, the number of sparks and resulting damped wave pulses it produces per second, which determines the tone of the signal heard in the receiver. The spark rate should not be confused with the frequency of the transmitter, which is the number of sinusoidal oscillations per second in each damped wave. Since the transmitter produces one pulse of radio waves per spark, the output power of the transmitter was proportional to the spark rate, so higher rates were favored. Spark transmitters generally used one of three types of power circuits: Induction coil An induction coil (Ruhmkorff coil) was used in low-power transmitters, usually less than 500 watts, often battery-powered. An induction coil is a type of transformer powered by DC, in which a vibrating arm switch contact on the coil called an interrupter repeatedly breaks the circuit that provides current to the primary winding, causing the coil to generate pulses of high voltage. When the primary current to the coil is turned on, the primary winding creates a magnetic field in the iron core which pulls the springy interrupter arm away from its contact, opening the switch and cutting off the primary current. Then the magnetic field collapses, creating a pulse of high voltage in the secondary winding, and the interrupter arm springs back to close the contact again, and the cycle repeats. Each pulse of high voltage charged up the capacitor until the spark gap fired, resulting in one spark per pulse. Interrupters were limited to low spark rates of 20–100 Hz, sounding like a low buzz in the receiver. In powerful induction coil transmitters, instead of a vibrating interrupter, a mercury turbine interrupter was used. This could break the current at rates up to several thousand hertz, and the rate could be adjusted to produce the best tone. AC transformer In higher power transmitters powered by AC, a transformer steps the input voltage up to the high voltage needed. The sinusoidal voltage from the transformer is applied directly to the capacitor, so the voltage on the capacitor varies from a high positive voltage, to zero, to a high negative voltage. The spark gap is adjusted so sparks only occur near the maximum voltage, at peaks of the AC sine wave, when the capacitor was fully charged. Since the AC sine wave has two peaks per cycle, ideally two sparks occurred during each cycle, so the spark rate was equal to twice the frequency of the AC power (often multiple sparks occurred during the peak of each half cycle). The spark rate of transmitters powered by 50 or 60 Hz mains power was thus 100 or 120 Hz. However higher audio frequencies cut through interference better, so in many transmitters the transformer was powered by a motor–alternator set, an electric motor with its shaft turning an alternator, that produced AC at a higher frequency, usually 500 Hz, resulting in a spark rate of 1000 Hz. Quenched spark gap The speed at which signals may be transmitted is naturally limited by the time taken for the spark to be extinguished. If, as described above, the conductive plasma does not, during the zero points of the alternating current, cool enough to extinguish the spark, a 'persistent spark' is maintained until the stored energy is dissipated, permitting practical operation only up to around 60 signals per second. 
If active measures are taken to break the arc (either by blowing air through the spark or by lengthening the spark gap), a much shorter "quenched spark" may be obtained. A simple quenched spark system still permits several oscillations of the capacitor circuit in the time taken for the spark to be quenched. With the spark circuit broken, the transmission frequency is solely determined by the antenna resonant circuit, which permits simpler tuning. Rotary spark gap In a transmitter with a "rotary" spark gap (below), the capacitor was charged by AC from a high-voltage transformer as above, and discharged by a spark gap consisting of electrodes spaced around a wheel which was spun by an electric motor, which produced sparks as they passed by a stationary electrode. The spark rate was equal to the rotations per second times the number of spark electrodes on the wheel. It could produce spark rates up to several thousand hertz, and the rate could be adjusted by changing the speed of the motor. The rotation of the wheel was usually synchronized to the AC sine wave so the moving electrode passed by the stationary one at the peak of the sine wave, initiating the spark when the capacitor was fully charged, which produced a musical tone in the receiver. When tuned correctly in this manner, the need for external cooling or quenching airflow was eliminated, as was the loss of power directly from the charging circuit (parallel to the capacitor) through the spark. History The invention of the radio transmitter resulted from the convergence of two lines of research. One was efforts by inventors to devise a system to transmit telegraph signals without wires. Experiments by a number of inventors had shown that electrical disturbances could be transmitted short distances through the air. However most of these systems worked not by radio waves but by electrostatic induction or electromagnetic induction, which had too short a range to be practical. In 1866 Mahlon Loomis claimed to have transmitted an electrical signal through the atmosphere between two 600 foot wires held aloft by kites on mountaintops 14 miles apart. Thomas Edison had come close to discovering radio in 1875; he had generated and detected radio waves which he called "etheric currents" experimenting with high-voltage spark circuits, but due to lack of time did not pursue the matter. David Edward Hughes in 1879 had also stumbled on radio wave transmission which he received with his carbon microphone detector, however he was persuaded that what he observed was induction. Neither of these individuals are usually credited with the discovery of radio, because they did not understand the significance of their observations and did not publish their work before Hertz. The other was research by physicists to confirm the theory of electromagnetism proposed in 1864 by Scottish physicist James Clerk Maxwell, now called Maxwell's equations. Maxwell's theory predicted that a combination of oscillating electric and magnetic fields could travel through space as an "electromagnetic wave". Maxwell proposed that light consisted of electromagnetic waves of short wavelength, but no one knew how to confirm this, or generate or detect electromagnetic waves of other wavelengths. By 1883 it was theorized that accelerated electric charges could produce electromagnetic waves, and George Fitzgerald had calculated the output power of a loop antenna. 
Fitzgerald in a brief note published in 1883 suggested that electromagnetic waves could be generated practically by discharging a capacitor rapidly; the method used in spark transmitters, however there is no indication that this inspired other inventors. The division of the history of spark transmitters into the different types below follows the organization of the subject used in many wireless textbooks. Hertzian oscillators German physicist Heinrich Hertz in 1887 built the first experimental spark gap transmitters during his historic experiments to demonstrate the existence of electromagnetic waves predicted by James Clerk Maxwell in 1864, in which he discovered radio waves, which were called "Hertzian waves" until about 1910. Hertz was inspired to try spark excited circuits by experiments with "Reiss spirals", a pair of flat spiral inductors with their conductors ending in spark gaps. A Leyden jar capacitor discharged through one spiral, would cause sparks in the gap of the other spiral. See circuit diagram. Hertz's transmitters consisted of a dipole antenna made of a pair of collinear metal rods of various lengths with a spark gap (S) between their inner ends and metal balls or plates for capacitance (C) attached to the outer ends. The two sides of the antenna were connected to an induction coil (Ruhmkorff coil) (T) a common lab power source which produced pulses of high voltage, 5 to 30 kV. In addition to radiating the waves, the antenna also acted as a harmonic oscillator (resonator) which generated the oscillating currents. High-voltage pulses from the induction coil (T) were applied between the two sides of the antenna. Each pulse stored electric charge in the capacitance of the antenna, which was immediately discharged by a spark across the spark gap. The spark excited brief oscillating standing waves of current between the sides of the antenna. The antenna radiated the energy as a momentary pulse of radio waves; a damped wave. The frequency of the waves was equal to the resonant frequency of the antenna, which was determined by its length; it acted as a half-wave dipole, which radiated waves roughly twice the length of the antenna (for example a dipole 1 meter long would generate 150 MHz radio waves). Hertz detected the waves by observing tiny sparks in micrometer spark gaps (M) in loops of wire which functioned as resonant receiving antennas. Oliver Lodge was also experimenting with spark oscillators at this time and came close to discovering radio waves before Hertz, but his focus was on waves on wires, not in free space. Hertz and the first generation of physicists who built these "Hertzian oscillators", such as Jagadish Chandra Bose, Lord Rayleigh, George Fitzgerald, Frederick Trouton, Augusto Righi and Oliver Lodge, were mainly interested in radio waves as a scientific phenomenon, and largely failed to foresee its possibilities as a communication technology. Due to the influence of Maxwell's theory, their thinking was dominated by the similarity between radio waves and light waves; they thought of radio waves as an invisible form of light. By analogy with light, they assumed that radio waves only traveled in straight lines, so they thought radio transmission was limited by the visual horizon like existing optical signalling methods such as semaphore, and therefore was not capable of longer distance communication. As late as 1894 Oliver Lodge speculated that the maximum distance Hertzian waves could be transmitted was a half mile. 
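The length–frequency relationship quoted above can be checked with a minimal sketch; the 1 m figure is simply the example given in the text, and the ideal half-wave formula ignores the few-percent shortening caused by conductor thickness and end effects.

```python
# Resonant frequency of an ideal half-wave dipole: the antenna is half a
# wavelength long, so wavelength = 2 * length and f = c / (2 * length).
C = 299_792_458.0  # speed of light in m/s

def halfwave_dipole_frequency(length_m: float) -> float:
    return C / (2.0 * length_m)

for length in (1.0, 0.5, 0.1):
    print(f"{length:4.1f} m dipole -> {halfwave_dipole_frequency(length) / 1e6:7.1f} MHz")
# A 1 m dipole comes out near 150 MHz, matching the example in the text; the
# shorter antennas used by the early experimenters push into UHF and beyond.
```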
To investigate the similarity between radio waves and light waves, these researchers concentrated on producing short wavelength high-frequency waves with which they could duplicate classic optics experiments with radio waves, using quasioptical components such as prisms and lenses made of paraffin wax, sulfur, and pitch and wire diffraction gratings. Their short antennas generated radio waves in the VHF, UHF, or microwave bands. In his various experiments, Hertz produced waves with frequencies from 50 to 450 MHz, roughly the frequencies used today by broadcast television transmitters. Hertz used them to perform historic experiments demonstrating standing waves, refraction, diffraction, polarization and interference of radio waves. He also measured the speed of radio waves, showing they traveled at the same speed as light. These experiments established that light and radio waves were both forms of Maxwell's electromagnetic waves, differing only in frequency. Augusto Righi and Jagadish Chandra Bose around 1894 generated microwaves of 12 and 60 GHz respectively, using small metal balls as resonator-antennas. The high frequencies produced by Hertzian oscillators could not travel beyond the horizon. The dipole resonators also had low capacitance and couldn't store much charge, limiting their power output. Therefore, these devices were not capable of long distance transmission; their reception range with the primitive receivers employed was typically limited to roughly 100 yards (100 meters). Non-syntonic transmitters Italian radio pioneer Guglielmo Marconi was one of the first people to believe that radio waves could be used for long distance communication, and singlehandedly developed the first practical radiotelegraphy transmitters and receivers, mainly by combining and tinkering with the inventions of others. Starting at age 21 on his family's estate in Italy, between 1894 and 1901 he conducted a long series of experiments to increase the transmission range of Hertz's spark oscillators and receivers. He was unable to communicate beyond a half-mile until 1895, when he discovered that the range of transmission could be increased greatly by replacing one side of the Hertzian dipole antenna in his transmitter and receiver with a connection to Earth and the other side with a long wire antenna suspended high above the ground. These antennas functioned as quarter-wave monopole antennas. The length of the antenna determined the wavelength of the waves produced and thus their frequency. Longer, lower frequency waves have less attenuation with distance. As Marconi tried longer antennas, which radiated lower frequency waves, probably in the MF band around 2 MHz, he found that he could transmit further. Another advantage was that these vertical antennas radiated vertically polarized waves, instead of the horizontally polarized waves produced by Hertz's horizontal antennas. These longer vertically polarized waves could travel beyond the horizon, because they propagated as a ground wave that followed the contour of the Earth. Under certain conditions they could also reach beyond the horizon by reflecting off layers of charged particles (ions) in the upper atmosphere, later called skywave propagation. Marconi did not understand any of this at the time; he simply found empirically that the higher his vertical antenna was suspended, the further it would transmit. 
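The same wavelength arithmetic applies to the grounded vertical antennas described above, which behave roughly as quarter-wave monopoles; the heights below are assumed round numbers rather than measurements of Marconi's actual aerials.

```python
# A grounded vertical (quarter-wave monopole) resonates when its height is
# about a quarter wavelength, so wavelength ~ 4 * height and f ~ c / (4 * h).
# Capacitive top loading lowers the real resonant frequency; this ignores it.
C = 299_792_458.0  # m/s

def quarterwave_monopole_frequency(height_m: float) -> float:
    return C / (4.0 * height_m)

for height_m in (10.0, 37.0, 100.0):             # assumed example heights
    f = quarterwave_monopole_frequency(height_m)
    print(f"{height_m:5.1f} m vertical -> {f / 1e6:6.2f} MHz")
# A wire roughly 37 m high resonates near 2 MHz, consistent with the MF-band
# estimate above, and taller antennas push the frequency still lower.
```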
After failing to interest the Italian government, in 1896 Marconi moved to England, where William Preece of the British General Post Office funded his experiments. Marconi applied for a patent on his radio system 2 June 1896, often considered the first wireless patent. In May 1897 he transmitted 14 km (8.7 miles), on 27 March 1899 he transmitted across the English Channel, 46 km (28 miles), in fall 1899 he extended the range to 136 km (85 miles), and by January 1901 he had reached 315 km (196 miles). These demonstrations of wireless Morse code communication at increasingly long distances convinced the world that radio, or "wireless telegraphy" as it was called, was not just a scientific curiosity but a commercially useful communication technology. In 1897 Marconi started a company to produce his radio systems, which became the Marconi Wireless Telegraph Company, and radio communication began to be used commercially around 1900. His first large contract in 1901 was with the insurance firm Lloyd's of London to equip their ships with wireless stations. Marconi's company dominated marine radio throughout the spark era. Inspired by Marconi, in the late 1890s other researchers also began developing competing spark radio communication systems: Alexander Popov in Russia, Eugène Ducretet in France, Reginald Fessenden and Lee de Forest in America, and Karl Ferdinand Braun, Adolf Slaby, and Georg von Arco in Germany, who in 1903 formed the Telefunken Co., Marconi's chief rival. Disadvantages The primitive transmitters prior to 1897 had no resonant circuits (also called LC circuits, tank circuits, or tuned circuits); the spark gap was in the antenna, which functioned as the resonator to determine the frequency of the radio waves. These were called "unsyntonized" or "plain antenna" transmitters. The average power output of these transmitters was low, because, due to its low capacitance, the antenna was a highly damped oscillator (in modern terminology, it had very low Q factor). During each spark the energy stored in the antenna was quickly radiated away as radio waves, so the oscillations decayed to zero quickly. The radio signal consisted of brief pulses of radio waves, repeating tens or at most a few hundreds of times per second, separated by comparatively long intervals of no output. The power radiated was dependent on how much electric charge could be stored in the antenna before each spark, which was proportional to the capacitance of the antenna. To increase their capacitance to ground, antennas were made with multiple parallel wires, often with capacitive toploads, in the "harp", "cage", "umbrella", "inverted-L", and "T" antennas characteristic of the "spark" era. The only other way to increase the energy stored in the antenna was to charge it up to very high voltages. However the voltage that could be used was limited to about 100 kV by corona discharge, which caused charge to leak off the antenna, particularly in wet weather, and also by energy lost as heat in the longer spark. A more significant drawback of the large damping was that the radio transmissions were electrically "noisy"; they had a very large bandwidth. These transmitters did not produce waves of a single frequency, but a continuous band of frequencies. They were essentially radio noise sources radiating energy over a large part of the radio spectrum, which made it impossible for other transmitters to be heard.
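The energy-storage limit described above can be put into rough numbers with the familiar capacitor-energy formula E = ½CV²; the antenna capacitance and spark rate below are assumptions chosen only for illustration.

```python
# Energy stored in the antenna capacitance before each spark, and the average
# power delivered to the antenna at a given spark rate. The capacitance and
# spark rate are assumed illustrative values, and not all of this energy is
# actually radiated.

def spark_energy_j(capacitance_f: float, voltage_v: float) -> float:
    return 0.5 * capacitance_f * voltage_v ** 2

C_ANTENNA = 1e-9    # assumed ~1 nF for a large multi-wire flattop antenna
V_MAX = 100e3       # ~100 kV, the practical corona-discharge limit noted above
SPARK_RATE = 100.0  # assumed sparks per second

energy = spark_energy_j(C_ANTENNA, V_MAX)
print(f"energy per spark : {energy:.1f} J")
print(f"average power    : {energy * SPARK_RATE:.0f} W")
# With the voltage capped by corona, the only way to raise the power is more
# capacitance, which is what the separate capacitor of the later resonant
# circuits provided.
```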
When multiple transmitters attempted to operate in the same area, their broad signals overlapped in frequency and interfered with each other. The radio receivers used also had no resonant circuits, so they had no way of selecting one signal from others besides the broad resonance of the antenna, and responded to the transmissions of all transmitters in the vicinity. An example of this interference problem was an embarrassing public debacle in August 1901 when Marconi, Lee de Forest, and G. W. Pickard attempted to report the New York Yacht Race to newspapers from ships with their untuned spark transmitters. The Morse code transmissions interfered, and the reporters on shore failed to receive any information from the garbled signals. Syntonic transmitters It became clear that for multiple transmitters to operate, some system of "selective signaling" had to be devised to allow a receiver to select which transmitter's signal to receive, and reject the others. In 1892 William Crookes had given an influential lecture on radio in which he suggested using resonance (then called syntony) to reduce the bandwidth of transmitters and receivers. Using a resonant circuit (also called tuned circuit or tank circuit) in transmitters would narrow the bandwidth of the radiated signal; it would occupy a smaller range of frequencies around its center frequency, so that the signals of transmitters "tuned" to transmit on different frequencies would no longer overlap. A receiver which had its own resonant circuit could receive a particular transmitter by "tuning" its resonant frequency to the frequency of the desired transmitter, analogously to the way one musical instrument could be tuned to resonance with another. This is the system used in all modern radio. During the period 1897 to 1900 wireless researchers realized the advantages of "syntonic" or "tuned" systems, and added capacitors (Leyden jars) and inductors (coils of wire) to transmitters and receivers, to make resonant circuits (tuned circuits, or tank circuits). Oliver Lodge, who had been researching electrical resonance for years, patented the first "syntonic" transmitter and receiver in May 1897. Lodge added an inductor (coil) between the sides of his dipole antennas, which resonated with the capacitance of the antenna to make a tuned circuit. Although his complicated circuit did not see much practical use, Lodge's "syntonic" patent was important because it was the first to propose a radio transmitter and receiver containing resonant circuits which were tuned to resonance with each other. In 1911, when the patent was renewed, the Marconi Company was forced to buy it to protect its own syntonic system against infringement suits. The resonant circuit functioned analogously to a tuning fork, storing oscillating electrical energy, increasing the Q factor of the circuit so the oscillations were less damped. Another advantage was that the frequency of the transmitter was no longer determined by the length of the antenna but by the resonant circuit, so it could easily be changed by adjustable taps on the coil. The antenna was brought into resonance with the tuned circuit using loading coils. The energy in each spark, and thus the power output, was no longer limited by the capacitance of the antenna but by the size of the capacitor in the resonant circuit. In order to increase the power, very large capacitor banks were used. The form that the resonant circuit took in practical transmitters was the inductively-coupled circuit described in the next section.
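A minimal sketch of the tuning described above: the transmit frequency follows the usual resonance relation f = 1/(2π√(LC)), so moving a tap on the coil (changing L) retunes the set; the component values are assumed round numbers, not taken from any historical transmitter.

```python
import math

# Resonant frequency of a tank (LC) circuit: f = 1 / (2 * pi * sqrt(L * C)).

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

C_TANK = 2e-9  # assumed 2 nF Leyden-jar bank
for L in (50e-6, 25e-6, 12.5e-6):      # each tap change halves the inductance
    f = resonant_frequency_hz(L, C_TANK)
    print(f"L = {L * 1e6:5.1f} uH -> f = {f / 1e3:7.1f} kHz")
# Halving L raises the frequency by a factor of sqrt(2); adjustable taps on the
# coil therefore let the operator change the transmitted wavelength without
# altering the antenna.
```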
Inductive coupling In developing these syntonic transmitters, researchers found it impossible to achieve low damping with a single resonant circuit. A resonant circuit can only have low damping (high Q, narrow bandwidth) if it is a "closed" circuit, with no energy dissipating components. But such a circuit does not produce radio waves. A resonant circuit with an antenna radiating radio waves (an "open" tuned circuit) loses energy quickly, giving it high damping (low Q, wide bandwidth). There was a fundamental tradeoff between a circuit which produced persistent oscillations which had narrow bandwidth, and one which radiated high power. The solution found by a number of researchers was to use two resonant circuits in the transmitter, with their coils inductively (magnetically) coupled, making a resonant transformer (called an oscillation transformer); this was called an "inductively coupled", "coupled circuit" or "two circuit" transmitter. See circuit diagram. The primary winding of the oscillation transformer (L1) with the capacitor (C1) and spark gap (S) formed a "closed" resonant circuit which generated the oscillations, while the secondary winding (L2) was connected to the wire antenna (A) and ground, forming an "open" resonant circuit with the capacitance of the antenna (C2). Both circuits were tuned to the same resonant frequency. The advantage of the inductively coupled circuit was that the "loosely coupled" transformer transferred the oscillating energy of the tank circuit to the radiating antenna circuit gradually, creating long "ringing" waves. A second advantage was that it allowed a large primary capacitance (C1) to be used which could store a lot of energy, increasing the power output enormously. Powerful transoceanic transmitters often had huge Leyden jar capacitor banks filling rooms (see pictures above). The receiver in most systems also used two inductively coupled circuits, with the antenna an "open" resonant circuit coupled through an oscillation transformer to a "closed" resonant circuit containing the detector. A radio system with a "two circuit" (inductively coupled) transmitter and receiver was called a "four circuit" system. The first person to use resonant circuits in a radio application was Nikola Tesla, who invented the resonant transformer in 1891. At a March 1893 St. Louis lecture he had demonstrated a wireless system that, although it was intended for wireless power transmission, had many of the elements of later radio communication systems. A grounded capacitance-loaded spark-excited resonant transformer (his Tesla coil) attached to an elevated wire monopole antenna transmitted radio waves, which were received across the room by a similar wire antenna attached to a receiver consisting of a second grounded resonant transformer tuned to the transmitter's frequency, which lighted a Geissler tube. This system, patented by Tesla 2 September 1897, 4 months after Lodge's "syntonic" patent, was in effect an inductively coupled radio transmitter and receiver, the first use of the "four circuit" system claimed by Marconi in his 1900 patent (below). However, Tesla was mainly interested in wireless power and never developed a practical radio communication system. In addition to Tesla's system, inductively coupled radio systems were patented by Oliver Lodge in February 1898, Karl Ferdinand Braun, in November 1899, and John Stone Stone in February 1900. 
Braun made the crucial discovery that low damping required "loose coupling" (reduced mutual inductance) between the primary and secondary coils. Marconi at first paid little attention to syntony, but by 1900 developed a radio system incorporating features from these systems, with a two circuit transmitter and two circuit receiver, with all four circuits tuned to the same frequency, using a resonant transformer he called the "jigger". In spite of the above prior patents, Marconi in his 26 April 1900 "four circuit" or "master tuning" patent on his system claimed rights to the inductively coupled transmitter and receiver. This was granted a British patent, but the US patent office twice rejected his patent as lacking originality. Then in a 1904 appeal a new patent commissioner reversed the decision and granted the patent, on the narrow grounds that Marconi's patent by including an antenna loading coil (J in circuit above) provided the means for tuning the four circuits to the same frequency, whereas in the Tesla and Stone patents this was done by adjusting the length of the antenna. This patent gave Marconi a near monopoly of syntonic wireless telegraphy in England and America. Tesla sued Marconi's company for patent infringement but didn't have the resources to pursue the action. In 1943 the US Supreme Court invalidated the inductive coupling claims of Marconi's patent due to the prior patents of Lodge, Tesla, and Stone, but this came long after spark transmitters had become obsolete. The inductively coupled or "syntonic" spark transmitter was the first type that could communicate at intercontinental distances, and also the first that had sufficiently narrow bandwidth that interference between transmitters was reduced to a tolerable level. It became the dominant type used during the "spark" era. A drawback of the plain inductively coupled transmitter was that unless the primary and secondary coils were very loosely coupled it radiated on two frequencies. This was remedied by the quenched-spark and rotary gap transmitters (below). In recognition of their achievements in radio, Marconi and Braun shared the 1909 Nobel Prize in physics. First transatlantic radio transmission Marconi decided in 1900 to attempt transatlantic communication, which would allow him to dominate Atlantic shipping and compete with submarine telegraph cables. This would require a major scale-up in power, a risky gamble for his company. Up to that time his small induction coil transmitters had an input power of 100 - 200 watts, and the maximum range achieved was around 150 miles. To build the first high power transmitter, Marconi hired an expert in electric power engineering, Prof. John Ambrose Fleming of University College, London, who applied power engineering principles. Fleming designed a complicated inductively-coupled transmitter (see circuit) with two cascaded spark gaps (S1, S2) firing at different rates, and three resonant circuits, powered by a 25 kW alternator (D) turned by a combustion engine. The first spark gap and resonant circuit (S1, C1, T2) generated the high voltage to charge the capacitor (C2) powering the second spark gap and resonant circuit (S2, C2, T3), which generated the output. The spark rate was low, perhaps as low as 2 - 3 sparks per second. Fleming estimated the radiated power was around 10 - 12 kW. The transmitter was built in secrecy on the coast at Poldhu, Cornwall, UK. 
Marconi was pressed for time because Nikola Tesla was building his own transatlantic radiotelegraphy transmitter on Long Island, New York, in a bid to be first (this was the Wardenclyffe Tower, which lost funding and was abandoned unfinished after Marconi's success). Marconi's original round 400-wire transmitting antenna collapsed in a storm 17 September 1901 and he hastily erected a temporary antenna consisting of 50 wires suspended in a fan shape from a cable between two 160 foot poles. The frequency used is not known precisely, as Marconi did not measure wavelength or frequency, but it was between 166 and 984 kHz, probably around 500 kHz. He received the signal on the coast of St. John's, Newfoundland using an untuned coherer receiver with a 400 ft. wire antenna suspended from a kite. Marconi announced the first transatlantic radio transmission took place on 12 December 1901, from Poldhu, Cornwall to Signal Hill, Newfoundland, a distance of 2100 miles (3400 km). Marconi's achievement received worldwide publicity, and was the final proof that radio was a practical communication technology. The scientific community at first doubted Marconi's report. Virtually all wireless experts besides Marconi believed that radio waves traveled in straight lines, so no one (including Marconi) understood how the waves had managed to propagate around the 300 mile high curve of the Earth between Britain and Newfoundland. In 1902 Arthur Kennelly and Oliver Heaviside independently theorized that radio waves were reflected by a layer of ionized atoms in the upper atmosphere, enabling them to return to Earth beyond the horizon. In 1924 Edward V. Appleton demonstrated the existence of this layer, now called the "Kennelly–Heaviside layer" or "E-layer", for which he received the 1947 Nobel Prize in Physics. Knowledgeable sources today doubt whether Marconi actually received this transmission. Ionospheric conditions should not have allowed the signal to be received during the daytime at that range. Marconi knew the Morse code signal to be transmitted was the letter 'S' (three dots). He and his assistant could have mistaken atmospheric radio noise ("static") in their earphones for the clicks of the transmitter. Marconi made many subsequent transatlantic transmissions which clearly establish his priority, but reliable transatlantic communication was not achieved until 1907 with more powerful transmitters. Quenched-spark transmitters The inductively-coupled transmitter had a more complicated output waveform than the non-syntonic transmitter, due to the interaction of the two resonant circuits. The two magnetically coupled tuned circuits acted as a coupled oscillator, producing beats (see top graphs). The oscillating radio frequency energy was passed rapidly back and forth between the primary and secondary resonant circuits as long as the spark continued. Each time the energy returned to the primary, some was lost as heat in the spark. In addition, unless the coupling was very loose the oscillations caused the transmitter to transmit on two separate frequencies. Since the narrow passband of the receiver's resonant circuit could only be tuned to one of these frequencies, the power radiated at the other frequency was wasted. This troublesome backflow of energy to the primary circuit could be prevented by extinguishing (quenching) the spark at the right instant, after all the energy from the capacitors was transferred to the antenna circuit. 
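The two-frequency behaviour described above follows from the standard result for a pair of identical magnetically coupled resonant circuits, which oscillate at roughly f0/√(1+k) and f0/√(1−k) for coupling coefficient k; the sketch below uses assumed values of f0 and k purely for illustration.

```python
import math

# Frequency splitting of two identical coupled resonant circuits.
# f0 and the coupling coefficients are assumed example values.

def split_frequencies(f0_hz: float, k: float):
    return f0_hz / math.sqrt(1.0 + k), f0_hz / math.sqrt(1.0 - k)

F0 = 500e3  # assumed 500 kHz common resonant frequency
for k in (0.2, 0.05, 0.01):
    lo, hi = split_frequencies(F0, k)
    print(f"k = {k:4.2f}: {lo / 1e3:6.1f} kHz and {hi / 1e3:6.1f} kHz "
          f"(split of {100 * (hi - lo) / F0:4.1f}% of f0)")
# Tight coupling (k = 0.2) puts the two emissions roughly 20% apart in
# frequency; very loose coupling nearly merges them, and quenching the spark
# once the energy has moved to the antenna circuit removes the splitting
# entirely, leaving a single frequency.
```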
Inventors tried various methods to accomplish this, such as air blasts and Elihu Thomson's magnetic blowout. In 1906, a new type of spark gap was developed by German physicist Max Wien, called the series or quenched gap. A quenched gap consisted of a stack of wide cylindrical electrodes separated by thin insulating spacer rings to create many narrow spark gaps in series. The wide surface area of the electrodes terminated the ionization in the gap quickly by cooling it after the current stopped. In the inductively coupled transmitter, the narrow gaps extinguished ("quenched") the spark at the first nodal point (Q) when the primary current momentarily went to zero after all the energy had been transferred to the secondary winding (see lower graph). Since without the spark no current could flow in the primary circuit, this effectively uncoupled the secondary from the primary circuit, allowing the secondary resonant circuit and antenna to oscillate completely free of the primary circuit after that (until the next spark). This produced output power centered on a single frequency instead of two frequencies. It also eliminated most of the energy loss in the spark, producing very lightly damped, long "ringing" waves, with decrements of only 0.08 to 0.25 (a Q of 12-38) and consequently a very "pure", narrow bandwidth radio signal. Another advantage was that the rapid quenching allowed the time between sparks to be reduced, allowing higher spark rates of around 1000 Hz to be used, which had a musical tone in the receiver that penetrated radio static better. The quenched gap transmitter was called the "singing spark" system. The German wireless giant Telefunken Co., Marconi's rival, acquired the patent rights and used the quenched spark gap in their transmitters. Rotary gap transmitters A second type of spark gap that had a similar quenching effect was the "rotary gap", invented by Tesla in 1896 and applied to radio transmitters by Reginald Fessenden and others. It consisted of multiple electrodes equally spaced around a disk rotor spun at high speed by a motor, which created sparks as they passed by a stationary electrode. By using the correct motor speed, the rapidly separating electrodes extinguished the spark after the energy had been transferred to the secondary. The rotating wheel also kept the electrodes cooler, important in high-power transmitters. There were two types of rotary spark transmitter: Nonsynchronous: In the earlier rotary gaps, the motor was not synchronized with the frequency of the AC transformer, so the spark occurred at random times in the AC cycle of the voltage applied to the capacitor. The problem with this was that the interval between the sparks was not constant. The voltage on the capacitor when a moving electrode approached the stationary electrode varied randomly between zero and the peak AC voltage. The exact time when the spark started varied depending on the gap length the spark could jump, which depended on the voltage. The resulting random phase variation of successive damped waves resulted in a signal that had a "hissing" or "rasping" sound in the receiver. Synchronous: In this type, invented by Fessenden around 1904, the rotor was turned by a synchronous motor in synchronism with the cycles of the AC voltage to the transformer, so the spark occurred at the same points of the voltage sine wave each cycle. Usually it was designed so there was one spark each half cycle, adjusted so the spark occurred at the peak voltage when the capacitor was fully charged.
Thus the spark had a steady frequency equal to a multiple of the AC line frequency, which created harmonics with the line frequency. The synchronous gap was said to produce a more musical, easily heard tone in the receiver, which cut through interference better. To reduce interference caused by the "noisy" signals of the burgeoning numbers of spark transmitters, the 1912 US Congress "Act to Regulate Radio Communication" required that "the logarithmic decrement per oscillation in the wave trains emitted by the transmitter shall not exceed two tenths" (this is equivalent to a Q factor of 15 or greater). Virtually the only spark transmitters which could satisfy this condition were the quenched-spark and rotary gap types above, and they dominated wireless telegraphy for the rest of the spark era. Marconi's timed spark system In 1912 in his high-power stations Marconi developed a refinement of the rotary discharger called the "timed spark" system, which generated what was probably the nearest to a continuous wave that sparks could produce. He used several identical resonant circuits in parallel, with the capacitors charged by a DC dynamo. These were discharged sequentially by multiple rotary discharger wheels on the same shaft to create overlapping damped waves shifted progressively in time, which were added together in the oscillation transformer so the output was a superposition of damped waves. The speed of the discharger wheel was controlled so that the time between sparks was equal to an integer multiple of the wave period. Therefore, oscillations of the successive wave trains were in phase and reinforced each other. The result was essentially a continuous sinusoidal wave, whose amplitude varied with a ripple at the spark rate. This system was necessary to give Marconi's transoceanic stations a narrow enough bandwidth that they didn't interfere with other transmitters on the narrow VLF band. Timed spark transmitters achieved the longest transmission range of any spark transmitters, but these behemoths represented the end of spark technology. The "spark" era The first application of radio was on ships, to keep in touch with shore, and send out a distress call if the ship were sinking. The Marconi Company built a string of shore stations and in 1904 established the first Morse code distress call, the letters CQD, used until the Second International Radiotelegraphic Convention in 1906 at which SOS was agreed on. The first significant marine rescue due to radiotelegraphy was the 23 January 1909 sinking of the luxury liner RMS Republic, in which 1500 people were saved. Spark transmitters and the crystal receivers used to receive them were simple enough that they were widely built by hobbyists. During the first decades of the 20th century this exciting new high tech hobby attracted a growing community of "radio amateurs", many of them teenage boys, who used their homebuilt sets recreationally to contact distant amateurs and chat with them by Morse code, and relay messages. Low-power amateur transmitters ("squeak boxes") were often built with "trembler" ignition coils from early automobiles such as the Ford Model T. In the US prior to 1912 there was no government regulation of radio, and a chaotic "wild west" atmosphere prevailed, with stations transmitting without regard to other stations on their frequency, and deliberately interfering with each other. 
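The decrement-to-Q equivalences quoted above follow from the lightly damped relation in which the logarithmic decrement is approximately π/Q; a quick check:

```python
import math

# Logarithmic decrement (natural log of the ratio of successive oscillation
# amplitudes) versus Q factor for a lightly damped oscillation: decrement ~ pi / Q.

def q_from_decrement(decrement: float) -> float:
    return math.pi / decrement

for d in (0.25, 0.20, 0.08):
    print(f"decrement {d:4.2f} -> Q ~ {q_from_decrement(d):4.1f}, "
          f"amplitude falls to {math.exp(-d) * 100:.0f}% per cycle")
# The 1912 legal limit of 0.2 therefore corresponds to a Q of roughly 15 or
# more, and quenched-gap decrements of 0.08-0.25 to Q of roughly 12-39.
```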
The expanding numbers of non-syntonic broadband spark transmitters created uncontrolled congestion in the airwaves, interfering with commercial and military wireless stations. The sinking of the RMS Titanic on 14 April 1912 increased public appreciation for the role of radio, but the loss of life brought attention to the disorganized state of the new radio industry, and prompted regulation which corrected some abuses. Although the Titanic radio operator's CQD distress calls summoned the RMS Carpathia, which rescued 705 survivors, the rescue operation was delayed four hours because the nearest ship, the SS Californian, only a few miles away, did not hear the Titanic's call as its radio operator had gone to bed. This was held responsible for most of the 1500 deaths. Existing international regulations required all ships with more than 50 passengers to carry wireless equipment, but after the disaster subsequent regulations mandated that ships have enough radio officers so that a round-the-clock radio watch could be kept. US President Taft and the public heard reports of chaos on the air the night of the disaster, with amateur stations interfering with official naval messages and passing false information. In response Congress passed the 1912 Radio Act, in which licenses were required for all radio transmitters, maximum damping of transmitters was limited to a decrement of 0.2 to get old noisy non-syntonic transmitters off the air, and amateurs were mainly restricted to the unused frequencies above 1.5 MHz and output power of 1 kilowatt. The largest spark transmitters were powerful transoceanic radiotelegraphy stations with input power of 100 - 300 kW. Beginning about 1910, industrial countries built global networks of these stations to exchange commercial and diplomatic telegram traffic with other countries and communicate with their overseas colonies. During World War I, radio became a strategic defensive technology, as it was realized that a nation without long distance radiotelegraph stations could be isolated by an enemy cutting its submarine telegraph cables. Most of these networks were built by the two giant wireless corporations of the age: the British Marconi Company, which constructed the Imperial Wireless Chain to link the possessions of the British Empire, and the German Telefunken Co., which was dominant outside the British Empire. Marconi transmitters used the timed spark rotary discharger, while Telefunken transmitters used its quenched spark gap technology. Paper tape machines were used to transmit Morse code text at high speed. To achieve a maximum range of around 3000 – 6000 miles, transoceanic stations transmitted mainly in the very low frequency (VLF) band, from 50 kHz to as low as 15 – 20 kHz. At these wavelengths even the largest antennas were electrically short, a tiny fraction of a wavelength tall, and so had low radiation resistance (often below 1 ohm), so these transmitters required enormous wire umbrella and flattop antennas up to several miles long with large capacitive toploads, to achieve adequate efficiency. The antenna required a large loading coil at the base, 6 – 10 feet tall, to make it resonant with the transmitter. Continuous waves Although their damping had been reduced as much as possible, spark transmitters still produced damped waves, which due to their large bandwidth caused interference between transmitters. The spark also made a very loud noise when operating, produced corrosive ozone gas, eroded the spark electrodes, and could be a fire hazard.
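Some rough numbers behind the electrically short VLF antennas just described; the mast height is an assumed example, and the last line uses the textbook short-monopole approximation (radiation resistance ≈ 40π²(h/λ)² over perfect ground), which ignores the capacitive top load that raised the figure in practice.

```python
import math

# Wavelength, electrical height, and approximate radiation resistance of a
# short vertical antenna at VLF. Height is an assumed example value.
C = 299_792_458.0  # m/s

def wavelength_m(freq_hz: float) -> float:
    return C / freq_hz

def short_monopole_rr_ohm(height_m: float, freq_hz: float) -> float:
    return 40.0 * math.pi ** 2 * (height_m / wavelength_m(freq_hz)) ** 2

FREQ = 20e3      # 20 kHz, at the low end of the VLF band used by these stations
HEIGHT = 150.0   # assumed 150 m mast

lam = wavelength_m(FREQ)
print(f"wavelength           : {lam / 1000:.1f} km")
print(f"electrical height    : {HEIGHT / lam * 100:.1f}% of a wavelength")
print(f"radiation resistance : {short_monopole_rr_ohm(HEIGHT, FREQ):.3f} ohm")
# Even a 150 m mast is only about 1% of a wavelength at 20 kHz, giving a
# radiation resistance of a few hundredths of an ohm -- hence the miles-long
# capacitively top-loaded flattops and the huge base loading coils.
```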
Despite its drawbacks, most wireless experts believed along with Marconi that the impulsive "whipcrack" of a spark was necessary to produce radio waves that would communicate long distances. From the beginning, physicists knew that another type of waveform, continuous sinusoidal waves (CW), had theoretical advantages over damped waves for radio transmission. Because their energy is essentially concentrated at a single frequency, in addition to causing almost no interference to other transmitters on adjacent frequencies, continuous wave transmitters could transmit longer distances with a given output power. They could also be modulated with an audio signal to carry sound. The problem was no techniques were known for generating them. The efforts described above to reduce the damping of spark transmitters can be seen as attempts to make their output approach closer to the ideal of a continuous wave, but spark transmitters could not produce true continuous waves. Beginning about 1904, continuous wave transmitters were developed using new principles, which competed with spark transmitters. Continuous waves were first generated by two short-lived technologies: The arc converter (Poulsen arc) transmitter, invented by Valdemar Poulsen in 1904 used the negative resistance of a continuous electric arc in a hydrogen atmosphere to excite oscillations in a resonant circuit. The Alexanderson alternator transmitter, developed between 1906 and 1915 by Reginald Fessenden and Ernst Alexanderson, was a huge rotating alternating current generator (alternator) driven by an electric motor at a high enough speed that it produced radio frequency current in the very low frequency range. These transmitters, which could produce power outputs of up to one megawatt, slowly replaced the spark transmitter in high-power radiotelegraphy stations. However spark transmitters remained popular in two way communication stations because most continuous wave transmitters were not capable of a mode called "break in" or "listen in" operation. With a spark transmitter, when the telegraph key was up between Morse symbols the carrier wave was turned off and the receiver was turned on, so the operator could listen for an incoming message. This allowed the receiving station, or a third station, to interrupt or "break in" to an ongoing transmission. In contrast, these early CW transmitters had to operate continuously; the carrier wave was not turned off between Morse code symbols, words, or sentences but just detuned, so a local receiver could not operate as long as the transmitter was powered up. Therefore, these stations could not receive messages until the transmitter was turned off. Obsolescence All these early technologies were superseded by the vacuum tube feedback electronic oscillator, invented in 1912 by Edwin Armstrong and Alexander Meissner, which used the triode vacuum tube invented in 1906 by Lee de Forest. Vacuum tube oscillators were a far cheaper source of continuous waves, and could be easily modulated to carry sound. Due to the development of the first high-power transmitting tubes by the end of World War I, in the 1920s tube transmitters replaced the arc converter and alternator transmitters, as well as the last of the old noisy spark transmitters. The 1927 International Radiotelegraph Convention in Washington, D.C. saw a political battle to finally eliminate spark radio. 
Spark transmitters were obsolete at this point, and broadcast radio audiences and aviation authorities were complaining of the disruption to radio reception that noisy legacy marine spark transmitters were causing. But shipping interests vigorously fought a blanket prohibition on damped waves, due to the capital expenditure that would be required to replace spark equipment that was still being used on older ships. The Convention prohibited licensing of new land spark transmitters after 1929. Damped wave radio emission, called Class B, was banned after 1934 except for emergency use on ships. This loophole allowed shipowners to avoid replacing spark transmitters, which were kept as emergency backup transmitters on ships through World War II. Legacy One legacy of spark-gap transmitters is that radio operators were regularly nicknamed "Sparky" long after the devices ceased to be used. Even today, the German verb funken, literally, "to spark", also means "to send a radio message". The spark gap oscillator was also used in nonradio applications, continuing long after it became obsolete in radio. In the form of the Tesla coil and Oudin coil it was used until the 1940s in the medical field of diathermy for deep body heating. High oscillating voltages of hundreds of thousands of volts at frequencies of 0.1 - 1 MHz from a Tesla coil were applied directly to the patient's body. The treatment was not painful, because currents in the radio frequency range do not cause the physiological reaction of electric shock. In 1926 William T. Bovie discovered that RF currents applied to a scalpel could cut and cauterize tissue in medical operations, and spark oscillators were used as electrosurgery generators or "Bovies" as late as the 1980s. In the 1950s a Japanese toy company, Matsudaya, produced a line of cheap remote control toy trucks, boats and robots called Radicon, which used a low-power spark transmitter in the controller as an inexpensive way to produce the radio control signals. The signals were received in the toy by a coherer receiver. Spark gap oscillators are still used to generate high-frequency high voltage needed to initiate welding arcs in gas tungsten arc welding. Powerful spark gap pulse generators are still used to simulate EMPs. See also History of radio Invention of radio Amateur radio Antique radio Coherer Crystal radio References Further reading External links Alternator, Arc and Spark Fessenden and the Early History of Radio Science Brief history of spark Massie Spark Transmitter The new England Wireless and Steam Museum The Sparks Telegraph Key Review Radio Technology in common use circa 1914 Spark gap transmitter history & operation History of radio technology Radio electronics Electric arcs Electric power conversion Telegraphy
Spark-gap transmitter
[ "Physics", "Engineering" ]
11,608
[ "Plasma phenomena", "Physical phenomena", "Radio electronics", "Electric arcs" ]
1,105,937
https://en.wikipedia.org/wiki/Electrical%20wiring
Electrical wiring is an electrical installation of cabling and associated devices such as switches, distribution boards, sockets, and light fittings in a structure. Wiring is subject to safety standards for design and installation. Allowable wire and cable types and sizes are specified according to the circuit operating voltage and electric current capability, with further restrictions on the environmental conditions, such as ambient temperature range, moisture levels, and exposure to sunlight and chemicals. Associated circuit protection, control, and distribution devices within a building's wiring system are subject to voltage, current, and functional specifications. Wiring safety codes vary by locality, country, or region. The International Electrotechnical Commission (IEC) is attempting to harmonise wiring standards among member countries, but significant variations in design and installation requirements still exist. Wiring methods Materials for wiring interior electrical systems in buildings vary depending on: Intended use and amount of power demand on the circuit Type of occupancy and size of the building National and local regulations Environment in which the wiring must operate. Wiring systems in a single family home or duplex, for example, are simple, with relatively low power requirements, infrequent changes to the building structure and layout, usually with dry, moderate temperature and non-corrosive environmental conditions. In a light commercial environment, more frequent wiring changes can be expected, large apparatus may be installed and special conditions of heat or moisture may apply. Heavy industries have more demanding wiring requirements, such as very large currents and higher voltages, frequent changes of equipment layout, corrosive, or wet or explosive atmospheres. In facilities that handle flammable gases or liquids, special rules may govern the installation and wiring of electrical equipment in hazardous areas. Wires and cables are rated by the circuit voltage, temperature rating and environmental conditions (moisture, sunlight, oil, chemicals) in which they can be used. A wire or cable has a voltage (to neutral) rating and a maximum conductor surface temperature rating. The amount of current a cable or wire can safely carry depends on the installation conditions. The international standard wire sizes are given in the IEC 60228 standard of the International Electrotechnical Commission. In North America, the American Wire Gauge standard for wire sizes is used. Cables Modern wiring materials Modern non-metallic sheathed cables, such as (US and Canadian) Types NMB and NMC, consist of two to four wires covered with thermoplastic insulation, plus a wire for Protective Earthing/Grounding (bonding), surrounded by a flexible plastic jacket. In North America and the UK this conductor is usually bare wire but in the UK it is required that this bare Protective Earth (PE) conductor be sheathed in Green/Yellow insulating tubing where the Cable Sheathing has been removed. Most other jurisdictions now require the Protective Earth conductor to be insulated to the same standard as the current carrying conductors with Green/Yellow insulation. With some cables the individual conductors are wrapped in paper before the plastic jacket is applied. Special versions of non-metallic sheathed cables, such as US Type UF, are designed for direct underground burial (often with separate mechanical protection) or exterior use where exposure to ultraviolet radiation (UV) is a possibility. 
These cables differ in having a moisture-resistant construction, lacking paper or other absorbent fillers, and being formulated for UV resistance. Rubber-like synthetic polymer insulation is used in industrial cables and power cables installed underground because of its superior moisture resistance. Insulated cables are rated by their allowable operating voltage and their maximum operating temperature at the conductor surface. A cable may carry multiple usage ratings for applications, for example, one rating for dry installations and another when exposed to moisture or oil. Generally, single conductor building wire in small sizes is solid wire, since the wiring is not required to be very flexible. Building wire conductors larger than 10 AWG (or about 5 mm2) are stranded for flexibility during installation, but are not sufficiently pliable to use as appliance cord. Cables for industrial, commercial and apartment buildings may contain many insulated conductors in an overall jacket, with helical tape steel or aluminium armour, or steel wire armour, and perhaps as well an overall PVC or lead jacket for protection from moisture and physical damage. Cables intended for very flexible service or in marine applications may be protected by woven bronze wires. Power or communications cables (e.g., computer networking) that are routed in or through air-handling spaces (plenums) of office buildings are required under the model building code to be either encased in metal conduit, or rated for low flame and smoke production. For some industrial uses in steel mills and similar hot environments, no organic material gives satisfactory service. Cables insulated with compressed mica flakes are sometimes used. Another form of high-temperature cable is mineral-insulated cable, with individual conductors placed within a copper tube and the space filled with magnesium oxide powder. The whole assembly is drawn down to smaller sizes, thereby compressing the powder. Such cables have a certified fire resistance rating and are more costly than non–fire-rated cable. They have little flexibility and behave more like rigid conduit rather than flexible cables. The environment of the installed wires determine how much current a cable is permitted to carry. Because multiple conductors bundled in a cable cannot dissipate heat as easily as single insulated conductors, those circuits are always rated at a lower ampacity. Tables in electrical safety codes give the maximum allowable current based on size of conductor, voltage potential, insulation type and thickness, and the temperature rating of the cable itself. The allowable current will also be different for wet or dry locations, for hot (attic) or cool (underground) locations. In a run of cable through several areas, the part with the lowest rating becomes the rating of the overall run. Cables usually are secured with special fittings where they enter electrical apparatus; this may be a simple screw clamp for jacketed cables in a dry location, or a polymer-gasketed cable connector that mechanically engages the armour of an armoured cable and provides a water-resistant connection. Special cable fittings may be applied to prevent explosive gases from flowing in the interior of jacketed cables, where the cable passes through areas where flammable gases are present. To prevent loosening of the connections of individual conductors of a cable, cables must be supported near their entrance to devices and at regular intervals along their runs. 
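A small sketch of the rule stated above that a run of cable takes the ampacity of its most restrictive part; the segment ratings and derating factor are invented illustrative numbers, not values from the NEC, IEC, or any other code.

```python
# The allowable current for a whole run is set by its most restrictive section.
# All ratings and the derating factor below are invented illustrative numbers.

def run_ampacity(segment_ratings_a) -> float:
    """A run's ampacity is the minimum ampacity of any segment along it."""
    return min(segment_ratings_a)

segments = {
    "open cable tray":       30.0,
    "in conduit (bundled)":  24.0,
    "hot attic (derated)":   30.0 * 0.7,   # assumed 0.7 temperature derating
}

for name, rating in segments.items():
    print(f"{name:22s}: {rating:4.1f} A")
print(f"overall run rating    : {run_ampacity(segments.values()):4.1f} A")
# The hot-attic section rates lowest (21 A), so the whole run must be treated
# as a 21 A circuit even though the cable could carry 30 A in open air.
```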
In tall buildings, special designs are required to support the conductors of vertical runs of cable. Generally, only one cable per fitting is permitted, unless the fitting is rated or listed for multiple cables. Special cable constructions and termination techniques are required for cables installed in ships. Such assemblies are subjected to environmental and mechanical extremes. Therefore, in addition to electrical and fire safety concerns, such cables may also be required to be pressure-resistant where they penetrate a vessel's bulkheads. They must also resist corrosion caused by salt water or salt spray, which is accomplished through the use of thicker, specially constructed jackets, and by tinning the individual wire strands. In North American practice, for residential and light commercial buildings fed with a single-phase split 120/240 V service, an overhead cable from a transformer on a power pole is run to the service entrance point. The cable is a three-conductor twisted "triplex" cable with a bare neutral and two insulated conductors, with no overall cable jacket. The neutral conductor is often a supporting "messenger" steel wire, which is used to support the insulated line conductors. Copper conductors Electrical devices often use copper conductors because of their properties, including their high electrical conductivity, tensile strength, ductility, creep resistance, corrosion resistance, thermal conductivity, coefficient of thermal expansion, solderability, resistance to electrical overloads, compatibility with electrical insulators, and ease of installation. Copper is used in many types of electrical wiring. Aluminium conductors Aluminium wire was common in North American residential wiring from the late 1960s to mid-1970s due to the rising cost of copper. Because of its greater resistivity, aluminium wiring requires larger conductors than copper. For instance, instead of 14 AWG (American wire gauge) copper wire, aluminium wiring would need to be 12 AWG on a typical 15 ampere lighting circuit, though local building codes vary. Solid aluminium conductors were originally made in the 1960s from a utility-grade aluminium alloy that had undesirable properties for a building wire, and were used with wiring devices intended for copper conductors. These practices were found to cause defective connections and fire hazards. In the early 1970s new aluminium wire made from one of several special alloys was introduced, and all devices – breakers, switches, receptacles, splice connectors, wire nuts, etc. – were specially designed for the purpose. These newer aluminium wires and special designs address problems with junctions between dissimilar metals, oxidation on metal surfaces, and mechanical effects that occur as different metals expand at different rates with increases in temperature. Unlike copper, aluminium has a tendency to creep or cold-flow under pressure, so older plain steel screw clamped connections could become loose over time. Newer electrical devices designed for aluminium conductors have features intended to compensate for this effect. Unlike copper, aluminium forms an insulating oxide layer on the surface. This is sometimes addressed by coating aluminium conductors with an antioxidant paste (containing zinc dust in a low-residue polybutene base) at joints, or by applying a mechanical termination designed to break through the oxide layer during installation.
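To put numbers on the 14 AWG copper versus 12 AWG aluminium comparison above, the sketch below combines the standard AWG diameter formula with handbook resistivities; it is only an order-of-magnitude check, since real installations are sized from code ampacity tables rather than raw resistance.

```python
import math

# AWG conductor area and DC resistance comparison for copper and aluminium.
# Resistivities are approximate handbook values at 20 degrees C.

RHO_CU = 1.68e-8   # ohm*m, copper
RHO_AL = 2.65e-8   # ohm*m, aluminium

def awg_area_mm2(gauge: int) -> float:
    """Area of a solid AWG conductor; diameter = 0.127 mm * 92 ** ((36 - n) / 39)."""
    d_mm = 0.127 * 92 ** ((36 - gauge) / 39)
    return math.pi / 4.0 * d_mm ** 2

def resistance_per_100m_ohm(rho: float, area_mm2: float) -> float:
    return rho * 100.0 / (area_mm2 * 1e-6)

cu14 = awg_area_mm2(14)
al12 = awg_area_mm2(12)
print(f"14 AWG Cu: {cu14:.2f} mm^2, {resistance_per_100m_ohm(RHO_CU, cu14):.2f} ohm per 100 m")
print(f"12 AWG Al: {al12:.2f} mm^2, {resistance_per_100m_ohm(RHO_AL, al12):.2f} ohm per 100 m")
# Stepping up two gauge sizes gives about 59% more cross-section, which
# roughly cancels aluminium's ~1.6x higher resistivity -- the reason a 15 A
# circuit wired in aluminium uses 12 AWG where copper uses 14 AWG.
```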
Some terminations on wiring devices designed only for copper wire would overheat under heavy current load and cause fires when used with aluminium conductors. Revised standards for wire materials and wiring devices (such as the CO/ALR "copper-aluminium-revised" designation) were developed to reduce these problems. While larger sizes are still used to feed power to electrical panels and large devices, aluminium wiring for residential use has acquired a poor reputation and has fallen out of favour. Aluminium conductors are still heavily used for bulk power transmission, power distribution, and large feeder circuits with heavy current loads, due to the various advantages they offer over copper wiring. Aluminium conductors both cost and weigh less than copper conductors, so a much larger cross sectional area can be used for the same weight and price. This can compensate for the higher resistance and lower mechanical strength of aluminium, meaning the larger cross sectional area is needed to achieve comparable current capacity and other features. Aluminium conductors must be installed with compatible connectors and special care must be taken to ensure the contact surface does not oxidise. Raceways and cable runs Insulated wires may be run in one of several forms between electrical devices. This may be a specialised bendable pipe, called a conduit, or one of several varieties of metal (rigid steel or aluminium) or non-metallic (PVC or HDPE) tubing. Rectangular cross-section metal or PVC wire troughs (North America) or trunking (UK) may be used if many circuits are required. Wires run underground may be run in plastic tubing encased in concrete, but metal elbows may be used in severe pulls. Wiring in exposed areas, for example factory floors, may be run in cable trays or rectangular raceways having lids. Where wiring, or raceways that hold the wiring, must traverse fire-resistance rated walls and floors, the openings are required by local building codes to be firestopped. In cases where safety-critical wiring must be kept operational during an accidental fire, fireproofing must be applied to maintain circuit integrity in a manner to comply with a product's certification listing. The nature and thickness of any passive fire protection materials used in conjunction with wiring and raceways has a quantifiable impact upon the ampacity derating, because the thermal insulation properties needed for fire resistance also inhibit air cooling of power conductors. Cable trays are used in industrial areas where many insulated cables are run together. Individual cables can exit the tray at any point, simplifying the wiring installation and reducing the labour cost for installing new cables. Power cables may have fittings in the tray to maintain clearance between the conductors, but small control wiring is often installed without any intentional spacing between cables. Local electrical regulations may restrict or place special requirements on mixing of voltage levels within one cable tray. Good design practices may segregate, for example, low level measurement or signal cables from trays carrying high power branch circuits, to prevent induction of noise into sensitive circuits. Since wires run in conduits or underground cannot dissipate heat as easily as in open air, and since adjacent circuits contribute induced currents, wiring regulations give rules to establish the current capacity (ampacity). Special sealed fittings are used for wiring routed through potentially explosive atmospheres. 
Bus bars, bus duct, cable bus For very high currents in electrical apparatus, and for high currents distributed through a building, bus bars can be used. (The term "bus" is a contraction of the Latin omnibus – meaning "for all".) Each live ("hot") conductor of such a system is a rigid piece of copper or aluminium, usually in flat bars (but sometimes as tubing or other shapes). Open bus bars are never used in publicly accessible areas, although they are used in manufacturing plants and power company switch yards to gain the benefit of air cooling. A variation is to use heavy cables, especially where it is desirable to transpose or "roll" phases. In industrial applications, conductor bars are often pre-assembled with insulators in grounded enclosures. This assembly, known as bus duct or busway, can be used for connections to large switchgear or for bringing the main power feed into a building. A form of bus duct known as "plug-in bus" is used to distribute power down the length of a building; it is constructed to allow tap-off switches or motor controllers to be installed at designated places along the bus. The big advantage of this scheme is the ability to remove or add a branch circuit without removing voltage from the whole duct. Bus ducts may have all phase conductors in the same enclosure (non-isolated bus), or may have each conductor separated by a grounded barrier from the adjacent phases (segregated bus). For conducting large currents between devices, a cable bus is used. For very large currents in generating stations or substations, where it is difficult to provide circuit protection, an isolated-phase bus is used. Each phase of the circuit is run in a separate grounded metal enclosure. The only fault possible is a phase-to-ground fault, since the enclosures are separated. This type of bus can be rated up to 50,000 amperes and up to hundreds of kilovolts (during normal service, not just for faults), but is not used for building wiring in the conventional sense. Electrical panels Electrical panels are easily accessible junction boxes used to reroute and switch electrical services. The term is often used to refer to circuit breaker panels or fuseboxes. Local codes can specify physical clearance around the panels. Degradation by pests Squirrels, rats, and other rodents may gnaw on unprotected wiring, causing fire and shock hazards. This is especially true of PVC-insulated telephone and computer network cables. Several techniques have been developed to deter these pests, including insulation loaded with pepper dust. Early wiring methods The first interior power wiring systems used conductors that were bare or covered with cloth, which were secured by staples to the framing of the building or on running boards. Where conductors went through walls, they were protected with cloth tape. Splices were done similarly to telegraph connections, and soldered for security. Underground conductors were insulated with wrappings of cloth tape soaked in pitch, and laid in wooden troughs which were then buried. Such wiring systems were unsatisfactory because of the danger of electrocution and fire, plus the high labour cost for such installations. The first electrical codes arose in the 1880s with the commercial introduction of electrical power; however, many conflicting standards existed for the selection of wire sizes and other design rules for electrical installations, and a need was seen to introduce uniformity on the grounds of safety. 
Knob and tube (US) The earliest standardized method of wiring in buildings, in common use in North America from about 1880 to the 1930s, was knob and tube (K&T) wiring: single conductors were run through cavities between the structural members in walls and ceilings, with ceramic tubes forming protective channels through joists and ceramic knobs attached to the structural members to provide air between the wire and the lumber and to support the wires. Since air was free to circulate over the wires, smaller conductors could be used than required in cables. By arranging wires on opposite sides of building structural members, some protection was afforded against short-circuits that can be caused by driving a nail into both conductors simultaneously. By the 1940s, the labor cost of installing two conductors rather than one cable resulted in a decline in new knob-and-tube installations. However, the US code still allows new K&T wiring installations in special situations (some rural and industrial applications). Metal-sheathed wires In the United Kingdom, an early form of insulated cable, introduced in 1896, consisted of two impregnated-paper-insulated conductors in an overall lead sheath. Joints were soldered, and special fittings were used for lamp holders and switches. These cables were similar to underground telegraph and telephone cables of the time. Paper-insulated cables proved unsuitable for interior wiring installations because very careful workmanship was required on the lead sheaths to ensure moisture did not affect the insulation. A system later invented in the UK in 1908 employed vulcanised-rubber insulated wire enclosed in a strip metal sheath. The metal sheath was bonded to each metal wiring device to ensure earthing continuity. A system developed in Germany called "Kuhlo wire" used one, two, or three rubber-insulated wires in a brass or lead-coated iron sheet tube, with a crimped seam. The enclosure could also be used as a return conductor. Kuhlo wire could be run exposed on surfaces and painted, or embedded in plaster. Special outlet and junction boxes were made for lamps and switches, made either of porcelain or sheet steel. The crimped seam was not considered as watertight as the Stannos wire used in England, which had a soldered sheath. A somewhat similar system called "concentric wiring" was introduced in the United States around 1905. In this system, an insulated electrical wire was wrapped with copper tape which was then soldered, forming the grounded (return) conductor of the wiring system. The bare metal sheath, at earth potential, was considered safe to touch. While companies such as General Electric manufactured fittings for the system and a few buildings were wired with it, it was never adopted into the US National Electrical Code. Drawbacks of the system were that special fittings were required, and that any defect in the connection of the sheath would result in the sheath becoming energised. Other historical wiring methods Armored cables with two rubber-insulated conductors in a flexible metal sheath were used as early as 1906, and were considered at the time a better method than open knob-and-tube wiring, although much more expensive. The first rubber-insulated cables for US building wiring were introduced in 1922. These were two or more solid copper electrical wires with rubber insulation, plus woven cotton cloth over each conductor for protection of the insulation, with an overall woven jacket, usually impregnated with tar as a protection from moisture. 
Waxed paper was used as a filler and separator. Over time, rubber-insulated cables become brittle because of exposure to atmospheric oxygen, so they must be handled with care and are usually replaced during renovations. When switches, socket outlets or light fixtures are replaced, the mere act of tightening connections may cause hardened insulation to flake off the conductors. Rubber insulation further inside the cable often is in better condition than the insulation exposed at connections, due to reduced exposure to oxygen. The sulfur in vulcanized rubber insulation attacked bare copper wire so the conductors were tinned to prevent this. The conductors reverted to being bare when rubber ceased to be used. About 1950, PVC insulation and jackets were introduced, especially for residential wiring. About the same time, single conductors with a thinner PVC insulation and a thin nylon jacket (e.g. US Type THN, THHN, etc.) became common. The simplest form of cable has two insulated conductors twisted together to form a unit. Such non-jacketed cables with two (or more) conductors are used only for extra-low voltage signal and control applications such as doorbell wiring. Other methods of securing wiring that are now obsolete include: Re-use of existing gas pipes when converting gas lighting installations to electric lighting. Insulated conductors were pulled through the pipes that had formerly supplied the gas lamps. Although used occasionally, this method risked insulation damage from sharp edges inside the pipe at each joint. Wood mouldings with grooves cut for single conductor wires, covered by a wooden cap strip. These were prohibited in North American electrical codes by 1928. Wooden moulding was also used to some degree in the UK, but was never permitted by German and Austrian rules. A system of flexible twin cords supported by glass or porcelain buttons was used near the turn of the 20th century in Europe, but was soon replaced by other methods. During the first years of the 20th century, various patented forms of wiring system such as Bergman and Peschel tubing were used to protect wiring; these used very thin fibre tubes, or metal tubes which were also used as return conductors. In Austria, wires were concealed by embedding a rubber tube in a groove in the wall, plastering over it, then removing the tube and pulling wires through the cavity. Metal moulding systems, with a flattened oval section consisting of a base strip and a snap-on cap channel, were more costly than open wiring or wooden moulding, but could be easily run on wall surfaces. Similar surface mounted raceway wiring systems are still available today. See also 10603 – a frequently used MIL-SPEC compliant wire Bus duct Cable entry system Cable gland Cable management Cable tray Domestic AC power plugs and sockets Electric power distribution Electrical code Electrical conduit Electrical room Electrical wiring in North America Electrical wiring in the United Kingdom Grounding Ground and neutral Home wiring Industrial and multiphase power plugs and sockets Oxygen-free copper Portable cord Power cord Restriction of Hazardous Substances Directive (RoHS) Single-phase electric power Structured cabling Three-phase electric power Tri-rated cable References Bibliography Croft, Terrel (1915) Wiring of Finished Buildings, McGraw Hill, New York. External links Electrical wiring FAQ (oriented to US and Canadian practice) Building codes Electrical engineering Power cables
Electrical wiring
[ "Physics", "Engineering" ]
4,661
[ "Electrical systems", "Building engineering", "Physical systems", "Building codes", "Electrical engineering", "Electrical wiring" ]
1,106,055
https://en.wikipedia.org/wiki/Linear%20no-threshold%20model
The linear no-threshold model (LNT) is a dose-response model used in radiation protection to estimate stochastic health effects such as radiation-induced cancer, genetic mutations and teratogenic effects on the human body due to exposure to ionizing radiation. The model assumes a linear relationship between dose and health effects, even for very low doses where biological effects are more difficult to observe. The LNT model implies that all exposure to ionizing radiation is harmful, regardless of how low the dose is, and that the effect is cumulative over a lifetime. The LNT model is commonly used by regulatory bodies as a basis for formulating public health policies that set regulatory dose limits to protect against the effects of radiation. The validity of the LNT model, however, is disputed, and other models exist: the threshold model, which assumes that very small exposures are harmless, the radiation hormesis model, which says that radiation at very small doses can be beneficial, and the supra-linear model. It has been argued that the LNT model may have created an irrational fear of radiation. Scientific organizations and government regulatory bodies generally support use of the LNT model, particularly for optimization. However, some caution against estimating health effects from doses below a certain level (see below). Introduction Stochastic health effects are those that occur by chance, and whose probability is proportional to the dose, but whose severity is independent of the dose. The LNT model assumes there is no lower threshold at which stochastic effects start, and assumes a linear relationship between dose and the stochastic health risk. In other words, LNT assumes that radiation has the potential to cause harm at any dose level, however small, and the sum of several very small exposures is just as likely to cause a stochastic health effect as a single larger exposure of equal dose value. In contrast, deterministic health effects are radiation-induced effects such as acute radiation syndrome, which are caused by tissue damage. Deterministic effects reliably occur above a threshold dose and their severity increases with dose. Because of the inherent differences, LNT is not a model for deterministic effects, which are instead characterized by other types of dose-response relationships. LNT is commonly used to calculate the probability of radiation-induced cancer at high doses, where epidemiological studies support its application, and, more controversially, at low doses, a dose region with lower predictive statistical confidence. Nonetheless, regulatory bodies, such as the Nuclear Regulatory Commission (NRC), commonly use LNT as a basis for regulatory dose limits to protect against stochastic health effects, as found in many public health policies. Whether the LNT model describes the reality for small-dose exposures is disputed, and petitions challenging the NRC's use of the LNT model for setting radiation protection regulations have been submitted. NRC rejected the petitions in 2021 because "they fail to present an adequate basis supporting the request to discontinue use of the LNT model". Other dose models include: the threshold model, which assumes that very small exposures are harmless, and the radiation hormesis model, which claims that radiation at very small doses can be beneficial. 
Because the current data is inconclusive, scientists disagree on which model should be used, though most national and international cancer research organizations explicitly endorse LNT for regulating exposures to low dose radiation. The model is sometimes used to quantify the carcinogenic effect of collective doses from low-level radioactive contamination, which is controversial. Such practice has been criticized by the International Commission on Radiological Protection since 2007. Origins The association of exposure to radiation with cancer had been observed as early as 1902, six years after the discovery of X-rays by Wilhelm Röntgen and radioactivity by Henri Becquerel. In 1927, Hermann Muller demonstrated that radiation may cause genetic mutation. He also suggested mutation as a cause of cancer. In 1928, Gilbert N. Lewis and Alex Olson, building on Muller's discovery of the mutagenic effect of radiation, proposed a mechanism for biological evolution in which genomic mutation was induced by cosmic and terrestrial radiation; they were the first to suggest that such mutation may occur in proportion to the radiation dose. Various laboratories, including Muller's, then demonstrated the apparent linear dose response of mutation frequency. Muller, who received a Nobel Prize for his work on the mutagenic effect of radiation in 1946, asserted in his Nobel lecture, The Production of Mutation, that mutation frequency is "directly and simply proportional to the dose of irradiation applied" and that there is "no threshold dose". The early studies were based on relatively high levels of radiation, which made it hard to establish the safety of low levels of radiation. Indeed, many early scientists believed that there may be a tolerance level, and that low doses of radiation may not be harmful. A later study, in 1955, on mice exposed to low doses of radiation suggested that they might outlive control animals. The interest in the effects of radiation intensified after the dropping of atomic bombs on Hiroshima and Nagasaki, and studies were conducted on the survivors. Although compelling evidence on the effects of low doses of radiation was hard to come by, by the late 1940s the idea of LNT had become more popular because of its mathematical simplicity. In 1954, the National Council on Radiation Protection and Measurements (NCRP) introduced the concept of maximum permissible dose. In 1958, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) assessed the LNT model and a threshold model, but noted the difficulty in acquiring "reliable information about the correlation between small doses and their effects either in individuals or in large populations". The United States Congress Joint Committee on Atomic Energy (JCAE) similarly could not establish if there is a threshold or "safe" level for exposure; nevertheless, it introduced the concept of "As Low As Reasonably Achievable" (ALARA). ALARA would become a fundamental principle in radiation protection policy that implicitly accepts the validity of LNT. In 1959, the United States Federal Radiation Council (FRC) supported the concept of the LNT extrapolation down to the low dose region in its first report. By the 1970s, the LNT model had become accepted as the standard in radiation protection practice by a number of bodies. 
In 1972, the first report of the National Academy of Sciences (NAS) committee on the Biological Effects of Ionizing Radiation (BEIR), an expert panel that reviewed the available peer-reviewed literature, supported the LNT model on pragmatic grounds, noting that while the "dose-effect relationship for x rays and gamma rays may not be a linear function", the "use of linear extrapolation ... may be justified on pragmatic grounds as a basis for risk estimation." In its seventh report of 2006, NAS BEIR VII wrote, "the committee concludes that the preponderance of information indicates that there will be some risk, even at low doses". The Health Physics Society (in the United States) has published a documentary series on the origins of the LNT model. Radiation precautions and public policy Following the precautionary LNT model, radiation precautions have led to sunlight being listed as a carcinogen at all sun exposure rates because of its ultraviolet component, with no safe level of sunlight exposure being suggested. According to a 2007 study submitted by the University of Ottawa to the Department of Health and Human Services in Washington, D.C., there is not enough information to determine a safe level of sun exposure. The linear no-threshold model is used to extrapolate the expected number of extra deaths caused by exposure to environmental radiation, and it therefore has a great impact on public policy. The model is used to translate any radiation release into a number of lives lost, while any reduction in radiation exposure, for example as a consequence of radon detection, is translated into a number of lives saved. When the doses are very low the model predicts new cancers only in a very small fraction of the population, but for a large population, the number of lives is extrapolated into hundreds or thousands. A linear model has long been used in health physics to set maximum acceptable radiation exposures. Controversy The LNT model has been contested by a number of scientists. It has been claimed that Hermann Joseph Muller, an early proponent of the model, intentionally ignored an early study that did not support the LNT model when he gave his 1946 Nobel Prize address advocating the model. It was known at the time that radiation at the very high doses used in radiation therapy can cause a physiological increase in the rate of pregnancy anomalies; however, human exposure data and animal testing suggest that the "malformation of organs appears to be a deterministic effect with a threshold dose", below which no rate increase is observed. A review in 1999 on the link between the Chernobyl accident and teratology (birth defects) concluded that "there is no substantive proof regarding radiation‐induced teratogenic effects from the Chernobyl accident". It is argued that the human body has defense mechanisms, such as DNA repair and programmed cell death, that would protect it against carcinogenesis due to low-dose exposures to carcinogens. However, these repair mechanisms are known to be error prone. A 2011 study of cellular repair mechanisms supports the evidence against the linear no-threshold model. According to its authors, this study published in the Proceedings of the National Academy of Sciences of the United States of America "casts considerable doubt on the general assumption that risk to ionizing radiation is proportional to dose". 
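As a sketch of the population-level extrapolation described under "Radiation precautions and public policy" above, the following Python fragment shows how a strictly linear, no-threshold assumption turns a collective dose into a predicted number of excess cancers. The risk coefficient used is only an assumed nominal value of the order commonly cited in radiation protection (a few percent per sievert); it is inserted here for illustration and is not taken from any specific regulatory document.

```python
# Illustration of how the LNT assumption converts a collective dose into a
# predicted number of excess cancers. The coefficient below is an assumed
# nominal value of the kind used in radiation protection, for illustration.

RISK_PER_SIEVERT = 0.05   # assumed nominal excess cancer risk per sievert

def excess_cases(population: int, mean_dose_sv: float,
                 risk_per_sv: float = RISK_PER_SIEVERT) -> float:
    """Expected excess cancers under a strictly linear, no-threshold model."""
    collective_dose_person_sv = population * mean_dose_sv
    return collective_dose_person_sv * risk_per_sv

if __name__ == "__main__":
    # One million people each receiving an extra 1 mSv, roughly one year of
    # typical terrestrial and cosmic background dose, excluding radon:
    print(excess_cases(population=1_000_000, mean_dose_sv=0.001))  # -> 50.0
```

Under a threshold or hormetic model, the same collective dose spread thinly over a large population would predict few or no excess cases, which is why the choice of dose-response model dominates such estimates.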
A 2011 review of studies addressing childhood leukaemia following exposure to ionizing radiation, including both diagnostic exposure and natural background exposure from radon, concluded that the existing risk factor, the excess relative risk per sievert (ERR/Sv), is "broadly applicable" to low dose or low dose-rate exposure, "although the uncertainties associated with this estimate are considerable". The study also notes that "epidemiological studies have been unable, in general, to detect the influence of natural background radiation upon the risk of childhood leukaemia". Many expert scientific panels have been convened on the risks of ionizing radiation. Most explicitly support the LNT model and none have concluded that evidence exists for a threshold, with the exception of the French Academy of Sciences in a 2005 report. Considering the uncertainty of health effects at low doses, several organizations caution against estimating health effects below certain doses, generally below natural background, as noted below: The US Nuclear Regulatory Commission upheld the LNT model in 2021 as a "sound regulatory basis for minimizing the risk of unnecessary radiation exposure to both members of the public and radiation workers" following challenges to the dose limit requirements contained in its regulations. In 2004 the United States National Research Council (part of the National Academy of Sciences) supported the linear no-threshold model in a statement addressing radiation hormesis. In 2005 the United States National Academies' National Research Council published its comprehensive meta-analysis of low-dose radiation research, BEIR VII, Phase 2, and summarized its conclusions in a press release. In a 2005 report, the International Commission on Radiological Protection stated: "The report concludes that while existence of a low-dose threshold does not seem to be unlikely for radiation-related cancers of certain tissues, the evidence does not favour the existence of a universal threshold. The LNT hypothesis, combined with an uncertain DDREF for extrapolation from high doses, remains a prudent basis for radiation protection at low doses and low dose rates." In a 2007 report, ICRP noted that collective dose is effective for optimization, but aggregation of very low doses to estimate excess cancers is inappropriate because of large uncertainties. The National Council on Radiation Protection and Measurements (a body commissioned by the United States Congress), in a 2018 report, "concludes that the recent epidemiological studies support the continued use of LNT model for radiation protection. This is in accord with judgments by other national and international scientific committees, based on somewhat older data, that no alternative dose-response relationship appears more pragmatic or prudent for radiation protection purposes than the LNT model." 
The United States Environmental Protection Agency endorsed the LNT model in its 2011 report on radiogenic cancer risk. UNSCEAR addressed the issue again in Appendix C of its 2020/2021 report. A number of organisations caution against using the linear no-threshold model to estimate risk from radiation exposure below a certain level: The French Academy of Sciences (Académie des sciences) and the National Academy of Medicine (Académie nationale de médecine) published a report in 2005 (at the same time as the BEIR VII report in the United States) that rejected the linear no-threshold model in favor of a threshold dose response and a significantly reduced risk at low radiation exposure. The Health Physics Society's position statement, first adopted in January 1996 and last revised in February 2019, takes a similar cautionary position. The American Nuclear Society states that the LNT model may not adequately describe the relationship between harm and exposure and notes the recommendation in ICRP-103 "that the LNT model not be used for estimating the health effects of trivial exposures received by large populations over long periods of time…" It further recommends additional research. UNSCEAR expressed similar caution in its 2012 report. Mental health effects It has been argued that the LNT model has caused an irrational fear of radiation whose observable effects are much more significant than the non-observable effects postulated by LNT. In the wake of the 1986 Chernobyl accident in Ukraine, Europe-wide anxieties were fomented among pregnant mothers over the perception, reinforced by the LNT model, that their children would be born with a higher rate of mutations. As far afield as Switzerland, hundreds of excess induced abortions were performed on healthy pregnancies out of this no-threshold fear. Following the accident, however, data sets approaching a million births in the EUROCAT database, divided into "exposed" and control groups, were assessed in 1999. As no Chernobyl impacts were detected, the researchers concluded "in retrospect the widespread fear in the population about the possible effects of exposure on the unborn was not justified". Despite studies from Germany and Turkey, the only robust evidence of negative pregnancy outcomes after the accident was these indirect effects of elective abortion, in Greece, Denmark, Italy and elsewhere, caused by the anxieties created. The consequences of low-level radiation are often more psychological than radiological. Because damage from very-low-level radiation cannot be detected, people exposed to it are left in anguished uncertainty about what will happen to them. Many believe they have been fundamentally contaminated for life and may refuse to have children for fear of birth defects. They may be shunned by others in their community who fear a sort of mysterious contagion. Forced evacuation from a radiation or nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, or suicide. Such was the outcome of the 1986 Chernobyl nuclear disaster in Ukraine. A comprehensive 2005 study concluded that "the mental health impact of Chernobyl is the largest public health problem unleashed by the accident to date". Frank N. von Hippel, a U.S. scientist, commented on the 2011 Fukushima nuclear disaster, saying that "fear of ionizing radiation could have long-term psychological effects on a large portion of the population in the contaminated areas". 
Such great psychological danger does not accompany other materials that put people at risk of cancer and other deadly illness. Visceral fear is not widely aroused by, for example, the daily emissions from coal burning, although as a National Academy of Sciences study found, this causes 10,000 premature deaths a year in the US. It is "only nuclear radiation that bears a huge psychological burden – for it carries a unique historical legacy". See also DNA repair Dose fractionation Nuclear power debate#Health effects on population near nuclear power plants and workers Radiation-induced cancer Radiology Radiotherapy Inge Schmitz-Feuerhake Biphasic Model, a fringe theory that low dose radiation is generally more harmful than higher doses. References External links ICRP, International Commission on Radiological Protection ICRU, International Commission on Radiation Units and Measurements IAEA, International Atomic Energy Agency UNSCEAR, United Nations Scientific Committee on the Effects of Atomic Radiation HPA (formerly the NRPB), Health Protection Agency, UK IRPA, International Radiation Protection Association NCRP, National Council on Radiation Protection and Measurements, US IRSN, Institute for Radioprotection and Nuclear Safety, France Report from the European Committee on Radiation Risk broadly supporting the Linear No Threshold model ECRR report on Chernobyl (April 2006) claiming deliberate suppression of the LNT in public health studies BBC article discussing doubts over LNT How dangerous is ionising radiation? Reprinted PowerPoint notes from a colloquium at the Physics Department, Oxford University, 24 November 2006 International Dose-Response Society – dedicated to the enhancement, exchange, and dissemination of ongoing global research in hormesis, a dose-response phenomenon characterized by low-dose stimulation and high-dose inhibition. Radiation health effects Radiobiology Nuclear medicine Oncology Medical controversies Radiation protection
Linear no-threshold model
[ "Chemistry", "Materials_science", "Biology" ]
3,527
[ "Radiobiology", "Radiation effects", "Radiation health effects", "Radioactivity" ]
1,106,101
https://en.wikipedia.org/wiki/Radiation%20hormesis
Radiation hormesis is the hypothesis that low doses of ionizing radiation (within the region of and just above natural background levels) are beneficial, stimulating the activation of repair mechanisms that protect against disease and that are not activated in the absence of ionizing radiation. The reserve repair mechanisms are hypothesized to be sufficiently effective when stimulated as to not only cancel the detrimental effects of ionizing radiation but also inhibit disease not related to radiation exposure (see hormesis). It has been a mainstream concept since at least 2009. While the effects of high and acute doses of ionising radiation are easily observed and understood in humans (e.g. Japanese atomic bomb survivors), the effects of low-level radiation are very difficult to observe and highly controversial. This is because the baseline cancer rate is already very high and the risk of developing cancer fluctuates by about 40% because of individual lifestyle and environmental effects, obscuring the subtle effects of low-level radiation. An acute effective dose of 100 millisieverts may increase cancer risk by ~0.8%. However, children are particularly sensitive to radioactivity, with childhood leukemias and other cancers increasing even within natural and man-made background radiation levels (under 4 mSv cumulative, with 1 mSv being an average annual dose from terrestrial and cosmic radiation, excluding radon, which primarily doses the lung). There is limited evidence that exposures around this dose level cause negative subclinical health impacts on neural development. Students born in regions of higher Chernobyl fallout performed worse in secondary school, particularly in mathematics. "Damage is accentuated within families (i.e., siblings comparison) and among children born to parents with low education...", parents who often lack the resources to overcome this additional health challenge. Hormesis remains largely unknown to the public. Government and regulatory bodies disagree on the existence of radiation hormesis and research points to the "severe problems and limitations" with the use of hormesis in general as the "principal dose-response default assumption in a risk assessment process charged with ensuring public health protection." Citing results from a literature database search, the Académie des Sciences – Académie nationale de Médecine (French Academy of Sciences – National Academy of Medicine) stated in their 2005 report concerning the effects of low-level radiation that many laboratory studies have observed radiation hormesis. However, they cautioned that it is not yet known if radiation hormesis occurs outside the laboratory, or in humans. Reports by the United States National Research Council and the National Council on Radiation Protection and Measurements and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) argue that there is no evidence for hormesis in humans; in the case of the National Research Council, hormesis is rejected outright as a possibility. Therefore, the linear no-threshold model (LNT) continues to be the model generally used by regulatory agencies for human radiation exposure. Proposed mechanism and ongoing debate Radiation hormesis proposes that radiation exposure comparable to and just above the natural background level of radiation is not harmful but beneficial, while accepting that much higher levels of radiation are hazardous. 
Proponents of radiation hormesis typically claim that radio-protective responses in cells and the immune system not only counter the harmful effects of radiation but additionally act to inhibit spontaneous cancer not related to radiation exposure. Radiation hormesis stands in stark contrast to the more generally accepted linear no-threshold model (LNT), which states that the radiation dose-risk relationship is linear across all doses, so that small doses are still damaging, albeit less so than higher ones. Opinion pieces on chemical and radiobiological hormesis appeared in the journals Nature and Science in 2003. Assessing the risk of radiation at low doses (<100 mSv) and low dose rates (<0.1 mSv/min) is highly problematic and controversial. While epidemiological studies on populations of people exposed to an acute dose of high level radiation such as Japanese atomic bomb survivors have robustly upheld the LNT (mean dose ~210 mSv), studies involving low doses and low dose rates have failed to detect any increased cancer rate. This is because the baseline cancer rate is already very high (~42 of 100 people will be diagnosed in their lifetime) and it fluctuates ~40% because of lifestyle and environmental effects, obscuring the subtle effects of low level radiation. Epidemiological studies may be capable of detecting elevated relative risks as low as 1.2 to 1.3, i.e. a 20% to 30% increase. But for low doses (1–100 mSv) the predicted elevated risks are only 1.001 to 1.04 and excess cancer cases, if present, cannot be detected due to confounding factors, errors and biases. In particular, variations in smoking prevalence or even accuracy in reporting smoking cause wide variation in excess cancer and measurement error bias. Thus, even a large study of many thousands of subjects with imperfect smoking prevalence information will be less able to detect the effects of low-level radiation than a smaller study that properly compensates for smoking prevalence. Given the absence of direct epidemiological evidence, there is considerable debate as to whether the dose-response relationship <100 mSv is supralinear, linear (LNT), has a threshold, is sub-linear, or whether the coefficient is negative with a sign change, i.e. a hormetic response (a simple numerical illustration of these candidate shapes appears at the end of the next section). The radiation adaptive response seems to be a main origin of the potential hormetic effect. Theoretical studies indicate that the adaptive response is responsible for the shape of the dose-response curve and can transform the linear relationship (LNT) into the hormetic one. While most major consensus reports and government bodies currently adhere to LNT, the 2005 report of the French Academy of Sciences and the National Academy of Medicine concerning the effects of low-level radiation rejected LNT as a scientific model of carcinogenic risk at low doses. In their view, using LNT to estimate the carcinogenic effect at doses of less than 20 mSv is not justified in the light of current radiobiologic knowledge. They consider there to be several dose-effect relationships rather than only one, and that these relationships have many variables such as target tissue, radiation dose, dose rate and individual sensitivity factors. They call for further study of low doses (less than 100 mSv) and very low doses (less than 10 mSv), as well as of the impact of tissue type and age. The Academy considers the LNT model to be useful only for regulatory purposes, as it simplifies the administrative task. 
Quoting results from literature research, they furthermore claim that approximately 40% of laboratory studies on cell cultures and animals indicate some degree of chemical or radiobiological hormesis, and state: ...its existence in the laboratory is beyond question and its mechanism of action appears well understood. They go on to outline a growing body of research that illustrates that the human body is not a passive accumulator of radiation damage but it actively repairs the damage caused via a number of different processes, including: Mechanisms that mitigate reactive oxygen species generated by ionizing radiation and oxidative stress. Apoptosis of radiation damaged cells that may undergo tumorigenesis is initiated at only few mSv. Cell death during meiosis of radiation damaged cells that were unsuccessfully repaired. The existence of a cellular signaling system that alerts neighboring cells of cellular damage. The activation of enzymatic DNA repair mechanisms around 10 mSv. Modern DNA microarray studies which show that numerous genes are activated at radiation doses well below the level that mutagenesis is detected. Radiation-induced tumorigenesis may have a threshold related to damage density, as revealed by experiments that employ blocking grids to thinly distribute radiation. A large increase in tumours in immunosuppressed individuals illustrates that the immune system efficiently destroys aberrant cells and nascent tumors. Furthermore, increased sensitivity to radiation induced cancer in the inherited condition Ataxia-telangiectasia like disorder, illustrates the damaging effects of loss of the repair gene Mre11h resulting in the inability to fix DNA double-strand breaks. The BEIR-VII report argued that, "the presence of a true dose threshold demands totally error-free DNA damage response and repair." The specific damage they worry about is double strand breaks (DSBs) and they continue, "error-prone nonhomologous end joining (NHEJ) repair in postirradiation cellular response, argues strongly against a DNA repair-mediated low-dose threshold for cancer initiation". Recent research observed that DSBs caused by CAT scans are repaired within 24 hours and DSBs may be more efficiently repaired at low doses, suggesting that the risk of ionizing radiation at low doses may not be directly proportional to the dose. However, it is not known if low-dose ionizing radiation stimulates the repair of DSBs not caused by ionizing radiation i.e. a hormetic response. Radon gas in homes is the largest source of radiation dose for most individuals and it is generally advised that the concentration be kept below 150 Bq/m³ (4 pCi/L). A recent retrospective case-control study of lung cancer risk showed substantial cancer rate reduction between 50 and 123 Bq per cubic meter relative to a group at zero to 25 Bq per cubic meter. This study is cited as evidence for hormesis, but a single study all by itself cannot be regarded as definitive. Other studies into the effects of domestic radon exposure have not reported a hormetic effect; including for example the respected "Iowa Radon Lung Cancer Study" of Field et al. (2000), which also used sophisticated radon exposure dosimetry. In addition, Darby et al. (2005) argue that radon exposure is negatively correlated with the tendency to smoke and environmental studies need to accurately control for this; people living in urban areas where smoking rates are higher usually have lower levels of radon exposure due to the increased prevalence of multi-story dwellings. 
When doing so, they found a significant increase in lung cancer amongst smokers exposed to radon at doses as low as 100 to 199 Bq/m³ and warned that smoking greatly increases the risk posed by radon exposure, i.e. reducing the prevalence of smoking would decrease deaths caused by radon. However, debate about the conflicting experimental results continues; in particular, some widely cited US and German studies have reported hormetic effects. Furthermore, particle microbeam studies show that passage of even a single alpha particle (e.g. from radon and its progeny) through cell nuclei is highly mutagenic, and that alpha radiation may have a higher mutagenic effect at low doses (even if a small fraction of cells are hit by alpha particles) than predicted by the linear no-threshold model, a phenomenon attributed to the bystander effect. However, there is currently insufficient evidence at hand to suggest that the bystander effect promotes carcinogenesis in humans at low doses. Statements by leading nuclear bodies Radiation hormesis has not been accepted by either the United States National Research Council or the National Council on Radiation Protection and Measurements (NCRP). In May 2018, the NCRP published the report of an interdisciplinary group of radiation experts who critically reviewed 29 high-quality epidemiologic studies of populations exposed to radiation in the low dose and low dose-rate range, mostly published within the last 10 years. The group of experts concluded: The recent epidemiologic studies support the continued use of the LNT model for radiation protection. This is in accord with judgments by other national and international scientific committees, based on somewhat older data, that no alternative dose-response relationship appears more pragmatic or prudent for radiation protection purposes than the LNT model. In addition, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) wrote in its 2000 report: Until the [...] uncertainties on low-dose response are resolved, the Committee believes that an increase in the risk of tumour induction proportionate to the radiation dose is consistent with developing knowledge and that it remains, accordingly, the most scientifically defensible approximation of low-dose response. However, a strictly linear dose response should not be expected in all circumstances. This is a reference to the fact that very low doses of radiation have only marginal impacts on individual health outcomes. It is therefore difficult to detect the 'signal' of decreased or increased morbidity and mortality due to low-level radiation exposure in the 'noise' of other effects. The notion of radiation hormesis has been rejected by the National Research Council (part of the National Academy of Sciences) in its 16-year-long study on the Biological Effects of Ionizing Radiation. "The scientific research base shows that there is no threshold of exposure below which low levels of ionizing radiation can be demonstrated to be harmless or beneficial. The health risks – particularly the development of solid cancers in organs – rise proportionally with exposure" says Richard R. Monson, associate dean for professional education and professor of epidemiology, Harvard School of Public Health, Boston. 
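The following toy sketch gives a purely numerical illustration of the candidate low-dose response shapes discussed under "Proposed mechanism and ongoing debate" above. The functional forms and every coefficient in it are invented for illustration; none of the curves is fitted to epidemiological data, and the function names are ours.

```python
# Toy comparison of candidate low-dose dose-response shapes.
# All coefficients are invented for illustration only; none of these curves
# is fitted to epidemiological data.

import math

SLOPE = 0.05          # assumed excess risk per sievert for the linear model
THRESHOLD_SV = 0.1    # assumed threshold dose for the threshold model

def linear_no_threshold(dose_sv: float) -> float:
    """Risk proportional to dose all the way down to zero."""
    return SLOPE * dose_sv

def threshold_model(dose_sv: float) -> float:
    """No excess risk below the threshold, linear above it."""
    return SLOPE * max(0.0, dose_sv - THRESHOLD_SV)

def hormetic_model(dose_sv: float) -> float:
    """J-shaped toy curve: a small net benefit at very low doses, converging
    to the linear curve once the (assumed) adaptive response saturates."""
    return SLOPE * dose_sv - 0.15 * dose_sv * math.exp(-dose_sv / 0.05)

if __name__ == "__main__":
    for d in (0.001, 0.01, 0.1, 1.0):   # doses in sievert
        print(f"{d:5.3f} Sv  LNT={linear_no_threshold(d):+.5f}  "
              f"threshold={threshold_model(d):+.5f}  "
              f"hormetic={hormetic_model(d):+.5f}")
```

At doses below roughly 100 mSv the three toy curves differ by well under one percentage point of lifetime risk, which restates numerically why epidemiological studies struggle to discriminate between them.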
Studies of low-level radiation Cancer rates and very high natural background gamma radiation at Kerala, India Kerala's monazite sand (containing a third of the world's economically recoverable reserves of radioactive thorium) emits about 8 microsieverts per hour of gamma radiation, 80 times the dose rate equivalent in London, but a decade-long study of 69,985 residents published in Health Physics in 2009 "showed no excess cancer risk from exposure to terrestrial gamma radiation. The excess relative risk of cancer excluding leukemia was estimated to be −0.13 per Gy (95% CI: −0.58, 0.46)", indicating no statistically significant positive or negative relationship between background radiation levels and cancer risk in this sample. Cultures Studies in cell cultures can be useful for finding mechanisms for biological processes, but they also can be criticized for not effectively capturing the whole of the living organism. A study by E. I. Azzam suggested that pre-exposure to radiation causes cells to turn on protection mechanisms. A different study by de Toledo and collaborators has shown that irradiation with gamma rays increases the concentration of glutathione, an antioxidant found in cells. In 2011, an in vitro study led by S. V. Costes showed in time-lapse images a strongly non-linear response of certain cellular repair mechanisms called radiation-induced foci (RIF). The study found that low doses of radiation prompted higher rates of RIF formation than high doses, and that after low-dose exposure RIF continued to form after the radiation had ended. Measured rates of RIF formation were 15 RIF/Gy at 2 Gy, and 64 RIF/Gy at 0.1 Gy. These results suggest that low dose levels of ionizing radiation may not increase cancer risk in direct proportion to dose and thus contradict the linear-no-threshold standard model. Mina Bissell, a world-renowned breast-cancer researcher and collaborator in this study, stated: "Our data show that at lower doses of ionizing radiation, DNA repair mechanisms work much better than at higher doses. This non-linear DNA damage response casts doubt on the general assumption that any amount of ionizing radiation is harmful and additive." Animals An early study on mice exposed to a low dose of radiation daily (0.11 R per day) suggested that they may outlive control animals. A study by Otsuka and collaborators found hormesis in animals. Miyachi conducted a study on mice and found that a 200 mGy X-ray dose protects mice against both further X-ray exposure and ozone gas. In another rodent study, Sakai and collaborators found that low-dose-rate (1 mGy/h) gamma irradiation prevents the development of cancer induced by chemical means (injection of methylcholanthrene). In a 2006 paper, a dose of 1 Gy was delivered to cells (at a constant rate from a radioactive source) over a range of delivery times between 8.77 and 87.7 hours. The abstract states that for a dose delivered over 35 hours or more (low dose rate), no transformation of the cells occurred, and that for the 1 Gy dose delivered over 8.77 to 18.3 hours the biological effect (neoplastic transformation) was about "1.5 times less than that measured at high dose rate in previous studies with a similar quality of [X-ray] radiation". Likewise it has been reported that fractionation of gamma irradiation reduces the likelihood of a neoplastic transformation. Pre-exposure to fast neutrons and gamma rays from Cs-137 is reported to increase the ability of a second dose to induce a neoplastic transformation. 
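For a sense of scale, the short script below converts two sets of figures quoted in this section into common units: the Kerala external gamma dose rate given above, and the protraction times used in the 2006 cell-culture experiment just described. The conversion factors are standard; the helper function names are ours and the script is illustrative only.

```python
# Unit conversions for figures quoted in this section (illustrative only).

HOURS_PER_YEAR = 24 * 365.25

def annual_dose_msv(dose_rate_usv_per_h: float) -> float:
    """Annual dose, in mSv, from continuous exposure at a given rate in microsieverts per hour."""
    return dose_rate_usv_per_h * HOURS_PER_YEAR / 1000.0

def mean_dose_rate_mgy_per_h(total_dose_gy: float, hours: float) -> float:
    """Average dose rate, in mGy/h, when a fixed dose is protracted over a given time."""
    return total_dose_gy * 1000.0 / hours

if __name__ == "__main__":
    # Kerala monazite sand: about 8 microsieverts per hour of external gamma dose.
    print(f"Kerala, continuous exposure: ~{annual_dose_msv(8):.0f} mSv per year")

    # 1 Gy protracted over the delivery times quoted from the 2006 paper:
    for hours in (8.77, 18.3, 35.0, 87.7):
        rate = mean_dose_rate_mgy_per_h(1.0, hours)
        print(f"1 Gy over {hours:5.2f} h -> {rate:6.1f} mGy/h")
```

Even the slowest of these laboratory protractions corresponds to an average dose rate more than a thousand times the Kerala background figure, which is worth keeping in mind when extrapolating cell-culture results to environmental exposures.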
Caution must be used in interpreting these results; as noted in the BEIR VII report, such pre-doses can also increase cancer risk. However, 75 mGy/d cannot accurately be described as a low dose rate – it is equivalent to over 27 sieverts per year. The same study showed no increase in cancer and no reduction in life expectancy for dogs irradiated at 3 mGy/d. Humans Effects of slightly increased radiation levels A long-term study of Chernobyl disaster liquidators reported that "During current research paradoxically longer telomeres were found among persons, who have received heavier long-term irradiation." and that "Mortality due to oncologic diseases was lower than in general population in all age groups that may reflect efficient health care of this group." In its conclusions, however, these interim results were set aside and the study followed the LNT hypothesis: "The signs of premature aging were found in Chernobyl disaster clean-up workers; moreover, aging process developed in heavier form and at younger age in humans, who underwent greater exposure to ionizing radiation." A study of survivors of the Hiroshima atomic bomb explosion yielded similar results. Effects of sunlight exposure In an Australian study which analyzed the association between solar UV exposure and DNA damage, the results indicated that although the frequency of cells with chromosome breakage increased with increasing sun exposure, the misrepair of DNA strand breaks decreased as sun exposure was heightened. Effects of cobalt-60 exposure The health of the inhabitants of radioactive apartment buildings in Taiwan has received prominent attention. In 1982, more than 20,000 tons of steel was accidentally contaminated with cobalt-60, and much of this radioactive steel was used to build apartments and exposed thousands of Taiwanese to gamma radiation at levels more than 1,000 times background (average 47.7 mSv, maximum 2360 mSv excess cumulative dose). The radioactive contamination was discovered in 1992. A seriously flawed 2004 study compared the buildings' younger residents with the much older general population of Taiwan and determined that the younger residents were less likely to have been diagnosed with cancer than older people; this was touted as evidence of a radiation hormesis effect. (Older people have much higher cancer rates even in the absence of excess radiation exposure.) In the years shortly after exposure, the total number of cancer cases was reported to be either lower than the society-wide average or slightly elevated. Leukaemia and thyroid cancer were substantially elevated. When a lower rate of "all cancers" was found, it was thought to be due to the exposed residents having a higher socioeconomic status, and thus an overall healthier lifestyle. Additionally, Hwang et al. cautioned in 2006 that leukaemia was the first cancer type found to be elevated amongst the survivors of the Hiroshima and Nagasaki bombings, so it could be decades before any increase in more common cancer types is seen. 
Besides the excess risks of leukaemia and thyroid cancer, a later publication notes various DNA anomalies and other health effects among the exposed population: There have been several reports concerning the radiation effects on the exposed population, including cytogenetic analysis that showed increased micronucleus frequencies in peripheral lymphocytes in the exposed population, increases in acentromeric and single or multiple centromeric cytogenetic damages, and higher frequencies of chromosomal translocations, rings and dicentrics. Other analyses have shown persistent depression of peripheral leucocytes and neutrophils, increased eosinophils, altered distributions of lymphocyte subpopulations, increased frequencies of lens opacities, delays in physical development among exposed children, increased risk of thyroid abnormalities, and late consequences in hematopoietic adaptation in children. People living in these buildings also experienced infertility. Radon therapy Intentional exposure to water and air containing increased amounts of radon is perceived as therapeutic, and "radon spas" can be found in the United States, Czechia, Poland, Germany, Austria, and other countries. Effects of no radiation Given the uncertain effects of low-level and very-low-level radiation, there is a pressing need for quality research in this area. An expert panel convened at the 2006 Ultra-Low-Level Radiation Effects Summit at Carlsbad, New Mexico, proposed the construction of an Ultra-Low-Level Radiation laboratory. The laboratory, if built, would investigate the effects of almost no radiation on laboratory animals and cell cultures, and it would compare these groups to control groups exposed to natural radiation levels. Precautions would be taken, for example, to remove potassium-40 from the food of laboratory animals. The expert panel believes that the Ultra-Low-Level Radiation laboratory is the only experiment that can explore with authority and confidence the effects of low-level radiation, and that it could confirm or discard the various radiobiological effects proposed at low radiation levels, e.g. LNT, threshold, and radiation hormesis. The first preliminary results on the effects of almost no radiation on cell cultures were reported by two research groups in 2011 and 2012; researchers in the US studied cell cultures protected from radiation in a steel chamber 650 meters underground at the Waste Isolation Pilot Plant in Carlsbad, New Mexico, while researchers in Europe proposed an experimental design to study the effects of almost no radiation on mouse cells (the pKZ1 transgenic chromosomal inversion assay), but did not carry out the experiment. See also Background radiation Dose fractionation Hormesis Radithor Linear no-threshold model Petkau effect Radioresistance Ramsar, Mazandaran References Further reading Sanders, Charles L. (2009). Radiation Hormesis and the Linear-No-Threshold Assumption. External links International Dose-Response Society. University of Massachusetts center for research on hormesis. Many papers on radiation hormesis. Health Risks from Exposure to Low Levels of Ionizing Radiation: BEIR VII Phase 2 Radiation Hormesis Overview by T. D. Luckey, who wrote a book on the subject (Luckey, T. D. (1991). Radiation Hormesis. Boca Raton, FL: CRC Press.) Radiobiology Radiation health effects
Radiation hormesis
[ "Chemistry", "Materials_science", "Biology" ]
4,739
[ "Radiobiology", "Radiation effects", "Radiation health effects", "Radioactivity" ]
1,107,334
https://en.wikipedia.org/wiki/Notch%20signaling%20pathway
The Notch signaling pathway is a highly conserved cell signaling system present in most animals. Mammals possess four different notch receptors, referred to as NOTCH1, NOTCH2, NOTCH3, and NOTCH4. The notch receptor is a single-pass transmembrane receptor protein. It is a hetero-oligomer composed of a large extracellular portion, which associates in a calcium-dependent, non-covalent interaction with a smaller piece of the notch protein composed of a short extracellular region, a single transmembrane-pass, and a small intracellular region. Notch signaling promotes proliferative signaling during neurogenesis, and its activity is inhibited by Numb to promote neural differentiation. It plays a major role in the regulation of embryonic development. Notch signaling is dysregulated in many cancers, and faulty notch signaling is implicated in many diseases, including T-cell acute lymphoblastic leukemia (T-ALL), cerebral autosomal-dominant arteriopathy with sub-cortical infarcts and leukoencephalopathy (CADASIL), multiple sclerosis, Tetralogy of Fallot, and Alagille syndrome. Inhibition of notch signaling inhibits the proliferation of T-cell acute lymphoblastic leukemia in both cultured cells and a mouse model. Discovery In 1914, John S. Dexter noticed the appearance of a notch in the wings of the fruit fly Drosophila melanogaster. The alleles of the gene were identified in 1917 by American evolutionary biologist Thomas Hunt Morgan. Its molecular analysis and sequencing was independently undertaken in the 1980s by Spyros Artavanis-Tsakonas and Michael W. Young. Alleles of the two C. elegans Notch genes were identified based on developmental phenotypes: lin-12 and glp-1. The cloning and partial sequence of lin-12 was reported at the same time as Drosophila Notch by Iva Greenwald. Mechanism The Notch protein spans the cell membrane, with part of it inside and part outside. Ligand proteins binding to the extracellular domain induce proteolytic cleavage and release of the intracellular domain, which enters the cell nucleus to modify gene expression. The cleavage model was first proposed in 1993 based on work done with Drosophila Notch and C. elegans lin-12, informed by the first oncogenic mutation affecting a human Notch gene. Compelling evidence for this model was provided in 1998 by in vivo analysis in Drosophila by Gary Struhl and in cell culture by Raphael Kopan. Although this model was initially disputed, the evidence in favor of the model was irrefutable by 2001. The receptor is normally triggered via direct cell-to-cell contact, in which the transmembrane proteins of the cells in direct contact form the ligands that bind the notch receptor. The Notch binding allows groups of cells to organize themselves such that, if one cell expresses a given trait, this may be switched off in neighbouring cells by the intercellular notch signal. In this way, groups of cells influence one another to make large structures. Thus, lateral inhibition mechanisms are key to Notch signaling. lin-12 and Notch mediate binary cell fate decisions, and lateral inhibition involves feedback mechanisms to amplify initial differences. The Notch cascade consists of Notch and Notch ligands, as well as intracellular proteins transmitting the notch signal to the cell's nucleus. The Notch/Lin-12/Glp-1 receptor family was found to be involved in the specification of cell fates during development in Drosophila and C. elegans. 
The intracellular domain of Notch forms a complex with CBF1 and Mastermind to activate transcription of target genes. The structure of the complex has been determined. Pathway Maturation of the notch receptor involves cleavage at the prospective extracellular side during intracellular trafficking in the Golgi complex. This results in a bipartite protein, composed of a large extracellular domain linked to the smaller transmembrane and intracellular domain. Binding of ligand promotes two proteolytic processing events; as a result of proteolysis, the intracellular domain is liberated and can enter the nucleus to engage other DNA-binding proteins and regulate gene expression. Notch and most of its ligands are transmembrane proteins, so the cells expressing the ligands typically must be adjacent to the notch expressing cell for signaling to occur. The notch ligands are also single-pass transmembrane proteins and are members of the DSL (Delta/Serrate/LAG-2) family of proteins. In Drosophila melanogaster (the fruit fly), there are two ligands named Delta and Serrate. In mammals, the corresponding names are Delta-like and Jagged. In mammals there are multiple Delta-like and Jagged ligands, as well as possibly a variety of other ligands, such as F3/contactin. In the nematode C. elegans, two genes encode homologous proteins, glp-1 and lin-12. There has been at least one report that suggests that some cells can send out processes that allow signaling to occur between cells that are as much as four or five cell diameters apart. The notch extracellular domain is composed primarily of small cystine-rich motifs called EGF-like repeats. Notch 1, for example, has 36 of these repeats. Each EGF-like repeat is composed of approximately 40 amino acids, and its structure is defined largely by six conserved cysteine residues that form three conserved disulfide bonds. Each EGF-like repeat can be modified by O-linked glycans at specific sites. An O-glucose sugar may be added between the first and second conserved cysteines, and an O-fucose may be added between the second and third conserved cysteines. These sugars are added by an as-yet-unidentified O-glucosyltransferase (except for Rumi), and GDP-fucose Protein O-fucosyltransferase 1 (POFUT1), respectively. The addition of O-fucose by POFUT1 is absolutely necessary for notch function, and, without the enzyme to add O-fucose, all notch proteins fail to function properly. As yet, the manner by which the glycosylation of notch affects function is not completely understood. The O-glucose on notch can be further elongated to a trisaccharide with the addition of two xylose sugars by xylosyltransferases, and the O-fucose can be elongated to a tetrasaccharide by the ordered addition of an N-acetylglucosamine (GlcNAc) sugar by an N-Acetylglucosaminyltransferase called Fringe, the addition of a galactose by a galactosyltransferase, and the addition of a sialic acid by a sialyltransferase. To add another level of complexity, in mammals there are three Fringe GlcNAc-transferases, named lunatic fringe, manic fringe, and radical fringe. These enzymes are responsible for something called a "fringe effect" on notch signaling. If Fringe adds a GlcNAc to the O-fucose sugar then the subsequent addition of a galactose and sialic acid will occur. In the presence of this tetrasaccharide, notch signals strongly when it interacts with the Delta ligand, but has markedly inhibited signaling when interacting with the Jagged ligand. 
The means by which this addition of sugar inhibits signaling through one ligand, and potentiates signaling through another is not clearly understood. Once the notch extracellular domain interacts with a ligand, an ADAM-family metalloprotease called ADAM10, cleaves the notch protein just outside the membrane. This releases the extracellular portion of notch (NECD), which continues to interact with the ligand. The ligand plus the notch extracellular domain is then endocytosed by the ligand-expressing cell. There may be signaling effects in the ligand-expressing cell after endocytosis; this part of notch signaling is a topic of active research. After this first cleavage, an enzyme called γ-secretase (which is implicated in Alzheimer's disease) cleaves the remaining part of the notch protein just inside the inner leaflet of the cell membrane of the notch-expressing cell. This releases the intracellular domain of the notch protein (NICD), which then moves to the nucleus, where it can regulate gene expression by activating the transcription factor CSL. It was originally thought that these CSL proteins suppressed Notch target transcription. However, further research showed that, when the intracellular domain binds to the complex, it switches from a repressor to an activator of transcription. Other proteins also participate in the intracellular portion of the notch signaling cascade. Ligand interactions Notch signaling is initiated when Notch receptors on the cell surface engage ligands presented in trans on opposing cells. Despite the expansive size of the Notch extracellular domain, it has been demonstrated that EGF domains 11 and 12 are the critical determinants for interactions with Delta. Additional studies have implicated regions outside of Notch EGF11-12 in ligand binding. For example, Notch EGF domain 8 plays a role in selective recognition of Serrate/Jagged and EGF domains 6-15 are required for maximal signaling upon ligand stimulation. A crystal structure of the interacting regions of Notch1 and Delta-like 4 (Dll4) provided a molecular-level visualization of Notch-ligand interactions, and revealed that the N-terminal MNNL (or C2) and DSL domains of ligands bind to Notch EGF domains 12 and 11, respectively. The Notch1-Dll4 structure also illuminated a direct role for Notch O-linked fucose and glucose moieties in ligand recognition, and rationalized a structural mechanism for the glycan-mediated tuning of Notch signaling. Synthetic Notch signaling It is possible to engineer synthetic Notch receptors by replacing the extracellular receptor and intracellular transcriptional domains with other domains of choice. This allows researchers to select which ligands are detected, and which genes are upregulated in response. Using this technology, cells can report or change their behavior in response to contact with user-specified signals, facilitating new avenues of both basic and applied research into cell-cell signaling. Notably, this system allows multiple synthetic pathways to be engineered into a cell in parallel. Function The Notch signaling pathway is important for cell-cell communication, which involves gene regulation mechanisms that control multiple cell differentiation processes during embryonic and adult life. 
Notch signaling also has a role in the following processes: neuronal function and development stabilization of arterial endothelial fate and angiogenesis regulation of crucial cell communication events between endocardium and myocardium during both the formation of the valve primordia and ventricular development and differentiation cardiac valve homeostasis, as well as implications in other human disorders involving the cardiovascular system timely cell lineage specification of both endocrine and exocrine pancreas influencing of binary fate decisions of cells that must choose between the secretory and absorptive lineages in the gut expansion of the hematopoietic stem cell compartment during bone development and participation in commitment to the osteoblastic lineage, suggesting a potential therapeutic role for notch in bone regeneration and osteoporosis expansion of the hemogenic endothelial cells along with signaling axis involving Hedgehog signaling and Scl T cell lineage commitment from common lymphoid precursor regulation of cell-fate decision in mammary glands at several distinct development stages possibly some non-nuclear mechanisms, such as control of the actin cytoskeleton through the tyrosine kinase Abl Regulation of the mitotic/meiotic decision in the C. elegans germline development of alveoli in the lung. It has also been found that Rex1 has inhibitory effects on the expression of notch in mesenchymal stem cells, preventing differentiation. Role in embryogenesis The Notch signaling pathway plays an important role in cell-cell communication, and further regulates embryonic development. Embryo polarity Notch signaling is required in the regulation of polarity. For example, mutation experiments have shown that loss of Notch signaling causes abnormal anterior-posterior polarity in somites. Also, Notch signaling is required during left-right asymmetry determination in vertebrates. Early studies in the nematode model organism C. elegans indicate that Notch signaling has a major role in the induction of mesoderm and cell fate determination. As mentioned previously, C. elegans has two genes that encode for partially functionally redundant Notch homologs, glp-1 and lin-12. During C. elegans development, GLP-1, the C. elegans Notch homolog, interacts with APX-1, the C. elegans Delta homolog. This signaling between particular blastomeres induces differentiation of cell fates and establishes the dorsal-ventral axis. Role in somitogenesis Notch signaling is central to somitogenesis. In 1995, Notch1 was shown to be important for coordinating the segmentation of somites in mice. Further studies identified the role of Notch signaling in the segmentation clock. These studies hypothesized that the primary function of Notch signaling does not act on an individual cell, but coordinates cell clocks and keeps them synchronized. This hypothesis explained the role of Notch signaling in the development of segmentation and has been supported by experiments in mice and zebrafish. Experiments with Delta1 mutant mice that show abnormal somitogenesis with loss of anterior/posterior polarity suggest that Notch signaling is also necessary for the maintenance of somite borders. During somitogenesis, a molecular oscillator in paraxial mesoderm cells dictates the precise rate of somite formation. A clock and wavefront model has been proposed in order to spatially determine the location and boundaries between somites. 
This process is highly regulated as somites must have the correct size and spacing in order to avoid malformations within the axial skeleton that may potentially lead to spondylocostal dysostosis. Several key components of the Notch signaling pathway help coordinate key steps in this process. In mice, mutations in Notch1, Dll1 or Dll3, Lfng, or Hes7 result in abnormal somite formation. Similarly, in humans, mutations in the following genes have been seen to lead to the development of spondylocostal dysostosis: DLL3, LFNG, or HES7. Role in epidermal differentiation Notch signaling is known to occur inside ciliated, differentiating cells found in the first epidermal layers during early skin development. Furthermore, it has been found that presenilin-2 works in conjunction with ARF4 to regulate Notch signaling during this development. However, it remains to be determined whether gamma-secretase has a direct or indirect role in modulating Notch signaling. Role in central nervous system development and function Early findings on Notch signaling in central nervous system (CNS) development came mainly from mutagenesis experiments in Drosophila. For example, the finding that an embryonic lethal phenotype in Drosophila was associated with Notch dysfunction indicated that Notch mutations can lead to the failure of neural and epidermal cell segregation in early Drosophila embryos. In the past decade, advances in mutation and knockout techniques allowed research on the Notch signaling pathway in mammalian models, especially rodents. The Notch signaling pathway was found to be critical mainly for neural progenitor cell (NPC) maintenance and self-renewal. In recent years, other functions of the Notch pathway have also been found, including glial cell specification, neurite development, as well as learning and memory. Neuron cell differentiation The Notch pathway is essential for maintaining NPCs in the developing brain. Activation of the pathway is sufficient to maintain NPCs in a proliferating state, whereas loss-of-function mutations in the critical components of the pathway cause precocious neuronal differentiation and NPC depletion. Modulators of the Notch signal, e.g., the Numb protein, are able to antagonize Notch effects, resulting in the halting of the cell cycle and the differentiation of NPCs. Conversely, the fibroblast growth factor pathway promotes Notch signaling to keep stem cells of the cerebral cortex in the proliferative state, amounting to a mechanism regulating cortical surface area growth and, potentially, gyrification. In this way, Notch signaling controls NPC self-renewal as well as cell fate specification. A non-canonical branch of the Notch signaling pathway that involves the phosphorylation of STAT3 on the serine residue at amino acid position 727 and subsequent Hes3 expression increase (STAT3-Ser/Hes3 Signaling Axis) has been shown to regulate the number of NPCs in culture and in the adult rodent brain. In adult rodents and in cell culture, Notch3 promotes neuronal differentiation, having a role opposite to Notch1/2. This indicates that individual Notch receptors can have divergent functions, depending on cellular context. Neurite development In vitro studies show that Notch can influence neurite development. In vivo, deletion of the Notch signaling modulator, Numb, disrupts neuronal maturation in the developing cerebellum, whereas deletion of Numb disrupts axonal arborization in sensory ganglia. 
Although the mechanism underlying this phenomenon is not clear, together these findings suggest Notch signaling might be crucial in neuronal maturation. Gliogenesis In gliogenesis, Notch appears to have an instructive role that can directly promote the differentiation of many glial cell subtypes. For example, activation of Notch signaling in the retina favors the generation of Muller glia cells at the expense of neurons, whereas reduced Notch signaling induces production of ganglion cells, causing a reduction in the number of Muller glia. Adult brain function Apart from its role in development, evidence shows that Notch signaling is also involved in neuronal apoptosis, neurite retraction, and neurodegeneration of ischemic stroke in the brain. In addition to developmental functions, Notch proteins and ligands are expressed in cells of the adult nervous system, suggesting a role in CNS plasticity throughout life. Adult mice heterozygous for mutations in either Notch1 or Cbf1 have deficits in spatial learning and memory. Similar results are seen in experiments with presenilins 1 and 2, which mediate the Notch intramembranous cleavage. To be specific, conditional deletion of presenilins at 3 weeks after birth in excitatory neurons causes learning and memory deficits, neuronal dysfunction, and gradual neurodegeneration. Several gamma secretase inhibitors that underwent human clinical trials in Alzheimer's disease and MCI patients resulted in statistically significant worsening of cognition relative to controls, which is thought to be due to their incidental effect on Notch signalling. Role in cardiovascular development The Notch signaling pathway is a critical component of cardiovascular formation and morphogenesis in both development and disease. It is required for the selection of endothelial tip and stalk cells during sprouting angiogenesis. Cardiac development The Notch signaling pathway plays a crucial role in at least three cardiac development processes: atrioventricular canal development, myocardial development, and cardiac outflow tract (OFT) development. Atrioventricular (AV) canal development AV boundary formation Notch signaling can regulate the atrioventricular boundary formation between the AV canal and the chamber myocardium. Studies have revealed that both loss- and gain-of-function of the Notch pathway result in defects in AV canal development. In addition, the Notch target genes HEY1 and HEY2 are involved in restricting the expression of two critical developmental regulator proteins, BMP2 and Tbx2, to the AV canal. AV epithelial-mesenchymal transition (EMT) Notch signaling is also important for the process of AV EMT, which is required for AV canal maturation. After the AV canal boundary formation, a subset of endocardial cells lining the AV canal are activated by signals emanating from the myocardium and by interendocardial signaling pathways to undergo EMT. Notch1 deficiency results in defective induction of EMT. Very few migrating cells are seen and these lack mesenchymal morphology. Notch may regulate this process by activating matrix metalloproteinase 2 (MMP2) expression, or by inhibiting vascular endothelial (VE)-cadherin expression in the AV canal endocardium while suppressing the VEGF pathway via VEGFR2. In RBPJk/CBF1-targeted mutants, heart valve development is severely disrupted, presumably because of defective endocardial maturation and signaling. 
Ventricular development Some studies in Xenopus and in mouse embryonic stem cells indicate that cardiomyogenic commitment and differentiation require Notch signaling inhibition. Active Notch signaling is required in the ventricular endocardium for proper trabeculae development subsequent to myocardial specification by regulating BMP10, NRG1, and EphrinB2 expression. Notch signaling sustains immature cardiomyocyte proliferation in mammals and zebrafish. A regulatory correspondence likely exists between Notch signaling and Wnt signaling, whereby upregulated Wnt expression downregulates Notch signaling, resulting in an inhibition of ventricular cardiomyocyte proliferation. This proliferative arrest can be rescued using Wnt inhibitors. The downstream effector of Notch signaling, HEY2, was also demonstrated to be important in regulating ventricular development by its expression in the interventricular septum and the endocardial cells of the cardiac cushions. Cardiomyocyte and smooth muscle cell-specific deletion of HEY2 results in impaired cardiac contractility, a malformed right ventricle, and ventricular septal defects. Ventricular outflow tract development During development of the aortic arch and the aortic arch arteries, the Notch receptors, ligands, and target genes display a unique expression pattern. When the Notch pathway was blocked, the induction of vascular smooth muscle cell marker expression failed to occur, suggesting that Notch is involved in the differentiation of cardiac neural crest cells into vascular cells during outflow tract development. Angiogenesis Endothelial cells use the Notch signaling pathway to coordinate cellular behaviors during the blood vessel sprouting that occurs during sprouting angiogenesis. Activation of Notch takes place primarily in "connector" cells and cells that line patent stable blood vessels through direct interaction with the Notch ligand, Delta-like ligand 4 (Dll4), which is expressed in the endothelial tip cells. VEGF signaling, which is an important factor for migration and proliferation of endothelial cells, can be downregulated in cells with activated Notch signaling by lowering the levels of Vegf receptor transcript. Zebrafish embryos lacking Notch signaling exhibit ectopic and persistent expression of the zebrafish ortholog of VEGFR3, flt4, within all endothelial cells, while Notch activation completely represses its expression. Notch signaling may be used to control the sprouting pattern of blood vessels during angiogenesis. When cells within a patent vessel are exposed to VEGF signaling, only a restricted number of them initiate the angiogenic process. Vegf is able to induce DLL4 expression. In turn, DLL4-expressing cells down-regulate Vegf receptors in neighboring cells through activation of Notch, thereby preventing their migration into the developing sprout. Likewise, during the sprouting process itself, the migratory behavior of connector cells must be limited to retain a patent connection to the original blood vessel. Role in endocrine development During development, definitive endoderm and ectoderm differentiate into several gastrointestinal epithelial lineages, including endocrine cells. Many studies have indicated that Notch signaling has a major role in endocrine development. Pancreatic development The formation of the pancreas from endoderm begins in early development. 
The expression of elements of the Notch signaling pathway has been found in the developing pancreas, suggesting that Notch signaling is important in pancreatic development. Evidence suggests Notch signaling regulates the progressive recruitment of endocrine cell types from a common precursor, acting through two possible mechanisms. One is "lateral inhibition", which specifies some cells for a primary fate but others for a secondary fate among cells that have the potential to adopt the same fate. Lateral inhibition is required for many types of cell fate determination. Here, it could explain the dispersed distribution of endocrine cells within pancreatic epithelium. A second mechanism is "suppressive maintenance", which explains the role of Notch signaling in pancreas differentiation. Fibroblast growth factor 10 is thought to be important in this activity, but the details are unclear. Intestinal development The role of Notch signaling in the regulation of gut development has been indicated in several reports. Mutations in elements of the Notch signaling pathway affect the earliest intestinal cell fate decisions during zebrafish development. Transcriptional analysis and gain of function experiments revealed that Notch signaling targets Hes1 in the intestine and regulates a binary cell fate decision between absorptive and secretory cell fates. Bone development Early in vitro studies have found the Notch signaling pathway functions as a down-regulator in osteoclastogenesis and osteoblastogenesis. Notch1 is expressed in the mesenchymal condensation area and subsequently in the hypertrophic chondrocytes during chondrogenesis. Overexpression of Notch signaling inhibits bone morphogenetic protein 2-induced osteoblast differentiation. Overall, Notch signaling has a major role in the commitment of mesenchymal cells to the osteoblastic lineage and provides a possible therapeutic approach to bone regeneration. Role in cell cycle control Notch signaling is critical for cell fate identity and differentiation and regulates these processes in part by controlling cell cycle progression. Specifically, Notch has been shown to promote cell cycle progression at the G1/S transition in various systems. Photoreceptor development In Drosophila eye development, photoreceptors undergo two waves of differentiation, where five out of eight photoreceptors differentiate in the first wave (R8, R2, R5, R3, and R4), and the other three differentiate in the second wave (R1, R6, and R7). Notch has been shown to promote the second mitotic wave in Drosophila eye development. Specifically, it mediates the G1/S transition by promoting dE2F activation (Drosophila E2F), a member of the E2F transcription factor family, which regulates the expression of genes important for cell proliferation, specifically those involved in the G1/S transition. Notch does this by inhibiting RBF1 (the Drosophila homolog of the tumor suppressor Rb), which represses dE2F. Additionally, Notch is required for cyclin A activation, which accumulates during the G1/S transition and may be involved in S phase onset. Health and disease The role of Notch signaling in cell cycle regulation also has implications in health and disease. 
For example, Notch has been found to promote the expression of cyclin D3 and Cdk4/6 in human T-cells, thereby promoting the phosphorylation of Rb and facilitating the G1/S transition, implicating its role in cancer as several gain-of-function mutations in NOTCH1 have been identified in human acute T-cell lymphoblastic leukemias and lymphomas. Additionally, in ventricular cardiomyocytes, which stop dividing shortly after birth, NOTCH2 signaling activation promotes cell cycle reentry. It induces the expression and nuclear translocation of cyclin D, which along with Cdk4/6 promotes the phosphorylation of Rb and causes cell cycle progression through the G1/S transition. This suggests that Notch signaling might regulate ventricular growth as well as cardiomyocyte regeneration, though this is unclear. Migratory identity In the zebrafish trunk neural crest (TNC), cells migrate collectively in single-file chains, with a cell “leader” at the front of the chain that instructs the directionality of the trailing “follower” cells. Notch has been found to specify cell migratory identity in the trunk neural crest – specifically, high Notch specifies leaders while low Notch specifies followers. Further, cell cycle progression required for migration is regulated by Notch such that leader cells with high Notch activity quickly undergo the G1/S transition while cells with low Notch activity remain in the G1 phase for longer and thus become followers. Role in cancer Leukemia Aberrant Notch signaling is a driver of T cell acute lymphoblastic leukemia (T-ALL) and is mutated in at least 65% of all T-ALL cases. Notch signaling can be activated by mutations in Notch itself, inactivating mutations in FBXW7 (a negative regulator of Notch1), or rarely by t(7;9)(q34;q34.3) translocation. In the context of T-ALL, Notch activity cooperates with additional oncogenic lesions such as c-MYC to activate anabolic pathways such as ribosome and protein biosynthesis, thereby promoting leukemia cell growth. Urothelial bladder cancer Loss of Notch activity is a driving event in urothelial cancer. A study identified inactivating mutations in components of the Notch pathway in over 40% of examined human bladder carcinomas. In mouse models, genetic inactivation of Notch signaling results in Erk1/2 phosphorylation leading to tumorigenesis in the urinary tract. As not all NOTCH receptors are equally involved in urothelial bladder cancer, 90% of samples in one study had some level of NOTCH3 expression, suggesting that NOTCH3 plays an important role in urothelial bladder cancer. A higher level of NOTCH3 expression was observed in high-grade tumors, and a higher level of positivity was associated with a higher mortality risk. NOTCH3 was identified as an independent predictor of poor outcome. Therefore, it is suggested that NOTCH3 could be used as a marker for urothelial bladder cancer-specific mortality risk. It was also shown that NOTCH3 expression could be a prognostic immunohistochemical marker for clinical follow-up of urothelial bladder cancer patients, contributing to a more individualized approach by selecting patients to undergo control cystoscopy after a shorter time interval. Liver cancer In hepatocellular carcinoma, for instance, it was suggested that AXIN1 mutations would provoke Notch signaling pathway activation, fostering cancer development, but a recent study demonstrated that such an effect could not be detected. Thus the exact role of Notch signaling in the cancer process awaits further elucidation. 
Notch inhibitors The involvement of Notch signaling in many cancers has led to investigation of notch inhibitors (especially gamma-secretase inhibitors) as cancer treatments, which are in different phases of clinical trials. At least seven notch inhibitors have been in clinical trials. MK-0752 has given promising results in an early clinical trial for breast cancer. Preclinical studies showed beneficial effects of gamma-secretase inhibitors in endometriosis, a disease characterised by increased expression of notch pathway constituents. Several notch inhibitors, including the gamma-secretase inhibitor LY3056480, are being studied for their potential ability to regenerate hair cells in the cochlea, which could lead to treatments for hearing loss and tinnitus. Mathematical modeling Mathematical modeling in Notch-Delta signaling has become a pivotal tool in understanding pattern formation driven by cell-cell interactions, particularly in the context of lateral-inhibition mechanisms. The Collier model, a cornerstone in this field, employs a system of coupled ordinary differential equations to describe the feedback loop between adjacent cells. The model is defined by the equations $\frac{dn_i}{dt} = f(\bar{d}_i) - n_i$ and $\frac{dd_i}{dt} = g(n_i) - d_i$, where $n_i$ and $d_i$ represent the levels of Notch and Delta activity in cell $i$, respectively. Functions $f$ and $g$ are typically Hill functions, reflecting the regulatory dynamics of the signaling process. The term $\bar{d}_i$ denotes the average level of Delta activity in the cells adjacent to cell $i$, integrating juxtacrine signaling effects. Recent extensions of this model incorporate long-range signaling, acknowledging the role of cell protrusions like filopodia (cytonemes) that reach non-neighboring cells. One extended model, often described as a generalized Collier model, introduces a weighting parameter to balance juxtacrine and long-range signaling. The interaction term is modified to include these protrusions, creating a more complex, non-local signaling network. This model is instrumental in exploring pattern formation robustness and biological pattern refinement, considering the stochastic nature of filopodia dynamics and intrinsic noise. The application of mathematical modeling in Notch-Delta signaling has been particularly illuminating in understanding the patterning of sensory organ precursors (SOPs) in the Drosophila notum and wing margin. The mathematical modeling of Notch-Delta signaling thus provides significant insights into lateral inhibition mechanisms and pattern formation in biological systems. It enhances the understanding of cell-cell interaction variations leading to diverse tissue structures, contributing to developmental biology and offering potential therapeutic pathways in diseases related to Notch-Delta dysregulation. See also Alagille syndrome Netpath – A curated resource of signal transduction pathways in humans References External links Diagram: notch signaling pathway in Homo sapiens Diagram: Notch signaling in Drosophila Signal transduction
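A minimal numerical sketch of the Collier-style lateral-inhibition model described in the Mathematical modeling section above, reduced to two mutually signaling cells. The Hill-function forms and all parameter values here are illustrative assumptions, not values taken from the article; the sketch only shows how Notch–Delta feedback can drive two initially near-identical cells toward opposite fates.

```python
# Illustrative two-cell Collier-type lateral-inhibition model (assumed parameters).
#   dn_i/dt = f(mean Delta of neighbours) - n_i
#   dd_i/dt = g(n_i) - d_i
import numpy as np

def f(d, k=2.0, a=0.01):
    """Increasing Hill function: Delta on neighbours activates Notch."""
    return d**k / (a + d**k)

def g(n, h=2.0, b=100.0):
    """Decreasing Hill function: Notch activity represses Delta production."""
    return 1.0 / (1.0 + b * n**h)

def simulate(n=(0.50, 0.51), d=(0.50, 0.49), dt=0.01, steps=20000):
    n, d = np.array(n, float), np.array(d, float)
    for _ in range(steps):
        neighbour_delta = d[::-1]           # in a two-cell system each cell's neighbour is the other
        n += dt * (f(neighbour_delta) - n)  # Notch relaxes toward neighbour-driven production
        d += dt * (g(n) - d)                # Delta relaxes toward Notch-repressed production
    return n, d

if __name__ == "__main__":
    n, d = simulate()
    # With these assumed parameters the small initial asymmetry is amplified:
    # one cell typically ends Notch-high/Delta-low and the other the opposite.
    print("Notch:", n, "Delta:", d)
```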
Notch signaling pathway
[ "Chemistry", "Biology" ]
7,004
[ "Biochemistry", "Neurochemistry", "Signal transduction" ]
1,107,385
https://en.wikipedia.org/wiki/Thermal%20cycler
The thermal cycler (also known as a thermocycler, PCR machine or DNA amplifier) is a laboratory apparatus most commonly used to amplify segments of DNA via the polymerase chain reaction (PCR). Thermal cyclers may also be used in laboratories to facilitate other temperature-sensitive reactions, including restriction enzyme digestion or rapid diagnostics. The device has a thermal block with holes where tubes holding the reaction mixtures can be inserted. The cycler then raises and lowers the temperature of the block in discrete, pre-programmed steps. History The earliest thermal cyclers were designed for use with the Klenow fragment of DNA polymerase I. Since this enzyme is destroyed during each heating step of the amplification process, new enzyme had to be added every cycle. This led to a cumbersome machine based on an automated pipettor, with open reaction tubes. Later, the PCR process was adapted to the use of thermostable DNA polymerase from Thermus aquaticus, which greatly simplified the design of the thermal cycler. While in some old machines the block is submerged in an oil bath to control temperature, in modern PCR machines a Peltier element is commonly used. Quality thermal cyclers often contain silver blocks to achieve fast temperature changes and uniform temperature throughout the block. Other cyclers have multiple blocks with high heat capacity, each of which is kept at a constant temperature, and the reaction tubes are moved between them by means of an automated process. Miniaturized thermal cyclers have been created in which the reaction mixture moves via channel through hot and cold zones on a microfluidic chip. Thermal cyclers designed for quantitative PCR have optical systems which enable fluorescence to be monitored during reaction cycling. Modern innovation Modern thermal cyclers are equipped with a heated lid that presses against the lids of the reaction tubes. This prevents condensation of water from the reaction mixtures on the insides of the lids. Traditionally, a layer of mineral oil was used for this purpose. Some thermal cyclers are equipped with a fully adjustable heated lid to allow for nonstandard or diverse types of PCR plasticware. Some thermal cyclers are equipped with multiple blocks allowing several different PCRs to be carried out simultaneously. Some models also have a gradient function to allow for different temperatures in different parts of the block. This is particularly useful when testing suitable annealing temperatures for PCR primers. References External links OpenPCR, an open-source PCR thermal cycler Molecular biology laboratory equipment Polymerase chain reaction
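As an illustration of the "discrete, pre-programmed steps" described above, the following sketch represents a simple three-step PCR program in Python. The temperatures, hold times, and cycle count are typical textbook values chosen for illustration, not settings prescribed by any particular instrument.

```python
# Hypothetical representation of a thermal cycler program (illustrative values only).
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    temp_c: float      # block temperature in degrees Celsius
    hold_s: int        # hold time in seconds

def pcr_program(cycles: int = 30) -> list[Step]:
    """Return a flat list of steps: initial denaturation, cycled 3-step PCR, final extension."""
    cycle = [Step("denature", 95, 30), Step("anneal", 55, 30), Step("extend", 72, 60)]
    return [Step("initial denaturation", 95, 120)] + cycle * cycles + [Step("final extension", 72, 300)]

if __name__ == "__main__":
    program = pcr_program()
    total = sum(step.hold_s for step in program)
    print(f"{len(program)} steps, {total / 60:.0f} min of hold time (excluding ramp times)")
```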
Thermal cycler
[ "Chemistry", "Biology" ]
520
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction", "Molecular biology laboratory equipment", "Molecular biology techniques" ]
1,108,036
https://en.wikipedia.org/wiki/Birmabright
Birmabright is a trade name of the former Birmetals Co. (Birmabright works in Clapgate Lane, Quinton, Birmingham, UK) for various types of lightweight sheet metal in an aluminium–magnesium alloy. The alloy was introduced by the Birmid Group in 1929 and was particularly noted for its corrosion resistance. Birmal Boats was created in 1930 for the building of light-alloy boats. Birmetals Ltd was formed in 1936 and during the war produced both copper-bearing aluminium alloys and the Birmabright magnesium-bearing alloys, mainly for aircraft production. The constituents were from 1% to 7% magnesium, with <1% manganese, and the remainder aluminium. The alloys were provided in different temper conditions (e.g. soft, 1/4 hard, 1/2 hard) and were designed to be work hardened, e.g. by cold pressing into shape. They do not exhibit age hardening, or have a precipitation heat treatment to promote hardening (unlike other contemporary aluminium alloys such as Duralumin). Weldability is good but machinability is only fair to poor. The alloy has good seawater corrosion resistance (unlike Duralumin). Birmetals Ltd produced Birmabright in a series of numbered grades. The Birmabright designations are obsolete, but equivalent grades exist; for example, the BB2 equivalent specifications are British Standard NS4, American 5251 and the ISO designation AlMg2. Gas welding of Birmabright is easier than that of pure aluminium and may be carried out autogenously using scraps of the same material as a filler rod. Birmabright is best known as the material used in the body of the Land Rover from its launch in April 1948, and in a few other classic British vehicles. The doors, boot lid and bonnet of most Rover P4 models were also Birmabright; however, towards the end of production this was changed to steel to reduce costs. An early use in the 1930s was for the bodywork of the land speed record car, Thunderbolt. It was also used for the bodywork of Bluebird K7, the craft used for the Coniston speed record attempt by Donald Campbell. The well-known Laurent Giles-designed 46 ft sailboat Beyond was built of riveted sheets of Birmabright and circumnavigated in the early 1950s by Tom and Ann Worth. The hull proved corrosion resistant but required re-riveting later due to crystallisation of the rivet heads, and lasted well until being sunk in the Caribbean as an artificial diving reef in the 1980s. Birmetals (part of the Birmid Qualcast Group) closed its factory in 1980 after losing money in 1977-1979, followed by a protracted strike over wages by key staff which prevented production. See also Magnalium References Aluminium–magnesium alloys Aluminium alloys Aerospace materials Land Rover vehicles
Birmabright
[ "Chemistry", "Engineering" ]
582
[ "Alloy stubs", "Aerospace materials", "Aluminium alloys", "Alloys", "Aerospace engineering" ]
1,108,664
https://en.wikipedia.org/wiki/Canadian%20Light%20Source
The Canadian Light Source (CLS) () is Canada's national synchrotron light source facility, located on the grounds of the University of Saskatchewan in Saskatoon, Saskatchewan, Canada. The CLS has a third-generation 2.9 GeV storage ring, and the building occupies a footprint the size of a Canadian football field. It opened in 2004 after a 30-year campaign by the Canadian scientific community to establish a synchrotron radiation facility in Canada. It has expanded both its complement of beamlines and its building in two phases since opening. As a national synchrotron facility with over 1000 individual users, it hosts scientists from all regions of Canada and around 20 other countries. Research at the CLS has ranged from viruses to superconductors to dinosaurs, and it has also been noted for its industrial science and its high school education programs. History The road to the CLS: 1972–1999 Canadian interest in synchrotron radiation dates from 1972, when Bill McGowan of the University of Western Ontario (UWO) organised a workshop on its uses. At that time there were no users of synchrotron radiation in Canada. In 1973 McGowan submitted an unsuccessful proposal to the National Research Council (NRC) for a feasibility study on a possible synchrotron lightsource in Canada. In 1975 a proposal to build a dedicated synchrotron lightsource in Canada was submitted to NRC. This was also unsuccessful. In 1977 Mike Bancroft, also of UWO, submitted a proposal to NRC to build a Canadian beamline, as the Canadian Synchrotron Radiation Facility (CSRF), at the existing Synchrotron Radiation Center at the University of Wisconsin-Madison, USA, and in 1978 newly created NSERC awarded capital funding. CSRF, owned and operated by NRC, grew from the initial beamline to a total of three by 1998. A further push towards a Canadian synchrotron light source started in 1990 with formation of the Canadian Institute for Synchrotron Radiation (CISR), initiated by Bruce Bigham of Atomic Energy of Canada Limited (AECL). AECL and TRIUMF showed interest in designing the ring, but the Saskatchewan Accelerator Laboratory (SAL) at the University of Saskatchewan became prominent in the design. In 1991 CISR submitted a proposal to NSERC for a final design study. This was turned down, but in later years, under President Peter Morand, NSERC became more supportive. In 1994 NSERC committee recommended a Canadian synchrotron light source and a further NSERC committee was formed to select between two bids to host such a facility, from the Universities of Saskatchewan and Western Ontario. In 1996 this committee recommended that the Canadian Light Source be built in Saskatchewan. With NSERC unable to supply the required funds it was not clear where funding would come from. In 1997 the Canada Foundation for Innovation (CFI) was created to fund large scientific projects, possibly to provide a mechanism to fund the CLS. In 1998 a University of Saskatchewan team led by Dennis Skopik, the SAL director, submitted a proposal to CFI. The proposal was to fund 40% of the construction costs, with remaining money having to come from elsewhere. 
Assembling these required matching funds has been called "an unprecedented level of collaboration among governments, universities, and industry in Canada" and Bancroft – leader of the rival UWO bid – acknowledged the "Herculean" efforts of the Saskatchewan team in obtaining funds from the University, the City of Saskatoon, Saskatchewan Power, NRC, the Provincial Government of Saskatchewan, and Western Economic Diversification. At a late hour CFI told the proponents that it would not accept the SAL LINAC as part of the proposal, and the resulting shortfall was met in part by the spontaneous announcement by the Saskatoon city council and then Mayor Henry Dayday that they would double their contribution as long as other partners would. On 31 March 1999 the success of the CFI proposal was announced. The following month Skopik took a position at Jefferson Lab in the USA. He decided not to stay on as director of the Saskatoon facility because his expertise was in subatomic particles, and, he argued, the head of the CLS should be a researcher who specializes in using such a facility. His successor was Mike Bancroft. Construction: 1999–2004 At the start of the project, all staff members with the former SAL were transferred into a new not-for-profit corporation, Canadian Light Source Inc., CLSI, which had primary responsibility for the technical design, construction and operation of the facility. As a separate corporation from the University, CLSI had the legal and organizational freedom suitable for this responsibility. UMA, an engineering firm now part of AECOM, with extensive experience managing large technical and civil construction projects, was hired as project manager. The new building – attached to the existing SAL building, and measuring 84m by 83m in area with a maximum height of 23m – was completed in early 2001. Bancroft's appointment ended in October 2001 and he returned to UWO, with Mark de Jong appointed acting director. Bancroft remained as acting Scientific Director until 2004. The SAL LINAC was refurbished and placed back into service in 2002 while the booster and storage rings were still under construction. First turn was achieved in the booster ring in July 2002 with full booster commissioning completed by September 2002. New director Bill Thomlinson, an expert in synchrotron medical imaging, arrived in November 2002. He was recruited from the European Synchrotron Radiation Facility where he had been the head of the medical research group. The 1991 proposal to NSERC envisioned a 1.5 GeV storage ring, since at this time the interest of the user community was mainly in the soft X-ray range. The ring was a racetrack layout of four to six bend regions surrounding straights with extra quadrupoles to allow for variable functions in the straights. The design contemplated the use of superconducting bends in some locations to boost the photon energies produced. The drawback of this design was the limited number of straight sections. In 1994 a more conventional machine with 8 straight sections was proposed, again with 1.5 GeV energy. At this time more users of hard X-rays were interested and it was felt that both the energy and number of straight sections were too low. By the time funding was secured in 1999 the design had changed to 2.9 GeV, with longer straight sections to enable two insertion devices per straight, delivering beam to two independent beamlines. Construction of the storage ring was completed in August 2003 and commissioning began the following month. 
Although beam could be stored, in March 2004 a large obstruction was found across the center of the chamber. Commissioning proceeded quickly after this was removed, and by June 2004 currents of 100 mA could be achieved. On 22 October 2004 the CLS officially opened, with an opening ceremony attended by federal and provincial dignitaries, including then-Federal Minister of Finance Ralph Goodale and then-Saskatchewan Premier Lorne Calvert, university presidents and leading scientists. October 2004 was declared "Synchrotron Month" by the city of Saskatoon and the Saskatchewan government. Peter Mansbridge broadcast the CBC's nightly newscast The National from the top of the storage ring the day before the official opening. In parliament local MP Lynne Yelich said "There were many challenges to overcome, but thanks to the vision, dedication and persistence of its supporters, the Canadian Light Source synchrotron is open for business in Saskatoon." Operation and expansion: 2005–2012 The initial funding included seven beamlines, referred to as Phase I, which covered the full spectral range: two infrared beamlines, three soft X-ray beamlines and two hard X-ray beamlines. Further beamlines were built in two further phases, II (7 beamlines) and III (5 beamlines), announced in 2004 and 2006 respectively. Most of these were funded through applications to CFI by individual universities including UWO, the University of British Columbia and the University of Guelph. In March 2005 leading infrared researcher Tom Ellis joined the CLS from Acadia University as Director of Research. He had previously spent 16 years at the Université de Montréal. The first external user was hosted in 2005, and the first research papers with results from the CLS were published in March 2006 – one from the University of Saskatchewan on peptides and the other from the University of Western Ontario on materials for organic light-emitting diodes. A committee was set up in 2006 to peer review proposals for beamtime, under the chairmanship of Adam Hitchcock of McMaster University. By 2007 more than 150 external users had used the CLS, and all seven of the initial beamlines had achieved significant results. The CLS building was also expanded in two phases. A glass and steel expansion was completed in 2007 to house the phase II medical imaging beamline BMIT, and construction on the expansion needed to house the phase III Brockhouse beamline started in July 2011 and is still ongoing as of July 2012. Bill Thomlinson retired in 2008, and in May of that year physics professor Josef Hormes of the University of Bonn, former director of the CAMD synchrotron at Louisiana State University, was announced as the new director. Science fiction author Robert J. Sawyer was writer-in-residence for two months in 2009 in what he called a "once in a lifetime opportunity to hang out with working scientists". While there he wrote most of the novel "Wonder", which won the 2012 Prix Aurora Award for best novel. By the end of 2010 more than 1000 individual researchers had used the facility, and the number of publications had passed 500. From 2009 to 2012 several key metrics doubled, including the number of users and the number of publications, with more than 190 papers published in 2011. More than 400 proposals were received for beam time in 2012, with approximately a 50% oversubscription rate averaged over the operational beamlines. By 2012 the user community spanned all regions of Canada and around 20 other countries. 
That year a high school group from La Loche, Saskatchewan became the first to use the purpose-built educational beamline IDEAS. Also in 2012 the CLS signed an agreement with the Advanced Photon Source synchrotron in the USA to allow Canadian researchers access to their facilities. Science An international team led by University of Calgary professor Ken Ng solved the detailed structure of RNA polymerase using X-ray crystallography at the CLS. This enzyme allows the Norwalk virus to replicate itself as it spreads through the body, and has been linked to other viruses such as hepatitis C, West Nile virus and the common cold. Its duplication is responsible for the onset of such viruses. CLS scientist Luca Quaroni and University of Saskatchewan professor Alan Casson used infrared microscopy to identify biomarkers inside individual cells from tissue associated with Barrett's esophagus. This disease can lead to an aggressive form of cancer known as esophageal adenocarcinoma. Researchers from Lakehead University and the University of Saskatchewan used the CLS to investigate the deaths of Royal Navy sailors buried in Antigua in the late 1700s. They used X-ray fluorescence to look for trace elements such as lead and strontium in bones from a recently excavated naval cemetery. Scientists from Stanford University worked with CLS scientists to design a cleaner, faster battery. The new battery charges in less than two minutes, thanks to a newly developed carbon nanostructure. The team grew nanocrystals of iron and nickel on carbon. Traditional batteries lack this structure, mixing iron and nickel with conductors more or less randomly. The result was a strong chemical bond between the materials, which the team identified and studied at the synchrotron. A team led by the Politecnico di Milano, including scientists from the University of Waterloo and the University of British Columbia, found the first experimental evidence that a charge density wave instability competes with superconductivity in high-temperature superconductors. They used four synchrotrons including the REIXS beamline at CLS. Using the X-ray spectromicroscopy beamline, a research team led by scientists from the State University of New York, Buffalo, produced images of graphene showing how folds and ripples act as speed bumps for electrons, affecting its conductivity. This has implications for the use of graphene in a variety of future products. A collaboration between the University of Regina and the Royal Saskatchewan Museum has been investigating dinosaur fossils at the CLS, including "Scotty," a Tyrannosaurus found in Saskatchewan in 1991, one of the most complete and largest T-rex skeletons ever found. They looked at the concentration of elements in bones to study the impact of the environment on such animals. Industrial program and economic impact From inception, the CLS showed a "strong commitment to industrial users and private/public partnerships", with then-director Bancroft reporting "more than 40 letters of support from industry indicating that [the CLS] is important for what they do". The CLS has an industrial group, within the larger experimental facilities division, with industrial liaison scientists who make synchrotron techniques available to a "non-traditional" user base who are not synchrotron experts. 
By 2007 more than 60 projects had been carried out, although in a speech in the same year, then-CLS director Bill Thomlinson said that "one of the biggest challenges for the synchrotron...is to get private users through the door", with less than 10% of time actually used by industry. In 1999 then-Saskatoon mayor Dayday stated that "the CLS will add $122 million to Canada's GDP during construction and $12 million annually after that". An economic impact study of the two financial years 2009/10 and 10/11 showed the CLS had added $45 million per year to the Canadian GDP, or about $3 for every $1 of operating funding. The CLS has stated that "the primary means of accessing the CLS is through a system of peer review, which ensures that the proposed science is of the highest quality and permits access to the facility to any interested researcher, regardless of regional, national, academic, industrial or governmental affiliation." Official visitors Then-Prime Minister Jean Chrétien visited the CLS in November 2000 during an election campaign stop in Saskatoon. He gave a speech on the mezzanine level of the building following his tour of the facility, praising the project for helping to reverse the brain drain of scientists from Canada. In August 2010 then-Governor General Michaëlle Jean visited the CLS as part of a two-day tour of Saskatchewan. In April 2012 the CLS was "visited" remotely by Governor General David Johnston. He was visiting the LNLS synchrotron in Brazil, during a live link-up, by video chat and remote control software, between the two facilities. On January 18, 2017, Canadian Science Minister Kirsty Duncan toured the complex. Medical isotope project With the NRU reactor at the Chalk River Laboratories due to close in 2016, there was a need to find alternative sources of the medical isotope technetium-99m, a mainstay of nuclear medicine. In 2011 the Canadian Light Source received $14 million in funding to investigate the feasibility of using an electron LINAC to produce molybdenum-99, the parent isotope of technetium-99m. As part of this project a 35 MeV LINAC has been installed in an unused underground experimental hall previously used for photonuclear experiments with the SAL LINAC. First irradiations are planned for late summer 2012, with the results to be evaluated by the Winnipeg Health Sciences Centre. This project led to the founding of a spin-off company — Canadian Isotope Innovations Corporation (CIIC), which was described as part of CEO Rob Lamb's 'legacy of accomplishment' when he departed the facility in 2021. The CIIC declared bankruptcy in 2024. Education program The CLS has an education program – "Students on the Beamlines" – funded by NSERC Promoscience. This outreach program for science allows high school students to fully experience the work of a scientist, in addition to having the chance to use the CLS beamlines. "The program allows students the development of active research, a very rare phenomena in schools and provides direct access to the use of a particle accelerator, something even rarer!" said teacher Steve Desfosses from College Saint-Bernard, Drummondville, Quebec. Dene students from La Loche, Saskatchewan have taken part in this program twice, looking at effects of acid rain. Student Jontae DesRoches commented "Elders have noticed that the landscape, where trees used to grow, there's none growing anymore. They're pretty concerned because wildlife is disappearing. Like, here there used to be rabbits and now there's none". 
In May 2012 three student groups were at the CLS simultaneously, with the La Loche students as the first to use the IDEAS beamline. "The aim for the students," according to CLS education and outreach coordinator Tracy Walker, "is to get an authentic scientific inquiry that's different from the examples in textbooks that have been done thousands of times." Students from six provinces as well as the Northwest Territories have been directly involved in experiments, some of which have yielded publishable-quality research. In 2012 the CLS was awarded the Canadian Nuclear Society's Education and Communication Award "in recognition of its commitment to community outreach, increasing public awareness of synchrotron science, and developing innovative and outstanding secondary educational programs such as Students on the Beamlines". Technical description Accelerators Injection system The injection system consists of a 250 MeV LINAC, a low energy transfer line, a 2.9 GeV booster synchrotron and a high energy transfer line. The LINAC was operated for over 30 years as part of the Saskatchewan Accelerator Lab and operates at 2856 MHz. The 78m low energy transfer line takes the electrons from the below-ground LINAC to the ground level booster in the newer CLS building, via two vertical chicanes. The full energy 2.9 GeV booster, chosen to give high orbit stability in the storage ring, operates at 1 Hz, with an RF frequency of 500 MHz, unsynchronised with the LINAC. This results in significant beam loss at the extraction energy. Storage ring The storage ring cell structure has a fairly compact lattice with twelve straight sections available for injection, RF cavities and 9 sections available for insertion devices. Each cell has two bending magnets detuned to allow some dispersion in the straights – the so-called double-bend achromat structure – and thus reduce the overall beam size. As well as the two bend magnets each cell has three families of quadrupole magnets and two families of sextupole magnets. The ring circumference is 171m, with a straight section length of 5.2m. The CLS is the smallest of the newer synchrotron facilities, which results in a relatively high horizontal beam emittance of 18.2 nm-rad. The CLS was also one of the first facilities to chicane two undulators in one straight section, to maximize the number of insertion device beamlines. All five of the phase I X-ray beamlines use insertion devices. Four use permanent magnet undulators designed and assembled at the CLS, including one in-vacuum undulator and one elliptically polarized undulator (EPU). The HXMA beamline uses a superconducting wiggler built by the Budker Institute of Nuclear Physics in Novosibirsk. Phase II added two further devices including another Budker superconducting wiggler, for the BMIT beamline. Phase III will add four more devices, filling 8 of the 9 available straight sections. Longer term development includes the replacement of two of the phase I undulators with elliptically polarizing devices. Since 2021, the ring operates in a top-up mode during normal user operations, injecting every few minutes to maintain a stable ring current just below 220 mA. Prior to this change, the ring operated with a fill current of 250mA in decay mode, with two injections per day. Facility status is shown on a "machine status" webpage, and using the CLSFC account on Twitter. Superconducting RF cavity The CLS was the first light source to use a superconducting RF (SRF) cavity in the storage ring from the beginning of operations. 
The niobium cavity is based on the 500 MHz design used at the Cornell Electron Storage Ring (CESR) which allows potentially beam-perturbing high order modes to propagate out of the cavity where they can be very effectively damped. The superconducting nature of the niobium cavity means only 0.02% of the RF power put into the cavity is wasted in heating the cavity as compared to roughly 40% for normal-conducting (copper) cavities. However, a large portion of this power saving - about 160 kW out of the 250 kW saved - is needed to power the cryogenic plant required to supply liquid helium to the cavity. The SRF cavity at CLS is fed with RF from a 310 kW Thales klystron. Beamlines See also List of synchrotron radiation facilities Plasma Physics Laboratory (Saskatchewan) Saskatchewan Accelerator Laboratory Canadian Synchrotron Radiation Facility G. Michael Bancroft Amira Abdelrasoul Innovation Place Research Park EPICS (Used for Accelerator and Beamline Control Systems) Canadian government scientific research organizations Canadian university scientific research organizations Canadian industrial research and development organizations Singularity Principle (movie filmed at the Canadian Light Source) References External links Canadian Light Source Website Nuclear research institutes Research institutes in Canada Synchrotron radiation facilities University of Saskatchewan Non-profit organizations based in Saskatchewan Buildings and structures in Saskatoon 1999 establishments in Saskatchewan Companies based in Saskatoon Federal government buildings in Saskatchewan
Canadian Light Source
[ "Materials_science", "Engineering" ]
4,521
[ "Nuclear research institutes", "Materials testing", "Nuclear organizations", "Synchrotron radiation facilities" ]
4,390,618
https://en.wikipedia.org/wiki/Solketal
Solketal is a protected form of glycerol with an isopropylidene acetal group joining two neighboring hydroxyl groups. Solketal contains a chiral center on the center carbon of the glycerol backbone, and so can be purchased as either the racemate or as one of the two enantiomers. Solketal has been used extensively in the synthesis of mono-, di- and triglycerides by ester bond formation. The free hydroxyl group of solketal can be esterified with a carboxylic acid to form the protected monoglyceride. The isopropylidene group can then be removed using an acid catalyst in aqueous or alcoholic medium. The unprotected diol can then be esterified further to form either the di- or triglyceride. Another route to specific di- or triglycerides involves converting the solketal to glycidol (2,3-epoxy-1-propanol) and esterifying this with one fatty acid before opening the epoxide by heating in the presence of a second fatty acid and a catalyst. This second fatty acid is put on the third carbon atom, and then a third fatty acid can be added to the second carbon atom. References Ketals Primary alcohols
Solketal
[ "Chemistry" ]
278
[ "Ketals", "Functional groups" ]
4,393,727
https://en.wikipedia.org/wiki/Stokes%20relations
In physical optics, the Stokes relations, named after Sir George Gabriel Stokes, describe the relative phase of light reflected at a boundary between materials of different refractive indices. They also relate the transmission and reflection coefficients for the interaction. Their derivation relies on a time-reversal argument, so they only work when there is no absorption in the system. An incoming field (E) is partly reflected and partly transmitted at the dielectric boundary to give rE and tE (where r and t are the amplitude reflection and transmission coefficients, respectively). Since there is no absorption this system is reversible, as shown in the second picture (where the direction of the beams has been reversed). If this reversed process were actually taking place, there would be parts of the incoming fields (rE and tE) that are themselves transmitted and reflected at the boundary. In the third picture, this is shown by the coefficients r' and t' (for reflection and transmission of the reversed fields). Everything must interfere so that the second and third pictures agree: beam x must have amplitude E and beam y must have amplitude 0. This requires r(rE) + t'(tE) = E and t(rE) + r'(tE) = 0, providing the Stokes relations tt' = 1 - r^2 and r' = -r. The most interesting result here is that r = -r'. Thus, whatever phase is associated with reflection on one side of the interface, it is 180 degrees different on the other side of the interface. For example, if r has a phase of 0, r' has a phase of 180 degrees. Explicit values for the transmission and reflection coefficients are provided by the Fresnel equations. References Optics
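A short numerical check of the relations derived above, assuming normal incidence so that the Fresnel amplitude coefficients take their simplest form; the refractive indices are arbitrary example values, not taken from the article.

```python
# Verify the Stokes relations r' = -r and t*t' = 1 - r**2 using normal-incidence
# Fresnel amplitude coefficients for a lossless boundary (example indices assumed).
n1, n2 = 1.0, 1.5                        # e.g. air to glass

r  = (n1 - n2) / (n1 + n2)               # reflection,   medium 1 -> 2
t  = 2 * n1 / (n1 + n2)                  # transmission, medium 1 -> 2
rp = (n2 - n1) / (n2 + n1)               # reflection,   medium 2 -> 1
tp = 2 * n2 / (n2 + n1)                  # transmission, medium 2 -> 1

assert abs(rp + r) < 1e-12               # r' = -r  (the 180-degree phase difference)
assert abs(t * tp - (1 - r**2)) < 1e-12  # t t' = 1 - r^2
print(f"r = {r:+.3f}, r' = {rp:+.3f}, t*t' = {t*tp:.3f}, 1 - r^2 = {1 - r**2:.3f}")
```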
Stokes relations
[ "Physics", "Chemistry" ]
304
[ "Applied and interdisciplinary physics", "Optics", " molecular", "Atomic", " and optical physics" ]
4,393,828
https://en.wikipedia.org/wiki/Superconducting%20coherence%20length
In superconductivity, the superconducting coherence length, usually denoted as $\xi$ (Greek lowercase xi), is the characteristic length scale of the variations of the density of the superconducting component. The superconducting coherence length is one of two parameters in the Ginzburg–Landau theory of superconductivity. It is given by: $\xi = \sqrt{\frac{\hbar^2}{2 m |\alpha|}}$, where $\alpha$ is a parameter in the Ginzburg–Landau equation for $\psi$ with the form $\alpha_0 (T - T_c)$, where $\alpha_0$ is a constant. In Landau mean-field theory, at temperatures $T$ near the superconducting critical temperature $T_c$, $\xi(T) \propto (1 - T/T_c)^{-1/2}$. Up to a factor of $\sqrt{2}$, it is equivalent to the characteristic length describing a recovery of the order parameter away from a perturbation in the theory of second-order phase transitions. In some special limiting cases, for example in the weak-coupling BCS theory of an isotropic s-wave superconductor, it is related to the characteristic Cooper pair size: $\xi_{\mathrm{BCS}} = \frac{\hbar v_F}{\pi \Delta}$, where $\hbar$ is the reduced Planck constant, $m$ is the mass of a Cooper pair (twice the electron mass), $v_F$ is the Fermi velocity, and $\Delta$ is the superconducting energy gap. The superconducting coherence length is a measure of the size of a Cooper pair (distance between the two electrons) and is of the order of $10^{-4}$ cm. The electron near or at the Fermi surface moving through the lattice of a metal produces behind itself an attractive potential of range of the order of $3\times10^{-6}$ cm, the lattice distance being of order $10^{-8}$ cm. For a very authoritative explanation based on physical intuition see the CERN article by V.F. Weisskopf. The ratio $\kappa = \lambda/\xi$, where $\lambda$ is the London penetration depth, is known as the Ginzburg–Landau parameter. Type-I superconductors are those with $0 < \kappa < 1/\sqrt{2}$, and type-II superconductors are those with $\kappa > 1/\sqrt{2}$. In strong-coupling, anisotropic and multi-component theories these expressions are modified. References Superconductivity
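A small numerical sketch of the weak-coupling BCS expression discussed above, $\xi_{\mathrm{BCS}} = \hbar v_F / (\pi \Delta)$, evaluated for illustrative aluminium-like parameters. The Fermi velocity, critical temperature, and the weak-coupling gap estimate $\Delta \approx 1.76\,k_B T_c$ are assumed textbook-style values chosen for the example, not data from the article.

```python
# Estimate the BCS coherence length xi_0 = hbar * v_F / (pi * Delta)
# using assumed, aluminium-like parameters (illustrative only).
import math

hbar = 1.054_571_817e-34   # J s
k_B  = 1.380_649e-23       # J/K

v_F   = 2.0e6              # m/s, assumed Fermi velocity (order of magnitude for Al)
T_c   = 1.2                # K,   assumed critical temperature (close to Al)
Delta = 1.76 * k_B * T_c   # weak-coupling BCS gap estimate at T = 0

xi_0 = hbar * v_F / (math.pi * Delta)
print(f"Delta ~ {Delta / 1.602e-19 * 1e6:.0f} micro-eV, xi_0 ~ {xi_0 * 1e9:.0f} nm")
# The result is of order 10^3 nm = 10^-4 cm, consistent with the order of magnitude quoted above.
```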
Superconducting coherence length
[ "Physics", "Materials_science", "Engineering" ]
396
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
4,394,717
https://en.wikipedia.org/wiki/Body%20plan
A body plan, Bauplan (from the German for "building plan"), or ground plan is a set of morphological features common to many members of a phylum of animals. The vertebrates share one body plan, while invertebrates have many. This term, usually applied to animals, envisages a "blueprint" encompassing aspects such as symmetry, layers, segmentation, nerve, limb, and gut disposition. Evolutionary developmental biology seeks to explain the origins of diverse body plans. Body plans have historically been considered to have evolved in a flash in the Ediacaran biota, filling the Cambrian explosion with the results, but a more nuanced understanding of animal evolution suggests gradual development of body plans throughout the early Palaeozoic. Recent studies in animals and plants have started to investigate whether evolutionary constraints on body plan structures can explain the presence of developmental constraints during embryogenesis such as the phenomenon referred to as the phylotypic stage. History Among the pioneering zoologists, Linnaeus identified two body plans outside the vertebrates; Cuvier identified three; and Haeckel had four, as well as the Protista with eight more, for a total of twelve. For comparison, the number of phyla recognised by modern zoologists has risen to 36. Linnaeus, 1735 In his 1735 book Systema Naturæ, Swedish botanist Linnaeus grouped the animals into quadrupeds, birds, "amphibians" (including tortoises, lizards and snakes), fish, "insects" (Insecta, in which he included arachnids, crustaceans and centipedes) and "worms" (Vermes). Linnaeus's Vermes included effectively all other groups of animals, not only tapeworms, earthworms and leeches but molluscs, sea urchins and starfish, jellyfish, squid and cuttlefish. Cuvier, 1817 In his 1817 work, Le Règne Animal, French zoologist Georges Cuvier combined evidence from comparative anatomy and palaeontology to divide the animal kingdom into four body plans. Taking the central nervous system as the main organ system which controlled all the others, such as the circulatory and digestive systems, Cuvier distinguished four body plans or embranchements. Grouping animals with these body plans resulted in four branches: vertebrates, molluscs, articulata (including insects and annelids) and zoophytes or Radiata. Haeckel, 1866 Ernst Haeckel, in his 1866 Generelle Morphologie der Organismen, asserted that all living things were monophyletic (had a single evolutionary origin), being divided into plants, protista, and animals. His protista were divided into moneres, protoplasts, flagellates, diatoms, myxomycetes, myxocystodes, rhizopods, and sponges. His animals were divided into groups with distinct body plans: he named these phyla. Haeckel's animal phyla were coelenterates, echinoderms, and (following Cuvier) articulates, molluscs, and vertebrates. Gould, 1979 Stephen J. Gould explored the idea that the different phyla could be perceived in terms of a Bauplan, illustrating their fixity. However, he later abandoned this idea in favor of punctuated equilibrium. Origin Twenty of the 36 body plans originated in the Cambrian period, in the "Cambrian explosion". However, complete body plans of many phyla emerged much later, in the Palaeozoic or beyond. The current range of body plans is far from exhaustive of the possible patterns for life: the Precambrian Ediacaran biota includes body plans that differ from any found in currently living organisms, even though the overall arrangement of unrelated modern taxa is quite similar. 
Thus the Cambrian explosion appears to have more or less completely replaced the earlier range of body plans. Genetic basis Genes, embryos and development together determine the form of an adult organism's body, through the complex switching processes involved in morphogenesis. Developmental biologists seek to understand how genes control the development of structural features through a cascade of processes in which key genes produce morphogens, chemicals that diffuse through the body to produce a gradient that acts as a position indicator for cells, turning on other genes, some of which in turn produce other morphogens. A key discovery was the existence of groups of homeobox genes, which function as switches responsible for laying down the basic body plan in animals. The homeobox genes are remarkably conserved between species as diverse as the fruit fly and humans, the basic segmented pattern of the worm or fruit fly being the origin of the segmented spine in humans. The field of animal evolutionary developmental biology ('Evo Devo'), which studies the genetics of morphology in detail, is rapidly expanding with many of the developmental genetic cascades, particularly in the fruit fly Drosophila, catalogued in considerable detail. See also References External links Developmental Biology 8e Online: Patterning of the Mesoderm by Activin Videos The Science of Evolution: Sean B. Carroll explains the genetics of the fruit fly body plan. Animal anatomy Comparative anatomy Evolution by phenotype Evolutionary biology Morphology (biology) Taxonomy (biology)
Body plan
[ "Biology" ]
1,090
[ "Evolutionary biology", "Taxonomy (biology)", "Morphology (biology)" ]
4,395,348
https://en.wikipedia.org/wiki/Viral%20life%20cycle
Viruses are only able to replicate themselves by commandeering the reproductive apparatus of cells and making them reproduce the virus's genetic structure and particles instead. How viruses do this depends mainly on the type of nucleic acid they contain (DNA or RNA), which is either one or the other but never both. Viruses cannot function or reproduce outside a cell, and are totally dependent on a host cell to survive. Most viruses are species specific, and related viruses typically only infect a narrow range of plants, animals, bacteria, or fungi. Life cycle process Viral entry For the virus to reproduce and thereby establish infection, it must enter cells of the host organism and use those cells' materials. To enter the cells, proteins on the surface of the virus interact with proteins of the cell. Attachment, or adsorption, occurs between the viral particle and the host cell membrane. A hole forms in the cell membrane, and the virus particle or its genetic contents are then released into the host cell, where replication of the viral genome may commence. Viral replication Next, a virus must take control of the host cell's replication mechanisms. It is at this stage that a distinction is made between the susceptibility and the permissibility of a host cell. Permissibility determines the outcome of the infection. After control is established and the environment is set for the virus to begin making copies of itself, replication occurs quickly by the millions. Viral shedding After a virus has made many copies of itself, the progeny may begin to leave the cell by several methods. This is called shedding and is the final stage in the viral life cycle. Viral latency Some viruses can "hide" within a cell, which may mean that they evade the host cell defenses or immune system and may increase the long-term "success" of the virus. This hiding is deemed latency. During this time, the virus does not produce any progeny; it remains inactive until external stimuli—such as light or stress—prompt it to activate. See also Viral phenomena, which get their name from the way in which their propagation is analogous to the propagation of viruses among hosts References
Viral life cycle
[ "Biology" ]
430
[ "Viral life cycle" ]
4,395,461
https://en.wikipedia.org/wiki/Shuuto
The shuuto, or shootball, is a baseball pitch. It is commonly thrown by right-handed Japanese pitchers such as Hiroki Kuroda, Noboru Akiyama, Kenjiro Kawasaki, Daisuke Matsuzaka, Yu Darvish and Masumi Kuwata. The most renowned shuuto pitcher in history was Masaji Hiramatsu, whose famous pitch was dubbed the razor shuuto because it seemed to "cut the air" when thrown. The pitch is mainly designed to break down and in on right-handed batters, to prevent them from making solid contact with the ball. It can be thrown to left-handers to keep them off balance. Good shuuto pitches often break the bats of right-handed hitters because they get jammed when trying to swing at this pitch. It could be said that the shuuto has a somewhat similar break and purpose to the screwball. If the shuuto were thrown off the outside part of the plate, it would tail back over the outside border of the strike zone. Conversely, if it were thrown on the inside part of the plate, it would move even further inside. The shuuto is often described in English as a reverse slider, but this is not strictly the case. The shuuto generally has more velocity and less break than a slider. The two-seam fastball, the sinker, and the screwball, in differing degrees, move down and in towards a right-handed batter when thrown, or in the opposite manner of a curveball and a slider. The shuuto is often confused with the gyroball, perhaps because of an article by Will Carroll that erroneously equated the two pitches. Although Carroll later corrected himself, the confusion persists. According to baseball analyst Mike Fast, the shuuto "can describe any pitch that tails to the pitcher's arm side, including the two-seam fastball, the circle change-up, the screwball, and the split-finger fastball". Popular culture The shuuto is mentioned in the 1992 film Mr. Baseball. This is the type of pitch that Tom Selleck's character is continually unable to hit, even though he is a left-handed batter. The shuuto is described as "the great equalizer". Etymology In the third edition of The Dickson Baseball Dictionary, "shoot" is explained as: Japan and the United States Many Japanese pitchers throw the shuuto because of the varied movement it produces, with the ball twisting or sinking in flight. This is done in an attempt to outwit the batter and cause them to either miss the ball when swinging or increase the chance of a foul ball or an easily fielded ball. A shuuto is more difficult to hit than a straight pitch because the batter must compensate for the ball's eccentric movement between the time it leaves the pitcher's hand and the time it crosses home plate. American baseball utilizes terms such as slider, screwball, breaking ball, changeup or knuckleball instead of the Japanese term. Technique The pitch can be thrown with the same grip as a two-seam fastball, or a variation thereof, depending on the pitcher. Twisting the wrist and forearm inward places stress on the elbow and can easily lead to injury. Kudo began with his forearm rotated outward and let it rotate inward naturally, releasing the ball with a push of the middle finger so as not to strain the elbow. By contrast, Hiramatsu did not place his fingers on the seams. A baseball cliché holds that pitchers with a good curve or slider tend to have a poor shuuto, while pitchers with a good shuuto tend to have a poor curve; this is thought to be due to differences in hand shape and in pitching form.
Noboru Aoyama remarked that Takehiko Bessho was about the only pitcher in the history of Japanese baseball whose curve and shuuto were both top-notch. Type Razor A shuuto with a particularly sharp break is called a razor shuuto or a high-speed shuuto. The shuutos of Hisamfumi Kawamura, Ryutaro Imanishi, Noboru Akiyama and Hiramatsu have been called razor shuutos; Hiramatsu's was especially effective against right-handed batters. The shuutos of pitchers such as Yukihi Morita and Masahide Kobayashi are sometimes referred to as high-speed shuutos because they have been recorded at speeds exceeding 150 km/h. Rotation If a pitcher throwing a straight fastball lets the release point shift, the ball may take on rotation similar to that of a shuuto, which is called shuuto rotation or a natural shuuto. This is not a shuuto thrown intentionally but the unintended result of a straight ball: the release point shifts, the spin is loose, the lateral break is small, and control is often lost. When a straight ball aimed diagonally across the plate (a crossfire) picks up shuuto rotation, it drifts back toward the center of the strike zone and may become easy to hit. The effect is recognized as unstable, but some pitchers use it as a weapon. Teruhiro Kuroki says to "put power in the middle finger", in contrast to other pitchers who "put power in the index finger", when throwing the slider. References Baseball pitches Aerodynamics
Shuuto
[ "Chemistry", "Engineering" ]
1,059
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
4,396,171
https://en.wikipedia.org/wiki/Earth%27s%20rotation
Earth's rotation or Earth's spin is the rotation of planet Earth around its own axis, as well as changes in the orientation of the rotation axis in space. Earth rotates eastward, in prograde motion. As viewed from the northern polar star Polaris, Earth turns counterclockwise. The North Pole, also known as the Geographic North Pole or Terrestrial North Pole, is the point in the Northern Hemisphere where Earth's axis of rotation meets its surface. This point is distinct from Earth's north magnetic pole. The South Pole is the other point where Earth's axis of rotation intersects its surface, in Antarctica. Earth rotates once in about 24 hours with respect to the Sun, but once every 23 hours, 56 minutes and 4 seconds with respect to other distant stars (see below). Earth's rotation is slowing slightly with time; thus, a day was shorter in the past. This is due to the tidal effects the Moon has on Earth's rotation. Atomic clocks show that the modern day is longer by about 1.7 milliseconds than a century ago, slowly increasing the rate at which UTC is adjusted by leap seconds. Analysis of historical astronomical records shows a slowing trend; the length of a day increased by about 2.3 milliseconds per century since the 8th century BCE. Scientists reported that in 2020 Earth had started spinning faster, after consistently spinning slower than 86,400 seconds per day in the decades before. On June 29, 2022, Earth's spin was completed in 1.59 milliseconds under 24 hours, setting a new record. Because of that trend, engineers worldwide are discussing a 'negative leap second' and other possible timekeeping measures. This increase in speed is thought to be due to various factors, including the complex motion of its molten core, oceans, and atmosphere, the effect of celestial bodies such as the Moon, and possibly climate change, which is causing the ice at Earth's poles to melt. The masses of ice account for the Earth's shape being that of an oblate spheroid, bulging around the equator. When these masses are reduced, the poles rebound from the loss of weight, and Earth becomes more spherical, which has the effect of bringing mass closer to its centre of gravity. Conservation of angular momentum dictates that a mass distributed more closely around its centre of gravity spins faster. History Among the ancient Greeks, several of the Pythagorean school believed in the rotation of Earth rather than the apparent diurnal rotation of the heavens. Perhaps the first was Philolaus (470–385 BCE), though his system was complicated, including a counter-earth rotating daily about a central fire. A more conventional picture was supported by Hicetas, Heraclides and Ecphantus in the fourth century BCE who assumed that Earth rotated but did not suggest that Earth revolved about the Sun. In the third century BCE, Aristarchus of Samos suggested the Sun's central place. However, Aristotle in the fourth century BCE criticized the ideas of Philolaus as being based on theory rather than observation. He established the idea of a sphere of fixed stars that rotated about Earth. This was accepted by most of those who came after, in particular Claudius Ptolemy (2nd century CE), who thought Earth would be devastated by gales if it rotated. In 499 CE, the Indian astronomer Aryabhata suggested that the spherical Earth rotates about its axis daily and that the apparent movement of the stars is a relative motion caused by the rotation of the Earth. 
He provided the following analogy: "Just as a man in a boat going in one direction sees the stationary things on the bank as moving in the opposite direction, in the same way to a man at Lanka the fixed stars appear to be going westward." In the 10th century, some Muslim astronomers accepted that the Earth rotates around its axis. According to al-Biruni, al-Sijzi (d. c. 1020) invented an astrolabe called al-zūraqī based on the idea believed by some of his contemporaries "that the motion we see is due to the Earth's movement and not to that of the sky." The prevalence of this view is further confirmed by a reference from the 13th century which states: "According to the geometers [or engineers] (muhandisīn), the Earth is in constant circular motion, and what appears to be the motion of the heavens is actually due to the motion of the Earth and not the stars." Treatises were written to discuss its possibility, either as refutations or expressing doubts about Ptolemy's arguments against it. At the Maragha and Samarkand observatories, Earth's rotation was discussed by Tusi (born 1201) and Qushji (born 1403); the arguments and evidence they used resemble those used by Copernicus. In medieval Europe, Thomas Aquinas accepted Aristotle's view and so, reluctantly, did John Buridan and Nicole Oresme in the fourteenth century. Not until Nicolaus Copernicus in 1543 adopted a heliocentric world system did the contemporary understanding of Earth's rotation begin to be established. Copernicus pointed out that if the movement of the Earth is violent, then the stars' movement must be much more so. He acknowledged the contribution of the Pythagoreans and pointed to examples of relative motion. For Copernicus, this was the first step in establishing the simpler pattern of planets circling a central Sun. Tycho Brahe, who produced accurate observations on which Kepler based his laws of planetary motion, used Copernicus's work as the basis of a system assuming a stationary Earth. In 1600, William Gilbert strongly supported Earth's rotation in his treatise on Earth's magnetism and thereby influenced many of his contemporaries. Those like Gilbert who did not openly support or reject the motion of Earth about the Sun are called "semi-Copernicans". A century after Copernicus, Riccioli disputed the model of a rotating Earth due to the lack of then-observable eastward deflections in falling bodies; such deflections would later be called the Coriolis effect. However, the contributions of Kepler, Galileo, and Newton gathered support for the theory of the rotation of the Earth. Empirical tests Earth's rotation implies that the Equator bulges and the geographical poles are flattened. In his Principia, Newton predicted this flattening would amount to one part in 230, and pointed to the pendulum measurements taken by Richer in 1673 as corroboration of the change in gravity, but initial measurements of meridian lengths by Picard and Cassini at the end of the 17th century suggested the opposite. However, measurements by Maupertuis and the French Geodesic Mission in the 1730s established the oblateness of Earth, thus confirming the positions of both Newton and Copernicus. In Earth's rotating frame of reference, a freely moving body follows an apparent path that deviates from the one it would follow in a fixed frame of reference. 
Because of the Coriolis effect, falling bodies veer slightly eastward from the vertical plumb line below their point of release, and projectiles veer right in the Northern Hemisphere (and left in the Southern) from the direction in which they are shot. The Coriolis effect is mainly observable at a meteorological scale, where it is responsible for the opposite directions of cyclone rotation in the Northern and Southern hemispheres (anticlockwise and clockwise, respectively). Hooke, following a suggestion from Newton in 1679, tried unsuccessfully to verify the predicted eastward deviation of a body dropped from a height of , but definitive results were obtained later, in the late 18th and early 19th centuries, by Giovanni Battista Guglielmini in Bologna, Johann Friedrich Benzenberg in Hamburg and Ferdinand Reich in Freiberg, using taller towers and carefully released weights. A ball dropped from a height of 158.5 m departed by 27.4 mm from the vertical compared with a calculated value of 28.1 mm. The most celebrated test of Earth's rotation is the Foucault pendulum first built by physicist Léon Foucault in 1851, which consisted of a lead-filled brass sphere suspended from the top of the Panthéon in Paris. Because of Earth's rotation under the swinging pendulum, the pendulum's plane of oscillation appears to rotate at a rate depending on latitude. At the latitude of Paris, the predicted and observed shift was about clockwise per hour. Foucault pendulums now swing in museums worldwide. Periods True solar day Earth's rotation period relative to the Sun (solar noon to solar noon) is its true solar day or apparent solar day. It depends on Earth's orbital motion and is thus affected by changes in the eccentricity and inclination of Earth's orbit. Both vary over thousands of years, so the annual variation of the true solar day also varies. Generally, it is longer than the mean solar day during two periods of the year and shorter during another two. The true solar day tends to be longer near perihelion when the Sun apparently moves along the ecliptic through a greater angle than usual, taking about longer to do so. Conversely, it is about shorter near aphelion. It is about longer near a solstice when the projection of the Sun's apparent motion along the ecliptic onto the celestial equator causes the Sun to move through a greater angle than usual. Conversely, near an equinox the projection onto the equator is shorter by about . Currently, the perihelion and solstice effects combine to lengthen the true solar day near by solar seconds, but the solstice effect is partially cancelled by the aphelion effect near when it is only longer. The effects of the equinoxes shorten it near and by and , respectively. Mean solar day The average of the true solar day during the course of an entire year is the mean solar day, which contains 86,400 mean solar seconds. Currently, each of these seconds is slightly longer than an SI second because Earth's mean solar day is now slightly longer than it was during the 19th century due to tidal friction. The average length of the mean solar day since the introduction of the leap second in 1972 has been about 0 to 2 ms longer than 86,400 SI seconds. Random fluctuations due to core-mantle coupling have an amplitude of about 5 ms. The mean solar second between 1750 and 1892 was chosen in 1895 by Simon Newcomb as the independent unit of time in his Tables of the Sun. 
These tables were used to calculate the world's ephemerides between 1900 and 1983, so this second became known as the ephemeris second. In 1967 the SI second was made equal to the ephemeris second. The apparent solar time is a measure of Earth's rotation and the difference between it and the mean solar time is known as the equation of time. Stellar and sidereal day Earth's rotation period relative to the International Celestial Reference Frame, called its stellar day by the International Earth Rotation and Reference Systems Service (IERS), is seconds of mean solar time (UT1) , ). Earth's rotation period relative to the precessing mean vernal equinox, named sidereal day, is of mean solar time (UT1) , ). Thus, the sidereal day is shorter than the stellar day by about . Both the stellar day and the sidereal day are shorter than the mean solar day by about . This is a result of the Earth turning 1 additional rotation, relative to the celestial reference frame, as it orbits the Sun (so 366.24 rotations/y). The mean solar day in SI seconds is available from the IERS for the periods and . Recently (1999–2010) the average annual length of the mean solar day in excess of 86,400 SI seconds has varied between and , which must be added to both the stellar and sidereal days given in mean solar time above to obtain their lengths in SI seconds (see Fluctuations in the length of day). Angular speed The angular speed of Earth's rotation in inertial space is ± . Multiplying by (180°/π radians) × (86,400 seconds/day) yields , indicating that Earth rotates more than 360 degrees relative to the fixed stars in one solar day. Earth's movement along its nearly circular orbit while it is rotating once around its axis requires that Earth rotate slightly more than once relative to the fixed stars before the mean Sun can pass overhead again, even though it rotates only once (360°) relative to the mean Sun. Multiplying the value in rad/s by Earth's equatorial radius of (WGS84 ellipsoid) (factors of 2π radians needed by both cancel) yields an equatorial speed of . Some sources state that Earth's equatorial speed is slightly less, or . This is obtained by dividing Earth's equatorial circumference by . However, the use of the solar day is incorrect; it must be the sidereal day, so the corresponding time unit must be a sidereal hour. This is confirmed by multiplying by the number of sidereal days in one mean solar day, , which yields the equatorial speed in mean solar hours given above of 1,674.4 km/h. The tangential speed of Earth's rotation at a point on Earth can be approximated by multiplying the speed at the equator by the cosine of the latitude. For example, the Kennedy Space Center is located at latitude 28.59° N, which yields a speed of: cos(28.59°) × 1,674.4 km/h = 1,470.2 km/h. Latitude is a placement consideration for spaceports. The peak of the Cayambe volcano is the point of Earth's surface farthest from its axis; thus, it rotates the fastest as Earth spins. Changes In rotational axis Earth's rotation axis moves with respect to the fixed stars (inertial space); the components of this motion are precession and nutation. It also moves with respect to Earth's crust; this is called polar motion. Precession is a rotation of Earth's rotation axis, caused primarily by external torques from the gravity of the Sun, Moon and other bodies. The polar motion is primarily due to free core nutation and the Chandler wobble. 
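The angular-speed figures above lend themselves to a quick calculation. The following Python sketch reproduces the equatorial and Kennedy Space Center values quoted in the text; the rotation rate and WGS84 equatorial radius are standard values supplied here rather than taken from the passage, and treating Earth as a sphere of equatorial radius is an approximation good to a few tenths of a percent.

import math

OMEGA = 7.2921150e-5      # rad/s, Earth's rotation rate in inertial space (standard value)
R_EQUATOR = 6_378_137.0   # m, WGS84 equatorial radius

def tangential_speed(latitude_deg):
    """Approximate surface rotation speed in m/s at a given latitude,
    treating Earth as a sphere of equatorial radius."""
    return OMEGA * R_EQUATOR * math.cos(math.radians(latitude_deg))

for name, lat in [("Equator", 0.0), ("Kennedy Space Center", 28.59)]:
    v = tangential_speed(lat)
    print(f"{name}: {v:.1f} m/s = {v * 3.6:.1f} km/h")
# Equator: 465.1 m/s = 1674.4 km/h; Kennedy Space Center: about 1470 km/h,
# matching the worked example in the text.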
In rotational speed Tidal interactions Over millions of years, Earth's rotation has been slowed significantly by tidal acceleration through gravitational interactions with the Moon. Thus angular momentum is slowly transferred to the Moon at a rate proportional to , where is the orbital radius of the Moon. This process has gradually increased the length of the day to its current value, and resulted in the Moon being tidally locked with Earth. This gradual rotational deceleration is empirically documented by estimates of day lengths obtained from observations of tidal rhythmites and stromatolites; a compilation of these measurements found that the length of the day has increased steadily from about 21 hours at 600 Myr ago to the current 24-hour value. By counting the microscopic lamina that form at higher tides, tidal frequencies (and thus day lengths) can be estimated, much like counting tree rings, though these estimates can be increasingly unreliable at older ages. Resonant stabilization The current rate of tidal deceleration is anomalously high, implying Earth's rotational velocity must have decreased more slowly in the past. Empirical data tentatively shows a sharp increase in rotational deceleration about 600 Myr ago. Some models suggest that Earth maintained a constant day length of 21 hours throughout much of the Precambrian. This day length corresponds to the semidiurnal resonant period of the thermally driven atmospheric tide; at this day length, the decelerative lunar torque could have been canceled by an accelerative torque from the atmospheric tide, resulting in no net torque and a constant rotational period. This stabilizing effect could have been broken by a sudden change in global temperature. Recent computational simulations support this hypothesis and suggest the Marinoan or Sturtian glaciations broke this stable configuration about 600 Myr ago; the simulated results agree quite closely with existing paleorotational data. Global events Some recent large-scale events, such as the 2004 Indian Ocean earthquake, have caused the length of a day to shorten by 3 microseconds by reducing Earth's moment of inertia. Post-glacial rebound, ongoing since the last ice age, is also changing the distribution of Earth's mass, thus affecting the moment of inertia of Earth and, by the conservation of angular momentum, Earth's rotation period. The length of the day can also be influenced by man-made structures. For example, NASA scientists calculated that the water stored in the Three Gorges Dam has increased the length of Earth's day by 0.06 microseconds due to the shift in mass. Measurement The primary monitoring of Earth's rotation is performed by very-long-baseline interferometry coordinated with the Global Positioning System, satellite laser ranging, and other satellite geodesy techniques. This provides an absolute reference for the determination of universal time, precession and nutation. The absolute value of Earth rotation including UT1 and nutation can be determined using space geodetic observations, such as very-long-baseline interferometry and lunar laser ranging, whereas their derivatives, denoted as length-of-day excess and nutation rates can be derived from satellite observations, such as GPS, GLONASS, Galileo and satellite laser ranging to geodetic satellites. Ancient observations There are recorded observations of solar and lunar eclipses by Babylonian and Chinese astronomers beginning in the 8th century BCE, as well as from the medieval Islamic world and elsewhere. 
These observations can be used to determine changes in Earth's rotation over the last 27 centuries, since the length of the day is a critical parameter in the calculation of the place and time of eclipses. A change in day length of milliseconds per century shows up as a change of hours and thousands of kilometers in eclipse observations. The ancient data are consistent with a shorter day, meaning Earth was turning faster throughout the past. Cyclic variability Around every 25–30 years Earth's rotation slows temporarily by a few milliseconds per day, usually lasting around five years. 2017 was the fourth consecutive year that Earth's rotation has slowed. The cause of this variability has not yet been determined. Origin Earth's original rotation was a vestige of the original angular momentum of the cloud of dust, rocks and gas that coalesced to form the Solar System. This primordial cloud was composed of hydrogen and helium produced in the Big Bang, as well as heavier elements ejected by supernovas. As this interstellar dust is heterogeneous, any asymmetry during gravitational accretion resulted in the angular momentum of the eventual planet. However, if the giant-impact hypothesis for the origin of the Moon is correct, this primordial rotation rate would have been reset by the Theia impact 4.5 billion years ago. Regardless of the speed and tilt of Earth's rotation before the impact, it would have experienced a day some five hours long after the impact. Tidal effects would then have slowed this rate to its modern value. See also Allais effect Diurnal cycle Earth's orbit Earth orientation parameters Formation and evolution of the Solar System Geodesic (in mathematics) Geodesics in general relativity Geodesy History of Earth History of geodesy Inner core super-rotation Nychthemeron Rossby wave Spherical Earth World Geodetic System Notes References External links USNO Earth Orientation new site, being populated USNO IERS old site, to be abandoned IERS Earth Orientation Center: Earth rotation data and interactive analysis International Earth Rotation and Reference Systems Service (IERS) If the Earth's rotation period is less than 24 hours, why don't our clocks fall out of sync with the Sun? Dynamics of the Solar System Rotation Rotation
Earth's rotation
[ "Physics", "Astronomy" ]
4,153
[ "Physical phenomena", "Dynamics of the Solar System", "Classical mechanics", "Rotation", "Motion (physics)", "Solar System" ]
4,396,962
https://en.wikipedia.org/wiki/P%C3%B3lya%20enumeration%20theorem
The Pólya enumeration theorem, also known as the Redfield–Pólya theorem and Pólya counting, is a theorem in combinatorics that both follows from and ultimately generalizes Burnside's lemma on the number of orbits of a group action on a set. The theorem was first published by J. Howard Redfield in 1927. In 1937 it was independently rediscovered by George Pólya, who then greatly popularized the result by applying it to many counting problems, in particular to the enumeration of chemical compounds. The Pólya enumeration theorem has been incorporated into symbolic combinatorics and the theory of combinatorial species. Simplified, unweighted version Let X be a finite set and let G be a group of permutations of X (or a finite symmetry group that acts on X). The set X may represent a finite set of beads, and G may be a chosen group of permutations of the beads. For example, if X is a necklace of n beads in a circle, then rotational symmetry is relevant so G is the cyclic group Cn, while if X is a bracelet of n beads in a circle, rotations and reflections are relevant so G is the dihedral group Dn of order 2n. Suppose further that Y is a finite set of colors — the colors of the beads — so that YX is the set of colored arrangements of beads (more formally: YX is the set of functions .) Then the group G acts on YX. The Pólya enumeration theorem counts the number of orbits under G of colored arrangements of beads by the following formula: where is the number of colors and c(g) is the number of cycles of the group element g when considered as a permutation of X. Full, weighted version In the more general and more important version of the theorem, the colors are also weighted in one or more ways, and there could be an infinite number of colors provided that the set of colors has a generating function with finite coefficients. In the univariate case, suppose that is the generating function of the set of colors, so that there are fw colors of weight w for each integer w ≥ 0. In the multivariate case, the weight of each color is a vector of integers and there is a generating function f(t1, t2, ...) that tabulates the number of colors with each given vector of weights. The enumeration theorem employs another multivariate generating function called the cycle index: where n is the number of elements of X and ck(g) is the number of k-cycles of the group element g as a permutation of X. A colored arrangement is an orbit of the action of G on the set YX (where Y is the set of colors and YX denotes the set of all functions φ: X→Y). The weight of such an arrangement is defined as the sum of the weights of φ(x) over all x in X. The theorem states that the generating function F of the number of colored arrangements by weight is given by: or in the multivariate case: To reduce to the simplified version given earlier, if there are m colors and all have weight 0, then f(t) = m and In the celebrated application of counting trees (see below) and acyclic molecules, an arrangement of "colored beads" is actually an arrangement of arrangements, such as branches of a rooted tree. Thus the generating function f for the colors is derived from the generating function F for arrangements, and the Pólya enumeration theorem becomes a recursive formula. Examples Necklaces and bracelets Colored cubes How many ways are there to color the sides of a three-dimensional cube with m colors, up to rotation of the cube? The rotation group C of the cube acts on the six sides of the cube, which are equivalent to beads. 
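In its standard simplified form the count is the average of m^c(g) over the group, (1/|G|) Σ_{g∈G} m^c(g), and the cube case can be checked by brute force before turning to the cycle index. The Python sketch below is an illustration added for that purpose: it generates the 24 proper rotations as signed permutation matrices of determinant +1 (an implementation choice, not anything the theorem prescribes), counts the cycles of each induced permutation of the six faces, and averages.

from itertools import permutations, product

# Faces of the cube, represented by their outward unit normals.
FACES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def rotations():
    """The 24 proper rotations of the cube, as 3x3 signed permutation matrices."""
    mats = []
    for perm in permutations(range(3)):
        for signs in product((1, -1), repeat=3):
            m = [[0] * 3 for _ in range(3)]
            for i in range(3):
                m[i][perm[i]] = signs[i]
            det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                   - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                   + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
            if det == 1:          # keep orientation-preserving maps only
                mats.append(m)
    return mats

def cycles_on_faces(m):
    """Number of cycles of the face permutation induced by rotation m."""
    image = {f: tuple(sum(m[i][j] * f[j] for j in range(3)) for i in range(3)) for f in FACES}
    seen, count = set(), 0
    for f in FACES:
        if f not in seen:
            count += 1
            while f not in seen:
                seen.add(f)
                f = image[f]
    return count

def cube_colorings(m_colors):
    """Burnside/Pólya count of face colorings of the cube up to rotation."""
    group = rotations()
    total = sum(m_colors ** cycles_on_faces(g) for g in group)
    assert total % len(group) == 0
    return total // len(group)

print(cube_colorings(2))   # 10 distinct two-colorings
print(cube_colorings(3))   # 57

For m colors this brute-force count agrees with the closed form that follows from the cycle index given next.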
Its cycle index is which is obtained by analyzing the action of each of the 24 elements of C on the 6 sides of the cube, see here for the details. We take all colors to have weight 0 and find that there are different colorings. Graphs on three and four vertices A graph on m vertices can be interpreted as an arrangement of colored beads. The set X of "beads" is the set of possible edges, while the set of colors Y = {black, white} corresponds to edges that are present (black) or absent (white). The Pólya enumeration theorem can be used to calculate the number of graphs up to isomorphism with a fixed number of vertices, or the generating function of these graphs according to the number of edges they have. For the latter purpose, we can say that a black or present edge has weight 1, while an absent or white edge has weight 0. Thus is the generating function for the set of colors. The relevant symmetry group is the symmetric group on m letters. This group acts on the set X of possible edges: a permutation φ turns the edge {a, b} into the edge {φ(a), φ(b)}. With these definitions, an isomorphism class of graphs with m vertices is the same as an orbit of the action of G on the set YX of colored arrangements; the number of edges of the graph equals the weight of the arrangement. The eight graphs on three vertices (before identifying isomorphic graphs) are shown at the right. There are four isomorphism classes of graphs, also shown at the right. The cycle index of the group S3 acting on the set of three edges is (obtained by inspecting the cycle structure of the action of the group elements; see here). Thus, according to the enumeration theorem, the generating function of graphs on 3 vertices up to isomorphism is which simplifies to Thus there is one graph each with 0 to 3 edges. The cycle index of the group S4 acting on the set of 6 edges is (see here.) Hence which simplifies to These graphs are shown at the right. Rooted ternary trees The set T3 of rooted ternary trees consists of rooted trees where every node (or non-leaf vertex) has exactly three children (leaves or subtrees). Small ternary trees are shown at right. Note that rooted ternary trees with n nodes are equivalent to rooted trees with n vertices of degree at most 3 (by ignoring the leaves). In general, two rooted trees are isomorphic when one can be obtained from the other by permuting the children of its nodes. In other words, the group that acts on the children of a node is the symmetric group S3. We define the weight of such a ternary tree to be the number of nodes (or non-leaf vertices). One can view a rooted, ternary tree as a recursive object which is either a leaf or a node with three children which are themselves rooted ternary trees. These children are equivalent to beads; the cycle index of the symmetric group S3 that acts on them is The Polya enumeration theorem translates the recursive structure of rooted ternary trees into a functional equation for the generating function F(t) of rooted ternary trees by number of nodes. 
This is achieved by "coloring" the three children with rooted ternary trees, weighted by node number, so that the color generating function is given by which by the enumeration theorem gives as the generating function for rooted ternary trees, weighted by one less than the node number (since the sum of the children weights does not take the root into account), so that This is equivalent to the following recurrence formula for the number tn of rooted ternary trees with n nodes: where a, b and c are nonnegative integers. The first few values of are 1, 1, 1, 2, 4, 8, 17, 39, 89, 211, 507, 1238, 3057, 7639, 19241 . Proof of theorem The simplified form of the Pólya enumeration theorem follows from Burnside's lemma, which says that the number of orbits of colorings is the average of the number of elements of fixed by the permutation g of G over all permutations g. The weighted version of the theorem has essentially the same proof, but with a refined form of Burnside's lemma for weighted enumeration. It is equivalent to apply Burnside's lemma separately to orbits of different weight. For clearer notation, let be the variables of the generating function f of . Given a vector of weights , let denote the corresponding monomial term of f. Applying Burnside's lemma to orbits of weight , the number of orbits of this weight is where is the set of colorings of weight that are also fixed by g. If we then sum over all possible weights, we obtain Meanwhile a group element g with cycle structure will contribute the term to the cycle index of G. The element g fixes an element if and only if the function φ is constant on every cycle q of g. For every such cycle q, the generating function by weight of |q| identical colors from the set enumerated by f is It follows that the generating function by weight of the points fixed by g is the product of the above term over all cycles of g, i.e. Substituting this in the sum over all g yields the substituted cycle index as claimed. See also Labelled enumeration theorem References External links Applying the Pólya-Burnside Enumeration Theorem by Hector Zenil and Oleksandr Pavlyk, The Wolfram Demonstrations Project. Frederic Chyzak Enumerating alcohols and other classes of chemical molecules, an example of Pólya theory. Enumerative combinatorics Articles containing proofs Graph enumeration Theorems in combinatorics
Pólya enumeration theorem
[ "Mathematics" ]
2,015
[ "Graph enumeration", "Theorems in combinatorics", "Graph theory", "Enumerative combinatorics", "Theorems in discrete mathematics", "Combinatorics", "Mathematical relations", "Articles containing proofs" ]
5,826,615
https://en.wikipedia.org/wiki/Representation%20rigid%20group
In mathematics, in the representation theory of groups, a group is said to be representation rigid if for every , it has only finitely many isomorphism classes of complex irreducible representations of dimension . External links The proalgebraic completion of rigid groups Properties of groups Representation theory of groups
Representation rigid group
[ "Mathematics" ]
61
[ "Mathematical structures", "Algebraic structures", "Properties of groups" ]
5,827,181
https://en.wikipedia.org/wiki/Diffuse%20reflectance%20spectroscopy
Diffuse reflectance spectroscopy, or diffuse reflection spectroscopy, is a subset of absorption spectroscopy. It is sometimes called remission spectroscopy. Remission is the reflection or back-scattering of light by a material, while transmission is the passage of light through a material. The word remission implies a direction of scatter, independent of the scattering process. Remission includes both specular and diffusely back-scattered light. The word reflection often implies a particular physical process, such as specular reflection. The use of the term remission spectroscopy is relatively recent, and found first use in applications related to medicine and biochemistry. While the term is becoming more common in certain areas of absorption spectroscopy, the term diffuse reflectance is firmly entrenched, as in diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and diffuse-reflectance ultraviolet–visible spectroscopy. Mathematical treatments related to diffuse reflectance and transmittance The mathematical treatments of absorption spectroscopy for scattering materials were originally largely borrowed from other fields. The most successful treatments use the concept of dividing a sample into layers, called plane parallel layers. The treatments are generally those consistent with a two-flux or two-stream approximation. Some of the treatments require all the scattered light, both remitted and transmitted light, to be measured. Others apply only to remitted light, with the assumption that the sample is "infinitely thick" and transmits no light. These are special cases of the more general treatments. There are several general treatments, all of which are compatible with each other, related to the mathematics of plane parallel layers. They are the Stokes formulas, equations of Benford, Hecht finite difference formula, and the Dahm equation. For the special case of infinitesimal layers, the Kubelka–Munk and Schuster–Kortüm treatments also give compatible results. Treatments which involve different assumptions and which yield incompatible results are the Giovanelli exact solutions, and the particle theories of Melamed and Simmons. George Gabriel Stokes George Gabriel Stokes (not to neglect the later work of Gustav Kirchhoff) is often given credit for having first enunciated the fundamental principles of spectroscopy. In 1862, Stokes published formulas for determining the quantities of light remitted and transmitted from "a pile of plates". He described his work as addressing a "mathematical problem of some interest". He solved the problem using summations of geometric series, but the results are expressed as continuous functions. This means that the results can be applied to fractional numbers of plates, though they have the intended meaning only for an integral number. The results below are presented in a form compatible with discontinuous functions. Stokes used the term "reflexion", not "remission", specifically referring to what is often called regular or specular reflection. In regular reflection, the Fresnel equations describe the physics, which includes both reflection and refraction, at the optical boundary of a plate. A "pile of plates" is still a term of art used to describe a polarizer in which a polarized beam is obtained by tilting a pile of plates at an angle to an unpolarized incident beam. The area of polarization was specifically what interested Stokes in this mathematical problem. 
Stokes formulas for remission from and transmission through a "pile of plates" For a sample that consists of layers, each having its absorption, remission, and transmission (ART) fractions symbolized by , with , one may symbolize the ART fractions for the sample as and calculate their values by where and Franz Arthur Friedrich Schuster In 1905, in an article entitled "Radiation through a foggy atmosphere", Arthur Schuster published a solution to the equation of radiative transfer, which describes the propagation of radiation through a medium, affected by absorption, emission, and scattering processes. His mathematics used a two flux approximation; i.e., all light is assumed to travel with a component either in the same direction as the incident beam, or in the opposite direction. He used the word scattering rather than reflection, and considered scatter to be in all directions. He used the symbols k and s for absorption and isotropic scattering coefficients, and repeatedly refers to radiation entering a "layer", which ranges in size from infinitesimal to infinitely thick. In his treatment, the radiation enters the layers at all possible angles, referred to as "diffuse illumination". Kubelka and Munk In 1931, Paul Kubelka (with Franz Munk) published "An article on the optics of paint", the contents of which has come to be known as the Kubelka-Munk theory. They used absorption and remission (or back-scatter) constants, noting (as translated by Stephen H. Westin) that "an infinitesimal layer of the coating absorbs and scatters a certain constant portion of all the light passing through it". While symbols and terminology are changed here, it seems clear from their language that the terms in their differential equations stand for absorption and backscatter (remission) fractions. They also noted that the reflectance from an infinite number of these infinitesimal layers is "solely a function of the ratio of the absorption and back-scatter (remission) constants , but not in any way on the absolute numerical values of these constants". This turns out to be incorrect for layers of finite thickness, and the equation was modified for spectroscopic purposes (below), but Kubelka-Munk theory has found extensive use in coatings. However, in revised presentations of their mathematical treatment, including that of Kubelka, Kortüm and Hecht (below), the following symbolism became popular, using coefficients rather than fractions: is the Absorption Coefficient ≡ the limiting fraction of absorption of light energy per unit thickness, as thickness becomes very small. is the Back-Scattering Coefficient ≡ the limiting fraction of light energy scattered backwards per unit thickness as thickness tends to zero. The Kubelka–Munk equation The Kubelka–Munk equation describes the remission from a sample composed of an infinite number of infinitesimal layers, each having as an absorption fraction, and as a remission fraction. Deane B. Judd Deane Judd was very interested the effect of light polarization and degree of diffusion on the appearance of objects. He made important contributions to the fields of colorimetry, color discrimination, color order, and color vision. Judd defined the scattering power for a sample as , where is the particle diameter. This is consistent with the belief that the scattering from a single particle is conceptually more important than the derived coefficients. The above Kubelka–Munk equation can be resolved for the ratio in terms of . 
This led to a very early (perhaps the first) use of the term "remission" in place of "reflectance" when Judd defined a "remission function" as , where and are absorption and scattering coefficients, which replace and in the Kubelka–Munk equation above. Judd tabulated the remission function as a function of percent reflectance from an infinitely thick sample. This function, when used as a measure of absorption, was sometimes referred to as "pseudo-absorbance", a term which has been used later with other definitions as well. General Electric In the 1920s and 30s, Albert H. Taylor, Arthur C. Hardy, and others of the General Electric company developed a series of instruments that were capable of easily recording spectral data "in reflection". Their display preference for the data was "% Reflectance". In 1946, Frank Benford published a series of parametric equations that gave results equivalent to the Stokes formulas. The formulas used fractions to express reflectance and transmittance. Equations of Benford If , , and are known for the representative layer of a sample, and , and are known for a layer composed of representative layers, the ART fractions for a layer with thickness of are If , and are known for a layer with thickness , the ART fractions for a layer with thickness of are and the fractions for a layer with thickness of are If , and are known for layer and and are known for layer , the ART fractions for a sample composed of layer and layer are The symbol refers to the reflectance of layer when the direction of illumination is antiparallel to that of the incident beam. The difference in direction is important when dealing with inhomogeneous layers. This consideration was added by Paul Kubelka in 1954. Giovanelli and Chandrasekhar In 1955, Ron Giovanelli published explicit expressions for several cases of interest which are touted as exact solutions to the radiative transfer equation for a semi-infinite ideal diffuser. His solutions have become the standard against which results from approximate theoretical treatments are measured. Many of the solutions appear deceptively simple due to the work of Subrahmanyan (Chandra) Chandrasekhar. For example, the total reflectance for light incident in the direction μ0 is Here is known as the albedo of single scatter , representing the fraction of the radiation lost by scattering in a medium where both absorption () and scattering () take place. The function is called the H-integral, the values of which were tabulated by Chandrasekhar. Gustav Kortüm Kortüm was a physical chemist who had a broad range of interests, and published prolifically. His research covered many aspects of light scattering. He began to pull together what was known in various fields into an understanding of how “reflectance spectroscopy” worked. In 1969, the English translation of his book entitled Reflectance Spectroscopy (long in preparation and translation) was published. This book came to dominate thinking of the day for 20 years in the emerging fields of both DRIFTS and NIR Spectroscopy. Kortüm's position was that since regular (or specular) reflection is governed by different laws than diffuse reflection, they should therefore be accorded different mathematical treatments. He developed an approach based on Schuster's work by ignoring the emissivity of the clouds in the "foggy atmosphere". 
If we take as the fraction of incident light absorbed and as the fraction scattered isotropically by a single particle (referred to by Kortüm as the "true coefficients of single scatter"), and define the absorption and isotropic scattering for a layer as and then: This is the same "remission function" as used by Judd, but Kortüm's translator referred to it as "the so-called reflectance function". If we substitute back for the particle properties, we obtain and then we obtain the Schuster equation for isotropic scattering: Additionally, Kortüm derived "the Kubelka-Munk exponential solution" by defining and as the absorption and scattering coefficient per centimeter of the material and substituting: and , while pointing out in a footnote that is a back-scattering coefficient. He wound up with what he called the "Kubelka–Munk function", commonly called the Kubelka–Munk equation: Kortüm concluded that "the two constant theory of Kubelka and Munk leads to conclusions accessible to experimental test. In practice these are found to be at least qualitatively confirmed, and suitable conditions fulfilling the assumptions made, quantitatively as well." Kortüm tended to eschew the "particle theories", though he did record that one author, N.T. Melamed of Westinghouse Research Labs, "abandoned the idea of plane parallel layers and substituted them with a statistical summation over individual particles." Hecht and Simmons In 1966, Harry G. Hecht (with Wesley W. Wendlandt) published a book entitled "Reflectance Spectroscopy", because "unlike transmittance spectroscopy, there were no reference books written on the subject" of "diffuse reflectance spectroscopy", and "the fundamentals were only to be found in the old literature, some of which was not readily accessible". Hecht describes himself as a novice in the field at the time, and said that if he had known that Gustav Kortüm, "a great pillar in the field", was in the process of writing a book on the subject, he "would not have undertaken the task". Hecht was asked to write a review of Kortüm's book and their correspondence concerning it led to Hecht spending a year in Kortüm's laboratories. Kortüm is the author most often cited in the book. One of the features of the remission function emphasized by Hecht was the fact that should yield the absorption spectrum displaced by . While the scattering coefficient might change with particle size, the absorption coefficient, which should be proportional to concentration of an absorber, would be obtainable by a background correction for a spectrum. However, experimental data showed the relationship did not hold in strongly absorbing materials. Many papers were published with various explanations for this failure of the Kubelka-Munk equation. Proposed culprits included: incomplete diffusion, anisotropic scatter ("the invalid assumption that radiation is returned equally in all directions from a given particle"), and presence of regular reflection. The situation resulted in a myriad of models and theories being proposed to correct these supposed deficiencies. The various alternative theories were evaluated and compared. In his book, Hecht reported the mathematics of Stokes and Melamed formulas (which he called “statistical methods”). He believed the approach of Melamed, which “involve a summation over individual particles” was more satisfactory than summations over “plane parallel layers”. 
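In the form usually quoted for spectroscopic work, the function Kortüm arrives at is F(R∞) = (1 − R∞)²/(2R∞) = K/S, where R∞ is the diffuse reflectance of an "infinitely thick" sample. A minimal Python sketch of the conversion, alongside the log(1/R) "pseudo-absorbance" mentioned earlier, might look as follows; the reflectance values in the example are illustrative only.

import math

def kubelka_munk(r_inf):
    """Kubelka-Munk (remission) function F(R) = (1 - R)^2 / (2 R) = K/S,
    for the diffuse reflectance R of an 'infinitely thick' sample, 0 < R <= 1."""
    if not 0.0 < r_inf <= 1.0:
        raise ValueError("reflectance must lie in (0, 1]")
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

def pseudo_absorbance(r_inf):
    """log10(1/R), the 'pseudo-absorbance' used in early near-infrared work."""
    return math.log10(1.0 / r_inf)

for r in (0.90, 0.50, 0.10):          # illustrative reflectance fractions
    print(f"R = {r:.2f}   F(R) = {kubelka_munk(r):.4f}   log(1/R) = {pseudo_absorbance(r):.4f}")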
Unfortunately, Melamed's method failed as the refractive index of the particles approached unity, but he did call attention to the importance of using individual particle properties, as opposed to coefficients that represent averaged properties for a sample. E.L. Simmons used a simplified modification of the particle model to relate diffuse reflectance to fundamental optical constants without the use of the cumbersome equations. In 1975, Simmons evaluated various theories of diffuse reflectance spectroscopy and concluded that a modified particle model theory is probably the most nearly correct. In 1976, Hecht wrote a lengthy paper comprehensively describing the myriad of mathematical treatments that had been proposed to deal with diffuse reflectance. In this paper, Hecht states that he assumed (as did Simmons) that in the plane-parallel treatment, the layers could not be made infinitesimally small, but should be restricted to layers of finite thickness interpreted as the mean particle diameter of the sample. This is also supported by the observation that the ratio of the Kubelka–Munk absorption and scattering coefficients is that of corresponding ratio of the Mie coefficients for a sphere. That factor can be rationalized by simple geometric considerations, recognizing that to a first approximation, the absorption is proportional to volume and the scatter is proportional to cross sectional surface area. This is entirely consistent with the Mie coefficients measuring absorption and scatter at a point, and the Kubelka–Munk coefficients measuring scatter by a sphere. To correct this deficiency of the Kubelka–Munk approach, for the case of an infinitely thick sample, Hecht blended the particle and layer methods by replacing the differential equations in the Kubelka–Munk treatment by finite difference equations, and obtained the Hecht finite difference formula: Hecht apparently did not know that this result could be generalized, but he realized that the above formula "represents an improvement … and shows the need to consider the particulate nature of scattering media in developing a more precise theory". Karl Norris (USDA), Gerald Birth Karl Norris pioneered the field of near-infrared spectroscopy. He began by using log(1/R) as a metric of absorption. While often the samples examined were “infinitely thick”, partially transparent samples were analyzed (especially later) in cells that had a rear reflecting surface (reflector) in a mode called "transflectance". Therefore, the remission from the sample contained light that was back-scattered from the sample, as well as light that was transmitted through the sample, then reflected back to be transmitted through the sample again, thereby doubling the path length. Having no sound theoretical basis for data treatment, Norris used the same electronic processing that was used for absorption data collected in transmission. He pioneered the use of multiple linear regression for analysis of data. Gerry Birth was the founder of the International Diffuse Reflectance Conference (IDRC). He also worked at the USDA. He was known to have a deep desire to have a better understanding of the process of light scattering. He teamed up with Harry Hecht (who was active in the early meetings of IDRC) to write the Physics theory chapter, with many photographic illustrations, in an influential Handbook edited by Phil Williams and Karl Norris: Nearinfrared Technology in the Agriculture and Food Industries. Donald J. Dahm, Kevin D. 
Dahm In 1994, Donald and Kevin Dahm began using numerical techniques to calculate remission and transmission from samples of varying numbers of plane parallel layers from absorption and remission fractions for a single layer. Their plan was to "start with a simple model, treat the problem numerically rather than analytically, then look for analytical functions that describe the numerical results. Assuming success with that, the model would be made more complex, allowing more complex analytical expressions to be derived, eventually, leading to an understanding of diffuse reflection at a level that appropriately approximated particulate samples." They were able to show the fraction of incident light remitted, , and transmitted, , by a sample composed of layers, each absorbing a fraction and remitting a fraction of the light incident upon it, could be quantified by an Absorption/Remission function (symbolized and called the ART function), which is constant for a sample composed of any number of identical layers. Dahm equation Also from this process came results for several special cases of two stream solutions for plane parallel layers. For the case of zero absorption, . For the case of infinitesimal layers, , and the ART function gives results approaching equivalence to the remission function. As the void fraction of a layer becomes large, . The ART is related to the Kortüm–Schuster equation for isotopic scatter by . The Dahms argued that the conventional absorption and scattering coefficients, as well as the differential equations which employ them, implicitly assume that a sample is homogenous at the molecular level. While this is a good approximation for absorption, as the domain of absorption is molecular, the domain of scattering is the particle as a whole. Any approach using continuous mathematics will therefore tend to fail as particles become large. Successful application of theory to a real world sample using the mathematics of plane parallel layers requires assigning properties to the layers that are representative of the sample as a whole (which does not require extensively reworking the mathematics). Such a layer was termed a representative layer, and the theory was termed the representative layer theory. Furthermore, they argued that it was irrelevant whether the light moving from one layer to another was reflected specularly or diffusely. The reflection and back scatter is lumped together as remission. All light leaving the sample on the same side as the incident beam is termed remission, whether it arises from reflection or back scatter. All light leaving the sample on the opposite side from the incident beam is termed transmission. (In a three-flux or higher treatment, such as Giovanelli's, the forward scatter is not indistinguishable from the directly transmitted light. Additionally, Giovanelli's treatment makes the implied assumption of infinitesimal particles.) They developed a scheme, subject to the limitations of a two-flux model, to calculate the "scatter corrected absorbance" for a sample. The decadic absorbance of a scattering sample is defined as or . For a non scattering sample, , and the expression becomes or , which is more familiar. In a non-scattering sample, the absorbance has the property that the numerical value is proportional to sample thickness. Consequently, a scatter-corrected absorbance might reasonably be defined as one that has that property. 
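The halving procedure described next depends on relations between a layer and its double. In the doubling direction, the standard two-flux result for stacking two identical symmetric layers is R2 = R + T²R/(1 − R²) and T2 = T²/(1 − R²); these are supplied here as an assumed standard form rather than quoted from the passage. Taking them as given, the short Python check below confirms the constancy of the Absorption/Remission function A(R, T) = ((1 − R)² − T²)/R ascribed to the Dahm treatment above, using illustrative single-layer fractions.

def double_layer(r, t):
    """Remission and transmission of two identical symmetric layers stacked together,
    using the standard two-flux composition rule (an assumed form, see above)."""
    denom = 1.0 - r * r
    return r + t * t * r / denom, t * t / denom

def art(r, t):
    """Dahm Absorption/Remission function A(R, T) = ((1 - R)^2 - T^2) / R."""
    return ((1.0 - r) ** 2 - t * t) / r

r, t = 0.20, 0.55          # illustrative single-layer remission and transmission fractions
layers = 1
for _ in range(4):
    print(f"{layers} layer(s): R = {r:.6f}  T = {t:.6f}  A(R, T) = {art(r, t):.6f}")
    r, t = double_layer(r, t)
    layers *= 2
# A(R, T) stays at 1.687500 while R and T change with thickness,
# which is the constancy property described above.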
If one has measured the remission and transmission fractions for a sample, R and T, then the scatter-corrected absorbance should have half the value for half the sample thickness. By calculating the values for R and T for successively thinner samples using Benford's equations for half thickness, a place will be reached where, for successive values of n (0, 1, 2, 3, ...), the expression becomes constant to within some specified limit, typically 0.01 absorbance units. This value is the scatter-corrected absorbance. Definitions Remission In spectroscopy, remission refers to the reflection or back-scattering of light by a material. While seeming similar to the word "re-emission", it is the light which is scattered back from a material, as opposed to that which is "transmitted" through the material. The word "re-emission" connotes no such directional character. Based on the origin of the word "emit", which means "to send out or away", "re-emit" means "to send out again", "transmit" means "to send across or through", and "remit" means "to send back". Plane-parallel layers In spectroscopy, the term "plane parallel layers" may be employed as a mathematical construct in discussing theory. The layers are considered to be semi-infinite. (In mathematics, semi-infinite objects are objects which are infinite or unbounded in some, but not all, possible ways.) Generally, a semi-infinite layer is envisioned as being bounded by two flat parallel planes, each extending indefinitely, and normal (perpendicular) to the direction of a collimated (or directed) incident beam. The planes are not necessarily physical surfaces which refract and reflect light, but may just describe a mathematical plane, suspended in space. When the plane parallel layers have surfaces, they have been variously called plates, sheets, or slabs. Representative layer The term "representative layer" refers to a hypothetical plane parallel layer that has properties relevant to absorption spectroscopy that are representative of a sample as a whole. For particulate samples, a layer is representative if each type of particle in the sample makes up the same fraction of volume and surface area in the layer as in the sample. The void fraction in the layer is also the same as in the sample. Implicit in the representative layer theory is that absorption occurs at the molecular level, but that scatter is from a whole particle. List of principal symbols used Note: Where a given letter is used in both capital and lower case form (such as R and r, or T and t) the capital letter refers to the macroscopic observable and the lower case letter to the corresponding variable for an individual particle or layer of the material. Greek symbols are used for properties of a single particle. a – absorption fraction of a single layer r – remission fraction of a single layer t – transmission fraction of a single layer A_n, R_n, T_n – the absorption, remission, and transmission fractions for a sample composed of n layers – absorption fraction of a particle – back-scattering from a particle – isotropic scattering from a particle – absorption coefficient, defined as the fraction of incident light absorbed by a very thin layer divided by the thickness of that layer – scattering coefficient, defined as the fraction of incident light scattered by a very thin layer divided by the thickness of that layer References Spectroscopy Scattering, absorption and radiative transfer (optics)
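The halving procedure described above lends itself to a short numerical illustration. The following Python sketch is only an illustration, not code from the Dahm papers: it assumes that the decadic absorbance of a scattering sample is taken as -log10(R + T) (the defining expression is not reproduced above, so this is an assumption), and it uses the standard two-flux (Stokes/Benford) relations for combining two identical plane-parallel layers, inverted in closed form to obtain the half-thickness values.

```python
import math

def half_layer(R, T):
    """Given the remission R and transmission T of a sample, return the
    corresponding values for a layer of half the thickness, by inverting
    the two-flux relations for two identical layers:
        T_2 = t^2 / (1 - r^2),   R_2 = r + t^2 r / (1 - r^2)."""
    r = R / (1.0 + T)                  # closed-form inversion
    t = math.sqrt(T * (1.0 - r * r))
    return r, t

def scatter_corrected_absorbance(R, T, tol=0.01, max_halvings=60):
    """Halve the sample repeatedly and return 2**n * A(d / 2**n) once it is
    constant to within `tol` absorbance units (0.01, as suggested above).
    A is taken here as -log10(R + T); that particular choice is an assumption."""
    previous = None
    for n in range(max_halvings + 1):
        absorbance = (2 ** n) * (-math.log10(R + T))
        if previous is not None and abs(absorbance - previous) < tol:
            return absorbance
        previous = absorbance
        R, T = half_layer(R, T)        # values for the next, thinner sample
    return previous                    # fallback if convergence is not reached

# Hypothetical measured fractions for a scattering sample
print(scatter_corrected_absorbance(0.30, 0.40))
```

For a non-scattering sample (R = 0) the halving step simply takes the square root of T, so the returned value reduces to the ordinary absorbance -log10(T), consistent with the requirement that the scatter-corrected absorbance be proportional to sample thickness.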
Diffuse reflectance spectroscopy
[ "Physics", "Chemistry" ]
4,821
[ " absorption and radiative transfer (optics)", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Scattering", "Spectroscopy" ]
5,827,346
https://en.wikipedia.org/wiki/Endothelin%20receptor
There are at least four known endothelin receptors, ETA, ETB1, ETB2 and ETC, all of which are G protein-coupled receptors whose activation results in elevation of intracellular free calcium, which constricts the smooth muscles of the blood vessels, raising blood pressure, or relaxes the smooth muscles of the blood vessels, lowering blood pressure, among other functions. Physiological functions ETA is a subtype for vasoconstriction. These receptors are found in the smooth muscle tissue of blood vessels, and binding of endothelin to ETA increases vasoconstriction (contraction of the blood vessel walls) and the retention of sodium, leading to increased blood pressure. ETB1 mediates vasodilation. When endothelin binds to ETB1 receptors, this leads to the release of nitric oxide (also called endothelium-derived relaxing factor), natriuresis and diuresis (the production and elimination of urine) and mechanisms that lower blood pressure. ETB2 mediates vasoconstriction. ETC has no clearly defined function yet. ET receptors are also found in the nervous system where they may mediate neurotransmission and vascular functions. Brain and nerves Widely distributed in the body, receptors for endothelin are present in blood vessels and cells of the brain, choroid plexus and peripheral nerves. When applied directly to the brain of rats in picomolar quantities as an experimental model of stroke, endothelin-1 caused severe metabolic stimulation and seizures with substantial decreases in blood flow to the same brain regions, both effects mediated by calcium channels. A similar strong vasoconstrictor action of endothelin-1 was demonstrated in a peripheral neuropathy model in rats. Clinical significance Mutations in the EDNRB gene are associated with ABCD syndrome and some forms of Waardenburg syndrome. See also Endothelin receptor antagonist References Further reading Davenport AP, Hyndman KA, Dhaun N, Southan C, Kohan DE, Pollock JS, Pollock DM, Webb DJ, Maguire JJ. (2016). 'Endothelin'. Pharmacol. Rev. 68: 357–418. PMID 26956245. doi:10.1124/pr.115.011833. External links G protein-coupled receptors
Endothelin receptor
[ "Chemistry" ]
482
[ "G protein-coupled receptors", "Signal transduction" ]
5,827,808
https://en.wikipedia.org/wiki/Vitamin%20D%20receptor
The vitamin D receptor (VDR, also known as the calcitriol receptor) is a member of the nuclear receptor family of transcription factors. Calcitriol (the active form of vitamin D, 1,25-(OH)2vitamin D3) binds to VDR, which then forms a heterodimer with the retinoid-X receptor. The VDR heterodimer then enters the nucleus and binds to Vitamin D responsive elements (VDRE) in genomic DNA. VDR binding results in expression or transrepression of many specific gene products. VDR is also involved in microRNA-directed post-transcriptional mechanisms. In humans, the vitamin D receptor is encoded by the VDR gene located on chromosome 12q13.11. VDR is expressed in most tissues of the body, and regulates transcription of genes involved in intestinal and renal transport of calcium and other minerals. Glucocorticoids decrease VDR expression. Many types of immune cells also express VDR. Function The VDR gene encodes the nuclear hormone receptor for vitamin D. The most potent natural agonist is calcitriol, and the vitamin D2 homologue ercalcitriol is also a strong activator. Other forms of vitamin D bind with lower affinity, as does the secondary bile acid lithocholic acid. The receptor belongs to the family of trans-acting transcriptional regulatory factors and shows similarity of sequence to the steroid and thyroid hormone receptors. Downstream targets of this nuclear hormone receptor include many genes involved in mineral metabolism. The receptor regulates a variety of other metabolic pathways, such as those involved in the immune response and cancer. VDR variants that bolster vitamin D action are directly correlated with AIDS progression rates, and the VDR association with progression to AIDS follows an additive model. The FokI polymorphism is a risk factor for enveloped virus infection as revealed in a meta-analysis. The importance of this gene has also been noted in the natural aging process, where 3’UTR haplotypes of the gene showed an association with longevity. Clinical relevance Mutations in this gene are associated with type II vitamin D-resistant rickets. A single nucleotide polymorphism in the initiation codon results in an alternate translation start site three codons downstream. Alternative splicing results in multiple transcript variants encoding the same protein. VDR gene variants seem to influence many biological endpoints, including those related to osteoporosis. The vitamin D receptor plays an important role in regulating the hair cycle. Loss of VDR is associated with hair loss in experimental animals. Experimental studies have shown that the unliganded VDR interacts with regulatory regions in cWnt (wnt signaling pathway) and sonic hedgehog target genes and is required for the induction of these pathways during the postnatal hair cycle. These studies have revealed novel actions of the unliganded VDR in regulating the post-morphogenic hair cycle. Researchers have focused their efforts on elucidating the role of VDR polymorphisms in different diseases and normal phenotypes, such as HIV-1 infection susceptibility and progression or the natural aging process. The most remarkable findings include the report of VDR variants that bolster vitamin D action and are directly correlated with AIDS progression rates, the observation that the VDR association with progression to AIDS follows an additive model, and the role of the FokI polymorphism as a risk factor for enveloped virus infection as revealed in a meta-analysis. 
Interactions Vitamin D receptor has been shown to interact with many other factors which will affect transcription activation: BAG1, CAV3, MED12, MED24, NCOR1, NCOR2, NCOA2 RXRA, RUNX1, RUNX1T1, SNW1, STAT1, and ZBTB16. Interactive pathway map References Further reading External links Nuclear Receptor Resource Vitamin D Receptor: Molecule of the Month IUPHAR: Vitamin D receptor Vitamin D Intracellular receptors Transcription factors
Vitamin D receptor
[ "Chemistry", "Biology" ]
820
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
5,828,178
https://en.wikipedia.org/wiki/List%20of%20gravitationally%20rounded%20objects%20of%20the%20Solar%20System
This is a list of most likely gravitationally rounded objects (GRO) of the Solar System, which are objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium). Apart from the Sun itself, these objects qualify as planets according to common geophysical definitions of that term. The radii of these objects range over three orders of magnitude, from planetary-mass objects like dwarf planets and some moons to the planets and the Sun. This list does not include small Solar System bodies, but it does include a sample of possible planetary-mass objects whose shapes have yet to be determined. The Sun's orbital characteristics are listed in relation to the Galactic Center, while all other objects are listed in order of their distance from the Sun. Star The Sun is a G-type main-sequence star. It contains almost 99.9% of all the mass in the Solar System. Planets In 2006, the International Astronomical Union (IAU) defined a planet as a body in orbit around the Sun that was large enough to have achieved hydrostatic equilibrium and to have "cleared the neighbourhood around its orbit". The practical meaning of "cleared the neighborhood" is that a planet is comparatively massive enough for its gravitation to control the orbits of all objects in its vicinity. In practice, the term "hydrostatic equilibrium" is interpreted loosely. Mercury is round but not actually in hydrostatic equilibrium, yet it is universally regarded as a planet nonetheless. According to the IAU's explicit count, there are eight planets in the Solar System: four terrestrial planets (Mercury, Venus, Earth, and Mars) and four giant planets, which can be divided further into two gas giants (Jupiter and Saturn) and two ice giants (Uranus and Neptune). Excluding the Sun, the four giant planets account for more than 99% of the mass of the Solar System. Dwarf planets Dwarf planets are bodies orbiting the Sun that are massive and warm enough to have achieved hydrostatic equilibrium, but have not cleared their neighbourhoods of similar objects. Since 2008, there have been five dwarf planets recognized by the IAU, although only Pluto has actually been confirmed to be in hydrostatic equilibrium (Ceres is close to equilibrium, though some anomalies remain unexplained). Ceres orbits in the asteroid belt, between Mars and Jupiter. The others all orbit beyond Neptune. Astronomers usually refer to solid bodies such as Ceres as dwarf planets, even if they are not strictly in hydrostatic equilibrium. They generally agree that several other trans-Neptunian objects (TNOs) may be large enough to be dwarf planets, given current uncertainties. However, there has been disagreement on the required size. Early speculations were based on the small moons of the giant planets, which attain roundness around a threshold of 200 km radius. However, these moons are at higher temperatures than TNOs and are icier than TNOs are likely to be. An IAU question-and-answer press release from 2006 gave a radius of 800 km and a corresponding mass as cut-offs that normally would be enough for hydrostatic equilibrium, while stating that observation would be needed to determine the status of borderline cases. Many TNOs in the 200–500 km radius range are dark and low-density bodies, which suggests that they retain internal porosity from their formation, and hence are not planetary bodies (as planetary bodies have sufficient gravitation to collapse out such porosity). In 2023, Emery et al. 
wrote that near-infrared spectroscopy by the James Webb Space Telescope (JWST) in 2022 suggests that Sedna, Gonggong, and Quaoar underwent internal melting, differentiation, and chemical evolution, like the larger dwarf planets Pluto, Eris, Haumea, and Makemake, but unlike "all smaller KBOs". This is because light hydrocarbons are present on their surfaces (e.g. ethane, acetylene, and ethylene), which implies that methane is continuously being resupplied, and that methane would likely come from internal geochemistry. On the other hand, the surfaces of Sedna, Gonggong, and Quaoar have low abundances of CO and CO2, similar to Pluto, Eris, and Makemake, but in contrast to smaller bodies. This suggests that the threshold for dwarf planethood in the trans-Neptunian region is around 500 km radius. In 2024, Kiss et al. found that Quaoar has an ellipsoidal shape incompatible with hydrostatic equilibrium for its current spin. They hypothesised that Quaoar originally had a rapid rotation and was in hydrostatic equilibrium, but that its shape became "frozen in" and did not change as it spun down due to tidal forces from its moon Weywot. If so, this would resemble the situation of Saturn's moon Iapetus, which is too oblate for its current spin. Iapetus is generally still considered a planetary-mass moon nonetheless, though not always. The table below gives Orcus, Quaoar, Gonggong, and Sedna as additional consensus dwarf planets; slightly smaller Salacia, which is larger than 400 km radius, has been included as a borderline case for comparison, (and is therefore italicized). As for objects in the asteroid belt, none are generally agreed as dwarf planets today among astronomers other than Ceres. The second- through fifth-largest asteroids have been discussed as candidates. Vesta (radius ), the second-largest asteroid, appears to have a differentiated interior and therefore likely was once a dwarf planet, but it is no longer very round today. Pallas (radius ), the third-largest asteroid, appears never to have completed differentiation and likewise has an irregular shape. Vesta and Pallas are nonetheless sometimes considered small terrestrial planets anyway by sources preferring a geophysical definition, because they do share similarities to the rocky planets of the inner solar system. The fourth-largest asteroid, Hygiea (radius ), is icy. The question remains open if it is currently in hydrostatic equilibrium: while Hygiea is round today, it was probably previously catastrophically disrupted and today might be just a gravitational aggregate of the pieces. The fifth-largest asteroid, Interamnia (radius ), is icy and has a shape consistent with hydrostatic equilibrium for a slightly shorter rotation period than it now has. Satellites There are at least 19 natural satellites in the Solar System that are known to be massive enough to be close to hydrostatic equilibrium: seven of Saturn, five of Uranus, four of Jupiter, and one each of Earth, Neptune, and Pluto. Alan Stern calls these satellite planets, although the term major moon is more common. The smallest natural satellite that is gravitationally rounded is Saturn I Mimas (radius ). This is smaller than the largest natural satellite that is known not to be gravitationally rounded, Neptune VIII Proteus (radius ). Several of these were once in equilibrium but are no longer: these include Earth's moon and all of the moons listed for Saturn apart from Titan and Rhea. 
The status of Callisto, Titan, and Rhea is uncertain, as is that of the moons of Uranus, Pluto and Eris. The other large moons (Io, Europa, Ganymede, and Triton) are generally believed to still be in equilibrium today. Other moons that were once in equilibrium but are no longer very round, such as Saturn IX Phoebe (radius ), are not included. In addition to not being in equilibrium, Mimas and Tethys have very low densities and it has been suggested that they may have non-negligible internal porosity, in which case they would not be satellite planets. The moons of the trans-Neptunian objects (other than Charon) have not been included, because they appear to follow the normal situation for TNOs rather than the moons of Saturn and Uranus, and become solid at a larger size (900–1000 km diameter, rather than 400 km as for the moons of Saturn and Uranus). Eris I Dysnomia and Orcus I Vanth, though larger than Mimas, are dark bodies in the size range that should allow for internal porosity, and in the case of Dysnomia a low density is known. Satellites are listed first in order from the Sun, and second in order from their parent body. For the round moons, this mostly matches the Roman numeral designations, with the exceptions of Iapetus and the Uranian system. This is because the Roman numeral designations originally reflected distance from the parent planet and were updated for each new discovery until 1851, but by 1892, the numbering system for the then-known satellites had become "frozen" and from then on followed order of discovery. Thus Miranda (discovered 1948) is Uranus V despite being the innermost of Uranus' five round satellites. The missing Saturn VII is Hyperion, which is not large enough to be round (mean radius ). See also List of Solar System objects by size Lists of astronomical objects List of former planets Planetary-mass object Notes Unless otherwise cited Manual calculations (unless otherwise cited) Individual calculations Other notes References Hydrostatic equilibrium Solar System
List of gravitationally rounded objects of the Solar System
[ "Astronomy" ]
1,902
[ "Planetary-mass objects", "Astronomical objects", "Outer space", "Solar System" ]
5,828,847
https://en.wikipedia.org/wiki/Strategic%20communication
Strategic communication is the purposeful use of communication by an organization to reach a specific goal. Organizations like governments, corporations, NGOs and militaries seeking to communicate a concept, process, or data to satisfy their organizational or strategic goals will use strategic communication. The modern process features advanced planning, international telecommunications, and dedicated global network assets. Targeted organizational goals can include commercial, non-commercial, military business, combat, political warfare and logistic goals. Strategic communication can either be internal or external to the organization. The interdisciplinary study of strategic communications includes organizational communication, management, military history, mass communication, PR, advertising and marketing. Definitions Strategic communication refers to policy-making and guidance for consistent information activity within an organization and between organizations. Equivalent business management terms include integrated (marketing) communication, organizational communication, corporate communication, institutional communication, etc. (see paragraph on 'Business and Commercial Application' below). It involves a strategic approach to planning, developing, and eventually executing communication campaigns in order to achieve specific goals and objectives. It also includes analyzing communication needs and overall effectiveness. Strategic communication management could be defined as the systematic planning and realization of information flow, communication, media development, and image care on a long-term horizon. It conveys deliberate messages through the most suitable media to the designated audiences at the appropriate time to contribute to and achieve the desired long-term effect. Communication management is process creation. It has to bring three factors into balance: the message, the media channel, and the audience. Business and commercial application In business and commercial settings, strategic communication is communication aligned with the company's overall strategy to enhance its strategic positioning. Strategic communication, sometimes known as public relations, is a conscious, planned, and ongoing effort made by organizations. The goal is to create a receptive environment for improving cooperation, reducing conflict, and marketing products or services. Government and Defense application The U.S. government outlines its use of strategic communication as "government efforts to understand and engage key audiences to create, strengthen, or preserve conditions favorable for the advancement of United States Government interests, policies, and objectives through the use of coordinated programs, plans, themes, messages, and products synchronized with the actions of all instruments of national power." Further, in the US DoD's Principles of Strategic Communication," Robert T. Hastings Jr. (2008), acting Assistant Secretary of Defense for Public Affairs, described strategic communication as "the synchronization of images, actions and words to achieve a desired effect." NATO Policy defines its strategic communication as "the coordinated and appropriate use of NATO communications activities and capabilities – Public Diplomacy, Military Public Affairs, Information Operations, and Psychological Operations, as appropriate – in support of Alliance policies, operations and activities, and in order to advance NATO's aims". 
Strategic Communication is a process that supports and strengthens efforts to achieve objectives. It guides and informs decisions rather than the organization. Considerations of Strategic Communication should be integrated from the early planning stages and be followed by communication activities. Steve Tatham of the UK Defence Academy offers an alternative view of strategic communication. He argues that while strategic communication is desirable to bind and coordinate communications together, particularly from governments or the military, it should be viewed as something more fundamental than just a process. The "informational effect" should be placed at the center of command and action should be calibrated against that effect, which includes the evaluation of second and third order effects. Tatham argues that "strategic communication" (singular) refers to the abstract concept, while "strategic communications" (plural) refers to the actual process of communicating, which includes target audience analysis, evaluation of conduits, measurements of effect, etc. Objectives Strategic communication provides a conceptual umbrella that enables organizations to integrate their disparate messaging efforts. It allows them to create and share communications that, while varying in style and purpose, maintain an inner coherence. This can reinforce the organizational message and brand. Strategic communicators aim to prevent contradictory and confusing messaging to different groups across all media platforms. Communication is strategic when it is consistent with the organization's mission, vision, values, and when it is able to enhance the strategic positioning and competitiveness between their competitors. As a result of this communication, strategic communication follows ‘the nature of organizational communication in general, and strategic communication in particular, is defined as the purposeful use of communication by an organization to fulfill its mission.' The Strategic Communications Framework uses an objective that aims to communicate with the audience/organization. The application of the specific content will help achieve the business goal clearly. While communication is something that happens in the organization, businesses that implement strategies impacting the effectiveness of their business communications can achieve measurable results. Technological advancements have been a factor in such business because information can be communicated through various channels and media. The growth of technology has accelerated communication and allows customers to connect and communicate with others. This can make it easier for them to reach each other through a type of communication that suits their needs. Mulhern (2009) states, ‘These changes mean that marketers are in a far more challenging competitive environment in attempting to fulfil customers wants and needs, while simultaneously seeking to develop long-term relationships.' Changes in communication will help communication goals, organization, and communication channels. This will measure the effectiveness of the communication tactics used in a business for their audience. Strategic planning To have an object, the first thing to do is have a plan for the business to communicate how the business is formed and to see how strong its core is. Ensure that alignment with the organization's understanding of where it is currently at. 
An approach that could be used to determine the current state of the objective is to do a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis. When using a SWOT analysis, the strengths and weaknesses must be realistic. This helps to make improvements or adjustments where previous efforts were not effective. The analysis helps to build a better understanding of the business and to plan and make the objectives more solid because it shows the strengths, weaknesses, opportunities, and threats the business is facing. This helps decide where the business currently is and where it will be in the future. Planning is a continuous process that involves research, analysis, execution, and assessment. Success in this process requires diligent and continual analysis, which should be fed back into the planning and action stages. Communicate with key stakeholders Communicate with stakeholders, audiences, and the public. Hold interviews with customers to learn their priorities and what goals they want to achieve with the organization. Having a good understanding of the business issues will allow the organization to offer effective solutions that will help the objective. Ask questions to see what the customer's aim is, with the main goal of focusing on what needs to be achieved and done, not simply what he or she wants. Tailoring messages to specific stakeholder groups can increase their effectiveness since it demonstrates an understanding of their priorities and concerns. ‘Sustainability calls for a value chain approach, whereby firms need to take wider responsibility and collaborate with a range of stakeholders to ensure that unsustainable practices are addressed’. Develop actionable objectives Objectives should have specific end points to provide an indicator of success. By understanding the consumer and the marketplace, he or she can design a marketing strategy. Having an understanding of what is happening around the organization will make planning the marketing strategy easier because the vision ensures the objectives are SMART. Objectives are the intended goals of a business's campaigns, showing what is achievable. Objectives are effective when they follow SMART criteria: they need to be specific, measurable, achievable, realistic, and time-sensitive. Assign each objective to a specific individual or group so that responsibilities are set from the start and no later adjustments are needed; the responsibility is in their hands. This indicates that the specific individual or group has a directly assigned preliminary objective, and they will need to develop a range of possible strategies and tactics to achieve the objectives given to them. Develop and prioritize potential strategies and tactics Brainstorm a list of potential strategies achievable for each of the objectives given out by the business and its customers, and have tactics that will support these strategies and objectives. Gather as a team to discuss the merits of each proposed strategy for the organization. The discussion should cover which strategies are most likely to be used and which are unlikely to be used. Some strategies will not be achievable, will be too difficult, or will have no available solution, so these are crossed off the list. This shortens the list and helps to round up the best strategies left to be used. 
Collectively decide which strategies and tactics are going to be pursued to provide a clear objective for the business. The main focus is to achieve the objectives that were given out by the organization. Metrics, timelines, and responsibilities Include detail behind those strategies and tactics named out so that there is a clear objective and what is needed to be focused on. Explain how it will be successful, how it is measured, the time frame and who will be responsible. Planning is essential to ensure things are successfully planned out and strategies and tactics are successful. Planning does more than help a business achieve its objectives; it also improves communication within the group. Everyone should be assigned a responsibility so that these strategies and tactics are met. Concept development and experimentation (CD&E) Strategic communication is subject to multinational CD&E, led by the military, because communication is applicable to crisis management and compliance strategies. Across the spectrum of missions and broadly covering all levels of involvement in a civil-military, comprehensive approach context, the function of strategic communication and its military tool for implementation – information operations – have evolved and are still under development, in particular concerning their exact delineation of responsibilities and the integration of non-military and non-coalition actors. Three major lines of development are acknowledged as state of the art, with practical impact on current crisis management operations and/or multinational interoperability: (1) U.S. national developments, which one can argue have resulted in the most mature concepts for both strategic communication and information operations so far; (2) NATO concept development, which in the case of strategic communication is very much driven by current mission requirements (such as ISAF in Afghanistan), but also has benefitted from multinational CD&E in the case of information operations; and (3) multinational CD&E projects such as the U.S.-led Multinational Experiment (MNE) series and the Multinational Information Operations Experiment (MNIOE), led by Germany. Intensive discussions involving civil and military practitioners of strategic communication and information operations - with a view on existing national and NATO approaches to strategic communication - have questioned whether an updated approach and definition of strategic communication is required. Consequently, a reorientation of CD&E efforts was suggested, focussing on "Integrated Communication," which reflects the shared baseline assessment with a broader scope, including but not limited to strategic communication: the ineffective top-down approach to communication (mission-specific, strategic-political guidance for information activities; information strategy; corporate vision; shared narrative) and the insufficient horizontal and vertical integration of communication (cohesion of a coalition; corporate identity; cultural awareness; communication by words and deeds - the "say-do-gap"; involvement of non-coalition actors - participatory communication). This change should prevent false expectations of potential customers of resulting concepts who currently are reluctant to engage in CD&E on the implemented subject of strategic communication. 
See also Audience analysis Brand management Impression management Marketing communications Media intelligence Media manipulation Reputation management Public diplomacy Public relations Notes References Grigorescu, A., & Lupu, M-M. (2015). Integrated Communication as Strategic Communication. Review of International Comparative Management [Revista de Management Comparat International], 16(4), 479–490. Hallahan, K., Holtzhausen, D., Van Ruler, B., Verčič, D., & Sriramesh, K. (2007). Defining strategic communication. International Journal of Strategic Communication, 1(1), 3–35. doi:10.1080/15531180701285244 Hoover, Cris. “The Strategic Communication Plan.” FBI Law Enforcement Bulletin 79, no. 8 (August 2010): 16–21. Kitchen, P. J., & Burgmann, I. (2015). Integrated marketing communication: making it work at a strategic level. Journal of Business Strategy, 36(4), 34–39. doi:10.1108/JBS-05-2014-0052 Knudsen, G. H., & Lemmergaard, G. (2014). Strategic serendipity: How one organization planned for and took advantage of unexpected communicative opportunities. Culture & Organization, 20(5), 392–409. doi:10.1080/14759551.2014.948440 Kotler, P., Burton, S., Deans, K., Brown, L., & Armstrong, G. (2013). Designing a customer-driven marketing strategy. Marketing, 9th ed. (p. 10). NSW, Australia: Pearson Australia. Kotler, P., Burton, S., Deans, K., Brown, L., & Armstrong, G. (2013). Marketing planning. Marketing, 9th ed. (pp. 78–81). NSW, Australia: Pearson Australia. Mulhern, F. (2009). Integrated marketing communications: from media channels to digital connectivity. Journal of Marketing Communications, 15(2/3), 85–102. Scandelius, C., & Cohen, G. (2015). Achieving collaboration with diverse stakeholders. The role of strategic ambiguity in CSR communication. doi:10.1016/j.jbusres.2016.01.037 Sources Tatham S A Cdr, RN. Military terminology Mass media technology Nuclear command and control Military railways Military radio systems Maritime communication Planning Communication Promotion and marketing communications
Strategic communication
[ "Technology" ]
2,934
[ "Information and communications technology", "Mass media technology" ]
7,613,472
https://en.wikipedia.org/wiki/Lab%20on%20a%20Chip%20%28journal%29
Lab on a Chip is a peer-reviewed scientific journal which publishes original (primary) research and review articles on any aspect of miniaturisation at the micro and nano scale. Lab on a Chip is published twice monthly by the Royal Society of Chemistry (RSC) and the editor-in-chief is Aaron Wheeler (University of Toronto). The journal was established in 2001 and hosts other RSC publications: Highlights in Chemical Technology and Highlights in Chemical Biology. According to the Journal Citation Reports, the journal has a 2023 impact factor of 6.1. Subject coverage Lab on a Chip publishes research at the micro- and nano-scale across a variety of disciplines including chemistry, biology, bioengineering, physics, electronics, clinical science, chemical engineering, and materials science focusing on lab on a chip technologies. Article types Lab on a Chip publishes full research papers, urgent communications, critical and tutorial reviews. Chips & Tips Chips & Tips is an online resource launched in 2006. It is moderated by David Beebe (University of Wisconsin–Madison) and Glenn Walker (North Carolina State University and University of North Carolina at Chapel Hill). Chips & Tips pages are brief practical tips for the miniaturisation community, including pictures, videos, and schematics. See also List of chemistry journals List of scientific journals References External links Chips & Tips Chemistry journals Royal Society of Chemistry academic journals Microtechnology Biweekly journals English-language journals Nanotechnology journals Academic journals established in 2010
Lab on a Chip (journal)
[ "Materials_science", "Engineering" ]
304
[ "Nanotechnology journals", "Materials science journals", "Microtechnology", "Materials science" ]
7,617,705
https://en.wikipedia.org/wiki/Turner%20Museum%20of%20Glass
The Turner Museum of Glass is housed in the Department of Materials Science and Engineering at the University of Sheffield, in England. It is in the Sir Robert Hadfield Building with the entrance from Portobello Street. It contains examples from ancient Egypt and Rome but mainly from major European and American glassworkers, with a particular focus on those from the 1920s to 1950s. It was founded in 1943 by Professor W. E. S. Turner of the University, who additionally was the senior author on many papers on glass technology. One of the exhibits is the wedding dress of his wife Helen Monro Turner (Helen Nairn, married 1 July 1943) which is made of glass fibre, as are the matching shoes. This has been selected as one of the items in the BBC's extended collection based on A History of the World in 100 Objects. References External links Turner Museum of Glass website Museums established in 1943 Museums in Sheffield Glass museums and galleries Sheffield University buildings and structures University museums in England Decorative arts museums in England 1943 establishments in England
Turner Museum of Glass
[ "Materials_science", "Engineering" ]
208
[ "Glass engineering and science", "Glass museums and galleries" ]
7,620,308
https://en.wikipedia.org/wiki/Elementary%20reaction
An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may be in fact a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes. In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the product(s): A → products. At constant temperature, the rate of such a reaction is proportional to the concentration of the species A: rate = k[A]. In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s): A + B → products. The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B: rate = k[A][B]. The rate expression for an elementary bimolecular reaction is sometimes referred to as the law of mass action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction. This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments. According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred to as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis–Menten approximations. Notes Chemical kinetics Physical chemistry
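As a concrete illustration of these rate expressions, the following Python sketch (not taken from the sources above; the rate constant and initial concentrations are arbitrary illustrative values) integrates the bimolecular rate law for A + B → products with a simple forward-Euler step.

```python
def bimolecular_progress(k, a0, b0, t_end, dt=1e-3):
    """Integrate d[A]/dt = d[B]/dt = -k[A][B] for the elementary reaction
    A + B -> products.  k in L mol^-1 s^-1, concentrations in mol L^-1."""
    a, b = a0, b0
    for _ in range(int(t_end / dt)):
        rate = k * a * b          # law of mass action for an elementary step
        a -= rate * dt
        b -= rate * dt
    return a, b

# Illustrative values only: k = 0.5 L mol^-1 s^-1, [A]0 = [B]0 = 1.0 mol/L
print(bimolecular_progress(0.5, 1.0, 1.0, t_end=10.0))
# With equal initial concentrations the exact result is [A] = A0/(1 + k*A0*t) ≈ 0.167
```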
Elementary reaction
[ "Physics", "Chemistry" ]
392
[ "Chemical reaction engineering", "Applied and interdisciplinary physics", "nan", "Chemical kinetics", "Physical chemistry" ]
16,693,318
https://en.wikipedia.org/wiki/Dose-volume%20histogram
A dose-volume histogram (DVH) is a histogram relating radiation dose to tissue volume in radiation therapy planning. DVHs are most commonly used as a plan evaluation tool and to compare doses from different plans or to structures. DVHs were introduced by Michael Goitein (who introduced radiation therapy concepts such as the "beam's-eye-view," "digitally reconstructed radiograph," and uncertainty/error in planning and positioning, among others) and Verhey in 1979. DVH summarizes 3D dose distributions in a graphical 2D format. In modern radiation therapy, 3D dose distributions are typically created in a computerized treatment planning system (TPS) based on a 3D reconstruction of a CT scan. The "volume" referred to in DVH analysis is a target of radiation treatment, a healthy organ nearby a target, or an arbitrary structure. Types DVHs can be visualized in either of two ways: differential DVHs or cumulative DVHs. A DVH is created by first determining the size of the dose bins of the histogram. Bins can be of arbitrary size, 0.005 Gy, 0.2 Gy or 1 Gy for instance. The size is often a matter of tradeoff between accuracy and computational or memory cost (if we store the DVH in a database). In a differential DVH, bar or column height indicates the volume of structure receiving a dose given by the bin. Bin doses are along the horizontal axis, and structure volumes (either percent or absolute volumes) are on the vertical. The differential DVH takes the appearance of a typical histogram. The total volume of the organ that receives a certain dose is plotted in the appropriate dose bin. This volume is determined by the total number of voxels characterized by a specified range of dosage for the organ considered. The differential DVH provides information about changes in dose within the structure considered as well as visualization of minimum and maximum dose. The cumulative DVH is plotted with bin doses along the horizontal axis, as well. However, the column height of the first bin (e.g., [0, 1] Gy) represents the volume of structure receiving a dose greater than or equal to that dose. The column height of the second bin (e.g., (1, 2] Gy) represents the volume of structure receiving greater than or equal to that dose, etc. With very fine (small) bin sizes, the cumulative DVH takes on the appearance of a smooth line graph. The lines always slope and start from top-left to bottom-right. For a structure receiving a very homogeneous dose (100% of the volume receiving exactly 10 Gy, for example) the cumulative DVH will appear as a horizontal line at the top of the graph, at 100% volume as plotted vertically, with a vertical drop at 10 Gy on the horizontal axis. A DVH used clinically usually includes all structures and targets of interest in the radiotherapy plan, each line plotted a different color, representing a different structure. The vertical axis is almost always plotted as percent volume (rather than absolute volume), as well. Drawbacks Clinical studies have shown that DVH metrics correlate with patient toxicity outcomes. A drawback of the DVH methodology is that it offers no spatial information; i.e., a DVH does not show where within a structure a dose is received. Also, DVHs from initial radiotherapy plans represent the doses to structures at the start of radiation treatment. As treatment progresses and time elapses, if there are changes (i.e. if patients lose weight, if tumors shrink, if organs change shape, etc.), the original DVH loses its accuracy. 
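The construction of differential and cumulative DVHs described above can be sketched in a few lines of Python/NumPy. This is only an illustration, not code from any treatment planning system; it assumes the voxel doses of a single structure are already available and that all voxels have equal volume.

```python
import numpy as np

def dose_volume_histogram(voxel_doses_gy, bin_width_gy=1.0):
    """Return bin lower edges [Gy], differential DVH and cumulative DVH
    (both in percent volume) for one structure, assuming equal voxel volumes."""
    doses = np.asarray(voxel_doses_gy, dtype=float)
    edges = np.arange(0.0, doses.max() + bin_width_gy, bin_width_gy)
    counts, edges = np.histogram(doses, bins=edges)
    differential = 100.0 * counts / doses.size          # % volume in each dose bin
    # % volume receiving at least the lower edge of each bin
    cumulative = 100.0 * (doses.size - np.cumsum(counts) + counts) / doses.size
    return edges[:-1], differential, cumulative

# Made-up voxel doses for a single structure, for illustration only
rng = np.random.default_rng(0)
doses = rng.normal(loc=20.0, scale=2.0, size=10_000).clip(min=0.0)
bins, diff_dvh, cum_dvh = dose_volume_histogram(doses)
```

Plotting cum_dvh against bins with a fine bin width gives the smooth, monotonically decreasing curve described above.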
References External links Radiation therapy Medical physics
Dose-volume histogram
[ "Physics" ]
798
[ "Applied and interdisciplinary physics", "Medical physics" ]
16,697,638
https://en.wikipedia.org/wiki/Benjamin%E2%80%93Ono%20equation
In mathematics, the Benjamin–Ono equation is a nonlinear partial integro-differential equation that describes one-dimensional internal waves in deep water. It was introduced by Benjamin and Ono. The Benjamin–Ono equation is u_t + u u_x + H u_xx = 0, where H is the Hilbert transform. It possesses infinitely many conserved densities and symmetries; thus it is a completely integrable system. See also Bretherton equation References Sources External links Benjamin-Ono equations: Solitons and Shock Waves Nonlinear partial differential equations Integrable systems
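Because the Hilbert transform acts on a Fourier mode of wavenumber k as multiplication by -i·sgn(k), the equation is often treated pseudo-spectrally on a periodic grid. The following Python sketch is only an illustration under that sign convention (conventions differ between authors) and is not taken from the sources above; it assembles the right-hand side u_t = -u u_x - H[u_xx].

```python
import numpy as np

def benjamin_ono_rhs(u, dx):
    """Pseudo-spectral evaluation of u_t = -u u_x - H[u_xx] on a periodic grid,
    with the Hilbert transform applied as -i*sign(k) in Fourier space."""
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(1j * k * u_hat))
    # second derivative (ik)^2 * u_hat, then the Hilbert transform -i*sign(k)
    h_uxx = np.real(np.fft.ifft(-1j * np.sign(k) * (1j * k) ** 2 * u_hat))
    return -u * u_x - h_uxx

# One tiny explicit Euler step for a small-amplitude periodic profile (illustration only)
n, length = 256, 2.0 * np.pi
x = np.linspace(0.0, length, n, endpoint=False)
u = 0.1 * np.cos(x)
u = u + 1e-4 * benjamin_ono_rhs(u, length / n)
```

In practice a stiffer or exponential time integrator would be used for the dispersive linear term; the Euler step here only shows how the right-hand side is assembled.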
Benjamin–Ono equation
[ "Physics", "Mathematics" ]
100
[ "Integrable systems", "Applied mathematics", "Theoretical physics", "Applied mathematics stubs", "Theoretical physics stubs" ]
16,698,745
https://en.wikipedia.org/wiki/Maintenance%20dose
In pharmacokinetics, a maintenance dose is the maintenance rate [mg/h] of drug administration equal to the rate of elimination at steady state. This is not to be confused with dose regimen, which is a type of drug therapy in which the dose [mg] of a drug is given at a regular dosing interval on a repetitive basis. Continuing the maintenance dose for about 4 to 5 half-lives (t1/2) of the drug will approximate the steady state level. One or more doses higher than the maintenance dose can be given together at the beginning of therapy as a loading dose. A loading dose is most useful for drugs that are eliminated from the body relatively slowly. Such drugs need only a low maintenance dose in order to keep the amount of the drug in the body at the appropriate level, but this also means that, without an initial higher dose, it would take a long time for the amount of the drug in the body to reach that level. Calculating the maintenance dose The required maintenance dose may be calculated as MD = (Cp × CL) / F, where: MD is the maintenance dose rate [mg/h], Cp is the desired peak concentration of drug [mg/L], CL is the clearance of drug in the body [L/h], and F is the bioavailability. For an intravenously administered drug, the bioavailability F will equal 1, since the drug is directly introduced to the bloodstream. If the patient requires an oral dose, bioavailability will be less than 1 (depending upon absorption, first pass metabolism etc.), requiring a larger loading dose. See also Therapeutic index References Pharmacokinetics
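As a worked example of this relationship, the following short Python sketch computes the maintenance dose rate from the quantities defined above; the numerical values are purely illustrative.

```python
def maintenance_dose_rate(cp_mg_per_l, cl_l_per_h, bioavailability):
    """MD [mg/h] = Cp [mg/L] * CL [L/h] / F."""
    return cp_mg_per_l * cl_l_per_h / bioavailability

# Illustrative values: target Cp = 10 mg/L, CL = 3 L/h
print(maintenance_dose_rate(10.0, 3.0, 1.0))   # IV drug (F = 1): 30.0 mg/h
print(maintenance_dose_rate(10.0, 3.0, 0.5))   # oral drug with F = 0.5: 60.0 mg/h
```

The second call shows how a bioavailability below 1 raises the required dose rate, as noted above for orally administered drugs.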
Maintenance dose
[ "Chemistry" ]
354
[ "Pharmacology", "Pharmacokinetics" ]
16,699,552
https://en.wikipedia.org/wiki/Heavy%20lift
In transportation, heavy lift refers to the handling and installation of heavy items which are indivisible, and of weights generally accepted to be over 100 tons and of widths/heights of more than 100 meters. These oversized items are transported from one place to another (sometimes across country borders), then lifted or installed into place. Characteristic of heavy-lift goods is the absence of standardization, which requires individual transport planning. Mode Of Transport Road Transport Air Transport Sea Transport Rail Transport Typical cargo Typical heavy-lift cargo includes generators, turbines, reactors, boilers, towers, casting, heaters, presses, locomotives, boats, satellites, military personnel and equipment. In the offshore industry, parts of oil rigs and production platforms are also lifted; some of these are also removed at the end of an installation's working life. Recent notable lifts have included several of >2000 metric tons in the de-commissioning of the North West Hutton oil field in the British sector of the North Sea. Typical cargo consists of Wind turbines Farm machinery Earth moving machinery Cranes Mining equipment and accessories Storage tanks Heavy industrial components Construction machinery Mode of transport Road Transport Road transport of heavy and oversized loads is called heavy haulage. Specialized equipment, employed only for heavy-duty work, is used to haul these loads. This type of transport requires route planning and escort vehicles. Road transport is carried out from or to manufacturing plants or factories. Heavy Lift Road Equipment. Lowboy trailers Hydraulic modular trailers Tractor unit Ballast tractor Self-propelled modular trailer Mobile cranes Air Transport Heavy lift transport of project cargo is done using cargo planes, which are among the largest aircraft due to the size of the load; these loads are carried to or from airports via road transport. In 2021, Gebrüder Weiss, a logistics company, chartered an Antonov An-225 Mriya, the world's largest cargo plane, to transport project cargo from China to Poland for a Polish factory. This was done due to time pressure during the COVID-19 pandemic. Largest Cargo Planes Antonov An-225 Mriya Antonov An-124 Condor Boeing 747-8 Freighter Lockheed C-5 Galaxy Airbus Beluga XL Sea Transport Sea transport is the preferred mode of transport for long distances due to its large capacity and low cost compared to air transport, but this mode requires more planning and the transport itself takes a long time. Loads are carried to the port via road transport; cranes and gantries are then used to place the loads onto or into the vessels. Heavy Lift Sea Equipment. Heavy lift ship Deck barge Roro ship Lolo ship Crane vessel Tugboat Rail Transport For land transport, rail is also considered an option for hauling heavy lift cargo instead of road transport, due to the advantages of higher speeds, bulk transportation and lower cost. Companies have managed to haul cargoes over 250 tons and 30 meters long, and some have also started moving longer cargoes such as windmill blades. Disadvantages of rail transport are limited accessibility and size restrictions. Railroads are not connected to airports and ports directly, so ultimately the cargo has to be transferred to road transport to reach its final destination. 
Heavy Lift Rail Equipment Flatcar Crane car Lowmac car Schnabel car Push-pull trains See also Oversize load SPMT Cargo aircraft Cargo ship Crane Heavy hauler Jack Gantry Rigging References Literature Pieper, Marcus: Durchführung eines Schwergut-Transportes mit Binnenschiff und Straßenfahrzeug aus technischer und organisatorischer Sicht. Bremen 1997. Internationale Transport-Zeitschrift; 2008, 13/14. Spezial: Break Bulk, Schwergut-Special Freight transport Heavy haulage Transport
Heavy lift
[ "Physics" ]
748
[ "Physical systems", "Transport" ]
16,700,587
https://en.wikipedia.org/wiki/Particle%20technology
Particle technology is the science and technology of handling and processing particles and powders. It encompasses the production, handling, modification, and use of a wide variety of particulate materials, both wet and dry. Particle handling may include transportation and storage. Particle sizes range from nanometers to centimeters. Particles can be characterized by diverse metrics. The scope of particle technology spans many industries including chemical, petrochemical, agricultural, food, pharmaceuticals, mineral processing, civil engineering, advanced materials, energy, and the environment. Subjects of particle technology Particle technology thus deals with: Behaviour of solids in bulk, including soil mechanics, bulk material handling, silos, conveying, powder metallurgy, nanotechnology; Size reduction including crushing and grinding; Increasing size by flocculation, granulation, powder compaction, tableting, and crystallization; Particle separation, such as sieving, tabling, flotation, magnetic separation, and/or electrostatic precipitation, fluidization, centrifugal separation, and liquid filtration; Analytical procedures such as particle size analysis. Particle characterization Particles are characterized by their individual size and shape, and by the distribution of these properties in bulk quantities. Spherical particles are defined by diameter or radius, and non-spherical particles are defined by the dimensions of their geometric equivalent. The space between particles in bulk means that the bulk density is less than the density of individual particles. The difference between bulk density and particle density may have implications for storage, transportation or other handling of particles. The way in which they move over each other or lock together determines stability or flowability, which is tested by the triaxial shear test. Particle samples can be visualized using microscopy, most commonly by scanning electron microscopy (SEM) or transmission electron microscopy (TEM). Both SEM and TEM can determine pore structure, surface area and structure of a particle. SEM achieves particle visualization by directing a beam of electrons at the particle sample and creating signals upon interaction with the sample, building a 3D image of the sample's topography and surface structure. TEM uses a similar beam of electrons, but the electrons are directed at a thin slice of the sample to form an image of the electrons that pass through the slice. Particle microscopy can reveal properties or defects in a particle. Optics can quantify particle size. Measuring light scattering and diffraction caused by a particle are detectable methods of identifying particle size, and are commonly used in the following techniques: Laser phase Doppler shift: Incident light on a particle is not uniformly distributed, as it is partially reflected and refracted in multiple directions. Particle velocity can be calculated using the Doppler frequency from any signal, while the phase difference between two detectors determines particle size. Fraunhofer diffraction: When a particle is at least 10 times larger than the laser wavelength and the scattering angle is 30° or smaller, the light intensity distribution pattern can be used to calculate the particle size. Production and applications Many industries use particle technologies for particle transportation, separation and fluidization. A variety of production methods are required for particulate materials due to the large differences between them. 
Three major areas of production techniques and their common applications are listed below. Size enlargement Agglomeration is the process of primary particles (of smaller size) coming into contact with each other and forming larger clusters. It occurs in dry powders when particle size is smaller than around 10 μm or when conditions are humid, and in liquids when particles have zero surface charge. It is often induced by Brownian motion in liquids. Aggregation is another process of forming clusters from particles, but where the particles have stronger bonds due to larger surface area of contact. It occurs mostly in homogenous liquid mixtures. Crystallization, either in batches or continuous processes, allows the formation of high-purity crystalline particles from solutions. The product usually has particle size in the millimeter range. Precipitation also forms particulate product from solution. It occurs from two soluble compounds forming an insoluble product in a medium, often aqueous. While the initial particle size of the precipitate formed is only in the nanometer range, the primary particles often spontaneously agglomerate or aggregate to form much larger particles. Polymerization is a special form of precipitation where minimally soluble monomers in an aqueous solution form emulsion droplets with zero solubility. Granulation is the process of forming granular material from powders or smaller particles. It occurs when a binder liquid is mixed with ingredient particles to form compact clusters. These clusters can be further processed and compressed into tablet form for other applications. Extrusion forms objects of a fixed cross-sectional shape when the starting material is pushed through a die with the desired cross-section. This technique is often used for plastic, metal and rubber granules. In the food industry, extrusion is also used extensively for making pasta, crouton, cereal, cookie dough, pet food, etc. to achieve uniformity of these items. Size reduction Comminution is the mechanical reduction of the size of solid materials. It includes crushing, cutting, grinding, milling, vibrating, and other processes. Crushing and cutting breaks down large pieces of dry or tough material to the centimeter range. Milling can be applied to both dry and wet material, resulting in particle size in the millimeter range. Atomization is the process of breaking liquids into a spray of much smaller droplets, like an aerosol. The resulting size of these particles or droplets is usually in the nanometer to micrometer range. There are many industrial applications of liquid atomization, including spray drying, film coating, making nano-emulsions, etc. Other applications include fire sprinklers, crop sprayers, dry shampoos, etc. Emulsification Emulsification is the process of dispersing particles from two or more immiscible liquids together. Oftentimes, one of the immiscible liquids is aqueous (water as solvent) and the other is organic (oil as solvent). Industrial processes usually involve dispersion of the organic solution into the aqueous solution by mixing with high-energy shears or strong turbulence. Due to the unstable nature of emulsions, surfactants or emulsifiers are required to stabilize the final product to achieve longer shelf life. Common applications of emulsions include food, pharmaceuticals and lubricants. Some examples of food emulsions are milk, mayonnaise, butter, and ice cream. 
Some examples of pharmaceutical and lubricant emulsions are ointments, creams, oil-soluble vitamins, and some medications. References Further reading
Particle technology
[ "Chemistry", "Engineering" ]
1,372
[ "Particle technology", "Chemical engineering", "Environmental engineering" ]
662,175
https://en.wikipedia.org/wiki/Silicon%E2%80%93germanium
SiGe, or silicon–germanium, is an alloy with any molar ratio of silicon and germanium, i.e. with a molecular formula of the form Si1−xGex. It is commonly used as a semiconductor material in integrated circuits (ICs) for heterojunction bipolar transistors or as a strain-inducing layer for CMOS transistors. IBM introduced the technology into mainstream manufacturing in 1989. This relatively new technology offers opportunities in mixed-signal circuit and analog circuit IC design and manufacture. SiGe is also used as a thermoelectric material for high-temperature applications (>700 K). History The first paper on SiGe, on the magnetoresistance of silicon–germanium alloys, was published in 1955. The first mention of SiGe devices was actually in the original patent for the bipolar transistor, where the idea of a SiGe base in a heterojunction bipolar transistor (HBT) was discussed, with a description of the physics in 1957. The first epitaxial growth of SiGe heterostructures, which is required for such a transistor, was not demonstrated until 1975, by Erich Kasper and colleagues at the AEG Research Centre (now Daimler Benz) in Ulm, Germany, using molecular beam epitaxy (MBE). Production The use of silicon–germanium as a semiconductor was championed by Bernie Meyerson. The challenge that had delayed its realization for decades was that germanium atoms are roughly 4% larger than silicon atoms. At the usual high temperatures at which silicon transistors were fabricated, the strain induced by adding these larger atoms into crystalline silicon produced vast numbers of defects, precluding the resulting material from being of any use. Meyerson and co-workers discovered that the then-believed requirement for high-temperature processing was flawed, allowing SiGe growth at sufficiently low temperatures that for all practical purposes no defects were formed. Once that basic roadblock was resolved, it was shown that the resultant SiGe materials could be manufactured into high-performance electronics using conventional low-cost silicon processing toolsets. More relevantly, the performance of the resulting transistors far exceeded what was then thought to be the limit of traditionally manufactured silicon devices, enabling a new generation of low-cost commercial wireless technologies such as WiFi. SiGe processes achieve costs similar to those of silicon CMOS manufacturing and are lower than those of other heterojunction technologies such as gallium arsenide. Recently, organogermanium precursors (e.g. isobutylgermane, alkylgermanium trichlorides, and dimethylaminogermanium trichloride) have been examined as less hazardous liquid alternatives to germane for MOVPE deposition of Ge-containing films such as high purity Ge, SiGe, and strained silicon. SiGe foundry services are offered by several semiconductor technology companies. AMD disclosed a joint development with IBM for a SiGe stressed-silicon technology, targeting the 65 nm process. TSMC also sells SiGe manufacturing capacity. In July 2015, IBM announced that it had created working samples of transistors using a 7 nm silicon–germanium process, promising a quadrupling in the number of transistors compared to a contemporary process. SiGe transistors SiGe allows CMOS logic to be integrated with heterojunction bipolar transistors, making it suitable for mixed-signal integrated circuits. Heterojunction bipolar transistors have higher forward gain and lower reverse gain than traditional homojunction bipolar transistors.
This translates into better low-current and high-frequency performance. Being a heterojunction technology with an adjustable band gap, the SiGe offers the opportunity for more flexible bandgap tuning than silicon-only technology. Silicon–germanium on insulator (SGOI) is a technology analogous to the silicon on insulator (SOI) technology currently employed in computer chips. SGOI increases the speed of the transistors inside microchips by straining the crystal lattice under the MOS transistor gate, resulting in improved electron mobility and higher drive currents. SiGe MOSFETs can also provide lower junction leakage due to the lower bandgap value of SiGe. However, a major issue with SGOI MOSFETs is the inability to form stable oxides with silicon–germanium using standard silicon oxidation processing. Thermoelectric application The thermoelectric properties of SiGe was first measured in 1964 with p-SiGe having a ZT up to ~0.7 at 1000˚C and n-SiGe a ZT up to ~1.0 at 1000˚C which are some of the highest performance thermoelectrics at high temperatures. A silicon–germanium thermoelectric device MHW-RTG3 was used in the Voyager 1 and 2 spacecraft. Silicon–germanium thermoelectric devices were also used in other MHW-RTGs and GPHS-RTGs aboard Cassini, Galileo, Ulysses. Light emission By controlling the composition of a hexagonal SiGe alloy, researchers from Eindhoven University of Technology developed a material that can emit light. In combination with its electronic properties, this opens up the possibility of producing a laser integrated into a single chip to enable data transfer using light instead of electric current, speeding up data transfer while reducing energy consumption and need for cooling systems. The international team, with lead authors Elham Fadaly, Alain Dijkstra and Erik Bakkers at Eindhoven University of Technology in the Netherlands and Jens Renè Suckert at Friedrich-Schiller-Universität Jena in Germany, were awarded the 2020 Breakthrough of the Year award by the magazine Physics World. See also Low-κ dielectric Silicon on insulator Silicon-tin Application of silicon-germanium thermoelectrics in space exploration References Further reading External links Ge Precursors for Strained Si and Compound Semiconductors; Semiconductor International, April 1, 2006. Integrated circuits Germanium Silicon alloys Thermoelectricity
Silicon–germanium
[ "Chemistry", "Technology", "Engineering" ]
1,239
[ "Silicon alloys", "Alloys", "Computer engineering", "Integrated circuits" ]
662,267
https://en.wikipedia.org/wiki/Foundry%20model
The foundry model is a microelectronics engineering and manufacturing business model consisting of a semiconductor fabrication plant, or foundry, and an integrated circuit design operation, each belonging to separate companies or subsidiaries. It was first conceived by Morris Chang, the founder of the Taiwan Semiconductor Manufacturing Company Limited (TSMC). Integrated device manufacturers (IDMs) design and manufacture integrated circuits. Many companies, known as fabless semiconductor companies, only design devices; merchant or pure-play foundries only manufacture devices for other companies, without designing them. Examples of IDMs are Intel, Samsung, and Texas Instruments, examples of pure-play foundries are GlobalFoundries, TSMC, and UMC, and examples of fabless companies are AMD, Nvidia, and Qualcomm. Integrated circuit production facilities are expensive to build and maintain. Unless they can be kept at nearly full use, they will become a drain on the finances of the company that owns them. The foundry model addresses these costs in two ways: fabless companies avoid them by not owning such facilities, while merchant foundries draw work from the worldwide pool of fabless companies and, through careful scheduling, pricing, and contracting, keep their plants in full use. History Originally, microelectronic devices were designed and manufactured by the same company. These manufacturers were involved both in the research and development of manufacturing processes and in the research and development of microcircuit design. The first pure-play semiconductor foundry was the Taiwan Semiconductor Manufacturing Company, founded by Morris Chang in 1987 as a spin-off of the government's Industrial Technology Research Institute; splitting design from fabrication in this way had been advocated by Carver Mead in the U.S. but was deemed too costly to pursue there. The separation of design and fabrication became known as the foundry model, with fabless companies outsourcing manufacturing to semiconductor foundries. Fabless semiconductor companies do not have any semiconductor fabrication capability, instead contracting with a merchant foundry for fabrication. The fabless company concentrates on the research and development of an IC-product; the foundry concentrates on manufacturing and testing the physical product. If the foundry does not have any semiconductor design capability, it is a pure-play semiconductor foundry. An absolute separation into fabless and foundry companies is not necessary. Many companies continue to exist that perform both operations and benefit from the close coupling of their skills. Some companies manufacture some of their own designs and contract out to have others manufactured or designed, in cases where they see value or seek special skills. The foundry model is a business model that seeks to optimize productivity. MOSIS The very first merchant foundries were part of the MOSIS service. The MOSIS service gave limited production access to designers with limited means, such as students, university researchers, and engineers at small startups. The designer submitted designs, and these submissions were manufactured with the commercial company's extra capacity. Manufacturers could insert some wafers for a MOSIS design into a collection of their own wafers when a processing step was compatible with both operations.
The commercial company (serving as foundry) was already running the process, so it was effectively being paid by MOSIS for something it was already doing. A factory with excess capacity during slow periods could also run MOSIS designs to avoid having expensive capital equipment stand idle. Under-use of an expensive manufacturing plant could lead to the financial ruin of the owner, so selling surplus wafer capacity was a way to maximize the fab's use. Hence, economic factors created a climate where fab operators wanted to sell surplus wafer-manufacturing capacity and designers wanted to purchase manufacturing capacity rather than try to build it. Although MOSIS opened the doors to some fabless customers, earning additional revenue for the foundry and providing inexpensive service to the customer, running a business around MOSIS production was difficult. The merchant foundries sold wafer capacity on a surplus basis, as a secondary business activity. Services to the customers were secondary to the commercial business, with little guarantee of support. The choice of merchant dictated the design, development flow, and available techniques to the fabless customer. Merchant foundries might require proprietary and non-portable preparation steps. Foundries concerned with protecting what they considered trade secrets of their methodologies might only be willing to release data to designers after an onerous nondisclosure procedure. Dedicated foundry In 1987, the world's first dedicated merchant foundry opened its doors: Taiwan Semiconductor Manufacturing Company (TSMC). The distinction of 'dedicated' is in reference to the typical merchant foundry of the era, whose primary business activity was the building and selling of its own IC products. The dedicated foundry offers several key advantages to its customers: first, it does not sell finished IC products into the supply channel; thus a dedicated foundry will never compete directly with its fabless customers (obviating a common concern of fabless companies). Second, the dedicated foundry can scale production capacity to a customer's needs, offering low-quantity shuttle services in addition to full-scale production lines. Finally, the dedicated foundry offers a "COT-flow" (customer owned tooling) based on industry-standard EDA systems, whereas many IDM merchants required their customers to use proprietary (non-portable) development tools. The COT advantage gave the customer complete control over the design process, from concept to final design. Foundry sales leaders by year A pure-play semiconductor foundry is a company that does not offer a significant amount of IC products of its own design, but instead operates semiconductor fabrication plants focused on producing ICs for other companies. An integrated device manufacturer (IDM) semiconductor foundry is one in which companies such as Texas Instruments, IBM, and Samsung provide foundry services as long as there is no conflict of interest between the relevant parties. Year-by-year rankings of the leading foundries were originally presented in tables covering 2004 through 2023 (for example, the top 17 semiconductor foundries as of 2009 and the top 10 pure-play foundries as of 2004); the tables are not reproduced here. Footnotes to those tables noted that one listed foundry was later acquired by GlobalFoundries, and that another merged with CR Logic in 2008 and was reclassified as an IDM foundry. Financial and IP issues Like all industries, the semiconductor industry faces upcoming challenges and obstacles.
The cost to stay on the leading edge has steadily increased with each generation of chips. The financial strain is being felt by both large merchant foundries and their fabless customers. The cost of a new foundry exceeds $1 billion. These costs must be passed on to customers. Many merchant foundries have entered into joint ventures with their competitors in an effort to split research and design expenditures and fab-maintenance expenses. Chip design companies sometimes avoid other companies' patents simply by purchasing the products from a licensed foundry with broad cross-license agreements with the patent owner. Stolen design data is also a concern; data is rarely directly copied, because blatant copies are easily identified by distinctive features in the chip, placed there either for this purpose or as a byproduct of the design process. However, the data including any procedure, process system, method of operation or concept may be sold to a competitor, who may save months or years of tedious reverse engineering. See also Fabless manufacturing Contract manufacturer Integrated device manufacturer Semiconductor fabrication Original design manufacturer Original equipment manufacturer Semiconductor fabless sales leaders by year Semiconductor equipment sales leaders by year References External links Compound Semiconductor.net: "Foundry model could be key to InP industry future" Semiconductor device fabrication Semiconductors
Foundry model
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,547
[ "Electrical resistance and conductance", "Physical quantities", "Microtechnology", "Semiconductors", "Semiconductor device fabrication", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
662,336
https://en.wikipedia.org/wiki/Earthquake%20insurance
Earthquake insurance is a form of property insurance that pays the policyholder in the event of an earthquake that causes damage to the property. Most ordinary homeowners insurance policies do not cover earthquake damage. Most earthquake insurance policies feature a high deductible, which makes this type of insurance useful if the entire home is destroyed, but not useful if the home is merely damaged. Rates depend on location and the probability of an earthquake loss. Rates may be lower for homes made of wood, which withstand earthquakes better than homes made of brick. In the past, earthquake loss was assessed using a collection of mass inventory data and was based mostly on experts' opinions. Today it is estimated using a Damage Ratio (DR), the ratio of the monetary amount of earthquake damage to the total value of a building. Another method is the use of HAZUS, a computerized procedure for loss estimation. As with flood insurance or insurance on damage from a hurricane or other large-scale disasters, insurance companies must be careful when assigning this type of insurance, because an earthquake strong enough to destroy one home will probably destroy dozens of homes in the same area. If one company has written insurance policies on numerous homes in a particular city, then a devastating earthquake will quickly drain all the company's resources. Insurance companies devote much study and effort toward risk management to avoid such cases. In the United States, insurance companies stop selling coverage for a few weeks after a sizeable earthquake has occurred. This is because damaging aftershocks can occur after the initial quake, and, rarely, the initial quake may turn out to have been a foreshock. Although aftershocks are smaller in magnitude, their epicenters can lie some distance from that of the original quake. If an aftershock is significantly closer to a populated area, it can cause much more damage than the initial quake. One such example is the 2011 Christchurch earthquake in New Zealand, which killed 185 people after following a much larger and more distant quake that had caused no fatalities at all. California After the 1994 Northridge earthquake, nearly all insurance companies stopped writing homeowners' insurance policies altogether in the state, because under California law (the "mandatory offer law"), companies offering homeowners' insurance must also offer earthquake insurance. Eventually the legislature created a "mini policy" that could be sold by any insurer to comply with the mandatory offer law: only earthquake loss due to structural damage need be covered, with a 15% deductible. Claims on personal property losses and "loss of use" are limited. The legislature also created a quasi-public (privately funded, publicly managed) agency called the California Earthquake Authority (CEA). Membership in the CEA by insurers is voluntary and member companies satisfy the mandatory offer law by selling the CEA mini policy. Premiums are paid to the insurer, and then pooled in the CEA to cover claims from homeowners with a CEA policy from member insurers. The state of California specifically states that it does not back up CEA earthquake insurance, in the event that claims from a major earthquake were to drain all CEA funds, nor will it cover claims from non-CEA insurers if they were to become insolvent due to earthquake losses. Canada There are 4,000 recorded earthquakes in Canada each year. Earthquake damage is not covered by a standard home insurance policy. In the next 50 years, there is a 30% chance of a significant earthquake in British Columbia.
Japan The government of Japan created the "Japanese Earthquake Reinsurance" scheme in 1966, and the scheme has been revised several times since. Homeowners may buy earthquake insurance from an insurance company as an optional rider to a fire insurance policy. Insurers enrolled in the JER scheme who have to pay earthquake claims to homeowners share the risk among themselves and also the government, through the JER. The government pays a much larger proportion of the claims if a single earthquake causes aggregate damage of over about 1 trillion yen (about US$8.75 billion). The maximum payout in a single year to all JER insurance claim filers is 5.5 trillion yen (about US$39.4 billion); if claims exceed this amount, then the claims are pro-rated among all claimants. New Zealand New Zealand's Earthquake Commission (EQC) is a Government-owned Crown entity which provides primary natural disaster insurance to the owners of residential properties in New Zealand. In addition to its insurance role, EQC also undertakes research and provides training and information on disaster recovery. EQC was established in 1945 as the Earthquake and War Damage Commission, as part of the New Zealand Government, and was originally intended to provide coverage for earthquakes as well as war damage. Coverage was eventually extended from solely earthquake and war damage to include other natural disasters such as natural landslips, volcanic eruptions, hydrothermal activity, and tsunamis, with coverage for war damage later being removed. For residential land, storm and flood damage is covered. Cover extends over fire damage caused by any of these natural disasters. Turkey Industry Earthquake insurers use simulations to estimate the risk of an earthquake; companies which do that work include CoreLogic, which acquired earthquake modeler Eqecat in 2013 and AIR Worldwide, which is owned by the insurance analytics firm Verisk Analytics. See also Earthquake Commission Earthquake engineering Earthquake simulation Global Earthquake Model Seismic retrofit References Types of insurance Earthquake and seismic risk mitigation Earthquake engineering
Earthquake insurance
[ "Engineering" ]
1,102
[ "Structural engineering", "Earthquake engineering", "Earthquake and seismic risk mitigation", "Civil engineering" ]
662,399
https://en.wikipedia.org/wiki/Affymetrix
Affymetrix is now Applied Biosystems, a brand of DNA microarray products sold by Thermo Fisher Scientific that originated with an American biotechnology research and development and manufacturing company of the same name. The Santa Clara, California-based Affymetrix, Inc., now a part of Thermo Fisher Scientific, was co-founded by Alex Zaffaroni and Stephen Fodor, whose group had earlier developed methods to fabricate DNA microarrays using semiconductor manufacturing techniques. In 1994, the company's first product under the "GeneChip" Affymetrix trademark, an HIV genotyping chip, was introduced, and the company went public in 1996. After incorporation, Affymetrix grew in part by acquiring technologies from other companies, including Genetic MicroSystems (slide-based microarrays and scanners) and Neomorphic (for bioinformatics) in 2000, ParAllele Bioscience (custom SNP genotyping), USB/Anatrace (biochemical reagents) in 2008, Panomics (low to mid-plex applications) in 2008, and eBioscience (flow cytometry) in 2012. Affymetrix spun off Perlegen Sciences in 2000, as a discrete business focusing on wafer-scale genomics to characterize population-variance of genomic markers. In January 2014, the Food and Drug Administration approved Affymetrix's postnatal blood test, CytoScan Dx Assay, looking at whole-genome correlates of congenital abnormalities and other causes of childhood developmental delay. On January 8, 2016, Thermo Fisher Scientific announced its acquisition of Affymetrix for approximately $1.3 billion, which closed on March 31, 2016. History Affymetrix, Inc. was spun off from Affymax Research Institute by Alex Zaffaroni in 1993, and was eventually based in Santa Clara, California, United States. It began as a unit in Affymax N.V. in 1991 under Fodor and his group. In the late 1980s, that group had developed methods for fabricating DNA microarrays, under the "GeneChip" Affymetrix trademark, using semiconductor manufacturing techniques. The company's first product, an HIV genotyping GeneChip, was introduced in 1994 and the company went public in 1996. Description of products Affymetrix, Inc. made glass chips for DNA microarray analysis, called GeneChip arrays, and sold mass-produced GeneChip arrays intended to match scientifically important parts of human and other animal genomes. Manufactured using photolithography, Affymetrix's GeneChip arrays assisted researchers in quickly scanning for the presence of particular genes in a biological sample. In this area, Affymetrix was focused on oligonucleotide microarrays, which could be used to address the presence of genes through detection of specific corresponding segments of mRNA. The single-use chips could be used to analyze thousands of genes in a single assay. The company also manufactured machinery for high-speed analysis of biological samples, and offered its GeneChip Operating Software as a system for managing Affymetrix microarray data. Affymetrix's competitors in the DNA microarray business include Illumina, GE Healthcare, Applied Biosystems, Beckman Coulter, Eppendorf Biochip Systems, and Agilent. Acquisitions Prior to its acquisition by Thermo Fisher, and its becoming one of that company's product lines, Affymetrix, Inc. had acquired the technologies of a number of companies.
It acquired Genetic MicroSystems for slide-based microarrays and scanners and Neomorphic for bioinformatics, both in 2000, ParAllele Bioscience for custom SNP genotyping, USB/Anatrace for biochemical reagents in 2008, eBioscience for flow cytometry in 2012, and Panomics in 2008 and True Materials to expand its offering of low to mid-plex applications. In 2000, Perlegen Sciences spun out from Affymetrix to focus on wafer-scale genomics for massive data creation and collection required for characterizing population variance of genomic markers and expression for the drug discovery process. FDA test approval In January 2014, the Food and Drug Administration cleared a first-of-a-kind whole-genome postnatal blood test that can aid physicians in identifying the underlying genetic cause of developmental delay, intellectual disability, congenital anomalies, or dysmorphic features in children, where it was noted that "[a]bout 2 to 3 percent of U.S. children have some sort of intellectual disability, according to the National Institutes of Health." The test, known as CytoScan Dx Assay, was designed to diagnose these disabilities earlier to expedite appropriate care and support. References External links Detailed, multi-document resource on the originating company's history, from its perspective, featuring a detailed timeline of acquisitions, photographs of early technology, etc. Thermo Fisher Scientific Research support companies Biotechnology companies of the United States Companies based in Santa Clara, California Biotechnology companies established in 1992 Microarrays Companies formerly listed on the Nasdaq Technology companies based in the San Francisco Bay Area 1992 establishments in California 2016 mergers and acquisitions
Affymetrix
[ "Chemistry", "Materials_science", "Biology" ]
1,115
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Microarrays", "Bioinformatics", "Molecular biology techniques" ]
663,039
https://en.wikipedia.org/wiki/Two-phase%20flow
In fluid mechanics, two-phase flow is a flow of gas and liquid — a particular example of multiphase flow. Two-phase flow can occur in various forms, such as flows transitioning from pure liquid to vapor as a result of external heating, separated flows, and dispersed two-phase flows where one phase is present in the form of particles, droplets, or bubbles in a continuous carrier phase (i.e. gas or liquid). Categorization The widely accepted method of categorizing two-phase flows is to consider the velocity of each phase as if no other phase were present. The resulting parameter is a hypothetical quantity called the superficial velocity. Examples and applications Historically, probably the most commonly studied cases of two-phase flow are in large-scale power systems. Coal and gas-fired power stations used very large boilers to produce steam for use in turbines. In such cases, pressurised water is passed through heated pipes and it changes to steam as it moves through the pipe. The design of boilers requires a detailed understanding of two-phase flow heat-transfer and pressure drop behaviour, which is significantly different from the single-phase case. Even more critically, nuclear reactors use water in two-phase flow to remove heat from the reactor core. A great deal of study has been performed on the nature of two-phase flow in such cases, so that engineers can design against possible failures in pipework, loss of pressure, and so on (a loss-of-coolant accident (LOCA)). Another case where two-phase flow can occur is in pump cavitation. Here a pump is operating close to the vapor pressure of the fluid being pumped. If pressure drops further, which can happen locally near the pump vanes, for example, then a phase change can occur and gas will be present in the pump. Similar effects can also occur on marine propellers; wherever it occurs, it is a serious problem for designers. When the vapor bubble collapses, it can produce very large pressure spikes, which over time will cause damage to the propeller or turbine. The above two-phase flow cases are for a single fluid occurring by itself as two different phases, such as steam and water. The term 'two-phase flow' is also applied to mixtures of different fluids having different phases, such as air and water, or oil and natural gas. Sometimes even three-phase flow is considered, such as in oil and gas pipelines where there might be a significant fraction of solids. Although oil and water are not strictly distinct phases (since they are both liquids) they are sometimes considered as a two-phase flow; and the combination of oil, gas and water (e.g. the flow from an offshore oil well) may also be considered a three-phase flow. Other interesting areas where two-phase flow is studied include water electrolysis, climate systems such as clouds, and groundwater flow, in which the movement of water and air through the soil is studied. Other examples of two-phase flow include bubbles, rain, waves on the sea, foam, fountains, mousse, cryogenics, and oil slicks. One final example is in the electrical explosion of metal. Characteristics of two-phase flow Several features make two-phase flow an interesting and challenging branch of fluid mechanics: Surface tension makes all dynamical problems nonlinear (see Weber number) In the case of air and water at standard temperature and pressure, the density of the two phases differs by a factor of about 1000.
Similar differences are typical of water liquid/water vapor densities The sound speed changes dramatically for materials undergoing phase change, and can be orders of magnitude different. This introduces compressible effects into the problem The phase changes are not instantaneous, and the liquid vapor system will not necessarily be in phase equilibrium The change of phase means flow-induced pressure drops can cause further phase-change (e.g. water can evaporate through a valve) increasing the relative volume of the gaseous, compressible medium and increasing exit velocities, unlike single-phase incompressible flow where closing a valve would decrease exit velocities Can give rise to other counter-intuitive, negative resistance-type instabilities, like Ledinegg instability, geysering, chugging, relaxation instability, and flow maldistribution instabilities as examples of static instabilities, and other dynamic instabilities Additional exhaustive information, like applied mathematical models can be found in. Acoustics Gurgling is a characteristic sound made by unstable two-phase fluid flow, for example, as liquid is poured from a bottle, or during gargling. See also Multiphase flow Buckley–Leverett equation Darcy's law for multiphase flow (for flow through porous media such as soil) Slip ratio (gas–liquid flow) Mass flow meter Modelling Modelling of two phase flow is still under development. Known methods are Volume of fluid method Level-set method Front tracking by Gretar Tryggvason Lattice Boltzmann methods Smoothed-particle hydrodynamics (SPH) References Fluid dynamics
Two-phase flow
[ "Chemistry", "Engineering" ]
1,044
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
663,426
https://en.wikipedia.org/wiki/Quantum%20logic
In the mathematical study of logic and the physical analysis of quantum foundations, quantum logic is a set of rules for manipulation of propositions inspired by the structure of quantum theory. The formal system takes as its starting point an observation of Garrett Birkhoff and John von Neumann, that the structure of experimental tests in classical mechanics forms a Boolean algebra, but the structure of experimental tests in quantum mechanics forms a much more complicated structure. A number of other logics have also been proposed to analyze quantum-mechanical phenomena, unfortunately also under the name of "quantum logic(s)". They are not the subject of this article. For discussion of the similarities and differences between quantum logic and some of these competitors, see . Quantum logic has been proposed as the correct logic for propositional inference generally, most notably by the philosopher Hilary Putnam, at least at one point in his career. This thesis was an important ingredient in Putnam's 1968 paper "Is Logic Empirical?" in which he analysed the epistemological status of the rules of propositional logic. Modern philosophers reject quantum logic as a basis for reasoning, because it lacks a material conditional; a common alternative is the system of linear logic, of which quantum logic is a fragment. Mathematically, quantum logic is formulated by weakening the distributive law for a Boolean algebra, resulting in an orthocomplemented lattice. Quantum-mechanical observables and states can be defined in terms of functions on or to the lattice, giving an alternate formalism for quantum computations. Introduction The most notable difference between quantum logic and classical logic is the failure of the propositional distributive law: p and (q or r) = (p and q) or (p and r), where the symbols p, q and r are propositional variables. To illustrate why the distributive law fails, consider a particle moving on a line and (using some system of units where the reduced Planck constant is 1) let p = "the particle has momentum in the interval [0, +1/6]" q = "the particle is in the interval [−1, 1]" r = "the particle is in the interval [1, 3]" We might observe that: p and (q or r) = true in other words, that the state of the particle is a weighted superposition of momenta between 0 and +1/6 and positions between −1 and +3. On the other hand, the propositions "p and q" and "p and r" each assert tighter restrictions on simultaneous values of position and momentum than are allowed by the uncertainty principle (they each have uncertainty 1/3, which is less than the allowed minimum of 1/2). So there are no states that can support either proposition, and (p and q) or (p and r) = false History and modern criticism In his classic 1932 treatise Mathematical Foundations of Quantum Mechanics, John von Neumann noted that projections on a Hilbert space can be viewed as propositions about physical observables; that is, as potential yes-or-no questions an observer might ask about the state of a physical system, questions that could be settled by some measurement. Principles for manipulating these quantum propositions were then called quantum logic by von Neumann and Birkhoff in a 1936 paper. George Mackey, in his 1963 book (also called Mathematical Foundations of Quantum Mechanics), attempted to axiomatize quantum logic as the structure of an orthocomplemented lattice, and recognized that a physical observable could be defined in terms of quantum propositions.
Although Mackey's presentation still assumed that the orthocomplemented lattice is the lattice of closed linear subspaces of a separable Hilbert space, Constantin Piron, Günther Ludwig and others later developed axiomatizations that do not assume an underlying Hilbert space. Inspired by Hans Reichenbach's then-recent defence of general relativity, the philosopher Hilary Putnam popularized Mackey's work in two papers in 1968 and 1975, in which he attributed the idea that anomalies associated with quantum measurements originate with a failure of logic itself to his coauthor, physicist David Finkelstein. Putnam hoped to develop a possible alternative to hidden variables or wavefunction collapse in the problem of quantum measurement, but Gleason's theorem presents severe difficulties for this goal. Later, Putnam retracted his views, albeit with much less fanfare, but the damage had been done. While Birkhoff and von Neumann's original work only attempted to organize the calculations associated with the Copenhagen interpretation of quantum mechanics, a school of researchers had now sprung up, either hoping that quantum logic would provide a viable hidden-variable theory or that it would obviate the need for one. Their work proved fruitless, and now lies in poor repute. Most philosophers find quantum logic an unappealing competitor to classical logic. It is far from evident (albeit true) that quantum logic is a logic, in the sense of describing a process of reasoning, as opposed to a particularly convenient language to summarize the measurements performed by quantum apparatuses. In particular, modern philosophers of science argue that quantum logic attempts to substitute metaphysical difficulties for unsolved problems in physics, rather than properly solving the physics problems. Tim Maudlin writes that quantum "logic 'solves' the [measurement] problem by making the problem impossible to state." Quantum logic remains in limited use among logicians as an extremely pathological counterexample (Dalla Chiara and Giuntini: "Why quantum logics? Simply because 'quantum logics are there!'"). Although the central insight of quantum logic remains mathematical folklore as an intuition pump for categorification, discussions rarely mention quantum logic. Quantum logic's best chance at revival is through the recent development of quantum computing, which has engendered a proliferation of new logics for formal analysis of quantum protocols and algorithms (see also ). The logic may also find application in (computational) linguistics. Algebraic structure Quantum logic can be axiomatized as the theory of propositions modulo the following identities: a = ¬¬a ∨ is commutative and associative. There is a maximal element ⊤, and ⊤ = b∨¬b for any b. a∨¬(¬a∨b) = a. ("¬" is the traditional notation for "not", "∨" the notation for "or", and "∧" the notation for "and".) Some authors restrict to orthomodular lattices, which additionally satisfy the orthomodular law: If ⊤ = ¬(¬a∨¬b)∨¬(a∨b) then a = b. ("⊤" is the traditional notation for truth and "⊥" the traditional notation for falsity.) Alternative formulations include propositions derivable via a natural deduction, sequent calculus or tableaux system. Despite the relatively developed proof theory, quantum logic is not known to be decidable. Quantum logic as the logic of observables The remainder of this article assumes the reader is familiar with the spectral theory of self-adjoint operators on a Hilbert space. However, the main ideas can be understood in the finite-dimensional case.
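Since the main ideas can be understood in the finite-dimensional case, the identities listed above can be checked numerically for the lattice of projection operators on a small complex vector space. The following sketch is illustrative only: the dimension, the ranks of the sample propositions, and the numerical tolerance are arbitrary choices, and the helper functions are ad hoc names rather than part of any standard quantum-logic library.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4                      # work in C^4; any small dimension would do
I = np.eye(dim)

def rand_proj(rank):
    # Orthogonal projector onto a random rank-dimensional subspace of C^dim.
    M = rng.normal(size=(dim, rank)) + 1j * rng.normal(size=(dim, rank))
    Q, _ = np.linalg.qr(M)
    return Q @ Q.conj().T

def join(P, Q, tol=1e-9):
    # P ∨ Q: projector onto the span of the two ranges (via an SVD basis).
    U, s, _ = np.linalg.svd(np.hstack([P, Q]))
    B = U[:, s > tol]
    return B @ B.conj().T

def meet(P, Q):
    # P ∧ Q = ¬(¬P ∨ ¬Q), where ¬P = I − P is the orthocomplement.
    return I - join(I - P, I - Q)

neg = lambda P: I - P
same = lambda A, B: np.allclose(A, B, atol=1e-8)

a, b = rand_proj(2), rand_proj(1)
assert same(neg(neg(a)), a)                     # a = ¬¬a
assert same(join(a, b), join(b, a))             # ∨ is commutative
assert same(join(a, neg(a)), I)                 # ⊤ = a ∨ ¬a
assert same(join(a, neg(join(neg(a), b))), a)   # a ∨ ¬(¬a ∨ b) = a
# Orthomodular law: whenever a ≤ c, we have c = a ∨ (¬a ∧ c).
c = join(a, rand_proj(1))                       # a projection c with a ≤ c by construction
assert same(join(a, meet(neg(a), c)), c)
print("all sampled identities hold for this lattice of projections")
```

Such a numerical check can only sample the identities, not prove them, but it makes the algebraic structure of the subspace lattice concrete; the failure of the distributive law in the same lattice is illustrated in a later section.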
Logic of classical mechanics The Hamiltonian formulations of classical mechanics have three ingredients: states, observables and dynamics. In the simplest case of a single particle moving in R3, the state space is the position–momentum space R6. An observable is some real-valued function f on the state space. Examples of observables are position, momentum or energy of a particle. For classical systems, the value f(x), that is the value of f for some particular system state x, is obtained by a process of measurement of f. The propositions concerning a classical system are generated from basic statements of the form "Measurement of f yields a value in the interval [a, b] for some real numbers a, b." through the conventional arithmetic operations and pointwise limits. It follows easily from this characterization of propositions in classical systems that the corresponding logic is identical to the Boolean algebra of Borel subsets of the state space. They thus obey the laws of classical propositional logic (such as de Morgan's laws) with the set operations of union and intersection corresponding to the Boolean conjunctives and subset inclusion corresponding to material implication. In fact, a stronger claim is true: they must obey the infinitary logic . We summarize these remarks as follows: The proposition system of a classical system is a lattice with a distinguished orthocomplementation operation: The lattice operations of meet and join are respectively set intersection and set union. The orthocomplementation operation is set complement. Moreover, this lattice is sequentially complete, in the sense that any sequence {Ei}i∈N of elements of the lattice has a least upper bound, specifically the set-theoretic union ⋃i Ei. Propositional lattice of a quantum mechanical system In the Hilbert space formulation of quantum mechanics as presented by von Neumann, a physical observable is represented by some (possibly unbounded) densely defined self-adjoint operator A on a Hilbert space H. A has a spectral decomposition, which is a projection-valued measure E defined on the Borel subsets of R. In particular, for any bounded Borel function f on R, the following extension of f to operators can be made: f(A) = ∫ f(λ) dE(λ), the integral being taken over R against the projection-valued measure E. In case f is the indicator function of an interval [a, b], the operator f(A) is a self-adjoint projection onto the subspace of generalized eigenvectors of A with eigenvalue in [a, b]. That subspace can be interpreted as the quantum analogue of the classical proposition Measurement of A yields a value in the interval [a, b]. This suggests the following quantum mechanical replacement for the orthocomplemented lattice of propositions in classical mechanics, essentially Mackey's Axiom VII: The propositions of a quantum mechanical system correspond to the lattice of closed subspaces of H; the negation of a proposition V is the orthogonal complement V⊥. The space Q of quantum propositions is also sequentially complete: any pairwise-disjoint sequence {Vi}i of elements of Q has a least upper bound. Here disjointness of W1 and W2 means W2 is a subspace of W1⊥. The least upper bound of {Vi}i is the closed internal direct sum. Standard semantics The standard semantics of quantum logic is that quantum logic is the logic of projection operators in a separable Hilbert or pre-Hilbert space, where an observable p is associated with the set of quantum states for which p (when measured) has eigenvalue 1.
From there, ¬p is the orthogonal complement of p (since for those states, the probability of observing p, P(p) = 0), p∧q is the intersection of p and q, and p∨q = ¬(¬p∧¬q) refers to states that superpose p and q. This semantics has the nice property that the pre-Hilbert space is complete (i.e., Hilbert) if and only if the propositions satisfy the orthomodular law, a result known as the Solèr theorem. Although much of the development of quantum logic has been motivated by the standard semantics, quantum logic is not characterized by the latter; there are additional properties satisfied by that lattice that need not hold in quantum logic. Differences with classical logic The structure of Q immediately points to a difference with the partial order structure of a classical proposition system. In the classical case, given a proposition p, the equations ⊤ = p∨q and ⊥ = p∧q have exactly one solution, namely the set-theoretic complement of p. In the case of the lattice of projections there are infinitely many solutions to the above equations (any closed, algebraic complement of p solves it; it need not be the orthocomplement). More generally, propositional valuation has unusual properties in quantum logic. An orthocomplemented lattice admitting a total lattice homomorphism to the two-element lattice {⊥, ⊤} must be Boolean. A standard workaround is to study maximal partial homomorphisms q with a filtering property: if a≤b and q(a) = ⊤, then q(b) = ⊤. Failure of distributivity Expressions in quantum logic describe observables using a syntax that resembles classical logic. However, unlike classical logic, the distributive law a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) fails when dealing with noncommuting observables, such as position and momentum. This occurs because measurement affects the system, and measurement of whether a disjunction holds does not measure which of the disjuncts is true. For example, consider a simple one-dimensional particle with position denoted by x and momentum by p, and define observables: a — |p| ≤ 1 (in some units) b — x ≤ 0 c — x ≥ 0 Now, position and momentum are Fourier transforms of each other, and the Fourier transform of a square-integrable nonzero function with a compact support is entire and hence does not have non-isolated zeroes. Therefore, there is no wave function that is both normalizable in momentum space and vanishes on precisely x ≥ 0. Thus, a ∧ b and similarly a ∧ c are false, so (a ∧ b) ∨ (a ∧ c) is false. However, a ∧ (b ∨ c) equals a, which is certainly not false (there are states for which it is a viable measurement outcome). Moreover: if the relevant Hilbert space for the particle's dynamics only admits momenta no greater than 1, then a is true. To understand more, let p1 and p2 be the momentum functions (Fourier transforms) for the projections of the particle wave function to x ≤ 0 and x ≥ 0 respectively. Let |pi|↾≥1 be the restriction of pi to momenta that are (in absolute value) ≥1. (a ∧ b) ∨ (a ∧ c) corresponds to states with |p1|↾≥1 = |p2|↾≥1 = 0 (this holds even if we defined p differently so as to make such states possible; also, a ∧ b corresponds to |p1|↾≥1=0 and p2=0). Meanwhile, a corresponds to states with |p|↾≥1 = 0. As an operator, p = p1 + p2, and nonzero |p1|↾≥1 and |p2|↾≥1 might interfere to produce zero |p|↾≥1. Such interference is key to the richness of quantum logic and quantum mechanics.
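The same failure can be seen in a finite-dimensional setting, without any appeal to Fourier analysis. The two-dimensional example below is a standard textbook-style illustration chosen for this purpose, not one drawn from the sources cited in this article:

```latex
% In H = C^2, take the one-dimensional subspaces
a = \operatorname{span}\{(1,0)\}, \quad
b = \operatorname{span}\{\tfrac{1}{\sqrt{2}}(1,1)\}, \quad
c = \operatorname{span}\{\tfrac{1}{\sqrt{2}}(1,-1)\}.
% Since b and c jointly span H while a meets each of them only in {0}:
a \wedge (b \vee c) = a \wedge H = a, \qquad
(a \wedge b) \vee (a \wedge c) = \{0\} \vee \{0\} = \{0\} \neq a.
```

In the spin-1/2 reading of this example, a, b and c are the propositions "spin up along z", "spin up along x" and "spin down along x" respectively.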
Relationship to quantum measurement Mackey observables Given an orthocomplemented lattice Q, a Mackey observable φ is a countably additive homomorphism from the orthocomplemented lattice of Borel subsets of R to Q. In symbols, this means that for any sequence {Si}i of pairwise-disjoint Borel subsets of R, {φ(Si)}i are pairwise-orthogonal propositions (elements of Q) and φ(⋃i Si) = ⋁i φ(Si), the least upper bound of the φ(Si). Equivalently, a Mackey observable is a projection-valued measure on R. Theorem (Spectral theorem). If Q is the lattice of closed subspaces of a Hilbert space H, then there is a bijective correspondence between Mackey observables and densely-defined self-adjoint operators on H. Quantum probability measures A quantum probability measure is a function P defined on Q with values in [0,1] such that P(⊥) = 0, P(⊤) = 1 and if {Ei}i is a sequence of pairwise-orthogonal elements of Q then P(⋁i Ei) = Σi P(Ei). Every quantum probability measure on the closed subspaces of a Hilbert space is induced by a density matrix — a nonnegative operator of trace 1. Formally, Theorem. Suppose Q is the lattice of closed subspaces of a separable Hilbert space of complex dimension at least 3. Then for any quantum probability measure P on Q there exists a unique trace class operator S such that P(E) = Tr(SE) for any self-adjoint projection E in Q. Relationship to other logics Quantum logic embeds into linear logic and the modal logic B. Indeed, modern logics for the analysis of quantum computation often begin with quantum logic, and attempt to graft desirable features of an extension of classical logic thereonto; the results then necessarily embed quantum logic. The orthocomplemented lattice of any set of quantum propositions can be embedded into a Boolean algebra, which is then amenable to classical logic. Limitations Although many treatments of quantum logic assume that the underlying lattice must be orthomodular, such logics cannot handle multiple interacting quantum systems. In an example due to Foulis and Randall, there are orthomodular propositions with finite-dimensional Hilbert models whose pairing admits no orthomodular model. Likewise, quantum logic with the orthomodular law falsifies the deduction theorem. Quantum logic admits no reasonable material conditional; any connective that is monotone in a certain technical sense reduces the class of propositions to a Boolean algebra. Consequently, quantum logic struggles to represent the passage of time. One possible workaround is the theory of quantum filtrations developed in the late 1970s and 1980s by Belavkin. It is known, however, that System BV, a deep inference fragment of linear logic that is very close to quantum logic, can handle arbitrary discrete spacetimes. See also Fuzzy logic HPO formalism (An approach to temporal quantum logic) Linear logic Mathematical formulation of quantum mechanics Multi-valued logic Quantum Bayesianism Quantum cognition Quantum contextuality Quantum field theory Quantum probability Quasi-set theory Solèr's theorem Vector logic Notes Citations Sources Historical works Arranged chronologically Modern philosophical perspectives Mathematical study and computational applications N. Papanikolaou, "Reasoning Formally About Quantum Systems: An Overview", ACM SIGACT News, 36(3), 2005. pp. 51–66. arXiv cs/0508005. Quantum foundations D. Cohen, An Introduction to Hilbert Space and Quantum Logic, Springer-Verlag, 1989. Elementary and well-illustrated; suitable for advanced undergraduates. Mathematical logic Systems of formal logic Non-classical logic Quantum mechanics
Quantum logic
[ "Physics", "Mathematics" ]
3,802
[ "Mathematical logic", "Applied and interdisciplinary physics", "Quantum mechanics", "Applications of quantum mechanics" ]
664,332
https://en.wikipedia.org/wiki/Econophysics
Econophysics is a non-orthodox (in economics) interdisciplinary research field, applying theories and methods originally developed by physicists in order to solve problems in economics, usually those including uncertainty or stochastic processes and nonlinear dynamics. Some of its application to the study of financial markets has also been termed statistical finance referring to its roots in statistical physics. Econophysics is closely related to social physics. History Physicists' interest in the social sciences is not new (see e.g.,); Daniel Bernoulli, as an example, was the originator of utility-based preferences. One of the founders of neoclassical economic theory, former Yale University Professor of Economics Irving Fisher, was originally trained under the renowned Yale physicist, Josiah Willard Gibbs. Likewise, Jan Tinbergen, who won the first Nobel Memorial Prize in Economic Sciences in 1969 for having developed and applied dynamic models for the analysis of economic processes, studied physics with Paul Ehrenfest at Leiden University. In particular, Tinbergen developed the gravity model of international trade that has become the workhorse of international economics. Econophysics was started in the mid-1990s by several physicists working in the subfield of statistical mechanics. Unsatisfied with the traditional explanations and approaches of economists – which usually prioritized simplified approaches for the sake of soluble theoretical models over agreement with empirical data – they applied tools and methods from physics, first to try to match financial data sets, and then to explain more general economic phenomena. One driving force behind econophysics arising at this time was the sudden availability of large amounts of financial data, starting in the 1980s. It became apparent that traditional methods of analysis were insufficient – standard economic methods dealt with homogeneous agents and equilibrium, while many of the more interesting phenomena in financial markets fundamentally depended on heterogeneous agents and far-from-equilibrium situations. The term "econophysics" was coined by H. Eugene Stanley, to describe the large number of papers written by physicists in the problems of (stock and other) markets, in a conference on statistical physics in Kolkata (erstwhile Calcutta) in 1995 and first appeared in its proceedings publication in Physica A 1996. The inaugural meeting on econophysics was organised in 1998 in Budapest by János Kertész and Imre Kondor. The first book on econophysics was by R. N. Mantegna & H. E. Stanley in 2000. The almost regular meeting series on the topic include: Econophys-Kolkata (held in Kolkata & Delhi), Econophysics Colloquium, ESHIA/ WEHIA. In recent years network science, heavily reliant on analogies from statistical mechanics, has been applied to the study of productive systems. That is the case with the works done at the Santa Fe Institute in European Funded Research Projects as Forecasting Financial Crises and the Harvard-MIT Observatory of Economic Complexity Basic tools Basic tools of econophysics are probabilistic and statistical methods often taken from statistical physics. Physics models that have been applied in economics include the kinetic theory of gas (called the kinetic exchange models of markets), percolation models, chaotic models developed to study cardiac arrest, and models with self-organizing criticality as well as other models developed for earthquake prediction. 
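As a concrete illustration of the kinetic exchange models of markets mentioned above, the sketch below simulates random pairwise exchanges of a conserved total amount of money. The number of agents, the number of exchange steps and the initial endowment are arbitrary illustrative values, and the comment about the stationary distribution states the standard behaviour of this class of models rather than a result reported in this article.

```python
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_steps = 1_000, 200_000
money = np.full(n_agents, 100.0)       # every agent starts with the same amount

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    pool = money[i] + money[j]         # total money of the pair is conserved
    share = rng.random()               # pooled money is re-split at random
    money[i], money[j] = share * pool, (1.0 - share) * pool

# For this class of conserving random-exchange models the stationary
# distribution of money is approximately exponential (a Boltzmann–Gibbs law),
# with an effective "temperature" equal to the average money per agent.
print("average money per agent:", money.mean())
print("fraction of agents below the average:", (money < money.mean()).mean())
```

Richer variants add saving propensities or taxation, which change the shape of the stationary distribution; the bare model above is only meant to show how a gas-like picture of trading is set up.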
Moreover, there have been attempts to use the mathematical theory of complexity and information theory, as developed by many scientists among whom are Murray Gell-Mann and Claude E. Shannon, respectively. For potential games, it has been shown that an emergence-producing equilibrium based on information via Shannon information entropy produces the same equilibrium measure (Gibbs measure from statistical mechanics) as a stochastic dynamical equation which represents noisy decisions, both of which are based on bounded rationality models used by economists. The fluctuation-dissipation theorem connects the two to establish a concrete correspondence of "temperature", "entropy", "free potential/energy", and other physics notions to an economics system. The statistical mechanics model is not constructed a priori; it is a result of a bounded-rationality assumption and of modeling on existing neoclassical models. It has been used to prove the "inevitability of collusion" result of Huw Dixon in a case for which the neoclassical version of the model does not predict collusion. Here demand is increasing, as with Veblen goods, with stock buyers subject to the "hot hand" fallacy preferring to buy more successful stocks and sell those that are less successful, or among short traders during a short squeeze, as occurred with the WallStreetBets group's collusion to drive up the GameStop stock price in 2021. Nobel laureate and founder of experimental economics Vernon L. Smith has used econophysics to model sociability via implementation of ideas in Humanomics. There, noisy decision making and interaction parameters that facilitate the social action responses of reward and punishment result in spin glass models identical to those in physics. Quantifiers derived from information theory were used in several papers by econophysicist Aurelio F. Bariviera and coauthors in order to assess the degree of informational efficiency of stock markets. Zunino et al. use a statistical tool that is innovative in the financial literature: the complexity-entropy causality plane. This Cartesian representation establishes an efficiency ranking of different markets and distinguishes different bond market dynamics. It was found that more developed countries have stock markets with higher entropy and lower complexity, while markets from emerging countries have lower entropy and higher complexity. Moreover, the authors conclude that the classification derived from the complexity-entropy causality plane is consistent with the qualifications assigned by major rating companies to the sovereign instruments. A similar study developed by Bariviera et al. explores the relationship between credit ratings and the informational efficiency of a sample of corporate bonds of US oil and energy companies, also using the complexity–entropy causality plane. They find that this classification agrees with the credit ratings assigned by Moody's. Another good example is random matrix theory, which can be used to identify the noise in financial correlation matrices. One paper has argued that this technique can improve the performance of portfolios, e.g., when applied in portfolio optimization. The ideology of econophysics is embodied in a new probabilistic economic theory and, on its basis, a unified theory of stock markets. There are also analogies between finance theory and diffusion theory. For instance, the Black–Scholes equation for option pricing is a diffusion-advection equation (see however for a critique of the Black–Scholes methodology).
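For orientation, the equation referred to here can be written in its standard textbook form, with V the option price, S the price of the underlying asset, r the risk-free interest rate and σ the volatility; none of these symbols are introduced elsewhere in this article.

```latex
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\,\sigma^{2} S^{2}\,\frac{\partial^{2} V}{\partial S^{2}}
  + r S\,\frac{\partial V}{\partial S}
  - r V = 0
% The second-order term in S plays the role of diffusion and the first-order
% term the role of advection (drift), which is why the equation is described
% above as a diffusion-advection equation.
```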
The Black–Scholes theory can be extended to provide an analytical theory of main factors in economic activities. Subfields Various other tools from physics have so far been used, such as fluid dynamics, classical mechanics and quantum mechanics (including so-called classical economy, quantum economics and quantum finance), and the Feynman–Kac formula of statistical mechanics. Statistical mechanics When mathematician Mark Kac attended a lecture by Richard Feynman he realized their work overlapped. Together they worked out a new approach to solving stochastic differential equations. Their approach is used to efficiently calculate solutions to the Black–Scholes equation to price options on stocks. Quantum finance Quantum statistical models have been successfully applied to finance by several groups of econophysicists using different approaches, but the origin of their success may not be due to quantum analogies. Quantum economics The editorial in the inaugural issue of the journal Quantum Economics and Finance says: "Quantum economics and finance is the application of probability based on projective geometry—also known as quantum probability—to modelling in economics and finance. It draws on related areas such as quantum cognition, quantum game theory, quantum computing, and quantum physics." In his overview article in the same issue, David Orrell outlines how neoclassical economics benefited from the concepts of classical mechanics, and yet concepts of quantum mechanics "apparently left economics untouched". He reviews different avenues for quantum economics, some of which he notes are contradictory, settling on "quantum economics therefore needs to take a different kind of leaf from the book of quantum physics, by adopting quantum methods, not because they appear natural or elegant or come pre-approved by some higher authority or bear resemblance to something else, but because they capture in a useful way the most basic properties of what is being studied." Main results Econophysics is having some impact on the more applied field of quantitative finance, whose scope and aims significantly differ from those of economic theory. Various econophysicists have introduced models for price fluctuations in the physics of financial markets or original points of view on established models. Presently, one of the main results of econophysics comprises the explanation of the "fat tails" in the distribution of many kinds of financial data as a universal self-similar scaling property (i.e. scale invariant over many orders of magnitude in the data), arising from the tendency of individual market competitors, or of aggregates of them, to exploit systematically and optimally the prevailing "microtrends" (e.g., rising or falling prices). These "fat tails" are not only mathematically important, because they comprise the risks, which may be, on the one hand, so small that one may be tempted to neglect them, but which, on the other hand, are not negligible at all, i.e. they can never be made exponentially tiny, but instead follow a measurable algebraically decreasing power law, with a failure probability that falls off only as a power of x, where x is an increasingly large variable in the tail region of the distribution considered (i.e. a price statistics with much more than 10⁸ data points). I.e., the events considered are not simply "outliers" but must really be taken into account and cannot be "insured away". It appears that it also plays a role that near a change of the tendency (e.g.
from falling to rising prices) there are typical "panic reactions" of the selling or buying agents with algebraically increasing bargain rapidities and volumes. As in quantum field theory the "fat tails" can be obtained by complicated "nonperturbative" methods, mainly by numerical ones, since they contain the deviations from the usual Gaussian approximations, e.g. the Black–Scholes theory. Fat tails can, however, also be due to other phenomena, such as a random number of terms in the central-limit theorem, or any number of other, non-econophysics models. Due to the difficulty in testing such models, they have received less attention in traditional economic analysis. Criticism In 2006 economists Mauro Gallegati, Steve Keen, Thomas Lux, and Paul Ormerod, published a critique of econophysics. They cite important empirical contributions primarily in the areas of finance and industrial economics, but list four concerns with work in the field: lack of awareness of economics work, resistance to rigor, a misplaced belief in universal empirical regularity, and inappropriate models. See also Bose–Einstein condensation (network theory) Potential game Complexity economics Complex network Detrended fluctuation analysis Kinetic exchange models of markets Long-range dependency Network theory Network science Thermoeconomics Quantum finance Sznajd model References Further reading Emmanual Farjoun and Moshé Machover, Laws of Chaos: a probabilistic approach to political economy, Verso (London, 1983) Vladimir Pokrovskii, Econodynamics. The Theory of Social Production, https://www.springer.com/gp/book/9783319720739 (Springer, 2018) Philip Mirowski, More Heat than Light - Economics as Social Physics, Physics as Nature's Economics, Cambridge University Press (Cambridge, UK, 1989) Rosario N. Mantegna, H. Eugene Stanley, An Introduction to Econophysics: Correlations and Complexity in Finance, Cambridge University Press (Cambridge, UK, 1999) Bertrand Roehner, Patterns of Speculation - A Study in Observational Econophysics, Cambridge University Press (Cambridge, UK, 2002) Joseph McCauley, Dynamics of Markets, Econophysics and Finance, Cambridge University Press (Cambridge, UK, 2004) Surya Y., Situngkir, H., Dahlan, R. M., Hariadi, Y., Suroso, R. (2004). Aplikasi Fisika dalam Analisis Keuangan (Physics Applications in Financial Analysis. Bina Sumber Daya MIPA. Anatoly V. Kondratenko. Physical Modeling of Economic Systems. Classical and Quantum Economies. Novosibirsk, Nauka (Science) (2005), Arnab Chatterjee, Sudhakar Yarlagadda, Bikas K Chakrabarti, Econophysics of Wealth Distributions, Springer-Verlag Italia (Milan, 2005) Sitabhra Sinha, Arnab Chatterjee, Anirban Chakraborti, Bikas K Chakrabarti. Econophysics: An Introduction, Wiley-VCH (2010) Ubaldo Garibaldi and Enrico Scalas, Finitary Probabilistic Methods in Econophysics, Cambridge University Press (Cambridge, UK, 2010). Mark Buchanan, What has econophysics ever done for us?, Nature 2013 Nature Physics Focus issue: Complex networks in finance March 2013 Volume 9 No 3 pp 119–128 Martin Shubik and Eric Smith, The Guidance of an Enterprise Economy, MIT Press, Book Details MIT Press (2016) Abergel, F., Aoyama, H., Chakrabarti, B.K., Chakraborti, A., Deo, N., Raina, D., Vodenska, I. 
(Eds.), Econophysics and Sociophysics: Recent Progress and Future Directions, New Economic Windows Series, Springer (2017) Marcelo Byrro Ribeiro, Income Distribution Dynamics of Economic Systems: An Econophysical Approach, Cambridge University Press (Cambridge, UK, 2020). Max Greenberg and H. Oliver Gao, "Twenty-five years of random asset exchange modeling", European Physical Journal B, vol. 97 art. 69 (2024). External links Is Inequality Inevitable?; Scientific American, November 2019 When Physics Became Undisciplined (& Fathers of Econophysics): Cambridge University Thesis (2018) Conference to mark 25th anniversary of Farjoun and Machover's book Econophysics Colloquium Lectures Economic Fluctuations and Statistical Physics: Quantifying Extremely Rare and Much Less Rare Events, Eugene Stanley, Videolectures.net Applications of Statistical Physics to Understanding Complex Systems, Eugene Stanley, Videolectures.net Financial Bubbles, Real Estate Bubbles, Derivative Bubbles, and the Financial and Economic Crisis, Didier Sornette, Videolectures.net Financial crises and risk management, Didier Sornette, Videolectures.net Bubble trouble: how physics can quantify stock-market crashes, Tobias Preis, Physics World Online Lecture Series Applied and interdisciplinary physics Mathematical finance Schools of economic thought Statistical mechanics Interdisciplinary subfields of economics
Econophysics
[ "Physics", "Mathematics" ]
3,113
[ "Applied mathematics", "Statistical mechanics", "Applied and interdisciplinary physics", "Mathematical finance" ]
664,488
https://en.wikipedia.org/wiki/Reciprocal%20lattice
The reciprocal lattice is a term associated with solids with translational symmetry, and plays a major role in many areas such as X-ray and electron diffraction as well as the energies of electrons in a solid. It emerges from the Fourier transform of the lattice associated with the arrangement of the atoms. The direct lattice or real lattice is a periodic function in physical space, such as a crystal system (usually a Bravais lattice). The reciprocal lattice exists in the mathematical space of spatial frequencies, known as reciprocal space or k space, which is the dual of physical space considered as a vector space, and the reciprocal lattice is the sublattice of that space that is dual to the direct lattice. In quantum physics, reciprocal space is closely related to momentum space according to the proportionality , where is the momentum vector and is the reduced Planck constant. The reciprocal lattice of a reciprocal lattice is equivalent to the original direct lattice, because the defining equations are symmetrical with respect to the vectors in real and reciprocal space. Mathematically, direct and reciprocal lattice vectors represent covariant and contravariant vectors, respectively. The reciprocal lattice is the set of all vectors , that are wavevectors of plane waves in the Fourier series of a spatial function whose periodicity is the same as that of a direct lattice . Each plane wave in this Fourier series has the same phase or phases that are differed by multiples of at each direct lattice point (so essentially same phase at all the direct lattice points). The Brillouin zone is a Wigner–Seitz cell of the reciprocal lattice. Wave-based description Reciprocal space Reciprocal space (also called -space) provides a way to visualize the results of the Fourier transform of a spatial function. It is similar in role to the frequency domain arising from the Fourier transform of a time dependent function; reciprocal space is a space over which the Fourier transform of a spatial function is represented at spatial frequencies or wavevectors of plane waves of the Fourier transform. The domain of the spatial function itself is often referred to as real space. In physical applications, such as crystallography, both real and reciprocal space will often each be two or three dimensional. Whereas the number of spatial dimensions of these two associated spaces will be the same, the spaces will differ in their quantity dimension, so that when the real space has the dimension length (L), its reciprocal space will have inverse length, so L−1 (the reciprocal of length). Reciprocal space comes into play regarding waves, both classical and quantum mechanical. Because a sinusoidal plane wave with unit amplitude can be written as an oscillatory term , with initial phase , angular wavenumber and angular frequency , it can be regarded as a function of both and (and the time-varying part as a function of both and ). This complementary role of and leads to their visualization within complementary spaces (the real space and the reciprocal space). The spatial periodicity of this wave is defined by its wavelength , where ; hence the corresponding wavenumber in reciprocal space will be . In three dimensions, the corresponding plane wave term becomes , which simplifies to at a fixed time , where is the position vector of a point in real space and now is the wavevector in the three dimensional reciprocal space. (The magnitude of a wavevector is called wavenumber.) 
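Several inline expressions in this introductory passage did not survive the plain-text conversion. For reference, and stated here as standard conventions rather than as a reconstruction of the original markup, the plane wave referred to above is commonly written

$$e^{\,i(\mathbf{k}\cdot\mathbf{r} - \omega t + \varphi_{0})}, \qquad \mathbf{k} = \frac{2\pi}{\lambda}\,\hat{\mathbf{n}}, \qquad \mathbf{p} = \hbar\mathbf{k},$$

where φ₀ is the initial phase, λ the wavelength, n̂ a unit vector normal to the wavefronts, and the last relation is the proportionality between momentum and wavevector mentioned in the quantum-mechanical remark above.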
The constant is the phase of the wavefront (a plane of a constant phase) through the origin at time , and is a unit vector perpendicular to this wavefront. The wavefronts with phases , where represents any integer, comprise a set of parallel planes, equally spaced by the wavelength . Reciprocal lattice In general, a geometric lattice is an infinite, regular array of vertices (points) in space, which can be modelled vectorially as a Bravais lattice. Some lattices may be skew, which means that their primary lines may not necessarily be at right angles. In reciprocal space, a reciprocal lattice is defined as the set of wavevectors of plane waves in the Fourier series of any function whose periodicity is compatible with that of an initial direct lattice in real space. Equivalently, a wavevector is a vertex of the reciprocal lattice if it corresponds to a plane wave in real space whose phase at any given time is the same (actually differs by with an integer ) at every direct lattice vertex. One heuristic approach to constructing the reciprocal lattice in three dimensions is to write the position vector of a vertex of the direct lattice as , where the are integers defining the vertex and the are linearly independent primitive translation vectors (or shortly called primitive vectors) that are characteristic of the lattice. There is then a unique plane wave (up to a factor of negative one), whose wavefront through the origin contains the direct lattice points at and , and with its adjacent wavefront (whose phase differs by or from the former wavefront passing the origin) passing through . Its angular wavevector takes the form , where is the unit vector perpendicular to these two adjacent wavefronts and the wavelength must satisfy , means that is equal to the distance between the two wavefronts. Hence by construction and . Cycling through the indices in turn, the same method yields three wavevectors with , where the Kronecker delta equals one when and is zero otherwise. The comprise a set of three primitive wavevectors or three primitive translation vectors for the reciprocal lattice, each of whose vertices takes the form , where the are integers. The reciprocal lattice is also a Bravais lattice as it is formed by integer combinations of the primitive vectors, that are , , and in this case. Simple algebra then shows that, for any plane wave with a wavevector on the reciprocal lattice, the total phase shift between the origin and any point on the direct lattice is a multiple of (that can be possibly zero if the multiplier is zero), so the phase of the plane wave with will essentially be equal for every direct lattice vertex, in conformity with the reciprocal lattice definition above. (Although any wavevector on the reciprocal lattice does always take this form, this derivation is motivational, rather than rigorous, because it has omitted the proof that no other possibilities exist.) The Brillouin zone is a primitive cell (more specifically a Wigner–Seitz cell) of the reciprocal lattice, which plays an important role in solid state physics due to Bloch's theorem. In pure mathematics, the dual space of linear forms and the dual lattice provide more abstract generalizations of reciprocal space and the reciprocal lattice. Mathematical description Assuming a three-dimensional Bravais lattice and labelling each lattice vector (a vector indicating a lattice point) by the subscript as 3-tuple of integers, where where is the set of integers and is a primitive translation vector or shortly primitive vector. 
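Summarizing the heuristic construction just described in standard notation (the symbols below are conventional, supplied here because the original inline formulas were lost): with direct lattice vectors R_n = n₁a₁ + n₂a₂ + n₃a₃, the three reciprocal primitive vectors b_i satisfy

$$\mathbf{a}_{i}\cdot\mathbf{b}_{j} = 2\pi\,\delta_{ij}, \qquad \mathbf{G}_{m} = m_{1}\mathbf{b}_{1} + m_{2}\mathbf{b}_{2} + m_{3}\mathbf{b}_{3}, \qquad e^{\,i\,\mathbf{G}_{m}\cdot\mathbf{R}_{n}} = 1 \quad \text{for all integers } n_{i},\, m_{i},$$

which is exactly the statement that every reciprocal lattice vector G_m corresponds to a plane wave whose phase is the same, up to multiples of 2π, at every direct lattice point.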
Taking a function where is a position vector from the origin to any position, if follows the periodicity of this lattice, e.g. the function describing the electronic density in an atomic crystal, it is useful to write as a multi-dimensional Fourier series where now the subscript , so this is a triple sum. As follows the periodicity of the lattice, translating by any lattice vector we get the same value, hence Expressing the above instead in terms of their Fourier series we have Because equality of two Fourier series implies equality of their coefficients, , which only holds when where Mathematically, the reciprocal lattice is the set of all vectors , that are wavevectors of plane waves in the Fourier series of a spatial function whose periodicity is the same as that of a direct lattice as the set of all direct lattice point position vectors , and satisfy this equality for all . Each plane wave in the Fourier series has the same phase (actually can be differed by a multiple of ) at all the lattice point . As shown in the section multi-dimensional Fourier series, can be chosen in the form of where . With this form, the reciprocal lattice as the set of all wavevectors for the Fourier series of a spatial function which periodicity follows , is itself a Bravais lattice as it is formed by integer combinations of its own primitive translation vectors , and the reciprocal of the reciprocal lattice is the original lattice, which reveals the Pontryagin duality of their respective vector spaces. (There may be other form of . Any valid form of results in the same reciprocal lattice.) Two dimensions For an infinite two-dimensional lattice, defined by its primitive vectors , its reciprocal lattice can be determined by generating its two reciprocal primitive vectors, through the following formulae, where is an integer and Here represents a 90 degree rotation matrix, i.e. a quarter turn. The anti-clockwise rotation and the clockwise rotation can both be used to determine the reciprocal lattice: If is the anti-clockwise rotation and is the clockwise rotation, for all vectors . Thus, using the permutation we obtain Notably, in a 3D space this 2D reciprocal lattice is an infinitely extended set of Bragg rods—described by Sung et al. Three dimensions For an infinite three-dimensional lattice , defined by its primitive vectors and the subscript of integers , its reciprocal lattice with the integer subscript can be determined by generating its three reciprocal primitive vectors where is the scalar triple product. The choice of these is to satisfy as the known condition (There may be other condition.) of primitive translation vectors for the reciprocal lattice derived in the heuristic approach above and the section multi-dimensional Fourier series. This choice also satisfies the requirement of the reciprocal lattice mathematically derived above. Using column vector representation of (reciprocal) primitive vectors, the formulae above can be rewritten using matrix inversion: This method appeals to the definition, and allows generalization to arbitrary dimensions. The cross product formula dominates introductory materials on crystallography. The above definition is called the "physics" definition, as the factor of comes naturally from the study of periodic structures. An essentially equivalent definition, the "crystallographer's" definition, comes from defining the reciprocal lattice . which changes the reciprocal primitive vectors to be and so on for the other primitive vectors. 
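Returning to the matrix-inversion formulation in the physics convention described above, the following is a minimal numerical sketch (the function name and the use of NumPy are illustrative choices, not from the article); rows of the input matrix are the direct primitive vectors, and the rows of the output satisfy a_i · b_j = 2πδ_ij. Applied to the FCC primitive vectors it reproduces the BCC-type reciprocal set discussed later in the article.

```python
import numpy as np

def reciprocal_vectors(direct):
    """Rows of `direct` are the primitive vectors a_i; rows of the result are the
    reciprocal primitive vectors b_i, satisfying a_i . b_j = 2*pi*delta_ij."""
    return 2.0 * np.pi * np.linalg.inv(direct).T

# FCC primitive vectors for a conventional cube of side a = 1 (illustrative units).
fcc = 0.5 * np.array([[0.0, 1.0, 1.0],
                      [1.0, 0.0, 1.0],
                      [1.0, 1.0, 0.0]])
b = reciprocal_vectors(fcc)
print(b / (2.0 * np.pi))  # rows (-1, 1, 1), (1, -1, 1), (1, 1, -1): a BCC-type set
```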
The crystallographer's definition has the advantage that the definition of is just the reciprocal magnitude of in the direction of , dropping the factor of . This can simplify certain mathematical manipulations, and expresses reciprocal lattice dimensions in units of spatial frequency. It is a matter of taste which definition of the lattice is used, as long as the two are not mixed. is conventionally written as or , called Miller indices; is replaced with , replaced with , and replaced with . Each lattice point in the reciprocal lattice corresponds to a set of lattice planes in the real space lattice. (A lattice plane is a plane crossing lattice points.) The direction of the reciprocal lattice vector corresponds to the normal to the real space planes. The magnitude of the reciprocal lattice vector is given in reciprocal length and is equal to the reciprocal of the interplanar spacing of the real space planes. Higher dimensions The formula for dimensions can be derived assuming an -dimensional real vector space with a basis and an inner product . The reciprocal lattice vectors are uniquely determined by the formula . Using the permutation they can be determined with the following formula: Here, is the volume form, is the inverse of the vector space isomorphism defined by and denotes the inner multiplication. One can verify that this formula is equivalent to the known formulas for the two- and three-dimensional case by using the following facts: In three dimensions, and in two dimensions, , where is the rotation by 90 degrees (just like the volume form, the angle assigned to a rotation depends on the choice of orientation). Reciprocal lattices of various crystals Reciprocal lattices for the cubic crystal system are as follows. Simple cubic lattice The simple cubic Bravais lattice, with cubic primitive cell of side , has for its reciprocal a simple cubic lattice with a cubic primitive cell of side (or in the crystallographer's definition). The cubic lattice is therefore said to be self-dual, having the same symmetry in reciprocal space as in real space. Face-centered cubic (FCC) lattice The reciprocal lattice to an FCC lattice is the body-centered cubic (BCC) lattice, with a cube side of . Consider an FCC compound unit cell. Locate a primitive unit cell of the FCC; i.e., a unit cell with one lattice point. Now take one of the vertices of the primitive unit cell as the origin. Give the basis vectors of the real lattice. Then from the known formulae, you can calculate the basis vectors of the reciprocal lattice. These reciprocal lattice vectors of the FCC represent the basis vectors of a BCC real lattice. The basis vectors of a real BCC lattice and the reciprocal lattice of an FCC resemble each other in direction but not in magnitude. Body-centered cubic (BCC) lattice The reciprocal lattice to a BCC lattice is the FCC lattice, with a cube side of . It can be proven that only the Bravais lattices which have 90 degrees between (cubic, tetragonal, orthorhombic) have primitive translation vectors for the reciprocal lattice, , parallel to their real-space vectors. Simple hexagonal lattice The reciprocal to a simple hexagonal Bravais lattice with lattice constants and is another simple hexagonal lattice with lattice constants and rotated through 90° about the c axis with respect to the direct lattice. The simple hexagonal lattice is therefore said to be self-dual, having the same symmetry in reciprocal space as in real space. 
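As a worked illustration of the Miller-index relation described above (standard results, restated here because the inline formulas were lost): in the physics convention, G_hkℓ = h b₁ + k b₂ + ℓ b₃ and the interplanar spacing is

$$d_{hk\ell} = \frac{2\pi}{\lvert \mathbf{G}_{hk\ell} \rvert},$$

so for the simple cubic lattice of side a, where each b_i has magnitude 2π/a along a cube axis, this gives the familiar d_{hkℓ} = a/√(h² + k² + ℓ²); in the crystallographer's convention the factor of 2π is absent and d_{hkℓ} = 1/|G_{hkℓ}|.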
Primitive translation vectors for this simple hexagonal Bravais lattice vectors are Arbitrary collection of atoms One path to the reciprocal lattice of an arbitrary collection of atoms comes from the idea of scattered waves in the Fraunhofer (long-distance or lens back-focal-plane) limit as a Huygens-style sum of amplitudes from all points of scattering (in this case from each individual atom). This sum is denoted by the complex amplitude in the equation below, because it is also the Fourier transform (as a function of spatial frequency or reciprocal distance) of an effective scattering potential in direct space: Here g = q/(2) is the scattering vector q in crystallographer units, N is the number of atoms, fj[g] is the atomic scattering factor for atom j and scattering vector g, while rj is the vector position of atom j. The Fourier phase depends on one's choice of coordinate origin. For the special case of an infinite periodic crystal, the scattered amplitude F = M Fh,k,ℓ from M unit cells (as in the cases above) turns out to be non-zero only for integer values of , where when there are j = 1,m atoms inside the unit cell whose fractional lattice indices are respectively {uj, vj, wj}. To consider effects due to finite crystal size, of course, a shape convolution for each point or the equation above for a finite lattice must be used instead. Whether the array of atoms is finite or infinite, one can also imagine an "intensity reciprocal lattice" I[g], which relates to the amplitude lattice F via the usual relation I = F*F where F* is the complex conjugate of F. Since Fourier transformation is reversible, of course, this act of conversion to intensity tosses out "all except 2nd moment" (i.e. the phase) information. For the case of an arbitrary collection of atoms, the intensity reciprocal lattice is therefore: Here rjk is the vector separation between atom j and atom k. One can also use this to predict the effect of nano-crystallite shape, and subtle changes in beam orientation, on detected diffraction peaks even if in some directions the cluster is only one atom thick. On the down side, scattering calculations using the reciprocal lattice basically consider an incident plane wave. Thus after a first look at reciprocal lattice (kinematic scattering) effects, beam broadening and multiple scattering (i.e. dynamical) effects may be important to consider as well. Generalization of a dual lattice There are actually two versions in mathematics of the abstract dual lattice concept, for a given lattice L in a real vector space V, of finite dimension. The first, which generalises directly the reciprocal lattice construction, uses Fourier analysis. It may be stated simply in terms of Pontryagin duality. The dual group V^ to V is again a real vector space, and its closed subgroup L^ dual to L turns out to be a lattice in V^. Therefore, L^ is the natural candidate for dual lattice, in a different vector space (of the same dimension). The other aspect is seen in the presence of a quadratic form Q on V; if it is non-degenerate it allows an identification of the dual space V* of V with V. The relation of V* to V is not intrinsic; it depends on a choice of Haar measure (volume element) on V. But given an identification of the two, which is in any case well-defined up to a scalar, the presence of Q allows one to speak to the dual lattice to L while staying within V. 
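In concrete terms (a standard definition, added here for reference), given a real inner product the dual lattice used in this identification is

$$L^{*} = \{\, \mathbf{y} \in V : \langle \mathbf{x}, \mathbf{y} \rangle \in \mathbb{Z} \ \text{ for all } \mathbf{x} \in L \,\},$$

and the crystallographic reciprocal lattice is recovered from it by the rescaling G = 2π y (or coincides with it directly in the crystallographer's convention, which omits the factor of 2π).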
In mathematics, the dual lattice of a given lattice L in an abelian locally compact topological group G is the subgroup L∗ of the dual group of G consisting of all continuous characters that are equal to one at each point of L. In discrete mathematics, a lattice is a locally discrete set of points described by all integral linear combinations of linearly independent vectors in Rn. The dual lattice is then defined by all points in the linear span of the original lattice (typically all of Rn) with the property that an integer results from the inner product with all elements of the original lattice. It follows that the dual of the dual lattice is the original lattice. Furthermore, if we allow the matrix B to have columns as the linearly independent vectors that describe the lattice, then the matrix has columns of vectors that describe the dual lattice. See also References External links http://newton.umsl.edu/run//nano/known.html – Jmol-based electron diffraction simulator lets you explore the intersection between reciprocal lattice and Ewald sphere during tilt. DoITPoMS Teaching and Learning Package on Reciprocal Space and the Reciprocal Lattice Learn easily crystallography and how the reciprocal lattice explains the diffraction phenomenon, as shown in chapters 4 and 5 Crystallography Fourier analysis Lattice points Neutron-related techniques Synchrotron-related techniques Diffraction Condensed matter physics
Reciprocal lattice
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
3,789
[ "Spectrum (physical sciences)", "Lattice points", "Phases of matter", "Materials science", "Number theory", "Crystallography", "Diffraction", "Condensed matter physics", "Spectroscopy", "Matter" ]
665,728
https://en.wikipedia.org/wiki/Shape-memory%20alloy
In metallurgy, a shape-memory alloy (SMA) is an alloy that can be deformed when cold but returns to its pre-deformed ("remembered") shape when heated. It is also known in other names such as memory metal, memory alloy, smart metal, smart alloy, and muscle wire. The "memorized geometry" can be modified by fixating the desired geometry and subjecting it to a thermal treatment, for example a wire can be taught to memorize the shape of a coil spring. Parts made of shape-memory alloys can be lightweight, solid-state alternatives to conventional actuators such as hydraulic, pneumatic, and motor-based systems. They can also be used to make hermetic joints in metal tubing, and it can also replace a sensor-actuator closed loop to control water temperature by governing hot and cold water flow ratio. Overview The two most prevalent shape-memory alloys are copper-aluminium-nickel and nickel-titanium (NiTi), but SMAs can also be created by alloying zinc, copper, gold and iron. Although iron-based and copper-based SMAs, such as Fe-Mn-Si, Cu-Zn-Al and Cu-Al-Ni, are commercially available and cheaper than NiTi, NiTi-based SMAs are preferable for most applications due to their stability and practicability as well as their superior thermo-mechanical performance. SMAs can exist in two different phases, with three different crystal structures (i.e. twinned martensite, detwinned martensite, and austenite) and six possible transformations. The thermo-mechanic behavior of the SMAs is governed by a phase transformation between the austenite and the martensite. NiTi alloys change from austenite to martensite upon cooling starting from a temperature below Ms; Mf is the temperature at which the transition to martensite completes upon cooling. Accordingly, during heating As and Af are the temperatures at which the transformation from martensite to austenite starts and finishes. Applying a mechanical load to the martensite leads to a re-orientation of the crystals, referred to as “de-twinning”, which results in a deformation which is not recovered (remembered) after releasing the mechanical load. De-twinning starts at a certain stress σs and ends at σf above which martensite continue exhibiting only elastic behavior (as long as the load is below the yield stress). The memorized deformation from detwinning is recovered after heating to austenite. The phase transformation from austenite to martensite can also occur at constant temperature by applying a mechanical load above a certain level. The transformation is reversed when the load is released. The transition from the martensite phase to the austenite phase is only dependent on temperature and stress, not time, as most phase changes are, as there is no diffusion involved. Similarly, the austenite structure receives its name from steel alloys of a similar structure. It is the reversible diffusionless transition between these two phases that results in special properties. While martensite can be formed from austenite by rapidly cooling carbon-steel, this process is not reversible, so steel does not have shape-memory properties. In this figure the vertical axis represents the martensite fraction. The difference between the heating transition and the cooling transition gives rise to hysteresis where some of the mechanical energy is lost in the process. The shape of the curve depends on the material properties of the shape-memory alloy, such as the alloy's composition and work hardening. 
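A minimal numerical sketch of the hysteresis between the cooling and heating transformations described above is given below; the transformation temperatures and the cosine ramps are purely illustrative assumptions for plotting the martensite fraction, not measured values for any particular alloy and not a calibrated constitutive model.

```python
import numpy as np

# Illustrative martensite-fraction curves: cooling transforms austenite to martensite
# between Ms and Mf, heating transforms it back between As and Af. The offset between
# the two curves is the hysteresis discussed above. Temperatures are hypothetical.
MS, MF, AS, AF = 60.0, 40.0, 70.0, 90.0   # degrees Celsius, illustrative only

def martensite_fraction_cooling(temp_c):
    x = np.clip((MS - temp_c) / (MS - MF), 0.0, 1.0)   # 0 above Ms, 1 below Mf
    return 0.5 * (1.0 - np.cos(np.pi * x))

def martensite_fraction_heating(temp_c):
    x = np.clip((temp_c - AS) / (AF - AS), 0.0, 1.0)   # 1 below As, 0 above Af
    return 0.5 * (1.0 + np.cos(np.pi * x))

temps = np.linspace(20.0, 110.0, 10)
print(martensite_fraction_cooling(temps))
print(martensite_fraction_heating(temps))
```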
Shape memory effect The shape memory effect (SME) occurs because a temperature-induced phase transformation reverses deformation, as shown in the previous hysteresis curve. Typically the martensitic phase is monoclinic or orthorhombic (B19' or B19). Since these crystal structures do not have enough slip systems for easy dislocation motion, they deform by twinning—or rather, detwinning. Martensite is thermodynamically favored at lower temperatures, while austenite (B2 cubic) is thermodynamically favored at higher temperatures. Since these structures have different lattice sizes and symmetry, cooling austenite into martensite introduces internal strain energy in the martensitic phase. To reduce this energy, the martensitic phase forms many twins—this is called "self-accommodating twinning" and is the twinning version of geometrically necessary dislocations. Since the shape memory alloy will be manufactured from a higher temperature and is usually engineered so that the martensitic phase is dominant at operating temperature to take advantage of the shape memory effect, SMAs "start" highly twinned. When the martensite is loaded, these self-accommodating twins provide an easy path for deformation. Applied stresses will detwin the martensite, but all of the atoms stay in the same position relative to the nearby atoms—no atomic bonds are broken or reformed (as they would be by dislocation motion). Thus, when the temperature is raised and austenite becomes thermodynamically favored, all of the atoms rearrange to the B2 structure which happens to be the same macroscopic shape as the B19' pre-deformation shape. This phase transformation happens extremely quickly and gives SMAs their distinctive "snap". Repeated use of the shape-memory effect may lead to a shift of the characteristic transformation temperatures (this effect is known as functional fatigue, as it is closely related with a change of microstructural and functional properties of the material). The maximum temperature at which SMAs can no longer be stress induced is called Md, where the SMAs are permanently deformed. One-way vs. two-way shape memory Shape-memory alloys have different shape-memory effects. The two common effects are one-way SMA and two-way SMA. A schematic of the effects is shown below. The procedures are very similar: starting from martensite, adding a deformation, heating the sample and cooling it again. One-way memory effect When a shape-memory alloy is in its cold state (below Mf), the metal can be bent or stretched and will hold those shapes until heated above the transition temperature. Upon heating, the shape changes to its original. When the metal cools again, it will retain the shape, until deformed again. With the one-way effect, cooling from high temperatures does not cause a macroscopic shape change. A deformation is necessary to create the low-temperature shape. On heating, transformation starts at As and is completed at Af (typically 2 to 20 °C or hotter, depending on the alloy or the loading conditions). As is determined by the alloy type and composition and can vary between and . Two way effect The two-way shape-memory effect is the effect that the material remembers two different shapes: one at low temperatures, and one at the high temperature. A material that shows a shape-memory effect during both heating and cooling is said to have two-way shape memory. This can also be obtained without the application of an external force (intrinsic two-way effect). 
The reason the material behaves so differently in these situations lies in training. Training implies that a shape memory can "learn" to behave in a certain way. Under normal circumstances, a shape-memory alloy "remembers" its low-temperature shape, but upon heating to recover the high-temperature shape, immediately "forgets" the low-temperature shape. However, it can be "trained" to "remember" to leave some reminders of the deformed low-temperature condition in the high-temperature phases. One way of training the SMA consists in applying a cyclic thermal load under constant stress field. During this process, internal defects are introduced into the microstructure which generates internal permanent stresses that facilitate the orientation of the martensitic crystals. Therefore, while cooling a trained SMA in austenitic phase under no applied stress, the martensite is formed detwinned due to the internal stresses, which leads to the material shape change. And while heating back the SMA into austenite, it recovers its initial shape. There are several ways of doing this. A shaped, trained object heated beyond a certain point will lose the two-way memory effect. Pseudoelasticity SMAs display a phenomenon sometimes called superelasticity, but is more accurately described as pseudoelasticity. “Superelasticity” implies that the atomic bonds between atoms stretch to an extreme length without incurring plastic deformation. Pseudoelasticity still achieves large, recoverable strains with little to no permanent deformation, but it relies on more complex mechanisms. SMAs exhibit at least 3 kinds of pseudoelasticty. The two less-studied kinds of pseudoelasticity are pseudo-twin formation and rubber-like behavior due to short range order. The main pseudoelastic effect comes from a stress-induced phase transformation. The figure on the right exhibits how this process occurs. Here a load is isothermally applied to a SMA above the austenite finish temperature, Af, but below the martensite deformation temperature, Md. The figure above illustrates how this is possible, by relating the pseudoelastic stress-induced phase transformation to the shape memory effect temperature induced phase transformation. For a particular point on Af, it is possible to choose a point on the Ms  line with a higher temperature, as long as that point Md also has a higher stress. The material initially exhibits typical elastic-plastic behavior for metals. However, once the material reaches the martensitic stress, the austenite will transform to martensite and detwin. As previously discussed, this detwinning is reversible when transforming back from martensite to austenite. If large stresses are applied, plastic behavior such as detwinning and slip of the martensite will initiate at sites such as grain boundaries or inclusions. If the material is unloaded before plastic deformation occurs, it will revert to austenite once a critical stress for austenite is reached (σas). The material will recover nearly all strain that was induced from the structural change, and for some SMAs this can be strains greater than 10 percent. This hysteresis loop shows the work done for each cycle of the material between states of small and large deformations, which is important for many applications. In a plot of strain versus temperature, the austenite and martensite start and finish lines run parallel. The SME and pseudoelasticity are actually different parts of the same phenomenon, as shown on the left. 
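One quantitative point implicit in the loop description above: the mechanical work dissipated per unit volume in each superelastic loading–unloading cycle is the area enclosed by the stress–strain loop,

$$W_{\text{cycle}} = \oint \sigma \, d\varepsilon ,$$

which is why the size of the hysteresis loop matters for damping applications such as the vibration dampers mentioned later in the article.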
The key to the large strain deformations is the difference in crystal structure between the two phases. Austenite generally has a cubic structure while martensite can be monoclinic or another structure different from the parent phase, typically with lower symmetry. For a monoclinic martensitic material such as Nitinol, the monoclinic phase has lower symmetry which is important as certain crystallographic orientations will accommodate higher strains compared to other orientations when under an applied stress. Thus it follows that the material will tend to form orientations that maximize the overall strain prior to any increase in applied stress. One mechanism that aids in this process is the twinning of the martensite phase. In crystallography, a twin boundary is a two-dimensional defect in which the stacking of atomic planes of the lattice are mirrored across the plane of the boundary. Depending on stress and temperature, these deformation processes will compete with permanent deformation such as slip. σms is dependent on parameters such as temperature and the number of nucleation sites for phase nucleation. Interfaces and inclusions will provide general sites for the transformation to begin, and if these are great in number, it will increase the driving force for nucleation. A smaller σms will be needed than for homogeneous nucleation. Likewise, increasing temperature will reduce the driving force for the phase transformation, so a larger σms will be necessary. One can see that as you increase the operational temperature of the SMA, σms will be greater than the yield strength, σy, and superelasticity will no longer be observable. History The first reported steps towards the discovery of the shape-memory effect were taken in the 1930s. According to Otsuka and Wayman, Arne Ölander discovered the pseudoelastic behavior of the Au-Cd alloy in 1932. Greninger and Mooradian (1938) observed the formation and disappearance of a martensitic phase by decreasing and increasing the temperature of a Cu-Zn alloy. The basic phenomenon of the memory effect governed by the thermoelastic behavior of the martensite phase was widely reported a decade later by Kurdjumov and Khandros (1949) and also by Chang and Read (1951). The nickel-titanium alloys were first developed in 1962–1963 by the United States Naval Ordnance Laboratory and commercialized under the trade name Nitinol (an acronym for Nickel Titanium Naval Ordnance Laboratories). Their remarkable properties were discovered by accident. A sample that was bent out of shape many times was presented at a laboratory management meeting. One of the associate technical directors, Dr. David S. Muzzey, decided to see what would happen if the sample was subjected to heat and held his pipe lighter underneath it. To everyone's amazement the sample stretched back to its original shape. There is another type of SMA, called a ferromagnetic shape-memory alloy (FSMA), that changes shape under strong magnetic fields. These materials are of particular interest as the magnetic response tends to be faster and more efficient than temperature-induced responses. Metal alloys are not the only thermally-responsive materials; shape-memory polymers have also been developed, and became commercially available in the late 1990s. Crystal structures Many metals have several different crystal structures at the same composition, but most metals do not show this shape-memory effect. 
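The temperature dependence of the transformation stress just described is often summarized, in phenomenological SMA models, by a Clausius–Clapeyron-type relation; the linear form below is a common modeling assumption rather than a statement from this article, with C_M a material constant (the stress-influence coefficient):

$$\frac{d\sigma^{Ms}}{dT} \approx C_{M} \qquad\Rightarrow\qquad \sigma^{Ms}(T) \approx C_{M}\,(T - M_{s}) \quad \text{for } T > M_{s},$$

so raising the operating temperature raises the stress needed to induce martensite, consistent with the statement above that superelasticity disappears once σms exceeds the yield strength.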
The special property that allows shape-memory alloys to revert to their original shape after heating is that their crystal transformation is fully reversible. In most crystal transformations, the atoms in the structure will travel through the metal by diffusion, changing the composition locally, even though the metal as a whole is made of the same atoms. A reversible transformation does not involve this diffusion of atoms, instead all the atoms shift at the same time to form a new structure, much in the way a parallelogram can be made out of a square by pushing on two opposing sides. At different temperatures, different structures are preferred and when the structure is cooled through the transition temperature, the martensitic structure forms from the austenitic phase. Manufacture Shape-memory alloys are typically made by casting, using vacuum arc melting or induction melting. These are specialist techniques used to keep impurities in the alloy to a minimum and ensure the metals are well mixed. The ingot is then hot rolled into longer sections and then drawn to turn it into wire. The way in which the alloys are "trained" depends on the properties wanted. The "training" dictates the shape that the alloy will remember when it is heated. This occurs by heating the alloy so that the dislocations re-order into stable positions, but not so hot that the material recrystallizes. They are heated to between and for 30 minutes, shaped while hot, and then are cooled rapidly by quenching in water or by cooling with air. Properties The copper-based and NiTi-based shape-memory alloys are considered to be engineering materials. These compositions can be manufactured to almost any shape and size. The yield strength of shape-memory alloys is lower than that of conventional steel, but some compositions have a higher yield strength than plastic or aluminum. The yield stress for Ni Ti can reach . The high cost of the metal itself and the processing requirements make it difficult and expensive to implement SMAs into a design. As a result, these materials are used in applications where the super elastic properties or the shape-memory effect can be exploited. The most common application is in actuation. One of the advantages to using shape-memory alloys is the high level of recoverable plastic strain that can be induced. The maximum recoverable strain these materials can hold without permanent damage is up to for some alloys. This compares with a maximum strain for conventional steels. Practical limitations SMA have many advantages over traditional actuators, but do suffer from a series of limitations that may impede practical application. In numerous studies, it was emphasised that only a few of patented shape memory alloy applications are commercially successful due to material limitations combined with a lack of material and design knowledge and associated tools, such as improper design approaches and techniques used. The challenges in designing SMA applications are to overcome their limitations, which include a relatively small usable strain, low actuation frequency, low controllability, low accuracy and low energy efficiency. Response time and response symmetry SMA actuators are typically actuated electrically, where an electric current results in Joule heating. Deactivation typically occurs by free convective heat transfer to the ambient environment. Consequently, SMA actuation is typically asymmetric, with a relatively fast actuation time and a slow deactuation time. 
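The heating/cooling asymmetry described above can be illustrated with a lumped-capacitance energy balance for a resistively heated wire; everything below (parameter values, function name) is a hypothetical sketch, not data for a real SMA wire.

```python
import numpy as np

# Lumped thermal model: m*c*dT/dt = I^2*R - h*A*(T - T_ambient).
# Joule heating can be made fast by raising the current, but cooling is limited
# by free convection, which is why deactivation is typically the slower step.
def wire_temperature(times_s, current_a, resistance_ohm, h_w_m2k, area_m2,
                     mass_kg, c_j_kgk, t_ambient_c=25.0, t_start_c=25.0):
    dt = times_s[1] - times_s[0]
    temp = np.empty_like(times_s)
    temp[0] = t_start_c
    for i in range(1, len(times_s)):
        joule = current_a ** 2 * resistance_ohm                       # heating power, W
        convection = h_w_m2k * area_m2 * (temp[i - 1] - t_ambient_c)  # convective loss, W
        temp[i] = temp[i - 1] + dt * (joule - convection) / (mass_kg * c_j_kgk)
    return temp

t = np.linspace(0.0, 30.0, 3001)
heating = wire_temperature(t, 0.5, 5.0, 50.0, 3e-4, 5e-4, 500.0)
cooling = wire_temperature(t, 0.0, 5.0, 50.0, 3e-4, 5e-4, 500.0, t_start_c=heating[-1])
print(heating[-1], cooling[-1])  # temperature after heating vs. after free cooling
```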
A number of methods have been proposed to reduce SMA deactivation time, including forced convection, and lagging the SMA with a conductive material in order to manipulate the heat transfer rate. Novel methods to enhance the feasibility of SMA actuators include the use of a conductive "lagging". this method uses a thermal paste to rapidly transfer heat from the SMA by conduction. This heat is then more readily transferred to the environment by convection as the outer radii (and heat transfer area) are significantly greater than for the bare wire. This method results in a significant reduction in deactivation time and a symmetric activation profile. As a consequence of the increased heat transfer rate, the required current to achieve a given actuation force is increased. Structural fatigue and functional fatigue SMA is subject to structural fatigue – a failure mode by which cyclic loading results in the initiation and propagation of a crack that eventually results in catastrophic loss of function by fracture. The physics behind this fatigue mode is accumulation of microstructural damage during cyclic loading. This failure mode is observed in most engineering materials, not just SMAs. SMAs are also subject to functional fatigue, a failure mode not typical of most engineering materials, whereby the SMA does not fail structurally but loses its shape-memory/superelastic characteristics over time. As a result of cyclic loading (both mechanical and thermal), the material loses its ability to undergo a reversible phase transformation. For example, the working displacement in an actuator decreases with increasing cycle numbers. The physics behind this is gradual change in microstructure—more specifically, the buildup of accommodation slip dislocations. This is often accompanied by a significant change in transformation temperatures. Design of SMA actuators may also influence both structural and functional fatigue of SMA, such as the pulley configurations in SMA-Pulley system. Unintended actuation SMA actuators are typically actuated electrically by Joule heating. If the SMA is used in an environment where the ambient temperature is uncontrolled, unintentional actuation by ambient heating may occur. Applications Industrial Aircraft and spacecraft Boeing, General Electric Aircraft Engines, Goodrich Corporation, NASA, Texas A&M University and All Nippon Airways developed the Variable Geometry Chevron using a NiTi SMA. Such a variable area fan nozzle (VAFN) design would allow for quieter and more efficient jet engines in the future. In 2005 and 2006, Boeing conducted successful flight testing of this technology. SMAs are being explored as vibration dampers for launch vehicles and commercial jet engines. The large amount of hysteresis observed during the superelastic effect allow SMAs to dissipate energy and dampen vibrations. These materials show promise for reducing the high vibration loads on payloads during launch as well as on fan blades in commercial jet engines, allowing for more lightweight and efficient designs. SMAs also exhibit potential for other high shock applications such as ball bearings and landing gear. There is also strong interest in using SMAs for a variety of actuator applications in commercial jet engines, which would significantly reduce their weight and boost efficiency. Further research needs to be conducted in this area, however, to increase the transformation temperatures and improve the mechanical properties of these materials before they can be successfully implemented. 
A review of recent advances in high-temperature shape-memory alloys (HTSMAs) is presented by Ma et al. A variety of wing-morphing technologies are also being explored. Automotive The first high-volume product (> 5Mio actuators / year) is an automotive valve used to control low pressure pneumatic bladders in a car seat that adjust the contour of the lumbar support / bolsters. The overall benefits of SMA over traditionally-used solenoids in this application (lower noise/EMC/weight/form factor/power consumption) were the crucial factor in the decision to replace the old standard technology with SMA. The 2014 Chevrolet Corvette became the first vehicle to incorporate SMA actuators, which replaced heavier motorized actuators to open and close the hatch vent that releases air from the trunk, making it easier to close. A variety of other applications are also being targeted, including electric generators to generate electricity from exhaust heat and on-demand air dams to optimize aerodynamics at various speeds. Robotics There have also been limited studies on using these materials in robotics, for example the hobbyist robot Stiquito (and "Roboterfrau Lara"), as they make it possible to create very lightweight robots. Recently, a prosthetic hand was introduced by Loh et al. that can almost replicate the motions of a human hand [Loh2005]. Other biomimetic applications are also being explored. Weak points of the technology are energy inefficiency, slow response times, and large hysteresis. Valves SMAs are also used for actuating valves. The SMA valves are particularly compact in design. Bio-engineered robotic hand There is some SMA-based prototypes of robotic hand that using shape memory effect (SME) to move fingers. Civil structures SMAs find a variety of applications in civil structures such as bridges and buildings. In the form of rebars or plates, they can be used for flexural, shear and seismic strengthening of concrete and steel structures. Another application is Intelligent Reinforced Concrete (IRC), which incorporates SMA wires embedded within the concrete. These wires can sense cracks and contract to heal micro-sized cracks. Also the active tuning of structural natural frequency using SMA wires to dampen vibrations is possible, as well as the usage of SMA fibers in concrete. Piping The first consumer commercial application was a shape-memory coupling for piping, e.g. oil pipe lines, for industrial applications, water pipes and similar types of piping for consumer/commercial applications. Consumer electronics Smartphone cameras Several smartphone companies have released handsets with optical image stabilisation (OIS) modules incorporating SMA actuators, manufactured under licence from Cambridge Mechatronics. Medicine Shape-memory alloys are applied in medicine, for example, as fixation devices for osteotomies in orthopaedic surgery, as the actuator in surgical tools; active steerable surgical needles for minimally invasive percutaneous cancer interventions in the surgical procedures such as biopsy and brachytherapy, in dental braces to exert constant tooth-moving forces on the teeth, in Capsule Endoscopy they can be used as a trigger for biopsy action. The late 1980s saw the commercial introduction of Nitinol as an enabling technology in a number of minimally invasive endovascular medical applications. 
While more costly than stainless steel, the self expanding properties of Nitinol alloys manufactured to BTR (Body Temperature Response), have provided an attractive alternative to balloon expandable devices in stent grafts where it gives the ability to adapt to the shape of certain blood vessels when exposed to body temperature. On average, of all peripheral vascular stents currently available on the worldwide market are manufactured with Nitinol. Optometry Eyeglass frames made from titanium-containing SMAs are marketed under the trademarks Flexon and TITANflex. These frames are usually made out of shape-memory alloys that have their transition temperature set below the expected room temperature. This allows the frames to undergo large deformation under stress, yet regain their intended shape once the metal is unloaded again. The very large apparently elastic strains are due to the stress-induced martensitic effect, where the crystal structure can transform under loading, allowing the shape to change temporarily under load. This means that eyeglasses made of shape-memory alloys are more robust against being accidentally damaged. Orthopedic surgery Memory metal has been utilized in orthopedic surgery as a fixation-compression device for osteotomies, typically for lower extremity procedures. The device, usually in the form of a large staple, is stored in a refrigerator in its malleable form and is implanted into pre-drilled holes in the bone across an osteotomy. As the staple warms it returns to its non-malleable state and compresses the bony surfaces together to promote bone union. Dentistry The range of applications for SMAs has grown over the years, a major area of development being dentistry. One example is the prevalence of dental braces using SMA technology to exert constant tooth-moving forces on the teeth; the nitinol archwire was developed in 1972 by orthodontist George Andreasen. This revolutionized clinical orthodontics. Andreasen's alloy has a patterned shape memory, expanding and contracting within given temperature ranges because of its geometric programming. Harmeet D. Walia later utilized the alloy in the manufacture of root canal files for endodontics. Essential tremor Traditional active cancellation techniques for tremor reduction use electrical, hydraulic, or pneumatic systems to actuate an object in the direction opposite to the disturbance. However, these systems are limited due to the large infrastructure required to produce large amplitudes of power at human tremor frequencies. SMAs have proven to be an effective method of actuation in hand-held applications, and have enabled a new class active tremor cancellation devices. One recent example of such device is the Liftware spoon, developed by Verily Life Sciences subsidiary Lift Labs. Engines Experimental solid state heat engines, operating from the relatively small temperature differences in cold and hot water reservoirs, have been developed since the 1970s, including the Banks Engine, developed by Ridgway Banks. Crafts Sold in small round lengths for use in affixment-free bracelets. Heating and cooling German scientists at Saarland University have produced a prototype machine that transfers heat using a nickel-titanium ("nitinol") alloy wire wrapped around a rotating cylinder. As the cylinder rotates, heat is absorbed on one side and released on the other, as the wire changes from its "superelastic" state to its unloaded state. 
According to a 2019 article released by Saarland University, the efficiency by which the heat is transferred appears to be higher than that of a typical heat pump or air conditioner. Almost all air conditioners and heat pumps in use today employ vapor-compression of refrigerants. Over time, some of the refrigerants used in these systems leak into the atmosphere and contribute to global warming. If the new technology, which uses no refrigerants, proves economical and practical, it might offer a significant breakthrough in the effort to reduce climate change. Clamping Sytems Shape memory alloys (SMAs), such as nickel-titanium (Nitinol), are used in clamping systems due to their unique thermo-responsive behavior. The clamps made from SMA are used in the dentofacial surgery to heal mandibular fractures. Materials A variety of alloys exhibit the shape-memory effect. Alloying constituents can be adjusted to control the transformation temperatures of the SMA. Some common systems include the following (by no means an exhaustive list): Ag-Cd 44/49 at.% Cd Au-Cd 46.5/50 at.% Cd Co-Ni-Al Co-Ni-Ga Cu-Al-Be-X(X:Zr, B, Cr, Gd) Cu-Al-Ni 14/14.5 wt.% Al, 3/4.5 wt.% Ni Cu-Al-Ni-Hf Cu-Sn approx. 15 at.% Sn Cu-Zn 38.5/41.5 wt.% Zn Cu-Zn-X (X = Si, Al, Sn) Fe-Mn-Si Fe-Pt approx. 25 at.% Pt Mn-Cu 5/35 at.% Cu Ni-Fe-Ga Ni-Ti approx. 55–60 wt.% Ni Ni-Ti-Hf Ni-Ti-Pd Ni-Mn-Ga Ni-Mn-Ga-Cu Ni-Mn-Ga-Co Ti-Nb References External links Veritasium - How NASA Reinvented The Wheel Alloys Smart materials Metallurgy Nickel–titanium alloys
Shape-memory alloy
[ "Chemistry", "Materials_science", "Engineering" ]
6,087
[ "Metallurgy", "Materials science", "Alloys", "Chemical mixtures", "nan", "Smart materials" ]