id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
25,766,500 | https://en.wikipedia.org/wiki/Knemometry | Knemometry is the medical term for measuring the distance between the knee and the heel of a sitting child or adolescent using a technical device, the knemometer. Knemometric measurements have a measurement error of less than 160 μm (0.16 mm) and make it possible to document growth at intervals of a few weeks (short-term growth).
The device
Ignaz Maria Valk developed this technique in 1983 in Nijmegen, Netherlands. It has been used ever since for basic research on growth and in the treatment of growth disorders. Hermanussen introduced the mini-knemometer for accurate growth measurements of premature and newborn infants. Mini-knemometry determines lower leg length with an accuracy of better than 100 μm (0.1 mm), which makes it possible to detect growth over intervals as short as 24 hours. In an animal model, the technique was used to investigate the effects of steroids and growth hormone on short-term growth. These studies were an important prerequisite for improving growth therapies.
The measurement of short-term growth
The auxological term "short-term growth" denotes growth characteristics that become evident when measurements are performed at intervals of less than one year (e.g. monthly, weekly or even daily). Knemometry has mainly been used for this purpose, but very frequent measurements with conventional height-measuring devices have also been employed.
Short-term growth consists of small growth spurts (mini growth spurts). In the human neonate, these spurts occur at intervals of 2 to 10 days, and they reach maximum velocities of up to 0.2 mm per hour at the lower leg. Growth hormone therapies have been shown to significantly alter the dynamics of short-term growth. Catch-up growth (compensatory growth after periods of growth impairment due to illness, starvation and other unfavourable conditions) is characterised by repetitive series of broadened mini growth spurts.
Footnotes
Dimensional instruments
Games and sports introduced in 1983 | Knemometry | [
"Physics",
"Mathematics"
] | 409 | [
"Quantity",
"Dimensional instruments",
"Physical quantities",
"Size"
] |
25,772,247 | https://en.wikipedia.org/wiki/ZND%20detonation%20model | The ZND detonation model is a one-dimensional model for the process of detonation of an explosive. It was proposed during World War II independently by Yakov Zeldovich, John von Neumann, and Werner Döring, hence the name.
This model admits finite-rate chemical reactions and thus the process of detonation consists of the following stages. First, an infinitesimally thin shock wave compresses the explosive to a high pressure called the von Neumann spike. At the von Neumann spike point the explosive still remains unreacted. The spike marks the onset of the zone of exothermic chemical reaction, which finishes at the Chapman–Jouguet condition. After that, the detonation products expand backward.
In the reference frame in which the shock is stationary, the flow behind the shock is subsonic. Because of this, the energy released behind the shock can be transported acoustically forward to the shock and support it. For a self-propagating detonation, the shock relaxes to the speed given by the Chapman–Jouguet condition, at which the material at the end of the reaction zone moves at exactly the local sonic speed in this reference frame. In effect, all of the chemical energy is then harnessed to propagate the shock wave forward.
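The Chapman–Jouguet speed can be made concrete with the classical perfect-gas result. The sketch below is an illustrative aside, not part of the ZND literature: it assumes a calorically perfect gas with a single ratio of specific heats gamma, specific gas constant R, initial temperature T0 and heat release q per unit mass, and uses M_CJ = sqrt(1 + Q) + sqrt(Q) with Q = (gamma^2 - 1) q / (2 gamma R T0). Real mixtures require equilibrium thermochemistry, which lowers the predicted speed.

```python
import math

def cj_speed(q, gamma=1.4, R=287.0, T0=300.0):
    """Chapman-Jouguet detonation speed for a calorically perfect gas.

    Uses the classical one-gamma result M_CJ = sqrt(1 + Q) + sqrt(Q),
    with Q = (gamma^2 - 1) * q / (2 * gamma * R * T0).
    q  : heat release per unit mass of mixture [J/kg]
    R  : specific gas constant of the mixture [J/(kg K)]
    T0 : initial (unburned) temperature [K]
    """
    c0 = math.sqrt(gamma * R * T0)            # sound speed of the unburned gas
    Q = (gamma ** 2 - 1.0) * q / (2.0 * gamma * R * T0)
    M_cj = math.sqrt(1.0 + Q) + math.sqrt(Q)  # detonation branch of the CJ condition
    return M_cj * c0

# Illustrative, roughly hydrogen-air-like numbers; the one-gamma idealization
# overestimates the speed of real mixtures because dissociation is neglected.
print(f"D_CJ ~ {cj_speed(q=3.4e6, gamma=1.4, R=398.0, T0=300.0):.0f} m/s")
```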
However, in the 1960s, experiments revealed that gas-phase detonations were most often characterized by unsteady, three-dimensional structures, which can only in an averaged sense be predicted by one-dimensional steady theories. Indeed, such waves are quenched as their structure is destroyed. The Wood–Kirkwood detonation theory can correct for some of these limitations.
References
Further reading
Explosives
Explosives engineering
Combustion
Fluid dynamics | ZND detonation model | [
"Chemistry",
"Engineering"
] | 357 | [
"Explosives engineering",
"Chemical engineering",
"Combustion",
"Explosives",
"Explosions",
"Piping",
"Fluid dynamics"
] |
25,774,874 | https://en.wikipedia.org/wiki/Y%20alloy | Y alloy is a nickel-containing aluminium alloy. It was developed by the British National Physical Laboratory during World War I, in an attempt to find an aluminium alloy that would retain its strength at high temperatures.
Duralumin, an aluminium alloy containing 4% copper, was already known at this time. Its strength, and its previously unknown age-hardening behaviour, had made it a popular choice for zeppelins. Aircraft of the period were largely constructed of wood, but there was a need for an aluminium alloy suitable for making engines, particularly pistons, that would have the strength of duralumin but could retain it when in service at high temperatures for long periods.
The National Physical Laboratory began a series of experiments to study new aluminium alloys. Experimental series "Y" was successful, and gave its name to the new alloy. Like duralumin, this was a 4% copper alloy, but with the addition of 2% nickel and 1.5% magnesium. This addition of nickel was an innovation for aluminium alloys. These alloys are one of the three main groups of high-strength aluminium alloys, the nickel–aluminium alloys having the advantage of retaining strength at high temperatures.
The alloy was first used in cast form, but was soon used for forging as well. One of the most pressing needs was to develop reliable pistons for aircraft engines. The first experts at forging this alloy were Peter Hooker Limited of Walthamstow, better known as The British Gnôme and Le Rhône Engine Co. They license-built the Gnome engine and fitted it with pistons of Y alloy rather than their previous cast iron. These pistons were highly successful, although attempts to treat the alloy as a panacea suitable for all applications were less so; a Gnôme cylinder in Y alloy failed on its first revolution. Frank Halford used connecting rods of this alloy for his de Havilland Gipsy engine, but these other uses failed to impress Rod Banks.
Air Ministry Specification D.T.D 58A of April 1927 specified the composition and heat treatment of wrought Y alloy. The alloy became extremely important for pistons, and for engine components in general, but was little used for structural members of airframes.
In the late 1920s, further research on nickel-aluminium alloys gave rise to the successful Hiduminium or "R.R. alloys", developed by Rolls-Royce.
Alloy composition
Heat treatment
As with many aluminium alloys, Y alloy age hardens spontaneously at normal temperatures after solution heat treatment. The treatment consists of a solution heat treatment for 6 hours, followed by natural ageing for 7–10 days. The precipitation hardening that takes place during this ageing forms precipitates of both CuAl2 and NiAl3.
The times required depend on the grain structure of the alloy. Forged parts have the coarsest eutectic masses and so take the longest times. When cast, chill casting is favoured over sand casting as this gives a finer structure that is more amenable to heat treatment.
References
See also
2218 aluminium alloy
Aluminium alloys
Nickel–aluminium alloys
Aerospace materials
Aluminium–copper alloys
National Physical Laboratory (United Kingdom) | Y alloy | [
"Chemistry",
"Engineering"
] | 634 | [
"Aerospace engineering",
"Aerospace materials",
"Alloys",
"Aluminium alloys"
] |
25,776,973 | https://en.wikipedia.org/wiki/Ottoman%20units%20of%20measurement | The list of traditional Turkish units of measurement, a.k.a. Ottoman units of measurement, is given below.
History
The Ottoman Empire (1299–1923), the predecessor of modern Turkey, was one of the 17 signatories of the Metre Convention in 1875. For 58 years both the international and the traditional units were in use, but after the proclamation of the Turkish Republic the traditional units became obsolete. In 1931, by Act No. 1782, international units became compulsory, and the traditional units were banned from use starting 1 January 1933.
List of units
Length
Area
Volume
Weight
Volumetric flow
Time
The traditional calendar of the Ottoman Empire was, as in most Muslim countries, the Islamic calendar. Its era begins with the Hijra in 622 CE, and each year is calculated from the 12 Arabian lunar months, making it approximately eleven days shorter than a Gregorian solar year. In 1839, however, a second calendar was put into use for official matters. The new calendar, called the Rumi calendar, also counted its years from 622, but with an annual duration equal to a solar year from 1840 onwards. In modern Turkey, the Gregorian calendar was adopted as the legal calendar at the end of 1925, but the Islamic calendar is still used when discussing dates in an Islamic context.
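As a rough illustration only (not part of the article, and ignoring the month alignment of the Rumi year, which began on 1 March, as well as the Julian/Gregorian day offset), late-Ottoman Rumi years can be converted approximately to Gregorian years by adding 584, since the Rumi calendar kept the Hijri year count but switched to solar years:

```python
def rumi_to_gregorian_year(rumi_year: int) -> int:
    """Approximate Gregorian year for a late-Ottoman Rumi (Mali) year.

    Assumption (rough sketch only): for dates from roughly March through
    December, the Gregorian year is the Rumi year plus 584; January/February
    and the Julian-vs-Gregorian day offset are ignored.
    """
    return rumi_year + 584

# e.g. Rumi 1332 -> approximately 1916 for most of that year
print(rumi_to_gregorian_year(1332))
```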
See also
Measurement
Notes
References
Systems of units
Obsolete units of measurement
Turkey-related lists
Economy of the Ottoman Empire
Economic history of Turkey
Ottoman Empire-related lists
Human-based units of measurement
Science and technology in Turkey
Units of measurement by country | Ottoman units of measurement | [
"Mathematics"
] | 302 | [
"Obsolete units of measurement",
"Systems of units",
"Units of measurement by country",
"Quantity",
"Units of measurement"
] |
34,476,926 | https://en.wikipedia.org/wiki/Monod%20equation | The Monod equation is a mathematical model for the growth of microorganisms. It is named after Jacques Monod (1910–1976), the French biochemist and 1965 Nobel laureate in Physiology or Medicine, who proposed using an equation of this form to relate microbial growth rates in an aqueous environment to the concentration of a limiting nutrient. The Monod equation has the same form as the Michaelis–Menten equation, but differs in that it is empirical, while the latter is based on theoretical considerations.
The Monod equation is commonly used in environmental engineering. For example, it is used in the activated sludge model for sewage treatment.
Equation
The empirical Monod equation is
μ = μmax · [S] / (Ks + [S]),
where:
μ is the growth rate of a considered microorganism,
μmax is the maximum growth rate of this microorganism,
[S] is the concentration of the limiting substrate S for growth,
Ks is the "half-velocity constant"—the value of [S] when μ/μmax = 0.5.
μmax and Ks are empirical (experimental) coefficients to the Monod equation. They will differ between microorganism species and will also depend on the ambient environmental conditions, e.g., on the temperature, on the pH of the solution, and on the composition of the culture medium.
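A minimal numerical sketch of the equation follows; the parameter values are arbitrary illustrations, not data for any particular organism.

```python
def monod_growth_rate(S, mu_max, Ks):
    """Specific growth rate mu from the Monod equation.

    S      : concentration of the limiting substrate (e.g. mg/L)
    mu_max : maximum specific growth rate (e.g. 1/h)
    Ks     : half-velocity constant, the value of S at which mu = mu_max / 2
    """
    return mu_max * S / (Ks + S)

# Arbitrary example values: mu approaches mu_max as S grows far beyond Ks
for S in (1.0, 10.0, 100.0, 1000.0):
    print(S, round(monod_growth_rate(S, mu_max=0.5, Ks=10.0), 3))
```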
Application notes
The rate of substrate utilization is related to the specific growth rate as
rs = −μX / Y,
where
X is the total biomass (since the specific growth rate μ is normalized to the total biomass),
Y is the yield coefficient.
rs is negative by convention.
In some applications, several terms of the form [S] / (Ks + [S]) are multiplied together when more than one nutrient or growth factor has the potential to be limiting (e.g. organic matter and oxygen are both necessary to heterotrophic bacteria). When the yield coefficient, the ratio of the mass of microorganisms formed to the mass of substrate utilized, becomes very large, it signifies that there is a deficiency of substrate available for utilization.
Graphical determination of constants
As with the Michaelis–Menten equation, graphical methods may be used to fit the coefficients of the Monod equation:
Eadie–Hofstee diagram
Hanes–Woolf plot
Lineweaver–Burk plot
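As an illustration of how such linearisations work, the generic sketch below (not taken from any particular software) recovers μmax and Ks by a Lineweaver–Burk, or double-reciprocal, regression: the Monod equation rearranges to 1/μ = (Ks/μmax)(1/[S]) + 1/μmax, a straight line in 1/[S]. The data here are synthetic, for demonstration only.

```python
import numpy as np

# Synthetic "observations" generated from known coefficients (demonstration only).
mu_max_true, Ks_true = 0.5, 10.0
S = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
mu = mu_max_true * S / (Ks_true + S)

# Lineweaver-Burk: 1/mu = (Ks/mu_max) * (1/S) + 1/mu_max  (a straight line)
slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
mu_max_fit = 1.0 / intercept
Ks_fit = slope * mu_max_fit

print(f"mu_max ~ {mu_max_fit:.3f}, Ks ~ {Ks_fit:.3f}")  # recovers 0.5 and 10.0
```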
See also
Activated sludge model (uses the Monod equation to model bacterial growth and substrate utilization)
Bacterial growth
Hill equation (biochemistry)
Hill contribution to Langmuir equation
Langmuir adsorption model (equation with the same mathematical form)
Michaelis–Menten kinetics (equation with the same mathematical form)
Gompertz function
Victor Henri, who first wrote the general equation form in 1901
Von Bertalanffy function
References
Catalysis
Chemical kinetics
Environmental engineering
Enzyme kinetics
Ordinary differential equations
Sewerage | Monod equation | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 566 | [
"Catalysis",
"Chemical reaction engineering",
"Enzyme kinetics",
"Chemical engineering",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering",
"Chemical kinetics"
] |
34,484,142 | https://en.wikipedia.org/wiki/Non-intrusive%20stress%20measurement%20system | Non-intrusive stress measurement (system), or NSMS, is a method for determining dynamic blade stresses in rotating turbomachinery. NSMS is also known by the names "blade tip timing" (BTT), "arrival time analysis" (ATA), "blade vibration monitoring" (BVM), Beruehrungslose Schaufel Schwingungsmessung (BSSM), and "blade health monitoring" (BHM). NSMS uses externally mounted sensors to determine the passing times of turbomachinery blades. The passing times, after conversion to deflections, can be used to measure each blade's vibratory response characteristics, such as amplitude/stress, phase, frequency and damping. Since every blade is measured, stage effects such as flutter, blade mistuning, and nodal diameter can also be characterized.
The measurement method has been used successfully in all stages of the gas turbine engine (fan, compressor, and turbine) and on other turbo-machinery equipment ranging from turbochargers to rocket pumps. The ability to apply the technology to a given situation is dependent upon a sensor type that can meet the environmental requirements.
Method
A set of sensors is used to measure the arrival times of rotating blades. These arrival times, in comparison to a baseline, are used to determine blade deflections. The blade deflections over a number of revolutions and/or across a number of sensors can be used to determine vibratory characteristics. This information, in conjunction with a finite element model (FEM), can then be used to determine the dynamic stresses in a rotating part.
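The conversion from arrival times to deflections can be illustrated with a toy calculation. The single-probe setup, variable names and numbers below are illustrative assumptions, not a description of any particular NSMS system: a blade tip travelling at radius r on a rotor spinning at angular speed ω that arrives Δt later than its rigid-body (baseline) arrival time has been deflected circumferentially by roughly r·ω·Δt.

```python
import math

def tip_deflection(t_measured, t_expected, rpm, tip_radius):
    """Toy blade-tip-timing conversion: arrival-time difference -> tip deflection.

    t_measured, t_expected : blade arrival times at one probe [s]
    rpm                    : rotor speed [rev/min]
    tip_radius             : blade tip radius [m]
    Returns the circumferential tip deflection in metres (positive = late arrival).
    """
    omega = 2.0 * math.pi * rpm / 60.0        # angular speed [rad/s]
    return tip_radius * omega * (t_measured - t_expected)

# Illustrative numbers: a 2-microsecond late arrival at 10,000 rpm, 0.3 m tip radius
print(f"{tip_deflection(2e-6, 0.0, rpm=10_000, tip_radius=0.3) * 1e3:.3f} mm")
```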
Hardware
Any sensor or probe that can provide a precise indication of a blade's passing can be used in NSMS. Optical probes using fiber optics are commonly used to obtain a high level of spatial resolution. A light source, typically a laser, and fiber optics are used to direct a constant beam of light into the path of a rotating blade. As a blade passes the probe, light is reflected, captured by a fiber optic, and routed to a photodetector for conversion to an electrical signal.
Field-type sensors, such as eddy current, capacitive, and microwave sensors, are useful in harsh environments and for long-duration testing.
History
NSMS technology was patented as early as 1949, with many improvements patented since that date. Some recent US practitioners believe that NSMS was first developed in 1980 by major aircraft engine OEMs (and the US Air Force). It may be envisioned as a replacement technology for rotating strain gauges; however, it is currently used as an essential complement to strain gauges. The technology is widely used for turbomachinery development and high cycle fatigue (HCF) troubleshooting. Because the sensors observe the blades directly, are externally mounted and are inherently serviceable, NSMS has been used since at least 2006 as a long-term health monitoring method. More recently, multiple practitioners have become active in this application of the technology.
References
Mechanical tests | Non-intrusive stress measurement system | [
"Engineering"
] | 615 | [
"Mechanical tests",
"Mechanical engineering"
] |
31,490,211 | https://en.wikipedia.org/wiki/Maximum%20clade%20credibility%20tree | A maximum clade credibility tree is a tree that summarises the results of a Bayesian phylogenetic inference. Whereas a majority-rule tree combines the most common clades, and usually yields a tree that was not itself sampled in the analysis, the maximum-credibility method evaluates each of the sampled posterior trees. Each clade within a tree is given a score based on the fraction of times it appears in the set of sampled posterior trees, and the product of these scores is taken as the tree's score. The tree with the highest score is the maximum clade credibility tree.
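The scoring just described can be written down directly. In the generic sketch below (an illustrative encoding, not the implementation of any particular phylogenetics package), each sampled tree is represented simply as a set of clades, a clade being a frozenset of taxon labels; clade frequencies are counted across the sample, and the tree whose clades have the largest product of frequencies is returned. Log-frequencies are summed to avoid numerical underflow.

```python
import math
from collections import Counter

def max_clade_credibility(trees):
    """Pick the maximum clade credibility tree from a posterior sample.

    `trees` is a list of sampled trees, each given as a collection of clades,
    where a clade is a frozenset of taxon labels (an illustrative encoding).
    Returns (best_tree, log_score).
    """
    clade_counts = Counter(clade for tree in trees for clade in tree)
    n = len(trees)

    def log_score(tree):
        # product of per-clade posterior frequencies, computed in log space
        return sum(math.log(clade_counts[c] / n) for c in tree)

    best = max(trees, key=log_score)
    return best, log_score(best)

# Tiny demonstration with three sampled "trees" on taxa {A, B, C, D}
t1 = {frozenset("AB"), frozenset("CD"), frozenset("ABCD")}
t2 = {frozenset("AB"), frozenset("ABC"), frozenset("ABCD")}
t3 = {frozenset("AB"), frozenset("CD"), frozenset("ABCD")}
best, score = max_clade_credibility([t1, t2, t3])
print(sorted(map(sorted, best)), round(score, 3))
```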
References
Phylogenetics
Trees (data structures) | Maximum clade credibility tree | [
"Biology"
] | 127 | [
"Bioinformatics",
"Phylogenetics",
"Taxonomy (biology)"
] |
31,490,462 | https://en.wikipedia.org/wiki/Database%20of%20protein%20conformational%20diversity | The Database of protein conformational diversity (PCDB) is a database of the diversity of protein tertiary structures within protein domains, as determined by X-ray crystallography. Proteins are inherently flexible, and this database collects information on this subject for use in molecular research. It uses the CATH database as a source of structures for each protein and reports the range of structural differences, based on superposition of the structures, as a maximum RMSD. The database interface allows researchers to search for proteins within a given range of conformational flexibility, for example to identify highly flexible proteins. The database is run and maintained by a group of researchers based at the Universidad Nacional de Quilmes in Argentina.
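The "RMSD after superposition" used to compare conformations can be computed with the standard Kabsch algorithm. The sketch below is a generic implementation of that textbook procedure, not PCDB's own code: centre both coordinate sets, take the SVD of their covariance matrix, correct for a possible reflection, and evaluate the minimal residual directly from the singular values.

```python
import numpy as np

def min_rmsd(P, Q):
    """Minimum RMSD between two conformations of the same protein
    (P, Q: N x 3 arrays of matched atoms), after optimal superposition
    via the Kabsch algorithm."""
    P = P - P.mean(axis=0)                  # remove translation
    Q = Q - Q.mean(axis=0)
    U, sigma, Vt = np.linalg.svd(P.T @ Q)   # SVD of the covariance matrix
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # reflection check
    e0 = (P ** 2).sum() + (Q ** 2).sum()
    residual = max(e0 - 2.0 * (sigma[0] + sigma[1] + d * sigma[2]), 0.0)
    return np.sqrt(residual / len(P))

# Demonstration: a rigidly rotated and translated copy gives (numerically) zero RMSD
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
print(round(min_rmsd(P, P @ Rz.T + 5.0), 6))   # ~ 0.0
```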
See also
Crystallography
Protein structure
References
External links
Protein databases
Protein structure
Crystallographic databases | Database of protein conformational diversity | [
"Chemistry",
"Materials_science"
] | 158 | [
"Crystallographic databases",
"Crystallography",
"Protein structure",
"Structural biology"
] |
31,491,236 | https://en.wikipedia.org/wiki/Ventura%20County%20Air%20Pollution%20Control%20District | The Ventura County Air Pollution Control District (VCAPCD) protects public health and agriculture from the adverse effects of air pollution by identifying problems and developing a comprehensive program to achieve and maintain state and federal air quality standards. The Ventura County Board of Supervisors formed the district in response to the county's first air pollution study, which found the area had a severe air quality problem.
Currently, Ventura County does not meet state or federal standards for ozone or the state standard for particulate matter with a diameter up to 10 micrometers, or PM10.
Organizational structure
The district is governed by the Air Pollution Control Board. This 10-member board consists of the Ventura County Board of Supervisors and five elected officials representing Ventura County cities. The Air Pollution Control Board establishes policy, approves new rules and appoints the Air Pollution Control Officer and members of the Hearing Board, Advisory Committee and Clean Air Fund Advisory Committee. The Air Pollution Control Officer makes policy recommendations to the Air Pollution Control Board, implements the board's decisions and directs the staff.
Ventura County Air Pollution Control Board
The current members are Chair Vianey Lopez, Vice Chair Martha R. McQueen-Legohn, Liz Campos, Jeff Gorrell, Matt LaVere, Kelly Long, Albert Mendez, Janice S. Parvin, Andrew K. Whitman and John Zaragoza.
Divisions
The district's employees include engineers, inspectors, planners, technicians and support staff. They are grouped into the following divisions:
Compliance
Engineering
Fiscal and Administration
Information Systems
Monitoring
Planning, Rules and Incentives
Public Information
Hearing Board
The Air Pollution Control District Hearing Board is an independent quasi-judicial body established by state law to grant variances and uphold or overturn district decisions regarding denials of and the operating conditions of permits. It also may revoke permits to operate, issue orders of abatement, allow citizen appeals and settle disputes between the district and permit holders.
The Hearing Board consists of five members appointed by the Air Pollution Control Board for three-year terms.
Current members are Chair Mike Stubblefield, Valarie Grossman, Victor Kamhi, Dr. Lewis Kanter and Kathleen Paulson.
Advisory Committee
The members of the Air Pollution Control District Advisory Committee are appointed by the Air Pollution Control Board. The committee was created to ensure that private citizens, health and environmental organizations, government agencies and industry representatives have an in-depth forum to discuss district rule development and air pollution concerns. The committee reviews staff proposals for new and revised rules and makes recommendations to the Air Pollution Control Board.
Current members are Chair Sara Head, Vice Chair Paul Meehan, Donald Bird, Joan Burns, Edward Carloni, Steve Colome, Leslie Cornejo, Stephen Frank, Jan Hauser, Rainford Hunter, Mary Kennedy, Thomas Lucas, Kirsten Marble, Hugh McTernan, Richard Nick and Arecely Preciado.
Objectives
The district's main goals are to:
Attain federal and state ambient air quality standards.
Implement the requirements of the California Clean Air Act and the federal Clean Air Act.
Conduct public awareness and education programs.
Develop attainment plans for new U.S. Environmental Protection Agency (EPA) ambient air quality standards.
Minimize the socioeconomic impacts of clean air programs.
Implement California Air Resources Board regulations to reduce greenhouse gas emissions at landfills and at oil production and refrigeration facilities.
Major district programs include:
Air Quality Management Plan development and implementation.
Permit processing and renewal.
Enforcement of district rules and applicable state and federal laws.
Air quality and meteorological monitoring at five locations throughout the county.
Air quality impact analyses of sources and projects.
Air quality and meteorological forecasting.
Declaring agricultural burn days based on forecasts.
Rule development.
Air pollution emissions inventory.
Air toxics inventory and risk assessment.
Employer transportation outreach.
Incentives for emission-reduction projects.
Public information and education.
Implementation of delegated state climate change measures.
Community Air Protection implementation.
See also
California Air Resources Board
California Code of Regulations
California Environmental Protection Agency
Ecology of California
Emission standards
Greenhouse gas
Greenhouse gas emissions by the United States
List of California Air Districts
NAAQS (National Ambient Air Quality Standards)
NESHAP (National Emissions Standards for Hazardous Air Pollutants)
Pollution in California
Public Smog
Timeline of major US environmental and occupational health regulation
US Emission standard
References
External links
Official Ventura County Air Pollution Control District—VCAPCD website
California Local Air District Directory
Air pollution in California
Government of Ventura County, California
Environmental agencies in the United States
Environmental agencies of country subdivisions
Atmospheric dispersion modeling
Special districts of California
Southern California | Ventura County Air Pollution Control District | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 908 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
31,494,514 | https://en.wikipedia.org/wiki/Organocerium%20chemistry | Organocerium chemistry is the science of organometallic compounds that contain one or more chemical bond between carbon and cerium. These compounds comprise a subset of the organolanthanides. Most organocerium compounds feature Ce(III) but some Ce(IV) derivatives are known.
Alkyl derivatives
Simple alkylcerium reagents are well known and are commonly written as RCeCl2, although their structures are far more complex. Furthermore, the solvent appears to alter the solution structure of the complex, with differences noted between reagents prepared in diethyl ether and in tetrahydrofuran. There is evidence that the parent chloride forms a polymeric species in THF solution, of the form [Ce(μ-Cl)2(H2O)(THF)2]n, but whether this type of polymer persists once the organometallic reagent is formed is unknown.
Cyclopentadienyl derivatives
Cyclopentadienyl derivatives of Ce are particularly well characterized; hundreds have been examined by X-ray crystallography. Some of the best characterized organocerium(IV) compounds also feature cyclopentadienyl ligands.
Applications to organic synthesis
As reagents in organic chemistry, organocerium compounds are typically prepared in situ by treatment of cerium trichloride with an organolithium or Grignard reagent. Reagents derived from alkyl, alkynyl, and alkenyl organometallics, as well as from enolates, have been described. The most common cerium source for this purpose is cerium(III) chloride, which can be obtained in anhydrous form by dehydration of the commercially available heptahydrate. Precomplexation with tetrahydrofuran is important for the success of the transmetallation, with most procedures involving "vigorous stirring for a period of no less than 2 hours". The structures usually depicted for organocerium reagents are, however, highly simplified.
These reagents add 1,2 to conjugated ketones and aldehydes. This preference for direct addition is attributed to the oxophilicity of the cerium reagent, which activates the carbonyl for nucleophilic attack.
Reactions
Organocerium reagents are used almost exclusively for addition reactions, in the same vein as organolithium and Grignard reagents. They are highly nucleophilic, allowing additions to imines in the absence of additional Lewis acid catalysts, which makes them useful for substrates on which typical conditions fail.
Despite this high reactivity, organocerium reagents are almost entirely non-basic, tolerating the presence of free alcohols and amines as well as enolizable α-protons.
They undergo 1,2-addition in reactions with conjugated electrophiles. At the same time, organocerium reagents can be used to synthesize ketones from acyl compounds without over-addition, as seen with organocuprates.
Organocerium reagents have been employed in a number of total syntheses. Shown below is a key coupling step in the total synthesis of roseophilin, a potent antitumor antibiotic.
See also
Luche reduction
References
Cerium | Organocerium chemistry | [
"Chemistry"
] | 725 | [
"Organometallic chemistry"
] |
22,832,517 | https://en.wikipedia.org/wiki/Crystal%20structure%20prediction | Crystal structure prediction (CSP) is the calculation of the crystal structures of solids from first principles. Reliable prediction of the crystal structure of a compound, based only on its composition, has been a goal of the physical sciences since the 1950s. Computational methods employed include simulated annealing, evolutionary algorithms, distributed multipole analysis, random sampling, basin-hopping, data mining, density functional theory and molecular mechanics.
History
The crystal structures of simple ionic solids have long been rationalised in terms of Pauling's rules, first set out in 1929 by Linus Pauling. For metals and semiconductors one has different rules involving valence electron concentration. However, prediction and rationalization are rather different things. Most commonly, the term crystal structure prediction means a search for the minimum-energy arrangement of its constituent atoms (or, for molecular crystals, of its molecules) in space. The problem has two facets: combinatorics (the "search phase space", in practice most acute for inorganic crystals), and energetics (or "stability ranking", most acute for molecular organic crystals).
For complex non-molecular crystals (where the "search problem" is most acute), major recent advances have been the development of the Martonak version of metadynamics, the Oganov-Glass evolutionary algorithm USPEX, and first principles random search. The latter are capable of solving the global optimization problem with up to around a hundred degrees of freedom, while the approach of metadynamics is to reduce all structural variables to a handful of "slow" collective variables (which often works).
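The "search problem" can be illustrated with a toy version of random structure searching. The sketch below is a generic illustration using scipy on a small Lennard-Jones cluster, not the AIRSS, USPEX or metadynamics codes themselves: generate random atomic positions, relax each guess with a local optimiser, and keep the lowest-energy structure found.

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(x, n_atoms):
    """Total Lennard-Jones energy (epsilon = sigma = 1) for n_atoms atoms;
    x is the flattened (n_atoms * 3,) coordinate vector."""
    pos = x.reshape(n_atoms, 3)
    e = 0.0
    for i in range(n_atoms):
        for j in range(i + 1, n_atoms):
            r2 = np.sum((pos[i] - pos[j]) ** 2)
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 ** 2 - inv6)
    return e

def random_search(n_atoms=6, n_trials=30, box=2.5, seed=0):
    """Toy random structure search: random guesses followed by local relaxation."""
    rng = np.random.default_rng(seed)
    best_e, best_x = np.inf, None
    for _ in range(n_trials):
        x0 = rng.uniform(-box / 2, box / 2, size=n_atoms * 3)
        res = minimize(lj_energy, x0, args=(n_atoms,), method="L-BFGS-B")
        if res.fun < best_e:
            best_e, best_x = res.fun, res.x
    return best_e, best_x.reshape(n_atoms, 3)

e_min, structure = random_search()
# The known global minimum of the 6-atom Lennard-Jones cluster is about -12.712.
print(f"lowest LJ energy found for 6 atoms: {e_min:.3f}")
```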
Molecular crystals
Predicting organic crystal structures is important in academic and industrial science, particularly for pharmaceuticals and pigments, where understanding polymorphism is beneficial. The crystal structures of molecular substances, particularly organic compounds, are very hard to predict and rank in order of stability. Intermolecular interactions are relatively weak, non-directional and long-ranged. This results in typical lattice energy and free energy differences between polymorphs that are often only a few kJ/mol, very rarely exceeding 10 kJ/mol. Crystal structure prediction methods often locate many possible structures within this small energy range. These small energy differences are challenging to predict reliably without excessive computational effort.
Since 2007, significant progress has been made in the CSP of small organic molecules, with several different methods proving effective. The most widely discussed method first ranks the energies of all possible crystal structures using a customised MM force field, and finishes by using a dispersion-corrected DFT step to estimate the lattice energy and stability of each short-listed candidate structure. More recent efforts to predict crystal structures have focused on estimating crystal free energy by including the effects of temperature and entropy in organic crystals using vibrational analysis or molecular dynamics.
Crystal structure prediction software
The following codes can predict stable and metastable structures given chemical composition and external conditions (pressure, temperature):
AIRSS - Ab Initio Random Structure Searching based on stochastic sampling of configuration space and with the possibility to use symmetry, chemical, and physical constraints. Has been used to study bulk crystals, low-dimensional materials, clusters, point defects, and interfaces. Released under the GPL2 licence. Regularly updated.
CALYPSO - The Crystal structure AnaLYsis by Particle Swarm Optimization, implementing the particle swarm optimization (PSO) algorithm to identify/determine the crystal structure. As with other codes, knowledge of the structure can be used to design multi-functional materials (e.g., superconductive, thermoelectric, superhard, and energetic materials). Free for academic researchers. Regularly updated.
GASP - predicts the structure and composition of stable and metastable phases of crystals, molecules, atomic clusters and defects from first-principles. Can be interfaced to other energy codes including: VASP, LAMMPS, MOPAC, Gulp, JDFTx etc. Free to use and regularly updated.
GRACE - for predicting molecular crystal structures, especially for the pharmaceutical industry. Based on dispersion-corrected density functional theory. Commercial software under active development.
GULP - Monte Carlo and genetic algorithms for atomic crystals. GULP is based on classical force fields and works with many types of force fields. Free for academic researchers. Regularly updated.
USPEX - multi-method software that includes evolutionary algorithms and other methods (random sampling, evolutionary metadynamics, improved PSO, variable-cell NEB method and transition path sampling method for phase transition mechanisms). Can be used for atomic and molecular crystals; bulk crystals, nanoparticles, polymers, surface reconstructions, interfaces; can optimize the energy or other physical properties. In addition to finding the structure for a given composition, can identify all stable compositions in a multicomponent variable-composition system and perform simultaneous optimisation of several properties. Free for academic researchers. Used by >4500 researchers. Regularly updated.
XtalOpt - open source code implementing an evolutionary algorithm.
FLAME - open source code implementing the minima hopping method.
Further reading
References
Crystallography
Computational chemistry
Theoretical chemistry
Solid-state chemistry | Crystal structure prediction | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,041 | [
"Materials science",
"Theoretical chemistry",
"Crystallography",
"Computational chemistry",
"Condensed matter physics",
"nan",
"Solid-state chemistry"
] |
22,833,268 | https://en.wikipedia.org/wiki/Geometric%20design | Geometrical design (GD) is a branch of computational geometry. It deals with the construction and representation of free-form curves, surfaces, or volumes and is closely related to geometric modeling. Core problems are curve and surface modelling and representation. GD studies especially the construction and manipulation of curves and surfaces given by a set of points using polynomial, rational, piecewise polynomial, or piecewise rational methods. The most important instruments here are parametric curves and parametric surfaces, such as Bézier curves, spline curves and surfaces. An important non-parametric approach is the level-set method.
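As a small concrete example of the parametric-curve machinery mentioned above, the following sketch evaluates a Bézier curve with de Casteljau's algorithm, a standard construction shown here generically rather than as any particular CAD system's API: the curve point at parameter t is obtained by repeated linear interpolation of the control points.

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated
    linear interpolation of the control points (de Casteljau's algorithm)."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]   # one round of interpolation
    return pts[0]

# A cubic Bezier curve in the plane, sampled at a few parameter values
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
for t in (0.0, 0.25, 0.5, 1.0):
    print(t, de_casteljau(ctrl, t))
```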
Application areas include the shipbuilding, aircraft, and automotive industries, as well as architectural design. The modern ubiquity and power of computers mean that even perfume bottles and shampoo dispensers are designed using techniques unheard of by shipbuilders of the 1960s.
Geometric models can be built for objects of any dimension in any geometric space. Both 2D and 3D geometric models are extensively used in computer graphics. 2D models are important in computer typography and technical drawing. 3D models are central to computer-aided design and manufacturing, and many applied technical fields such as geology and medical image processing.
Geometric models are usually distinguished from procedural and object-oriented models, which define the shape implicitly by an algorithm. They are also contrasted with digital images and volumetric models; and with mathematical models such as the zero set of an arbitrary polynomial. However, the distinction is often blurred: for instance, geometric shapes can be represented by objects; a digital image can be interpreted as a collection of colored squares; and geometric shapes such as circles are defined by implicit mathematical equations. Also, the modeling of fractal objects often requires a combination of geometric and procedural techniques.
Geometric problems originating in architecture can lead to interesting research and results in geometry processing, computer-aided geometric design, and discrete differential geometry.
In architecture, geometric design is associated with the pioneering explorations of Chuck Hoberman into transformational geometry as a design idiom, and applications of this design idiom within the domain of architectural geometry.
See also
Architectural geometry
Computational topology
CAD/CAM/CAE
Digital geometry
Geometric design of roads
List of interactive geometry software
Parametric curves
Parametric surfaces
Solid modeling
Space partitioning
Wikiversity:Topic:Computational geometry
Progressive-iterative approximation method
References
External links
Evolute Research and Consulting
Computer Aided Geometric Design
Geometric algorithms
Computational science
Computer-aided design
Applied geometry | Geometric design | [
"Mathematics",
"Engineering"
] | 496 | [
"Computer-aided design",
"Design engineering",
"Applied mathematics",
"Computational science",
"Geometry",
"Applied geometry"
] |
22,833,956 | https://en.wikipedia.org/wiki/Molecular%20models%20of%20DNA | Molecular models of DNA structures are representations of the molecular geometry and topology of deoxyribonucleic acid (DNA) molecules using one of several means, with the aim of simplifying and presenting the essential, physical and chemical, properties of DNA molecular structures either in vivo or in vitro. These representations include closely packed spheres (CPK models) made of plastic, metal wires for skeletal models, graphic computations and animations by computers, artistic rendering. Computer molecular models also allow animations and molecular dynamics simulations that are very important for understanding how DNA functions in vivo.
The more advanced, computer-based molecular models of DNA involve molecular dynamics simulations and quantum-mechanical computations of vibro-rotations, delocalized molecular orbitals (MOs), electric dipole moments, hydrogen bonding, and so on. DNA molecular dynamics modeling involves simulating changes in DNA molecular geometry and topology over time as a result of both intra- and inter-molecular interactions. Whereas physical models such as closely packed spheres (CPK models) or metal-wire skeletal models are useful representations of static DNA structures, their usefulness is very limited for representing complex DNA dynamics, which require computer-based animation and molecular dynamics simulation.
History
From the very early stages of structural studies of DNA by X-ray diffraction and biochemical means, molecular models such as the Watson-Crick nucleic acid double helix model were successfully employed to solve the 'puzzle' of DNA structure, and also find how the latter relates to its key functions in living cells. The first high quality X-ray diffraction patterns
of A-DNA were reported by Rosalind Franklin and Raymond Gosling in 1953. Rosalind Franklin made the critical observation that DNA exists in two distinct forms, A and B, and produced the sharpest pictures of both through X-ray diffraction technique. The first calculations of the Fourier transform of an atomic helix were reported one year earlier by Cochran, Crick and Vand, and were followed in 1953 by the computation of the Fourier transform of a coiled-coil by Crick.
Structural information is generated from X-ray diffraction studies of oriented DNA fibers with the help of molecular models of DNA that are combined with crystallographic and mathematical analysis of the X-ray patterns.
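The "Fourier transform of an atomic helix" computed by Cochran, Crick and Vand has a compact form: on layer line n, the scattering amplitude of a continuous helix of radius r is proportional to the Bessel function J_n(2πrR), where R is the radial coordinate in reciprocal space. The sketch below simply evaluates the squared layer-line intensities with scipy; the chosen radius and numbers are illustrative, not a reproduction of the original calculation.

```python
import numpy as np
from scipy.special import jv   # Bessel function of the first kind, J_n

def layer_line_intensity(n, R, helix_radius=1.0):
    """Relative intensity on layer line n of a continuous helix
    (Cochran-Crick-Vand): proportional to J_n(2*pi*R*r)^2,
    with R the radial coordinate in reciprocal space."""
    return jv(n, 2.0 * np.pi * R * helix_radius) ** 2

# The first maximum of J_n moves outwards as n increases, which is what
# produces the characteristic X-shaped pattern of helical diffraction.
R = np.linspace(0.0, 2.0, 400)
for n in range(4):
    R_peak = R[np.argmax(layer_line_intensity(n, R))]
    print(f"layer line {n}: strongest intensity near R = {R_peak:.2f}")
```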
The first reports of a double helix molecular model of B-DNA structure were made by James Watson and Francis Crick in 1953. That same year, Maurice F. Wilkins,
A. Stokes and H.R. Wilson, reported the first X-ray patterns
of in vivo B-DNA in partially oriented salmon sperm heads.
The development of the first correct double helix molecular model of DNA by Crick and Watson may not have been possible without the biochemical evidence for the nucleotide base-pairing ([A---T]; [C---G]), or Chargaff's rules. Although such initial studies of DNA structures with the help of molecular models were essentially static, their consequences for explaining the in vivo functions of DNA were significant in the areas of protein biosynthesis and the quasi-universality of the genetic code. Epigenetic transformation studies of DNA in vivo were however much slower to develop despite their importance for embryology, morphogenesis and cancer research. Such chemical dynamics and biochemical reactions of DNA are much more complex than the molecular dynamics of DNA physical interactions with water, ions and proteins/enzymes in living cells.
Importance
A long-standing dynamic problem is how DNA "self-replication" takes place in living cells, a process that involves transient uncoiling of supercoiled DNA fibers. Although DNA consists of relatively rigid, very large elongated biopolymer molecules called fibers or chains (made of repeating nucleotide units of four basic types, attached to deoxyribose and phosphate groups), its molecular structure in vivo undergoes dynamic configuration changes that involve dynamically attached water molecules and ions. Supercoiling, packing with histones in chromosome structures, and other such supramolecular aspects also involve in vivo DNA topology, which is even more complex than DNA molecular geometry, making molecular modeling of DNA an especially challenging problem for both molecular biologists and biotechnologists. Like other large molecules and biopolymers, DNA often exists in multiple stable geometries (that is, it exhibits conformational isomerism) and configurational quantum states which are close to each other in energy on the potential energy surface of the DNA molecule.
Such varying molecular geometries can also be computed, at least in principle, by employing ab initio quantum chemistry methods that can attain high accuracy for small molecules, although claims that acceptable accuracy can also be achieved for polynucleotides and DNA conformations were recently made on the basis of vibrational circular dichroism (VCD) spectral data. Such quantum geometries define an important class of ab initio molecular models of DNA whose exploration has barely started, especially in relation to results obtained by VCD in solution. More detailed comparisons with such ab initio quantum computations are in principle obtainable through 2D-FT NMR spectroscopy and relaxation studies of polynucleotide solutions or specifically labeled DNA, for example with deuterium labels.
In an interesting twist of roles, the DNA molecule was proposed to be used for quantum computing via DNA. Both DNA nanostructures and DNA computing biochips have been built.
Fundamental concepts
The chemical structure of DNA alone is insufficient to understand the complexity of its 3D structures. In contrast, animated molecular models, such as wire (skeletal) models, allow one to visually explore the three-dimensional (3D) structure of DNA; another type of model is the space-filling, or CPK, model of the DNA double helix.
The hydrogen-bonding dynamics and proton exchange differ by many orders of magnitude between the two systems of fully hydrated DNA and of water molecules in ice. Thus, DNA dynamics is complex, involving nanosecond and several-tens-of-picosecond time scales, whereas the hydrogen-bonding dynamics of liquid water is on the picosecond time scale, and proton exchange in ice is on the millisecond time scale. The proton exchange rates in DNA and attached proteins may vary from picoseconds to nanoseconds, minutes or years, depending on the exact locations of the exchanged protons in the large biopolymers.
A simple harmonic oscillator 'vibration' is only an oversimplified dynamic representation of the longitudinal vibrations of the DNA intertwined helices which were found to be anharmonic rather than harmonic as often assumed in quantum dynamic simulations of DNA.
DNA structure
The structure of DNA shows a variety of forms, both double-stranded and single-stranded. The mechanical properties of DNA, which are directly related to its structure, are a significant problem for cells. Every process which binds or reads DNA is able to use or modify the mechanical properties of DNA for purposes of recognition, packaging and modification. The extreme length (a chromosome may contain a 10 cm long DNA strand), relative rigidity and helical structure of DNA has led to the evolution of histones and of enzymes such as topoisomerases and helicases to manage a cell's DNA. The properties of DNA are closely related to its molecular structure and sequence, particularly the weakness of the hydrogen bonds and electronic interactions that hold strands of DNA together compared to the strength of the bonds within each strand.
Experimental methods which can directly measure the mechanical properties of DNA are relatively new, and high-resolution visualization in solution is often difficult. Nevertheless, scientists have uncovered large amount of data on the mechanical properties of this polymer, and the implications of DNA's mechanical properties on cellular processes is a topic of active current research.
The DNA found in many cells can be macroscopic in length: a few centimetres long for each human chromosome. Consequently, cells must compact or package DNA to carry it within them. In eukaryotes this is carried by spool-like proteins named histones, around which DNA winds. It is the further compaction of this DNA-protein complex which produces the well known mitotic eukaryotic chromosomes.
In the late 1970s, alternate non-helical models of DNA structure were briefly considered as a potential solution to problems in DNA replication in plasmids and chromatin. However, the models were set aside in favor of the double-helical model due to subsequent experimental advances such as X-ray crystallography of DNA duplexes, and later the nucleosome core particle, and the discovery of topoisomerases. Such non-double-helical models are not currently accepted by the mainstream scientific community.
DNA structure determination using molecular modeling and DNA X-ray patterns
After DNA has been separated and purified by standard biochemical methods, one has a hydrated DNA sample. Structural information is then generated from X-ray diffraction studies of oriented DNA fibers drawn from this sample, with the help of molecular models of DNA combined with crystallographic and mathematical analysis of the X-ray patterns.
Paracrystalline lattice models of B-DNA structures
A paracrystalline lattice, or paracrystal, is a molecular or atomic lattice with a significant amount (e.g., more than a few percent) of partial disordering of the molecular arrangement. Limiting cases of the paracrystal model are nanostructures, such as glasses and liquids, that may possess only local ordering and no global order. Silica glass is a simple example of a paracrystalline lattice.
Liquid crystals also have paracrystalline rather than crystalline structures.
Highly hydrated B-DNA occurs naturally in living cells in such a paracrystalline state, which is a dynamic one despite the relatively rigid DNA double helix stabilized by parallel hydrogen bonds between the nucleotide base-pairs in the two complementary, helical DNA chains (see figures). For simplicity most DNA molecular models omit both water and ions dynamically bound to B-DNA, and are thus less useful for understanding the dynamic behaviors of B-DNA in vivo. The physical and mathematical analysis of X-ray and spectroscopic data for paracrystalline B-DNA is thus far more complex than that of crystalline, A-DNA X-ray diffraction patterns. The paracrystal model is also important for DNA technological applications such as DNA nanotechnology. Novel methods that combine X-ray diffraction of DNA with X-ray microscopy in hydrated living cells are now also being developed.
Genomic and biotechnology applications of DNA molecular modeling
DNA molecular modeling has various uses in genomics and biotechnology, with research applications ranging from DNA repair to PCR and DNA nanostructures; two-dimensional DNA junction arrays, for example, have been visualized by atomic force microscopy. Examples include computer molecular models of molecules as varied as RNA polymerase, an E. coli bacterial DNA primase template suggesting very complex dynamics at the interfaces between the enzymes and the DNA template, and molecular models of the mutagenic chemical interaction of potent carcinogen molecules with DNA.
Technological application include a DNA biochip and DNA nanostructures designed for DNA computing and other dynamic applications of DNA nanotechnology.
Self-assembled DNA nanostructures have been demonstrated in which a DNA "tile" structure consists of four branched junctions oriented at 90° angles; each tile is built from nine DNA oligonucleotides, and such tiles serve as the primary "building block" for the assembly of DNA nanogrids that have been imaged by atomic force microscopy.
Quadruplex DNA may be involved in certain cancers.
See also
References
Further reading
I. C. Baianu, P. R. Lozano, V. I. Prisecaru and H. C. Lin. Applications of Novel Techniques to Health Foods, Medical and Agricultural Biotechnology. June 2004. q-bio/0406047.
F. Bessel, Untersuchung des Theils der planetarischen Störungen, Berlin Abhandlungen (1824), article 14.
Sir Lawrence Bragg, FRS. The Crystalline State: A General Survey. London: G. Bell and Sons, Ltd., vols. 1 and 2, 1966. 2024 pages.
Cantor, C. R. and Schimmel, P. R. Biophysical Chemistry, Parts I and II. San Francisco: W. H. Freeman and Co., 1980. 1,800 pages.
Voet, D. and Voet, J. G. Biochemistry, 2nd Edn. New York, Toronto, Singapore: John Wiley & Sons, Inc., 1995. 1361 pages.
Watson, G. N. A Treatise on the Theory of Bessel Functions. Cambridge University Press, 1995.
Watson, James D. Molecular Biology of the Gene. New York and Amsterdam: W. A. Benjamin, Inc., 1965. 494 pages.
Wentworth, W. E. Physical Chemistry: A Short Course. Malden (Mass.): Blackwell Science, Inc., 2000.
Herbert R. Wilson, FRS. Diffraction of X-rays by Proteins, Nucleic Acids and Viruses. London: Edward Arnold (Publishers) Ltd., 1966.
Kurt Wüthrich. NMR of Proteins and Nucleic Acids. New York, Brisbane, Chichester, Toronto, Singapore: J. Wiley & Sons, 1986. 292 pages.
External links
DNA the Double Helix Game From the official Nobel Prize website
MDDNA: Structural Bioinformatics of DNA
Double Helix 1953–2003 National Centre for Biotechnology Education
DNAlive: a web interface to compute DNA physical properties. Also allows cross-linking of the results with the UCSC Genome browser and DNA dynamics.
Further details of mathematical and molecular analysis of DNA structure based on X-ray data
Bessel functions corresponding to Fourier transforms of atomic or molecular helices.
overview of STM/AFM/SNOM principles with educative videos
Databases for DNA molecular models and sequences
X-ray diffraction
NDB ID: UD0017 Database
X-ray Atlas -database
PDB files of coordinates for nucleic acid structures from X-ray diffraction by NA (incl. DNA) crystals
Structure factors downloadable files in CIF format
Neutron scattering
ISIS neutron source: ISIS pulsed neutron source:A world centre for science with neutrons & muons at Harwell, near Oxford, UK.
X-ray microscopy
Electron microscopy
DNA under electron microscope
NMR databases
NMR Atlas--database
mmcif downloadable coordinate files of nucleic acids in solution from 2D-FT NMR data
NMR constraints files for NAs in PDB format
Genomic and structural databases
CBS Genome Atlas Database — contains examples of base skews.
The Z curve database of genomes — a 3-dimensional visualization and analysis tool of genomes.
DNA and other nucleic acids' molecular models: Coordinate files of nucleic acids molecular structure models in PDB and CIF formats
Atomic force microscopy
How SPM Works
SPM image gallery: AFM STM SEM MFM NSOM, more
DNA
Molecular geometry
Molecular biology
Molecular genetics
Genomics | Molecular models of DNA | [
"Physics",
"Chemistry",
"Biology"
] | 3,306 | [
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Molecular genetics",
"Molecular biology",
"Biochemistry",
"Matter"
] |
22,835,055 | https://en.wikipedia.org/wiki/Protein%20dynamics | In molecular biology, proteins are generally thought to adopt unique structures determined by their amino acid sequences. However, proteins are not strictly static objects, but rather populate ensembles of (sometimes similar) conformations. Transitions between these states occur on a variety of length scales (tenths of angstroms to nm) and time scales (ns to s),
and have been linked to functionally relevant phenomena such as allosteric signaling and enzyme catalysis.
The study of protein dynamics is most directly concerned with the transitions between these states, but can also involve the nature and equilibrium populations of the states themselves.
These two perspectives—kinetics and thermodynamics, respectively—can be conceptually synthesized in an "energy landscape" paradigm:
highly populated states and the kinetics of transitions between them can be described by the depths of energy wells and the heights of energy barriers, respectively.
Local flexibility: atoms and residues
Portions of protein structures often deviate from the equilibrium state.
Some such excursions are harmonic, such as stochastic fluctuations of chemical bonds and bond angles.
Others are anharmonic, such as sidechains that jump between separate discrete energy minima, or rotamers.
Evidence for local flexibility is often obtained from NMR spectroscopy. Flexible and potentially disordered regions of a protein can be detected using the random coil index. Flexibility in folded proteins can be identified by analyzing the spin relaxation of individual atoms in the protein. Flexibility can also be observed in very high-resolution electron density maps produced by X-ray crystallography,
particularly when diffraction data is collected at room temperature instead of the traditional cryogenic temperature (typically near 100 K). Information on the frequency distribution and dynamics of local protein flexibility can be obtained using Raman and optical Kerr-effect spectroscopy as well as anisotropic microspectroscopy in the terahertz frequency domain.
Regional flexibility: intra-domain multi-residue coupling
Many residues are in close spatial proximity in protein structures. This is true for most residues that are contiguous in the primary sequence, but also for many that are distal in sequence yet are brought into contact in the final folded structure. Because of this proximity, these residue's energy landscapes become coupled based on various biophysical phenomena such as hydrogen bonds, ionic bonds, and van der Waals interactions (see figure).
Transitions between states for such sets of residues therefore become correlated.
This is perhaps most obvious for surface-exposed loops, which often shift collectively to adopt different conformations in different crystal structures (see figure). However, coupled conformational heterogeneity is also sometimes evident in secondary structure. For example, consecutive residues and residues offset by 4 in the primary sequence often interact in α helices. Also, residues offset by 2 in the primary sequence point their sidechains toward the same face of β sheets and are close enough to interact sterically, as are residues on adjacent strands of the same β sheet. Some of these conformational changes are induced by post-translational modifications in protein structure, such as phosphorylation and methylation.
When these coupled residues form pathways linking functionally important parts of a protein,
they may participate in allosteric signaling.
For example, when a molecule of oxygen binds to one subunit of the hemoglobin tetramer,
that information is allosterically propagated to the other three subunits, thereby enhancing their affinity for oxygen.
In this case, the coupled flexibility in hemoglobin allows for cooperative oxygen binding,
which is physiologically useful because it allows rapid oxygen loading in lung tissue and rapid oxygen unloading in oxygen-deprived tissues (e.g. muscle).
Global flexibility: multiple domains
The presence of multiple domains in proteins gives rise to a great deal of flexibility and mobility, leading to protein domain dynamics.
Domain motions can be inferred by comparing different structures of a protein (as in Database of Molecular Motions), or they can be directly observed using spectra
measured by neutron spin echo spectroscopy.
They can also be suggested by sampling in extensive molecular dynamics trajectories and principal component analysis. Domain motions are important for:
ABC transporters
adherens junction
catalysis
cellular locomotion and motor proteins
formation of protein complexes
ion channels
mechanoreceptors and mechanotransduction
regulatory activity
transport of metabolites across cell membranes
One of the largest observed domain motions is the 'swivelling' mechanism in pyruvate phosphate dikinase. The phosphoinositide domain swivels between two states in order to bring a phosphate group from the active site of the nucleotide binding domain to that of the phosphoenolpyruvate/pyruvate domain. The phosphate group is moved over a distance of 45 Å, involving a domain motion of about 100 degrees around a single residue. In enzymes, the closure of one domain onto another captures a substrate by an induced fit, allowing the reaction to take place in a controlled way. A detailed analysis by Gerstein led to the classification of two basic types of domain motion: hinge and shear. Only a relatively small portion of the chain, namely the inter-domain linker and side chains, undergoes significant conformational changes upon domain rearrangement.
Hinge motions
A study by Hayward found that the termini of α-helices and β-sheets form hinges in a large number of cases. Many hinges were found to involve two secondary structure elements acting like hinges of a door, allowing an opening and closing motion to occur. This can arise when two neighbouring strands within a β-sheet situated in one domain diverge apart as they join the other domain. The two resulting termini then form the bending regions between the two domains. α-helices that preserve their hydrogen bonding network when bent are found to behave as mechanical hinges, storing "elastic energy" that drives the closure of domains for rapid capture of a substrate. Khade et al. worked on prediction of the hinges in any conformation and further built an elastic network model called hdANM that can model those motions.
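To make the elastic-network idea concrete, here is a minimal sketch of a conventional anisotropic network model (ANM), not hdANM itself, whose formulation differs; the cutoff, spring constant and coordinates are assumed values used only for illustration.

```python
import numpy as np

def anm_hessian(coords, cutoff=15.0, gamma=1.0):
    """Hessian of a conventional anisotropic network model: Cα atoms within
    the cutoff distance are joined by identical harmonic springs."""
    n = len(coords)
    hess = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -gamma * np.outer(d, d) / r2
            hess[3*i:3*i+3, 3*j:3*j+3] = block
            hess[3*j:3*j+3, 3*i:3*i+3] = block
            hess[3*i:3*i+3, 3*i:3*i+3] -= block
            hess[3*j:3*j+3, 3*j:3*j+3] -= block
    return hess

# Toy coordinates standing in for the Cα positions of a small protein.
rng = np.random.default_rng(1)
ca = rng.uniform(0.0, 30.0, size=(60, 3))
w, v = np.linalg.eigh(anm_hessian(ca))
# The six (near-)zero eigenvalues are rigid-body modes; the next, lowest-frequency
# modes approximate collective motions such as hinge bending.
print(w[:8])
```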
Helical to extended conformation
The interconversion of helical and extended conformations at the site of a domain boundary is not uncommon. In calmodulin, torsion angles change for five residues in the middle of a domain linking α-helix. The helix is split into two, almost perpendicular, smaller helices separated by four residues of an extended strand.
Shear motions
Shear motions involve a small sliding movement of domain interfaces, controlled by the amino acid side chains within the interface. Proteins displaying shear motions often have a layered architecture: stacking of secondary structures. The interdomain linker has merely the role of keeping the domains in close proximity.
Domain motion and functional dynamics in enzymes
The analysis of the internal dynamics of structurally different, but functionally similar enzymes
has highlighted a common relationship between the positioning of the
active site and the two principal protein sub-domains. In fact, for several members of the hydrolase superfamily, the catalytic site is located close to the interface separating the two principal quasi-rigid domains. Such positioning appears instrumental for maintaining the precise geometry of the active site, while allowing for an appreciable functionally oriented modulation of the flanking regions resulting from the relative motion of the two sub-domains.
Implications for macromolecular evolution
Evidence suggests that protein dynamics are important for function, e.g. enzyme catalysis in dihydrofolate reductase (DHFR),
yet they are also posited to facilitate the acquisition of new functions by molecular evolution.
This argument suggests that proteins have evolved to have stable, mostly unique folded structures,
but the unavoidable residual flexibility leads to some degree of functional promiscuity,
which can be amplified/harnessed/diverted by subsequent mutations.
Research on promiscuous proteins within the BCL-2 family revealed that nanosecond-scale protein dynamics can play a crucial role in protein binding behaviour and thus promiscuity.
However, there is growing awareness that intrinsically unstructured proteins are quite prevalent in eukaryotic genomes,
casting further doubt on the simplest interpretation of Anfinsen's dogma: "sequence determines structure (singular)".
In effect, the new paradigm is characterized by the addition of two caveats: "sequence and cellular environment determine structural ensemble".
References
Protein folding
Protein biosynthesis | Protein dynamics | [
"Chemistry"
] | 1,725 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
22,836,986 | https://en.wikipedia.org/wiki/METATOY | A METATOY is a sheet, formed by a two-dimensional array of small, telescopic optical components, that switches the path of transmitted light rays. METATOY is an acronym for "metamaterial for rays", representing a number of analogies with metamaterials; METATOYs even satisfy a few definitions of metamaterials, but are certainly not metamaterials in the usual sense. When seen from a distance, the view through each individual telescopic optical component acts as one pixel of the view through the METATOY as a whole. In the simplest case, the individual optical components are all identical; the METATOY then behaves like a homogeneous, but pixellated, window that can have very unusual optical properties (see the picture of the view through a METATOY).
METATOYs are usually treated within the framework of geometrical optics; the light-ray-direction change performed by a METATOY is described by a mapping of the direction of any incoming light ray onto the corresponding direction of the outgoing ray. The light-ray-direction mappings can be very general. METATOYs can even create pixellated light-ray fields that could not exist in non-pixellated form due to a condition imposed by wave optics.
Much of the work on METATOYs is currently theoretical, backed up by computer simulations. A small number of experiments have been performed to date; more experimental work is ongoing.
Examples of METATOYs
Telescopic optical components that have been used as the unit cell of two-dimensional arrays, and which therefore form homogeneous METATOYs, include:
a pair of identical lenses (focal length $f$) that share the same optical axis (perpendicular to the METATOY) and that are separated by $2f$, that is, they share one focal plane (a special case of a refracting telescope with angular magnification −1);
a pair of non-identical lenses (focal lengths $f_1$ and $f_2$) that share the same optical axis (again perpendicular to the METATOY) and that are separated by $f_1 + f_2$, that is, they again share one focal plane (a generalization of the former case, a refracting telescope with any angular magnification);
a pair of non-identical lenses (focal lengths $f_1$ and $f_2$) that share one focal plane, that is, they share the direction of the optical axis, which is not necessarily perpendicular to the METATOY, and they are separated by $f_1 + f_2$ (a generalization of the former case);
a prism; and
a Dove prism.
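The "telescopic" character of the lens-pair components above can be illustrated with standard ABCD ray-transfer matrices: for two thin lenses separated by the sum of their focal lengths, the combined matrix is afocal, so it simply maps incoming ray angles onto outgoing ray angles with angular magnification −f₁/f₂. The focal lengths below are arbitrary values chosen only for illustration.

```python
import numpy as np

def lens(f):      # thin-lens ABCD matrix acting on [height, angle]
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def space(d):     # free-space propagation ABCD matrix
    return np.array([[1.0, d], [0.0, 1.0]])

f1, f2 = 2e-3, 1e-3                              # assumed focal lengths (metres)
telescope = lens(f2) @ space(f1 + f2) @ lens(f1)
print(telescope)                                 # [[-f2/f1, f1+f2], [0, -f1/f2]]

theta_in = 0.01                                  # small incoming ray angle (radians)
_, theta_out = telescope @ np.array([0.0, theta_in])
print(theta_out, -(f1 / f2) * theta_in)          # both -0.02
```

With f₁ = f₂ the angular magnification is −1, reproducing the first component in the list above.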
Examples of inhomogeneous METATOYs include the moiré magnifier, which is based on deliberately "mis-aligned" pairs of confocal microlens arrays; Fresnel lenses, which can be seen as non-homogeneous METATOYs made from prisms;
and frosted glass, which can be seen as an extreme case of an inhomogeneous, random METATOY made from prisms.
Examples of METATOYs as defined above have existed long before analogies with metamaterials were noted and it was recognized that METATOYs can perform wave-optically forbidden ray-direction mappings (in pixellated form).
Wave-optical constraints on light-ray fields and METATOYs
Wave optics describes light at a more fundamental level than geometrical optics. In the ray-optics limit (in which the optical wavelength tends towards zero) of scalar optics (in which light is described as a scalar wave, an approximation that works well for paraxial light with uniform polarization), the light-ray field $\mathbf{d}(\mathbf{r})$ corresponding to a light wave is given by its phase gradient,

$\mathbf{d}(\mathbf{r}) = \nabla \phi(\mathbf{r}),$

where $\phi$ is the phase of the wave. But according to vector calculus, the curl of any gradient is zero, that is

$\nabla \times \nabla \phi = 0,$

and therefore

$\nabla \times \mathbf{d} = 0.$
This last equation is a condition, derived from wave optics, on light-ray fields.
(Each of the three equations that makes up this vector equation expresses the symmetry of the second spatial derivatives, which is how the condition was initially formulated.)
Using the example of ray-rotation sheets, it was shown that METATOYs can create light-ray fields that do not satisfy the above condition on light-ray fields.
Relationship with metamaterials
METATOYs are not metamaterials in the standard sense. The acronym "metamaterial for rays" was chosen because of a number of similarities between METATOYs and metamaterials, which are discussed below, along with the differences.
In addition, metamaterials provided the inspiration for early METATOYs research, as summarized in the following quote:
Motivated by the desire to build optical elements that have some of the visual properties of metamaterials on an everyday size scale and across the entire visible wavelength spectrum, we recently started to investigate sheets formed by miniaturized optical elements that change the direction of transmitted light rays.
Similarities with metamaterials
In a number of ways, METATOYs are analogous to metamaterials:
structure: metamaterials are arrays of small (sub-wavelength size) wave-optical components (electro-magnetic circuits resonant with the optical frequency), whereas METATOYs are arrays of small (so that they work well as pixels), telescopic, "ray-optical components";
functionality: both metamaterials and METATOYs can behave like homogeneous materials, in the case of metamaterials a volume of material, in the case of METATOYs a sheet material, in both cases with very unusual optical properties such as negative refraction.
Differences with metamaterials
Arguably amongst the most startling properties of metamaterials are some that are fundamentally wave-optical, and therefore not reproduced in METATOYs. These include amplification of evanescent waves, which can, in principle, lead to perfect lenses ("superlenses") and magnifying superlenses ("hyperlenses"); reversal of the phase velocity; reversal of the Doppler shift.
However, because they are not bound by wave-optical constraints on light-ray fields, it can be argued that METATOYs can perform light-ray-direction changes that metamaterials could not, unless a METATOY was effectively built out of metamaterials.
See also
Fresnel lens
Zone plate
References
External links
Didactic article on METATOYs
Geometrical optics
Optical devices
Optical materials
Imaging | METATOY | [
"Physics",
"Materials_science",
"Engineering"
] | 1,292 | [
"Glass engineering and science",
"Optical devices",
"Materials",
"Optical materials",
"Matter"
] |
22,837,269 | https://en.wikipedia.org/wiki/Flying%20platform | A Flying Platform is a type of VTOL aircraft for low cost individual usage for short range within an area. It is usually flown using kinesthetic control, similar to that of a surf board.
Examples
De Lackner HZ-1 Aerocycle
Hiller VZ-1 Pawnee
Williams X-Jet
See also
Personal air vehicle
References
Aircraft configurations
Flying platforms
VTOL aircraft
Lift fan
Standing pilot aircraft | Flying platform | [
"Engineering"
] | 81 | [
"Aircraft configurations",
"Aerospace engineering"
] |
22,837,466 | https://en.wikipedia.org/wiki/Weight-shift%20control | Weight-shift control as a means of aircraft flight control is widely used in hang gliders, powered hang gliders, and ultralight trikes. Control is usually by the pilot using their weight against a triangular control bar that is rigidly attached to the wing structure. The wing is mounted on a pivot above the trike carriage or hang glider harness allowing the weight-shift forces to produce changes in pitch and bank.
References
See also
Ultralight aircraft
Aircraft controls
Applications of control engineering
Aircraft categories | Weight-shift control | [
"Engineering"
] | 101 | [
"Control engineering",
"Applications of control engineering"
] |
35,670,796 | https://en.wikipedia.org/wiki/Valley-fill%20circuit | A valley-fill circuit is a type of passive power-factor correction (PFC) circuit.
For purposes of illustration, a basic full-wave diode-bridge rectifier is shown in the first stage, which converts the AC input voltage to a DC voltage.
Operation
When the AC voltage is applied, the rectified line voltage is applied across C1 and C2, as they are both charged via D3 and R1, until C1 and C2 are each charged up to approximately half of the peak line voltage.
When the rectified line voltage falls past its peak, Vout follows it down; once it drops below approximately half of the peak line voltage, into the "valley" phase, C1 and C2 begin to discharge in parallel into the load at Vout, via D1 and D2 respectively, holding Vout near half of the peak.
R1 is needed to prevent a large in-rush current, and electromagnetic interference (EMI).
Advantages and disadvantages
An advantage of this design is its simplicity. A disadvantage is that the ripple voltage can still be 50% of the peak, with a total harmonic distortion (THD) of about 35%, which is rather high. A 1998 United States patent, US6141230A, describes an enhanced valley-fill circuit that provides a power factor of 0.98 and a THD of 9.61%, and is most suited to constant-load applications such as fluorescent lamp ballasts.
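The roughly 50% ripple figure can be illustrated with an idealised sketch in which the output follows the rectified line voltage near its crest and is held at half the peak by the capacitors through the valley. Capacitor droop, diode drops and the effect of R1 are ignored, and the mains values are assumptions.

```python
import numpy as np

v_peak = 325.0                       # peak of ~230 V RMS mains, assumed for illustration
t = np.linspace(0.0, 0.02, 2001)     # one 50 Hz line cycle
rectified = np.abs(v_peak * np.sin(2 * np.pi * 50 * t))

# Idealised valley-fill output: the bridge supplies the load near the crest,
# while C1/C2 (each charged to ~half the peak) hold the output up in the valley.
v_out = np.maximum(rectified, v_peak / 2)

ripple = (v_out.max() - v_out.min()) / v_out.max()
print(f"peak-to-peak ripple ≈ {ripple:.0%} of peak")   # ≈ 50 %, as stated above
```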
References
Electric power
Electrical circuits | Valley-fill circuit | [
"Physics",
"Engineering"
] | 281 | [
"Physical quantities",
"Power (physics)",
"Electronic engineering",
"Electric power",
"Electrical engineering",
"Electrical circuits"
] |
911,229 | https://en.wikipedia.org/wiki/Cavity%20ring-down%20spectroscopy | Cavity ring-down spectroscopy (CRDS) is a highly sensitive optical spectroscopic technique that enables measurement of absolute optical extinction by samples that scatter and absorb light. It has been widely used to study gaseous samples which absorb light at specific wavelengths, and in turn to determine mole fractions down to the parts per trillion level. The technique is also known as cavity ring-down laser absorption spectroscopy (CRLAS).
A typical CRDS setup consists of a laser that is used to illuminate a high-finesse optical cavity, which in its simplest form consists of two highly reflective mirrors. When the laser is in resonance with a cavity mode, intensity builds up in the cavity due to constructive interference. The laser is then turned off in order to allow the measurement of the exponentially decaying light intensity leaking from the cavity. During this decay, light is reflected back and forth thousands of times between the mirrors giving an effective path length for the extinction on the order of a few kilometers.
If a light-absorbing material is now placed in the cavity, the mean lifetime decreases as fewer bounces through the medium are required before the light is fully absorbed, or absorbed to some fraction of its initial intensity. A CRDS setup measures how long it takes for the light to decay to 1/e of its initial intensity, and this "ringdown time" can be used to calculate the concentration of the absorbing substance in the gas mixture in the cavity.
Detailed description
Cavity ring-down spectroscopy is a form of laser absorption spectroscopy. In CRDS, a laser pulse is trapped in a highly reflective (typically R > 99.9%) detection cavity. The intensity of the trapped pulse will decrease by a fixed percentage during each round trip within the cell due to absorption, scattering by the medium within the cell, and reflectivity losses. The intensity of light within the cavity is then determined as an exponential function of time.
The principle of operation is based on the measurement of a decay rate rather than an absolute absorbance. This is one reason for the increased sensitivity over traditional absorption spectroscopy, as the technique is then immune to shot-to-shot laser fluctuations. The decay constant, τ, which is the time taken for the intensity of light to fall to 1/e of the initial intensity, is called the ring-down time and is dependent on the loss mechanism(s) within the cavity. For an empty cavity, the decay constant is dependent on mirror loss and various optical phenomena like scattering and refraction:
$\tau_0 = \frac{n\,l}{c\,(1 - R + X)},$

where n is the index of refraction within the cavity, c is the speed of light in vacuum, l is the cavity length, R is the mirror reflectivity, and X takes into account other miscellaneous optical losses. This equation uses the approximation that ln(1 + x) ≈ x for x close to zero, which is the case under cavity ring-down conditions. Often, the miscellaneous losses are factored into an effective mirror loss for simplicity. An absorbing species in the cavity will increase losses according to the Beer–Lambert law. Assuming the sample fills the entire cavity,

$\tau = \frac{n\,l}{c\,(1 - R + X + \alpha l)},$

where α is the absorption coefficient for a specific analyte concentration at the cavity's resonance wavelength. The decadic absorbance, A, due to the analyte can be determined from both ring-down times:

$A = \frac{n\,l}{2.303\,c}\left(\frac{1}{\tau} - \frac{1}{\tau_0}\right).$

Alternatively, the molar absorptivity, ε, and analyte concentration, C, can be determined from the ratio of both ring-down times. If X can be neglected, one obtains

$\varepsilon C = \frac{1 - R}{2.303\,l}\left(\frac{\tau_0}{\tau} - 1\right).$
When a ratio of species' concentrations is the analytical objective, as for example in carbon-13 to carbon-12 measurements in carbon dioxide, the ratio of ring-down times measured for the same sample at the relevant absorption frequencies can be used directly with extreme accuracy and precision.
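A short numerical sketch of the relations above, using made-up ring-down times and cavity parameters rather than values from any real instrument:

```python
import math

n = 1.0003           # refractive index of the gas filling the cavity (assumed)
c = 2.998e8          # speed of light, m/s
l = 0.5              # cavity length, m (assumed)

tau_empty = 50e-6    # hypothetical empty-cavity ring-down time, s
tau_sample = 45e-6   # hypothetical ring-down time with the absorber present, s

alpha = (n / c) * (1 / tau_sample - 1 / tau_empty)   # absorption coefficient, 1/m
A = alpha * l / math.log(10)                          # decadic absorbance
print(f"alpha = {alpha:.3e} m^-1, A = {A:.3e}")
```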
Advantages of CRDS
There are two main advantages to CRDS over other absorption methods:
First, it is not affected by fluctuations in the laser intensity. In most absorption measurements, the light source must be assumed to remain steady between blank (no analyte), standard (known amount of analyte), and sample (unknown amount of analyte). Any drift (change in the light source) between measurements will introduce errors. In CRDS, the ringdown time does not depend on the intensity of the laser, so fluctuations of this type are not a problem. Because the measurement is independent of laser intensity, CRDS requires no calibration or comparison with standards.
Second, it is very sensitive due to its long pathlength. In absorption measurements, the smallest amount that can be detected is proportional to the length that the light travels through a sample. Since the light reflects many times between the mirrors, it ends up traveling long distances. For example, a laser pulse making 500 round trips through a 1-meter cavity will effectively have traveled through 1 kilometer of sample.
Thus, the advantages include:
High sensitivity due to the multipass nature (i.e. long pathlength) of the detection cell.
Immunity to shot variations in laser intensity due to the measurement of a rate constant.
Wide range of use for a given set of mirrors; typically, ±5% of the center wavelength.
High throughput, individual ring down events occur on the millisecond time scale.
No need for a fluorophore, which makes it more attractive than laser-induced fluorescence (LIF) or resonance-enhanced multiphoton ionization (REMPI) for some (e.g. rapidly predissociating) systems.
Commercial systems available.
Disadvantages of CRDS
Spectra cannot be acquired quickly due to the monochromatic laser source which is used. Having said this, some groups are now beginning to develop the use of broadband LED or supercontinuum sources for CRDS, the light of which can then be dispersed by a grating onto a CCD, or Fourier transformed spectrometer (mainly in broadband analogues of CRDS). Perhaps more importantly, the development of CRDS based techniques have now been demonstrated over the range from the near UV to the mid-infrared. In addition, the frequency-agile rapid scanning (FARS) CRDS technique has been developed to overcome the mechanical or thermal frequency tuning which typically limits CRDS acquisition rates. The FARS method utilizes an electro-optic modulator to step a probe laser side band to successive cavity modes, eliminating tuning time between data points and allowing for acquisition rates about 2 orders of magnitude faster than traditional thermal tuning.
Analytes are limited both by the availability of tunable laser light at the appropriate wavelength and also the availability of high reflectance mirrors at those wavelengths.
Expense: the requirement for laser systems and high reflectivity mirrors often makes CRDS orders of magnitude more expensive than some alternative spectroscopic techniques.
See also
Absorption spectroscopy
Laser absorption spectrometry
Noise-Immune Cavity-Enhanced Optical-Heterodyne Molecular Spectroscopy (NICE-OHMS)
Tunable Diode Laser Absorption Spectroscopy (TDLAS)
References
Spectroscopy | Cavity ring-down spectroscopy | [
"Physics",
"Chemistry"
] | 1,418 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
911,606 | https://en.wikipedia.org/wiki/Oktay%20Sinano%C4%9Flu | Oktay Sinanoğlu (February 25, 1935 – April 19, 2015) was a Turkish physical chemist and molecular biophysicist who made contributions to the theory of electron correlation in molecules, the statistical mechanics of clathrate hydrates, quantum chemistry, and the theory of solvation.
Early life and education
Sinanoğlu was born in Bari, Italy on February 25, 1935. His parents were Rüveyde (Karacabey) Sinanoğlu and Nüzhet Haşim. His father Rüveyde was a writer, and a consular official in the Bari consulate of Turkey. Following his father's recall to Turkey in July 1938, the family returned to Turkey before the start of World War II. He had a sister, Esin Afşar (1936-2011), who became a well-known singer and actress.
Sinanoğlu graduated from TED Ankara Koleji in 1951. He went to the United States in 1953, where he studied in University of California, Berkeley graduating with a BSc degree in 1956. The following year, he completed an MSc degree at MIT (1957), and was awarded a Sloan Research Fellowship. He completed a predoctoral fellowship (1958-1959) and earned his PhD in physical chemistry (1959-1960) at the University of California, Berkeley, advised by Kenneth Pitzer.
Academic career
In 1960, Sinanoğlu joined the chemistry department at Yale University. He was appointed full professor of chemistry in 1963. At age 28, he became the youngest full professor in Yale’s 20th-century history. It has been claimed that he was also the third-youngest full professor in the 300-plus year history of Yale University.
During his tenure at Yale he wrote a number of papers in various subfields of theoretical chemistry, the most widely cited of which was his 1961 paper on electron correlation. This work anticipated the widely used coupled cluster method for describing electrons in molecules with greater accuracy than is possible via the Hartree-Fock method. He also published important papers on the statistical mechanics of clathrate hydrates, solvation, and surface tension. His final projects were focused on the development of his valency interaction formula (VIF) theory, a method for predicting energy level patterns for compounds from the manipulation of graphs (1983). He intended for chemists to be able to use his system to predict the ways in which complex chemical reactions would proceed, using only a chalkboard or pencil and paper. He continued to develop the VIF method, which he sometimes referred to as "Sinanoğlu Made Simple," and other problems related to graph theory and quantum mechanics for the rest of his career. After 37 years on the Yale faculty, Sinanoğlu retired in 1997.
During his time at Yale, Sinanoğlu served as a consultant to Turkish universities, the Scientific and Technological Research Council of Turkey (TÜBİTAK), and the Japan Society for the Promotion of Science (JSPS). In 1962, the Board of Trustees of Middle East Technical University in Ankara granted him the title of "consulting professor."
After his retirement from Yale, Sinanoğlu was appointed to the chemistry department of Yıldız Technical University in Istanbul, serving until 2002.
Sinanoğlu was the author or co-author of over 200 scientific articles and books. He also authored books on contemporary affairs in Turkey and Turkish language, such as "Target Turkey" and "Bye Bye Turkish" (2005). In "Bye Bye Turkish", he propounded the idea of cognation between Turkish and Japanese based on the alleged similarity of a number of words.
A 2001 best-seller book about his life and works, edited by Turkish writer Emine Çaykara, referred to him as "The Turkish Einstein, Oktay Sinanoğlu" ().
Honors
He received the "TÜBİTAK Science Award" for chemistry in 1966, the Alexander von Humboldt Research Award in chemistry in 1973, and the "International Outstanding Scientist Award of Japan" in 1975. It has been reported in Turkish media that Sinanoğlu was a two-time nominee for the Nobel Prize in Chemistry, but this claim is not supported by actual data from the Nobel Foundation.
Personal life and death
On December 21, 1963, Oktay Sinanoğlu married Paula Armbruster, who was doing graduate work at Yale University. The wedding ceremony took place in the Branford College Chapel of Yale. They had three children. After their later divorce, he married Dilek Sinanoğlu and from this marriage he became the father of twins. The family resided in the Emerald Lakes neighborhood of Fort Lauderdale, Florida, and in Istanbul, Turkey.
Dilek Sinanoğlu made public on April 10, 2015, that Oktay Sinanoğlu was hospitalized in Miami, Florida, and was in a coma in the intensive care unit. He died at age 80 on April 19, 2015. No medical statement was released about the cause of the death. His body was transferred to Turkey, where he was buried in Karacaahmet Cemetery, Üsküdar following the religious funeral service at Şakirin Mosque.
References
External links
List of publications by Oktay Sinanoğlu
1935 births
2015 deaths
People from Bari
TED Ankara College Foundation Schools alumni
University of California, Berkeley alumni
Massachusetts Institute of Technology alumni
Sloan Research Fellows
Theoretical chemists
Turkish chemists
Turkish biochemists
Turkish expatriates in the United States
Yale University faculty
Academic staff of Middle East Technical University
Academic staff of Yıldız Technical University
Recipients of TÜBİTAK Science Award
Burials at Karacaahmet Cemetery
American academics of Turkish descent | Oktay Sinanoğlu | [
"Chemistry"
] | 1,154 | [
"Quantum chemistry",
"Theoretical chemistry",
"Theoretical chemists",
"Physical chemists"
] |
912,171 | https://en.wikipedia.org/wiki/Optical%20ring%20resonators | An optical ring resonator is a set of waveguides in which at least one is a closed loop coupled to some sort of light input and output. (These can be, but are not limited to being, waveguides.) The concepts behind optical ring resonators are the same as those behind whispering galleries except that they use light and obey the properties behind constructive interference and total internal reflection. When light of the resonant wavelength is passed through the loop from the input waveguide, the light builds up in intensity over multiple round-trips owing to constructive interference and is output to the output bus waveguide which serves as a detector waveguide. Because only a select few wavelengths will be at resonance within the loop, the optical ring resonator functions as a filter. Additionally, as implied earlier, two or more ring waveguides can be coupled to each other to form an add/drop optical filter.
Background
Optical ring resonators work on the principles behind total internal reflection, constructive interference, and optical coupling.
Total internal reflection
The light travelling through the waveguides in an optical ring resonator remains within the waveguides due to the ray optics phenomenon known as total internal reflection (TIR). TIR is an optical phenomenon that occurs when a ray of light strikes the boundary of a medium and fails to refract through the boundary. Given that the angle of incidence is larger than the critical angle (with respect to the normal of the surface) and the refractive index is lower on the other side of the boundary relative to the incident ray, TIR will occur and no light will be able to pass through. For an optical ring resonator to work well, total internal reflection conditions must be met and the light travelling through the waveguides must not be allowed to escape by any means.
Interference
Interference is the process by which two waves superimpose to form a resultant wave of greater or lesser amplitude. Interference usually refers to the interaction of two distinct waves, and it is a result of the linearity of Maxwell's equations. Interference can be constructive or destructive depending on the relative phase of the two waves. In constructive interference, the two waves have the same phase and therefore interfere such that the resulting wave amplitude is equal to the sum of the two individual amplitudes. As the light in an optical ring resonator completes multiple circuits around the ring component, it will interfere with the other light still in the loop. As such, assuming there are no losses in the system such as those due to absorption, evanescence, or imperfect coupling and the resonance condition is met, the intensity of the light emitted from a ring resonator will be equal to the intensity of the light fed into the system.
Optical coupling
Important for understanding how an optical ring resonator works is the concept of how the linear waveguides are coupled to the ring waveguide. When a beam of light passes through a waveguide, as shown in the graph on the right, part of the light is coupled into the optical ring resonator. The reason for this is the evanescent field, which extends outside the waveguide mode with an exponentially decreasing radial profile. In other words, if the ring and the waveguide are brought close together, some light from the waveguide can couple into the ring. Three aspects affect the optical coupling: the distance between the waveguide and the ring resonator, the coupling length, and the refractive indices of the waveguide, the ring, and the medium between them. To optimize the coupling, the distance between the ring resonator and the waveguide is usually made small; the smaller the gap, the more readily optical coupling occurs. The coupling length, the effective length of the ring's curve over which it interacts with the waveguide, also matters: studies have shown that coupling becomes easier as the coupling length increases. Finally, the refractive index of the waveguide material, of the ring resonator material, and of the medium between the waveguide and the ring resonator also affects the coupling. The medium is usually the most important feature under study, since it has a great effect on the transmission of the light wave; its refractive index can be chosen large or small according to the application.
One more feature of optical coupling is critical coupling. At critical coupling, no light passes through the waveguide beyond the coupling region: the light beam is coupled into the optical ring resonator, where it is stored and eventually lost.
Lossless coupling is when no light is transmitted all the way through the input waveguide to its own output; instead, all of the light is coupled into the ring waveguide (such as what is depicted in the image at the top of this page). For lossless coupling to occur, the following equation must be satisfied:
$|\kappa|^2 + |t|^2 = 1,$

where t is the transmission coefficient through the coupler and $\kappa$ is the taper-sphere mode coupling amplitude, also referred to as the coupling coefficient.
Theory
To understand how optical ring resonators work, we must first understand the optical path length difference (OPD) of a ring resonator. This is given as follows for a single-ring ring resonator:
$\mathrm{OPD} = 2\pi r\, n_{\mathrm{eff}},$

where r is the radius of the ring resonator and $n_{\mathrm{eff}}$ is the effective index of refraction of the waveguide material. Due to the total internal reflection requirement, $n_{\mathrm{eff}}$ must be greater than the index of refraction of the surrounding fluid in which the resonator is placed (e.g. air). For resonance to take place, the following resonant condition must be satisfied:

$m \lambda_m = 2\pi r\, n_{\mathrm{eff}},$

where $\lambda_m$ is the resonant wavelength and m is the mode number of the ring resonator. This equation means that in order for light to interfere constructively inside the ring resonator, the circumference of the ring must be an integer multiple of the wavelength of the light. As such, the mode number must be a positive integer for resonance to take place. As a result, when the incident light contains multiple wavelengths (such as white light), only the resonant wavelengths will be able to pass through the ring resonator fully.

The quality factor and the finesse of an optical ring resonator can be quantitatively described using the following formulas (see, e.g., eq. 2.37, eq. 19–20, or eq. 12 and 19 in the cited references):

$Q = \frac{\nu}{\delta\nu}, \qquad \mathcal{F} = \frac{\nu_{\mathrm{FSR}}}{\delta\nu},$

where $\mathcal{F}$ is the finesse of the ring resonator, $\nu$ is the operation frequency, $\nu_{\mathrm{FSR}}$ is the free spectral range and $\delta\nu$ is the full-width half-max of the transmission spectra. The quality factor is useful in determining the spectral range of the resonance condition for any given ring resonator. The quality factor is also useful for quantifying the amount of losses in the resonator, as a low Q factor is usually due to large losses.
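As a numerical illustration of the resonance condition, free spectral range, quality factor and finesse, the sketch below uses assumed values for a generic microring; the linewidth is simply postulated, and dispersion is neglected (the group index is taken equal to the effective index).

```python
from math import pi

n_eff = 2.4          # assumed effective refractive index
r = 10e-6            # assumed ring radius, m

m = 150                                        # a mode number
lam = 2 * pi * r * n_eff / m                   # resonant wavelength of mode m
fsr = lam**2 / (2 * pi * r * n_eff)            # free spectral range in wavelength

fwhm = 0.05e-9                                 # assumed resonance linewidth, m
Q = lam / fwhm                                 # quality factor (≈ ν / δν)
finesse = fsr / fwhm                           # finesse (FSR / linewidth)
print(f"λ = {lam*1e9:.1f} nm, FSR = {fsr*1e9:.2f} nm, Q = {Q:.2e}, finesse = {finesse:.0f}")
```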
Double ring resonators
In a double ring resonator, two ring waveguides are used instead of one. They may be arranged in series (as shown on the right) or in parallel. When using two ring waveguides in series, the output of the double ring resonator will be in the same direction as the input (albeit with a lateral shift). When the input light meets the resonance condition of the first ring, it will couple into the ring and travel around inside of it. As subsequent loops around the first ring bring the light to the resonance condition of the second ring, the two rings will be coupled together and the light will be passed into the second ring. By the same method, the light will then eventually be transferred into the bus output waveguide. Therefore, in order to transmit light through a double ring resonator system, we will need to satisfy the resonant condition for both rings as follows:
$m_1 \lambda_1 = 2\pi n_1 r_1, \qquad m_2 \lambda_2 = 2\pi n_2 r_2,$

where $m_1$ and $m_2$ are the mode numbers of the first and second ring respectively, and they must remain positive integers. For the light to exit the ring resonator to the output bus waveguide, the wavelength of the light in each ring must be the same, that is, $\lambda_1 = \lambda_2$ for resonance to occur. As such, we get the following equation governing resonance:

$\frac{m_1}{m_2} = \frac{n_1 r_1}{n_2 r_2}.$

Note that both $m_1$ and $m_2$ need to remain integers.
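The integer mode-matching requirement can be checked numerically. In the sketch below the radii and effective indices are arbitrary illustrative choices; the smallest integer pair $m_1$, $m_2$ satisfying the ratio condition is scaled up to a physically plausible mode number.

```python
from math import pi
from fractions import Fraction

n1 = n2 = 2.4                 # assumed effective indices (taken equal here)
r1, r2 = 10e-6, 12e-6         # assumed ring radii, m

# m1/m2 must equal n1*r1/(n2*r2); reduce that ratio to the smallest integer pair.
ratio = Fraction(10, 12)      # = n1*r1 : n2*r2 for the values above
m1, m2 = ratio.numerator, ratio.denominator          # 5, 6
m1, m2 = 30 * m1, 30 * m2                            # scale up: 150, 180

lam = 2 * pi * n1 * r1 / m1
assert abs(2 * pi * n2 * r2 / m2 - lam) < 1e-15      # both rings resonant at lam
print(m1, m2, f"common resonance near {lam*1e9:.1f} nm")
```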
A system of two ring resonators coupled to a single waveguide has also been shown to work as a tunable reflective filter (or an optical mirror). Forward propagating waves in the waveguide excite anti-clockwise rotating waves in both rings. Due to the inter-resonator coupling, these waves generate clockwise rotating waves in both rings which are in turn coupled to backward propagating (reflected) waves in the waveguide.
In this context, the utilization of nested ring resonator cavities has been demonstrated in recent studies. These nested ring resonators are designed to enhance the quality factor (Q-factor) and extend the effective light-matter interaction length. These nested cavity configurations enable light to traverse the nested cavity a number of times equal to the round trips of the main cavity multiplied by the round trips of the nested cavity, as depicted in the figure below.
Applications
Due to the nature of the optical ring resonator and how it "filters" certain wavelengths of light passing through, it is possible to create high-order optical filters by cascading many optical ring resonators in series. This would allow for "small size, low losses, and integrability into [existing] optical networks." Additionally, since the resonance wavelengths can be changed by simply increasing or decreasing the radius of each ring, the filters can be considered tunable. This basic property can be used to create a sort of mechanical sensor. If an optical fiber experiences mechanical strain, the dimensions of the fiber will be altered, thus resulting in a change in the resonant wavelength of light emitted. This can be used to monitor fibers or waveguides for changes in their dimensions.
The tuning process can also be achieved by a change of refractive index using various means including thermo-optic, electro-optic or all-optical effects. Electro-optic and all-optical tuning are faster than thermal and mechanical means, and hence find various applications including in optical communication. Optical modulators based on a high-Q microring are reported to yield outstandingly low modulation power at speeds of > 50 Gbit/s, at the cost of the tuning power needed to match the modulator to the wavelength of the light source. A ring modulator placed in a Fabry–Perot laser cavity was reported to eliminate this tuning power by automatic matching of the laser wavelength with that of the ring modulator, while maintaining high-speed ultralow-power modulation of a Si microring modulator.
Optical ring, cylindrical, and spherical resonators have also been proven useful in the field of biosensing, and a crucial research focus is the enhancement of their biosensing performance. One of the main benefits of using ring resonators in biosensing is the small volume of sample specimen required to obtain a given spectroscopic result, which greatly reduces the background Raman and fluorescence signals from the solvent and other impurities. Resonators have also been used to characterize a variety of absorption spectra for the purposes of chemical identification, particularly in the gaseous phase.
Another potential application for optical ring resonators are in the form of whispering gallery mode switches. "[Whispering Gallery Resonator] microdisk lasers are stable and switch reliably and hence, are suitable as switching elements in all-optical networks." An all-optical switch based on a high Quality factor cylindrical resonator has been proposed that allows for fast binary switching at low power.
Many researchers are interested in creating three-dimensional ring resonators with very high quality factors. These dielectric spheres, also called microsphere resonators, "were proposed as low-loss optical resonators with which to study cavity quantum electrodynamics with laser-cooled atoms or as ultrasensitive detectors for the detection of single trapped atoms.”
Ring resonators have also proved useful as single photon sources for quantum information experiments. Many materials used to fabricate ring resonator circuits have non-linear responses to light at high enough intensities. This non-linearity allows for frequency modulation processes such as four-wave mixing and Spontaneous parametric down-conversion which generate photon pairs. Ring resonators amplify the efficiency of these processes as they allow the light to circulate around the ring.
See also
Resonator
Ring laser
Total internal reflection
Coupling
Filter (optics)
Optical switch
Coupled mode theory
References
External links
Animation of optical ring resonator on YouTube
Optical devices
Resonators | Optical ring resonators | [
"Materials_science",
"Engineering"
] | 2,602 | [
"Glass engineering and science",
"Optical devices"
] |
912,904 | https://en.wikipedia.org/wiki/Flow%20control%20valve | A flow control valve regulates the flow or pressure of a fluid. Control valves normally respond to signals generated by independent devices such as flow meters or temperature gauges.
Operation
Control valves are normally fitted with actuators and positioners. Pneumatically-actuated globe valves and diaphragm valves are widely used for control purposes in many industries, although quarter-turn types such as (modified) ball and butterfly valves are also used.
Control valves can also work with hydraulic actuators (also known as hydraulic pilots). These types of valves are also known as automatic control valves. The hydraulic actuators respond to changes of pressure or flow and will open/close the valve. Automatic control valves do not require an external power source, meaning that the fluid pressure is enough to open and close them.
Automatic control valves include pressure reducing valves, flow control valves, back-pressure sustaining valves, altitude valves, and relief valves.
Application
Process plants consist of hundreds, or even thousands, of control loops all networked together to produce a product to be offered for sale. Each of these control loops is designed to keep some important process variable, such as pressure, flow, level, or temperature, within a required operating range to ensure the quality of the end product. Each loop receives and internally creates disturbances that detrimentally affect the process variable, and interaction from other loops in the network provides disturbances that influence the process variable.
To reduce the effect of these load disturbances, sensors and transmitters collect information about the process variable and its relationship to some desired set point. A controller then processes this information and decides what must be done to get the process variable back to where it should be after a load disturbance occurs. When all the measuring, comparing, and calculating are done, some type of final control element must implement the strategy selected by the controller. The most common final control element in the process control industries is the control valve. The control valve manipulates a flowing fluid, such as gas, steam, water, or chemical compounds, to compensate for the load disturbance and keep the regulated process variable as close as possible to the desired set point.
Images
See also
Ball valve
Butterfly valve
Check valve
Control valve
Diaphragm valve
Flow limiter
Flow measurement
Gate valve
Globe valve
Mass flow controller
Needle valve
Plastic pressure pipe systems
Thermal mass flow meter
References
Valves | Flow control valve | [
"Physics",
"Chemistry"
] | 470 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
912,962 | https://en.wikipedia.org/wiki/Butterfly%20valve | A butterfly valve is a valve that isolates or regulates the flow of a fluid. The closing mechanism is a disk that rotates.
Principle of operation
Operation is similar to that of a ball valve, which allows for quick shut off. Butterfly valves are generally favored because they cost less than other valve designs, and are lighter weight so they need less support. The disc is positioned in the center of the pipe. A rod passes through the disc to an actuator on the outside of the valve. Rotating the actuator turns the disc either parallel or perpendicular to the flow. Unlike a ball valve, the disc is always present within the flow, so it induces a pressure drop, even when open.
A butterfly valve is from a family of valves called quarter-turn valves. In operation, the valve is fully open or closed when the disc is rotated a quarter turn. The "butterfly" is a metal disc mounted on a rod. When the valve is closed, the disc is turned so that it completely blocks off the passageway. When the valve is fully open, the disc is rotated a quarter turn so that it allows an almost unrestricted passage of the fluid. The valve may also be opened incrementally to throttle flow.
There are different kinds of butterfly valves, each adapted for different pressures and different usage. The zero-offset butterfly valve, which uses the flexibility of rubber, has the lowest pressure rating. The high-performance double offset butterfly valve, used in slightly higher-pressure systems, is offset from the center line of the disc seat and body seal (offset one), and the center line of the bore (offset two). This creates a cam action during operation to lift the seat out of the seal resulting in less friction than is created in the zero offset design and decreases its tendency to wear. The valve best suited for high-pressure systems is the triple offset butterfly valve. In this valve, the disc seat contact axis is offset, which acts to virtually eliminate sliding contact between disc and seat. In the case of triple offset valves the seat is made of metal so that it can be machined such as to achieve a bubble-tight shut-off when in contact with the disc.
Types
Concentric butterfly valves – this type of valve has a resilient rubber seat with a metal disc.
Doubly-eccentric butterfly valves (high-performance butterfly valves or double-offset butterfly valves) – different type of materials is used for seat and disc.
Triply-eccentric butterfly valves (triple-offset butterfly valves) – the seats are either laminated or solid metal seat design.
Electrically actuated butterfly valves – quarter-turn valves controlled by an electric motor. They offer fast and precise flow regulation, remote operation, and versatility for various applications.
Wafer-style butterfly valve
The wafer style butterfly valve is designed to maintain a seal against bi-directional pressure differential to prevent any backflow in systems designed for unidirectional flow. It accomplishes this with a tightly fitting seal (e.g. a gasket, an O-ring, or a precision-machined, flat valve face) on the upstream and downstream sides of the valve. The drawbacks are that wafer butterfly valves have only a small flow control range, the pressure drop across them may be greater, and they are prone to clogging due to their design.
Lug-style butterfly valve
Lug-style valves have threaded inserts at both sides of the valve body. This allows them to be installed into a system using two sets of bolts and no nuts. The valve is installed between two flanges using a separate set of bolts for each flange. This setup permits either side of the piping system to be disconnected without disturbing the other side.
A lug-style butterfly valve used in dead end service generally has a reduced pressure rating. For example, a lug-style butterfly valve mounted between two flanges has a pressure rating. The same valve mounted with one flange, in dead end service, has a rating. Lugged valves are extremely resistant to chemicals and solvents and can handle temperatures up to 200 °C, which makes it a versatile solution.
Rotary valve
Rotary valves constitute a derivation of the general butterfly valves and are used mainly in powder processing industries. Instead of being flat, the butterfly is equipped with pockets. When closed, it acts exactly like a butterfly valve and is tight. But when it rotates, the pockets drop a defined amount of solids, which makes the valve suitable for dosing bulk product by gravity. Such valves are usually of small size (less than 300 mm), pneumatically actuated, and rotate 180 degrees back and forth.
Use in industry
In the pharmaceutical, chemical, and food industries, a butterfly valve is used to interrupt product flow (solid, liquid, gas) within the process. The valves used in these industries are usually manufactured according to cGMP guidelines (current good manufacturing practice). Butterfly valves generally replaced ball valves in many industries, particularly petroleum, due to lower cost and ease of installation, but pipelines containing butterfly valves cannot be 'pigged' for cleaning.
History
The butterfly valve has been in use since the late 18th century. James Watt used a butterfly valve in his steam engine prototypes. With advances in material manufacturing and technology, butterfly valves could be made smaller and withstand more-extreme temperatures. After World War II, synthetic rubbers were used in the sealer members, allowing the butterfly valve to be used in many more industries. In 1969 James E. Hemphill patented an improvement to the butterfly valve, reducing the hydrodynamic torque needed to change the output of the valve.
Images
See also
Check valve
Control valve
Diaphragm valve
Gate valve
Globe valve
Needle valve
Plastic pressure pipe systems
References
Valves
Plumbing valves | Butterfly valve | [
"Physics",
"Chemistry"
] | 1,193 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
912,982 | https://en.wikipedia.org/wiki/Euclidean%20quantum%20gravity | In theoretical physics, Euclidean quantum gravity is a version of quantum gravity. It seeks to use the Wick rotation to describe the force of gravity according to the principles of quantum mechanics.
Introduction in layperson's terms
The Wick rotation
In physics, a Wick rotation, named after Gian-Carlo Wick, is a method of finding a solution to dynamics problems in dimensions, by transposing their descriptions in dimensions, by trading one dimension of space for one dimension of time. More precisely, it substitutes a mathematical problem in Minkowski space into a related problem in Euclidean space by means of a transformation that substitutes an imaginary-number variable for a real-number variable.
It is called a rotation because when complex numbers are represented as a plane, the multiplication of a complex number by i is equivalent to rotating the vector representing that number by an angle of π/2 radians about the origin.
For example, a Wick rotation could be used to relate a macroscopic event temperature diffusion (like in a bath) to the underlying thermal movements of molecules. If we attempt to model the bath volume with the different gradients of temperature we would have to subdivide this volume into infinitesimal volumes and see how they interact. We know such infinitesimal volumes are in fact water molecules. If we represent all molecules in the bath by only one molecule in an attempt to simplify the problem, this unique molecule should walk along all possible paths that the real molecules might follow. The path integral formulation is the conceptual tool used to describe the movements of this unique molecule, and Wick rotation is one of the mathematical tools that are very useful to analyse a path integral problem.
Application in quantum mechanics
In a somewhat similar manner, the motion of a quantum object as described by quantum mechanics implies that it can exist simultaneously in different positions and have different speeds. It differs clearly from the movement of a classical object (e.g. a billiard ball), since in this case a single path with precise position and speed can be described. A quantum object does not move from A to B with a single path, but moves from A to B by all ways possible at the same time. According to the Feynman path-integral formulation of quantum mechanics, the path of the quantum object is described mathematically as a weighted average of all those possible paths. In 1966 an explicitly gauge-invariant functional-integral algorithm was found by DeWitt, which extended Feynman's new rules to all orders. What is appealing in this new approach is its lack of singularities, which are unavoidable in general relativity.
Another operational problem with general relativity is the computational difficulty, because of the complexity of the mathematical tools used. Path integrals in contrast have been used in mechanics since the end of the nineteenth century and is well known. In addition, the path-integral formalism is used both in classical and quantum physics so it might be a good starting point for unifying general relativity and quantum theories. For example, the quantum-mechanical Schrödinger equation and the classical heat equation are related by Wick rotation. So the Wick relation is a good tool to relate a classical phenomenon to a quantum phenomenon. The ambition of Euclidean quantum gravity is to use the Wick rotation to find connections between a macroscopic phenomenon, gravity, and something more microscopic.
More rigorous treatment
Euclidean quantum gravity refers to a Wick rotated version of quantum gravity, formulated as a quantum field theory. The manifolds that are used in this formulation are 4-dimensional Riemannian manifolds instead of pseudo-Riemannian manifolds. It is also assumed that the manifolds are compact, connected and boundaryless (i.e. no singularities). Following the usual quantum field-theoretic formulation, the vacuum-to-vacuum amplitude is written as a functional integral over the metric tensor, which is now the quantum field under consideration:

$Z = \int \mathcal{D}\mathbf{g}\,\mathcal{D}\phi\; e^{-S_E[\mathbf{g},\,\phi]},$

where φ denotes all the matter fields and $S_E$ is the Euclidean action (see Einstein–Hilbert action).
Relation to ADM formalism
Euclidean Quantum Gravity does relate back to ADM formalism used in canonical quantum gravity and recovers the Wheeler–DeWitt equation under various circumstances. If we have some matter field , then the path integral reads
where integration over includes an integration over the three-metric, the lapse function , and shift vector . But we demand that be independent of the lapse function and shift vector at the boundaries, so we obtain
where is the three-dimensional boundary. Observe that the vanishing of this expression implies that the functional derivative vanishes, giving us the Wheeler–DeWitt equation. A similar statement may be made for the diffeomorphism constraint (take the functional derivative with respect to the shift functions instead).
References
Formally relates Euclidean quantum gravity to ADM formalism.
Quantum gravity | Euclidean quantum gravity | [
"Physics"
] | 962 | [
"Quantum gravity",
"Unsolved problems in physics",
"Physics beyond the Standard Model"
] |
913,620 | https://en.wikipedia.org/wiki/Touchdown%20polymerase%20chain%20reaction | The touchdown polymerase chain reaction or touchdown style polymerase chain reaction is a method of polymerase chain reaction by which primers avoid amplifying nonspecific sequences. The annealing temperature during a polymerase chain reaction determines the specificity of primer annealing. The melting point of the primer sets the upper limit on annealing temperature. At temperatures just above this point, only very specific base pairing between the primer and the template will occur. At lower temperatures, the primers bind less specifically. Nonspecific primer binding obscures polymerase chain reaction results, as the nonspecific sequences to which primers anneal in early steps of amplification will "swamp out" any specific sequences because of the exponential nature of polymerase amplification.
Method
The earliest steps of a touchdown polymerase chain reaction cycle have high annealing temperatures. The annealing temperature is decreased in increments for every subsequent set of cycles. The primer will anneal at the highest temperature which is least-permissive of nonspecific binding that it is able to tolerate. Thus, the first sequence amplified is the one between the regions of greatest primer specificity; it is most likely that this is the sequence of interest. These fragments will be further amplified during subsequent rounds at lower temperatures, and will outcompete the nonspecific sequences to which the primers may bind at those lower temperatures. If the primer initially (during the higher-temperature phases) binds to the sequence of interest, subsequent rounds of polymerase chain reaction can be performed upon the product to further amplify those fragments. Touchdown increases specificity of the reaction at higher temperatures and increases the efficiency towards the end by lowering the annealing temperature.
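A touchdown thermocycling programme is easy to express as a short script; the temperatures, step size and cycle counts below are placeholder assumptions, since a real protocol is chosen from the primers' melting temperatures and the instrument used.

```python
def touchdown_program(start_tm=65.0, final_tm=55.0, step=1.0,
                      cycles_per_step=2, extra_cycles=20):
    """Yield (cycle_number, annealing_temperature) pairs: the annealing
    temperature starts above the expected primer Tm and is lowered by `step`
    every `cycles_per_step` cycles until `final_tm`, which is then held for
    the remaining conventional cycles."""
    cycle = 0
    temperature = start_tm
    while temperature > final_tm:
        for _ in range(cycles_per_step):
            cycle += 1
            yield cycle, temperature
        temperature -= step
    for _ in range(extra_cycles):
        cycle += 1
        yield cycle, final_tm

for cycle, tm in touchdown_program():
    print(f"cycle {cycle:2d}: anneal at {tm:.1f} °C")
```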
From a mathematical point of view products of annealing at smaller temperatures are disadvantaged by for the first annealing in cycle and the second one in cycle for .
References
Molecular biology
Laboratory techniques
Amplifiers
Polymerase chain reaction | Touchdown polymerase chain reaction | [
"Chemistry",
"Technology",
"Biology"
] | 399 | [
"Biochemistry methods",
"Genetics techniques",
"Polymerase chain reaction",
"nan",
"Molecular biology",
"Biochemistry",
"Amplifiers"
] |
913,945 | https://en.wikipedia.org/wiki/Hyperfocal%20distance | In optics and photography, hyperfocal distance is a distance from a lens beyond which all objects can be brought into an "acceptable" focus. As the hyperfocal distance is the focus distance giving the maximum depth of field, it is the most desirable distance to set the focus of a fixed-focus camera. The hyperfocal distance is entirely dependent upon what level of sharpness is considered to be acceptable.
The hyperfocal distance has a property called "consecutive depths of field", where a lens focused at an object whose distance from the lens is at the hyperfocal distance H will hold a depth of field from H/2 to infinity; if the lens is focused to H/2, the depth of field will be from H/3 to H; if the lens is then focused to H/3, the depth of field will be from H/4 to H/2, etc.
Thomas Sutton and George Dawson first wrote about hyperfocal distance (or "focal range") in 1867. Louis Derr in 1906 may have been the first to derive a formula for hyperfocal distance. Rudolf Kingslake wrote in 1951 about the two methods of measuring hyperfocal distance.
Some cameras have their hyperfocal distance marked on the focus dial. For example, on the Minox LX focusing dial there is a red dot between and infinity; when the lens is set at the red dot, that is, focused at the hyperfocal distance, the depth of field stretches from to infinity. Some lenses have markings indicating the hyperfocal range for specific f-stops, also called a depth-of-field scale.
Two methods
There are two common methods of defining and measuring hyperfocal distance, leading to values that differ only slightly. The distinction between the two meanings is rarely made, since they have almost identical values. The value computed according to the first definition exceeds that from the second by just one focal length.
Definition 1
The hyperfocal distance is the closest distance at which a lens can be focused while keeping objects at infinity acceptably sharp. When the lens is focused at this distance, all objects at distances from half of the hyperfocal distance out to infinity will be acceptably sharp.
Definition 2
The hyperfocal distance is the distance beyond which all objects are acceptably sharp, for a lens focused at infinity.
Acceptable sharpness
The hyperfocal distance is entirely dependent upon what level of sharpness is considered to be acceptable. The criterion for the desired acceptable sharpness is specified through the circle of confusion (CoC) diameter limit. This criterion is the largest acceptable spot size diameter that an infinitesimal point is allowed to spread out to on the imaging medium (film, digital sensor, etc.).
Formula
For the first definition,
H = f²/(Nc) + f
where
H is the hyperfocal distance;
f is the focal length of the lens;
N is the f-number (N = f/D for aperture diameter D); and
c is the circle of confusion limit.
For any practical f-number, the added focal length f is insignificant in comparison with the first term, so that
H ≈ f²/(Nc).
This formula is exact for the second definition, if H is measured from a thin lens, or from the front principal plane of a complex lens; it is also exact for the first definition if H is measured from a point that is one focal length in front of the front principal plane. For practical purposes, there is little difference between the first and second definitions.
Derivation using geometric optics
The following derivations refer to the accompanying figures. For clarity, half the aperture and circle of confusion are indicated.
Definition 1
An object at distance forms a sharp image at distance (blue line). Here, objects at infinity have images with a circle of confusion indicated by the brown ellipse where the upper red ray through the focal point intersects the blue line.
First using similar triangles hatched in green,
Then using similar triangles dotted in purple,
as found above.
Definition 2
Objects at infinity form sharp images at the focal length (blue line). Here, an object at forms an image with a circle of confusion indicated by the brown ellipse where the lower red ray converging to its sharp image intersects the blue line.
Using similar triangles shaded in yellow, H/D = f/c, and therefore H = fD/c = f²/(Nc).
Example
As an example, for a lens at using a circle of confusion of , which is a value typically used in photography, the hyperfocal distance according to Definition 1 is
If the lens is focused at a distance of , then everything from half that distance () to infinity will be acceptably sharp in our photograph. With the formula for the Definition 2, the result is , a difference of 0.5%.
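A short numerical sketch of the two formulas in Python, using illustrative values (a 50 mm lens at f/8 with a 0.03 mm circle of confusion) chosen here for demonstration rather than taken from the example above:

# Hyperfocal distance under both definitions (all lengths in millimetres; values assumed).
f = 50.0   # focal length
N = 8.0    # f-number
c = 0.03   # circle of confusion limit, a value typically used in photography

H1 = f**2 / (N * c) + f   # Definition 1
H2 = f**2 / (N * c)       # Definition 2

print(f"Definition 1: {H1 / 1000:.2f} m")             # about 10.47 m
print(f"Definition 2: {H2 / 1000:.2f} m")             # about 10.42 m
print(f"Relative difference: {(H1 - H2) / H1:.2%}")   # roughly 0.5%, i.e. one focal length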
Consecutive depths of field
The hyperfocal distance has a curious property: while a lens focused at H will hold a depth of field from H/2 to infinity, if the lens is focused to H/2, the depth of field will extend from H/3 to H; if the lens is then focused to H/3, the depth of field will extend from H/4 to H/2. This continues on through all successive values of the hyperfocal distance divided by consecutive whole numbers (the harmonic series 1, 1/2, 1/3, ...). That is, focusing at H/n will cause the depth of field to extend from H/(n + 1) to H/(n − 1).
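This pattern can be checked numerically with the simple near and far depth-of-field approximations (near = H·s/(H + s), far = H·s/(H − s)); the following sketch assumes those approximations and an arbitrary hyperfocal distance of 10 m.

# Consecutive depths of field, using the simple near/far approximations.
H = 10.0   # hyperfocal distance in metres (arbitrary illustrative value)

def dof_limits(s):
    near = H * s / (H + s)
    far = float("inf") if s >= H else H * s / (H - s)
    return near, far

for n in range(1, 5):
    near, far = dof_limits(H / n)
    far_text = "infinity" if far == float("inf") else f"{far:.2f} m"
    print(f"focused at H/{n}: sharp from {near:.2f} m (= H/{n + 1}) to {far_text}")
# focused at H -> H/2 to infinity; H/2 -> H/3 to H; H/3 -> H/4 to H/2; ...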
C. Welborne Piper calls this phenomenon "consecutive depths of field" and shows how to test the idea easily. This is also among the earliest of publications to use the word hyperfocal.
History
The concepts of the two definitions of hyperfocal distance have a long history, tied up with the terminology for depth of field, depth of focus, circle of confusion, etc. Here are some selected early quotations and interpretations on the topic.
Sutton and Dawson 1867
Thomas Sutton and George Dawson define focal range for what we now call hyperfocal distance:
Their focal range is about 1000 times their aperture diameter, so it makes sense as a hyperfocal distance with a CoC value of f/1000, or of the image format diagonal times 1/1000 assuming the lens is a "normal" lens. What is not clear, however, is whether the focal range they cite was computed, or empirical.
Abney 1881
Sir William de Wivelesley Abney says:
That is, is the reciprocal of what we now call the f-number, and the answer is evidently in meters. His 0.41 should obviously be 0.40. Based on his formulae, and on the notion that the aperture ratio should be kept fixed in comparisons across formats, Abney says:
Taylor 1892
John Traill Taylor recalls this word formula for a sort of hyperfocal distance:
This formula implies a stricter CoC criterion than we typically use today.
Hodges 1895
John Hodges discusses depth of field without formulas but with some of these relationships:
This "mathematically" observed relationship implies that he had a formula at hand, and a parameterization with the f-number or "intensity ratio" in it. To get an inverse-square relation to focal length, you have to assume that the CoC limit is fixed and the aperture diameter scales with the focal length, giving a constant f-number.
Piper 1901
C. Welborne Piper may be the first to have published a clear distinction between Depth of Field in the modern sense and Depth of Definition in the focal plane, and implies that Depth of Focus and Depth of Distance are sometimes used for the former (in modern usage, Depth of Focus is usually reserved for the latter). He uses the term Depth Constant for , and measures it from the front principal focus (i. e., he counts one focal length less than the distance from the lens to get the simpler formula), and even introduces the modern term:
It is unclear what distinction he means. Adjacent to Table I in his appendix, he further notes:
At this point we do not have evidence of the term hyperfocal before Piper, nor the hyphenated hyper-focal which he also used, but he obviously did not claim to coin this descriptor himself.
Derr 1906
Louis Derr may be the first to clearly specify the first definition, which is considered to be the strictly correct one in modern times, and to derive the formula corresponding to it. Using for hyperfocal distance, for aperture diameter, for the diameter that a circle of confusion shall not exceed, and for focal length, he derives:
As the aperture diameter is the ratio of the focal length to the numerical aperture (the f-number), and the diameter of the circle of confusion corresponds to the limit c above, this gives the equation for the first definition above.
Johnson 1909
George Lindsay Johnson uses the term Depth of Field for what Abney called Depth of Focus, and Depth of Focus in the modern sense (possibly for the first time), as the allowable distance error in the focal plane. His definitions include hyperfocal distance:
His drawing makes it clear that his is the radius of the circle of confusion. He has clearly anticipated the need to tie it to format size or enlargement, but has not given a general scheme for choosing it.
Johnson's use of former and latter seem to be swapped; perhaps former was here meant to refer to the immediately preceding section title Depth of Focus, and latter to the current section title Depth of Field. Except for an obvious factor-of-2 error in using the ratio of stop diameter to CoC radius, this definition is the same as Abney's hyperfocal distance.
Others, early twentieth century
The term hyperfocal distance also appears in Cassell's Cyclopaedia of 1911, The Sinclair Handbook of Photography of 1913, and Bayley's The Complete Photographer of 1914.
Kingslake 1951
Rudolf Kingslake is explicit about the two meanings:
Kingslake uses the simplest formulae for DOF near and far distances, which has the effect of making the two different definitions of hyperfocal distance give identical values.
See also
Circle of confusion
Deep focus
References
External links
http://www.dofmaster.com/dofjs.html to calculate hyperfocal distance and depth of field
Length
Science of photography | Hyperfocal distance | [
"Physics",
"Mathematics"
] | 1,987 | [
"Scalar physical quantities",
"Physical quantities",
"Distance",
"Quantity",
"Size",
"Length",
"Wikipedia categories named after physical quantities"
] |
914,098 | https://en.wikipedia.org/wiki/Amplicon | In molecular biology, an amplicon is a piece of DNA or RNA that is the source and/or product of amplification or replication events. It can be formed artificially, using various methods including polymerase chain reactions (PCR) or ligase chain reactions (LCR), or naturally through gene duplication. In this context, amplification refers to the production of one or more copies of a genetic fragment or target sequence, specifically the amplicon. As it refers to the product of an amplification reaction, amplicon is used interchangeably with common laboratory terms, such as "PCR product."
Artificial amplification is used in research, forensics, and medicine for purposes that include detection and quantification of infectious agents, identification of human remains, and extracting genotypes from human hair.
Natural gene duplication plays a major role in evolution. It is also implicated in several forms of human cancer including primary mediastinal B cell lymphoma and Hodgkin's lymphoma. In this context the term amplicon can refer both to a section of chromosomal DNA that has been excised, amplified, and reinserted elsewhere in the genome, and to a fragment of extrachromosomal DNA known as a double minute, each of which can be composed of one or more genes. Amplification of the genes encoded by these amplicons generally increases transcription of those genes and ultimately the volume of associated proteins.
Structure
Amplicons in general are direct repeat (head-to-tail) or inverted repeat (head-to-head or tail-to-tail) genetic sequences, and can be either linear or circular in structure. Circular amplicons consist of imperfect inverted duplications annealed into a circle and are thought to arise from precursor linear amplicons.
During artificial amplification, amplicon length is dictated by the experimental goals.
Technology
Analysis of amplicons has been made possible by the development of amplification methods such as PCR, and increasingly by cheaper and more high-throughput technologies for DNA sequencing or next-generation sequencing, such as ion semiconductor sequencing, popularly referred to as the brand of the developer, Ion Torrent.
DNA sequencing technologies such as next-generation sequencing have made it possible to study amplicons in genome biology and genetics, including cancer genetics research, phylogenetic research, and human genetics. For example, using the 16S rRNA gene, which is part of every bacterial and archaeal genome and is highly conserved, bacteria can be taxonomically classified by comparison of the amplicon sequence to known sequences. This works similarly in the fungal domain with the 18S rRNA gene as well as the ITS1 non-coding region.
Irrespective of the approach used to amplify the amplicons, some technique must be used to quantitate the amplified product. Generally, these techniques incorporate a capture step and a detection step, although how these steps are incorporated depends on the individual assay.
Examples include the Amplicor HIV-1 Monitor Assay (RT-PCR), which has the capacity to recognize HIV in plasma; the HIV-1 QT (NASBA), which is used to measure plasma viral load by amplifying a segment of the HIV RNA; and transcription mediated amplification, which employs a hybridization protection assay to distinguish Chlamydia trachomatis infections. Various detection and capture steps are involved in each approach to assess the amplification product, or amplicon. With amplicon sequencing, the many different amplicons resulting from amplification of a typical sample are concatenated and sequenced. After quality control, the reads are classified by various methods, and the counts of identical taxa represent their relative abundance in the sample.
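As a sketch of that final counting step, the following Python snippet tallies relative abundances from a list of already-classified amplicon reads; the taxon labels are invented for illustration.

from collections import Counter

# Hypothetical taxonomic assignments, one per quality-filtered amplicon read.
reads = ["Escherichia", "Bacillus", "Escherichia", "Pseudomonas", "Escherichia"]

counts = Counter(reads)
total = sum(counts.values())
for taxon, n in counts.most_common():
    print(f"{taxon}: {n} reads ({n / total:.0%} relative abundance)")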
Applications
PCR can be used to determine sex from a human DNA sample. The loci of Alu element insertion are selected, amplified and evaluated in terms of fragment size. The sex assay utilizes AluSTXa for the X chromosome, AluSTYa for the Y chromosome, or both AluSTXa and AluSTYa, to reduce the possibility of error to a negligible level. A chromosome carrying the insertion yields a larger fragment when the homologous region is amplified. Males are distinguished by the presence of two DNA amplicons, while females show only a single amplicon. A kit adapted for carrying out the method includes a pair of primers to amplify the locus and, optionally, polymerase chain reaction reagents.
LCR can be used to diagnose tuberculosis. The sequence containing protein antigen B is targeted by four oligonucleotide primers—two for the sense strand, and two for the antisense strand. The primers bind adjacent to one another, forming a segment of double stranded DNA that once separated, can serve as a target for future rounds of replication. In this instance, the product can be detected via the microparticle enzyme immunoassay (MEIA).
See also
DNA polymerase
High Resolution Melt
Melting curve analysis
Molecular biology
References
Further reading
External links
DNA
Genetics techniques
Molecular biology
Biotechnology
Polymerase chain reaction
Laboratory techniques
Molecular biology techniques | Amplicon | [
"Chemistry",
"Engineering",
"Biology"
] | 1,071 | [
"Biochemistry methods",
"Genetics techniques",
"Polymerase chain reaction",
"Genetic engineering",
"Biotechnology",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Biochemistry"
] |
914,462 | https://en.wikipedia.org/wiki/Engine-indicating%20and%20crew-alerting%20system | An engine-indicating and crew-alerting system (EICAS) is an integrated system used in modern aircraft to provide aircraft flight crew with instrumentation and crew annunciations for aircraft engines and other systems. On EICAS equipped aircraft the "recommended remedial action" is called a checklist.
Components
EICAS typically includes instrumentation of various engine parameters, such as speed of rotation, temperature values (including exhaust gas temperature), fuel flow and quantity, and oil pressure. Other aircraft systems typically monitored by EICAS include, for example, the hydraulic, pneumatic, electrical, deicing, environmental and control surface systems. EICAS is highly interconnected with these systems and provides data acquisition and routing.
EICAS is a key function of a glass cockpit system, which replaces all analog gauges with software-driven electronic displays. Most of the display area is used for navigation and orientation displays, but one display or a section of a display is set aside specifically for EICAS.
The crew-alerting system (CAS) is used in place of the annunciator panel on older systems. Rather than signaling a system failure by turning on a light behind a translucent button, failures are shown as a list of messages in a small window near the other EICAS indications.
Alternative systems
Some alternatives are:
Electronic Centralised Aircraft Monitor (ECAM) on Airbus
Centralized Fault Detection System (CFDS) on McDonnell Douglas
Flight Warning System (FWS) on Fokker
Engine Warning Display (EWD) on ATR
Комплексная система электронной индикации и сигнализации (КСЭИС; "integrated electronic indication and alerting system") on Antonov.
Presence
The system is called EICAS at least on the following aircraft:
Jetliners
The first Boeing airliner with EICAS was the Boeing 757. The Boeing 747 has had EICAS since the 747-400. No version of the Boeing 737 has EICAS. The Boeing 717 has CFDS, as it was originally a McDonnell Douglas product.
The Embraer ERJ family and the Embraer E-Jet family have EICAS.
The Bombardier CRJ and the Bombardier CSeries have EICAS.
The Fairchild-Dornier 328JET has EICAS.
The COMAC ARJ21 and the COMAC C919 have EICAS.
Turboprop airliners
The Saab 2000 has EICAS.
The Dornier 328 and the Dornier 228NG have EICAS.
The Xi'an MA60 and the Xi'an MA600 have EICAS.
Limitations
On some Bombardier aircraft, it is possible to call up the wrong checklist. Messages forbidding take-off can be shown as advisories.
The 757, 767, and 747-400 have no electronic checklists.
The ERJ and the E-Jets have no electronic checklists.
The CRJ have no electronic checklists.
The Do-328 and the Do-328JET have no electronic checklists.
The Saab 2000 has no electronic checklists.
Gallery
See also
Electronic centralised aircraft monitor, a similar system by Airbus
References
External links
Astronautics Corporation of America EICAS displays (Archive)
Aircraft instruments
Aircraft components
Glass cockpit | Engine-indicating and crew-alerting system | [
"Technology",
"Engineering"
] | 682 | [
"Glass cockpit",
"Aircraft instruments",
"Measuring instruments"
] |
914,572 | https://en.wikipedia.org/wiki/Excipient | An excipient is a substance formulated alongside the active ingredient of a medication. They may be used to enhance the active ingredient’s therapeutic properties; to facilitate drug absorption; to reduce viscosity; to enhance solubility; to improve long-term stabilization (preventing denaturation and aggregation during the expected shelf life); or to add bulk to solid formulations that have small amounts of potent active ingredients (in that context, they are often referred to as "bulking agents", "fillers", or "diluents"). During the manufacturing process, excipients can improve the handling of active substances and facilitate powder flow. The choice of excipients depends on factors such as the intended route of administration, the dosage form, and compatibility with the active ingredient.
Virtually all marketed drugs contain excipients, and final drug formulations commonly contain more excipient than active ingredient. Pharmaceutical regulations and standards mandate the identification and safety assessment of all ingredients in drugs, including their chemical decomposition products. Novel excipients can sometimes be patented, or the specific formulation can be kept as a trade secret to prevent competitors from duplicating it through reverse engineering.
Relative versus absolute inactivity
Though excipients were at one time assumed to be "inactive" ingredients, it is now understood that they can sometimes be "a key determinant of dosage form performance"; in other words, their effects on pharmacodynamics and pharmacokinetics, although usually negligible, cannot be known to be negligible without empirical confirmation and sometimes are important. For that reason, in basic research and clinical trials they are sometimes included in the control substances in order to minimize confounding, reflecting that otherwise, the absence of the active ingredient would not be the only variable involved, because absence of excipient cannot always be assumed not to be a variable. Such studies are called excipient-controlled or vehicle-controlled studies.
Types
Adjuvants
Adjuvants are added to vaccines to enhance or modify the immune system response to an immunization. An adjuvant may stimulate the immune system to respond more vigorously to a vaccine, which leads to more robust immunity in the recipient.
Antiadherents
Antiadherents reduce the adhesion between the powder (granules) and the punch faces and thus prevent sticking to tablet punches by offering a non-stick surface. They are also used to help protect tablets from sticking. The most commonly used is magnesium stearate.
Binders
Binders hold the ingredients in a tablet together. Binders ensure that tablets and granules can be formed with required mechanical strength, and give volume to low active dose tablets. Binders are usually:
Saccharides and their derivatives:
Disaccharides: sucrose, lactose;
Polysaccharides and their derivatives: starches, cellulose or modified cellulose such as microcrystalline cellulose and cellulose ethers such as hydroxypropyl cellulose (HPC);
Sugar alcohols such as xylitol, sorbitol or mannitol;
Protein: gelatin;
Synthetic polymers: polyvinylpyrrolidone (PVP), polyethylene glycol (PEG)...
Binders are classified according to their application:
Solution binders are dissolved in a solvent (for example water or alcohol can be used in wet granulation processes). Examples include gelatin, cellulose, cellulose derivatives, polyvinylpyrrolidone, starch, sucrose and polyethylene glycol.
Dry binders are added to the powder blend, either after a wet granulation step, or as part of a direct powder compression (DC) formula. Examples include cellulose, methyl cellulose, polyvinylpyrrolidone and polyethylene glycol.
Coatings
Tablet coatings protect tablet ingredients from deterioration by moisture in the air and make large or unpleasant-tasting tablets easier to swallow. For most coated tablets, a cellulose ether hydroxypropyl methylcellulose (HPMC) film coating is used which is free of sugar and potential allergens. Occasionally, other coating materials are used, for example synthetic polymers, shellac, corn protein zein or other polysaccharides. Capsules are coated with gelatin.
Enterics control the rate of drug release and determine where the drug will be released in the digestive tract. Materials used for enteric coatings include fatty acids, waxes, shellac, plastics, and plant fibers.
Colours
Colours are added to improve the appearance of a formulation. Colour consistency is important as it allows easy identification of a medication. Furthermore, colours often improve the aesthetic look and feel of medications. Small amounts of colouring agents are easily processed by the body, although rare reactions are known, notably to tartrazine. Commonly, titanium oxide is used as a colouring agent to produce the popular opaque colours, along with azo dyes for other colours. Improving these organoleptic properties makes patients, especially children, more likely to adhere to their medication schedule, which in turn improves therapeutic outcomes.
Disintegrants
Disintegrants expand and dissolve when wet causing the tablet to break apart in the digestive tract, or in specific segments of the digestion process, releasing the active ingredients for absorption. They ensure that when the tablet is in contact with water, it rapidly breaks down into smaller fragments, facilitating dissolution.
Examples of disintegrants include:
Crosslinked polymers: crosslinked polyvinylpyrrolidone (crospovidone), crosslinked sodium carboxymethyl cellulose (croscarmellose sodium).
sodium starch glycolate, a modified starch
Flavours
Flavours can be used to mask unpleasant tasting active ingredients and improve the acceptance that the patient will complete a course of medication. Flavourings may be natural (e.g. fruit extract) or artificial.
For example, to improve:
a bitter product–mint, cherry or anise may be used
a salty product–peach, apricot or liquorice may be used
a sour product–raspberry or liquorice may be used
an excessively sweet product–vanilla may be used
Glidants
Glidants are used to promote powder flow by reducing interparticle friction and cohesion. These are used in combination with lubricants as they have no ability to reduce wall friction. Examples include silica gel, fumed silica, talc, and magnesium carbonate. However, some silica gel glidants such as Syloid(R) 244 FP and Syloid(R) XDP are multi-functional and offer several other performance benefits in addition to reducing interparticle friction including moisture resistance, taste, marketing, etc.
Lubricants
Lubricants prevent ingredients from clumping together and from sticking to the tablet punches or capsule filling machine. Lubricants also ensure that tablet formation and ejection can occur with low friction between the solid and die wall.
Common minerals like talc or silica, and fats, e.g. vegetable stearin, magnesium stearate or stearic acid, are the most frequently used lubricants in tablets or hard gelatin capsules. Lubricants are agents added in small quantities to tablet and capsule formulations to improve certain processing characteristics. While lubricants are often added to improve the manufacturability of drug products, they may also negatively impact product quality. For example, extended mixing of lubricants during blending may result in delayed dissolution and softer tablets, which is often referred to as "over-lubrication". Therefore, optimizing lubrication time is critical during pharmaceutical development.
There are three roles identified with lubricants as follows:
True lubricant role:
To decrease friction at the interface between a tablet’s surface and the die wall during ejection and reduce wear on punches and dies.
Anti-adherent role:
Prevent sticking to punch faces or, in the case of encapsulation, to the capsule-filling machinery.
Prevent sticking to machine dosators, tamping pins, etc.
Glidant role:
Enhance product flow by reducing interparticulate friction.
There are two major types of lubricants:
Hydrophilic
Generally poor lubricants, no glidant or anti-adherent properties.
Hydrophobic
Most widely used lubricants in use today are of the hydrophobic category. Hydrophobic lubricants are generally good lubricants and are usually effective at relatively low concentrations. Many also have both anti-adherent and glidant properties. For these reasons, hydrophobic lubricants are used much more frequently than hydrophilic compounds. Examples include magnesium stearate.
Preservatives
Some typical preservatives used in pharmaceutical formulations are
Antioxidants like vitamin A, vitamin E, vitamin C, retinyl palmitate, and selenium
The amino acids cysteine and methionine
Citric acid and sodium citrate
Synthetic preservatives like the parabens: methyl paraben and propyl paraben.
Sorbents
Sorbents are used for tablet/capsule moisture-proofing by limited fluid sorbing (taking up of a liquid or a gas either by adsorption or by absorption) in a dry state. For example, desiccants absorb water, drying out (desiccating) the surrounding materials.
Sweeteners
Sweeteners are added to make the ingredients more palatable, especially in chewable tablets such as antacid or liquids like cough syrup. Sugar can be used to mask unpleasant tastes or smells, but artificial sweeteners tend to be preferred, as natural ones tend to cause tooth decay.
Vehicles
In liquid and gel formulations, the bulk excipient that serves as a medium for conveying the active ingredient is usually called the vehicle. Petrolatum, dimethyl sulfoxide and mineral oil are common vehicles.
See also
Active ingredient
Pharmaceutics
Pharmacology
Placebo
Placebo effect
Quality system
References
External links
FDA database for Inactive Ingredient Search for Approved Drug Products
Excipient selection for injectable / parenteral formulations
IPEC-Americas
UCSF-CERSI Excipient Browser
Drug manufacturing
Pharmacy | Excipient | [
"Chemistry"
] | 2,178 | [
"Pharmacology",
"Pharmacy"
] |
21,389,230 | https://en.wikipedia.org/wiki/Standard%20flowgram%20format | Standard flowgram format (SFF) is a binary file format used to encode results of pyrosequencing from the 454 Life Sciences platform for high-throughput sequencing. SFF files can be viewed, edited and converted with DNA Baser SFF Workbench (graphic tool), or converted to FASTQ format with sff2fastq or seq_crumbs.
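As one possible way to perform such a conversion programmatically (using Biopython rather than the tools named above, and assuming a Biopython release with SFF support; the file names are placeholders):

from Bio import SeqIO   # requires a Biopython release with SFF support

# "reads.sff" and "reads.fastq" are placeholder file names.
count = SeqIO.convert("reads.sff", "sff", "reads.fastq", "fastq")
print(f"Converted {count} reads")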
Further reading
NCBI reference for SFF format
Biotechnology companies of the United States
Computer file formats
DNA sequencing | Standard flowgram format | [
"Chemistry",
"Biology"
] | 105 | [
"Bioinformatics stubs",
"Biotechnology stubs",
"Biochemistry stubs",
"Bioinformatics",
"Molecular biology techniques",
"DNA sequencing"
] |
21,391,368 | https://en.wikipedia.org/wiki/Prismatic%20joint | A prismatic joint is a one-degree-of-freedom kinematic pair which constrains the motion of two bodies to sliding along a common axis, without rotation; for this reason it is often called a slider (as in the slider-crank linkage) or a sliding pair. They are often utilized in hydraulic and pneumatic cylinders.
A prismatic joint can be formed with a polygonal cross-section to resist rotation. Examples of this include the dovetail joint and linear bearings.
See also
Cylindrical joint
Degrees of freedom (mechanics)
Kinematic pair
Kinematics
Mechanical joint
Revolute joint
References
Kinematics
Rigid bodies | Prismatic joint | [
"Physics",
"Technology"
] | 135 | [
"Machines",
"Kinematics",
"Physical phenomena",
"Classical mechanics stubs",
"Classical mechanics",
"Physical systems",
"Motion (physics)",
"Mechanics"
] |
21,391,441 | https://en.wikipedia.org/wiki/Amott%20test | The Amott test is one of the most widely used empirical wettability measurements for reservoir cores in petroleum engineering. The method combines two spontaneous imbibition measurements and two forced displacement measurements. This test defines two different indices: the Amott water index (I_w) and the Amott oil index (I_o).
Amott–Harvey index
The two Amott indices are often combined to give the Amott–Harvey index. It is a number between −1 and 1 describing wettability of a rock in drainage processes. It is defined as the difference between the water and oil indices: I_AH = I_w − I_o.
These two indices are obtained from special core analysis (SCAL) experiments (porous plate or centrifuge) by plotting the capillary pressure curve as a function of the water saturation as shown on figure 1:
The water index I_w is obtained from the water saturation at zero capillary pressure during the imbibition process, the irreducible water saturation and the residual oil saturation after imbibition.
The oil index I_o is obtained from the oil saturation at zero capillary pressure during the secondary drainage process, the irreducible water saturation and the residual non-wetting phase saturation after imbibition.
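A minimal sketch of the computation in Python, assuming the standard saturation-based forms of the two indices; the formulas and the sample saturation values below are illustrative assumptions rather than values read from the figure referred to above.

# Amott-Harvey wettability index from saturation data (assumed standard formulas;
# all saturation values are illustrative).
S_wi = 0.20        # irreducible water saturation
S_or = 0.25        # residual oil saturation
S_w0_imb = 0.45    # water saturation at zero capillary pressure during imbibition
S_w0_drain = 0.60  # water saturation at zero capillary pressure during secondary drainage

mobile = 1.0 - S_or - S_wi                    # saturation range swept between the end points
I_w = (S_w0_imb - S_wi) / mobile              # Amott water index
I_o = ((1.0 - S_or) - S_w0_drain) / mobile    # Amott oil index
I_AH = I_w - I_o                              # Amott-Harvey index, between -1 and 1
print(f"I_w = {I_w:.2f}, I_o = {I_o:.2f}, I_AH = {I_AH:.2f}")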
A rock is defined as:
See also
Capillary pressure
Imbibition
Leverett J-function
Multiphase flow
Relative permeability
TEM-function
Rise in core – An alternate Reservoir Wettability Characterization Method
Lak wettability index
USBM wettability index
References
Dake, L.P., "Fundamentals of Reservoir Engineering", Elsevier Scientific Publishing Company, Amsterdam, 1977.
Amott, E., "Observations relating to the wettability of porous rock", Trans. AIME 219, pp. 156–162, 1959.
Petroleum engineering
External links
Fluid flow in porous media : facts
Fundamentals of Wettability
Multi-phase Saturated Rock Properties: Wettability: Laboratory Determination | Amott test | [
"Chemistry",
"Engineering"
] | 387 | [
"Petroleum engineering",
"Energy engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
21,391,464 | https://en.wikipedia.org/wiki/Topos | In mathematics, a topos (plural topoi or toposes) is a category that behaves like the category of sheaves of sets on a topological space (or more generally: on a site). Topoi behave much like the category of sets and possess a notion of localization; they are a direct generalization of point-set topology. The Grothendieck topoi find applications in algebraic geometry; the more general elementary topoi are used in logic.
The mathematical field that studies topoi is called topos theory.
Grothendieck topos (topos in geometry)
Since the introduction of sheaves into mathematics in the 1940s, a major theme has been to study a space by studying sheaves on a space. This idea was expounded by Alexander Grothendieck by introducing the notion of a "topos". The main utility of this notion is in the abundance of situations in mathematics where topological heuristics are very effective, but an honest topological space is lacking; it is sometimes possible to find a topos formalizing the heuristic. An important example of this programmatic idea is the étale topos of a scheme. Another illustration of the capability of Grothendieck topoi to incarnate the “essence” of different mathematical situations is given by their use as "bridges" for connecting theories which, albeit written in possibly very different languages, share a common mathematical content.
Equivalent definitions
A Grothendieck topos is a category C which satisfies any one of the following three properties. (A theorem of Jean Giraud states that the properties below are all equivalent.)
There is a small category D and an inclusion C ⊆ Presh(D) that admits a finite-limit-preserving left adjoint.
C is the category of sheaves on a Grothendieck site.
C satisfies Giraud's axioms, below.
Here Presh(D) denotes the category of contravariant functors from D to the category of sets; such a contravariant functor is frequently called a presheaf.
Giraud's axioms
Giraud's axioms for a category C are:
C has a small set of generators, and admits all small colimits. Furthermore, fiber products distribute over coproducts; that is, given a set I, an I-indexed coproduct ∐_i B_i mapping to A, and a morphism A′ → A, the pullback is an I-indexed coproduct of the pullbacks: (∐_i B_i) ×_A A′ ≅ ∐_i (B_i ×_A A′).
Sums in C are disjoint. In other words, the fiber product of X and Y over their sum is the initial object in C.
All equivalence relations in are effective.
The last axiom needs the most explanation. If X is an object of C, an "equivalence relation" R on X is a map R → X × X in C
such that for any object Y in C, the induced map Hom(Y, R) → Hom(Y, X) × Hom(Y, X) gives an ordinary equivalence relation on the set Hom(Y, X). Since C has colimits we may form the coequalizer of the two maps R → X; call this X/R. The equivalence relation is "effective" if the canonical map
is an isomorphism.
Examples
Giraud's theorem already gives "sheaves on sites" as a complete list of examples. Note, however, that nonequivalent sites often give
rise to equivalent topoi. As indicated in the introduction, sheaves on ordinary topological spaces motivate many of the basic definitions and results of topos theory.
Category of sets and G-sets
The category of sets is an important special case: it plays the role of a point in topos theory. Indeed, a set may be thought of as a sheaf on a point since functors on the singleton category with a single object and only the identity morphism are just specific sets in the category of sets.
Similarly, there is a topos for any group G which is equivalent to the category of G-sets. We construct this as the category of presheaves on the category with one object, but now the set of morphisms is given by the group G. Since any functor must give a G-action on the target, this gives the category of G-sets. Similarly, for a groupoid G the category of presheaves on G gives a collection of sets indexed by the set of objects in G, and the automorphisms of an object in G have an action on the target of the functor.
Topoi from ringed spaces
More exotic examples, and the raison d'être of topos theory, come from algebraic geometry. The basic example of a topos comes from the Zariski topos of a scheme. For each scheme there is a site (of objects given by open subsets and morphisms given by inclusions) whose category of presheaves forms the Zariski topos . But once distinguished classes of morphisms are considered, there are multiple generalizations of this which leads to non-trivial mathematics. Moreover, topoi give the foundations for studying schemes purely as functors on the category of algebras.
To a scheme and even a stack one may associate an étale topos, an fppf topos, or a Nisnevich topos. Another important example of a topos is from the crystalline site. In the case of the étale topos, these form the foundational objects of study in anabelian geometry, which studies objects in algebraic geometry that are determined entirely by the structure of their étale fundamental group.
Pathologies
Topos theory is, in some sense, a generalization of classical point-set topology. One should therefore expect to see old and new instances of pathological behavior. For instance, there is an example due to Pierre Deligne of a nontrivial topos that has no points (see below for the definition of points of a topos).
Geometric morphisms
If X and Y are topoi, a geometric morphism u: X → Y is a pair of adjoint functors (u^∗, u_∗) (where u^∗ : Y → X is left adjoint to u_∗ : X → Y) such that u^∗ preserves finite limits. Note that u^∗ automatically preserves colimits by virtue of having a right adjoint.
By Freyd's adjoint functor theorem, to give a geometric morphism X → Y is to give a functor u^∗ : Y → X that preserves finite limits and all small colimits. Thus geometric morphisms between topoi may be seen as analogues of maps of locales.
If X and Y are topological spaces and f is a continuous map between them, then the pullback and pushforward operations on sheaves yield a geometric morphism between the associated topoi of sheaves on X and Y.
Points of topoi
A point of a topos X is defined as a geometric morphism from the topos of sets to X.
If X is an ordinary space and x is a point of X, then the functor that takes a sheaf F to its stalk Fx has a right adjoint
(the "skyscraper sheaf" functor), so an ordinary point of X also determines a topos-theoretic point. These may be constructed as the pullback-pushforward along the continuous map x: 1 → X.
For the étale topos of a space X, a point is a bit more refined an object. Given a point x of the underlying scheme, a point of the topos is then given by a separable field extension K of the residue field k(x) such that the associated map Spec(K) → X factors through the original point x. Then, the factorization map Spec(K) → Spec(k(x)) is an étale morphism of schemes.
More precisely, those are the global points. They are not adequate in themselves for displaying the space-like aspect of a topos, because a non-trivial topos may fail to have any. Generalized points are geometric morphisms from a topos Y (the stage of definition) to X. There are enough of these to display the space-like aspect. For example, if X is the classifying topos S[T] for a geometric theory T, then the universal property says that its points are the models of T (in any stage of definition Y).
Essential geometric morphisms
A geometric morphism (u^∗, u_∗) is essential if u^∗ has a further left adjoint u_!, or equivalently (by the adjoint functor theorem) if u^∗ preserves not only finite but all small limits.
Ringed topoi
A ringed topos is a pair (X,R), where X is a topos and R is a commutative ring object in X. Most of the constructions of ringed spaces go through for ringed topoi. The category of R-module objects in X is an abelian category with enough injectives. A more useful abelian category is the subcategory of quasi-coherent R-modules: these are R-modules that admit a presentation.
Another important class of ringed topoi, besides ringed spaces, are the étale topoi of Deligne–Mumford stacks.
Homotopy theory of topoi
Michael Artin and Barry Mazur associated to the site underlying a topos a pro-simplicial set (up to homotopy). (It's better to consider it in Ho(pro-SS); see Edwards) Using this inverse system of simplicial sets one may sometimes associate to a homotopy invariant in classical topology an inverse system of invariants in topos theory. The study of the pro-simplicial set associated to the étale topos of a scheme is called étale homotopy theory. In good cases (if the scheme is Noetherian and geometrically unibranch), this pro-simplicial set is pro-finite.
Elementary topoi (topoi in logic)
Introduction
Since the early 20th century, the predominant axiomatic foundation of mathematics has been set theory, in which all mathematical objects are ultimately represented by sets (including functions, which map between sets). More recent work in category theory allows this foundation to be generalized using topoi; each topos completely defines its own mathematical framework. The category of sets forms a familiar topos, and working within this topos is equivalent to using traditional set-theoretic mathematics. But one could instead choose to work with many alternative topoi. A standard formulation of the axiom of choice makes sense in any topos, and there are topoi in which it is invalid. Constructivists will be interested to work in a topos without the law of excluded middle. If symmetry under a particular group G is of importance, one can use the topos consisting of all G-sets.
It is also possible to encode an algebraic theory, such as the theory of groups, as a topos, in the form of a classifying topos. The individual models of the theory, i.e. the groups in our example, then correspond to functors from the encoding topos to the category of sets that respect the topos structure.
Formal definition
When used for foundational work a topos will be defined axiomatically; set theory is then treated as a special case of topos theory. Building from category theory, there are multiple equivalent definitions of a topos. The following has the virtue of being concise:
A topos is a category that has the following two properties:
All limits taken over finite index categories exist.
Every object has a power object. This plays the role of the powerset in set theory.
Formally, a power object of an object X is a pair (PX, ∋_X) with ∋_X ⊆ PX × X, which classifies relations, in the following sense.
First note that for every object I, a morphism r: I → PX ("a family of subsets") induces a subobject {(i, x) | x ∈ r(i)} ⊆ I × X. Formally, this is defined by pulling back ∋_X along r × 1_X: I × X → PX × X. The universal property of a power object is that every relation arises in this way, giving a bijective correspondence between relations R ⊆ I × X and morphisms r: I → PX.
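In the category of sets this universal property is concrete: the power object of X is its power set, and a relation R ⊆ I × X corresponds to the map sending each i to {x | (i, x) ∈ R}. A small Python sketch for finite sets (frozensets stand in for subsets):

from itertools import combinations

def power_object(X):
    """All subsets of a finite set X, playing the role of PX in the category of sets."""
    xs = list(X)
    return {frozenset(c) for k in range(len(xs) + 1) for c in combinations(xs, k)}

I = {1, 2}
X = {"a", "b"}
R = {(1, "a"), (1, "b"), (2, "b")}   # a relation R, i.e. a subset of I x X

# The corresponding morphism r: I -> PX sends each i to the set of elements it is related to.
r = {i: frozenset(x for (j, x) in R if j == i) for i in I}
assert all(r[i] in power_object(X) for i in I)

# Pulling the membership relation back along r recovers R.
assert {(i, x) for i in I for x in X if x in r[i]} == R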
From finite limits and power objects one can derive that
All colimits taken over finite index categories exist.
The category has a subobject classifier.
The category is Cartesian closed.
In some applications, the role of the subobject classifier is pivotal, whereas power objects are not. Thus some definitions reverse the roles of what is defined and what is derived.
Logical functors
A logical functor is a functor between topoi that preserves finite limits and power objects. Logical functors preserve the structures that topoi have. In particular, they preserve finite colimits, subobject classifiers, and exponential objects.
Explanation
A topos as defined above can be understood as a Cartesian closed category for which the notion of subobject of an object has an elementary or first-order definition. This notion, as a natural categorical abstraction of the notions of subset of a set, subgroup of a group, and more generally subalgebra of any algebraic structure, predates the notion of topos. It is definable in any category, not just topoi, in second-order language, i.e. in terms of classes of morphisms instead of individual morphisms, as follows. Given two monics m, n from respectively Y and Z to X, we say that m ≤ n when there exists a morphism p: Y → Z for which np = m, inducing a preorder on monics to X. When m ≤ n and n ≤ m we say that m and n are equivalent. The subobjects of X are the resulting equivalence classes of the monics to it.
In a topos "subobject" becomes, at least implicitly, a first-order notion, as follows.
As noted above, a topos is a category C having all finite limits and hence in particular the empty limit or final object 1. It is then natural to treat morphisms of the form x: 1 → X as elements x ∈ X. Morphisms f: X → Y thus correspond to functions mapping each element x ∈ X to the element fx ∈ Y, with application realized by composition.
One might then think to define a subobject of X as an equivalence class of monics m: X′ → X having the same image { mx | x ∈ X′ }. The catch is that two or more morphisms may correspond to the same function, that is, we cannot assume that C is concrete in the sense that the functor C(1,-): C → Set is faithful. For example the category Grph of graphs and their associated homomorphisms is a topos whose final object 1 is the graph with one vertex and one edge (a self-loop), but is not concrete because the elements 1 → G of a graph G correspond only to the self-loops and not the other edges, nor the vertices without self-loops. Whereas the second-order definition makes G and the subgraph of all self-loops of G (with their vertices) distinct subobjects of G (unless every edge is, and every vertex has, a self-loop), this image-based one does not. This can be addressed for the graph example and related examples via the Yoneda Lemma as described in the Further examples section below, but this then ceases to be first-order. Topoi provide a more abstract, general, and first-order solution.
As noted above, a topos C has a subobject classifier Ω, namely an object of C with an element t ∈ Ω, the generic subobject of C, having the property that every monic m: X′ → X arises as a pullback of the generic subobject along a unique morphism f: X → Ω, as per Figure 1. Now the pullback of a monic is a monic, and all elements including t are monics since there is only one morphism to 1 from any given object, whence the pullback of t along f: X → Ω is a monic. The monics to X are therefore in bijection with the pullbacks of t along morphisms from X to Ω. The latter morphisms partition the monics into equivalence classes each determined by a morphism f: X → Ω, the characteristic morphism of that class, which we take to be the subobject of X characterized or named by f.
All this applies to any topos, whether or not concrete. In the concrete case, namely C(1,-) faithful, for example the category of sets, the situation reduces to the familiar behavior of functions. Here the monics m: X′ → X are exactly the injections (one-one functions) from X′ to X, and those with a given image { mx | x ∈ X′ } constitute the subobject of X corresponding to the morphism f: X → Ω for which f−1(t) is that image. The monics of a subobject will in general have many domains, all of which however will be in bijection with each other.
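In Set this is just the familiar characteristic-function correspondence; the following small Python sketch spells it out for a finite example.

# Subobject classifier in the category of sets: Omega = {False, True}, with t = True.
X = {1, 2, 3, 4}
X_sub = {2, 4}                        # a subobject (subset) of X

chi = {x: (x in X_sub) for x in X}    # its characteristic morphism X -> Omega

# The subobject is recovered as the pullback of t = True along chi.
assert {x for x in X if chi[x]} == X_sub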
To summarize, this first-order notion of subobject classifier implicitly defines for a topos the same equivalence relation on monics to X as had previously been defined explicitly by the second-order notion of subobject for any category. The notion of equivalence relation on a class of morphisms is itself intrinsically second-order, which the definition of topos neatly sidesteps by explicitly defining only the notion of subobject classifier Ω, leaving the notion of subobject of X as an implicit consequence characterized (and hence namable) by its associated morphism f: X → Ω.
Further examples and non-examples
Every Grothendieck topos is an elementary topos, but the converse is not true (since every Grothendieck topos is cocomplete, which is not required from an elementary topos).
The categories of finite sets, of finite G-sets (actions of a group G on a finite set), and of finite graphs are elementary topoi that are not Grothendieck topoi.
If C is a small category, then the functor category Set^C (consisting of all covariant functors from C to sets, with natural transformations as morphisms) is a topos. For instance, the category Grph of graphs of the kind permitting multiple directed edges between two vertices is a topos. Such a graph consists of two sets, an edge set and a vertex set, and two functions s,t between those sets, assigning to every edge e its source s(e) and target t(e). Grph is thus equivalent to the functor category Set^C, where C is the category with two objects E and V and two morphisms s,t: E → V giving respectively the source and target of each edge.
The Yoneda lemma asserts that C^op embeds in Set^C as a full subcategory. In the graph example the embedding represents C^op as the subcategory of Set^C whose two objects are V' as the one-vertex no-edge graph and E' as the two-vertex one-edge graph (both as functors), and whose two nonidentity morphisms are the two graph homomorphisms from V' to E' (both as natural transformations). The natural transformations from V' to an arbitrary graph (functor) G constitute the vertices of G while those from E' to G constitute its edges. Although Set^C, which we can identify with Grph, is not made concrete by either V' or E' alone, the functor U: Grph → Set^2 sending object G to the pair of sets (Grph(V', G), Grph(E', G)) and morphism h: G → H to the pair of functions (Grph(V', h), Grph(E', h)) is faithful. That is, a morphism of graphs can be understood as a pair of functions, one mapping the vertices and the other the edges, with application still realized as composition but now with multiple sorts of generalized elements. This shows that the traditional concept of a concrete category as one whose objects have an underlying set can be generalized to cater for a wider range of topoi by allowing an object to have multiple underlying sets, that is, to be multisorted.
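Concretely, such a graph is nothing more than two sets with two functions between them, and a morphism of graphs is a pair of functions compatible with source and target; the following Python sketch checks that compatibility for a made-up example.

# A graph as two sets with source and target maps, and a graph homomorphism G -> H.
G = {"V": {"x", "y"}, "E": {"e1", "e2"},
     "s": {"e1": "x", "e2": "y"}, "t": {"e1": "y", "e2": "y"}}
H = {"V": {"u"}, "E": {"f"},
     "s": {"f": "u"}, "t": {"f": "u"}}

h_V = {"x": "u", "y": "u"}    # vertex map
h_E = {"e1": "f", "e2": "f"}  # edge map

# The two maps form a homomorphism precisely when they commute with s and t.
assert all(h_V[G["s"][e]] == H["s"][h_E[e]] and
           h_V[G["t"][e]] == H["t"][h_E[e]] for e in G["E"])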
The category of pointed sets with point-preserving functions is not a topos, since it doesn't have power objects: if P(A) were the power object of the pointed set A, and 1 denotes the pointed singleton, then there is only one point-preserving function r: 1 → P(A), but the relations in 1 × A are as numerous as the pointed subsets of A. The category of abelian groups is also not a topos, for a similar reason: every group homomorphism must map 0 to 0.
See also
History of topos theory
Homotopy hypothesis
Intuitionistic type theory
∞-topos
Quasitopos
Geometric logic
Generalized space
Notes
References
Some gentle papers
A gentle introduction.
Steven Vickers: "Toposes pour les nuls" and "Toposes pour les vraiment nuls." Elementary and even more elementary introductions to toposes as generalized spaces.
The following texts are easy-paced introductions to toposes and the basics of category theory. They should be suitable for those knowing little mathematical logic and set theory, even non-mathematicians.
An "introduction to categories for computer scientists, logicians, physicists, linguists, etc." (cited from cover text).
Introduces the foundations of mathematics from a categorical perspective.
Grothendieck foundational work on topoi:
Tome 2 270
The following monographs include an introduction to some or all of topos theory, but do not cater primarily to beginning students. Listed in (perceived) order of increasing difficulty.
A nice introduction to the basics of category theory, topos theory, and topos logic. Assumes very few prerequisites.
A good start. Available online at Robert Goldblatt's homepage.
Version available online at John Bell's homepage.
More complete, and more difficult to read.
(Online version). More concise than Sheaves in Geometry and Logic, but hard on beginners.
Reference works for experts, less suitable for first introduction
The third part of "Borceux' remarkable magnum opus", as Johnstone has labelled it. Still suitable as an introduction, though beginners may find it hard to recognize the most relevant results among the huge amount of material given.
For a long time the standard compendium on topos theory. However, even Johnstone describes this work as "far too hard to read, and not for the faint-hearted."
As of early 2010, two of the scheduled three volumes of this overwhelming compendium were available.
Books that target special applications of topos theory
Includes many interesting special applications.
Foundations of mathematics
Sheaf theory | Topos | [
"Mathematics"
] | 4,776 | [
"Mathematical structures",
"Foundations of mathematics",
"Sheaf theory",
"Topology",
"Category theory",
"Topos theory"
] |
21,391,870 | https://en.wikipedia.org/wiki/Halting%20problem | In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs. The problem comes up often in discussions of computability since it demonstrates that some functions are mathematically definable but not computable.
A key part of the formal statement of the problem is a mathematical definition of a computer and program, usually via a Turing machine. The proof then shows, for any program f that might determine whether programs halt, that a "pathological" program g exists for which f makes an incorrect determination. Specifically, g is the program that, when called with some input, passes its own source and its input to f and does the opposite of what f predicts g will do. The behavior of f on g shows undecidability as it means no program f will solve the halting problem in every possible case.
Background
The halting problem is a decision problem about properties of computer programs on a fixed Turing-complete model of computation, i.e., all programs that can be written in some given programming language that is general enough to be equivalent to a Turing machine. The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input. In this abstract framework, there are no resource limitations on the amount of memory or time required for the program's execution; it can take arbitrarily long and use an arbitrary amount of storage space before halting. The question is simply whether the given program will ever halt on a particular input.
For example, in pseudocode, the program
while (true) continue
does not halt; rather, it goes on forever in an infinite loop. On the other hand, the program
print "Hello, world!"
does halt.
While deciding whether these programs halt is simple, more complex programs prove problematic. One approach to the problem might be to run the program for some number of steps and check if it halts. However, as long as the program is running, it is unknown whether it will eventually halt or run forever. Turing proved no algorithm exists that always correctly decides whether, for a given arbitrary program and input, the program halts when run with that input. The essence of Turing's proof is that any such algorithm can be made to produce contradictory output and therefore cannot be correct.
Programming consequences
Some infinite loops can be quite useful. For instance, event loops are typically coded as infinite loops. However, most subroutines are intended to finish. In particular, in hard real-time computing, programmers attempt to write subroutines that are not only guaranteed to finish, but are also guaranteed to finish before a given deadline.
Sometimes these programmers use some general-purpose (Turing-complete) programming language,
but attempt to write in a restricted style—such as MISRA C or SPARK—that makes it easy to prove that the resulting subroutines finish before the given deadline.
Other times these programmers apply the rule of least power—they deliberately use a computer language that is not quite fully Turing-complete. Frequently, these are languages that guarantee all subroutines finish, such as Coq.
Common pitfalls
The difficulty in the halting problem lies in the requirement that the decision procedure must work for all programs and inputs. A particular program either halts on a given input or does not halt. Consider one algorithm that always answers "halts" and another that always answers "does not halt". For any specific program and input, one of these two algorithms answers correctly, even though nobody may know which one. Yet neither algorithm solves the halting problem generally.
There are programs (interpreters) that simulate the execution of whatever source code they are given. Such programs can demonstrate that a program does halt if this is the case: the interpreter itself will eventually halt its simulation, which shows that the original program halted. However, an interpreter will not halt if its input program does not halt, so this approach cannot solve the halting problem as stated; it does not successfully answer "does not halt" for programs that do not halt.
The halting problem is theoretically decidable for linear bounded automata (LBAs) or deterministic machines with finite memory. A machine with finite memory has a finite number of configurations, and thus any deterministic program on it must eventually either halt or repeat a previous configuration:
However, a computer with a million small parts, each with two states, would have at least 2^1,000,000 possible states:
Although a machine may be finite, and finite automata "have a number of theoretical limitations":
It can also be decided automatically whether a nondeterministic machine with finite memory halts on none, some, or all of the possible sequences of nondeterministic decisions, by enumerating states after each possible decision.
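For a deterministic machine with finitely many configurations, the decision procedure amounts to cycle detection: simulate the machine and stop as soon as a configuration repeats. The following Python sketch illustrates this on a toy machine (the machine and its halting condition are invented for illustration).

def halts_finite(step, config):
    """Decide halting for a deterministic machine with finitely many configurations.
    step(config) returns the next configuration, or None once the machine halts."""
    seen = set()
    while config is not None:
        if config in seen:
            return False        # a configuration repeated: the machine loops forever
        seen.add(config)
        config = step(config)
    return True                 # the machine reached a halting configuration

def step_a(n):                  # add 3 modulo 10, halt on reaching 7
    return None if n == 7 else (n + 3) % 10

def step_b(n):                  # add 2 modulo 10, halt on reaching 7
    return None if n == 7 else (n + 2) % 10

print(halts_finite(step_a, 0))  # True: 0, 3, 6, 9, 2, 5, 8, 1, 4, 7
print(halts_finite(step_b, 0))  # False: only even configurations are ever visited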
History
In April 1936, Alonzo Church published his proof of the undecidability of a problem in the lambda calculus. Turing's proof was published later, in January 1937. Since then, many other undecidable problems have been described, including the halting problem which emerged in the 1950s.
Timeline
Origin of the halting problem
Many papers and textbooks refer the definition and proof of undecidability of the halting problem to Turing's 1936 paper. However, this is not correct. Turing did not use the terms "halt" or "halting" in any of his published works, including his 1936 paper. A search of the academic literature from 1936 to 1958 showed that the first published material using the term “halting problem” was . However, Rogers says he had a draft of available to him, and Martin Davis states in the introduction that "the expert will perhaps find some novelty in the arrangement and treatment of topics", so the terminology must be attributed to Davis. Davis stated in a letter that he had been referring to the halting problem since 1952. The usage in Davis's book is as follows:
A possible precursor to Davis's formulation is Kleene's 1952 statement, which differs only in wording:
The halting problem is Turing equivalent to both Davis's printing problem ("does a Turing machine starting from a given state ever print a given symbol?") and to the printing problem considered in Turing's 1936 paper ("does a Turing machine starting from a blank tape ever print a given symbol?"). However, Turing equivalence is rather loose and does not mean that the two problems are the same. There are machines which print but do not halt, and halt but not print. The printing and halting problems address different issues and exhibit important conceptual and technical differences. Thus, Davis was simply being modest when he said:
Formalization
In his original proof Turing formalized the concept of algorithm by introducing Turing machines. However, the result is in no way specific to them; it applies equally to any other model of computation that is equivalent in its computational power to Turing machines, such as Markov algorithms, Lambda calculus, Post systems, register machines, or tag systems.
What is important is that the formalization allows a straightforward mapping of algorithms to some data type that the algorithm can operate upon. For example, if the formalism lets algorithms define functions over strings (such as Turing machines) then there should be a mapping of these algorithms to strings, and if the formalism lets algorithms define functions over natural numbers (such as computable functions) then there should be a mapping of algorithms to natural numbers. The mapping to strings is usually the most straightforward, but strings over an alphabet with n characters can also be mapped to numbers by interpreting them as numbers in an n-ary numeral system.
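For illustration, a minimal Python sketch of such a mapping, reading a string over a three-character alphabet as the digits of a numeral (the alphabet and the exact encoding are arbitrary choices, not fixed by the formalization):
def string_to_number(s, alphabet="abc"):
    # Interpret the string as digits 1..n in base n+1; offsetting by 1
    # keeps the map injective even with repeated leading characters.
    base = len(alphabet) + 1
    n = 0
    for ch in s:
        n = n * base + (alphabet.index(ch) + 1)
    return n
def number_to_string(n, alphabet="abc"):
    # Inverse of string_to_number, for numbers produced by it.
    base = len(alphabet) + 1
    chars = []
    while n > 0:
        n, d = divmod(n, base)
        chars.append(alphabet[d - 1])
    return "".join(reversed(chars))
assert number_to_string(string_to_number("abcab")) == "abcab"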
Representation as a set
The conventional representation of decision problems is the set of objects possessing the property in question. The halting set
K = {(i, x) | program i halts when run on input x}
represents the halting problem.
This set is recursively enumerable, which means there is a computable function that lists all of the pairs (i, x) it contains. However, the complement of this set is not recursively enumerable.
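One concrete way to list the pairs is dovetailing: simulate every program on every input for ever-larger step bounds. A minimal Python sketch, assuming a hypothetical simulator halts_within(i, x, steps) that reports whether program i halts on input x within the given number of steps:
def enumerate_halting_set(halts_within):
    # Yields each pair (i, x) whose program i halts on input x, exactly once.
    emitted = set()
    bound = 0
    while True:
        bound += 1
        for i in range(bound):
            for x in range(bound):
                if (i, x) not in emitted and halts_within(i, x, bound):
                    emitted.add((i, x))
                    yield (i, x)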
There are many equivalent formulations of the halting problem; any set whose Turing degree equals that of the halting problem is such a formulation. Examples of such sets include:
{i | program i eventually halts when run with input 0}
{i | there is an input x such that program i eventually halts when run with input x}.
Proof concept
Christopher Strachey outlined a proof by contradiction that the halting problem is not solvable. The proof proceeds as follows: Suppose that there exists a total computable function halts(f) that returns true if the subroutine f halts (when run with no inputs) and returns false otherwise. Now consider the following subroutine:
def g():
if halts(g):
loop_forever()
halts(g) must either return true or false, because halts was assumed to be total. If halts(g) returns true, then g will call loop_forever and never halt, which is a contradiction. If halts(g) returns false, then g will halt, because it will not call loop_forever; this is also a contradiction. Overall, g does the opposite of what halts says g should do, so halts(g) can not return a truth value that is consistent with whether g halts. Therefore, the initial assumption that halts is a total computable function must be false.
Sketch of rigorous proof
The concept above shows the general method of the proof, but the computable function halts does not directly take a subroutine as an argument; instead it takes the source code of a program. Moreover, the definition of g is self-referential. A rigorous proof addresses these issues. The overall goal is to show that there is no total computable function that decides whether an arbitrary program i halts on arbitrary input x; that is, the following function h (for "halts") is not computable:
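h(i, x) = 1 if program i halts on input x, and h(i, x) = 0 otherwise.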
Here program i refers to the i th program in an enumeration of all the programs of a fixed Turing-complete model of computation.
Possible values for a total computable function f arranged in a 2D array. The orange cells are the diagonal. The values of f(i,i) and g(i) are shown at the bottom; U indicates that the function g is undefined for a particular input value.
The proof proceeds by directly establishing that no total computable function with two arguments can be the required function h. As in the sketch of the concept, given any total computable binary function f, the following partial function g is also computable by some program e:
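g(i) = 0 if f(i, i) = 0, and g(i) is undefined if f(i, i) ≠ 0.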
The verification that g is computable relies on the following constructs (or their equivalents):
computable subprograms (the program that computes f is a subprogram in program e),
duplication of values (program e computes the inputs i,i for f from the input i for g),
conditional branching (program e selects between two results depending on the value it computes for f(i,i)),
not producing a defined result (for example, by looping forever),
returning a value of 0.
The following pseudocode for e illustrates a straightforward way to compute g:
procedure e(i):
if f(i, i) == 0 then
return 0
else
loop forever
Because g is partial computable, there must be a program e that computes g, by the assumption that the model of computation is Turing-complete. This program is one of all the programs on which the halting function h is defined. The next step of the proof shows that h(e,e) will not have the same value as f(e,e).
It follows from the definition of g that exactly one of the following two cases must hold:
f(e,e) = 0 and so g(e) = 0. In this case program e halts on input e, so h(e,e) = 1.
f(e,e) ≠ 0 and so g(e) is undefined. In this case program e does not halt on input e, so h(e,e) = 0.
In either case, f cannot be the same function as h. Because f was an arbitrary total computable function with two arguments, all such functions must differ from h.
This proof is analogous to Cantor's diagonal argument. One may visualize a two-dimensional array with one column and one row for each natural number, as indicated in the table above. The value of f(i,j) is placed at column i, row j. Because f is assumed to be a total computable function, any element of the array can be calculated using f. The construction of the function g can be visualized using the main diagonal of this array. If the array has a 0 at position (i,i), then g(i) is 0. Otherwise, g(i) is undefined. The contradiction comes from the fact that there is some column e of the array corresponding to g itself. Now assume f were the halting function h. If g(e) is defined (g(e) = 0 in this case), then program e halts on input e, so f(e,e) = 1; but g(e) = 0 only when f(e,e) = 0, contradicting f(e,e) = 1. Similarly, if g(e) is not defined, then program e does not halt on input e, so f(e,e) = 0; by the construction of g this gives g(e) = 0, contradicting the assumption that g(e) is undefined. In both cases a contradiction arises, so no total computable function f can be the halting function h.
Computability theory
A typical method of proving a problem to be undecidable is to reduce the halting problem to the problem in question: if the new problem were decidable, the reduction would make the halting problem decidable as well.
For example, there cannot be a general algorithm that decides whether a given statement about natural numbers is true or false. The reason for this is that the proposition stating that a certain program will halt given a certain input can be converted into an equivalent statement about natural numbers. If an algorithm could find the truth value of every statement about natural numbers, it could certainly find the truth value of this one; but that would determine whether the original program halts.
Rice's theorem generalizes the theorem that the halting problem is unsolvable. It states that for any non-trivial property, there is no general decision procedure that, for all programs, decides whether the partial function implemented by the input program has that property. (A partial function is a function which may not always produce a result, and so is used to model programs, which can either produce results or fail to halt.) For example, the property "halt for the input 0" is undecidable. Here, "non-trivial" means that the set of partial functions that satisfy the property is neither the empty set nor the set of all partial functions. For example, "halts or fails to halt on input 0" is clearly true of all partial functions, so it is a trivial property, and can be decided by an algorithm that simply reports "true." Also, this theorem holds only for properties of the partial function implemented by the program; Rice's Theorem does not apply to properties of the program itself. For example, "halt on input 0 within 100 steps" is not a property of the partial function that is implemented by the program—it is a property of the program implementing the partial function and is very much decidable.
Gregory Chaitin has defined a halting probability, represented by the symbol Ω, a type of real number that informally is said to represent the probability that a randomly produced program halts. These numbers have the same Turing degree as the halting problem. It is a normal and transcendental number which can be defined but cannot be completely computed. This means one can prove that there is no algorithm which produces the digits of Ω, although its first few digits can be calculated in simple cases.
Since the negative answer to the halting problem shows that there are problems that cannot be solved by a Turing machine, the Church–Turing thesis limits what can be accomplished by any machine that implements effective methods. However, not all machines conceivable to human imagination are subject to the Church–Turing thesis (e.g. oracle machines). It is an open question whether there can be actual deterministic physical processes that, in the long run, elude simulation by a Turing machine, and in particular whether any such hypothetical process could usefully be harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing machine amongst other things. It is also an open question whether any such unknown physical processes are involved in the working of the human brain, and whether humans can solve the halting problem.
Approximations
Turing's proof shows that there can be no mechanical, general method (i.e., a Turing machine or a program in some equivalent model of computation) to determine whether algorithms halt. However, each individual instance of the halting problem has a definitive answer, which may or may not be practically computable. Given a specific algorithm and input, one can often show that it halts or does not halt, and in fact computer scientists often do just that as part of a correctness proof. There are some heuristics that can be used in an automated fashion to attempt to construct a proof, which frequently succeed on typical programs. This field of research is known as automated termination analysis.
Some results have been established on the theoretical performance of halting problem heuristics, in particular the fraction of programs of a given size that may be correctly classified by a recursive algorithm. These results do not give precise numbers because the fractions are uncomputable and also highly dependent on the choice of program encoding used to determine "size". For example, consider classifying programs by their number of states and using a specific "Turing semi-infinite tape" model of computation that errors (without halting) if the program runs off the left side of the tape. Then the halting problem is decidable, with asymptotic probability one, over programs chosen uniformly by number of states. But this result is in some sense "trivial" because these decidable programs are simply the ones that fall off the tape, and the heuristic is simply to predict not halting due to error. Thus a seemingly irrelevant detail, namely the treatment of programs with errors, can turn out to be the deciding factor in determining the fraction of programs.
To avoid these issues, several restricted notions of the "size" of a program have been developed. A dense Gödel numbering assigns numbers to programs such that each computable function occurs a positive fraction in each sequence of indices from 1 to n, i.e. a Gödelization φ is dense iff for all , there exists a such that . For example, a numbering that assigns indices to nontrivial programs and all other indices the error state is not dense, but there exists a dense Gödel numbering of syntactically correct Brainfuck programs. A dense Gödel numbering is called optimal if, for any other Gödel numbering , there is a 1-1 total recursive function and a constant such that for all , and . This condition ensures that all programs have indices not much larger than their indices in any other Gödel numbering. Optimal Gödel numberings are constructed by numbering the inputs of a universal Turing machine. A third notion of size uses universal machines operating on binary strings and measures the length of the string needed to describe the input program. A universal machine U is a machine for which, for every other machine V, there exists a total computable function h such that V(x) = U(h(x)) for all x. An optimal machine is a universal machine that achieves the Kolmogorov complexity invariance bound, i.e. for every machine V, there exists c such that for all outputs x, if a V-program of length n outputs x, then there exists a U-program of length at most n + c outputting x.
We consider partial computable functions (algorithms) . For each we consider the fraction of errors among all programs of size metric at most , counting each program for which fails to terminate, produces a "don't know" answer, or produces a wrong answer, i.e. halts and outputs DOES_NOT_HALT, or does not halt and outputs HALTS. The behavior may be described as follows, for dense Gödelizations and optimal machines:
For every algorithm , . In words, any algorithm has a positive minimum error rate, even as the size of the problem becomes extremely large.
There exists such that for every algorithm , . In words, there is a positive error rate for which any algorithm will do worse than that error rate arbitrarily often, even as the size of the problem grows indefinitely.
. In words, there is a sequence of algorithms such that the error rate gets arbitrarily close to zero for a specific sequence of increasing sizes. However, this result allows sequences of algorithms that produce wrong answers.
If we consider only "honest" algorithms that may be undefined but never produce wrong answers, then depending on the metric, may or may not be 0. In particular it is 0 for left-total universal machines, but for effectively optimal machines it is greater than 0.
The complex nature of these bounds is due to the oscillatory behavior of . There are infrequently occurring new varieties of programs that come in arbitrarily large "blocks", and a constantly growing fraction of repeats. If the blocks of new varieties are fully included, the error rate is at least , but between blocks the fraction of correctly categorized repeats can be arbitrarily high. In particular a "tally" heuristic that simply remembers the first N inputs and recognizes their equivalents allows reaching an arbitrarily low error rate infinitely often.
Gödel's incompleteness theorems
Generalization
Many variants of the halting problem can be found in computability textbooks. Typically, these problems are RE-complete and describe sets of complexity in the arithmetical hierarchy, the same as the standard halting problem. The variants are thus undecidable, and the standard halting problem reduces to each variant and vice-versa. However, some variants have a higher degree of unsolvability and cannot be reduced to the standard halting problem. The next two examples are common.
Halting on all inputs
The universal halting problem, also known (in recursion theory) as totality, is the problem of determining whether a given computer program will halt for every input (the name totality comes from the equivalent question of whether the computed function is total).
This problem is not only undecidable, as the halting problem is, but highly undecidable. In terms of the arithmetical hierarchy, it is Π2-complete.
This means, in particular, that it cannot be decided even with an oracle for the halting problem.
Recognizing partial solutions
There are many programs that, for some inputs, return a correct answer to the halting problem, while for other inputs they do not return an answer at all.
However the problem "given program p, is it a partial halting solver" (in the sense described) is at least as hard as the halting problem.
To see this, assume that there is an algorithm PHSR ("partial halting solver recognizer") to do that. Then it can be used to solve the halting problem,
as follows:
To test whether input program x halts on y, construct a program p that on input (x,y) reports true and diverges on all other inputs.
Then test p with PHSR: p is a correct partial halting solver exactly when x halts on y, so PHSR's verdict on p decides whether x halts on y (see the sketch below).
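For illustration, a minimal Python-style sketch of the construction; make_p and loop_forever are illustrative names only, and a real reduction would manipulate program source code rather than function objects:
def loop_forever():
    while True:
        pass
def make_p(x, y):
    # Candidate partial halting solver for the fixed pair (x, y).
    def p(a, b):
        if (a, b) == (x, y):
            return True      # claims "program a halts on input b"
        loop_forever()       # gives no answer on any other input
    return p
# make_p(x, y) is a correct partial halting solver exactly when x halts on y,
# so a recognizer PHSR applied to it would decide whether x halts on y.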
The above argument is a reduction of the halting problem to PHS recognition, and in the same manner,
harder problems such as halting on all inputs can also be reduced, implying that PHS recognition is not only undecidable, but higher in the arithmetical hierarchy, specifically Π2-complete.
Lossy computation
A lossy Turing machine is a Turing machine in which part of the tape may non-deterministically disappear. The halting problem is decidable for a lossy Turing machine but non-primitive recursive.
Oracle machines
A machine with an oracle for the halting problem can determine whether particular Turing machines will halt on particular inputs, but they cannot determine, in general, whether machines equivalent to themselves will halt.
See also
Busy beaver
Gödel's incompleteness theorem
Brouwer–Hilbert controversy
Kolmogorov complexity
P versus NP problem
Termination analysis
Worst-case execution time
Notes
References
Turing's paper is #3 in this volume. Papers include those by Gödel, Church, Rosser, Kleene, and Post.
Chapter XIII ("Computable Functions") includes a discussion of the unsolvability of the halting problem for Turing machines. In a departure from Turing's terminology of circle-free nonhalting machines, Kleene refers instead to machines that "stop", i.e. halt.
See chapter 8, Section 8.2 "Unsolvability of the Halting Problem."
First published in 1970, a fascinating history of German mathematics and physics from 1880s through 1930s. Hundreds of names familiar to mathematicians, physicists and engineers appear in its pages. Perhaps marred by no overt references and few footnotes: Reid states her sources were numerous interviews with those who personally knew Hilbert, and Hilbert's letters and papers.
This is the epochal paper where Turing defines Turing machines, formulates the halting problem, and shows that it (as well as the Entscheidungsproblem) is unsolvable.
Cf. Chapter 2, "Algorithms and Turing Machines". An over-complicated presentation (see Davis's paper for a better model), but a thorough presentation of Turing machines and the halting problem, and Church's Lambda Calculus.
See Chapter 7 "Turing Machines." A book centered around the machine-interpretation of "languages", NP-Completeness, etc.
Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.
Collected works of A.M. Turing
Further reading
c2:HaltingProblem
Alfred North Whitehead and Bertrand Russell, Principia Mathematica to *56, Cambridge at the University Press, 1962. Re: the problem of paradoxes, the authors discuss the problem of a set not being an object in any of its "determining functions", in particular "Introduction, Chap. 1 p. 24 "...difficulties which arise in formal logic", and Chap. 2.I. "The Vicious-Circle Principle" p. 37ff, and Chap. 2.VIII. "The Contradictions" p. 60ff.
Martin Davis, "What is a computation", in Mathematics Today, Lynn Arthur Steen, Vintage Books (Random House), 1980. A wonderful little paper, perhaps the best ever written about Turing Machines for the non-specialist. Davis reduces the Turing Machine to a far-simpler model based on Post's model of a computation. Discusses Chaitin proof. Includes little biographies of Emil Post, Julia Robinson.
Edward Beltrami, What is Random? Chance and order in mathematics and life, Copernicus: Springer-Verlag, New York, 1999. Nice, gentle read for the mathematically inclined non-specialist, puts tougher stuff at the end. Has a Turing-machine model in it. Discusses the Chaitin contributions.
Ernest Nagel and James R. Newman, Godel’s Proof, New York University Press, 1958. Wonderful writing about a very difficult subject. For the mathematically inclined non-specialist. Discusses Gentzen's proof on pages 96–97 and footnotes. Appendices discuss the Peano Axioms briefly, gently introduce readers to formal logic.
. Chapter 3 Section 1 contains a quality description of the halting problem, a proof by contradiction, and a helpful graphic representation of the Halting Problem.
Taylor Booth, Sequential Machines and Automata Theory, Wiley, New York, 1967. Cf. Chapter 9, Turing Machines. Difficult book, meant for electrical engineers and technical specialists. Discusses recursion, partial-recursion with reference to Turing Machines, halting problem. Has a Turing Machine model in it. References at end of Chapter 9 catch most of the older books (i.e. 1952 until 1967 including authors Martin Davis, F. C. Hennie, H. Hermes, S. C. Kleene, M. Minsky, T. Rado) and various technical papers. See note under Busy-Beaver Programs.
Busy Beaver Programs are described in Scientific American, August 1984, also March 1985 p. 23. A reference in Booth attributes them to Rado, T.(1962), On non-computable functions, Bell Systems Tech. J. 41. Booth also defines Rado's Busy Beaver Problem in problems 3, 4, 5, 6 of Chapter 9, p. 396.
David Bolter, Turing’s Man: Western Culture in the Computer Age, The University of North Carolina Press, Chapel Hill, 1984. For the general reader. May be dated. Has yet another (very simple) Turing Machine model in it.
Sven Köhler, Christian Schindelhauer, Martin Ziegler, On approximating real-world halting problems, pp. 454–466 (2005), Springer Lecture Notes in Computer Science volume 3623: Undecidability of the Halting Problem means that not all instances can be answered correctly; but maybe "some", "many" or "most" can? On the one hand the constant answer "yes" will be correct infinitely often, and wrong also infinitely often. To make the question reasonable, consider the density of the instances that can be solved. This turns out to depend significantly on the Programming System under consideration.
Logical Limitations to Machine Ethics, with Consequences to Lethal Autonomous Weapons - paper discussed in: Does the Halting Problem Mean No Moral Robots?
External links
Scooping the loop snooper - a poetic proof of undecidability of the halting problem
animated movie - an animation explaining the proof of the undecidability of the halting problem
A 2-Minute Proof of the 2nd-Most Important Theorem of the 2nd Millennium - a proof in only 13 lines
haltingproblem.org - popular videos and documents explaining the Halting Problem.
Theory of computation
Computability theory
Mathematical problems
Undecidable problems
1936 introductions | Halting problem | [
"Mathematics"
] | 6,401 | [
"Mathematical logic",
"Computational problems",
"Undecidable problems",
"Computability theory",
"Mathematical problems"
] |
21,392,296 | https://en.wikipedia.org/wiki/Research%20into%20centenarians | A centenarian is a person who has attained the age of 100 years or more. Research on centenarians has become more common with clinical and general population studies now having been conducted in France, Hungary, Japan, Italy, Finland, Denmark, the United States, and China. Centenarians are the second fastest-growing demographic in much of the developed world. By 2030, it is expected that there will be around a million centenarians worldwide. In the United States, a 2010 Census Bureau report found that more than 80 percent of centenarians are women.
Biochemical factors
Research carried out in Italy suggests that healthy centenarians have high levels of vitamin A and vitamin E and that this seems to be important in guaranteeing their extreme longevity. Other research contradicts this and has found that these findings do not apply to centenarians from Sardinia, for whom other factors probably play a more important role. A preliminary study carried out in Poland showed that, in comparison with young healthy female adults, centenarians living in Upper Silesia had significantly higher red blood cell glutathione reductase and catalase activities and higher, although insignificantly, serum levels of vitamin E. Researchers in Denmark have also found that centenarians exhibit a high activity of glutathione reductase in red blood cells. In this study, those centenarians having the best cognitive and physical functional capacity tended to have the highest activity of this enzyme.
Some research suggests that high levels of vitamin D may be associated with longevity.
Other research has found that people having parents who became centenarians have an increased number of naïve B cells.
It is believed that centenarians possess a different adiponectin isoform pattern and have a favorable metabolic phenotype in comparison with elderly individuals.
Genetic factors
Research carried out in the United States has found that people are much more likely to celebrate their 100th birthday if their brother or sister has reached the age. These findings, from the New England Centenarian Study in Boston, suggest that the sibling of a centenarian is four times more likely to live past 90 than the general population. Other research carried out by the New England Centenarian Study has identified 150 genetic variations that appeared to be associated with longevity which could be used to predict with 77 percent accuracy whether someone would live to be at least 100.
Research also suggests that there is a clear link between living to 100 and inheriting a hyperactive version of telomerase, an enzyme that prevents cells from ageing. Scientists from the Albert Einstein College of Medicine in the US say centenarian Ashkenazi Jews have this mutant gene.
Many centenarians manage to avoid chronic diseases even after indulging in a lifetime of serious health risks. For example, many people in the New England Centenarian Study experienced a century free of cancer or heart disease despite smoking as many as 60 cigarettes a day for 50 years. The same applies to people from Okinawa in Japan, where around half of supercentenarians had a history of smoking and one-third were regular alcohol drinkers. It is possible that these people may have had genes that protected them from the dangers of carcinogens or the random mutations that crop up naturally when cells divide.
Similarly, centenarian research carried out at the Albert Einstein College of Medicine found that the individuals studied had less than sterling health habits. As a group, for example, they were more obese, more sedentary and exercised less than other, younger cohorts. The researchers also discovered three uncommon genotype similarities among the centenarians: one gene that causes HDL cholesterol to be at levels two- to three-fold higher than average; another gene that results in a mildly underactive thyroid; and a functional mutation in the human growth hormone axis that may be a safeguard from aging-associated diseases.
It is well known that the children of parents who have a long life are also likely to reach a healthy age, but it is not known why, although the inherited genes are probably important. A variation in the gene FOXO3 is known to have a positive effect on the life expectancy of humans, and is found much more often in people living to 100 and beyond – moreover, this appears to be true worldwide.
Some research suggests that centenarian offspring are more likely to age in better cardiovascular health than their peers.
Other factors
A 2011 study found people with exceptional longevity (aged 95 and older) not to be distinct from the general population in terms of lifestyle factors such as regular physical activity, diet or alcohol consumption.
A study indicates gut microbiomes with large amounts of microbes capable of generating unique secondary bile acids are a key element of centenarians' longevity.
General observations
Several studies have shown that centenarians have better cardiovascular risk profiles compared to younger old people. The contribution of drug treatments to promote extreme longevity is not confirmed and centenarians in general have needed fewer drugs at younger ages due to a healthy lifestyle. A study by the International Longevity Centre-UK, published in 2011, suggested that today's centenarians may be healthier than the next generation of centenarians.
Ninety percent of the centenarians studied in the New England Centenarian Study were functionally independent the vast majority of their lives up until the average age of 92 years and 75% were the same at an average age of 95 years. Similarly, a study of US supercentenarians (age 110 to 119 years) showed that, even at these advanced ages, 40% needed little assistance or were independent.
A study supported by the US National Institute on Aging found significant associations between month of birth and longevity, with individuals born in September–November having a higher likelihood of becoming centenarians compared to March-born individuals.
In the United States, a 2010 Census Bureau report found that more than 80 percent of centenarians are women.
Possible errors in records
In 2024, Saul Justin Newman published a pre-print paper finding that supercentenarians and extreme age records tend to come from areas with no birth certificates, rampant clerical errors, pension fraud, and short life spans. The study argues that document validation, the only method demographers use to verify old age, is susceptible to errors that have often been ignored due to confirmation bias and other factors, causing an inflated number of apparently valid cases. This suggests that many reported supercentenarian populations, and studies that rely on those populations, may contain significant errors that have yet to be reassessed critically. The study was awarded the Ig Nobel Prize in 2024.
See also
Centenarian
Food choice of older adults
New England Centenarian Study
Okinawa Centenarian Study
(study project of UC San Diego Health)
References
External links
The Okinawa Centenarian Study
New England Centenarian Study
New England Supercentenarian Study
Living to 100 and Beyond: Search for Predictors of Exceptional Human Longevity
Living Beyond 100, ILC-UK
'Centenarians: 2010' - U.S. Department of Commerce, United States Census Bureau report
Gerontology
Biogerontology
Centenarians
Longevity study projects | Research into centenarians | [
"Biology"
] | 1,444 | [
"Senescence",
"Gerontology",
"Centenarians"
] |
21,393,532 | https://en.wikipedia.org/wiki/Torque%20multiplier | A torque multiplier is a tool used to provide a mechanical advantage in applying torque to turn bolts, nuts or other items designed to be actuated by application of torque, particularly where there are relatively high torque requirements.
Description
Torque multipliers are often used instead of extended handles, often called "cheater bars". Extended handles use leverage instead of gear reduction to achieve torque. That torque is transmitted through the driving tool and could become dangerous in the case of a sudden catastrophic failure of the drive tool with the extended handle attached. With a torque multiplier, only a fraction of the final output torque is carried by the input drive tool, making it a safer choice.
Torque multipliers typically employ an epicyclic gear train having one or more stages. Each stage of gearing multiplies the torque applied. In epicyclic gear systems, torque is applied to the input gear or ‘sun’ gear. A number of planet gears are arranged around and engaged with this sun gear, and therefore rotate. The outside casing of the multiplier is also engaged with the planet gear teeth, but is prevented from rotating by means of a reaction arm, causing the planet gears to orbit around the sun gear. The planet gears are held in a ‘planet carrier’ which also holds the output drive shaft. As the planet gears orbit around the sun gear, the carrier and the output shaft rotate together. Without the reaction arm to prevent rotation of the outer casing, the output shaft cannot apply torque.
Along with the multiplication of torque, there is a decrease in rotational speed of the output shaft compared to the input shaft. This decrease in speed is inversely proportional to the increase in torque. For example, a torque multiplier with a rating of 3:1 will turn its output shaft with three times the torque, but at one third the speed, of the input shaft. However, due to friction and other inefficiencies in the mechanism, the output torque is slightly lower than the theoretical output.
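For illustration, a short Python calculation of this trade-off; the 90% efficiency figure is an arbitrary example value, not a property of any particular tool:
def multiplier_output(input_torque_nm, input_speed_rpm, ratio, efficiency=0.90):
    # Torque rises by the gear ratio (reduced by frictional losses),
    # while output speed falls by the same ratio.
    output_torque = input_torque_nm * ratio * efficiency
    output_speed = input_speed_rpm / ratio
    return output_torque, output_speed
# A 3:1 multiplier driven with 100 N·m at 30 rpm:
print(multiplier_output(100, 30, 3))  # about (270.0, 10.0)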
Applications
Torque multipliers are most often used when a compressed air powered impact wrench is unavailable due to remote locations without power, or where cost considerations require manually operated tools which do not require any power supply or power source of any kind. There are many instances where screws, bolts and other fasteners are secured so tightly that using a typical lug wrench with a cheater bar is not sufficient to loosen them. These include automotive repair, product assembly, construction projects, heavy equipment maintenance and other instances where high torque output is needed. A torque multiplier allows the user to generate high torque output without the use of an air compressor or impact gun.
A torque multiplier is generally used when there are space limitations that disallow the use of long handles. They are also used as a safer alternative to a cheater bar as lever length and operator effort are both reduced. Finally, torque multipliers allow for more accurate torque. By reducing the amount of effort needed to tighten, a torque multiplier allows for slow and smooth application, ensuring more accurate torque levels, and preventing damage to sensitive components.
References
Wrenches
Gears
Mechanisms (engineering) | Torque multiplier | [
"Engineering"
] | 640 | [
"Mechanical engineering",
"Mechanisms (engineering)"
] |
21,394,677 | https://en.wikipedia.org/wiki/Gas%20protection | Gas protection is the prevention or control of the penetration of hazardous gases into buildings or other types of real property. It usually involves either blocking entry pathways or removing the source of the gas.
Hazardous gases
Methane (which is flammable at 5-15% by volume in air) and carbon dioxide (which is toxic) are the most relevant gases, especially following two gas explosions in the 1980s in Loscoe and Abbeystead, England.
UK regulatory bodies such as the Building Research Establishment, British Standards, the Department of the Environment, and others in the construction industry have developed and published guidance for preventing such gases from entering buildings. Their production in the environment is associated with coal seams, deposited river silt, sewage, landfill waste, and peat.
In the case of landfill gas migration, gas is produced by organic materials in the waste degrading over time. Typically 40% carbon dioxide (CO2) and 60% methane (CH4) by volume, this gas can be heavier than air or lighter depending on the concentration (which varies from time to time), but will move from an area of high pressure to one at a lower pressure irrespective of its relative density.
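To see why the relative density depends on composition, the mean molar mass of the mixture can be compared with that of air (about 29 g/mol). A short Python sketch using approximate molar masses:
M_CO2, M_CH4, M_AIR = 44.0, 16.0, 29.0  # g/mol, approximate
def relative_density(co2_fraction):
    # Mean molar mass of a CO2/CH4 mixture relative to air,
    # for a given CO2 volume (mole) fraction.
    mixture = co2_fraction * M_CO2 + (1 - co2_fraction) * M_CH4
    return mixture / M_AIR
print(relative_density(0.40))  # about 0.94 - slightly lighter than air
print(relative_density(0.60))  # about 1.13 - heavier than air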
Usage
Systems to prevent gas ingress include a passive barrier or, less commonly, an active system. Passive systems utilize a barrier with low permeability, such as a membrane. Active systems are mostly employed on commercial properties because of the associated costs. There are two main practical types of active systems to prevent the ingress of gases into buildings: positive pressurization, and forced ventilation.
Integrity testing
Both passive and active systems require "gas integrity testing", most often using the NHBC traffic light system. This is because the conditions under which gas membranes are installed are often difficult and can adversely compromise the integrity required by the manufacturer or client. The purpose of the test is to ensure integrity and allow the installation to be certified if the method of protection performs correctly.
Membrane testing
The membrane is tested immediately after installation and prior to being covered up by any following construction processes. The area below the membrane is temporarily pressurized with a mixture of clean air and a non-toxic and inert tracer gas that is sensitive to detection. Special equipment is then used to trace all leaks within the installation, with particular attention being paid to critical points and junctions formed between the membrane material and other structural elements prior to conducting a sweep of the complete area. Any leaks are identified and sealed, and the membrane is re-tested before it passes and the certificate is issued.
Active system testing
Active systems require a test of the alarm in case of failure of the system or power supply and possible buildup of gas.
References
Sources
BS8485.
Department of the Environment, The control of landfill gas, Waste management paper No 27.
Department of Environment, Landfill sites development control.
Guidance on evaluation of development proposals on sites where Methane and Carbon dioxide are present, incorporating Traffic Lights. Rep Ref.10627-R01-(02) Milton Keynes; National House Building Council.
Environmental mitigation
Landfill
Pollution control technologies | Gas protection | [
"Chemistry",
"Engineering"
] | 617 | [
"Environmental mitigation",
"Pollution control technologies",
"Environmental engineering"
] |
21,395,116 | https://en.wikipedia.org/wiki/Time%20of%20concentration | Time of concentration is a concept used in hydrology to measure the response of a watershed to a rain event. It is defined as the time needed for water to flow from the most remote point in a watershed to the watershed outlet. It is a function of the topography, geology, and land use within the watershed. A number of methods can be used to calculate time of concentration, including the Kirpich (1940) and NRCS (1997) methods.
Time of concentration is useful in predicting flow rates that would result from hypothetical storms, which are based on statistically derived return periods through IDF curves. For many (often economic) reasons, it is important for engineers and hydrologists to be able to accurately predict the response of a watershed to a given rain event. This can be important for infrastructure development (design of bridges, culverts, etc.) and management, as well as to assess flood risk such as the ARkStorm-scenario.
Example
This image shows the basic principle which leads to determination of the time of concentration. Much like a topographic map showing lines of equal elevation, a map with isolines can be constructed to show locations with the same travel time to the watershed outlet. In this simplified example, the watershed outlet is located at the bottom of the picture with a stream flowing through it. Moving up the map, we can say that rainfall which lands on all of the places along the first yellow line will reach the watershed outlet at exactly the same time. This is true for every yellow line, with each line further away from the outlet corresponding to a greater travel time for runoff traveling to the outlet.
Furthermore, as this image shows, the spatial representation of travel time can be transformed into a cumulative distribution plot detailing how travel times are distributed throughout the area of the watershed.
References
External links
Time of Concentration
Hydrology | Time of concentration | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 369 | [
"Temporal quantities",
"Hydrology",
"Physical quantities",
"Environmental engineering",
"Durations"
] |
21,396,772 | https://en.wikipedia.org/wiki/Technetium%20%2899mTc%29%20tetrofosmin | {{DISPLAYTITLE:Technetium (99mTc) tetrofosmin}}
Technetium (99mTc) tetrofosmin is a drug used in nuclear medicine cardiac imaging. It is sold under the brand name Myoview (GE Healthcare). The radioisotope, technetium-99m, is chelated by two 1,2-bis[di-(2-ethoxyethyl)phosphino]ethane ligands which belong to the group of diphosphines and which are referred to as tetrofosmin.
Tc-99m tetrofosmin is rapidly taken up by myocardial tissue and reaches its maximum level in approximately 5 minutes. About 66% of the total injected dose is excreted within 48 hours after injection (40% urine, 26% feces).
Tc-99m tetrofosmin is indicated for use in scintigraphic imaging of the myocardium under stress and rest conditions. It is used to determine areas of reversible ischemia and infarcted tissue in the heart. It is also indicated to detect changes in perfusion induced by pharmacologic stress (adenosine, lexiscan, dobutamine or persantine) in patients with coronary artery disease. Its third indication is to assess left ventricular function (ejection fraction) in patients thought to have heart disease.
No contraindications are known for use of Tc-99m tetrofosmin, but care should be taken to constantly monitor the cardiac function in patients with known or suspected coronary artery disease.
Patients should be encouraged to void their bladders as soon as the images are gathered, and as often as possible after the tests to decrease their radiation doses, since the majority of elimination is renal.
The recommended dose of Tc-99m tetrofosmin is between 5 and 33 millicuries (185-1221 megabecquerels). For a two-dose stress/rest dosing, the typical dose is normally a 10 mCi dose, followed one to four hours later by a dose of 30 mCi. Imaging normally begins 15 minutes following injection.
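The megabecquerel figures follow from the fixed conversion 1 mCi = 37 MBq, as this short Python check illustrates:
MBQ_PER_MCI = 37  # 1 millicurie = 37 megabecquerels, by definition of the curie
def mci_to_mbq(mci):
    return mci * MBQ_PER_MCI
print(mci_to_mbq(5), mci_to_mbq(33))   # 185 1221
print(mci_to_mbq(10), mci_to_mbq(30))  # 370 1110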
References
External links
Myoview Prescribing Information Page
Radiopharmaceuticals
Technetium-99m | Technetium (99mTc) tetrofosmin | [
"Chemistry"
] | 489 | [
"Pharmacology",
"Medicinal radiochemistry",
"Medicinal chemistry stubs",
"Chemicals in medicine",
"Radiopharmaceuticals",
"Pharmacology stubs"
] |
20,351,675 | https://en.wikipedia.org/wiki/MDynaMix | Molecular Dynamics of Mixtures (MDynaMix) is a computer software package for general purpose molecular dynamics to simulate mixtures of molecules, interacting by AMBER- and CHARMM-like force fields in periodic boundary conditions.
Algorithms are included for NVE, NVT, NPT, anisotropic NPT ensembles, and Ewald summation to treat electrostatic interactions.
The code was written in a mix of Fortran 77 and 90 (with Message Passing Interface (MPI) for parallel execution). The package runs on Unix and Unix-like (Linux) workstations, clusters of workstations, and on Windows in sequential mode.
MDynaMix is developed at the Division of Physical Chemistry, Department of Materials and Environmental Chemistry, Stockholm University, Sweden. It is released as open-source software under a GNU General Public License (GPL).
Programs
md is the main MDynaMix block
makemol is a utility which provides help to create files describing molecular structure and the force field
tranal is a suite of utilities to analyze trajectories
mdee is a version of the program which implements expanded ensemble method to compute free energy and chemical potential (is not parallelized)
mge provides a graphical user interface to construct molecular models and monitor dynamics process
Field of application
Thermodynamic properties of liquids
Nucleic acid - ions interaction
Modeling of lipid bilayers
Polyelectrolytes
Ionic liquids
X-ray spectra of liquid water
Force Field development
See also
References
External links
Ascalaph, graphical shell for MDynaMix (GNU GPL)
Molecular dynamics software
Free science software
Free software programmed in C++
Free software programmed in Fortran | MDynaMix | [
"Chemistry"
] | 344 | [
"Molecular dynamics",
"Molecular dynamics software",
"Computational chemistry software"
] |
20,354,163 | https://en.wikipedia.org/wiki/Diamphotoxin | Diamphotoxin is a toxin produced by larvae and pupae of the beetle genus Diamphidia. Diamphotoxin is a hemolytic, cardiotoxic, and highly labile single-chain polypeptide bound to a protein that protects it from deactivation.
Diamphotoxin increases the permeability of cell membranes of red blood cells. Although this does not affect the normal flow of ions between cells, it allows all small ions to pass through cell membranes easily, which fatally disrupts the cells' ion levels. Although diamphotoxin has no neurotoxic effect, its hemolytic effect is lethal, and may reduce hemoglobin levels by as much as 75%.
The San people of Southern Africa use diamphotoxin as an arrow poison for hunting game. The toxin paralyses muscles gradually. Large mammals hunted in this way die slowly from a small injection of the poison.
Several leaf beetles species of genus Leptinotarsa produce a similar toxin, leptinotarsin.
See also
Palytoxin
Arrow poison
References
Further reading
External links
Diamphotoxin at PubChem. Retrieved 4 July 2013.
Insect toxins
Peptides | Diamphotoxin | [
"Chemistry"
] | 254 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
3,810,122 | https://en.wikipedia.org/wiki/Glasser%20effect | The Glasser effect describes the creation of singularities in the flow field of a magnetically confined plasma when small resonant perturbations modify the gradient of the pressure field.
External links
Physics of magnetically confined plasmas
Fusion power | Glasser effect | [
"Physics",
"Chemistry"
] | 50 | [
"Nuclear fusion",
"Plasma physics stubs",
"Fusion power",
"Plasma physics"
] |
3,811,054 | https://en.wikipedia.org/wiki/Carbonium%20ion | In chemistry, a carbonium ion is a cation that has a pentacoordinated carbon atom. They are a type of carbocation. In older literature, the name "carbonium ion" was used for what is today called carbenium. Carbonium ions charge is delocalized in three-center, two-electron bonds. The more stable members are often bi- or polycyclic.
2-Norbornyl cation
The 2-Norbornyl cation is one of the best characterized carbonium ions. It is the prototype for non-classical ions. As indicated first by low-temperature NMR spectroscopy and confirmed by X-ray crystallography, it has a symmetric structure with an RCH2+ group bonded to an alkene group, stabilized by a bicyclic structure.
Cyclopropylmethyl cation
A non-classical structure for the C4H7+ cation is supported by substantial experimental evidence from solvolysis experiments and NMR studies. One or both of two structures, the cyclopropylcarbinyl cation and the bicyclobutonium cation, were invoked to account for the observed reactivity. The NMR spectrum consists of two 13C NMR signals, even at temperatures as low as −132 °C. Computations suggest that the energetic landscape of the system is very flat. The bicyclobutonium structure is computed to be 0.4 kcal/mol more stable than the cyclopropylcarbinyl structure. In the solution phase (SbF5·SO2ClF·SO2F2, with as the counterion), the bicyclobutonium structure predominates over the cyclopropylcarbinyl structure in a 84:16 ratio at −61 °C. Three other possible structures, two classical structures (the homoallyl cation and cyclobutyl cation) and a more highly delocalized non-classical structure (the tricyclobutonium ion), are less stable.
The low temperature NMR spectrum of a dimethyl derivative shows two methyl signals, indicating that the molecular conformation of this cation is not perpendicular (as in A), which possesses a mirror plane, but is bisected (as in B) with the empty p-orbital parallel to the cyclopropyl ring system:
In terms of bent bond theory, this preference is explained by assuming favorable orbital overlap between the filled cyclopropane bent bonds and the empty p-orbital.
Methanium and ethanium
The simplest carbonium ions are also the least accessible. In methanium (CH5+) carbon is covalently bonded to five hydrogen atoms.
The ethanium ion has been characterized by infrared spectroscopy. The isomers of octonium (protonated octane, C8H19+) have been studied.
Pyramidal carbocations
Applications
Carbonium ions are intermediates in the isomerization of alkanes catalyzed by very strong solid acids. Such carbonium ions are invoked in cracking (Haag-Dessau mechanism).
See also
The complex pentakis(triphenylphosphinegold(I))methanium .
Fluxional molecules
More carbonium ions called non-classical ions are found in certain norbornyl systems
Onium compounds
Carbenium ion
References
Reactive intermediates
Carbocations | Carbonium ion | [
"Chemistry"
] | 694 | [
"Organic compounds",
"Reactive intermediates",
"Physical organic chemistry"
] |
3,811,299 | https://en.wikipedia.org/wiki/HT-7 | HT-7, or Hefei Tokamak-7, is an experimental superconducting tokamak nuclear fusion reactor built in Hefei, China, to investigate the process of developing fusion power. The HT-7 was developed with the assistance of Russia, and was based on the earlier T-7 tokamak reactor. The reactor was built by the Hefei-based Institute of Plasma Physics under the direction of the Chinese Academy of Sciences.
The HT-7 construction was completed in May 1994, with final tests accomplished by December of the same year allowing experiments to proceed.
The HT-7 has been superseded by the Experimental Advanced Superconducting Tokamak (EAST) built in Hefei by the Institute of Plasma Physics as an experimental reactor before ITER is completed.
References
Reactor data
Report on the reactor
Tokamaks
Buildings and structures in Hefei
Chinese Academy of Sciences
Nuclear power in China | HT-7 | [
"Physics"
] | 194 | [
"Plasma physics stubs",
"Plasma physics"
] |
3,811,829 | https://en.wikipedia.org/wiki/DBm0 | dBm0 is an abbreviation for the power in decibel-milliwatts (dBm) measured at a zero transmission level point (ZLP).
dBm0 is a concept used (amongst other areas) in audio/telephony processing since it allows a smooth integration of analog and digital chains. Notably, for A-law and μ-law codecs the standards define a sequence which has a 0 dBm0 output.
The unit dBm0 is used to describe levels of digital as well as analog signals and is derived from its counterpart dBm. Although today dBm0 may be considered supplanted by the similar unit decibels relative to full scale (dBFS), dBm0 can be viewed as connecting both the old world of analog telecommunication and the new world of digital communication. The 0 dBm0 level corresponds to the digital milliwatt (DMW) and is defined as the absolute power level, at a digital reference point, of the same signal that would be measured as that absolute power level, in dBm, if the reference point were analog.
The absolute power in dBm scale for a power in milliwatts (mW) is defined as:
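P_dBm = 10 · log10(P / 1 mW)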
When the test impedance is 600 Ω resistive, 0 dBm can be referred to a voltage of 0.775 V, which results in a reference active power of 1 mW. Then corresponds to an overload level of approximately in the analog-to-digital conversion.
Given a sinusoid signal of RMS, the power at a zero transmission level point is:
and the voltage level at the ZLP is:
TIA-810 characterizes:
When a 0 dBm0 analog signal is applied to the coder input, a 0 dBm0 digital code is present at the digital reference. In general, when a 0 dBm0 digital code is applied to the decoder, a 0 dBm0 analog signal appears at the decoder output. More specifically, when the periodic sequence as given in Table 2, in either mu-law or A-law as appropriate, is applied to the decoder at the digital reference point, a 0 dBm0 sine-wave signal appears at the decoder output. 0 dBm0 is 3.14 (A-law) or 3.17 (mu-law) dB below digital full scale.
In all standards, dBm0 is always an RMS unit. Peaks are described in a different way, sometimes by mentioning the margin to overload or clipping.
The nominal downlink level in mobile phone telecommunication at the point of interconnection is .
Comparison to dBFS
Digital signals in the abstract digital realm do not necessarily inherently represent any type of measurable physical unit. They are not necessarily relative to any specific reference power level, and thus they need not be expressed as dBm0. But the early pioneers of telephonometry gave us the pseudo-digital unit of dBm0, which persists.
A more commonly used unit today for digital signal levels is dB Full Scale or dBFS. The relationship between dBm0 and dBFS is unfortunately ambiguous. It depends how RMS and peak levels in dBFS are defined.
The ambiguity is if a full scale sinusoidal in a digital system is defined to have an RMS level of or if it should be defined to have a RMS value of , equal to the dBFS peak value. Today, the interpretation by many companies tend to go towards a definition that a full scale sinusoidal is and . The only signal that can hold according to this definition, is a fully saturated square wave. For the relationship between dBm0 and dBFS, this means that is equivalent to and .
This also means that the commonly used POI (Point of Interconnect) level of can be transformed to in an A-law codec system, or in a μ-law codec system (using the definition of a full scale sinusoidal being and .
Some companies, however, define dBFS RMS to equal dBFS peak for sinusoidal signals; examples are Qualcomm and Knowles (and other digital MEMS microphone companies). This has consequences when calculating crest factors for speech or noise, because the difference between peak and RMS values in the analog domain then does not correspond to the difference between peak and RMS levels in the digital domain.
Other companies like Adobe (software creator of Adobe Audition) and Listen Inc. (software creator of SoundCheck) offer the possibility to choose which dBFS rms definition you want to use in the program.
Notes
References
Sources
ETSI
G.711
Radio frequency propagation | DBm0 | [
"Physics"
] | 908 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
3,813,953 | https://en.wikipedia.org/wiki/Rustproofing | Rustproofing is the prevention or delay of rusting of iron and steel objects, or the permanent protection against corrosion. Typically, the protection is achieved by a process of surface finishing or treatment. Depending on mechanical wear or environmental conditions, the degradation may not be stopped completely, unless the process is periodically repeated. The term is particularly used in the automobile industry.
Vehicle rustproofing
Factory
In the factory, car bodies are protected with special chemical formulations.
Typically, phosphate conversion coatings were used. Some firms galvanized part or all of their car bodies before the primer coat of paint was applied. If a car is body-on-frame, then the frame (chassis) must also be rustproofed. In traditional automotive manufacturing of the early- and mid-20th century, paint was the final part of the rustproofing barrier between the body shell and the atmosphere, except on the underside. On the underside, an underseal rubberized or PVC-based coating was often sprayed on. These products will be breached eventually and can lead to unseen corrosion that spreads underneath the underseal. Old 1960s and 1970s rubberized underseal can become brittle on older cars and is particularly liable to this.
The first electrodeposition primers were developed in the 1950s, but were found to be impractical for widespread use. Revised cathodic automotive electrocoat primer systems introduced in the 1970s markedly reduced the problem of corrosion that had been experienced by a vast number of automobiles in the first seven decades of automobile manufacturing. Termed e-coat, electrocoat automotive primers are applied by totally submerging the assembled car body in a large tank that contains the waterborne e-coat, and the coating is applied through cathodic electrodeposition. This assures nearly 100% coverage of all metal surfaces by the primer. The coating chemistry is a waterborne enamel based on epoxy, an aminoalcohol adduct, and blocked isocyanate, which all crosslink on baking to form an epoxy-urethane resin system.
E-coat resin technology, combined with the excellent coverage provided by electrodeposition, provides one of the more effective coatings for protecting steel from corrosion. For modern automobile manufacturing after the 1990s, nearly all cars use e-coat technology as base foundation for their corrosion protection coating system.
Aftermarket
Aftermarket kits are available to apply rustproofing compounds both to external surfaces and inside enclosed sections, for example sills/rocker panels (see monocoque), through either existing or specially drilled holes. The compounds are usually wax-based and can be applied by aerosol can, brush, low pressure pump up spray, or compressor fed spray gun.
An alternative for sills/rocker panels is to block the drain holes, fill the cavity with wax, and then drain most of it out (the excess can be stored and reused), leaving a complete coating inside. Phosphoric acid based rust killers/neutralizers can also be painted on already rusted areas before anti-rust wax is applied. Loose or thick rust must be removed before anti-rust wax such as Waxoyl or a similar product is used.
Structural rust (affecting structural components which must withstand considerable forces) should be cut back to sound metal and new metal welded in, or the affected part should be completely replaced. Wax may not penetrate spot-welded seams or thick rust effectively. A thinner (less viscous) mineral-oil-based anti-rust product followed by anti-rust wax can be more effective. Application is easier in hot weather rather than cold because even when pre-heated, the products are viscous and don't flow and penetrate well on cold metal.
Aftermarket "underseals" can also be applied. They are particularly useful in high-impact areas like wheel arches. There are two types - drying and non-drying. The hardening and drying products are also known as "Shutz" and "Anti Stone Chip" with similar potential problems to the original factory underseals. These are available in black, white, grey and red colors and can be overpainted. These are best used for the area below the bumpers on cars that have painted metal body work in that location, rather than modern plastic deep bumpers. The bitumen based products do not dry and harden, so they cannot become brittle, like the confusingly named "Underbody Seal with added Waxoyl" made by Hammerite, which can be supplied in a Shutz type cartridge labelled "Shutz" for use with a Shutz compressor fed gun. Mercedes bodyshops use a similar product supplied by Mercedes-Benz. There are many manufacturers of similar products at varying prices, these are regularly group tested and reviewed in the classic car magazine press.
The non-drying types contain anti-rust chemicals similar to those in anti-rust waxes. Petroleum-based rust inhibitors provide several benefits, including the ability to creep over metal, covering missed areas. Additionally, a solvent-free petroleum rust inhibitor remains on the metal surface, sealing it from rust-accelerating water and oxygen. Other benefits of petroleum-based rust protection include the self-healing properties that come naturally to oils, which help undercoatings resist abrasion caused by road sand and other debris. The disadvantage of using a petroleum-based coating is the film left on surfaces, which makes these products too messy for topside exterior application and a slip hazard in areas where people may step on it. They also cannot be painted over.
There are aftermarket electronic "rustproofing" technologies claimed to prevent corrosion by "pushing" electrons into the car body, to limit the combination of oxygen and iron to form rust. The loss of electrons in paint is also claimed to be the cause of “paint oxidisation”, and the electronic system is also supposed to protect the paint. However, there is no peer-reviewed scientific testing and validation supporting the use of these devices, and corrosion control professionals find they do not work.
Rate of corrosion
The rate at which vehicles corrode is dependent upon:
Local climate and use of ice-melting chemicals (salt) upon the roads
Atmospheric pollution, such as acid rain or salt spray, which can cause paint damage
Quality, thickness, and composition of metal used, often an alloy of mild steel
Improper use of some dissimilar metals, which can accelerate the rusting of steel bodywork through electrolytic corrosion
Design of "rust traps" (nooks and crannies that collect road dirt and water)
Particular process of rustproofing used
Plastic/under-seal protection on the car underside
Exposure to salt water, which strips off the protective paint and also causes rust much more quickly than ordinary rain water would
Rustproof alloys
Stainless steel, also known as "inox steel", does not stain, corrode, or rust as easily as ordinary steel. Pierre Berthier, a Frenchman, was the first to notice the rust-resistant properties of mixing chromium with alloys in 1821, which led to new metal treating and metallurgy processes, and eventually the creation of usable stainless steel. DeLorean cars had a fiberglass body structure with a steel backbone chassis, along with external brushed stainless-steel body panels.
Some cars have been made from aluminum, which may be more corrosion resistant than steel when exposed to water, but not to salt or certain other chemicals.
Weathering steel, often referred to by the genericised trademark COR-TEN steel and sometimes written without the hyphen as corten steel, is a group of steel alloys which were developed to eliminate the need for painting by forming a stable external layer of rust. U.S. Steel (USS) holds the registered trademark on the name COR-TEN. The name COR-TEN refers to the two distinguishing properties of this type of steel: corrosion resistance and tensile strength.
See also
Corrosion
Corrosion engineering
Hot-dip galvanizing
Tinning
Iron pillar of Delhi
Cathodic Protection for Vehicles
Cosmoline
Corrosion inhibitor
References
Vehicle technology
Corrosion prevention
Materials science | Rustproofing | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,638 | [
"Corrosion prevention",
"Applied and interdisciplinary physics",
"Materials science",
"Corrosion",
"Vehicle technology",
"Mechanical engineering by discipline",
"nan"
] |
3,814,851 | https://en.wikipedia.org/wiki/Steam%E2%80%93electric%20power%20station | A steam–electric power station is a power station in which the electric generator is steam-driven: water is heated, evaporates, and spins a steam turbine which drives an electric generator. After it passes through the turbine, the steam is condensed in a condenser. The greatest variation in the design of steam–electric power plants is due to the different fuel sources.
Almost all coal, nuclear, geothermal, solar thermal electric power plants, waste incineration plants as well as many natural gas power plants are steam–electric. Natural gas is frequently combusted in gas turbines as well as boilers. The waste heat from a gas turbine can be used to raise steam, in a combined cycle plant that improves overall efficiency.
Worldwide, most electric power is produced by steam–electric power plants. The only widely used alternatives are photovoltaics, direct mechanical power conversion as found in hydroelectric and wind turbine power as well as some more exotic applications like tidal power or wave power and finally some forms of geothermal power plants. Niche applications for methods like betavoltaics or chemical power conversion (including electrochemistry) are only of relevance in batteries and atomic batteries. Fuel cells are a proposed alternative for a future hydrogen economy.
History
Reciprocating steam engines have been used for mechanical power sources since the 18th century, with notable improvements being made by James Watt. The very first commercial central electrical generating stations, in New York and London in 1882, also used reciprocating steam engines. As generator sizes increased, eventually turbines took over due to higher efficiency and lower cost of construction. By the 1920s any central station larger than a few thousand kilowatts would use a turbine prime mover.
Efficiency
The efficiency of a conventional steam–electric power plant, defined as energy produced by the plant divided by the heating value of the fuel consumed by it, is typically 33 to 48%, limited as all heat engines are by the laws of thermodynamics (see Carnot cycle). The rest of the energy must leave the plant in the form of heat. This waste heat can be removed by cooling water or in cooling towers. (Cogeneration uses the waste heat for district heating.) An important class of steam power plants is associated with desalination facilities, which are typically found in desert countries with large supplies of natural gas. In these plants freshwater and electricity are equally important products.
Since the efficiency of the plant is fundamentally limited by the ratio of the absolute temperatures of the steam at turbine input and output, efficiency improvements require use of higher temperature, and therefore higher pressure, steam. Historically, other working fluids such as mercury have been experimentally used in a mercury vapour turbine power plant, since these can attain higher temperatures than water at lower working pressures. However, poor heat transfer properties and the obvious hazard of toxicity have ruled out mercury as a working fluid.
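The temperature-ratio limit described above can be illustrated with a short calculation. The sketch below is illustrative only: the steam and condenser temperatures are assumed round numbers rather than data for any particular plant, and real plants achieve substantially less than the ideal figure because the Rankine cycle is not a Carnot cycle and because of component losses.

```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Ideal efficiency 1 - T_cold/T_hot, using absolute (kelvin) temperatures."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Assumed turbine inlet temperatures (degrees C) against a 30 degree C condenser.
for t_steam in (370.0, 600.0):
    print(f"steam at {t_steam:.0f} C -> Carnot limit {carnot_efficiency(t_steam, 30.0):.1%}")
```

Raising the assumed steam temperature from 370 °C to 600 °C lifts the ideal limit from roughly 53% to roughly 65%, which is why higher-temperature (and therefore higher-pressure) steam is pursued.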
Another option is using a supercritical fluid as a working fluid. Supercritical fluids behave similar to gases in some respects and similar to liquids in others. Supercritical water or supercritical carbon dioxide can be heated to much higher temperatures than are achieved in conventional steam cycles thus allowing for higher thermal efficiency. However, these substances need to be kept at high pressures (above the critical pressure) to maintain supercriticality and there are issues with corrosion.
Components of a steam plant
Condenser
Steam–electric power plants use a surface condenser cooled by water circulating through tubes. The steam which was used to turn the turbine is exhausted into the condenser and is condensed as it comes into contact with the tubes full of cool circulating water. The condensed steam, commonly referred to as condensate, is withdrawn from the bottom of the condenser.
For best efficiency, the temperature in the condenser must be kept as low as practical in order to achieve the lowest possible pressure in the condensing steam. Since the condenser temperature can almost always be kept significantly below 100 °C where the vapor pressure of water is much less than atmospheric pressure, the condenser generally works under vacuum. Thus leaks of non-condensable air into the closed loop must be prevented. Plants operating in hot climates may have to reduce output if their source of condenser cooling water becomes warmer; unfortunately this usually coincides with periods of high electrical demand for air conditioning. If a good source of cooling water is not available, cooling towers may be used to reject waste heat to the atmosphere. A large river or lake can also be used as a heat sink for cooling the condensers; temperature rises in naturally occurring waters may have undesirable ecological effects, but may also incidentally improve yields of fish in some circumstances.
Feedwater heater
In the case of a conventional steam–electric power plant using a drum boiler, the surface condenser removes the latent heat of vaporization from the steam as it changes states from vapor to liquid. The condensate pump then pumps the condensate water through a feedwater heater, which raises the temperature of the water by using extraction steam from various stages of the turbine.
Preheating the feedwater reduces the irreversibilities involved in steam generation and therefore improves the thermodynamic efficiency of the system. This reduces plant operating costs and also helps to avoid thermal shock to the boiler metal when the feedwater is introduced back into the steam cycle.
Boiler
Once this water is inside the boiler or steam generator, the process of adding the latent heat of vaporization begins. The boiler transfers energy to the water by the chemical reaction of burning some type of fuel. The water enters the boiler through a section in the convection pass called the economizer. From the economizer it passes to the steam drum, from where it goes down the downcomers to the lower inlet waterwall headers. From the inlet headers, the water rises through the waterwalls. Some of it is turned into steam due to the heat being generated by the burners located on the front and rear waterwalls (typically). From the waterwalls, the water/steam mixture enters the steam drum and passes through a series of steam and water separators and then dryers inside the steam drum. The steam separators and dryers remove water droplets from the steam, because liquid water carried over into the turbine can produce destructive erosion of the turbine blades. The separated water returns to the downcomers and the cycle through the waterwalls is repeated. This process is known as natural circulation.
Geothermal plants need no boiler since they use naturally occurring steam sources. Heat exchangers may be used where the geothermal steam is very corrosive or contains excessive suspended solids. Nuclear plants also boil water to raise steam, either directly passing the working steam through the reactor or else using an intermediate heat exchanger.
Superheater
After the steam is conditioned by the drying equipment inside the drum, it is piped from the upper drum area into an elaborate set up of tubing in different areas of the boiler, the areas known as superheater and reheater. The steam vapor picks up energy and is superheated above the saturation temperature. The superheated steam is then piped through the main steam lines to the valves of the high-pressure turbine.
See also
Boiler
Combined heat and power
Cooling tower system
Flue gas stacks
Fossil fuel power plant
Geothermal power
Nuclear power plant
Power station
Thermal power station
Water-tube boiler
References
External links
Power plant diagram
Power Plant Reference Books
Chemical process engineering
Energy conversion
Power station technology
Turbo generators | Steam–electric power station | [
"Chemistry",
"Engineering"
] | 1,543 | [
"Chemical process engineering",
"Chemical engineering"
] |
3,816,147 | https://en.wikipedia.org/wiki/Carbogen | Carbogen, also called Meduna's Mixture after its inventor Ladislas Meduna, is a mixture of carbon dioxide and oxygen gas. Meduna's original formula was 30% CO2 and 70% oxygen, but the term carbogen can refer to any mixture of these two gases, from 1.5% to 50% CO2.
Mechanism
When carbogen is inhaled, the increased level of carbon dioxide causes a perception, both psychological and physiological, of suffocation because the brain interprets an increase in blood carbon dioxide as a decrease in oxygen level, which would generally be the case under natural circumstances. Inhalation of carbogen causes the body to react as if it were not receiving sufficient oxygen: breathing quickens and deepens, heart rate increases, and cells release alkaline buffering agents to remove carbonic acid from the bloodstream.
Psychotherapy
Carbogen was once used in psychology and psychedelic psychotherapy to determine whether a patient would react to an altered state of consciousness or to a sensation of loss of control. Individuals who reacted especially negatively to carbogen were generally not administered other psychotherapeutic drugs for fear of similar reactions. Meduna administered carbogen to his patients to induce abreaction, which, with proper preparation and administration, he found could help clients become free of their neuroses. Carbogen users are said to have discovered unconscious contents of their minds, with the experience clearing away repressed material and freeing the subject for a smoother, more profound psychedelic experience.
One subject reported:
"After the second breath came an onrush of color, first a predominant sheet of beautiful rosy-red, following which came successive sheets of brilliant color and design, some geometric, some fanciful and graceful …. Then the colors separated; my soul drawing apart from the physical being, was drawn upward seemingly to leave the earth and to go upward where it reached a greater Spirit with Whom there was a communion, producing a remarkable, new relaxation and deep security."
Carbogen is rarely used in therapy anymore, largely due to the decline in psychedelic psychotherapy.
Uses
A carbogen mixture of 95% oxygen and 5% carbon dioxide can be used as part of the early treatment of central retinal artery occlusion. On the same premise, it has also been proposed for the management of sudden sensorineural hearing loss, as it can increase blood flow to the inner ear and possibly relieve spasm of the internal auditory artery.
Carbogen is used in biology research to study in vivo oxygen and carbon dioxide flows, as well as to oxygenate the aCSF solution and stabilize the pH to about 7.4 in research on acute brain slices.
Its use in combination with nicotinamide is also being investigated in conjunction with radiation therapy in the treatment strategy of certain cancers. Because increased tumor oxygenation improves the cell-killing effects of radiation, it is thought that the inhalation of these agents during radiation therapy could increase its effectiveness.
See also
References
Experimental medical treatments
Gases
Psychedelic drug research
Anxiogenics
Industrial gases | Carbogen | [
"Physics",
"Chemistry"
] | 629 | [
"Matter",
"Phases of matter",
"Industrial gases",
"Chemical process engineering",
"Statistical mechanics",
"Gases"
] |
3,816,297 | https://en.wikipedia.org/wiki/African%20Well%20Fund | African Well Fund is a non-profit organization dedicated to raising funds for the construction and maintenance of freshwater wells throughout impoverished sections of Africa. It was founded in October 2002 by a group of U2 fans who were inspired by frontman Bono's May 2002 visit to poor sections of Africa along with former U.S. Secretary of the Treasury Paul O'Neill. The organization was inspired by Bono's charitable work throughout Africa, but is not directly connected to the band.
The organization is partnered with Africare, and is staffed entirely by volunteers to minimize overhead.
History
In 2002, fans of the band U2 were inspired by Bono's visit to Uganda and began raising money for a well in Africa. By 2005, the nonprofit group had 135 members and raised more than $110,000.
Funds donated to the African Well Fund are used by Africare, which works with local communities. Local communities that have a well installed by the project select a water user committee to monitor and maintain the facility. The Africare-connected representatives who help set up the wells use that as an opportunity to also educate the community about the issues of HIV and AIDS.
For Bono's 50th birthday, U2 fans attempted to raise US$50,000 for the Buhara District in Zimbabwe.
References
External links
The African Well Fund's website.
Charities based in Africa
Development charities based in the United States
Water supply | African Well Fund | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 279 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
3,816,650 | https://en.wikipedia.org/wiki/Commercial%20use%20of%20space | Space economy refers to the set of activities, industries, technologies, services, and resources that generate economic value through the exploration, understanding, management, and utilization of outer space.
Commercial satellite use began in 1962 with Telstar 1, transmitting TV signals across the Atlantic Ocean. Syncom 3 expanded possibilities in 1964, broadcasting the Olympics. NASA's TIROS satellites advanced meteorological research, while Intelsat I in 1965 showed commercial viability. Later, France's Arianespace and USA's Iridium Communications furthered satellite services. By 2004, global investment in all space sectors was estimated to be US$50.8 billion. As of 2010, 31% of all space launches were commercial. By the year 2035, the space economy is projected to have grown to $1.8 trillion.
The commercial spaceflight sector primarily generates revenue by launching satellites into Earth's orbit, facilitated by providers deploying satellites into Low Earth Orbit and Geostationary Earth Orbit. The Federal Aviation Administration (FAA) licenses six U.S. spaceports and oversees commercial rocket launches, with global capacity expanding from sites in Russia, France, and China. Investment in reusable launch vehicles by companies like SpaceX and Blue Origin is driving innovation in this sector. In 2022, 74 FAA-licensed commercial space operations were conducted, and this number is expected to double in the near future.
Commercial satellite manufacturing encompasses non-military, civilian, governmental, and non-profit satellite production along with ground equipment manufacturing, supporting satellite operations, and transponder leasing providing satellite access. Satellite subscription services offer access to a variety of television channels (such as DirecTV and Dish Network), radio stations (like SiriusXM), and other media content through satellite transmission. Satellite imagery provides detailed views of Earth, sold by imaging companies to governments and businesses like Apple Maps. Satellite telecommunications enable Internet services globally. Satellite navigation systems use signals from satellites for precise positioning and timing. Space tourism ventures (led by SpaceX, Virgin Galactic and Blue Origin) envision recreational human space travel. Commercial space resource recovery involves extracting materials from asteroids and other celestial bodies for use in space or on Earth.
Space commerce regulation has historically faced challenges regarding property rights in space, but legislation like the U.S. Commercial Space Launch Competitiveness Act aims to clarify ownership and encourage commercial space exploration.
History
The first commercial use of satellites may have been the Telstar 1 satellite, launched in 1962, which was the first privately sponsored space launch, funded by AT&T and Bell Telephone Laboratories. Telstar 1 was capable of relaying television signals across the Atlantic Ocean, and was the first satellite to transmit live television, telephone, fax, and other data signals. Two years later, the Hughes Aircraft Company developed the Syncom 3 satellite, a geosynchronous communications satellite, leased to the Department of Defense. Commercial possibilities of satellites were further realized when the Syncom 3, orbiting near the International Date Line, was used to telecast the 1964 Olympic Games from Tokyo to the United States.
Between 1960 and 1966, the U.S. National Aeronautics and Space Administration (NASA) launched a series of early weather satellites known as Television Infrared Observation Satellites (TIROS). These satellites greatly advanced meteorology worldwide, as satellite imagery was used for better forecasting, for both public and commercial interests.
On April 6, 1965, the Hughes Aircraft Company placed the Intelsat I communications satellite in geosynchronous orbit over the Atlantic Ocean. Intelsat I was built for the Communications Satellite Corporation (COMSAT), and demonstrated that satellite-based communication was commercially feasible. Intelsat I allowed for near-instantaneous contact between Europe and North America by handling television, telephone and fax transmissions. Two years later, the Soviet Union launched the Orbita satellite, which provided television signals across Russia, and started the first national satellite television network. Similarly, the 1972 Anik A satellite, launched by Telesat Canada, allowed the Canadian Broadcasting Corporation to reach northern Canada for the first time.
In 1980, Europe's Arianespace became the world's first commercial launch service provider.
Beginning in 1997, Iridium Communications began launching a series of satellites known as the Iridium satellite constellation, which provided the first satellites for direct satellite telephone service.
Spaceflight
The commercial spaceflight industry derives the bulk of its revenue from the launching of satellites into the Earth's orbit. Commercial launch providers typically place private and government satellites into low Earth orbit (LEO) and geosynchronous Earth orbit (GEO).
The Federal Aviation Administration (FAA) has licensed six commercial spaceports in the United States: Wallops Flight Facility, Kodiak Launch Complex, Spaceport Florida, Kennedy Space Center, Cape Canaveral Space Force Station, and the Vandenberg Air Force Base. Launch sites within Russia, France, and China have added to the global commercial launch capacity. The Delta IV, Atlas V, and Falcon family of launch vehicles are made available for commercial ventures for the United States, while Russia promotes eight families of vehicles.
Between 1996 and 2002, 245 launches were made for commercial ventures while government (non-classified) launches only totaled 167 for the same period. Commercial space flight has spurred investment into the development of an efficient reusable launch vehicle (RLV) which can place larger payloads into orbit. Several companies such as SpaceX and Blue Origin are currently developing new RLV designs.
In the United States, the Office of Commercial Space Transportation (generally referred to as FAA/AST or simply AST) is the branch of the Federal Aviation Administration (FAA) that approves any commercial rocket launch operations—that is, any launches that are not classified as model, amateur, or "by and for the government." In fiscal year 2022, there were 74 FAA-licensed commercial space operations, which includes both launches and reentries. In 2023, the FAA predicted that commercial launches it licenses could more than double in the next several years.
Satellites and equipment
Satellite manufacturing
Commercial satellite manufacturing is defined by the United States government as satellites manufactured for civilian, government, or non-profit use. Not included are satellites constructed for military use, nor for activities associated with any human space flight program. Between the years of 1996 and 2002, satellite manufacturing within the United States experienced an annual growth of 11%. The rest of the world experienced higher growth levels of around 13%.
Ground equipment manufacturing
Operating satellites communicate via receivers and transmitters on Earth. The manufacturing of satellite ground station communication terminals (including VSATs), mobile satellite telephones, and home television receivers are a part of the ground equipment manufacturing sector. This sector grew through the latter half of the 1990s as it manufactured equipment for the satellite services sector. Between 1996 and 2002, this industry saw a 14% annual increase.
Satellite imagery
Satellite imagery (also Earth observation imagery or spaceborne photography) are images of Earth or other planets collected by imaging satellites operated by governments and businesses around the world. Satellite imaging companies sell images by licensing them to governments and businesses such as Apple Maps and Google Maps.
Satellite telecommunications
In 1994, DirecTV debuted direct broadcast satellite by introducing a signal receiving dish 18 inches in diameter. In 1996, Astro started in Malaysia with the launch of the MEASAT satellite. In November 1999, the Satellite Home Viewer Improvement Act became law, and local stations were then made available in satellite channel packages, fueling the industry's growth in the years that followed. By the end of 2000, DTH subscriptions totaled over 67 million.
Satellite radio was pioneered by XM Satellite Radio and Sirius Satellite Radio. XM's first satellite was launched on March 18, 2001 and its second on May 8, 2001. Its first broadcast occurred on September 25, 2001, nearly four months before Sirius. Sirius launched the initial phase of its service in four cities on February 14, 2002, expanding to the rest of the contiguous United States on July 1, 2002. The two companies spent over $3 billion combined to develop satellite radio technology, build and launch the satellites, and for various other business expenses.
Satellite internet is also an emerging market, as satellites can be used to transmit and receive Internet services from space to any place on Earth. This enables its use for markets such as cruise ships, long-haul buses, flights and rural areas. Starlink is a notable example of such a service, offered by SpaceX.
Transponder leasing
Businesses that operate satellites often lease or sell access to their satellites to data relay and telecommunication firms. This service is often referred to as transponder leasing. Between 1996 and 2002, this industry experienced a 15% annual growth. The United States accounts for about 32% of the world's transponder market.
Satellite navigation
A satellite navigation or satnav system is a system that uses satellites to provide autonomous geo-spatial positioning. It allows small electronic receivers to determine their location (longitude, latitude, and altitude/elevation) to high precision (within a few centimeters to metres) using time signals transmitted along a line of sight by radio from satellites. The system can be used for providing position, navigation or for tracking the position of something fitted with a receiver (satellite tracking). The signals also allow the electronic receiver to calculate the current local time to high precision, which allows time synchronization. These uses are collectively known as Positioning, Navigation and Timing (PNT). Satnav systems operate independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the positioning information generated.
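To make the positioning idea above concrete, here is a rough sketch of range-based position fixing using an iterative least-squares fit. The satellite coordinates and receiver position are invented numbers rather than real ephemerides, the ranges are assumed to have already been derived from signal travel time, and the receiver clock bias that real GNSS receivers must estimate alongside position is deliberately ignored to keep the example short.

```python
import numpy as np

# Hypothetical satellite positions in km (illustrative, not real orbits).
sats = np.array([
    [15600.0,  7540.0, 20140.0],
    [18760.0,  2750.0, 18610.0],
    [17610.0, 14630.0, 13480.0],
    [19170.0,   610.0, 18390.0],
])

# Pretend the receiver sits here and derive the ranges it would measure.
true_position = np.array([1111.0, 2222.0, 3333.0])
measured_ranges = np.linalg.norm(sats - true_position, axis=1)

# Gauss-Newton: linearize the range equations around the current estimate
# and solve a small least-squares problem for a position correction.
estimate = np.zeros(3)
for _ in range(10):
    offsets = estimate - sats
    predicted = np.linalg.norm(offsets, axis=1)
    residuals = predicted - measured_ranges
    jacobian = offsets / predicted[:, None]   # rows are unit line-of-sight vectors
    correction, *_ = np.linalg.lstsq(jacobian, -residuals, rcond=None)
    estimate = estimate + correction

print(estimate)   # converges to approximately [1111, 2222, 3333]
```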
Space tourism
Space tourism is human space travel for recreational purposes. There are several different types of space tourism, including orbital, suborbital and lunar space tourism. Work also continues towards developing suborbital space tourism vehicles. This is being done by aerospace companies like Blue Origin and Virgin Galactic.
Commercial recovery of space resources
Commercial recovery of space resources is the exploitation of raw materials from asteroids, comets and other space objects, including near-Earth objects. Minerals and volatiles could be mined then used in space for in-situ utilization (e.g., construction materials and rocket propellant) or taken back to Earth. These include gold, iridium, silver, osmium, palladium, platinum, rhenium, rhodium, ruthenium and tungsten for transport back to Earth; iron, cobalt, manganese, molybdenum, nickel, aluminium, and titanium for construction; water and oxygen to sustain astronauts; as well as hydrogen, ammonia, and oxygen for use as rocket propellant.
There are several commercial enterprises working in this field, including ispace Inc. and Moon Express.
The first in-space transaction of resources has been contracted by NASA, which selected four companies to sell it lunar regolith collected on the Moon.
Regulation
Beyond the many technological factors that could make space commercialization more widespread, it has been suggested that the lack of private property, the difficulty or inability of individuals in establishing property rights in space, has been an impediment to the development of space for both human habitation and commercial development.
Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which has been ratified by all spacefaring nations.
On November 25, 2015, President Obama signed the U.S. Commercial Space Launch Competitiveness Act (H.R. 2262) into law. The law recognizes the right of U.S. citizens to own space resources they obtain and encourages the commercial exploration and utilization of resources from asteroids. According to the law under 51 U.S.C. § 51303:
See also
Space launch market competition
Commercial astronaut
Private spaceflight
Satellite Internet access
Space industry
Space manufacturing
Space-based industry
Space pollution
Space tourism
References
Futron Corporation (2001) "Trends in Space Commerce". Retrieved January 24, 2006
Further reading
External links
Ethical Issues
Lunar Land Grab
Office of Space Commercialization
Property Rights
Government Policy
Mir Space Station Privatization
Satellite broadcasting
Space industry
1962 introductions
Space-based economy | Commercial use of space | [
"Astronomy",
"Engineering"
] | 2,481 | [
"Space industry",
"Telecommunications engineering",
"Outer space",
"Satellite broadcasting"
] |
3,816,902 | https://en.wikipedia.org/wiki/Supercavitating%20propeller | The supercavitating propeller is a variant of a propeller for propulsion in water, where supercavitation is actively employed to gain increased speed by reducing friction. They are being used for military purposes and for high performance racing boats as well as model racing boats.
This article distinguishes a supercavitating propeller from a subcavitating propeller running under supercavitating conditions. In general, subcavitating propellers become less efficient when they are running under supercavitating conditions.
The supercavitating propeller operates submerged with the entire diameter of the blade below the water line. Its blades are wedge-shaped to force cavitation at the leading edge and to avoid water skin friction along the whole forward face. As the cavity collapses well behind the blade, the supercavitating propeller avoids the spalling damage due to cavitation that is a problem with conventional propellers.
An alternative to the supercavitating propeller is the surface piercing, or ventilated propeller. These propellers are designed to intentionally leave the water and entrain atmospheric air to fill the void, which means that the resulting gas layer on the forward face of the propeller blade consists of air instead of water vapour. Less energy is thus used, and the surface-piercing propeller generally enjoys lower drag than the supercavitating principle. The surface-piercing propeller also has wedge-shaped blades, and propellers may be designed that can operate in both supercavitating and surface-piercing mode.
Supercavitating propellers were developed to usefulness for very fast military vessels by Vosper & Company.
The pioneer of this technology and other high speed offshore boating technologies was Albert Hickman (1877–1957), early in the 20th century. His Sea Sled designs used a surface piercing propeller.
See also
Axial fan design
Boston Whaler
Cathedral hull
Supercavitating torpedo
References
Damned by Faint Praise, article in Wooden Boat about Albert Hickman
Albert Hickman biography
Propellers
Shipbuilding | Supercavitating propeller | [
"Engineering"
] | 395 | [
"Shipbuilding",
"Marine engineering"
] |
2,840,305 | https://en.wikipedia.org/wiki/Computer-assisted%20proof | A computer-assisted proof is a mathematical proof that has been at least partially generated by computer.
Most computer-aided proofs to date have been implementations of large proofs-by-exhaustion of a mathematical theorem. The idea is to use a computer program to perform lengthy computations, and to provide a proof that the result of these computations implies the given theorem. In 1976, the four color theorem was the first major theorem to be verified using a computer program.
Attempts have also been made in the area of artificial intelligence research to create smaller, explicit, new proofs of mathematical theorems from the bottom up using automated reasoning techniques such as heuristic search. Such automated theorem provers have proved a number of new results and found new proofs for known theorems. Additionally, interactive proof assistants allow mathematicians to develop human-readable proofs which are nonetheless formally verified for correctness. Since these proofs are generally human-surveyable (albeit with difficulty, as with the proof of the Robbins conjecture) they do not share the controversial implications of computer-aided proofs-by-exhaustion.
Methods
One method for using computers in mathematical proofs is by means of so-called validated numerics or rigorous numerics. This means computing numerically yet with mathematical rigour. One uses set-valued arithmetic in order to ensure that the set-valued output of a numerical program encloses the solution of the original mathematical problem. This is done by controlling, enclosing and propagating round-off and truncation errors using, for example, interval arithmetic. More precisely, one reduces the computation to a sequence of elementary operations such as addition and multiplication. In a computer, the result of each elementary operation is rounded off to the computer precision. However, one can construct an interval given by upper and lower bounds on the result of an elementary operation. One then proceeds by replacing numbers with intervals and performing the elementary operations between such intervals of representable numbers.
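As a toy illustration of the interval idea, the sketch below widens each computed endpoint outward by one unit in the last place (via math.nextafter, available in Python 3.9 and later) as a simple, slightly conservative stand-in for directed rounding; the resulting intervals are guaranteed to contain the exact real-number results.

```python
import math

def round_down(x: float) -> float:
    return math.nextafter(x, -math.inf)   # one ulp toward minus infinity

def round_up(x: float) -> float:
    return math.nextafter(x, math.inf)    # one ulp toward plus infinity

class Interval:
    """Closed interval [lo, hi] enclosing the exact result of a computation."""
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi

    def __add__(self, other: "Interval") -> "Interval":
        # Round-to-nearest error is at most half an ulp, so widening each
        # endpoint by a full ulp keeps the exact sum inside the interval.
        return Interval(round_down(self.lo + other.lo), round_up(self.hi + other.hi))

    def __mul__(self, other: "Interval") -> "Interval":
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(round_down(min(products)), round_up(max(products)))

    def __repr__(self) -> str:
        return f"[{self.lo!r}, {self.hi!r}]"

# 0.1 and 0.2 are not exactly representable in binary floating point, yet the
# computed enclosure of their sum is guaranteed to contain the exact value 0.3.
tenth = Interval(round_down(0.1), round_up(0.1))
fifth = Interval(round_down(0.2), round_up(0.2))
print(tenth + fifth)
```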
Philosophical objections
Computer-assisted proofs are the subject of some controversy in the mathematical world, with Thomas Tymoczko first to articulate objections. Those who adhere to Tymoczko's arguments believe that lengthy computer-assisted proofs are not, in some sense, 'real' mathematical proofs because they involve so many logical steps that they are not practically verifiable by human beings, and that mathematicians are effectively being asked to replace logical deduction from assumed axioms with trust in an empirical computational process, which is potentially affected by errors in the computer program, as well as defects in the runtime environment and hardware.
Other mathematicians believe that lengthy computer-assisted proofs should be regarded as calculations, rather than proofs: the proof algorithm itself should be proved valid, so that its use can then be regarded as a mere "verification". Arguments that computer-assisted proofs are subject to errors in their source programs, compilers, and hardware can be resolved by providing a formal proof of correctness for the computer program (an approach which was successfully applied to the four color theorem in 2005) as well as replicating the result using different programming languages, different compilers, and different computer hardware.
Another possible way of verifying computer-aided proofs is to generate their reasoning steps in a machine readable form, and then use a proof checker program to demonstrate their correctness. Since validating a given proof is much easier than finding a proof, the checker program is simpler than the original assistant program, and it is correspondingly easier to gain confidence into its correctness. However, this approach of using a computer program to prove the output of another program correct does not appeal to computer proof skeptics, who see it as adding another layer of complexity without addressing the perceived need for human understanding.
Another argument against computer-aided proofs is that they lack mathematical elegance—that they provide no insights or new and useful concepts. In fact, this is an argument that could be advanced against any lengthy proof by exhaustion.
An additional philosophical issue raised by computer-aided proofs is whether they make mathematics into a quasi-empirical science, where the scientific method becomes more important than the application of pure reason in the area of abstract mathematical concepts. This directly relates to the argument within mathematics as to whether mathematics is based on ideas, or "merely" an exercise in formal symbol manipulation. It also raises the question whether, if according to the Platonist view, all possible mathematical objects in some sense "already exist", whether computer-aided mathematics is an observational science like astronomy, rather than an experimental one like physics or chemistry. This controversy within mathematics is occurring at the same time as questions are being asked in the physics community about whether twenty-first century theoretical physics is becoming too mathematical, and leaving behind its experimental roots.
The emerging field of experimental mathematics is confronting this debate head-on by focusing on numerical experiments as its main tool for mathematical exploration.
Theorems proved with the help of computer programs
Inclusion in this list does not imply that a formal computer-checked proof exists, but rather, that a computer program has been involved in some way. See the main articles for details.
See also
References
Further reading
External links
Argument technology
Automated theorem proving
Computer-assisted proofs
Formal methods
Numerical analysis
Philosophy of mathematics | Computer-assisted proof | [
"Mathematics",
"Engineering"
] | 1,067 | [
"Automated theorem proving",
"Computer-assisted proofs",
"Mathematical logic",
"Computational mathematics",
"Formal methods",
"Software engineering",
"Mathematical relations",
"nan",
"Numerical analysis",
"Approximations"
] |
2,841,222 | https://en.wikipedia.org/wiki/Direct%20stiffness%20method | In structural engineering, the direct stiffness method, also known as the matrix stiffness method, is a structural analysis technique particularly suited for computer-automated analysis of complex structures including the statically indeterminate type. It is a matrix method that makes use of the members' stiffness relations for computing member forces and displacements in structures. The direct stiffness method is the most common implementation of the finite element method (FEM). In applying the method, the system must be modeled as a set of simpler, idealized elements interconnected at the nodes. The material stiffness properties of these elements are then, through linear algebra, compiled into a single matrix equation which governs the behaviour of the entire idealized structure. The structure’s unknown displacements and forces can then be determined by solving this equation. The direct stiffness method forms the basis for most commercial and free source finite element software.
The direct stiffness method originated in the field of aerospace. Researchers looked at various approaches for analysis of complex airplane frames. These included elasticity theory, energy principles in structural mechanics, flexibility method and matrix stiffness method. It was through analysis of these methods that the direct stiffness method emerged as an efficient method ideally suited for computer implementation.
History
Between 1934 and 1938, A. R. Collar and W. J. Duncan published the first papers with the representation and terminology for matrix systems that are used today. Aeroelastic research continued through World War II, but publication restrictions from 1938 to 1947 make this work difficult to trace. The second major breakthrough in matrix structural analysis occurred through 1954 and 1955 when professor John H. Argyris systemized the concept of assembling elemental components of a structure into a system of equations. Finally, on November 6, 1959, M. J. Turner, head of Boeing's Structural Dynamics Unit, published a paper outlining the direct stiffness method as an efficient model for computer implementation.
Member stiffness relations
A typical member stiffness relation has the following general form:

Q^m = k^m q^m + Q^om  (1)

where
m = member number.
Q^m = vector of the member's characteristic forces, which are unknown internal forces.
k^m = member stiffness matrix which characterizes the member's resistance against deformations.
q^m = vector of the member's characteristic displacements or deformations.
Q^om = vector of the member's characteristic forces caused by external effects (such as known forces and temperature changes) applied to the member while q^m = 0.
If q^m are member deformations rather than absolute displacements, then Q^m are independent member forces, and in such case (1) can be inverted to yield the so-called member flexibility matrix, which is used in the flexibility method.
System stiffness relation
For a system with many members interconnected at points called nodes, the members' stiffness relations such as Eq.(1) can be integrated by making use of the following observations:
The member deformations q^m can be expressed in terms of system nodal displacements r in order to ensure compatibility between members. This implies that r will be the primary unknowns.
The member forces Q^m help to keep the nodes in equilibrium under the nodal forces R. This implies that the right-hand side of (1) will be integrated into the right-hand side of the following nodal equilibrium equations for the entire system:

R = K r + R^o  (2)

where
R = vector of nodal forces, representing external forces applied to the system's nodes.
K = system stiffness matrix, which is established by assembling the members' stiffness matrices k^m.
r = vector of the system's nodal displacements that can define all possible deformed configurations of the system subject to arbitrary nodal forces R.
R^o = vector of equivalent nodal forces, representing all external effects other than the nodal forces which are already included in the preceding nodal force vector R. This vector is established by assembling the members' Q^om.
Solution
The system stiffness matrix K is square since the vectors R and r have the same size. In addition, it is symmetric because each k^m is symmetric. Once the supports' constraints are accounted for in (2), the nodal displacements are found by solving the system of linear equations (2), symbolically:

r = K^-1 (R − R^o)
Subsequently, the members' characteristic forces may be found from Eq.(1), where q^m can be found from r by compatibility considerations.
The direct stiffness method
It is common to have Eq.(1) in a form where the member-end displacements and forces q^m and Q^m are expressed in directions matching those of r and R. In such case, K and R^o can be obtained by direct summation of the members' matrices k^m and Q^om. The method is then known as the direct stiffness method.
The advantages and disadvantages of the matrix stiffness method are compared and discussed in the flexibility method article.
Example
Breakdown
The first step when using the direct stiffness method is to identify the individual elements which make up the structure.
Once the elements are identified, the structure is disconnected at the nodes, the points which connect the different elements together.
Each element is then analyzed individually to develop member stiffness equations. The forces and displacements are related through the element stiffness matrix which depends on the geometry and properties of the element.
A truss element can only transmit forces in compression or tension. This means that in two dimensions, each node has two degrees of freedom (DOF): horizontal and vertical displacement. The resulting equation contains a four by four stiffness matrix.
A frame element is able to withstand bending moments in addition to compression and tension. This results in three degrees of freedom: horizontal displacement, vertical displacement and in-plane rotation. The stiffness matrix in this case is six by six.
Other elements such as plates and shells can also be incorporated into the direct stiffness method and similar equations must be developed.
Assembly
Once the individual element stiffness relations have been developed they must be assembled into the original structure. The first step in this process is to convert the stiffness relations for the individual elements into a global system for the entire structure. In the case of a truss element, the global form of the stiffness method depends on the angle of the element with respect to the global coordinate system (This system is usually the traditional Cartesian coordinate system).
k^(1) = (EA/L) ×
[  c^2    cs   -c^2   -cs  ]
[  cs    s^2   -cs   -s^2  ]
[ -c^2   -cs    c^2    cs  ]
[ -cs   -s^2    cs    s^2  ]
(for a truss element at angle β, with c = cos β and s = sin β)
Equivalently,
k^(1) = (EA/L) ×
[  c_x^2     c_x c_y   -c_x^2    -c_x c_y ]
[  c_x c_y   c_y^2     -c_x c_y  -c_y^2   ]
[ -c_x^2    -c_x c_y    c_x^2     c_x c_y ]
[ -c_x c_y  -c_y^2      c_x c_y   c_y^2   ]
where c_x = cos β and c_y = sin β are the direction cosines of the truss element (i.e., they are components of a unit vector aligned with the member). This form reveals how to generalize the element stiffness to 3-D space trusses by simply extending the pattern that is evident in this formulation.
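A minimal code sketch of this global-coordinate element matrix follows (Python with NumPy; the modulus, cross-sectional area and node coordinates in the example call are arbitrary illustrative values, not data from any particular structure).

```python
import numpy as np

def element_stiffness(xi, yi, xj, yj, E, A):
    """4x4 stiffness matrix of a 2-D truss element in global coordinates.

    DOF order: [u_i, v_i, u_j, v_j] -- horizontal and vertical displacement
    at node i, then at node j.
    """
    L = np.hypot(xj - xi, yj - yi)        # member length
    c = (xj - xi) / L                     # direction cosine c_x = cos(beta)
    s = (yj - yi) / L                     # direction cosine c_y = sin(beta)
    k = np.array([[ c*c,  c*s, -c*c, -c*s],
                  [ c*s,  s*s, -c*s, -s*s],
                  [-c*c, -c*s,  c*c,  c*s],
                  [-c*s, -s*s,  c*s,  s*s]])
    return (E * A / L) * k

# Example: a 3 m steel bar inclined at 30 degrees to the horizontal.
print(element_stiffness(0.0, 0.0, 3.0 * np.cos(np.pi / 6), 3.0 * np.sin(np.pi / 6),
                        E=200e9, A=1e-4))
```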
After developing the element stiffness matrix in the global coordinate system, they must be merged into a single “master” or “global” stiffness matrix. When merging these matrices together there are two rules that must be followed: compatibility of displacements and force equilibrium at each node. These rules are upheld by relating the element nodal displacements to the global nodal displacements.
The global displacement and force vectors each contain one entry for each degree of freedom in the structure. The element stiffness matrices are merged by augmenting or expanding each matrix in conformation to the global displacement and load vectors.
For element (1), for instance, its 4 × 4 stiffness matrix is placed into the global matrix at the rows and columns corresponding to the degrees of freedom of its two nodes, with zeros everywhere else.
Finally, the global stiffness matrix is constructed by adding the individual expanded element matrices together.
Solution
Once the global stiffness matrix, displacement vector, and force vector have been constructed, the system can be expressed as a single matrix equation.
For each degree of freedom in the structure, either the displacement or the force is known.
After inserting the known value for each degree of freedom, the master stiffness equation is complete and ready to be evaluated. There are several methods available for evaluating a matrix equation, including but not limited to Cholesky decomposition and the brute-force evaluation of systems of equations. If a structure is not properly restrained, the application of a force will cause it to move rigidly, and additional support conditions must be added.
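To make the assembly and solution steps concrete, here is a hedged end-to-end sketch for a small two-bar truss. The geometry, section properties and load are invented for illustration, and the element routine repeats the matrix from the earlier sketch so this block runs on its own.

```python
import numpy as np

def element_stiffness(xi, yi, xj, yj, E, A):
    """4x4 global-coordinate stiffness matrix of a 2-D truss element."""
    L = np.hypot(xj - xi, yj - yi)
    c, s = (xj - xi) / L, (yj - yi) / L
    k = np.array([[ c*c,  c*s, -c*c, -c*s],
                  [ c*s,  s*s, -c*s, -s*s],
                  [-c*c, -c*s,  c*c,  c*s],
                  [-c*s, -s*s,  c*s,  s*s]])
    return (E * A / L) * k

# Invented example: two pin-jointed bars carrying a point load.
nodes = np.array([[0.0, 0.0],    # node 0, pinned support
                  [4.0, 0.0],    # node 1, pinned support
                  [4.0, 3.0]])   # node 2, loaded joint
elements = [(0, 2), (1, 2)]      # each element connects two node indices
E, A = 200e9, 1e-4               # assumed modulus (Pa) and cross-section (m^2)

ndof = 2 * len(nodes)            # two DOFs (u, v) per node
K = np.zeros((ndof, ndof))       # global stiffness matrix
R = np.zeros(ndof)               # global nodal force vector
R[5] = -10e3                     # 10 kN downward at node 2

# Assembly: scatter each 4x4 element matrix into the global matrix.
for i, j in elements:
    k = element_stiffness(nodes[i, 0], nodes[i, 1], nodes[j, 0], nodes[j, 1], E, A)
    dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
    K[np.ix_(dofs, dofs)] += k

# Supports: nodes 0 and 1 are fully restrained, so only node 2's DOFs are free.
free = [4, 5]
r = np.zeros(ndof)
r[free] = np.linalg.solve(K[np.ix_(free, free)], R[free])
print(r[free])   # horizontal and vertical displacement of node 2 (metres)
```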
The method described in this section is meant as an overview of the direct stiffness method. Additional sources should be consulted for more details on the process as well as the assumptions about material properties inherent in the process.
Applications
The direct stiffness method was developed specifically so that it could be implemented effectively and easily in computer software to evaluate complicated structures that contain a large number of elements. Today, nearly every finite element solver available is based on the direct stiffness method. While each program utilizes the same process, many have been streamlined to reduce computation time and reduce the required memory. In order to achieve this, shortcuts have been developed.
One of the largest areas to utilize the direct stiffness method is the field of structural analysis where this method has been incorporated into modeling software. The software allows users to model a structure and, after the user defines the material properties of the elements, the program automatically generates element and global stiffness relationships. When various loading conditions are applied the software evaluates the structure and generates the deflections for the user.
See also
Finite element method
Finite element method in structural mechanics
Structural analysis
Flexibility method
List of finite element software packages
External links
Application of direct stiffness method to a 1-D Spring System
Matrix Structural Analysis
Animations of Stiffness Analysis Simulations
References
Felippa, Carlos A. Introduction to Finite Element Method. Fall 2001. University of Colorado. 18 Sept. 2005
Robinson, John. Structural Matrix Analysis for the Engineer. New York: John Wiley & Sons, 1966
Rubinstein, Moshe F. Matrix Computer Analysis of Structures. New Jersey: Prentice-Hall, 1966
McGuire, W., Gallagher, R. H., and Ziemian, R. D. Matrix Structural Analysis, 2nd Ed. New York: John Wiley & Sons, 2000.
Structural analysis
Numerical differential equations | Direct stiffness method | [
"Engineering"
] | 1,961 | [
"Structural engineering",
"Structural analysis",
"Mechanical engineering",
"Aerospace engineering"
] |
2,843,344 | https://en.wikipedia.org/wiki/Symmetric%20hydrogen%20bond | A symmetric hydrogen bond is a special type of hydrogen bond in which the proton is spaced exactly halfway between two identical atoms. The strength of the bond to each of those atoms is equal. It is an example of a 3-center 4-electron bond. This type of bond is much stronger than "normal" hydrogen bonds, in fact, its strength is comparable to a covalent bond. It is seen in ice at high pressure (Ice X), and also in the solid phase of many anhydrous acids such as hydrofluoric acid and formic acid at high pressure. It is also seen in the bifluoride ion [F−H−F]−. Much has been done to explain the symmetric hydrogen bond quantum-mechanically, as it seems to violate the duet rule for the first shell: The proton is effectively surrounded by four electrons. Because of this problem, some consider it to be an ionic bond.
References
Chemical bonding | Symmetric hydrogen bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 194 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
2,843,876 | https://en.wikipedia.org/wiki/Isotopic%20signature | An isotopic signature (also isotopic fingerprint) is a ratio of non-radiogenic 'stable isotopes', stable radiogenic isotopes, or unstable radioactive isotopes of particular elements in an investigated material. The ratios of isotopes in a sample material are measured by isotope-ratio mass spectrometry against an isotopic reference material. This process is called isotope analysis.
Stable isotopes
The atomic masses of different isotopes affect their chemical kinetic behavior, leading to natural isotope separation processes.
Carbon isotopes
For example, different sources and sinks of methane have different affinity for the 12C and 13C isotopes, which allows distinguishing between different sources by the 13C/12C ratio in methane in the air. In geochemistry, paleoclimatology and paleoceanography this ratio is called δ13C. The ratio is calculated with respect to Pee Dee Belemnite (PDB) standard:
δ13C = ( (13C/12C)sample / (13C/12C)standard − 1 ) × 1000 ‰
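As a small numerical illustration of the delta notation, the following sketch computes a δ value directly from the sample and standard ratios; the sample ratio is an invented number chosen to resemble a C3 plant, while the standard ratio is the commonly quoted 13C/12C value for PDB.

```python
def delta_per_mil(ratio_sample: float, ratio_standard: float) -> float:
    """Delta value in per mil (parts per thousand) relative to a standard."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

r_standard = 0.0112372   # commonly quoted 13C/12C ratio of the PDB standard
r_sample = 0.0109        # invented sample ratio, roughly C3-plant-like
print(f"delta 13C = {delta_per_mil(r_sample, r_standard):.1f} per mil")  # about -30.0
```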
Similarly, carbon in inorganic carbonates shows little isotopic fractionation, while carbon in materials originated by photosynthesis is depleted of the heavier isotopes. In addition, plants use different biochemical pathways for carbon fixation: C3 carbon fixation, where the isotope separation effect is more pronounced; C4 carbon fixation, where the heavier 13C is less depleted; and Crassulacean acid metabolism (CAM), where the effect is similar to but less pronounced than in C4 plants. Isotopic fractionation in plants is caused by physical (slower diffusion of 13C in plant tissues due to increased atomic weight) and biochemical (preference of 12C by two enzymes: RuBisCO and phosphoenolpyruvate carboxylase) factors. The different isotope ratios for the two kinds of plants propagate through the food chain, thus it is possible to determine if the principal diet of a human or an animal consists primarily of C3 plants (rice, wheat, soybeans, potatoes) or C4 plants (corn, or corn-fed beef) by isotope analysis of their flesh and bone collagen (however, to obtain more accurate determinations, carbon isotopic fractionation must be also taken into account, since several studies have reported significant 13C discrimination during biodegradation of simple and complex substrates).
Within C3 plants processes regulating changes in δ13C are well understood, particularly at the leaf level, but also during wood formation. Many recent studies combine leaf level isotopic fractionation with annual patterns of wood formation (i.e. tree ring δ13C) to quantify the impacts of climatic variations and atmospheric composition on physiological processes of individual trees and forest stands. The next phase of understanding, in terrestrial ecosystems at least, seems to be the combination of multiple isotopic proxies to decipher interactions between plants, soils and the atmosphere, and predict how changes in land use will affect climate change.
Similarly, marine fish contain more 13C than freshwater fish, with values approximating the C4 and C3 plants respectively.
The ratio of carbon-13 and carbon-12 isotopes in these types of plants is as follows:
C4 plants:
CAM plants:
C3 plants:
Limestones formed by precipitation in seas from the atmospheric carbon dioxide contain normal proportion of 13C. Conversely, calcite found in salt domes originates from carbon dioxide formed by oxidation of petroleum, which due to its plant origin is 13C-depleted. The layer of limestone deposited at the Permian extinction 252 Mya can be identified by the 1% drop in 13C/12C.
The 14C isotope is important in distinguishing biosynthesized materials from man-made ones. Biogenic chemicals are derived from biospheric carbon, which contains 14C. Carbon in artificially made chemicals is usually derived from fossil fuels like coal or petroleum, where the 14C originally present has decayed below detectable limits. The amount of 14C currently present in a sample therefore indicates the proportion of carbon of biogenic origin.
Nitrogen isotopes
Nitrogen-15, or 15N, is often used in agricultural and medical research, for example in the Meselson–Stahl experiment to establish the nature of DNA replication. An extension of this research resulted in development of DNA-based stable-isotope probing, which allows examination of links between metabolic function and taxonomic identity of microorganisms in the environment, without the need for culture isolation. Proteins can be isotopically labelled by cultivating them in a medium containing 15N as the only source of nitrogen, e.g., in quantitative proteomics such as SILAC.
Nitrogen-15 is extensively used to trace mineral nitrogen compounds (particularly fertilizers) in the environment. When combined with the use of other isotopic labels, 15N is also a very important tracer for describing the fate of nitrogenous organic pollutants. Nitrogen-15 tracing is an important method used in biogeochemistry.
The ratio of stable nitrogen isotopes, 15N/14N or δ15N, tends to increase with trophic level, such that herbivores have higher nitrogen isotope values than plants, and carnivores have higher nitrogen isotope values than herbivores. Depending on the tissue being examined, there tends to be an increase of 3-4 parts per thousand with each increase in trophic level. The tissues and hair of vegans therefore contain significantly lower δ15N than the bodies of people who eat mostly meat. Similarly, a terrestrial diet produces a different signature than a marine-based diet. Isotopic analysis of hair is an important source of information for archaeologists, providing clues about the ancient diets and differing cultural attitudes to food sources.
A number of other environmental and physiological factors can influence the nitrogen isotopic composition at the base of the food web (i.e. in plants) or at the level of individual animals. For example, in arid regions, the nitrogen cycle tends to be more 'open' and prone to the loss of 14N, increasing δ15N in soils and plants. This leads to relatively high δ15N values in plants and animals in hot and arid ecosystems relative to cooler and moister ecosystems. Furthermore, elevated δ15N have been linked to the preferential excretion of 14N and reutilization of already enriched 15N tissues in the body under prolonged water stress conditions or insufficient protein intake.
δ15N also provides a diagnostic tool in planetary science as the ratio exhibited in atmospheres and surface materials "is closely tied to the conditions under which materials form".
Oxygen isotopes
Oxygen occurs naturally as three stable isotopes (16O, 17O, and 18O), but 17O is so rare (~0.04% abundance) that it is very difficult to detect. The 18O/16O ratio in water depends on the amount of evaporation the water has experienced (as 18O is heavier and therefore less likely to vaporize). As the vapor tension also depends on the concentration of dissolved salts, the 18O/16O ratio shows a correlation with the salinity and temperature of the water. As oxygen is incorporated into the shells of calcium carbonate-secreting organisms, such sediments provide a chronological record of the temperature and salinity of the water in the area.
The oxygen isotope ratio in the atmosphere varies predictably with time of year and geographic location; e.g. there is a 2% difference between 18O-depleted precipitation in Montana and 18O-rich precipitation in the Florida Keys. This variability can be used for approximate determination of the geographic location of origin of a material; e.g. it is possible to determine where a shipment of uranium oxide was produced. The rate of exchange of surface isotopes with the environment has to be taken into account.
The oxygen isotopic signatures of solid samples (organic and inorganic) are usually measured with pyrolysis and mass spectrometry. Improper or prolonged storage of samples can lead to inaccurate measurements.
Sulfur isotopes
Sulfur has four stable isotopes: 32S, 33S, 34S, and 36S, of which 32S is by far the most abundant, largely because it is created from the very common 12C in supernovas. Sulfur isotope ratios are almost always expressed relative to 32S because of its dominant abundance (95.0%). Sulfur isotope fractionations are usually reported as δ34S, owing to the relatively high abundance of 34S (4.25%) compared to the other minor stable isotopes of sulfur, though δ33S is also sometimes measured. Differences in sulfur isotope ratios are thought to arise primarily from kinetic fractionation during reactions and transformations.
Sulfur isotopes are generally measured against standards; prior to 1993, the Canyon Diablo troilite standard (abbreviated CDT), which has a 32S:34S ratio of 22.220, was used as both a reference material and the zero point of the isotopic scale. Since 1993, the Vienna-CDT standard has been used as the zero point, and there are several materials used as reference materials for sulfur isotope measurements. Sulfur fractionations by natural processes measured against these standards have been shown to span −72‰ to +147‰, as calculated by the following equation (expressed in parts per thousand, ‰):
<math>\delta^{34}\mathrm{S} = \left( \frac{(^{34}\mathrm{S}/^{32}\mathrm{S})_\text{sample}}{(^{34}\mathrm{S}/^{32}\mathrm{S})_\text{standard}} - 1 \right) \times 1000</math>
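As a numerical sketch (not from the source), the calculation can be carried out directly from the 32S:34S ratio of 22.220 quoted above for the CDT standard; the sample ratio below is an arbitrary, illustrative value.
<syntaxhighlight lang="python">
# Sketch of the delta-34S calculation, using the Canyon Diablo troilite
# 32S:34S ratio of 22.220 quoted above as the reference point (the modern
# scale is anchored to Vienna-CDT, defined slightly differently).

R_STANDARD = 1.0 / 22.220   # 34S/32S of the CDT standard

def delta_34s(sample_34s_32s):
    """delta-34S in per mil relative to the standard."""
    return (sample_34s_32s / R_STANDARD - 1.0) * 1000.0

# A sample slightly enriched in 34S relative to the standard:
print(round(delta_34s(0.04550), 1))   # about +11.0 permil
</syntaxhighlight>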
As a very redox-active element, sulfur can be useful for recording major chemistry-altering events throughout Earth's history, such as marine evaporites which reflect the change in the atmosphere's redox state brought about by the Oxygen Crisis.
Radiogenic isotopes
Lead isotopes
Lead consists of four stable isotopes: 204Pb, 206Pb, 207Pb, and 208Pb. Local variations in uranium/thorium/lead content cause a wide location-specific variation of isotopic ratios for lead from different localities. Lead emitted to the atmosphere by industrial processes has an isotopic composition different from lead in minerals. Combustion of gasoline with tetraethyllead additive led to formation of ubiquitous micrometer-sized lead-rich particulates in car exhaust smoke; especially in urban areas the man-made lead particles are much more common than natural ones. The differences in isotopic content in particles found in objects can be used for approximate geolocation of the object's origin.
Radioactive isotopes
Hot particles, radioactive particles of nuclear fallout and radioactive waste, also exhibit distinct isotopic signatures. Their radionuclide composition (and thus their age and origin) can be determined by mass spectrometry or by gamma spectrometry. For example, particles generated by a nuclear blast contain detectable amounts of 60Co and 152Eu. The Chernobyl accident did not release these particles but did release 125Sb and 144Ce. Particles from underwater bursts will consist mostly of irradiated sea salts. Ratios of 152Eu/155Eu, 154Eu/155Eu, and 238Pu/239Pu are also different for fusion and fission nuclear weapons, which allows identification of hot particles of unknown origin.
Uranium has a relatively constant isotope ratio in all natural samples, with about 0.72% 235U, some 55 ppm 234U (in secular equilibrium with its parent nuclide 238U), and the balance made up by 238U. Isotopic compositions that diverge significantly from those values are evidence that the uranium has been subject to depletion or enrichment in some fashion, or that part of it has participated in a nuclear fission reaction. While the latter is almost as universally due to human influence as the former two, the natural nuclear fission reactor at Oklo, Gabon was detected through a significant deficit of 235U in samples from Oklo compared to those of all other known deposits on Earth. Given that 235U is a material of proliferation concern, then as now, every IAEA-approved supplier of uranium fuel keeps track of the isotopic composition of uranium to ensure none is diverted for nefarious purposes. It would thus quickly become apparent if another uranium deposit besides Oklo had once been a natural nuclear fission reactor.
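A minimal sketch (not from the source) of the kind of consistency check this enables: comparing a measured 235U fraction with the natural value of about 0.72% flags depleted or enriched material. The tolerance used below is an arbitrary, illustrative choice; real safeguards analysis is far more involved.
<syntaxhighlight lang="python">
# Toy check of whether a measured uranium composition deviates from the
# natural abundance of about 0.72 % 235U. The tolerance is illustrative only.

NATURAL_U235_PERCENT = 0.72

def classify_uranium(u235_percent, tolerance=0.03):
    if abs(u235_percent - NATURAL_U235_PERCENT) <= tolerance:
        return "consistent with natural uranium"
    return "enriched" if u235_percent > NATURAL_U235_PERCENT else "depleted"

for sample in (0.711, 0.72, 3.5, 0.2):
    print(sample, "->", classify_uranium(sample))
</syntaxhighlight>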
Applications
Archaeological studies
In archaeological studies, stable isotope ratios have been used to track diet within the time span of formation of the analyzed tissues (10–15 years for bone collagen and intra-annual periods for tooth enamel bioapatite) of individuals; "recipes" of foodstuffs (ceramic vessel residues); locations of cultivation and types of plants grown (chemical extractions from sediments); and migration of individuals (dental material).
Forensics
With the advent of stable isotope ratio mass spectrometry, isotopic signatures of materials find increasing use in forensics, distinguishing the origin of otherwise similar materials and tracking the materials to their common source. For example, the isotope signatures of plants can be to a degree influenced by the growth conditions, including moisture and nutrient availability. In case of synthetic materials, the signature is influenced by the conditions during the chemical reaction. The isotopic signature profiling is useful in cases where other kinds of profiling, e.g. characterization of impurities, are not optimal. Electronics coupled with scintillator detectors are routinely used to evaluate isotope signatures and identify unknown sources.
A study was published demonstrating the possibility of determination of the origin of a common brown PSA packaging tape by using the carbon, oxygen, and hydrogen isotopic signature of the backing polymer, additives, and adhesive.
Measurement of carbon isotopic ratios can be used for detection of adulteration of honey. The addition of sugars originating from corn or sugar cane (C4 plants) skews the isotopic ratio of the sugars present in honey, but does not influence the isotopic ratio of the proteins; in unadulterated honey, the carbon isotopic ratios of sugars and proteins should match. Addition levels as low as 7% can be detected.
Nuclear explosions form 10Be by a reaction of fast neutrons with 13C in the carbon dioxide in air. This is one of the historical indicators of past activity at nuclear test sites.
Solar System origins
Isotopic fingerprints are used to study the origin of materials in the Solar System. For example, the Moon's oxygen isotopic ratios seem to be essentially identical to Earth's. Oxygen isotopic ratios, which may be measured very precisely, yield a unique and distinct signature for each Solar System body. Different oxygen isotopic signatures can indicate the origin of material ejected into space. The Moon's titanium isotope ratio (50Ti/47Ti) appears close to the Earth's (within 4 ppm). In 2013, a study was released that indicated water in lunar magma was 'indistinguishable' from carbonaceous chondrites and nearly the same as Earth's, based on the composition of water isotopes.
Records of early life on Earth
Isotope biogeochemistry has been used to investigate the timeline surrounding life and its earliest iterations on Earth. Isotopic fingerprints typical of life, preserved in sediments, have been used to suggest, but do not necessarily prove, that life was already in existence on Earth by 3.85 billion years ago.
Sulfur isotope evidence has also been used to corroborate the timing of the Great Oxidation Event, during which the Earth's atmosphere experienced a measurable rise in oxygen (to about 9% of modern values) for the first time about 2.3–2.4 billion years ago. Mass-independent sulfur isotope fractionations are found to be widespread in the geologic record before about 2.45 billion years ago, and these isotopic signatures have since given way to mass-dependent fractionation, providing strong evidence that the atmosphere shifted from anoxic to oxygenated at that threshold.
Modern sulfate-reducing bacteria are known to preferentially reduce the lighter 32S instead of 34S, and the presence of these microorganisms can measurably alter the sulfur isotope composition of the ocean. Because the δ34S values of sulfide minerals are primarily influenced by the presence of sulfate-reducing bacteria, the absence of sulfur isotope fractionations in sulfide minerals suggests the absence of these bacterial processes or the absence of freely available sulfate. Some have used this knowledge of microbial sulfur fractionation to suggest that minerals (namely pyrite) with large sulfur isotope fractionations relative to the inferred seawater composition may be evidence of life. This claim is not clear-cut, however, and is sometimes contested using geologic evidence from the ~3.49 Ga sulfide minerals found in the Dresser Formation of Western Australia, which are found to have δ34S values as negative as −22‰. Because it has not been proven that the sulfide and barite minerals formed in the absence of major hydrothermal input, this is not conclusive evidence of life or of the microbial sulfate reduction pathway in the Archean.
See also
Isoscapes
Isotope electrochemistry
Isotope geochemistry
Radiometric dating
References
Further reading
Carbon isotopes: you are what you eat
Hair-rising research
Ayacucho Archaeo Isotope Project
The pursuit of isotopic and molecular fire tracers in the polar atmosphere and cryosphere
Radiometric dating
Bioindicators
Anthropology
Isotopes
Forensic evidence | Isotopic signature | [
"Physics",
"Chemistry",
"Environmental_science"
] | 3,482 | [
"Bioindicators",
"Environmental chemistry",
"Isotopes",
"Radiometric dating",
"Nuclear physics",
"Radioactivity"
] |
5,161,053 | https://en.wikipedia.org/wiki/Cryptographic%20module | A cryptographic module is a component of a computer system that securely implements cryptographic algorithms, typically with some element of tamper resistance.
NIST defines a cryptographic module as "The set of hardware, software, and/or firmware that implements security functions (including cryptographic algorithms), holds plaintext keys and uses them for performing cryptographic operations, and is contained within a cryptographic module boundary."
Hardware security modules, including secure cryptoprocessors, are one way of implementing cryptographic modules.
Standards for cryptographic modules include FIPS 140-3 and ISO/IEC 19790.
See also
Cryptographic Module Validation Program (CMVP)
Cryptographic Module Testing Laboratory
References
Cryptography
Computer security | Cryptographic module | [
"Mathematics",
"Engineering"
] | 147 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
5,161,169 | https://en.wikipedia.org/wiki/Telegrapher%27s%20equations | The telegrapher's equations (or just telegraph equations) are a set of two coupled, linear equations that predict the voltage and current distributions on a linear electrical transmission line. The equations are important because they allow transmission lines to be analyzed using circuit theory. The equations and their solutions are applicable from 0 Hz (i.e. direct current) to frequencies at which the transmission line structure can support higher order non-TEM modes. The equations can be expressed in both the time domain and the frequency domain. In the time domain the independent variables are distance and time. The resulting time domain equations are partial differential equations of both time and distance. In the frequency domain the independent variables are distance and either frequency, or complex frequency, The frequency domain variables can be taken as the Laplace transform or Fourier transform of the time domain variables or they can be taken to be phasors. The resulting frequency domain equations are ordinary differential equations of distance. An advantage of the frequency domain approach is that differential operators in the time domain become algebraic operations in frequency domain.
The equations come from Oliver Heaviside who developed the transmission line model starting with an August 1876 paper, On the Extra Current. The model demonstrates that the electromagnetic waves can be reflected on the wire, and that wave patterns can form along the line. Originally developed to describe telegraph wires, the theory can also be applied to radio frequency conductors, audio frequency (such as telephone lines), low frequency (such as power lines), and pulses of direct current.
Distributed components
The telegrapher's equations, like all other equations describing electrical phenomena, result from Maxwell's equations. In a more practical approach, one assumes that the conductors are composed of an infinite series of two-port elementary components, each representing an infinitesimally short segment of the transmission line:
The distributed resistance of the conductors is represented by a series resistor R (expressed in ohms per unit length). In practical conductors, at higher frequencies, R increases approximately in proportion to the square root of frequency due to the skin effect.
The distributed inductance (due to the magnetic field around the wires, self-inductance, etc.) is represented by a series inductor L (in henries per unit length).
The capacitance between the two conductors is represented by a shunt capacitor C (in farads per unit length).
The conductance of the dielectric material separating the two conductors is represented by a shunt conductance G between the signal wire and the return wire (in siemens per unit length). G accounts for both the bulk conductivity of the dielectric and dielectric loss. If the dielectric is an ideal vacuum, then G = 0.
The model consists of an infinite series of the infinitesimal elements shown in the figure, and the values of the components are specified per unit length, so the picture of the component can be misleading. An alternative notation is to write the per-unit-length quantities with primes (R′, L′, C′, G′) to emphasize that they are derivatives with respect to length and that the units of measure combine correctly. These quantities can also be known as the primary line constants to distinguish them from the secondary line constants derived from them, these being the characteristic impedance, the propagation constant, attenuation constant and phase constant. All these constants are constant with respect to time, voltage and current. They may be non-constant functions of frequency.
Role of different components
The role of the different components can be visualized based on the animation at right.
Inductance The inductance couples current to energy stored in the magnetic field. It makes it look like the current has inertia – i.e. with a large inductance, it is difficult to increase or decrease the current flow at any given point. Large inductance makes the wave move more slowly, just as waves travel more slowly down a heavy rope than a light string. Large inductance also increases the line's surge impedance (more voltage needed to push the same current through the line).
Capacitance The capacitance couples voltage to the energy stored in the electric field. It controls how much the bunched-up electrons within each conductor repel, attract, or divert the electrons in the other conductor. By deflecting some of these bunched up electrons, the speed of the wave and its strength (voltage) are both reduced. With a larger capacitance C, there is less repulsion, because the other line (which always has the opposite charge) partly cancels out these repulsive forces within each conductor. Larger capacitance equals weaker restoring forces, making the wave move slightly slower, and also gives the transmission line a lower surge impedance (less voltage needed to push the same current through the line).
Resistance Resistance corresponds to resistance interior to the two lines, combined. That resistance couples current to ohmic losses that drop a little of the voltage along the line as heat deposited into the conductor, leaving the current unchanged. Generally, the line resistance is very low, compared to inductive reactance at radio frequencies, and for simplicity is treated as if it were zero, with any voltage dissipation or wire heating accounted for as corrections to the "lossless line" calculation, or just ignored.
Conductance Conductance between the lines represents how well current can "leak" from one line to the other. Conductance couples voltage to dielectric loss deposited as heat into whatever serves as insulation between the two conductors. G reduces propagating current by shunting it between the conductors. Generally, wire insulation (including air) is quite good, and the conductance is almost nothing compared to the capacitive susceptance ωC, and for simplicity it is treated as if it were zero.
All four parameters R, L, G, and C depend on the material used to build the cable or feedline. All four change with frequency: R and G tend to increase for higher frequencies, and L and C tend to drop as the frequency goes up.
The figure at right shows a lossless transmission line, where both R and G are zero, which is the simplest and by far most common form of the telegrapher's equations used, but slightly unrealistic (especially regarding R).
Values of primary parameters for telephone cable
Representative parameter data for 24 gauge telephone polyethylene insulated cable (PIC) at 70 °F (294 K)
{| class="wikitable" style="text-align:right;"
|-
! Frequency
! colspan="2" | R
! colspan="2" | L
! colspan="2" | G
! colspan="2" | C
|-
! Hz
! Ω/km
! Ω/kft
! μH/km
! μH/kft
! μS/km
! μS/kft
! nF/km
! nF/kft
|-
|align=center| 1 Hz || 172.24 ||align=right| 52.50 || 612.9 || 186.8 ||align=right| 0.000 ||align=right| 0.000 || 51.57 || 15.72
|-
|align=center| 1 kHz || 172.28 ||align=right| 52.51 || 612.5 || 186.7 ||align=right| 0.072 ||align=right| 0.022 || 51.57 || 15.72
|-
|align=center| 10 kHz || 172.70 ||align=right| 52.64 || 609.9 || 185.9 ||align=right| 0.531 ||align=right| 0.162 || 51.57 || 15.72
|-
|align=center| 100 kHz || 191.63 ||align=right| 58.41 || 580.7 || 177.0 ||align=right| 3.327 ||align=right| 1.197 || 51.57 || 15.72
|-
|align=center| 1 MHz || 463.59 ||align=right| 141.30 || 506.2 || 154.3 ||align=right| 29.111 ||align=right| 8.873 || 51.57 || 15.72
|-
|align=center| 2 MHz || 643.14 ||align=right| 196.03 || 486.2 || 148.2 ||align=right| 53.205 ||align=right| 16.217 || 51.57 || 15.72
|-
|align=center| 5 MHz || 999.41 ||align=right| 304.62 || 467.5 || 142.5 ||align=right| 118.074 ||align=right| 35.989 || 51.57 || 15.72
|}
This data is from . The variation of R and L is mainly due to the skin effect and the proximity effect. The constancy of the capacitance is a consequence of intentional design.
The variation of G can be inferred from a statement by Terman:
"The power factor ... tends to be independent of frequency, since the fraction of energy lost during each cycle ... is substantially independent of the number of cycles per second over wide frequency ranges."
A conductance that grows as a power of frequency, with an exponent close to unity, would fit Terman's statement. An equation of similar form appears in the literature, in which the conductance is a function of frequency and the remaining parameters are real constants.
Usually the resistive losses (R) grow in proportion to the square root of frequency, while dielectric losses grow roughly in proportion to frequency, so at a high enough frequency dielectric losses will exceed resistive losses. In practice, before that point is reached, a transmission line with a better dielectric is used. In long distance rigid coaxial cable, to get very low dielectric losses, the solid dielectric may be replaced by air with plastic spacers at intervals to keep the center conductor on axis.
The equation
Time domain
The telegrapher's equations in the time domain are:
<math>\frac{\partial V(x,t)}{\partial x} = -L\,\frac{\partial I(x,t)}{\partial t} - R\,I(x,t)</math>
<math>\frac{\partial I(x,t)}{\partial x} = -C\,\frac{\partial V(x,t)}{\partial t} - G\,V(x,t)</math>
They can be combined to get two partial differential equations, each with only one dependent variable, either <math>V</math> or <math>I</math>:
<math>\frac{\partial^2 V}{\partial x^2} = LC\,\frac{\partial^2 V}{\partial t^2} + (RC + GL)\,\frac{\partial V}{\partial t} + RG\,V</math>
Except for the dependent variable (<math>V</math> or <math>I</math>) the formulas are identical.
Frequency domain
The telegrapher's equations in the frequency domain are developed in similar forms in the following references: Kraus, Hayt, Marshall, Sadiku, Harrington, Karakash, and Metzger. They are:
<math>\frac{dV_\omega(x)}{dx} = -(R + j\omega L)\, I_\omega(x)</math>
<math>\frac{dI_\omega(x)}{dx} = -(G + j\omega C)\, V_\omega(x)</math>
The first equation means that the propagating voltage at point <math>x</math> is decreased by the voltage loss produced by the current at that point passing through the series impedance <math>R + j\omega L</math>. The second equation means that the propagating current at point <math>x</math> is decreased by the current loss produced by the voltage at that point appearing across the shunt admittance <math>G + j\omega C</math>.
The subscript ω indicates possible frequency dependence. and are phasors.
These equations may be combined to produce two single-variable differential equations
<math>\frac{d^2 V_\omega}{dx^2} = \gamma^2\, V_\omega , \qquad \frac{d^2 I_\omega}{dx^2} = \gamma^2\, I_\omega</math>
with
<math>\gamma = \alpha + j\beta = \sqrt{(R + j\omega L)(G + j\omega C)}</math>
where <math>\alpha</math> is called the attenuation constant and <math>\beta</math> is called the phase constant.
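As a rough numerical sketch (not from the source), the secondary constants can be evaluated from the primary constants of the table above; the snippet assumes the 1 MHz row in per-kilometre units, giving an attenuation of roughly 2.3 Np/km and a characteristic impedance near 100 Ω.
<syntaxhighlight lang="python">
# Rough numerical sketch: secondary line constants (propagation constant and
# characteristic impedance) from the primary constants R, L, G, C. Values are
# taken from the 1 MHz row of the table above, assumed to be per kilometre.
import cmath, math

f = 1e6                      # Hz
R = 463.59                   # ohm/km
L = 506.2e-6                 # H/km
G = 29.111e-6                # S/km
C = 51.57e-9                 # F/km

w = 2 * math.pi * f
Z = R + 1j * w * L           # series impedance per km
Y = G + 1j * w * C           # shunt admittance per km

gamma = cmath.sqrt(Z * Y)    # propagation constant, per km
Z0 = cmath.sqrt(Z / Y)       # characteristic impedance, ohms

alpha = gamma.real           # attenuation constant, Np/km
beta = gamma.imag            # phase constant, rad/km

print(f"alpha = {alpha:.2f} Np/km, beta = {beta:.1f} rad/km")
print(f"|Z0| = {abs(Z0):.1f} ohm, phase = {math.degrees(cmath.phase(Z0)):.1f} deg")
</syntaxhighlight>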
Homogeneous solutions
Each of the preceding partial differential equations have two homogeneous solutions in an infinite transmission line.
For the voltage equation
For the current equation
The negative sign in the previous equation indicates that the current in the reverse wave is traveling in the opposite direction.
Note:
where the following symbol definitions hold:
{| class="wikitable"
|+ Symbol definitions
|-
! Symbol !! Definition
|-
| || point at which the values of the forward waves are known
|-
| || point at which the values of the reverse waves are known
|-
| || value of the total voltage at point
|-
| || value of the forward voltage wave at point
|-
| || value of the reverse voltage wave at point
|-
| || value of the forward voltage wave at point
|-
| || value of the reverse voltage wave at point
|-
| || value of the total current at point
|-
| || value of the forward current wave at point
|-
| || value of the reverse current wave at point
|-
| || value of the forward current wave at point
|-
| || value of the reverse current wave at point
|-
| || Characteristic impedance
|}
Finite length
Johnson gives the following solution,
where and is the length of the transmission line.
In the special case where all the impedances are equal, the solution reduces to
Lossless transmission
When R = 0 and G = 0, wire resistance and insulation conductance can be neglected, and the transmission line is considered as an ideal lossless structure. In this case, the model depends only on the L and C elements. The telegrapher's equations then describe the relationship between the voltage V and the current I along the transmission line, each of which is a function of position x and time t:
<math>\frac{\partial V}{\partial x} = -L\,\frac{\partial I}{\partial t}</math>
<math>\frac{\partial I}{\partial x} = -C\,\frac{\partial V}{\partial t}</math>
The equations themselves consist of a pair of coupled, first-order, partial differential equations. The first equation shows that the induced voltage is related to the time rate-of-change of the current through the cable inductance, while the second shows, similarly, that the current drawn by the cable capacitance is related to the time rate-of-change of the voltage.
These equations may be combined to form two wave equations, one for voltage, the other for current:
<math>\frac{\partial^2 V}{\partial t^2} = u^2\,\frac{\partial^2 V}{\partial x^2}, \qquad \frac{\partial^2 I}{\partial t^2} = u^2\,\frac{\partial^2 I}{\partial x^2}</math>
where <math>u = 1/\sqrt{LC}</math> is the propagation speed of waves traveling through the transmission line. For transmission lines made of parallel perfect conductors with vacuum between them, this speed is equal to the speed of light.
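As a quick numerical illustration (not from the source), the propagation speed for the cable of the table above can be estimated from its high-frequency inductance and capacitance; per-kilometre units are assumed, and the result is well below the speed of light, as expected for a dielectric-filled cable.
<syntaxhighlight lang="python">
# Sketch: propagation speed of the lossless model, u = 1/sqrt(L*C), using the
# 1 MHz L and C values from the table above (per-kilometre units assumed).
import math

L = 506.2e-6      # H/km
C = 51.57e-9      # F/km

u = 1.0 / math.sqrt(L * C)                             # km/s
print(f"u = {u:.3e} km/s = {u / 299792.458:.2f} c")    # roughly 0.65 c
</syntaxhighlight>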
Lossless sinusoidal steady-state
In the case of sinusoidal steady-state (i.e., when a pure sinusoidal voltage is applied and transients have ceased), the voltage and current take the form of single-tone sine waves:
<math>V(x,t) = \operatorname{Re}\left\{ V(x)\, e^{j\omega t} \right\}, \qquad I(x,t) = \operatorname{Re}\left\{ I(x)\, e^{j\omega t} \right\}</math>
where <math>\omega</math> is the angular frequency of the steady-state wave. In this case, the telegrapher's equations reduce to
<math>\frac{dV(x)}{dx} = -j\omega L\, I(x), \qquad \frac{dI(x)}{dx} = -j\omega C\, V(x)</math>
Likewise, the wave equations reduce to one-dimensional Helmholtz equations
<math>\frac{d^2 V(x)}{dx^2} + k^2 V(x) = 0, \qquad \frac{d^2 I(x)}{dx^2} + k^2 I(x) = 0</math>
where <math>k</math> is the wave number:
<math>k = \omega\sqrt{LC} = \frac{\omega}{u}</math>
In the lossless case, it is possible to show that
<math>V(x) = V_+ e^{-jkx} + V_- e^{+jkx}</math>
and
<math>I(x) = \frac{V_+}{Z_0} e^{-jkx} - \frac{V_-}{Z_0} e^{+jkx}</math>
where in this special case <math>Z_0</math> is a real quantity that may depend on frequency; <math>Z_0</math> is the characteristic impedance of the transmission line, which, for a lossless line, is given by
<math>Z_0 = \sqrt{\frac{L}{C}}</math>
and <math>V_+</math> and <math>V_-</math> are arbitrary constants of integration, which are determined by the two boundary conditions (one for each end of the transmission line).
This impedance does not change along the length of the line, since L and C are constant at any point on the line, provided that the cross-sectional geometry of the line remains constant.
Loss-free case, general solution
In the loss-free case the general solution of the wave equation for the voltage is the sum of a forward traveling wave and a backward traveling wave:
<math>V(x,t) = f_1(x - ut) + f_2(x + ut)</math>
where <math>f_1</math> and <math>f_2</math> can be any two analytic functions, and
<math>u = \frac{1}{\sqrt{LC}}</math>
is the waveform's propagation speed (also known as phase velocity).
Here, <math>f_1</math> represents the amplitude profile of a wave traveling from left to right – in a positive <math>x</math> direction – whilst <math>f_2</math> represents the amplitude profile of a wave traveling from right to left. It can be seen that the instantaneous voltage at any point <math>x</math> on the line is the sum of the voltages due to both waves.
Using the current and voltage relations given by the telegrapher's equations, we can write
<math>I(x,t) = \frac{f_1(x - ut) - f_2(x + ut)}{Z_0}</math>
where <math>Z_0 = \sqrt{L/C}</math> is the characteristic impedance introduced above.
Lossy transmission line
When the loss elements R and G are too substantial to ignore, the differential equations describing the elementary segment of line are
<math>\frac{\partial V}{\partial x} = -L\,\frac{\partial I}{\partial t} - R\,I</math>
<math>\frac{\partial I}{\partial x} = -C\,\frac{\partial V}{\partial t} - G\,V</math>
By differentiating both equations with respect to <math>x</math>, and some algebra, we obtain a pair of damped, dispersive hyperbolic partial differential equations each involving only one unknown:
<math>\frac{\partial^2 V}{\partial x^2} = LC\,\frac{\partial^2 V}{\partial t^2} + (RC + GL)\,\frac{\partial V}{\partial t} + RG\,V</math>
<math>\frac{\partial^2 I}{\partial x^2} = LC\,\frac{\partial^2 I}{\partial t^2} + (RC + GL)\,\frac{\partial I}{\partial t} + RG\,I</math>
These equations resemble the homogeneous wave equation with extra terms in <math>V</math> and <math>I</math> and their first derivatives. These extra terms cause the signal to decay and spread out with time and distance. If the transmission line is only slightly lossy (<math>R \ll \omega L</math> and <math>G \ll \omega C</math>), signal strength will decay over distance as <math>e^{-\alpha x}</math> where <math>\alpha \approx \frac{R}{2 Z_0} + \frac{G Z_0}{2}</math>.
Solutions of the telegrapher's equations as circuit components
The solutions of the telegrapher's equations can be inserted directly into a circuit as components. The circuit in the figure implements the solutions of the telegrapher's equations.
The solution of the telegrapher's equations can be expressed as an ABCD two-port network with the following defining equations
where
and
just as in the preceding sections. The line parameters , , , and are subscripted by to emphasize that they could be functions of frequency.
The ABCD type two-port gives and as functions of and The voltage and current relations are symmetrical: Both of the equations shown above, when solved for and as functions of and yield exactly the same relations, merely with subscripts "1" and "2" reversed, and the terms' signs made negative ("1"→"2" direction is reversed "1"←"2", hence the sign change).
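A minimal numerical sketch (not from the source) of how such a two-port description is used in practice: the standard chain (ABCD) parameters of a uniform line of length l, A = D = cosh(γl), B = Z0·sinh(γl), C = sinh(γl)/Z0, give the input impedance of a line terminated in a load ZL as Zin = (A·ZL + B)/(C·ZL + D). The numerical values of γ and Z0 below are the approximate 1 MHz results from the earlier sketch, and the length and load are arbitrary illustrative choices.
<syntaxhighlight lang="python">
# Sketch: input impedance of a terminated uniform line using the standard
# chain (ABCD) parameters. gamma and Z0 are approximate 1 MHz values from the
# earlier sketch; the line length and load impedance are illustrative only.
import cmath

gamma = 2.34 + 32.2j        # propagation constant, per km (approximate)
Z0 = 99.3 - 7.2j            # characteristic impedance, ohms (approximate)
length = 0.5                # km
ZL = 600.0                  # ohms, load impedance

A = D = cmath.cosh(gamma * length)
B = Z0 * cmath.sinh(gamma * length)
C = cmath.sinh(gamma * length) / Z0

Zin = (A * ZL + B) / (C * ZL + D)
print(f"Zin = {Zin.real:.1f} {Zin.imag:+.1f}j ohm")
</syntaxhighlight>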
Every two-wire or balanced transmission line has an implicit (or in some cases explicit) third wire which is called the shield, sheath, common, earth, or ground. So every two-wire balanced transmission line has two modes which are nominally called the differential mode and common mode. The circuit shown in the bottom diagram only can model the differential mode.
In the top circuit, the voltage doublers, the difference amplifiers, and impedances account for the interaction of the transmission line with the external circuit. This circuit is a useful equivalent for an unbalanced transmission line like a coaxial cable.
These are not unique: Other equivalent circuits are possible.
See also
Reflections of signals on conducting lines
Law of squares, Lord Kelvin's preliminary work on this subject
References
Hyperbolic partial differential equations
Distributed element circuits
Transmission lines | Telegrapher's equations | [
"Engineering"
] | 3,551 | [
"Electronic engineering",
"Distributed element circuits"
] |
5,161,942 | https://en.wikipedia.org/wiki/Biodiversity%20and%20drugs | Biodiversity plays a vital role in maintaining human and animal health because numerous plants, animals, and fungi are used in medicine to produce vital vitamins, painkillers, antibiotics, and other medications. Natural products have been recognized and used as medicines by ancient cultures all around the world. Some animals are also known to self-medicate using plants and other materials available to them.
Plant drugs
Many plant species have been studied thoroughly for their value as a source of medicine. They have a wide range of benefits, such as anti-fever and anti-inflammatory properties, can treat diseases such as malaria and diabetes, and serve as sources of vitamins and of antibiotic and antifungal medications. More than 60% of the world's population relies almost entirely on plant medicine for primary health care, and about 119 pure chemicals such as caffeine, methyl salicylate, and quinine are extracted from fewer than 90 species of higher plants and used as medicines throughout the world.
In China, Japan, India, and Germany, there is a great deal of interest in and support for the search for new drugs from higher plants. For example, the Herbalome Project was launched in China in 2008 and aims to use high throughput sequencing and toxicity testing to identify active components in traditional herbal remedies.
Sweet Wormwood
Sweet Wormwood (Artemisia annua) grows in all continents besides Antarctica. It is the only known source of artemisinin, a drug that has been used to treat fevers due to malaria, exhaustion, or many other causes, since ancient times. Upon further study, scientists have found that Sweet Wormwood inhibits activity of various bacteria, viruses, and parasites and exhibits anti-cancer and anti-inflammatory properties.
Animal-derived drugs
Animal-derived drugs are a major source of modern medications used around the world. The use of these drugs can cause certain animals to become endangered or threatened; however, it is difficult to identify the animal species used in medicine since animal-derived drugs are often processed, which degrades their DNA.
Medicinal Animal Horns and Shells
Cells from animal horns and shells are included in a group of medications called Medicinal Animal Horns and Shells (MAHS). These drugs are often used in dermatology and have been reported to have anti-fever and anti-inflammatory properties and to treat some diseases.
Drugs derived from animal toxins
Certain animals have obtained many adaptations of toxic substances due to a coevolutionary arms race between them and their predators. Some components of these toxins such as enzymes and inorganic salts are used in modern medicine. For example, drugs such as Captopril and Lisinopril are derived from snake venom and inhibit the angiotensin-converting enzyme. Another example is Ziconotide, a drug from the cone snail, Conus magus, that is used to reduce pain.
Medicinal fungi
Edible fungi can contain important nutrients and biomolecules that can be used for medical applications. For example, medicinal fungi have polysaccharides that can be used to prevent the spread of cancer by activating different types of immune cells (namely T lymphocytes, macrophages, and NK cells), which inhibit cancer cell reproduction and metastasis (the process by which cancer can spread to different parts of the body).
Fungi have been used to make many antibiotics since Sir Alexander Fleming discovered penicillin from the mold Penicillium notatum. Recently, there has been renewed interest in using fungi to create antibiotics, since many bacteria have acquired antibiotic resistance due to the heavy selection pressures that antibiotics cause. The diversity of marine fungi makes them a potential new source of antibiotic compounds; however, most are difficult to cultivate in a laboratory setting.
Countries such as Egypt and China have been using fungi for medicinal purposes for centuries.
Turkey Tail Mushrooms
Toxoplasmosis is a disease caused by an infection by the parasite: Toxoplasma gondii (T. gondii). Current drugs used to treat this disease have many side effects and do not inhibit all forms of T. gondii. An in vitro study by Sharma et al. suggests that Turkey Tail mushroom extract could be used to treat Toxoplasmosis since it inhibited T. gondii growth.
Pestalone
Pestalone is an antibiotic created from the marine fungus: Pestalotia sp. M. Cueto et al. (2001–11) found that it has antibiotic activity against two bacteria species that have gained resistance to antibiotics: vancomycin-resistant Enterococcus faecium and methicillin-resistant Staphylococcus aureus.
Zoopharmacognosy
Zoopharmacognosy is the study of how animals select certain plants as self-medication to treat or prevent disease. Usually, this behavior is a result of coevolution between the animal and the plant that it uses for self-medication. For example, apes have been observed selecting a particular part of a medicinal plant by taking off leaves and breaking the stem to suck out the juice. In an interview with the late Neil Campbell, Eloy Rodriguez describes the importance of biodiversity:
"Some of the compounds we've identified by zoopharmacognosy kill parasitic worms, and some of these chemicals may be useful against tumors. There is no question that the templates for most drugs are in the natural world."
References
Biodiversity
Drug development
Drugs | Biodiversity and drugs | [
"Chemistry",
"Biology"
] | 1,113 | [
"Pharmacology",
"Products of chemical industry",
"Biodiversity",
"Chemicals in medicine",
"Drugs"
] |
5,163,454 | https://en.wikipedia.org/wiki/Waste%20heat | Waste heat is heat that is produced by a machine, or other process that uses energy, as a byproduct of doing work. All such processes give off some waste heat as a fundamental result of the laws of thermodynamics. Waste heat has lower utility (or in thermodynamics lexicon a lower exergy or higher entropy) than the original energy source. Sources of waste heat include all manner of human activities, natural systems, and all organisms, for example, incandescent light bulbs get hot, a refrigerator warms the room air, a building gets hot during peak hours, an internal combustion engine generates high-temperature exhaust gases, and electronic components get warm when in operation.
Instead of being "wasted" by release into the ambient environment, sometimes waste heat (or cold) can be used by another process (such as using hot engine coolant to heat a vehicle), or a portion of heat that would otherwise be wasted can be reused in the same process if make-up heat is added to the system (as with heat recovery ventilation in a building).
Thermal energy storage, which includes technologies both for short- and long-term retention of heat or cold, can create or improve the utility of waste heat (or cold). One example is waste heat from air conditioning machinery stored in a buffer tank to aid in night time heating. Another is seasonal thermal energy storage (STES) at a foundry in Sweden. The heat is stored in the bedrock surrounding a cluster of heat exchanger equipped boreholes, and is used for space heating in an adjacent factory as needed, even months later. An example of using STES to use natural waste heat is the Drake Landing Solar Community in Alberta, Canada, which, by using a cluster of boreholes in bedrock for interseasonal heat storage, obtains 97 percent of its year-round heat from solar thermal collectors on the garage roofs. Another STES application is storing winter cold underground, for summer air conditioning.
On a biological scale, all organisms reject waste heat as part of their metabolic processes, and will die if the ambient temperature is too high to allow this.
Anthropogenic waste heat can contribute to the urban heat island effect. The biggest point sources of waste heat originate from machines (such as electrical generators or industrial processes, such as steel or glass production) and heat loss through building envelopes. The burning of transport fuels is a major contribution to waste heat.
Conversion of energy
Machines converting energy contained in fuels to mechanical work or electric energy produce heat as a by-product.
Sources
In the majority of applications, energy is required in multiple forms. These energy forms typically include some combination of heating, ventilation, and air conditioning, mechanical energy and electric power. Often, these additional forms of energy are produced by a heat engine running on a source of high-temperature heat. A heat engine can never have perfect efficiency, according to the second law of thermodynamics, therefore a heat engine will always produce a surplus of low-temperature heat. This is commonly referred to as waste heat or "secondary heat", or "low-grade heat". This heat is useful for the majority of heating applications, however, it is sometimes not practical to transport heat energy over long distances, unlike electricity or fuel energy. The largest proportions of total waste heat are from power stations and vehicle engines. The largest single sources are power stations and industrial plants such as oil refineries and steelmaking plants.
Air conditioning
Conventional air conditioning systems are a source of waste heat by releasing waste heat into the outdoor ambient air whilst cooling indoor spaces. This expelling of waste heat from air conditioning can worsen the urban heat island effect. Waste heat from air conditioning can be reduced through the use of passive cooling building design and zero-energy methods like evaporative cooling and passive daytime radiative cooling, the latter of which sends waste heat directly to outer space through the infrared window.
Power generation
The electrical efficiency of thermal power plants is defined as the ratio of the electrical energy output to the energy input. It is typically only about 33% when the usefulness of the heat output for building heating is disregarded. The images show cooling towers, which allow power stations to maintain the low side of the temperature difference essential for conversion of heat differences to other forms of energy. Discarded or "waste" heat that is lost to the environment may instead be used to advantage.
Industrial processes
Industrial processes, such as oil refining, steel making or glass making are major sources of waste heat.
Electronics
Although small in terms of power, the disposal of waste heat from microchips and other electronic components, represents a significant engineering challenge. This necessitates the use of fans, heatsinks, etc. to dispose of the heat.
For example, data centers use electronic components that consume electricity for computing, storage and networking. The French CNRS explains a data center is like a resistor and most of the energy it consumes is transformed into heat and requires cooling systems.
Biological
Humans, like all animals, produce heat as a result of metabolism. In warm conditions, this heat exceeds a level required for homeostasis in warm-blooded animals, and is disposed of by various thermoregulation methods such as sweating and panting.
Disposal
Low temperature heat contains very little capacity to do work (Exergy), so the heat is qualified as waste heat and rejected to the environment. Economically most convenient is the rejection of such heat to water from a sea, lake or river. If sufficient cooling water is not available, the plant can be equipped with a cooling tower or air cooler to reject the waste heat into the atmosphere. In some cases it is possible to use waste heat, for instance in district heating systems.
Uses
Conversion to electricity
There are many different approaches to transfer thermal energy to electricity, and the technologies to do so have existed for several decades.
An established approach is by using a thermoelectric device, where a change in temperature across a semiconductor material creates a voltage through a phenomenon known as the Seebeck effect.
A related approach is the use of thermogalvanic cells, where a temperature difference gives rise to an electric current in an electrochemical cell.
The organic Rankine cycle, offered by companies such as Ormat, is a well-known approach, whereby an organic substance is used as working fluid instead of water. The benefit is that this process can reject heat at lower temperatures for the production of electricity than the regular water steam cycle. An example of use of the steam Rankine cycle is the Cyclone Waste Heat Engine.
Cogeneration and trigeneration
Waste of the by-product heat is reduced if a cogeneration system is used, also known as a Combined Heat and Power (CHP) system. Limitations to the use of by-product heat arise primarily from the engineering cost/efficiency challenges in effectively exploiting small temperature differences to generate other forms of energy. Applications utilizing waste heat include swimming pool heating and paper mills. In some cases, cooling can also be produced by the use of absorption refrigerators for example, in this case it is called trigeneration or CCHP (combined cooling, heat and power).
District heating
Waste heat can be used in district heating. Depending on the temperature of the waste heat and the district heating system, a heat pump must be used to reach sufficient temperatures. These are an easy and cheap way to use waste heat in cold district heating systems, as these are operated at ambient temperatures and therefore even low-grade waste heat can be used without needing a heat pump at the producer side.
Pre-heating
Waste heat can be used to preheat incoming fluids and objects before they are heated further. For instance, outgoing water can give its waste heat to incoming water in a heat exchanger before heating in homes or power plants.
Anthropogenic heat
Anthropogenic heat is heat generated by humans and human activity. The American Meteorological Society defines it as "Heat released to the atmosphere as a result of human activities, often involving combustion of fuels. Sources include industrial plants, space heating and cooling, human metabolism, and vehicle exhausts. In cities this source typically contributes 15–50 W/m2 to the local heat balance, and several hundred W/m2 in the center of large cities in cold climates and industrial areas." In 2020, the overall anthropogenic annual energy release was 168,000 terawatt-hours; given the 5.1×10^14 m2 surface area of Earth, this amounts to a global average anthropogenic heat release rate of 0.04 W/m2.
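A quick check of that arithmetic (not from the source; a back-of-the-envelope verification only):
<syntaxhighlight lang="python">
# Checking the global-average figure quoted above: 168,000 TWh of anthropogenic
# energy release per year spread over Earth's surface area of about 5.1e14 m^2.

energy_per_year_J = 168_000e12 * 3600        # 168,000 TWh expressed in joules
seconds_per_year = 365.25 * 24 * 3600
surface_area_m2 = 5.1e14

mean_flux = energy_per_year_J / seconds_per_year / surface_area_m2
print(f"{mean_flux:.3f} W/m^2")              # about 0.038, i.e. ~0.04 W/m^2
</syntaxhighlight>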
Environmental impact
Anthropogenic heat is a small influence on rural temperatures, and becomes more significant in dense urban areas. It is one contributor to urban heat islands. Other human-caused effects (such as changes to albedo, or loss of evaporative cooling) that might contribute to urban heat islands are not considered to be anthropogenic heat by this definition.
Anthropogenic heat is a much smaller contributor to global warming than greenhouse gases are. In 2005, anthropogenic waste heat flux globally accounted for only 1% of the energy flux created by anthropogenic greenhouse gases. The heat flux is not evenly distributed, with some regions higher than others, and significantly higher in certain urban areas. For example, global forcing from waste heat in 2005 was 0.028 W/m2, but was +0.39 and +0.68 W/m2 for the continental United States and western Europe, respectively.
Although waste heat has been shown to have influence on regional climates, climate forcing from waste heat is not normally calculated in state-of-the-art global climate simulations. Equilibrium climate experiments show statistically significant continental-scale surface warming (0.4–0.9 °C) produced by one 2100 AHF scenario, but not by current or 2040 estimates. Simple global-scale estimates with different growth rates of anthropogenic heat, which have recently been updated, show noticeable contributions to global warming in the following centuries. For example, a 2% p.a. growth rate of waste heat resulted in a 3 degree increase as a lower limit for the year 2300. Meanwhile, this has been confirmed by more refined model calculations.
A 2008 scientific paper showed that if anthropogenic heat emissions continue to rise at the current rate, they will become a source of warming as strong as GHG emissions in the 21st century.
See also
Cost of electricity by source
Heat recovery steam generator
Pinch analysis
Thermal pollution
Urban metabolism
Waste heat recovery unit
References
Heat transfer
Thermodynamics
Energy conversion
Climate forcing
Atmospheric radiation
Heat | Waste heat | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,149 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Materials",
"Thermodynamics",
"Waste",
"Matter",
"Dynamical systems"
] |
5,163,904 | https://en.wikipedia.org/wiki/Error%20exponent | In information theory, the error exponent of a channel code or source code over the block length of the code is the rate at which the error probability decays exponentially with the block length of the code. Formally, it is defined as the limiting ratio of the negative logarithm of the error probability to the block length of the code for large block lengths. For example, if the probability of error of a decoder drops as , where is the block length, the error exponent is . In this example, approaches for large . Many of the information-theoretic theorems are of asymptotic nature, for example, the channel coding theorem states that for any rate less than the channel capacity, the probability of the error of the channel code can be made to go to zero as the block length goes to infinity. In practical situations, there are limitations to the delay of the communication and the block length must be finite. Therefore, it is important to study how the probability of error drops as the block length go to infinity.
Error exponent in channel coding
For time-invariant DMC's
The channel coding theorem states that for any ε > 0 and for any rate less than the channel capacity, there is an encoding and decoding scheme that can be used to ensure that the probability of block error is less than ε > 0 for sufficiently long message block X. Also, for any rate greater than the channel capacity, the probability of block error at the receiver goes to one as the block length goes to infinity.
Assuming a channel coding setup as follows: the channel can transmit any of <math>M</math> messages, by transmitting the corresponding codeword (which is of length n). Each component in the codebook is drawn i.i.d. according to some probability distribution with probability mass function Q. At the decoding end, maximum likelihood decoding is done.
Let <math>X_i</math> be the <math>i</math>th random codeword in the codebook, where <math>i</math> goes from <math>1</math> to <math>M</math>. Suppose the first message is selected, so codeword <math>X_1</math> is transmitted. Given that <math>y_1</math> is received, the probability that the codeword is incorrectly detected as <math>X_2</math> is:
The function has upper bound
for Thus,
Since there are a total of M messages, and the entries in the codebook are i.i.d., the probability that is confused with any other message is times the above expression. Using the union bound, the probability of confusing with any message is bounded by:
for any . Averaging over all combinations of :
Choosing and combining the two sums over in the above formula:
Using the independence nature of the elements of the codeword, and the discrete memoryless nature of the channel:
Using the fact that each element of codeword is identically distributed and thus stationary:
Replacing M by <math>2^{nR}</math> and defining
<math>E_o(\rho, Q) = -\log_2 \sum_{y} \left[ \sum_{x} Q(x)\, p(y \mid x)^{1/(1+\rho)} \right]^{1+\rho}</math>
the probability of error becomes
<math>P_\text{error} \le 2^{-n\left[ E_o(\rho, Q) - \rho R \right]}</math>
Q and <math>\rho</math> should be chosen so that the bound is tightest. Thus, the error exponent can be defined as
<math>E_r(R) = \max_{Q} \max_{\rho \in [0,1]} \left[ E_o(\rho, Q) - \rho R \right]</math>
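As a numerical sketch (not from the source), the error exponent can be evaluated for a simple channel by a direct grid search over ρ; the binary symmetric channel, the uniform input distribution, and the rates below are assumed, illustrative choices, with logarithms taken to base 2 so that rates are in bits.
<syntaxhighlight lang="python">
# Sketch: random-coding error exponent for a binary symmetric channel (BSC)
# with crossover probability p, using Gallager's E0 function and the uniform
# input distribution Q = (1/2, 1/2). Logs are base 2, so rates are in bits.
import math

def E0(rho, p):
    # E0(rho, Q) = -log2 sum_y [ sum_x Q(x) P(y|x)^(1/(1+rho)) ]^(1+rho)
    s = 1.0 / (1.0 + rho)
    inner = 0.5 * (1 - p) ** s + 0.5 * p ** s   # identical for both outputs y
    return -math.log2(2 * inner ** (1 + rho))

def error_exponent(R, p, steps=1000):
    # maximise E0(rho) - rho*R over rho in [0, 1] by a simple grid search
    return max(E0(k / steps, p) - (k / steps) * R for k in range(steps + 1))

p = 0.1   # crossover probability; the capacity is about 0.531 bit per use
for R in (0.1, 0.3, 0.5):
    print(f"R = {R}: Er(R) = {error_exponent(R, p):.4f}")
</syntaxhighlight>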
Error exponent in source coding
For time invariant discrete memoryless sources
The source coding theorem states that for any <math>\varepsilon > 0</math> and any discrete-time i.i.d. source <math>X</math>, and for any rate less than the entropy of the source, there is a large enough <math>n</math> and an encoder that takes <math>n</math> i.i.d. repetitions of the source, <math>X^{1:n}</math>, and maps them to <math>nR</math> binary bits such that the source symbols <math>X^{1:n}</math> are recoverable from the binary bits with probability at least <math>1 - \varepsilon</math>.
Let be the total number of possible messages. Next map each of the possible source output sequences to one of the messages randomly using a uniform distribution and independently from everything else. When a source is generated the corresponding message is then transmitted to the destination. The message gets decoded to one of the possible source strings. In order to minimize the probability of error the decoder will decode to the source sequence that maximizes , where denotes the event that message was transmitted. This rule is equivalent to finding the source sequence among the set of source sequences that map to message that maximizes . This reduction follows from the fact that the messages were assigned randomly and independently of everything else.
Thus, as an example of when an error occurs, supposed that the source sequence was mapped to message as was the source sequence . If was generated at the source, but then an error occurs.
Let denote the event that the source sequence was generated at the source, so that Then the probability of error can be broken down as Thus, attention can be focused on finding an upper bound to the .
Let denote the event that the source sequence was mapped to the same message as the source sequence and that . Thus, letting denote the event that the two source sequences and map to the same message, we have that
and using the fact that and is independent of everything else have that
A simple upper bound for the term on the left can be established as
for some arbitrary real number This upper bound can be verified by noting that either equals or because the probabilities of a given input sequence are completely deterministic. Thus, if then so that the inequality holds in that case. The inequality holds in the other case as well because
for all possible source strings. Thus, combining everything and introducing some , have that
Where the inequalities follow from a variation on the Union Bound. Finally applying this upper bound to the summation for have that:
Where the sum can now be taken over all because that will only increase the bound. Ultimately yielding that
Now for simplicity let so that Substituting this new value of into the above bound on the probability of error and using the fact that is just a dummy variable in the sum gives the following as an upper bound on the probability of error:
and each of the components of are independent. Thus, simplifying the above equation yields
The term in the exponent should be maximized over in order to achieve the highest upper bound on the probability of error.
Letting see that the error exponent for the source coding case is:
See also
Source coding
Channel coding
References
R. Gallager, Information Theory and Reliable Communication, Wiley 1968
Information theory
Data compression | Error exponent | [
"Mathematics",
"Technology",
"Engineering"
] | 1,207 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
5,165,589 | https://en.wikipedia.org/wiki/ADHM%20construction | In mathematical physics and gauge theory, the ADHM construction or monad construction is the construction of all instantons using methods of linear algebra by Michael Atiyah, Vladimir Drinfeld, Nigel Hitchin, Yuri I. Manin in their paper "Construction of Instantons."
ADHM data
The ADHM construction uses the following data:
complex vector spaces V and W of dimension k and N,
k × k complex matrices B1, B2, a k × N complex matrix I and a N × k complex matrix J,
a real moment map <math>\mu_r = [B_1, B_1^\dagger] + [B_2, B_2^\dagger] + I I^\dagger - J^\dagger J</math>
a complex moment map <math>\mu_c = [B_1, B_2] + I J</math>
Then the ADHM construction claims that, given certain regularity conditions,
Given B1, B2, I, J such that <math>\mu_r = 0</math> and <math>\mu_c = 0</math>, an anti-self-dual instanton in an SU(N) gauge theory with instanton number k can be constructed,
All anti-self-dual instantons can be obtained in this way and are in one-to-one correspondence with solutions up to a U(k) rotation which acts on each B in the adjoint representation and on I and J via the fundamental and antifundamental representations,
The metric on the moduli space of instantons is that inherited from the flat metric on B, I and J.
Generalizations
Noncommutative instantons
In a noncommutative gauge theory, the ADHM construction is identical but the moment map is set equal to the self-dual projection of the noncommutativity matrix of the spacetime times the identity matrix. In this case instantons exist even when the gauge group is U(1). The noncommutative instantons were discovered by Nikita Nekrasov and Albert Schwarz in 1998.
Vortices
Setting B2 and J to zero, one obtains the classical moduli space of nonabelian vortices in a supersymmetric gauge theory with an equal number of colors and flavors, as was demonstrated in Vortices, instantons and branes. The generalization to greater numbers of flavors appeared in Solitons in the Higgs phase: The Moduli matrix approach. In both cases the Fayet–Iliopoulos term, which determines a squark condensate, plays the role of the noncommutativity parameter in the real moment map.
The construction formula
Let x be the 4-dimensional Euclidean spacetime coordinates written in quaternionic notation
Consider the 2k × (N + 2k) matrix
Then the conditions are equivalent to the factorization condition
where f(x) is a k × k Hermitian matrix.
Then a hermitian projection operator P can be constructed as
The nullspace of Δ(x) is of dimension N for generic x. The basis vectors for this null-space can be assembled into an (N + 2k) × N matrix U(x) with orthonormalization condition U†U = 1.
A regularity condition on the rank of Δ guarantees the completeness condition
The anti-selfdual connection is then constructed from U by the formula
<math>A_\mu = U^\dagger\, \partial_\mu U .</math>
See also
Monad (homological algebra)
Twistor theory
References
Hitchin, N. (1983), "On the Construction of Monopoles", Commun. Math. Phys. 89, 145–190.
Gauge theories
Differential geometry
Quantum chromodynamics | ADHM construction | [
"Physics"
] | 670 | [
"Quantum mechanics",
"Quantum physics stubs"
] |
5,167,043 | https://en.wikipedia.org/wiki/Loop%20entropy | Loop entropy is the entropy lost upon bringing together two residues of a polymer within a prescribed distance. For a single loop, the entropy varies logarithmically with the number of residues in the loop
where is the Boltzmann constant and is a coefficient that depends on the properties of the polymer. This entropy formula corresponds to a power-law distribution for the probability of the residues contacting.
The loop entropy may also vary with the position of the contacting residues. Residues near the ends of the polymer are more likely to contact (quantitatively, have a lower ) than those in the middle (i.e., far from the ends), primarily due to excluded volume effects.
Wang-Uhlenbeck entropy
The loop entropy formula becomes more complicated with multiple loops, but may be determined for a Gaussian polymer using a matrix method developed by Wang and Uhlenbeck. Let there be <math>m</math> contacts among the residues, which define <math>m</math> loops of the polymer. The Wang-Uhlenbeck matrix <math>\mathbf{W}</math> is an <math>m \times m</math> symmetric, real matrix whose elements <math>W_{ij}</math> equal the number of common residues between loops <math>i</math> and <math>j</math>. The entropy of making the specified contacts equals
<math>\Delta S = \alpha\, k_B \ln \det \mathbf{W}</math>
As an example, consider the entropy lost upon making two contacts, between residues 26 and 84 and between residues 58 and 110, in a polymer (cf. ribonuclease A). The first and second loops have lengths 58 (= 84 − 26) and 52 (= 110 − 58), respectively, and they have 26 (= 84 − 58) residues in common. The corresponding Wang-Uhlenbeck matrix is
<math>\mathbf{W} = \begin{pmatrix} 58 & 26 \\ 26 & 52 \end{pmatrix}</math>
whose determinant is 2340. Taking the logarithm and multiplying by the constants <math>\alpha k_B</math> gives the entropy.
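A quick numerical check of the example (not from the source):
<syntaxhighlight lang="python">
# Reproducing the ribonuclease A example above: two loops of 58 and 52 residues
# sharing 26 residues give a Wang-Uhlenbeck matrix with determinant 2340.
import numpy as np

W = np.array([[58, 26],
              [26, 52]])
print(np.linalg.det(W))        # 2340.0 (up to floating-point rounding)
print(58 * 52 - 26 * 26)       # exact integer check: 2340
</syntaxhighlight>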
References
Wang, M. C., & Uhlenbeck, G. E. (1945). On the theory of the Brownian motion II. Reviews of Modern Physics, 17(2-3), 323.
Thermodynamic entropy
Polymer physics | Loop entropy | [
"Physics",
"Chemistry",
"Materials_science"
] | 391 | [
"Thermodynamics stubs",
"Polymer physics",
"Physical quantities",
"Polymer stubs",
"Thermodynamic entropy",
"Entropy",
"Thermodynamics",
"Polymer chemistry",
"Statistical mechanics",
"Physical chemistry stubs",
"Organic chemistry stubs"
] |
1,402,262 | https://en.wikipedia.org/wiki/Baby%20colic | Baby colic, also known as infantile colic, is defined as episodes of crying for more than three hours a day, for more than three days a week, for three weeks in an otherwise healthy child. Often crying occurs in the evening. It typically does not result in long-term problems. The crying can result in frustration of the parents, depression following delivery, excess visits to the doctor, and child abuse.
The cause of colic is unknown. Some believe it is due to gastrointestinal discomfort like intestinal cramping. Diagnosis requires ruling out other possible causes. Concerning findings include a fever, poor activity, or a swollen abdomen. Fewer than 5% of infants with excess crying have an underlying organic disease.
Treatment is generally conservative, with little to no role for either medications or alternative therapies. Extra support for the parents may be useful. Tentative evidence supports certain probiotics for the baby and a low-allergen diet by the mother in those who are breastfed. Hydrolyzed formula may be useful in those who are bottlefed.
Colic affects 10–40% of babies. Equally common in bottle and breast-fed infants, it begins during the second week of life, peaks at 6 weeks, and resolves between 12 and 16 weeks. It rarely lasts up to one year of age. It occurs at the same rate in boys and in girls. The first detailed medical description of the problem was published in 1954.
Signs and symptoms
Colic is defined as episodes of crying for more than three hours a day, for more than three days a week for at least a three-week duration in an otherwise healthy child. It is most common around six weeks of age and gets better by six months of age. By contrast, infants normally cry an average of just over two hours a day, with the duration peaking at six weeks. With colic, periods of crying most commonly happen in the evening and for no obvious reason. Associated symptoms may include legs pulled up to the stomach, a flushed face, clenched hands, and a wrinkled brow. The cry is often high pitched (piercing).
Effect on the family
An infant with colic may affect family stability and be a cause of short-term anxiety or depression in the father and mother. It may also contribute to exhaustion and stress in the parents.
Persistent infant crying has been associated with severe marital discord, postpartum depression, early termination of breastfeeding, frequent visits to doctors, a quadrupling of laboratory tests, and prescription of medication for acid reflux. Babies with colic may be exposed to abuse, especially shaken baby syndrome.
Parent training programs for managing infantile colic may result in a reduction in crying time.
Causes
The cause of colic is generally unknown. Fewer than 5% of infants who cry excessively turn out to have an underlying organic disease, such as constipation, gastroesophageal reflux disease, lactose intolerance, anal fissures, subdural hematomas, or infantile migraine. Babies fed cow's milk have been shown to develop antibody responses to the bovine protein, and some studies have shown an association between consumption of cow's milk and infant colic. Studies performed showed conflicting evidence about the role of cow's milk allergy. While previously believed to be related to gas pains, this does not appear to be the case. Another theory holds that colic is related to hyperperistalsis of the digestive tube (increased level of activity of contraction and relaxation). The evidence that the use of anticholinergic agents improve colic symptoms supports this hypothesis.
Psychological and social factors have been proposed as a cause, but there is no evidence. Studies performed do not support the theory that maternal (or paternal) personality or anxiety causes colic, nor that it is a consequence of a difficult temperament of the baby, but families with colicky children may eventually develop anxiety, fatigue and problems with family functioning as a result. There is some evidence that cigarette smoke may increase the risk. It seems unrelated to breast or bottle feeding with rates similar in both groups. Reflux does not appear to be related to colic.
Diagnosis
Colic is diagnosed after other potential causes of crying are excluded. This can typically be done via a history and physical exam, and in most cases tests such as X-rays or blood tests are not needed. Babies who cry may simply be hungry, uncomfortable, or ill. Less than 10% of babies who would meet the definition of colic based on the amount they cry have an identifiable underlying disease.
Cause for concern include: an elevated temperature, a history of breathing problems or a child who is not appropriately gaining weight.
Indications that further investigations may be needed include:
Vomiting (vomit that is green or yellow, bloody or occurring more than five times a day)
Change in stool (constipation or diarrhea, especially with blood or mucus)
Abnormal temperature (a rectal temperature below or above the normal range)
Irritability (crying all day with few calm periods in between)
Lethargy (excess sleepiness, lack of smiles or interested gaze, weak sucking lasting over six hours)
Poor weight gain (gaining less than 15 grams a day)
Problems to consider when the above are present include:
Infections (e.g. ear infection, urine infection, meningitis, appendicitis)
Intestinal pain (e.g. food allergy, acid reflux, constipation, intestinal blockage)
Trouble breathing (e.g. from a cold, excessive dust, congenital nasal blockage, oversized tongue)
Increased brain pressure (e.g. hematoma, hydrocephalus)
Skin pain (e.g. a loose diaper pin, irritated rash, a hair wrapped around a toe)
Mouth pain (e.g. yeast infection)
Kidney pain (e.g. blockage of the urinary system)
Eye pain (e.g. scratched cornea, glaucoma)
Overdose (e.g. excessive Vitamin D, excessive sodium)
Others (e.g. migraine headache, heart failure, hyperthyroidism)
Persistently fussy babies with poor weight gain, vomiting more than five times a day, or other significant feeding problems should be evaluated for other illnesses (e.g. urinary infection, intestinal obstruction, acid reflux).
Treatment
Management of colic is generally conservative and involves the reassurance of parents. Calming measures may be used and include soothing motions, limiting stimulation, pacifier use, and carrying the baby around in a carrier, although it is not entirely clear if these actions have any effect beyond placebo. Swaddling does not appear to help.
Medication
No medications have been found to be both safe and effective. Simethicone is safe but ineffective, while dicyclomine works but is unsafe. Evidence does not support the use of cimetropium bromide, and there is little evidence for alternative medications or techniques. While medications to treat reflux are common, there is no evidence that they are useful.
Diet
Dietary changes for infants are generally not needed. In mothers who are breastfeeding, a hypoallergenic diet—not eating milk and dairy products, eggs, wheat, and nuts—may improve matters, while elimination of only cow's milk does not seem to produce any improvement. In formula-fed infants, switching to a soy-based or hydrolyzed protein formula may help. Evidence of benefit is greater for hydrolyzed protein formula, with the benefit from soy-based formula being disputed. Both these formulas have greater cost and may not be as palatable. Supplementation with fiber has not been shown to have any benefit. A 2018 Cochrane review of 15 randomized controlled trials involving 1,121 infants was unable to recommend any dietary interventions. A 2019 review determined that probiotics were no more effective than placebo although a reduction in crying time was measured.
Complementary and alternative medicine
No clear beneficial effect from spinal manipulation or massage has been shown. Further, as there is no evidence of safety for cervical manipulation for baby colic, it is not advised. There is a case of a three-month-old dying following manipulation of the neck area.
Little clinical evidence supports the efficacy of "gripe water" and caution in use is needed, especially in formulations that include alcohol or sugar. Evidence does not support lactase supplementation. The use of probiotics, specifically Lactobacillus reuteri, decreases crying time at three weeks by 46 minutes in breastfed babies but has unclear effects in those who are formula fed. Fennel also appears effective.
Prognosis
Infants who are colicky do just as well as their non-colicky peers with respect to temperament at one year of age.
Epidemiology
Colic affects 10–40% of children, occurring at the same rate in boys and in girls.
History
The word "colic" is derived from the ancient Greek word for intestine (sharing the same root as the word "colon").
It has been an age-old practice to drug crying infants. During the second century AD, the Greek physician Galen prescribed opium to calm fussy babies, and during the Middle Ages in Europe, mothers and wet nurses smeared their nipples with opium lotions before each feeding. Alcohol was also commonly given to infants.
References
External links
Ailments of unknown cause
Crying
Pediatrics
Wikipedia medicine articles ready to translate | Baby colic | [
"Biology"
] | 1,942 | [
"Crying",
"Behavior",
"Human behavior"
] |
1,402,463 | https://en.wikipedia.org/wiki/Hydroinformatics | Hydroinformatics is a branch of informatics which concentrates on the application of information and communications technologies (ICTs) in addressing the increasingly serious problems of the equitable and efficient use of water for many different purposes. Growing out of the earlier discipline of computational hydraulics, the numerical simulation of water flows and related processes remains a mainstay of hydroinformatics, which encourages a focus not only on the technology but on its application in a social context.
On the technical side, in addition to computational hydraulics, hydroinformatics has a strong interest in the use of techniques originating in the so-called artificial intelligence community, such as artificial neural networks or recently support vector machines and genetic programming. These might be used with large collections of observed data for the purpose of data mining for knowledge discovery, or with data generated from an existing, physically based model in order to generate a computationally efficient emulator of that model for some purpose.
Hydroinformatics recognises the inherently social nature of the problems of water management and of decision-making processes, and strives to understand the social processes by which technologies are brought into use. Since the problems of water management are most severe in the majority world, while the resources to obtain and develop technological solutions are concentrated in the hands of the minority, the need to examine these social processes are particularly acute.
Hydroinformatics draws on and integrates hydraulics, hydrology, environmental engineering and many other disciplines. It sees application at all points in the water cycle from atmosphere to ocean, and in artificial interventions in that cycle such as urban drainage and water supply systems. It provides support for decision making at all levels from governance and policy through management to operations.
Hydroinformatics has a growing world-wide community of researchers and practitioners, and postgraduate programmes in Hydroinformatics are offered by many leading institutions. The Journal of Hydroinformatics provides a specific outlet for Hydroinformatics research, and the community gathers to exchange ideas at the biennial conferences. These activities are coordinated by the joint IAHR, IWA, IAHS Hydroinformatics Section.
Classic Soft-Computing Techniques is the first of three volumes in the Handbook of HydroInformatics series (Elsevier) by Saeid Eslamian.
Handbook of HydroInformatics, Volume II: Advanced Machine Learning Techniques presents both the art of designing good learning algorithms and the science of analyzing an algorithm's computational and statistical properties and performance guarantees.
Handbook of HydroInformatics, Volume III: Water Data Management Best Practices presents the latest data processing techniques that are fundamental to water science and engineering disciplines.
References
External links
Hydroinformatics Lab at Brigham Young University
Hydroinformatics Lab at the University of Iowa - Research and Community Platform.
IHE Delft MSc / PhD in Hydroinformatics.
EuroAquae - European master course of Hydroinformatics and Water Management.
Hydroinformatics MSc at Newcastle University.
The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System.
Environmental engineering
Hydrology
Information science by discipline
Computational fields of study | Hydroinformatics | [
"Chemistry",
"Technology",
"Engineering",
"Environmental_science"
] | 640 | [
"Hydrology",
"Computational fields of study",
"Chemical engineering",
"Civil engineering",
"Computing and society",
"Environmental engineering"
] |
1,403,749 | https://en.wikipedia.org/wiki/Force%20density | In fluid mechanics, the force density is the negative gradient of pressure. It has the physical dimensions of force per unit volume. Force density is a vector field representing the flux density of the hydrostatic force within the bulk of a fluid. Force density is represented by the symbol f, and given by the following equation, where p is the pressure:
f = −∇p.
The net force on a differential volume element dV of the fluid is:
dF = f dV.
Force density behaves differently depending on the boundary conditions; both stick (no-slip) and mixed stick-slip boundary conditions affect it.
For a sphere placed in an arbitrary non-stationary flow field of a viscous incompressible fluid with stick boundary conditions, calculation of the force density leads to a generalisation of Faxén's theorem to force multipole moments of arbitrary order.
For a sphere moving in an incompressible fluid in non-stationary flow with mixed stick-slip boundary conditions, the force density yields an expression of the Faxén type not only for the total force but also for the total torque and the symmetric force-dipole moment.
The force density at a point in a fluid, divided by the density, is the acceleration of the fluid at that point.
The force density f is defined as the force per unit volume, so that the net force on a volume V can be calculated by:
F = ∫V f dV.
The force density in an electromagnetic field is given in CGS by:
f = ρE + (1/c) J × B,
where ρ is the charge density, E is the electric field, J is the current density, c is the speed of light, and B is the magnetic field.
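As a concrete illustration of the hydrostatic case, the following minimal Python sketch (using NumPy) evaluates f = −∇p numerically for a column of water at rest under gravity; the density, depth and grid used here are illustrative assumptions, not values taken from this article.

    import numpy as np

    rho, g, h = 1000.0, 9.81, 10.0    # water density (kg/m^3), gravity (m/s^2), depth (m)
    z = np.linspace(0.0, h, 101)      # height above the bottom (m)
    p = 101325.0 + rho * g * (h - z)  # hydrostatic pressure profile (Pa)

    f_z = -np.gradient(p, z)          # vertical force density component, N/m^3
    print(f_z[0])                     # ~ +9810 N/m^3, i.e. rho*g pointing upward

    # Dividing the force density by the density gives the acceleration the
    # pressure field alone would produce; here it balances gravity.
    print(f_z[0] / rho)               # ~ 9.81 m/s^2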
See also
Body force
Pressure gradient
Gradient
References
Density | Force density | [
"Physics",
"Chemistry",
"Mathematics"
] | 331 | [
"Fluid dynamics stubs",
"Physical quantities",
"Quantity",
"Mass",
"Density",
"Wikipedia categories named after physical quantities",
"Matter",
"Fluid dynamics"
] |
1,404,523 | https://en.wikipedia.org/wiki/Hayflick%20limit | The Hayflick limit, or Hayflick phenomenon, is the number of times a normal somatic, differentiated human cell population will divide before cell division stops.
The concept of the Hayflick limit was advanced by American anatomist Leonard Hayflick in 1961, at the Wistar Institute in Philadelphia, Pennsylvania. Hayflick demonstrated that a normal human fetal cell population will divide between 40 and 60 times in cell culture before entering a senescence phase. This finding refuted the contention by Alexis Carrel that normal cells are immortal.
Hayflick interpreted his discovery to be aging at the cellular level. The aging of cell populations appears to correlate with the overall physical aging of an organism.
Macfarlane Burnet coined the name "Hayflick limit" in his book Intrinsic Mutagenesis: A Genetic Approach to Ageing, published in 1974.
History
The belief in cell immortality
Prior to Leonard Hayflick's discovery, it was believed that vertebrate cells had an unlimited potential to replicate. Alexis Carrel, a Nobel Prize-winning surgeon, had stated "that all cells explanted in tissue culture are immortal, and that the lack of continuous cell replication was due to ignorance on how best to cultivate the cells". He claimed to have cultivated fibroblasts from the hearts of chickens (which typically live 5 to 10 years) and to have kept the culture growing for 34 years.
However, other scientists have been unable to replicate Carrel's results, and they are suspected to be due to an error in experimental procedure. To provide required nutrients, embryonic stem cells of chickens may have been re-added to the culture daily. This would have easily allowed the cultivation of new, fresh cells in the culture, so there was not an infinite reproduction of the original cells. It has been speculated that Carrel knew about this error, but he never admitted it.
Also, it has been theorized that the cells Carrel used were young enough to contain pluripotent stem cells, which, if supplied with a supporting telomerase-activation nutrient, would have been capable of staving off replicative senescence, or even possibly reversing it. Cultures not containing telomerase-active pluripotent stem cells would have been populated with telomerase-inactive cells, which would have been subject to the 50 ± 10 mitosis event limit until cellular senescence occurs as described in Hayflick's findings.
Experiment and discovery
Hayflick first became suspicious of Carrel's claims while working in a lab at the Wistar Institute. Hayflick noticed that one of his cultures of embryonic human fibroblasts had developed an unusual appearance and that cell division had slowed. Initially, he brushed this aside as an anomaly caused by contamination or technical error. However, he later observed other cell cultures exhibiting similar manifestations. Hayflick checked his research notebook and was surprised to find that the atypical cell cultures had all been cultured to approximately their 40th doubling while younger cultures never exhibited the same problems. Furthermore, conditions were similar between the younger and older cultures he observed—same culture medium, culture containers, and technician. This led him to doubt that the manifestations were due to contamination or technical error.
Hayflick next set out to prove that the cessation of normal cell replicative capacity that he observed was not the result of viral contamination, poor culture conditions or some unknown artifact. Hayflick teamed with Paul Moorhead for the definitive experiment to eliminate these as causative factors. As a skilled cytogeneticist, Moorhead was able to distinguish between male and female cells in culture. The experiment proceeded as follows: Hayflick mixed equal numbers of normal human male fibroblasts that had divided many times (cells at the 40th population doubling) with female fibroblasts that had divided fewer times (cells at the 15th population doubling). Unmixed cell populations were kept as controls. After 20 doublings of the mixed culture, only female cells remained. Cell division ceased in the unmixed control cultures at the anticipated times; when the male control culture stopped dividing, only female cells remained in the mixed culture. This suggested that technical errors or contaminating viruses were unlikely explanations as to why cell division ceased in the older cells, and proved that unless the virus or artifact could distinguish between male and female cells (which it could not) then the cessation of normal cell replication was governed by an internal counting mechanism.
These results disproved Carrel's immortality claims and established the Hayflick limit as a credible biological theory. Unlike Carrel's experiment, Hayflick's have been successfully repeated by other scientists.
Cell phases
Hayflick describes three phases in the life of normal cultured cells. At the start of his experiment he named the primary culture "phase one". Phase two is defined as the period when cells are proliferating; Hayflick called this the time of "luxuriant growth". After months of doubling the cells eventually reach phase three, a phenomenon he named "senescence", where cell replication rate slows before halting altogether.
Telomere length
The Hayflick limit has been found to correlate with the length of the telomeric region at the end of chromosomes. During the process of DNA replication of a chromosome, small segments of DNA within each telomere are unable to be copied and are lost. This occurs due to the uneven nature of DNA replication, where leading and lagging strands are not replicated symmetrically. The telomeric region of DNA does not code for any protein; it is simply a repeated code on the end region of linear eukaryotic chromosomes. After many divisions, the telomeres reach a critical length and the cell becomes senescent. It is at this point that a cell has reached its Hayflick limit.
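As a rough illustration of the relationship described above, the following Python sketch models a cell line whose telomeres shorten by a fixed amount at each division until a critical length is reached. The starting length, loss per division and critical length are illustrative assumptions, chosen only so that the count falls in the reported range of roughly 40 to 60 divisions.

    def divisions_until_senescence(telomere_bp=10_000,
                                   loss_per_division_bp=100,
                                   critical_bp=5_000):
        divisions = 0
        while telomere_bp > critical_bp:
            telomere_bp -= loss_per_division_bp  # end-replication loss each cycle
            divisions += 1
        return divisions

    print(divisions_until_senescence())  # 50 divisions with the assumed numbers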
Hayflick was the first to report that only cancer cells are immortal. This could not have been demonstrated until he had demonstrated that normal cells are mortal. Cellular senescence does not occur in most cancer cells due to expression of an enzyme called telomerase. This enzyme extends telomeres, preventing the telomeres of cancer cells from shortening and giving them infinite replicative potential. A proposed treatment for cancer is the usage of telomerase inhibitors that would prevent the restoration of the telomere, allowing the cell to die like other body cells.
Organismal aging
Hayflick suggested that his results in which normal cells have a limited replicative capacity may have significance for understanding human aging at the cellular level.
It has been reported that the limited replicative capability of human fibroblasts observed in cell culture is far greater than the number of replication events experienced by non-stem cells in vivo during a normal postnatal lifespan. In addition, it has been suggested that no inverse correlation exists between the replicative capacity of normal human cell strains and the age of the human donor from which the cells were derived, as previously argued. It is now clear that at least some of these variable results are attributable to the mosaicism of cell replication numbers at different body sites where cells were taken.
Comparisons of different species indicate that cellular replicative capacity may correlate primarily with species body mass, but more likely to species lifespan. Thus, the limited capacity of cells to replicate in culture may be directly relevant to the overall physical aging of an organism.
See also
Ageing
Apoptosis
Biological immortality
HeLa cells
References
Further reading
Genetics
Life extension
Senescence
Cellular senescence | Hayflick limit | [
"Chemistry",
"Biology"
] | 1,545 | [
"Genetics",
"Senescence",
"Cellular senescence",
"Cellular processes",
"Metabolism"
] |
1,404,820 | https://en.wikipedia.org/wiki/Institution%20of%20Mechanical%20Engineers | The Institution of Mechanical Engineers (IMechE) is an independent professional association and learned society headquartered in London, United Kingdom, that represents mechanical engineers and the engineering profession. With over 120,000 members in 140 countries, working across industries such as railways, automotive, aerospace, manufacturing, energy, biomedical and construction, the Institution is licensed by the Engineering Council to assess candidates for inclusion on its Register of Chartered Engineers, Incorporated Engineers and Engineering Technicians.
The Institution was founded at the Queen's Hotel, Birmingham, by George Stephenson in 1847. It received a Royal Charter in 1930. The Institution's headquarters, purpose-built for the Institution in 1899, is situated at No. 1 Birdcage Walk in central London.
Origins
Informal meetings are said to have taken place in 1846, at locomotive designer Charles Beyer's house in Cecil Street, Manchester, or alternatively at Bromsgrove at the house of James McConnell, after viewing locomotive trials at the Lickey Incline. Beyer, Richard Peacock, George Selby, Archibald Slate and Edward Humphrys were present. Bromsgrove seems the more likely candidate for the initial discussion, not least because McConnell was the driving force in the early years. A meeting took place at the Queen's Hotel in Birmingham to consider the idea further on 7 October and a committee was appointed with McConnell at its head to see the idea to its inauguration.
The Institution of Mechanical Engineers was then founded on 27 January 1847, in the Queen's Hotel next to Curzon Street station in Birmingham by the railway pioneer George Stephenson and others. McConnell became the first chairman. The founding of the Institution was said by Stephenson's biographer Samuel Smiles to have been spurred by outrage that Stephenson, the most famous mechanical engineer of the age, had been refused admission to the Institution of Civil Engineers unless he sent in "a probationary essay as proof of his capacity as an engineer". However, this account has been challenged as part of a pattern of exaggeration on Smiles' part aimed at glorifying the struggles that various Victorian mechanical engineers had to overcome in their personal efforts to attain greatness. Though there was certainly coolness between Stephenson and the Institution of Civil Engineers, it is more likely that the motivation behind the founding of the Institution of Mechanical Engineers was simply the need for a specific home for the growing number of mechanical engineers employed in the burgeoning railway and manufacturing industries.
Beyer proposed that George Stephenson become the Institution's first president in 1847, followed by his son, Robert Stephenson, in 1849. Beyer became vice-president and was one of the first to present papers to the Institution; Charles Geach was the first treasurer. Throughout the 19th and 20th centuries some of Britain's most notable engineers held the position of president, including Joseph Whitworth, Carl Wilhelm Siemens and Harry Ricardo. It operated from premises in Birmingham until 1877 when it moved to London, taking up its present headquarters on Birdcage Walk in 1899.
Birdcage Walk
Upon its move to London in 1877 the Institution rented premises at No. 10 Victoria Chambers, where it remained for 20 years. In 1895 the Institution bought a plot of land at Storey's Gate, on the eastern end of Birdcage Walk, for £9,500. Architect Basil Slade looked to the newly-completed Admiralty buildings facing the site for inspiration. The building was designed in the Queen Anne, 'streaky bacon', style in red brick and Portland stone. Inside, there were several features that were state of the art for the time, including a telephone, a 54-inch fan in the lecture theatre for driving air into the building, an electric lift from the Otis Elevator Company, and a Synchronome master-clock, which controlled all house timepieces. In 1933 architect James Miller, who also designed the neighbouring Institution of Civil Engineers, remodelled the building, expanding the library and introducing electric lighting.
The building would go on to host the first public presentation of Frank Whittle's jet engine in 1945. In 1943 it became the venue for the Royal Electrical & Mechanical Engineers' planning of Operation Overlord and the invasion of Normandy.
Today No. 1 Birdcage Walk hosts events, lectures, seminars and meetings in 17 conference and meeting rooms named after notable former members of the Institution, such as Whittle, Stephenson and Charles Parsons.
Membership grades and post-nominals
The following are membership grades with post-nominals:
Affiliate: (no post-nominal) The grade for students, apprentices and those interested in or involved in mechanical engineering.
AMIMechE: Associate Member of the Institution of Mechanical Engineers: this is the grade for graduates (of acceptable degrees or equivalents in engineering, mathematics or science)
MIMechE: Member of the Institution of Mechanical Engineers. For those who meet the educational and professional requirements for registration as a Chartered Mechanical Engineer (CEng, MIMechE) and also as a Chartered Engineer (CEng) or Incorporated Engineer (IEng) or Engineering Technician (EngTech) in mechanical engineering.
FIMechE: Fellow of the Institution of Mechanical Engineers. This is the highest class of elected membership, and is awarded to individuals who have demonstrated exceptional commitment to and innovation in mechanical engineering.
Awards
The James Watt International Medal is an award for excellence in engineering established in 1937 by the Institution of Mechanical Engineers. It is named after Scottish engineer James Watt (1736-1819) who developed the Watt steam engine in 1781, which was fundamental to the changes brought by the Industrial Revolution in both his native Great Britain and the rest of the world.
The Whitworth Scholarship is awarded to a few promising engineers of the main engineering disciplines for the length of a degree course. On successful completion, they become Whitworth Scholars, with a medal and are entitled to use post-nominals Wh.Sch.. It was founded by Joseph Whitworth.
The Engineering Heritage Awards were created in 1984 to help recognise and promote the value of artefacts, locations, collections and landmarks of significant engineering importance.
Along with The Manufacturer, the Institution also runs The Manufacturer MX Awards, and Formula Student, the world's largest student motorsport event.
The Tribology Gold Medal is awarded each year for outstanding and supreme achievement in the field of tribology. It is funded from The Tribology Trust Fund. It was established and first awarded in 1972. As of 2017, it has been awarded to 39 individuals from 12 different countries.
Presidents
To date, there have been 135 presidents of the Institution, who since 1922 have been elected annually for one year. The first president was George Stephenson, followed by his son Robert. Prior to 2018, Joseph Whitworth, John Penn and William Armstrong were the only presidents to have served two terms.
Pamela Liversidge in 1997 became the first female president; Professor Isobel Pollock became the second in 2012 and Carolyn Griffiths became the third in 2017.
List of presidents
† Baker resigned in June 2018. The Institution's by-laws state that a casual vacancy for President shall be filled by appointing a Past President to the role; Tony Roche was elected and duly took up office for a second term in August of that year.
Engineering Committees
The Institution of Mechanical Engineers has a number of committees that work to promote and develop thought leadership in different industry sectors. The Institution has 8 divisions: Aerospace, Automobile, Biomedical Engineering Association, Construction & Building Services, Manufacturing Industries, Power Industries, Process Industries and Railway.
Biomedical Engineering Association (BmEA) aims to bring together key workers from both medicine and engineering to discuss the latest advances and issues, to enable networking among different industry leaders, and to promote the field of Medical Engineering, also known as Bioengineering or Biomedical Engineering, to government, healthcare professionals and the wider public. This committee offers:
seminars, lectures and conferences every year;
the Journal of Engineering in Medicine;
the annual Student Project Competition.
The Railway Division was formed in 1969 when the Institution of Locomotive Engineers amalgamated with IMechE.
Arms
See also
Engineering
James Watt International Medal
Chartered Engineer
Proceedings of the Institution of Mechanical Engineers
Footnotes
References
Sources
External links
IMechE Official website
Professional Engineering magazine website
1847 establishments in the United Kingdom
ECUK Licensed Members
Mechanical engineering organizations
Organisations based in the City of Westminster
Scientific organizations established in 1847
Learned societies of the United Kingdom | Institution of Mechanical Engineers | [
"Engineering"
] | 1,680 | [
"Institution of Mechanical Engineers",
"Mechanical engineering",
"Mechanical engineering organizations"
] |
1,404,852 | https://en.wikipedia.org/wiki/Life%20Safety%20Code | The publication Life Safety Code, known as NFPA 101, is a consensus standard widely adopted in the United States. It is administered, trademarked, copyrighted, and published by the National Fire Protection Association and, like many NFPA documents, is systematically revised on a three-year cycle.
Despite its title, the standard is not a legal code, is not published as an instrument of law, and has no statutory authority in its own right. However, it is deliberately crafted with language suitable for mandatory application to facilitate adoption into law by those empowered to do so.
The bulk of the standard addresses "those construction, protection, and occupancy features necessary to minimize danger to life from the effects of fire, including smoke, heat, and toxic gases created during a fire." The standard does not address the "general fire prevention or building construction features that are normally a function of fire prevention codes and building codes".
History
The Life Safety Code was originated in 1913 by the Committee on Safety to Life (one of the NFPA's more than 200 committees). As noted in the 1991 Life Safety Code Handbook: "...the Committee devoted its attention to a study of notable fires involving loss of life and to analyzing the causes of that loss of life. This work led to the preparation of standards for the construction of stairways, fire escapes, and similar structures; for fire drills in various occupancies and for the construction and arrangement of exit facilities for factories, schools and other occupancies, which form the basis of the present Code." This study became the basis for two early NFPA publications, "Outside Stairs for Fire Exits" (1916) and "Safeguarding Factory Workers from Fire" (1918).
In 1921 the Committee on Safety to Life expanded and the publication they generated in 1927 became known as the Building Exits Code. New editions were published in 1929, 1934, 1936, 1938, 1942 and 1946.
After a disastrous series of fires between 1942 and 1946, including the Cocoanut Grove Nightclub fire in Boston, which claimed the lives of 492 people and the Winecoff Hotel fire in Atlanta which claimed 119 lives, the Building Exits Code began to be utilized as potential legal legislation. The verbiage of the code, however, was intended for building contractors and not legal statutes, so the NFPA decided to re-edit the Code and some revisions appeared in the 1948, 1949, 1951 and 1952 publications. The editions published in 1957, 1958, 1959, 1960, 1961 and 1963 refined the verbiage and presentation even further.
In 1955 NFPA 101 was broken into three separate documents, with NFPA 101B (covering nursing homes) and NFPA 101C (covering interior finishes) split off from the main code. NFPA 101C was revised once in 1956 before both publications were withdrawn and pertinent passages re-incorporated back into the main body.
The Committee on Safety to Life was restructured in 1963 and the first publication in 1966 was a complete revision. The title was changed from Building Exits Code to Code for Safety to Life from Fire in Buildings and Structures. The final revision to all "code language" (legalese) was made and it was decided that the Code would be revised and republished on a three-year schedule.
New editions were subsequently published in 1967, 1970, 1973 and 1976. The Committee was reorganized again in 1977 and the 1981 edition of the Code featured major editorial and structural changes that reflect the organization of the modern Code.
Ongoing amendment
Codes produced by NFPA are continually updated to incorporate new technologies as well as lessons learned from actual fire experiences.
The fire at The Station nightclub in 2003, which claimed the lives of 100 and injured more than 200, resulted in swift attention to several amendments specific to nightclubs and large crowds.
Current code
The Life Safety Code is unusual among safety codes in that it applies to existing structures as well as new structures. When a Code revision is adopted into local law, existing structures may have a grace period before they must comply, but all structures must comply with code. In some cases, the authority having jurisdiction can simply permit previously approved features to be used under specified conditions. In other cases, the local law amends the Code to omit undesired sections prior to its adoption.
When some or all of the Code is adopted as regulations in a jurisdiction, it can be enforced by inspectors from local zoning boards, fire departments, building inspectors, fire marshals or other bodies and authorities having jurisdiction.
In particular, the Life Safety Code deals with hazards to human life in buildings, public and private conveyances and other human occupancies, but only when permanently fixed to a foundation, attached to a building, or permanently moored for human habitation. Regardless of official adoption as regulations, Life Safety Code provides a valuable source for determination of liability in accidents, and many codes and related standards are sponsored by insurance companies.
The Life Safety Code is coordinated with hundreds of other building codes and standards such as National Electrical Code NFPA 70, fuel-gas, mechanical, plumbing (for sprinklers and standpipes), energy and fire codes.
Normally, the Life Safety Code is used by architects and designers of vehicles and vessels used for human occupancy. Since the Life Safety Code is a valuable source for determining liability in accidents, it is also used by insurance companies to evaluate risks and set rates, not to mention assessment of compliance after an incident.
In the United States, the words Life Safety Code and NFPA 101 are registered trademarks of NFPA. All or part of the NFPA's Life Safety Code are adopted as local regulations throughout the country.
Sample sections
This listing of chapters from the 2009 edition shows the scope of the Code.
Beyond the policies, core definitions and topical requirements of chapters 1–11, chapters 12–42 address the specific requirements for each listed class of occupancy, making reference to Chapters 1–11, as well as other codes.
1. Administration
2. Referenced Publications
3. Definitions
4. General
5. Performance Based Option
6. Classification of Occupancy and Hazard of Contents
7. Means of Egress
8. Features of Fire Protection
9. Building Service and Fire Protection Equipment
10. Interior Finish, Contents and Furnishings
11. Special structures and High Rise Buildings
12. New Assembly Occupancies
13. Existing Assembly Occupancies
14. New Educational Occupancies
15. Existing Educational Occupancies
16 New Day-Care Occupancies
17. Existing Day Care Occupancies
18. New Health Care Occupancies
19. Existing Health Care Occupancies
20. New Ambulatory Health Care Occupancies
21. Existing Ambulatory Health Care Occupancies
22. New Detention and Correctional Occupancies
23. Existing Detention and Correctional Occupancies
24. One- and Two-Family Dwellings
25. Reserved
26. Lodging and Rooming Houses
27. Reserved
28. New Hotels and Dormitories
29. Existing Hotels and Dormitories
30. New Apartment Buildings
31. Existing Apartment Buildings
32. New Residential Board and Care Occupancies
33. Existing Residential Board and Care Facilities
34. Reserved
35. Reserved
36. New Mercantile Occupancies
37. Existing Mercantile Occupancies
38. New Business Occupancies
39. Existing Business Occupancies
40. Industrial Occupancies
41. Reserved
42. Storage Occupancies
43. Building Rehabilitation (first appeared in 2006 Code)
Annex A: Explanatory material
Annex B: Use of elevators for early evacuation
Annex C: Supplemental Evacuation Equipment
The Code and corresponding Handbook also include several supplemental publications including:
Case Histories: Fires Influencing the Life Safety Code
Fire Alarm Systems for Life Safety Code Users (NFPA 72 and related standards)
Brief Introduction to Sprinkler Systems... (NFPA 13)
Fire Test Standards (According to 25 different codes)
Home Security and Fire Safety (crime prevention versus fire safety)
Application of Performance Based Design Concepts
Technical and substantive changes
See also
Building code
Fire code
Fire Safety Equivalency System
Sanitation code
OSHA
Electrical code
References
External links
NFPA Web Site
NFPA 101 Life Safety Code
States Adopting NFPA 101
United States Access Board
Building engineering
Construction standards
Real estate in the United States
Safety codes
NFPA Standards | Life Safety Code | [
"Engineering"
] | 1,725 | [
"Construction standards",
"Building engineering",
"Construction",
"Civil engineering",
"Architecture"
] |
1,406,502 | https://en.wikipedia.org/wiki/Monitor%20unit | A monitor unit (MU) is a measure of machine output from a clinical accelerator for radiation therapy such as a linear accelerator or an orthovoltage unit. Monitor units are measured by monitor chambers, which are ionization chambers that measure the dose delivered by a beam and are built into the treatment head of radiotherapy linear accelerators.
Calibration and dose quantities
Linear accelerators are calibrated to give a particular absorbed dose under particular conditions, although the definition and measurement configuration may vary among medical clinics.
The most common definitions are:
The monitor chamber reads 100 MU when an absorbed dose of 1 gray (100 rads) is delivered to a point at the depth of maximum dose in a water-equivalent phantom whose surface is at the isocenter of the machine (i.e. usually at 100 cm from the source) with a field size at the surface of 10 cm × 10 cm.
The monitor chamber reads 100 MU when an absorbed dose of 1 gray is delivered to a point at a given depth in the phantom with the surface of the phantom positioned so that the specified point is at the isocentre of the machine and the field size is 10 cm × 10 cm at the isocentre.
Some linear accelerators are calibrated using source-to-axis distance (SAD) instead of source-to-surface distance (SSD), and calibration (monitor unit definition) may vary depending on hospital custom.
Early radiotherapy was performed using "constant SSD" treatments, and so the definition of monitor unit was adopted to reflect this calibration geometry.
Modern radiotherapy is performed using isocentric treatment plans, so newer definitions of the monitor unit are based on geometry at the isocenter based on the source-to-axis distance (SAD).
Secondary monitor unit calculations
Nearly 60% of reported radiotherapy errors have involved the lack of an appropriate independent secondary check of the treatment plan or dose calculation.
With continuing development and technological advances, radiotherapy requires that high doses of radiation be delivered to the tumor with increasing precision. According to the recommendations of the International Commission on Radiation Units and Measurements (ICRU) in Publication 24, the delivered dose should not deviate by more than ±5% of the prescribed dose. More recently, the ICRU has issued updated recommendations in Publication 62.
Commercially available computerized treatment planning systems are often used in radiotherapy services to perform monitoring unit (MU) calculations to deliver the prescribed dose to the patient. As only a part of the total dose uncertainty originates from the calculation process in treatment planning, the tolerance for accuracy of planning systems has to be smaller.
Publications on quality assurance in radiotherapy have recommended routine checks of MU calculations through independent manual calculation. This type of verification can also increase confidence in the accuracy of the algorithm and in the data integrity of the beams used, in addition to providing an indication of the limitations of the application of conventional dose calculation algorithms used by planning systems. Several commercial software packages are available for performing such independent MU calculations.
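A simple independent check typically recomputes the monitor units from the prescribed dose and a handful of measured factors. The following Python sketch shows the general form of such a calculation for a single isocentric photon field; the factor names follow common treatment-planning usage, but all numerical values are illustrative assumptions and do not describe any particular machine.

    def monitor_units(prescribed_dose_cGy,
                      reference_dose_rate_cGy_per_MU=1.0,  # calibration: 1 cGy per MU
                      output_factor=0.98,                  # collimator and phantom scatter
                      tissue_phantom_ratio=0.85,           # depth dependence
                      wedge_factor=1.0,
                      inverse_square_factor=1.0):
        dose_per_MU = (reference_dose_rate_cGy_per_MU * output_factor *
                       tissue_phantom_ratio * wedge_factor * inverse_square_factor)
        return prescribed_dose_cGy / dose_per_MU

    # A 200 cGy fraction with the assumed factors requires roughly 240 MU:
    print(round(monitor_units(200.0), 1))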
References
Radiation therapy
Medical physics
Oncology
X-rays | Monitor unit | [
"Physics"
] | 606 | [
"Applied and interdisciplinary physics",
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Medical physics"
] |
1,406,812 | https://en.wikipedia.org/wiki/Energy%20harvesting | Energy harvesting (EH) – also known as power harvesting, energy scavenging, or ambient power – is the process by which energy is derived from external sources (e.g., solar power, thermal energy, wind energy, salinity gradients, and kinetic energy, also known as ambient energy), then stored for use by small, wireless autonomous devices, like those used in wearable electronics, condition monitoring, and wireless sensor networks.
Energy harvesters usually provide a very small amount of power for low-energy electronics. While the input fuel to some large-scale energy generation costs resources (oil, coal, etc.), the energy source for energy harvesters is present as ambient background. For example, temperature gradients exist from the operation of a combustion engine and in urban areas, there is a large amount of electromagnetic energy in the environment due to radio and television broadcasting.
One of the first examples of ambient energy being used to produce electricity was the successful use of electromagnetic radiation (EMR) to power the crystal radio.
The principles of energy harvesting from ambient EMR can be demonstrated with basic components.
Operation
Energy harvesting devices converting ambient energy into electrical energy have attracted much interest in both the military and commercial sectors. Some systems convert motion, such as that of ocean waves, into electricity to be used by oceanographic monitoring sensors for autonomous operation. Future applications may include high-power output devices (or arrays of such devices) deployed at remote locations to serve as reliable power stations for large systems. Another application is in wearable electronics, where energy-harvesting devices can power or recharge cell phones, mobile computers, and radio communication equipment. All of these devices must be sufficiently robust to endure long-term exposure to hostile environments and have a broad range of dynamic sensitivity to exploit the entire spectrum of wave motions. In addition, one of the latest techniques to generate electric power from vibration waves is the utilization of Auxetic Boosters. This method falls under the category of piezoelectric-based vibration energy harvesting (PVEH), where the harvested electric energy can be directly used to power wireless sensors, monitoring cameras, and other Internet of Things (IoT) devices.
Accumulating energy
Energy can also be harvested to power small autonomous sensors such as those developed using MEMS technology. These systems are often very small and require little power, but their applications are limited by the reliance on battery power. Scavenging energy from ambient vibrations, wind, heat, or light could enable smart sensors to function indefinitely.
Typical power densities available from energy harvesting devices are highly dependent upon the specific application (affecting the generator's size) and the design itself of the harvesting generator. In general, for motion-powered devices, typical values are a few μW/cm3 for human body-powered applications and hundreds of μW/cm3 for generators powered by machinery. Most energy-scavenging devices for wearable electronics generate very little power.
Storage of power
In general, energy can be stored in a capacitor, super capacitor, or battery. Capacitors are used when the application needs to provide huge energy spikes. Batteries leak less energy and are therefore used when the device needs to provide a steady flow of energy. These aspects of the battery depend on the type that is used. A common type of battery that is used for this purpose is the lead acid or lithium-ion battery although older types such as nickel metal hydride are still widely used today. Compared to batteries, super capacitors have virtually unlimited charge-discharge cycles and can therefore operate forever, enabling a maintenance-free operation in IoT and wireless sensor devices.
Use of the power
Current interest in low-power energy harvesting is for independent sensor networks. In these applications, an energy harvesting scheme puts power stored into a capacitor then boosts/regulates it to a second storage capacitor or battery for use in the microprocessor or in the data transmission. The power is usually used in a sensor application and the data is stored or transmitted, possibly through a wireless method.
Motivation
One of the main driving forces behind the search for new energy harvesting devices is the desire to power sensor networks and mobile devices without batteries that need external charging or service. Batteries have several limitations, such as limited lifespan, environmental impact, size, weight, and cost. Energy harvesting devices can provide an alternative or complementary source of power for applications that require low power consumption, such as remote sensing, wearable electronics, condition monitoring, and wireless sensor networks. Energy harvesting devices can also extend the battery life or enable batteryless operation of some applications.
Another motivation for energy harvesting is the potential to address the issue of climate change by reducing greenhouse gas emissions and fossil fuel consumption. Energy harvesting devices can utilize renewable and clean sources of energy that are abundant and ubiquitous in the environment, such as solar, thermal, wind, and kinetic energy. Energy harvesting devices can also reduce the need for power transmission and distribution systems that cause energy losses and environmental impacts. Energy harvesting devices can therefore contribute to the development of a more sustainable and resilient energy system.
Recent research in energy harvesting has led to the innovation of devices capable of powering themselves through user interactions. Notable examples include battery-free game boys and other toys, which showcase the potential of devices powered by the energy generated from user actions, such as pressing buttons or turning knobs. These studies highlight how energy harvested from interactions can not only power the devices themselves but also extend their operational autonomy, promoting the use of renewable energy sources and reducing reliance on traditional batteries.
Energy sources
There are many small-scale energy sources that generally cannot be scaled up to industrial size in terms of comparable output to industrial size solar, wind or wave power:
Some wristwatches are powered by kinetic energy (called automatic watches) generated through movement of the arm when walking. The arm movement causes winding of the watch's mainspring. Other designs, like Seiko's Kinetic, use a loose internal permanent magnet to generate electricity.
Photovoltaics is a method of generating electrical power by converting solar radiation into direct current electricity using semiconductors that exhibit the photovoltaic effect. Photovoltaic power generation employs solar panels composed of a number of cells containing a photovoltaic material. Photovoltaics have been scaled up to industrial size and large-scale solar farms now exist.
Thermoelectric generators (TEGs) consist of the junction of two dissimilar materials and the presence of a thermal gradient. High-voltage outputs are possible by connecting many junctions electrically in series and thermally in parallel. Typical performance is 100–300 μV/K per junction. These can be utilized to capture mWs of energy from industrial equipment, structures, and even the human body. They are typically coupled with heat sinks to improve the temperature gradient; a simple series-voltage estimate is sketched after this list.
Micro wind turbines are used to harvest kinetic energy readily available in the environment in the form of wind to fuel low-power electronic devices such as wireless sensor nodes. When air flows across the blades of the turbine, a net pressure difference is developed between the wind speeds above and below the blades. This will result in a lift force generated which in turn rotates the blades. Similar to photovoltaics, wind farms have been constructed on an industrial scale and are being used to generate substantial amounts of electrical energy.
Piezoelectric crystals or fibers generate a small voltage whenever they are mechanically deformed. Vibration from engines can stimulate piezoelectric materials, as can the heel of a shoe or the pushing of a button.
Special antennas can collect energy from stray radio waves. This can also be done with a Rectenna and theoretically at even higher frequency EM radiation with a Nantenna.
Power from keys pressed during use of a portable electronic device or remote controller, using magnet and coil or piezoelectric energy converters, may be used to help power the device.
Vibration energy harvesting, based on electromagnetic induction, uses a magnet and a copper coil in the most simple versions to generate a current that can be converted into electricity.
Electrically-charged humidity produces electricity in the Air-gen, a nanopore-based device invented by a group at the University of Massachusetts at Amherst led by Jun Yao.
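For the thermoelectric generators mentioned in the list above, the open-circuit voltage of a series stack can be estimated directly from the per-junction Seebeck coefficient. The Python sketch below uses a value from the quoted 100–300 μV/K range; the junction count and temperature difference are illustrative assumptions.

    def teg_open_circuit_voltage(n_junctions, seebeck_uV_per_K, delta_T_K):
        # Junctions connected electrically in series add their Seebeck voltages.
        return n_junctions * seebeck_uV_per_K * 1e-6 * delta_T_K  # volts

    # 250 junctions at 200 uV/K across a 10 K temperature difference:
    print(f"{teg_open_circuit_voltage(250, 200, 10):.2f} V")  # 0.50 V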
Ambient-radiation sources
A possible source of energy comes from ubiquitous radio transmitters. Historically, either a large collection area or close proximity to the radiating wireless energy source is needed to get useful power levels from this source. The nantenna is one proposed development which would overcome this limitation by making use of the abundant natural radiation (such as solar radiation).
One idea is to deliberately broadcast RF energy to power and collect information from remote devices. This is now commonplace in passive radio-frequency identification (RFID) systems, but safety regulations and the US Federal Communications Commission (and equivalent bodies worldwide) limit the maximum power that can be transmitted this way for civilian use. This method has been used to power individual nodes in a wireless sensor network.
Fluid flow
Various turbine and non-turbine generator technologies can harvest airflow. Towered wind turbines and airborne wind energy systems (AWES) harness the flow of air. Multiple companies are developing these technologies, which can operate in low-light environments, such as HVAC ducts, and can be scaled and optimized for the energy requirements of specific applications.
The flow of blood can also be utilized to power devices. For example, a pacemaker developed at the University of Bern uses blood flow to wind up a spring, which then drives an electrical micro-generator.
Water energy harvesting has seen advancements in design, such as generators with transistor-like architecture, achieving high energy conversion efficiency and power density.
Photovoltaic
Photovoltaic (PV) energy harvesting wireless technology offers significant advantages over wired or solely battery-powered sensor solutions: virtually inexhaustible sources of power with little or no adverse environmental effects. Indoor PV harvesting solutions have to date been powered by specially tuned amorphous silicon (aSi), a technology most used in solar calculators. In recent years new PV technologies have come to the forefront in energy harvesting, such as dye-sensitized solar cells (DSSC). The dyes absorb light much like chlorophyll does in plants. Electrons released on impact escape into the layer of TiO2 and from there diffuse through the electrolyte; because the dye can be tuned to the visible spectrum, much higher power can be produced. At a DSSC can provide over per cm2.
Piezoelectric
The piezoelectric effect converts mechanical strain into electric current or voltage. This strain can come from many different sources. Human motion, low-frequency seismic vibrations, and acoustic noise are everyday examples. Except in rare instances the piezoelectric effect operates in AC requiring time-varying inputs at mechanical resonance to be efficient.
Most piezoelectric electricity sources produce power on the order of milliwatts, too small for system application, but enough for hand-held devices such as some commercially available self-winding wristwatches. One proposal is that they are used for micro-scale devices, such as in a device harvesting micro-hydraulic energy. In this device, the flow of pressurized hydraulic fluid drives a reciprocating piston supported by three piezoelectric elements which convert the pressure fluctuations into an alternating current.
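A rough sense of the scale involved can be obtained from a simple d33 model of a single compressed piezoelectric element. The Python sketch below ignores losses, and the material constants and applied force are illustrative assumptions roughly representative of a small PZT disc under a footstep-like load.

    d33 = 400e-12    # piezoelectric charge constant, C/N (assumed PZT-like value)
    C = 20e-9        # element capacitance, F (assumed)
    F = 500.0        # applied force, N (assumed)

    Q = d33 * F      # generated charge, C
    V = Q / C        # open-circuit voltage, V
    E = 0.5 * Q * V  # electrostatic energy stored per compression, J

    print(f"V = {V:.1f} V, E = {E * 1e6:.1f} microjoules per cycle")  # V = 10.0 V, E = 1.0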
As piezo energy harvesting has been investigated only since the late 1990s, it remains an emerging technology. Nevertheless, some interesting improvements were made with the self-powered electronic switch at INSA school of engineering, implemented by the spin-off Arveni. In 2006, the proof of concept of a battery-less wireless doorbell push button was created, and recently, a product showed that classical wireless wallswitch can be powered by a piezo harvester. Other industrial applications appeared between 2000 and 2005, to harvest energy from vibration and supply sensors for example, or to harvest energy from shock.
Piezoelectric systems can convert motion from the human body into electrical power. DARPA has funded efforts to harness energy from leg and arm motion, shoe impacts, and blood pressure for low level power to implantable or wearable sensors. The nanobrushes are another example of a piezoelectric energy harvester. They can be integrated into clothing. Multiple other nanostructures have been exploited to build an energy-harvesting device, for example, a single crystal PMN-PT nanobelt was fabricated and assembled into a piezoelectric energy harvester in 2016. Careful design is needed to minimise user discomfort. These energy harvesting sources by association affect the body. The Vibration Energy Scavenging Project is another project that is set up to try to scavenge electrical energy from environmental vibrations and movements. Microbelt can be used to gather electricity from respiration. Besides, as the vibration of motion from human comes in three directions, a single piezoelectric cantilever based omni-directional energy harvester is created by using 1:2 internal resonance. Finally, a millimeter-scale piezoelectric energy harvester has also already been created.
Piezo elements are being embedded in walkways to recover the "people energy" of footsteps. They can also be embedded in shoes to recover "walking energy". Researchers at MIT developed the first micro-scale piezoelectric energy harvester using thin film PZT in 2005. Arman Hajati and Sang-Gook Kim invented the Ultra Wide-Bandwidth micro-scale piezoelectric energy harvesting device by exploiting the nonlinear stiffness of a doubly clamped microelectromechanical systems (MEMS) resonator. The stretching strain in a doubly clamped beam shows a nonlinear stiffness, which provides a passive feedback and results in amplitude-stiffened Duffing mode resonance. Typically, piezoelectric cantilevers are adopted for the above-mentioned energy harvesting system. One drawback is that the piezoelectric cantilever has a gradient strain distribution, i.e., the piezoelectric transducer is not fully utilized. To address this issue, triangle-shaped and L-shaped cantilevers have been proposed for uniform strain distribution.
In 2018, Soochow University researchers reported hybridizing a triboelectric nanogenerator and a silicon solar cell by sharing a mutual electrode. This device can collect solar energy or convert the mechanical energy of falling raindrops into electricity.
UK telecom company Orange UK created an energy harvesting T-shirt and boots. Other companies have also done the same.
Energy from smart roads and piezoelectricity
Brothers Pierre Curie and Jacques Curie established the concept of the piezoelectric effect in 1880. The piezoelectric effect converts mechanical strain into voltage or electric current and generates electric energy from motion, weight, vibration, and temperature changes.
Considering the piezoelectric effect in thin-film lead zirconate titanate (PZT), microelectromechanical systems (MEMS) power-generating devices have been developed. During recent improvements in piezoelectric technology, Aqsa Abbasi differentiated two modes of vibration converters and re-designed them to resonate at specific frequencies from an external vibration energy source, thereby creating electrical energy via the piezoelectric effect using an electromechanically damped mass.
However, Abbasi further developed beam-structured electrostatic devices that are more difficult to fabricate than comparable PZT MEMS devices, because general silicon processing involves many more mask steps that do not require a PZT film. Piezoelectric sensors and actuators have a cantilever-beam structure that consists of a membrane, a bottom electrode, a piezoelectric film, and a top electrode. Several mask steps are required for patterning each layer, and the resulting devices have a very low induced voltage. Pyroelectric crystals have a unique polar axis along which spontaneous polarization exists; such crystals belong to the ten polar crystal classes, in which the special polar (crystallophysical) axis coincides with one of the crystallographic axes or lies in a unique crystallographic plane. Consequently, the electric centers of positive and negative charges of an elementary cell are displaced from their equilibrium positions, i.e., the spontaneous polarization of the crystal changes; all of the crystals considered therefore possess spontaneous polarization. Since the piezoelectric effect in pyroelectric crystals arises as a result of changes in their spontaneous polarization under external effects (electric fields, mechanical stresses), Abbasi considered the change in the polarization components along all three axes. In a first approximation this change is proportional to the mechanical stress causing it, ΔPi = dij σj, where σ represents the mechanical stress and d represents the piezoelectric moduli.
PZT thin films have attracted attention for applications such as force sensors, accelerometers, gyroscopes, actuators, tunable optics, micro pumps, ferroelectric RAM, display systems and smart roads. When energy sources are limited, energy harvesting plays an important role in the environment. Smart roads have the potential to play an important role in power generation. Embedding piezoelectric material in the road can convert the pressure exerted by moving vehicles into voltage and current.
Smart transportation intelligent system
Piezoelectric sensors are most useful in smart-road technologies that can be used to create systems that are intelligent and improve productivity in the long run. Imagine highways that alert motorists of a traffic jam before it forms, bridges that report when they are at risk of collapse, or an electric grid that fixes itself when blackouts hit. For many decades, scientists and experts have argued that the best way to fight congestion is intelligent transportation systems, such as roadside sensors to measure traffic and synchronized traffic lights to control the flow of vehicles. But the spread of these technologies has been limited by cost. There are also some other smart-technology shovel-ready projects which could be deployed fairly quickly, but most of the technologies are still at the development stage and might not be practically available for five years or more.
Pyroelectric
The pyroelectric effect converts a temperature change into electric current or voltage. It is analogous to the piezoelectric effect, which is another type of ferroelectric behavior. Pyroelectricity requires time-varying inputs and suffers from small power outputs in energy harvesting applications due to its low operating frequencies. However, one key advantage of pyroelectrics over thermoelectrics is that many pyroelectric materials are stable up to 1200 °C or higher, enabling energy harvesting from high temperature sources and thus increasing thermodynamic efficiency.
One way to directly convert waste heat into electricity is by executing the Olsen cycle on pyroelectric materials. The Olsen cycle consists of two isothermal and two isoelectric field processes in the electric displacement-electric field (D-E) diagram. The principle of the Olsen cycle is to charge a capacitor via cooling under low electric field and to discharge it under heating at higher electric field. Several pyroelectric converters have been developed to implement the Olsen cycle using conduction, convection, or radiation. It has also been established theoretically that pyroelectric conversion based on heat regeneration using an oscillating working fluid and the Olsen cycle can reach Carnot efficiency between a hot and a cold thermal reservoir. Moreover, recent studies have established polyvinylidene fluoride trifluoroethylene [P(VDF-TrFE)] polymers and lead lanthanum zirconate titanate (PLZT) ceramics as promising pyroelectric materials to use in energy converters due to their large energy densities generated at low temperatures. Additionally, a pyroelectric scavenging device that does not require time-varying inputs was recently introduced. The energy-harvesting device uses the edge-depolarizing electric field of a heated pyroelectric to convert heat energy into mechanical energy instead of drawing electric current off two plates attached to the crystal-faces.
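Because pyroelectric harvesting normally relies on a time-varying temperature, the short-circuit current scales with the heating or cooling rate. A minimal sketch of that scaling is given below; the pyroelectric coefficient, electrode area and heating rate are assumed illustrative values only.

# Short-circuit pyroelectric current i = p * A * dT/dt (illustrative values).
p = 380e-6          # assumed pyroelectric coefficient, C/(m^2*K), order of PZT ceramics
area = 1e-4         # electrode area, m^2 (1 cm^2)
dT_dt = 5.0         # heating/cooling rate, K/s

current = p * area * dT_dt
print(f"pyroelectric current = {current * 1e9:.0f} nA")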
Thermoelectrics
In 1821, Thomas Johann Seebeck discovered that a thermal gradient formed between two dissimilar conductors produces a voltage. At the heart of the thermoelectric effect is the fact that a temperature gradient in a conducting material results in heat flow; this results in the diffusion of charge carriers. The flow of charge carriers between the hot and cold regions in turn creates a voltage difference. In 1834, Jean Charles Athanase Peltier discovered that running an electric current through the junction of two dissimilar conductors could, depending on the direction of the current, cause it to act as a heater or cooler. The heat absorbed or produced is proportional to the current, and the proportionality constant is known as the Peltier coefficient. Today, due to knowledge of the Seebeck and Peltier effects, thermoelectric materials can be used as heaters, coolers and generators (TEGs).
Ideal thermoelectric materials have a high Seebeck coefficient, high electrical conductivity, and low thermal conductivity. Low thermal conductivity is necessary to maintain a high thermal gradient at the junction. Standard thermoelectric modules manufactured today consist of P- and N-doped bismuth-telluride semiconductors sandwiched between two metallized ceramic plates. The ceramic plates add rigidity and electrical insulation to the system. The semiconductors are connected electrically in series and thermally in parallel.
Miniature thermocouples have been developed that convert body heat into electricity and generate 40 μW at 3 V with a 5-degree temperature gradient, while on the other end of the scale, large thermocouples are used in nuclear RTG batteries.
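For a thermoelectric module, the open-circuit voltage follows from the Seebeck effect (V = N·S·ΔT for N couples in series) and the maximum power delivered to a matched load is V²/4R. The sketch below runs through that arithmetic with assumed values loosely representative of a small body-heat harvester; none of the numbers describe a specific product.

# Thermoelectric generator output for a small temperature gradient (illustrative values).
seebeck = 200e-6      # assumed Seebeck coefficient per couple, V/K
n_couples = 100       # number of thermocouples connected electrically in series
delta_T = 5.0         # temperature difference across the module, K
r_internal = 10.0     # assumed internal electrical resistance, ohm

v_open = n_couples * seebeck * delta_T      # open-circuit voltage, V
p_max = v_open ** 2 / (4 * r_internal)      # power into a matched load, W
print(f"V_oc = {v_open * 1e3:.0f} mV, P_max = {p_max * 1e6:.0f} uW")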
Practical examples are the finger-heartratemeter by the Holst Centre and the thermogenerators by the Fraunhofer-Gesellschaft.
Advantages to thermoelectrics:
No moving parts allow continuous operation for many years.
Thermoelectrics contain no materials that must be replenished.
Heating and cooling can be reversed.
One downside to thermoelectric energy conversion is low efficiency (currently less than 10%). The development of materials that are able to operate in higher temperature gradients, and that can conduct electricity well without also conducting heat (something that was until recently thought impossible), will result in increased efficiency.
Future work in thermoelectrics could be to convert wasted heat, such as in automobile engine combustion, into electricity.
Electrostatic (capacitive)
This type of harvesting is based on the changing capacitance of vibration-dependent capacitors. Vibrations separate the plates of a charged variable capacitor, and mechanical energy is converted into electrical energy.
Electrostatic energy harvesters need a polarization source to work and to convert mechanical energy from vibrations into electricity. The polarization source should be in the order of some hundreds of volts; this greatly complicates the power management circuit. Another solution consists in using electrets, that are electrically charged dielectrics able to keep the polarization on the capacitor for years.
It's possible to adapt structures from classical electrostatic induction generators, which also extract energy from variable capacitances, for this purpose. The resulting devices are self-biasing, and can directly charge batteries, or can produce exponentially growing voltages on storage capacitors, from which energy can be periodically extracted by DC/DC converters.
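The energy gained per vibration cycle in a charge-constrained electrostatic harvester follows directly from E = Q²/2C: as the vibration pulls the plates apart, the capacitance drops and the stored energy rises at the expense of mechanical work. A minimal sketch with assumed capacitance values and priming voltage:

# Energy gained per cycle of a charge-constrained variable capacitor (illustrative values).
c_max = 200e-12      # capacitance with plates close together, F
c_min = 20e-12       # capacitance with plates pulled apart, F
v_prime = 10.0       # priming (polarization) voltage applied at C_max, V
frequency = 50.0     # vibration frequency, Hz

q = c_max * v_prime                  # charge held constant while the plates separate
e_start = 0.5 * q ** 2 / c_max
e_end = 0.5 * q ** 2 / c_min
delta_e = e_end - e_start            # mechanical energy converted per cycle, J
print(f"energy/cycle = {delta_e * 1e9:.0f} nJ, power ~ {delta_e * frequency * 1e6:.1f} uW")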
Magnetic induction
Magnetic induction refers to the production of an electromotive force (i.e., voltage) in a changing magnetic field. This changing magnetic field can be created by motion, either rotation (i.e. Wiegand effect and Wiegand sensors) or linear movement (i.e. vibration).
Magnets wobbling on a cantilever are sensitive to even small vibrations and generate microcurrents by moving relative to conductors due to Faraday's law of induction. By developing a miniature device of this kind in 2007, a team from the University of Southampton made possible the planting of such a device in environments that preclude having any electrical connection to the outside world. Sensors in inaccessible places can now generate their own power and transmit data to outside receivers.
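The open-circuit voltage of such a harvester follows Faraday's law, emf = −N dΦ/dt; for a sinusoidal flux variation the peak emf is N·ΔΦ·ω. The sketch below is only an order-of-magnitude estimate with assumed coil and vibration parameters, not a model of the Southampton device.

import math

# Peak emf of a coil seeing a sinusoidally varying flux (illustrative values).
n_turns = 2000            # number of coil turns
flux_amplitude = 1e-6     # assumed peak flux change per turn, Wb
frequency = 60.0          # vibration frequency, Hz

omega = 2 * math.pi * frequency
emf_peak = n_turns * flux_amplitude * omega   # peak open-circuit voltage, V
print(f"peak emf = {emf_peak * 1e3:.0f} mV")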
One of the major limitations of the magnetic vibration energy harvester developed at the University of Southampton is the size of the generator, in this case approximately one cubic centimeter, which is much too large to integrate into today's mobile technologies. The complete generator including circuitry is a massive 4 cm by 4 cm by 1 cm, nearly the same size as some mobile devices such as the iPod nano. Further reductions in the dimensions are possible through the integration of new and more flexible materials as the cantilever beam component. In 2012, a group at Northwestern University developed a vibration-powered generator out of polymer in the form of a spring. This device was able to target the same frequencies as the University of Southampton group's silicon-based device but with one third the size of the beam component.
A new approach to magnetic induction based energy harvesting has also been proposed by using ferrofluids. The journal article, "Electromagnetic ferrofluid-based energy harvester", discusses the use of ferrofluids to harvest low frequency vibrational energy at 2.2 Hz with a power output of ~80 mW per g.
Quite recently, the change in domain wall pattern with the application of stress has been proposed as a method to harvest energy using magnetic induction. In this study, the authors have shown that applied stress can change the domain pattern in microwires. Ambient vibrations can cause stress in microwires, which can induce a change in domain pattern and hence change the induction. Power of the order of μW/cm2 has been reported.
Commercially successful vibration energy harvesters based on magnetic induction are still relatively few in number. Examples include products developed by Swedish company ReVibe Energy, a technology spin-out from Saab Group. Another example is the products developed from the early University of Southampton prototypes by Perpetuum. These have to be sufficiently large to generate the power required by wireless sensor nodes (WSN) but in M2M applications this is not normally an issue. These harvesters are now being supplied in large volumes to power WSNs made by companies such as GE and Emerson and also for train bearing monitoring systems made by Perpetuum.
Overhead powerline sensors can use magnetic induction to harvest energy directly from the conductor they are monitoring.
Blood sugar
Another way of energy harvesting is through the oxidation of blood sugars. These energy harvesters are called biobatteries. They could be used to power implanted electronic devices (e.g., pacemakers, implanted biosensors for diabetics, implanted active RFID devices, etc.). At present, the Minteer Group of Saint Louis University has created enzymes that could be used to generate power from blood sugars. However, the enzymes would still need to be replaced after a few years. In 2012, a pacemaker was powered by implantable biofuel cells at Clarkson University under the leadership of Dr. Evgeny Katz.
Tree-based
Tree metabolic energy harvesting is a type of bio-energy harvesting. Voltree has developed a method for harvesting energy from trees. These energy harvesters are being used to power remote sensors and mesh networks as the basis for a long term deployment system to monitor forest fires and weather in the forest. According to Voltree's website, the useful life of such a device should be limited only by the lifetime of the tree to which it is attached. A small test network was recently deployed in a US National Park forest.
Other sources of energy from trees include capturing the physical movement of the tree in a generator. Theoretical analysis of this source of energy shows some promise in powering small electronic devices. A practical device based on this theory has been built and successfully powered a sensor node for a year.
Metamaterial
A metamaterial-based device wirelessly converts a 900 MHz microwave signal to 7.3 volts of direct current (greater than that of a USB device). The device can be tuned to harvest other signals including Wi-Fi signals, satellite signals, or even sound signals. The experimental device used a series of five fiberglass and copper conductors. Conversion efficiency reached 37 percent. When traditional antennas are close to each other in space they interfere with each other. But since the available RF power falls off rapidly with distance, the amount of power is very small. While the claimed 7.3 volts sounds impressive, the measurement is for an open circuit. Since the power is so low, there can be almost no current when any load is attached.
Atmospheric pressure changes
The pressure of the atmosphere changes naturally over time from temperature changes and weather patterns. Devices with a sealed chamber can use these pressure differences to extract energy. This has been used to provide power for mechanical clocks such as the Atmos clock.
Ocean energy
A relatively new concept of generating energy is to generate energy from oceans. Large masses of water are present on the planet and carry with them great amounts of energy. Energy can be generated in this way from tidal streams, ocean waves, differences in salinity and differences in temperature. Efforts are underway to harvest energy this way; the United States Navy was recently able to generate electricity using temperature differences present in the ocean.
One method to use the temperature difference across different levels of the thermocline in the ocean is by using a thermal energy harvester that is equipped with a material that changes phase in different temperature regions. This is typically a polymer-based material that can handle reversible heat treatments. When the material changes phase, the energy differential is converted into mechanical energy. The materials used need to be able to change phase, from liquid to solid, depending on the position of the thermocline underwater. These phase change materials within thermal energy harvesting units would be an ideal way to recharge or power an unmanned underwater vehicle (UUV), since they rely on the warm and cold water already present in large bodies of water, minimizing the need for standard battery recharging. Capturing this energy would allow for longer-term missions, since the need to collect the vehicle or return it for charging can be eliminated. This is also a very environmentally friendly method of powering underwater vehicles. There are no emissions that come from utilizing a phase change fluid, and it will likely have a longer lifespan than that of a standard battery.
Future directions
Electroactive polymers (EAPs) have been proposed for harvesting energy. These polymers have a large strain, elastic energy density, and high energy conversion efficiency. The total weight of systems based on EAPs (electroactive polymers) is proposed to be significantly lower than those based on piezoelectric materials.
Nanogenerators, such as the one made by Georgia Tech, could provide a new way for powering devices without batteries. As of 2008, it only generates some dozen nanowatts, which is too low for any practical application.
Noise has been the subject of a proposal by NiPS Laboratory in Italy to harvest wide spectrum low scale vibrations via a nonlinear dynamical mechanism that can improve harvester efficiency up to a factor 4 compared to traditional linear harvesters.
Combinations of different types
Combining different types of energy harvesters can further reduce dependence on batteries, particularly in environments where the available ambient energy types change periodically. This type of complementary balanced energy harvesting has the potential to increase the reliability of wireless sensor systems for structural health monitoring.
See also
Airborne wind energy
Automotive thermoelectric generators
EnOcean
Future energy development
IEEE 802.15 Ultra Wideband (UWB)
List of energy resources
Outline of energy
Parasitic load
Real-time locating system (RTL)
Rechargeable battery
Rectenna
Solar charger
Thermoacoustic heat engine
Thermoelectric generator
Ubiquitous Sensor Network
Unmanned aerial vehicles can be powered by energy harvesting
Wireless power transfer
References
External links
Microtechnology
Energy harvesting research centers | Energy harvesting | [
"Materials_science",
"Engineering"
] | 6,576 | [
"Materials science",
"Microtechnology"
] |
28,923,195 | https://en.wikipedia.org/wiki/Baumslag%E2%80%93Gersten%20group | In the mathematical subject of geometric group theory, the Baumslag–Gersten group, also known as the Baumslag group, is a particular one-relator group exhibiting some remarkable properties regarding its finite quotient groups, its Dehn function and the complexity of its word problem.
The group is given by the presentation

G = ⟨ a, t | a^(a^t) = a^2 ⟩.

Here exponential notation for group elements denotes conjugation, that is, g^h = h^(-1)gh for group elements g and h, so the single defining relation says that the element a^t = t^(-1)at conjugates a to a^2.
History
The Baumslag–Gersten group G was originally introduced in a 1969 paper of Gilbert Baumslag, as an example of a non-residually finite one-relator group with an additional remarkable property that all finite quotient groups of this group are cyclic. Later, in 1992, Stephen Gersten showed that G, despite being a one-relator group given by a rather simple presentation, has the Dehn function growing very quickly, namely faster than any fixed iterate of the exponential function. This example remains the fastest known growth of the Dehn function among one-relator groups. In 2011 Alexei Myasnikov, Alexander Ushakov, and Dong Wook Won proved that G has the word problem solvable in polynomial time.
Baumslag-Gersten group as an HNN extension
The Baumslag–Gersten group G can also be realized as an HNN extension of the Baumslag–Solitar group BS(1,2) = ⟨ a, b | b^(-1)ab = a^2 ⟩ with stable letter t and two cyclic associated subgroups ⟨a⟩ and ⟨b⟩, where the stable letter conjugates a to b:

G = ⟨ a, b, t | b^(-1)ab = a^2, t^(-1)at = b ⟩.
Properties of the Baumslag–Gersten group G
Every finite quotient group of G is cyclic. In particular, the group G is not residually finite.
An endomorphism of G is either an automorphism or its image is a cyclic subgroup of G. In particular the group G is Hopfian and co-Hopfian.
The outer automorphism group Out(G) of G is isomorphic to the additive group of dyadic rationals and in particular is not finitely generated.
Gersten proved that the Dehn function f(n) of G grows faster than any fixed iterate of the exponential function. Subsequently A. N. Platonov proved that f(n) is equivalent to a tower of exponentials of height on the order of log2(n), that is, to the iterated exponential 2^(2^(...^2)) with roughly log2(n) levels. (A small numerical illustration of this kind of growth is sketched after this list.)
Myasnikov, Ushakov, and Won, using compression methods based on "power circuit" arithmetic, proved that the word problem in G is solvable in polynomial time. Thus the group G exhibits a large gap between the growth of its Dehn function and the complexity of its word problem.
The conjugacy problem in G is known to be decidable, but the only known worst-case upper bound estimate for the complexity of the conjugacy problem, due to Janis Beese, is elementary recursive. It is conjectured that this estimate is sharp, based on some reductions to power circuit division problems. There is a strongly generically polynomial time solution of the conjugacy problem for G.
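To get a feel for how quickly a tower of exponentials outruns any fixed iterate of the exponential function, the short sketch below simply evaluates such a tower for small heights; it is only a numerical illustration and plays no role in Gersten's or Platonov's proofs.

# Tower of 2s of height k, i.e. 2^(2^(...^2)) with k twos (illustrative only).
def tower(k: int) -> int:
    value = 1
    for _ in range(k):
        value = 2 ** value
    return value

for k in range(1, 5):
    print(k, tower(k))                 # 2, 4, 16, 65536
print(5, "about", len(str(tower(5))), "decimal digits")  # 2^65536, roughly 19700 digits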
Generalizations
Andrew Brunner considered one-relator groups of a related form and generalized many of Baumslag's original results in that context.
Mahan Mitra considered a word-hyperbolic analog G of the Baumslag–Gersten group, where Mitra's group possesses a rank three free subgroup that is highly distorted in G, namely where the subgroup distortion is higher than any fixed iterated power of the exponential.
See also
Subgroup distortion
References
External links
Distortion of finitely presented subgroups of non-positively curved groups, the blog of the Spring 2011 Berstein Seminar at Cornell, including van Kampen diagrams demonstrating subgroup distortion in the Baumslag–Gersten group and the discussion of Mitra-like examples
Geometric group theory
Algebraic structures | Baumslag–Gersten group | [
"Physics",
"Mathematics"
] | 743 | [
"Geometric group theory",
"Mathematical structures",
"Group actions",
"Mathematical objects",
"Algebraic structures",
"Symmetry"
] |
28,923,910 | https://en.wikipedia.org/wiki/Electronic%20engineering | Electronic engineering is a sub-discipline of electrical engineering that emerged in the early 20th century and is distinguished by the additional use of active components such as semiconductor devices to amplify and control electric current flow. Previously electrical engineering only used passive devices such as mechanical switches, resistors, inductors, and capacitors.
It covers fields such as analog electronics, digital electronics, consumer electronics, embedded systems and power electronics. It is also involved in many related fields, for example solid-state physics, radio engineering, telecommunications, control systems, signal processing, systems engineering, computer engineering, instrumentation engineering, electric power control, photonics and robotics.
The Institute of Electrical and Electronics Engineers (IEEE) is one of the most important professional bodies for electronics engineers in the US; the equivalent body in the UK is the Institution of Engineering and Technology (IET). The International Electrotechnical Commission (IEC) publishes electrical standards including those for electronics engineering.
History and development
Electronics engineering as a profession emerged following the identification of the electron in 1897 and the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, that inaugurated the field of electronics. Practical applications started with the invention of the diode by Ambrose Fleming and the triode by Lee De Forest in the early 1900s, which made the detection of small electrical voltages such as radio signals from a radio antenna possible with a non-mechanical device. The growth of electronics was rapid. By the early 1920s, commercial radio broadcasting and communications were becoming widespread and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry.
The discipline was further enhanced by the large amount of electronic systems development during World War II in areas such as radar and sonar, and by the subsequent peace-time consumer revolution following the invention of the transistor by William Shockley, John Bardeen and Walter Brattain.
Specialist areas
Electronics engineering has many subfields. This section describes some of the most popular.
Electronic signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information.
For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment and the modulation and demodulation of radio frequency signals for telecommunications. For digital signals, signal processing may involve compression, error checking and error detection, and correction.
Telecommunications engineering deals with the transmission of information across a medium such as a co-axial cable, an optical fiber, or free space. Transmissions across free space require information to be encoded in a carrier wave in order to be transmitted, this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation.
Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption as this is closely related to their signal strength. If the signal strength of a transmitter is insufficient the signal's information will be corrupted by noise.
Aviation-electronics engineering and aviation-telecommunications engineering are concerned with aerospace applications. Aviation-telecommunication engineers include specialists who work on airborne avionics in the aircraft or on ground equipment. Specialists in this field mainly need knowledge of computers, networking, IT, and sensors. Such courses are offered at institutions such as civil aviation technology colleges.
Control engineering has a wide range of electronic applications from the flight and propulsion systems of commercial airplanes to the cruise control present in many modern cars. It also plays an important role in industrial automation. Control engineers often use feedback when designing control systems.
Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. The design of such instrumentation requires a good understanding of electronics engineering and physics; for example, radar guns use the Doppler effect to measure the speed of oncoming vehicles. Similarly, thermocouples use the Peltier–Seebeck effect to measure the temperature difference between two points.
Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control engineering.
Computer engineering deals with the design of computers and computer systems. This may involve the design of new computer hardware, the design of PDAs or the use of computers to control an industrial plant. Development of embedded systems—systems made for specific tasks (e.g., mobile phones)—is also included in this field. This field includes the microcontroller and its applications.
Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering which falls under computer science, which is usually considered a separate discipline.
VLSI design engineering: VLSI stands for very-large-scale integration. It deals with fabrication of ICs and various electronic components. In designing an integrated circuit, electronics engineers first construct circuit schematics that specify the electrical components and describe the interconnections between them. When completed, VLSI engineers convert the schematics into actual layouts, which map the layers of various conductor and semiconductor materials needed to construct the circuit.
Education and training
Electronics is a subfield within the wider electrical engineering academic subject. Electronics engineers typically possess an academic degree with a major in electronics engineering. The length of study for such a degree is usually three or four years and the completed degree may be designated as a Bachelor of Engineering, Bachelor of Science, Bachelor of Applied Science, or Bachelor of Technology depending upon the university. Many UK universities also offer Master of Engineering (MEng) degrees at the graduate level.
Some electronics engineers also choose to pursue a postgraduate degree such as a Master of Science, Doctor of Philosophy in Engineering, or an Engineering Doctorate. The master's degree is being introduced in some European and American Universities as a first degree and the differentiation of an engineer with graduate and postgraduate studies is often difficult. In these cases, experience is taken into account. The master's degree may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy consists of a significant research component and is often viewed as the entry point to academia.
In most countries, a bachelor's degree in engineering represents the first step towards certification and the degree program itself is certified by a professional body. Certification allows engineers to legally sign off on plans for projects affecting public safety. After completing a certified degree program, the engineer must satisfy a range of requirements, including work experience requirements, before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States, Canada, and South Africa), Chartered Engineer or Incorporated Engineer (in the United Kingdom, Ireland, India, and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union).
A degree in electronics generally includes units covering physics, chemistry, mathematics, project management and specific topics in electrical engineering. Initially, such topics cover most, if not all, of the subfields of electronics engineering. Students then choose to specialize in one or more subfields towards the end of the degree.
Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today, most engineering work involves the use of computers and it is commonplace to use computer-aided design and simulation software programs when designing electronic systems. Although most electronic engineers will understand basic circuit theory, the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid-state physics might be relevant to an engineer working on VLSI but are largely irrelevant to engineers working with embedded systems.
Apart from electromagnetics and network theory, other items in the syllabus are particular to electronic engineering courses. Electrical engineering courses have other specialisms such as machines, power generation, and distribution. This list does not include the extensive engineering mathematics curriculum that is a prerequisite to a degree.
Supporting knowledge areas
The huge breadth of electronics engineering has led to the use of a large number of specialists supporting knowledge areas.
Elements of vector calculus: divergence and curl; Gauss' and Stokes' theorems, Maxwell's equations: differential and integral forms. Wave equation, Poynting vector. Plane waves: propagation through various media; reflection and refraction; phase and group velocity; skin depth. Transmission lines: characteristic impedance; impedance transformation; Smith chart; impedance matching; pulse excitation. Waveguides: modes in rectangular waveguides; boundary conditions; cut-off frequencies; dispersion relations. Antennas: Dipole antennas; antenna arrays; radiation pattern; reciprocity theorem, antenna gain.
Network graphs: matrices associated with graphs; incidence, fundamental cut set, and fundamental circuit matrices. Solution methods: nodal and mesh analysis. Network theorems: superposition, Thevenin and Norton's maximum power transfer, Wye-Delta transformation. Steady state sinusoidal analysis using phasors. Linear constant coefficient differential equations; time domain analysis of simple RLC circuits, Solution of network equations using Laplace transform: frequency domain analysis of RLC circuits. 2-port network parameters: driving point and transfer functions. State equations for networks.
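As a small, self-contained illustration of the network-analysis material listed above, the sketch below reduces a loaded resistive voltage divider to its Thevenin equivalent; the component values are arbitrary example numbers.

# Thevenin equivalent of a resistive divider driving a load (arbitrary example values).
v_source = 12.0   # source voltage, V
r1 = 4700.0       # upper divider resistor, ohm
r2 = 10000.0      # lower divider resistor, ohm
r_load = 2200.0   # load resistor, ohm

v_th = v_source * r2 / (r1 + r2)          # open-circuit (Thevenin) voltage
r_th = r1 * r2 / (r1 + r2)                # Thevenin resistance (r1 in parallel with r2)
v_load = v_th * r_load / (r_th + r_load)  # voltage actually seen by the load
print(f"V_th = {v_th:.2f} V, R_th = {r_th:.0f} ohm, V_load = {v_load:.2f} V")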
Electronic devices: Energy bands in silicon, intrinsic and extrinsic silicon. Carrier transport in silicon: diffusion current, drift current, mobility, resistivity. Generation and recombination of carriers. p-n junction diode, Zener diode, tunnel diode, BJT, JFET, MOS capacitor, MOSFET, LED, p-i-n and avalanche photo diode, LASERs. Device technology: integrated circuit fabrication process, oxidation, diffusion, ion implantation, photolithography, n-tub, p-tub and twin-tub CMOS process.
Analog circuits: Equivalent circuits (large and small-signal) of diodes, BJT, JFETs, and MOSFETs. Simple diode circuits, clipping, clamping, rectifier. Biasing and bias stability of transistor and FET amplifiers. Amplifiers: single-and multi-stage, differential, operational, feedback and power. Analysis of amplifiers; frequency response of amplifiers. Simple op-amp circuits. Filters. Sinusoidal oscillators; criterion for oscillation; single-transistor and op-amp configurations. Function generators and wave-shaping circuits, Power supplies.
Digital circuits: Boolean functions (NOT, AND, OR, XOR,...). Logic gates digital IC families (DTL, TTL, ECL, MOS, CMOS). Combinational circuits: arithmetic circuits, code converters, multiplexers, and decoders. Sequential circuits: latches and flip-flops, counters, and shift-registers. Sample and hold circuits, ADCs, DACs. Semiconductor memories. Microprocessor 8086: architecture, programming, memory, and I/O interfacing.
Signals and systems: Definitions and properties of Laplace transform, continuous-time and discrete-time Fourier series, continuous-time and discrete-time Fourier Transform, z-transform. Sampling theorems. Linear Time-Invariant (LTI) Systems: definitions and properties; causality, stability, impulse response, convolution, poles and zeros frequency response, group delay and phase delay. Signal transmission through LTI systems. Random signals and noise: probability, random variables, probability density function, autocorrelation, power spectral density, and function analogy between vectors & functions.
Electronic Control systems
Basic control system components; block diagrammatic description, reduction of block diagrams — Mason's rule. Open loop and closed loop (negative unity feedback) systems and stability analysis of these systems. Signal flow graphs and their use in determining transfer functions of systems; transient and steady-state analysis of LTI control systems and frequency response. Analysis of steady-state disturbance rejection and noise sensitivity.
Tools and techniques for LTI control system analysis and design: root loci, Routh–Hurwitz stability criterion, Bode and Nyquist plots. Control system compensators: elements of lead and lag compensation, elements of proportional–integral–derivative (PID) control. Discretization of continuous-time systems using zero-order hold and ADCs for digital controller implementation. Limitations of digital controllers: aliasing. State variable representation and solution of state equation of LTI control systems. Linearization of Nonlinear dynamical systems with state-space realizations in both frequency and time domains. Fundamental concepts of controllability and observability for MIMO LTI systems. State space realizations: observable and controllable canonical form. Ackermann's formula for state-feedback pole placement. Design of full order and reduced order estimators.
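To illustrate the digital-controller implementation step mentioned above, the following sketch runs a PID controller discretized with simple backward differences against a first-order plant; the gains, sample period and plant time constant are arbitrary assumptions chosen only so the loop runs, not a recommended tuning.

# Discrete PID (backward-difference form) regulating a first-order plant (illustrative values).
kp, ki, kd = 2.0, 1.0, 0.1   # assumed controller gains
dt = 0.01                    # sample period, s
setpoint = 1.0
tau = 0.2                    # assumed plant time constant, s

y = 0.0                      # plant output
integral = 0.0
prev_error = 0.0

for step in range(500):
    error = setpoint - y
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative   # control signal
    prev_error = error
    # first-order plant tau*dy/dt + y = u, integrated with forward Euler
    y += dt * (u - y) / tau

print(f"output after {500 * dt:.1f} s: {y:.3f}")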
Communications
Analog communication systems: amplitude and angle modulation and demodulation systems, spectral analysis of these operations, superheterodyne noise conditions.
Digital communication systems: pulse-code modulation (PCM), differential pulse-code modulation (DPCM), delta modulation (DM), digital modulation – amplitude, phase- and frequency-shift keying schemes (ASK, PSK, FSK), matched-filter receivers, bandwidth consideration and probability of error calculations for these schemes, GSM, TDMA.
Professional bodies
Professional bodies of note for electrical engineers include the USA's Institute of Electrical and Electronics Engineers (IEEE) and the UK's Institution of Engineering and Technology (IET). Members of the Institution of Engineering and Technology (MIET) are recognized professionally in Europe as electrical and computer engineers. The IEEE claims to produce 30 percent of the world's literature in electrical and electronics engineering, has over 430,000 members, and holds more than 450 IEEE-sponsored or cosponsored conferences worldwide each year. SMIEEE is a recognised professional designation in the United States.
Project engineering
For most engineers not involved at the cutting edge of system design and development, technical work accounts for only a fraction of the work they do. A lot of time is also spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules. Many senior engineers manage a team of technicians or other engineers and for this reason, project management skills are important. Most engineering projects involve some form of documentation and strong written communication skills are therefore very important.
The workplaces of electronics engineers are just as varied as the types of work they do. Electronics engineers may be found in the pristine laboratory environment of a fabrication plant, the offices of a consulting firm or in a research laboratory. During their working life, electronics engineers may find themselves supervising a wide range of individuals including scientists, electricians, programmers, and other engineers.
Obsolescence of technical skills is a serious concern for electronics engineers. Membership and participation in technical societies, regular reviews of periodicals in the field, and a habit of continued learning are therefore essential to maintaining proficiency, which is even more crucial in the field of consumer electronics products.
See also
Comparison of EDA software
Electrical engineering technology
Glossary of electrical and electronics engineering
Index of electrical engineering articles
Information engineering
List of electrical engineers
Timeline of electrical and electronics engineering
References
External links
Electrical engineering
Computer engineering
Engineering disciplines | Electronic engineering | [
"Technology",
"Engineering"
] | 3,179 | [
"Electrical engineering",
"Electronic engineering",
"Computer engineering",
"nan"
] |
28,925,553 | https://en.wikipedia.org/wiki/Dirac%20spectrum | In mathematics, a Dirac spectrum, named after Paul Dirac, is the spectrum of eigenvalues of a Dirac operator on a Riemannian manifold with a spin structure. The isospectral problem for the Dirac spectrum asks whether two Riemannian spin manifolds have identical spectra. The Dirac spectrum depends on the spin structure in the sense that there exists a Riemannian manifold with two different spin structures that have different Dirac spectra.
See also
Can you hear the shape of a drum?
Dirichlet eigenvalue
Spectral asymmetry
Angle-resolved photoemission spectroscopy
References
Spectral theory
Quantum mechanics | Dirac spectrum | [
"Physics"
] | 130 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
6,785,716 | https://en.wikipedia.org/wiki/Bearing%20capacity | In geotechnical engineering, bearing capacity is the capacity of soil to support the loads applied to the ground. The bearing capacity of soil is the maximum average contact pressure between the foundation and the soil which should not produce shear failure in the soil. Ultimate bearing capacity is the theoretical maximum pressure which can be supported without failure; allowable bearing capacity is the ultimate bearing capacity divided by a factor of safety. Sometimes, on soft soil sites, large settlements may occur under loaded foundations without actual shear failure occurring; in such cases, the allowable bearing capacity is based on the maximum allowable settlement. The allowable bearing pressure is the maximum pressure that can be applied to the soil without causing failure. The ultimate bearing capacity, on the other hand, is the maximum pressure that can be applied to the soil before it fails.
There are three modes of failure that limit bearing capacity: general shear failure, local shear failure, and punching shear failure.
It depends upon the shear strength of soil as well as shape, size, depth and type of foundation.
Introduction
A foundation is the part of a structure which transmits the weight of the structure to the ground. All structures constructed on land are supported on foundations. A foundation is a connecting link between the structure proper and the ground which supports it.
The bearing strength characteristics of foundation soil are a major design criterion for civil engineering structures. In geotechnical engineering, bearing capacity is the capacity of soil to support the loads applied to the ground. The bearing capacity of soil is the maximum average contact pressure between the foundation and the soil which should not produce shear failure in the soil. Ultimate bearing capacity is the theoretical maximum pressure which can be supported without failure; allowable bearing capacity is the ultimate bearing capacity divided by a factor of safety. Sometimes, on soft soil sites, large settlements may occur under loaded foundations without actual shear failure occurring; in such cases, the allowable bearing capacity is based on the maximum allowable settlement.
General bearing failure
A general bearing failure occurs when the load on the footing causes large movement of the soil on a shear failure surface which extends away from the footing and up to the soil surface. Calculation of the capacity of the footing in general bearing is based on the size of the footing and the soil properties. The basic method was developed by Terzaghi, with modifications and additional factors by Meyerhof and Vesić.
The general shear failure case is the one normally analyzed. Prevention against other failure modes is accounted for implicitly in settlement calculations. Stress distribution in elastic soils under foundations was found in a closed form by Ludwig Föppl (1941) and Gerhard Schubert (1942). There are many different methods for computing when this failure will occur.
Terzaghi's Bearing Capacity Theory
Karl von Terzaghi was the first to present a comprehensive theory for the evaluation of the ultimate bearing capacity of rough shallow foundations. This theory states that a foundation is shallow if its depth is less than or equal to its width. Later investigations, however, have suggested that foundations with a depth, measured from the ground surface, equal to 3 to 4 times their width may be defined as shallow foundations.
Terzaghi developed a method for determining bearing capacity for the general shear failure case in 1943. The equations, which take into account soil cohesion, soil friction, embedment, surcharge, and self-weight, are given below.
For square foundations: qult = 1.3 c′ Nc + σzD′ Nq + 0.4 γ′ B Nγ
For continuous foundations: qult = c′ Nc + σzD′ Nq + 0.5 γ′ B Nγ
For circular foundations: qult = 1.3 c′ Nc + σzD′ Nq + 0.3 γ′ B Nγ
where
Nc = 5.7 for φ' = 0 [Note: 5.14 is Meyerhof's value, see below; Terzaghi's value is 5.7.]
Nc = (Nq − 1) cot φ' for φ' > 0 [Note: as φ' goes to zero, Nc approaches 5.7.]
c′ is the effective cohesion.
σzD′ is the vertical effective stress at the depth the foundation is laid.
γ′ is the effective unit weight when saturated or the total unit weight when not fully saturated.
B is the width or the diameter of the foundation.
φ′ is the effective internal angle of friction.
Kpγ is obtained graphically. Simplifications have been made to eliminate the need for Kpγ. One such was done by Coduto, given below, and it is accurate to within 10%.
For foundations that exhibit the local shear failure mode in soils, Terzaghi suggested the following modifications to the previous equations. The equations are given below.
For square foundations: qult = 1.3 (2/3) c′ N′c + σzD′ N′q + 0.4 γ′ B N′γ
For continuous foundations: qult = (2/3) c′ N′c + σzD′ N′q + 0.5 γ′ B N′γ
For circular foundations: qult = 1.3 (2/3) c′ N′c + σzD′ N′q + 0.3 γ′ B N′γ
N′c, N′q, and N′γ, the modified bearing capacity factors, can be calculated by using the bearing capacity factor equations (for Nc, Nq, and Nγ, respectively) by replacing the effective internal angle of friction φ′ by a value equal to arctan((2/3) tan φ′).
Meyerhof's Bearing Capacity theory
In 1951, Meyerhof published a bearing capacity theory which could be applied to rough shallow and deep foundations. Meyerhof (1951, 1963) proposed a bearing-capacity equation similar to Terzaghi's but included a shape factor sq with the depth term Nq. He also included depth factors and inclination factors. [Note: Meyerhof re-evaluated Nq based on a different assumption from Terzaghi and found Nq = (1 + sin φ′) e^(π tan φ′) / (1 − sin φ′). Nc takes the same form as in Terzaghi's theory, Nc = (Nq − 1) / tan φ′; for φ′ = 0, Meyerhof's Nc converges to 2 + π = 5.14. Meyerhof also re-evaluated Nγ and obtained Nγ = (Nq − 1) tan(1.4 φ′).]
Factor of safety
Calculating the gross allowable-load bearing capacity of shallow foundations requires the application of a factor of safety (FS) to the gross ultimate bearing capacity, that is, qall = qult / FS.
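A minimal sketch of the calculation chain described above is given below, combining Meyerhof's bearing-capacity factors quoted in the notes earlier with Terzaghi's expression for a square footing and a factor of safety; the soil parameters, footing size and FS are assumed example values only, not design recommendations.

import math

# Ultimate and allowable bearing capacity of a square footing (illustrative values).
c_eff = 10e3        # effective cohesion c', Pa
phi_deg = 30.0      # effective friction angle phi', degrees
gamma_eff = 18e3    # effective unit weight, N/m^3
depth = 1.0         # foundation depth D, m
width = 2.0         # footing width B, m
fs = 3.0            # factor of safety

phi = math.radians(phi_deg)
# Meyerhof bearing-capacity factors (as quoted in the notes above)
n_q = (1 + math.sin(phi)) * math.exp(math.pi * math.tan(phi)) / (1 - math.sin(phi))
n_c = (n_q - 1) / math.tan(phi)
n_gamma = (n_q - 1) * math.tan(1.4 * phi)

sigma_zd = gamma_eff * depth      # vertical effective stress at foundation depth
# Terzaghi's expression for a square footing
q_ult = 1.3 * c_eff * n_c + sigma_zd * n_q + 0.4 * gamma_eff * width * n_gamma
q_allow = q_ult / fs

print(f"Nc = {n_c:.1f}, Nq = {n_q:.1f}, Ngamma = {n_gamma:.1f}")
print(f"q_ult = {q_ult / 1e3:.0f} kPa, q_allow = {q_allow / 1e3:.0f} kPa")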
See also
Soil mechanics
Bearing capacity of Soil
References
Soil mechanics
Foundations (buildings and structures) | Bearing capacity | [
"Physics",
"Engineering"
] | 1,200 | [
"Soil mechanics",
"Structural engineering",
"Applied and interdisciplinary physics",
"Foundations (buildings and structures)"
] |
6,786,507 | https://en.wikipedia.org/wiki/Ab%20initio%20multiple%20spawning | The ab initio multiple spawning, or AIMS, method is a time-dependent formulation of quantum chemistry.
In AIMS, nuclear dynamics and electronic structure problems are solved simultaneously. Quantum mechanical effects in the nuclear dynamics are included, especially the nonadiabatic effects which are crucial in modeling dynamics on multiple electronic states.
The AIMS method makes it possible to describe photochemistry from first principles molecular dynamics, with no empirical parameters. The method has been applied to two molecules of interest in organic photochemistry - ethylene and cyclobutene.
The photodynamics of ethylene involves both covalent and ionic electronic excited states, and the return to the ground state proceeds through a pyramidalized geometry. For the photoinduced ring opening of cyclobutene, it is shown that the disrotatory motion predicted by the Woodward–Hoffmann rules is established within the first 50 fs after optical excitation.
The method was developed by chemistry professor Todd Martinez.
References
Ab Initio Multiple Spawning: Photochemistry from First Principles Quantum Molecular Dynamics, M. Ben-Nun, Jason Quenneville, and Todd J. Martínez, J. Phys. Chem. A 104 (2000), #22, pp. 5161–5175. DOI 10.1021/jp994174i.
Nonadiabatic molecular dynamics: Validation of the multiple spawning method for a multidimensional problem, M. Ben-Nun and Todd J. Martínez, Journal of Chemical Physics 108, #17 (May 1, 1998), pp. 7244–7257. DOI 10.1063/1.476142.
Quantum chemistry
Theoretical chemistry | Ab initio multiple spawning | [
"Physics",
"Chemistry"
] | 349 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"nan",
"Atomic",
"Physical chemistry stubs",
" and optical physics"
] |
6,788,445 | https://en.wikipedia.org/wiki/Planetary%20Fourier%20Spectrometer | The Planetary Fourier Spectrometer (PFS) is an infrared spectrometer built by the Istituto Nazionale di Astrofisica (Italian National Institute for Astrophysics) along with the Istituto di Fisica dello spazio Interplanetario and the Consiglio Nazionale delle Ricerche (Italian National Research Council). The instrument is currently used by the European Space Agency on both the Mars Express Mission and the Venus Express Mission. It consists of four units which together weigh around 31.4 kg, including a pointing device, a power supply, a control unit, and an interferometer with electronics.
The main objective of the instrument is to provide temperature profiles of Mars's carbon dioxide atmosphere, and to study the composition of the planet's atmosphere through the infrared radiation that is reflected and emitted by the planet.
Methane in the Martian atmosphere
In March 2004, Professor Vittorio Formisano, the researcher in charge of the Mars Express Planetary Fourier Spectrometer, announced the discovery of methane in the Martian atmosphere. However, methane cannot persist in the Martian atmosphere for more than a few hundred years since it can be broken down by sunlight. Thus, this discovery suggests that the methane is being continually replenished by some unidentified volcanic or geologic process, or that some kind of extremophile life form similar to some existing on Earth is metabolising carbon dioxide and hydrogen and producing methane. In July 2004, rumours began to circulate that Formisano would announce the discovery of ammonia at an upcoming conference. It later came to light that none had been found; in fact some noted that the PFS was not precise enough to distinguish ammonia from carbon dioxide anyway.
See also
Atmosphere of Mars
ExoMars Trace Gas Orbiter
References
External links
ESA Venus Express PFS page
ESA Mars Express PFS page
Spectrometers
Spacecraft instruments
Mars Express | Planetary Fourier Spectrometer | [
"Physics",
"Chemistry"
] | 380 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
6,790,888 | https://en.wikipedia.org/wiki/Karplus%20equation | The Karplus equation, named after Martin Karplus, describes the correlation between 3J-coupling constants and dihedral torsion angles in nuclear magnetic resonance spectroscopy:
where J is the 3J coupling constant, is the dihedral angle, and A, B, and C are empirically derived parameters whose values depend on the atoms and substituents involved. The relationship may be expressed in a variety of equivalent ways e.g. involving cos 2φ rather than cos2 φ —these lead to different numerical values of A, B, and C but do not change the nature of the relationship.
The relationship is used for 3JH,H coupling constants. The superscript "3" indicates that a 1H atom is coupled to another 1H atom three bonds away, via H-C-C-H bonds. (Such H atoms bonded to neighbouring carbon atoms are termed vicinal.) The magnitude of these couplings is generally smallest when the torsion angle is close to 90° and largest at angles of 0° and 180°.
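The equation is straightforward to evaluate, as in the sketch below; the parameter set used is only an example of roughly the right magnitude for vicinal H–H couplings, since in practice A, B and C are re-fitted for each class of molecular fragment.

import math

# Karplus curve for vicinal 3J(H,H) couplings (illustrative parameter set).
A, B, C = 7.76, -1.10, 1.40   # example empirical parameters, Hz

def karplus(phi_deg: float) -> float:
    phi = math.radians(phi_deg)
    return A * math.cos(phi) ** 2 + B * math.cos(phi) + C

for angle in (0, 60, 90, 120, 180):
    print(f"phi = {angle:3d} deg  ->  3J = {karplus(angle):5.2f} Hz")
# Output shows the smallest coupling near 90 deg and the largest at 0 and 180 deg.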
This relationship between local geometry and coupling constant is of great value throughout nuclear magnetic resonance spectroscopy and is particularly valuable for determining backbone torsion angles in protein NMR studies.
References
External links
Generalized Karplus calculation of proton-proton coupling constants
Karplus equations app
Nuclear magnetic resonance | Karplus equation | [
"Physics",
"Chemistry"
] | 274 | [
"Nuclear magnetic resonance",
"Nuclear physics"
] |
6,791,136 | https://en.wikipedia.org/wiki/Aerodynamic%20levitation | Aerodynamic levitation is the use of gas pressure to levitate materials so that they are no longer in physical contact with any container. In scientific experiments this removes contamination and nucleation issues associated with physical contact with a container.
Overview
The term aerodynamic levitation could be applied to many objects that use gas pressure to counter the force of gravity, and allow stable levitation. Helicopters and air hockey pucks are two good examples of objects that are aerodynamically levitated. However, more recently this term has also been associated with a scientific technique which uses a cone-shaped nozzle allowing stable levitation of 1-3mm diameter spherical samples without the need for active control mechanisms.
Aerodynamic levitation as a scientific tool
These systems allow spherical samples to be levitated by passing gas up through a diverging conical nozzle. Combining this with >200W continuous CO2 laser heating allows sample temperatures in excess of 3000 degrees Celsius to be achieved.
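An order-of-magnitude feel for the gas pressure needed to support the sample can be had from a simple force balance, assuming the nozzle pressure difference acts over the sphere's cross-section; the sphere size and density below are assumed values broadly typical of levitated oxide samples.

import math

# Pressure difference needed to support a small levitated sphere (order-of-magnitude sketch).
radius = 1.5e-3        # sphere radius, m (2-3 mm diameter samples)
density = 3000.0       # assumed sample density, kg/m^3
g = 9.81               # gravitational acceleration, m/s^2

mass = density * (4.0 / 3.0) * math.pi * radius ** 3
weight = mass * g
cross_section = math.pi * radius ** 2
delta_p = weight / cross_section          # equals (4/3) * density * g * radius
print(f"mass = {mass * 1e3:.2f} g, required pressure difference ~ {delta_p:.0f} Pa")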
When heating materials to these extremely high temperatures levitation in general provides two key advantages over traditional furnaces. First, contamination that would otherwise occur from a solid container is eliminated. Second, the sample can be undercooled, i.e. cooled below its normal freezing temperature without actually freezing.
Undercooling of liquid samples
Undercooling, or supercooling, is the cooling of a liquid below its equilibrium freezing temperature while it remains a liquid. This can occur wherever crystal nucleation is suppressed. In levitated samples, heterogeneous nucleation is suppressed due to lack of contact with a solid surface. Levitation techniques typically allow samples to be cooled several hundred degrees Celsius below their equilibrium freezing temperatures.
Glass produced by aerodynamic levitation
Since crystal nucleation is suppressed by levitation, and since it is not limited by sample conductivity (unlike electromagnetic levitation), aerodynamic levitation can be used to make glassy materials, from high temperature melts that cannot be made by standard methods. Several silica-free, aluminium oxide based glasses have been made.
Physical property measurements
In the last few years a range of in situ measurement techniques have also been developed. The following measurements can be made with varying precision:
electrical conductivity,
viscosity,
density,
surface tension,
specific heat capacity,
In situ aerodynamic levitation has also been combined with:
X-ray synchrotron radiation,
neutron scattering,
NMR spectroscopy
See also
Magnetic levitation
Electrostatic levitation
Optical levitation
Acoustic levitation
Further reading
References
Levitation
Aerodynamics | Aerodynamic levitation | [
"Physics",
"Chemistry",
"Engineering"
] | 510 | [
"Physical phenomena",
"Aerodynamics",
"Levitation",
"Motion (physics)",
"Aerospace engineering",
"Fluid dynamics"
] |
30,422,155 | https://en.wikipedia.org/wiki/Interstellar%20ice | Interstellar ice consists of grains of volatiles in the ice phase that form in the interstellar medium. Ice and dust grains form the primary material out of which the Solar System was formed. Grains of ice are found in the dense regions of molecular clouds, where new stars are formed. Temperatures in these regions can be as low as , allowing molecules that collide with grains to form an icy mantle. Thereafter, atoms undergo thermal motion across the surface, eventually forming bonds with other atoms. This results in the formation of water and methanol. Indeed, the ices are dominated by water and methanol, as well as ammonia, carbon monoxide and carbon dioxide. Frozen formaldehyde and molecular hydrogen may also be present. Found in lower abundances are nitriles, ketones, esters and carbonyl sulfide. The mantles of interstellar ice grains are generally amorphous, becoming crystalline only in the presence of a star.
The composition of interstellar ice can be determined through its infrared spectrum. As starlight passes through a molecular cloud containing ice, molecules in the cloud absorb energy. This absorption occurs at the characteristic frequencies of vibration of the gas and dust. Ice features appear relatively prominently in these spectra, and the composition of the ice can be determined by comparison with samples of ice materials on Earth. In the sites directly observable from Earth, around 60–70% of the interstellar ice consists of water, which displays a strong absorption at 3.05 μm from stretching of the O–H bond.
In September 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics - "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". Further, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks."
Older than the Sun
Research published in the journal Science estimates that about 30–50% of the water in the Solar System, like the water on Earth, the discs around Saturn, and the meteorites of other planets, was present before the birth of the Sun.
Comet 67P/Churyumov–Gerasimenko
On 18 November 2014, the Philae lander revealed the presence of a large amount of water ice on the comet 67P/Churyumov–Gerasimenko, the report stating that "the strength of the ice found under a layer of dust on the first landing site is surprisingly high". The team responsible for the MUPUS (Multi-Purpose Sensors for Surface and Sub-Surface Science) instrument, which hammered a probe into the comet, estimated that the comet's surface is as hard as ice. "Although the power of the hammer was gradually increased, we were not able to go deep into the surface," explained Tilman Spohn from the DLR Institute for Planetary Research, who led the research team.
See also
Amorphous ice
Heavy water
References
Ice
Astrochemistry
Water ice | Interstellar ice | [
"Chemistry",
"Astronomy"
] | 678 | [
"Interstellar media",
"Outer space",
"Astrochemistry",
"nan",
"Astronomical sub-disciplines"
] |
30,423,565 | https://en.wikipedia.org/wiki/Nucleosome%20positioning%20region%20database | Nucleosome Positioning Region Database (NPRD) is a database of nucleosome formation sites (NFSs).
See also
References
External links
http://srs6.bionet.nsc.ru/srs6/.
Biological databases
Genetics databases | Nucleosome positioning region database | [
"Biology"
] | 59 | [
"Bioinformatics",
"Biological databases"
] |
30,426,861 | https://en.wikipedia.org/wiki/Weyburn-Midale%20Carbon%20Dioxide%20Project | The Weyburn-Midale Carbon Dioxide Project (or IEA GHG Weyburn-Midale Monitoring and Storage Project) was, as of 2008, the world's largest carbon capture and storage project. It has since been overtaken in terms of carbon capture capacity by projects such as the Shute Creek project and the Century Plant. It is located in Midale, Saskatchewan, Canada.
Introduction
The IEAGHG Weyburn-Midale CO2 Monitoring and Storage Project is an international collaborative scientific study to assess the technical feasibility of CO2 storage in geological formations with a focus on oil reservoirs, together with the development of world leading best practices for project implementation. The project itself began in 2000 and runs until the end of 2011 when a best practices manual for the transitioning of CO2-EOR operations into long-term storage operations will be released.
The research project accesses data from the actual CO2-enhanced oil recovery operations in the Weyburn oil field (formerly operated by Cenovus Energy of Calgary before its Saskatchewan operations were sold to Whitecap Resources in 2017), and after the year 2005 from the adjacent Midale field (operated by Apache Canada). These EOR operations are independent of the research program. Cenovus Energy's only contribution to the IEAGHG Weyburn-Midale CO2 Monitoring and Storage Project was to allow access to the fields for measurement, monitoring and verification of the CO2 for the global scientists and researchers involved in the project.
History – The Weyburn oilfield
The Weyburn and Midale oil fields were discovered in 1954 near Midale, Saskatchewan.
The Weyburn Oilfield covers an area of some and has a current oil production rate of ~3,067 m3/day. Original oil-in-place is estimated to be . The oil is produced from a total of 963 active wells made up of 534 vertical wells, 138 horizontal wells, and 171 injection systems. There are also 146 enclosed wells. Current production consists primarily of medium-gravity crude oil with a low gas-to-oil ratio.
The Midale oil field is about in size, and has of oil-in-place. It began injecting CO2 in 2005.
Various enhanced oil recovery techniques were used in the Weyburn field prior to the introduction of CO2, between the 1970s and 1990s. These include additional vertical drilling, the introduction of horizontal drilling, and the use of waterfloods to increase pressure in the reservoir. In October 2000, Cenovus (formerly Pan Canadian, Encana) began injecting significant amounts of carbon dioxide into the Weyburn field in order to boost oil production. Cenovus was the operator and held the largest share of the 37 current partners in the oilfield prior to the sale of local assets to Whitecap in 2017.
History – Injection of CO2
Initial CO2 injection rates in the Weyburn field amounted to ~5,000 tonnes/day or 95 million scf/day (2.7 million m3/d); this would otherwise have been vented to the atmosphere from the Dakota Gasification facility. At one point, CO2 injection by Cenovus at Weyburn was at ~6,500 tonnes per day. Apache Canada is injecting approximately 1,500 tonnes/day into the Midale field.
Overall, it is anticipated that some 40 Mt of carbon dioxide will be permanently sequestered over the lifespan of the project in the Weyburn and Midale fields. The gas is being supplied via a 320-kilometre-long pipeline (completed in 1999) from the lignite-fired Dakota Gasification Company synfuels plant site in Beulah, North Dakota. The company is a subsidiary of Basin Electric Power Co-operative. At the plant, CO2 is produced from a Rectisol unit in the gas cleanup train. The CO2 project adds about $30 million of gross revenue to the gasification plant's cash flow each year. Approximately 8000 tonnes/day of compressed CO2 (in liquid form) is provided to the Weyburn and Midale fields via the pipeline.
Over their producing lives, the Weyburn and Midale fields combined are expected to produce at least 220 million additional barrels of incremental oil, through miscible or near-miscible displacement with CO2, from fields that have already been producing since their discovery in 1954. This will extend the life of the Weyburn field by approximately 20–25 years. It is estimated that ultimate oil recovery will increase to 34% of the oil-in-place. It has been estimated that, on a full life-cycle basis, the oil produced at Weyburn by CO2 EOR will release only two-thirds as much CO2 to the atmosphere compared to oil produced using conventional technology.
This is the first instance of cross-border transfer of CO2 from the US to Canada and highlights the ability for international cooperation with GHG mitigation technologies. Whilst there are emissions trading projects being developed within countries such as Canada, the Weyburn project is essentially the first international project where physical quantities of CO2 are being sold commercially for enhanced oil recovery, with the added benefit of carbon sequestration.
History – Research project
The First Phase of the IEAGHG Weyburn CO2 Monitoring and Storage Project (the Midale oil field did not join the research project until the Final phase research) which began in 2000 and ended in 2004, verified the ability of an oil reservoir to securely store CO2 for significant lengths of time. This was done through a comprehensive analysis of the various process factors as well as monitoring/modeling methods designed to measure, monitor and track the CO2. Research was conducted into geological characterization of both the geosphere (the geological layers deeper than near surface) and biosphere (basically from the depths of groundwater up). As well, prediction, monitoring and verification techniques were used to examine the movements of the CO2. Finally, both the economic and geologic limits of the CO2 storage capacity were predicted, and a long-term risk assessment developed for storage of CO2 permanently in the formation.
A critical part of the First Phase was the accumulation of baseline surveys for both CO2 soil content, and water wells in the area. These baselines were identified in 2001 and have helped to confirm through comparison with more recent readings that CO2 is not leaking from the reservoir into the biosphere in the study area.
First phase findings
Based on preliminary results, the natural geological setting of the oil field was deemed to be highly suitable for long-term CO2 geological storage.
The results form the most complete, comprehensive, peer-reviewed data set in the world for CO2 geological storage. However, additional research was deemed to be needed to further develop and refine CO2 monitoring and verification technologies. With this in mind, a second and final phase of research was developed and began in the year 2005, and will be completed in 2011.
The PTRC and IEA GHG issued a full report on the first phase, and it is available from the PTRC's website.
The Final Phase project (2005–2011)
The Final Phase of the IEAGHG Weyburn-Midale CO2 Monitoring and Storage Project is utilizing scientific experts from most of the world's leading carbon capture and storage research organizations and universities to further develop and build upon the most scrutinized CO2 geological storage data set in the world.
The project's major technical research "themes" can be broadly broken out into four areas:
Technical Components:
Site Characterization: The research will develop geocellular framework models that incorporate geotechnical and simulation work that will help with proper risk management of the site.
Wellbore Integrity: increase the knowledge, and assess the risk of leakage from enclosed wells caused by materials and cement degradation. This issue is viewed as critical for resolving questions around long-term storage.
Monitoring and Verification: Field test and assess a range of geochemical and geophysical techniques for monitoring the injected CO2.
Performance Assessment: Perform simulations for containment and performance assessments; engage public stakeholders and experts in the risk assessment process.
Ultimately, the goal of the final phase of the project is to produce a best practices manual that can be used by other jurisdictions and organizations to help transition CO2-EOR operations into long-term storage projects. The research of the project's final phase should be complete in 2011, with the Best Practices Manual issued before the end of that year.
Claims of leaking
A report of leaks above the project was released in January 2011 by an advocacy group on behalf of owners of land above the project. They reported ponds fizzing with bubbles, dead animals found near those ponds, sounds of explosions which they attributed to gas blowing out holes in the walls of a quarry. The report said that carbon dioxide levels in the soil averaged about 23,000 parts per million, several times higher than is normal for the area. "The ... source of the high concentrations of CO2 in the soils of the Kerr property is clearly the anthropogenic CO2 injected into the Weyburn reservoir... The survey also demonstrates that the overlying thick cap rock of anhydrite over the Weyburn reservoir is not an impermeable barrier to the upward movement of light hydrocarbons and CO2 as is generally thought." said the report.
The PTRC posted an extensive rebuttal of the Petro-Find report, stating that the isotopic signatures of the CO2, claimed by Mr. Lafleur to be indicative of the manmade CO2 being injected into the reservoir, were in fact, according to studies of the CO2 conducted by the British Geological Survey and two other European Union geological groups prior to its being injected at Weyburn, occurring naturally in several locations near the Kerr farm. Subsequent soil surveys after injection in 2002 to 2005 found CO2 levels dropped in these same regions. In addition, prior to injection occurring into the oil field, these samplings were found to be as high as 125,000 parts per million and averaging 25,000 ppm across the region, even more than the average and largest readings from the Kerrs' property that were being claimed as unusually high. The report also questions, based on seismic imaging conducted over ten years, that any active faults exist or that the caprock is compromised to allow pathways for the CO2 to reach the surface. The PTRC acknowledged that they do not monitor the entire site for leaks, rather primarily above the part of the Weyburn field where CO2 is injected and key locations outside it, but the organization did monitor the Kerrs' well between 2002 and 2006, finding no appreciable difference in water quality. They have also acknowledged that PTRC is a research organisation rather than a regulator, and manages the IEA GHG Weyburn-Midale Monitoring and Storage Project on behalf of the International Energy Agency's Greenhouse Gas R&D Programme, which includes some 30 international research groups.
References
External links
Weyburn-MIdale, The IEAGHG Weyburn-Midale CO2 Monitoring and Storage Project , PTRC, Petroleum Technology Research Center
Annual reports, PTRC, Petroleum Technology Research Center
Weyburn-Midale Fact Sheet: Carbon Dioxide Capture and Storage Project, MIT
Environmental engineering
Climate change in Canada
Buildings and structures in Saskatchewan
2000 establishments in Saskatchewan | Weyburn-Midale Carbon Dioxide Project | [
"Chemistry",
"Engineering"
] | 2,316 | [
"Chemical engineering",
"Civil engineering",
"Environmental engineering"
] |
30,438,370 | https://en.wikipedia.org/wiki/Screw%20pump | A screw pump is a positive-displacement pump that uses one or several screws to move fluids or solids along the screw(s) axis.
History
The screw pump is the oldest positive displacement pump. The first records of a water screw, or screw pump, date back to Ancient Egypt before the 3rd century BC. The Egyptian screw, used to lift water from the Nile, was composed of tubes wound round a cylinder; as the entire unit rotates, water is lifted within the spiral tube to the higher elevation. A later screw pump design from Egypt had a spiral groove cut on the outside of a solid wooden cylinder and then the cylinder was covered by boards or sheets of metal closely covering the surfaces between the grooves.
A cuneiform inscription of Assyrian king Sennacherib (704–681 BC) has been interpreted by Stephanie Dalley to describe casting water screws in bronze some 350 years earlier. This is consistent with classical author Strabo, who describes the Hanging Gardens as watered by screws.
The screw pump was later introduced from Egypt to Greece. It was described by Archimedes, on the occasion of his visit to Egypt, circa 234 BC. This suggests that the apparatus was unknown to the Greeks before Hellenistic times.
Design
Three principal forms exist; In its simplest form (the Archimedes' screw pump or 'water screw'), a single screw rotates in a cylindrical cavity, thereby gravitationally trapping some material on top of a section of the screw as if it was a scoop, and progressively moving the material along the screw's axle until it is discharged at the top. This ancient construction is still used in many low-tech applications, such as irrigation systems and in agricultural machinery for transporting grain and other solids. The second form works differently; it squeezes a trapped pocket of material against another screw. This form is what is typically referred to in modern times with the term 'screw pump'. The third form (the progressive cavity pump or eccentric screw pump) squeezes a trapped pocket of material against the cavity walls by spinning the screw eccentrically.
Like all positive-displacement pumps, all various kinds of screw pumps function by trapping a volume of material somehow, and then moving it. There are numerous ways to shape the screw or the cavity to accomplish this function, and the number of screws working together can be many. The term 'screw pump' refers generically to all of these types. However, this generalization can be a pitfall as it fails to recognize that the different ‘screw' configurations have different advantages and design considerations for each, which lead to the various kinds being suitable for very different use cases, material types, flow rates, and pressures.
One of the most common configurations of a screw pump is the three-spindle screw pump. Three screws press against each other to form pockets of the pumped liquid in the grooves of the screws. As the screws rotate in opposite directions, the pumped liquid moves along the screws' spindles. There is nothing magical about two, three or any number of screws; pockets are formed regardless. Three rather than two spindles are used because this allows the central screw to experience symmetrical pressure loading from all sides. This ensures that the central screw is not pushed sideways, will not be bent, and thus eliminates the need for radial bearings on the main axle to absorb radial forces. The two side screws can then be made as internally-hidden free-floating rollers, lubricated by the pumped liquid itself, thus eliminating the need for bearings on those axles. This is commonly desired because seals and bearings on machines are common sources of failure.
Three-spindle screw pumps are most often used for transport of viscous fluids with lubricating properties. They are suited for a variety of applications such as fuel-injection, oil burners, boosting, hydraulics, fuel, lubrication, circulating, feed, and to pump high-pressure viscous fluids in offshore and marine installations.
Compared to various other pumps, screw pumps have several advantages. The pumped fluid is moving axially without turbulence which eliminates foaming that would otherwise occur in viscous fluids. They are also able to pump fluids of higher viscosity without losing flow rate. Also, changes in the pressure difference have little impact on screw pumps compared to various other pumps. There is also very little back-drive on the power axle, and the output of the flow is typically very even and doesn't pulsate much.
See also
Rotary-screw compressor, a gas compressor similar to a screw pump.
References
Pumps
Screws
Egyptian inventions | Screw pump | [
"Physics",
"Chemistry"
] | 938 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
30,439,786 | https://en.wikipedia.org/wiki/Proximity%20ligation%20assay | Proximity ligation assay (in situ PLA) is a technology that extends the capabilities of traditional immunoassays to include direct detection of proteins, protein interactions, extracellular vesicles and post translational modifications with high specificity and sensitivity. Protein targets can be readily detected and localized with single molecule resolution and objectively quantified in unmodified cells and tissues. Utilizing only a few cells, sub-cellular events, even transient or weak interactions, are revealed in situ and sub-populations of cells can be differentiated. Within hours, results from conventional co-immunoprecipitation and co-localization techniques can be confirmed.
The PLA principle
Two primary antibodies raised in different species recognize the target antigen on the proteins of interest (Figure 1). Secondary antibodies (2o Ab) directed against the constant regions of the different primary antibodies, called PLA probes, bind to the primary antibodies (Figure 2). Each of the PLA probes has a short sequence specific DNA strand attached to it. If the PLA probes are in proximity (that is, if the two original proteins of interest are in proximity, or part of a protein complex, as shown in the figures), the DNA strands can participate in rolling circle DNA synthesis upon addition of two other sequence-specific DNA oligonucleotides together with appropriate substrates and enzymes (Figure 3).
The DNA synthesis reaction results in several-hundredfold amplification of the DNA circle. Next, fluorescent-labeled complementary oligonucleotide probes are added, and they bind to the amplified DNA (Figure 4). The resulting high concentration of fluorescence is easily visible as a distinct bright spot when viewed with a fluorescence microscope. In the specific case shown (Figure 5), the nucleus is enlarged because this is a B-cell lymphoma cell. The two proteins of interest are a B cell receptor and MYD88. The finding of interaction in the cytoplasm was interesting because B cell receptors are thought of as being located in the cell membrane.
Applications
PLA as described above has been used to study aspects of animal development and breast cancer among many other topics. In situ proximity ligation assays (isPLA) have been applied to antibody validation in human tissues with various advantages over IHC, including increased detection specificity, decreased nonspecific staining, and better localization. A variation of the technique (rISH-PLA) has been used to study the association of protein and RNA. Another variation of in situ PLA includes a multiplex PLA assay that makes it possible to visualize multiple protein complexes in parallel. PLA can also be combined with other readout formats such as ELISA, flow cytometry, and Western blotting.
References
Biochemistry detection reactions
Immunologic tests | Proximity ligation assay | [
"Chemistry",
"Biology"
] | 584 | [
"Biochemistry detection reactions",
"Microbiology techniques",
"Immunologic tests",
"Biochemical reactions"
] |
37,025,096 | https://en.wikipedia.org/wiki/Jamilur%20Reza%20Choudhury | Dr. Jamilur Reza Choudhury (15 November 1942 – 28 April 2020) was a Bangladeshi civil engineer, professor, researcher, and education advocate. He was an Adviser (Minister) to Caretaker Government of Bangladesh (April–June 1996). He was the first vice chancellor of BRAC University and former vice chancellor of University of Asia Pacific. He was also the president of Bangladesh Mathematical Olympiad Committee from 2003. He was awarded Ekushey Padak by the Government of Bangladesh in the category of science and technology in 2017. He was inducted as a National Professor by the Government of Bangladesh in 2018.
Early life and background
Choudhury was born in Sylhet (during British colonial rule) on 15 November 1942. His father, Abid Reza Choudhury (1905–1991), was a civil engineer who had migrated to Dhaka in 1952 (after the Partition of India) from Rangauty, a rural area of Hailakandi (Assam, India). Abid was the first Muslim to graduate from Bengal Engineering and Science University, in 1929. His mother, Hayatun Nessa Choudhury (née Laskor) (1922–2010) was a homemaker, from the Nitainagar area of Hailakandi. Jamilur is the middle child of five siblings.
Education
Choudhury began school in 1950 (age 6) at the Mymensingh Zilla School. After his family moved to Dhaka from Mymensingh in 1952, he transferred to Nawabpur Government High School, and transferred once more to a private Catholic school, St Gregory's High School, in 1953. He passed entrance examination from Sylhet Government Pilot High School. He attended Dhaka College from 1957 to 1959 and earned his Bachelor of Science Degree (Civil Engineering) from Bangladesh University of Engineering and Technology (BUET) in 1963. Upon graduation from BUET (First Class First with Honours), he became a lecturer in the Civil Engineering Department that same year. He earned a Masters of Science in Engineering Degree (advanced structural engineering) in 1965 and a Ph.D. in structural engineering in 1968, both at University of Southampton.
Choudhury was awarded the honorary degree of Doctor of Engineering (Honoris Causa) by Manchester University on 20 October 2010 – the first person of Bangladeshi origin to receive this honor from a British University.
Work
Immediately after the publication of his B.Sc. Engineering results, he joined the former East Pakistan University of Engineering and Technology (EPUET) as a faculty member, pending formal appointment. In November 1963, he formally joined the Department of Civil Engineering as a lecturer and began his long teaching career. In September 1964 he was awarded a scholarship by Burmashell to pursue an MS in structural engineering. His thesis was on "Cracks in Concrete Beam using Computer-Aided Design". In 1968, he was conferred a Ph.D. degree on the topic of "Shear Wall & Structural Analysis of High rise Building". After completing his Ph.D. he returned to former East Pakistan in 1968 and joined the former East Pakistan University of Engineering & Technology as an assistant professor, remaining there through the demise of East Pakistan and the birth of a new nation, Bangladesh. In 1973 he was promoted to associate professor at Bangladesh University of Engineering & Technology (BUET) and in 1976 to full professor. In 1975, he was offered a Nuffield scholarship to pursue a post-doctoral fellowship at Surrey University in the UK. He worked as a professor at BUET until 2001. He was also entrusted with developing a "Computer Center" at BUET and served as its director for about 10 years. Choudhury was appointed chairman of the task force for developing Software Export & IT Infrastructure in Bangladesh from 1997 to 2000 under the Ministry of Commerce. He was a ranking member of the Prime Minister's Task Force on developing Digital Bangladesh. Besides this, he was involved with several local and international organizations.
At the time of his death, he was serving as the vice-chancellor of the University of Asia Pacific (UAP). He was also the Technical Adviser of the Padma Bridge project.
Awards
Ekushey Padak (2017)
Sheltech Award (2010)
Bangladesh Engineering Institution Gold medal (1998)
Dr. Rashid Gold medal, (1997)
Rotary Seed Award, (2000)
Lions International (District-315) Gold Medal
Honorary doctorate from the University of Manchester; he was the first Bangladeshi to receive an honorary doctorate in engineering from a British university.
Jica Recognition Award
Star Lifetime Award (2016)
Order of the Rising Sun, 3rd Class, Gold Rays with Neck Ribbon (2018)
Death
Choudhury died on 28 April 2020 following a heart attack in his home in Dhanmondi, Dhaka, Bangladesh.
References
2020 deaths
1942 births
People from Hailakandi district
Dhaka College alumni
Bangladesh University of Engineering and Technology alumni
Alumni of the University of Southampton
Bangladeshi civil engineers
Structural engineers
Academic staff of Bangladesh University of Engineering and Technology
Advisers of caretaker governments of Bangladesh
Fellows of Bangladesh Academy of Sciences
Vice-chancellors of BRAC University
Recipients of the Ekushey Padak
Honorary Fellows of Bangla Academy
National Professors of Bangladesh
Recipients of the Order of the Rising Sun, 3rd class
Bangladeshi people of Indian descent
St. Gregory's High School and College alumni
Mymensingh Zilla School alumni | Jamilur Reza Choudhury | [
"Engineering"
] | 1,083 | [
"Structural engineering",
"Structural engineers"
] |
37,032,125 | https://en.wikipedia.org/wiki/Candicine | Candicine is a naturally occurring organic compound that is a quaternary ammonium salt with a phenethylamine skeleton. It is the N,N,N-trimethyl derivative of the well-known biogenic amine tyramine, and, being a natural product with a positively charged nitrogen atom in its molecular structure, it is classed as an alkaloid. Although it is found in a variety of plants, including barley, its properties have not been extensively studied with modern techniques. Candicine is toxic after parenteral administration, producing symptoms of neuromuscular blockade; further details are given in the "Pharmacology" section below.
Occurrence
Candicine occurs in a variety of plants, notably the cacti. This alkaloid was first isolated from the Argentinian cactus Trichocereus candicans (now reclassified as Echinopsis candicans), from which it derives its name, and from other Trichocereus species. T. candicans may contain up to 5% candicine, and is also a rich source of the closely related alkaloid hordenine.
Candicine also occurs in several plants of genus Citrus.
In the late 1950s, Japanese researchers isolated a toxic compound which they named "maltoxin" from malted barley. After the publication of some papers on its pharmacology (see "Pharmacology" section), under this name, it was determined that maltoxin was identical to candicine, and the older name has been retained in subsequent articles.
Candicine has also been found in the skin of the frog, Leptodactylus pentadactylus pentadactylus, at a concentration of 45 μg/g skin, but it is of much more limited occurrence amongst amphibians than its positional isomer, leptodactyline.
Chemistry
The dominant chemical characteristics of candicine are that it is a quaternary ammonium salt and a phenol. The quaternary ammonium cation is found in association with different anions, forming the corresponding salts, the commonest of which are the iodide and chloride, trivially named "candicine iodide" (or "hordenine methiodide") and "candicine chloride". Since it is impractical to isolate candicine from a natural source along with its original counterion(s), isolation procedures are designed so as to obtain it in association with a particular anion chosen by the investigator. The name "candicine" when used alone is thus not unequivocally chemically defined.
The presence of the phenolic group would make aqueous solutions of candicine salts weakly acidic, but no pKa seems to have been recorded.
This phenolic group has been converted to the methyl ether by treatment of candicine with methyl iodide, to make O-methyl candicine iodide.
Synthesis
One of the earliest syntheses of candicine is that of Barger, who made candicine iodide by the N-methylation of hordenine, using methyl iodide. This method has become a standard one for the conversion of tertiary amines to quaternary salts. It was used again by Buck and co-workers, who also reported the conversion of candicine iodide to candicine chloride by treatment with AgCl.
Pharmacology
The earliest pharmacological studies on candicine (under the name of hordenine methiodide) appear to be those of Barger and Dale, who studied its effects primarily in cats and isolated animal organ preparations. These researchers found candicine to closely resemble nicotine in its effects. For example, contractions of isolated sections of rabbit jejunum were produced by ~ 2 × 10−5M concentrations of the drug; 1 mg of candicine iodide given i.v. to cats produced the same rise in blood pressure as 0.5 mg nicotine; toxic doses produced respiratory paralysis. It was observed that in the same blood pressure assay, candicine iodide was about twice as potent as its structural analog tyramine, and much more potent than its even-closer analog, hordenine.
After Reti's discovery (and naming) of candicine as a natural product, a series of pharmacological investigations was carried out on this alkaloid by Luduena. These are summarized in Reti's review: as before, the similarity of effects between candicine and nicotine was noted. In Luduena's experiments, candicine first stimulated, then blocked ganglionic transmission; its effects were not altered by yohimbine, cocaine, or atropine, but completely counteracted by sparteine or tetrapropylammonium iodide. No muscarinic action was seen. Doses of 6 mg/kg were curare-like in the dog; similar effects were also observed in the toad, Bufo arenarum.
Candicine (as either the iodide or chloride) was re-investigated by Japanese pharmacologists in the early 1960s. Initial experiments on frogs, using rectus muscle and nerve-sartorius preparations from Rana nigromaculata nigromaculata, showed that the alkaloid caused contractions in the rectus at concentrations of 0.01–0.2 mg/mL, and blocked the response of the nerve-sartorius to direct or indirect electrical stimulation at similar concentrations. The contraction of the rectus was inhibited by pre-treatment with tubocurarine, as was the response of the nerve-sartorius (i.e., the normal muscle twitch was not reduced by the application of candicine subsequent to tubocurarine). The action of candicine in these assays was not affected by eserine. Taking additional observations into account, these researchers concluded that the effects on frog tissue of candicine most closely resembled those of the well-known depolarizing neuromuscular-blocking drug decamethonium. An earlier comparison of 0.2 mg of candicine chloride with 2 mg of hordenine sulfate on the rectus muscle preparation showed that hordenine was much less potent at eliciting a contraction, even at 10× the concentration of candicine.
Following their experiments on frogs, the Japanese group carried out a series of classical pharmacological investigations of candicine on cats and rabbits, and on various isolated animal organs/tissues. In rabbits, doses of 0.6 mg/kg, i.v., of candicine produced respiratory and cardiovascular disturbances lasting about 15 minutes. Body temperature was not affected; there was also mydriasis followed by miosis, and hypersalivation. In rabbits, i.v. doses of 2.1 mg/kg produced apnea, followed by death. In anesthetized cats, doses of 0.06–0.12 mg/kg, iv., also caused respiratory and cardiovascular disturbances: although the details were concentration-and time-dependent, the ultimate effects were ones of sustained respiratory stimulation and elevated blood pressure; the hypertension was not inhibited by atropine, but was antagonized by hexamethonium. Candicine caused contraction of the cat nictitating membrane. A concentration of 0.012 mg/mL applied to the isolated guinea pig atrium caused a decrease in the amplitude and rate of contractions, these effects being enhanced by eserine, but inhibited by atropine pre-treatment. Concentrations of 3-6 μg/mL produced contractions of the isolated guinea pig ileum which were inhibited by pre-treatment with atropine, hexamethonium, tubocurarine or cocaine, but were not affected by the presence of pyribenzamine or chlorpheniramine. Summarizing the results of these and other observations, the authors concluded that: candicine was primarily a stimulant of autonomic ganglia; it liberated catecholamines from the adrenal medulla; it showed muscarine-like and sympathomimetic effects in some assays, and it was a neuromuscular blocker of the depolarizing type. In many of these respects, candicine resembled nicotine and dimethylphenylpiperazinium (DMPP).
Toxicology
LD50 = 10 mg/kg (mouse; s.c.);
LD50 = 36 mg/kg (mouse; i.p.);
LD50 = 50 mg/kg (rat).
Effects on Plants
Candicine iodide has some plant growth-inhibiting properties: 50 μg/plant of the salt produced 76-100% inhibition of elongation of the second internode in beans, with indications of necrosis; ~ 100 μg of candicine iodide applied to the roots of sorghum seedlings caused a 50% inhibition in overall plant length.
Effects on Brine Shrimp
The LC50 for candicine chloride in the brine shrimp bioassay is 923 μg/mL.
See also
Tyramine
N-Methyltyramine
Hordenine
References
Quaternary ammonium compounds
Alkaloids
4-Hydroxyphenyl compounds
Plant toxins | Candicine | [
"Chemistry"
] | 1,908 | [
"Biomolecules by chemical classification",
"Natural products",
"Chemical ecology",
"Plant toxins",
"Organic compounds",
"Alkaloids"
] |
37,033,212 | https://en.wikipedia.org/wiki/Half%20sandwich%20compound | Half sandwich compounds, also known as piano stool complexes, are organometallic complexes that feature a cyclic polyhapto ligand bound to an MLn center, where L is a unidentate ligand. Thousands of such complexes are known. Well-known examples include cyclobutadieneiron tricarbonyl and (C5H5)TiCl3. Commercially useful examples include (C5H5)Co(CO)2, which is used in the synthesis of substituted pyridines, and methylcyclopentadienyl manganese tricarbonyl, an antiknock agent in petrol.
(η5-C5H5) piano stool compounds
Half sandwich complexes containing cyclopentadienyl ligands are common. Well-studied examples include (η5-C5H5)V(CO)4, (η5-C5H5)Cr(CO)3H, (η5-CH3C5H4)Mn(CO)3, [(η5-C5H5)Fe(CO)3]+, (η5-C5H5)V(CO)4I, and [(η5-C5H5)Ru(NCMe)3]+. (η5-C5H5)Co(CO)2 is a two-legged piano stool complex. Bulky cyclopentadienyl ligands such as 1,2,4-C5H2(tert-Bu)3− form unusual half-sandwich complexes.
(η6-C6H6) piano stool compounds
In organometallic chemistry, (η6-C6H6) piano stool compounds are half-sandwich compounds with (η6-C6H6)ML3 structure (M = Cr, Mo, W, Mn(I), Re(I) and L = typically CO). (η6-C6H6) piano stool complexes are stable 18-electron coordination compounds with a variety of chemical and material applications. Early studies on (η6-C6H6)Cr(CO)3 were carried out by Natta, Ercoli and Calderazzo, and Fischer and Ofele, and the crystal structure was determined by Corradini and Allegra in 1959. The X-ray data indicate that the plane of the benzene ring is nearly parallel to the plane defined by the oxygen atoms of the carbonyl ligands, and so the structure resembles a benzene seat mounted on three carbonyl legs tethered by the metal atom.
Cr and Mn(I) (η6-C6H6) piano stool complexes
Piano stool complexes of the type (η6-C6H6)M(CO)3 are typically synthesized by heating the appropriate metal carbonyl compound with benzene. Alternatively, the same compounds can be obtained by treating a bis(arene) sandwich compound, such as (η6-C6H6)2M, with the metal carbonyl compound. This second approach may be more appropriate for arene ligands containing thermally fragile substituents.
Reactivity of (η6-C6H6)Cr(CO)3
The benzene ligand in (η6-C6H6)Cr(CO)3 is activated toward attack by nucleophiles as well as deprotonation. For example, organolithium compounds form adducts featuring cyclohexadienyl ligands. Subsequent oxidation of the complex results in the release of a substituted benzene. Oxidation of the chromium atom by I2 and other iodine reagents has been shown to promote exchange of arene ligands, but the intermediate chromium iodide species has not been characterized.
(η6-C6H6)Cr(CO)3 complexes exhibit "cine" and "tele" nucleophilic aromatic addition. Processes of this type involve reaction of (η6-C6H6)Cr(CO)3 with an alkyl lithium reagent. Subsequent treatment with an acid results in the addition of a nucleophile to the benzene ring at a site ortho ("cine"), meta or para ("tele") to the ipso carbon (see Arene substitution patterns).
Reflecting its increased acidity, the benzene ligand can be lithiated with n-butyllithium. The resulting organolithium compound serves as a nucleophile in various reactions, for example, with trimethylsilyl chloride:
(η6-C6H6)Cr(CO)3 is a useful catalyst for the hydrogenation of 1,3-dienes. The product alkene results from 1,4-addition of hydrogen. The complex does not hydrogenate isolated double bonds.
A variety of arene ligands other than benzene have been installed. Weakly coordinating ligands may be employed to improve ligand exchange and thus the turnover rates for (η6-C6H6)M(CO)3 complexes. (η6-C6H6)M(CO)3 complexes have been incorporated into high surface area porous materials.
(η6-C6H6)M(CO)3 complexes serve as models for the interaction of metal carbonyls with graphene and carbon nanotubes. The presence of M(CO)3 on extended π-network materials has been shown to improve electrical conductivity across the material.
Reactivity of [(η6-C6H6)Mn(CO)3]+
Typical arene tricarbonyl piano stool complexes of Mn(I) and Re(I) are cationic and thus exhibit enhanced reactivity toward nucleophiles. Subsequent to nucleophilic addition, the modified arene can be recovered from the metal.
(η6-C6H6)Ru complexes
Half-sandwich compounds employing Ru(II), such as (cymene)ruthenium dichloride dimer, have been mainly investigated as catalysts for transfer hydrogenation. These complexes feature three coordination sites that are susceptible to substitution, while the arene ligand is tightly bonded and protects the metal against oxidation to Ru(III). They are prepared by reaction of RuCl3·x(H2O) with 1,3-cyclohexadienes. Work is also conducted on their potential as anticancer drugs.
[(η6-C6H6)RuCl2]2 readily undergoes ligand exchange via cleavage of the chloride bridges, making this complex a versatile precursor to Ru(II) piano stool derivatives.
References
Organic compounds
Organometallic chemistry
Coordination chemistry | Half sandwich compound | [
"Chemistry"
] | 1,401 | [
"Organic compounds",
"Half sandwich compounds",
"Organometallic chemistry",
"Coordination chemistry"
] |
24,364,411 | https://en.wikipedia.org/wiki/Version%20vector | A version vector is a mechanism for tracking changes to data in a distributed system, where multiple agents might update the data at different times. The version vector allows the participants to determine if one update preceded another (happened-before), followed it, or if the two updates happened concurrently (and therefore might conflict with each other). In this way, version vectors enable causality tracking among data replicas and are a basic mechanism for optimistic replication. In mathematical terms, the version vector generates a preorder that tracks the events that precede, and may therefore influence, later updates.
Version vectors maintain state identical to that in a vector clock, but the update rules differ slightly; in this example, replicas can either experience local updates (e.g., the user editing a file on the local node), or can synchronize with another replica:
Initially all vector counters are zero.
Each time a replica experiences a local update event, it increments its own counter in the vector by one.
Each time two replicas a and b synchronize, they both set the elements in their copies of the vector to the element-wise maximum across the two vectors: V_a[x] = V_b[x] = max(V_a[x], V_b[x]) for every index x. After synchronization, the two replicas have identical version vectors.
Pairs of replicas, V_a, V_b, can be compared by inspecting their version vectors and determined to be either: identical (V_a = V_b), concurrent (V_a || V_b), or ordered (V_a < V_b or V_b < V_a). The ordered relation is defined as: V_a < V_b if and only if every element of V_a is less than or equal to its corresponding element in V_b, and at least one of the elements is strictly less. If neither V_a < V_b nor V_b < V_a holds, but the vectors are not identical, then the two vectors must be concurrent.
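The update and comparison rules above can be sketched in a few lines of Python. The class and method names are illustrative only and are not taken from any particular library.

```python
class VersionVector:
    """Minimal version vector: one update counter per replica id."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counters = {}  # replica id -> number of local updates seen

    def local_update(self):
        # A local update increments this replica's own counter by one.
        self.counters[self.replica_id] = self.counters.get(self.replica_id, 0) + 1

    def sync(self, other):
        # On synchronization, both replicas take the element-wise maximum.
        keys = set(self.counters) | set(other.counters)
        merged = {k: max(self.counters.get(k, 0), other.counters.get(k, 0)) for k in keys}
        self.counters = dict(merged)
        other.counters = dict(merged)

    def compare(self, other):
        # Returns 'identical', 'before' (self < other), 'after', or 'concurrent'.
        keys = set(self.counters) | set(other.counters)
        less = any(self.counters.get(k, 0) < other.counters.get(k, 0) for k in keys)
        greater = any(self.counters.get(k, 0) > other.counters.get(k, 0) for k in keys)
        if not less and not greater:
            return "identical"
        if less and not greater:
            return "before"
        if greater and not less:
            return "after"
        return "concurrent"


# Example: two replicas update independently, then synchronize.
a, b = VersionVector("a"), VersionVector("b")
a.local_update()        # a's vector: {a: 1}
b.local_update()        # b's vector: {b: 1}
print(a.compare(b))     # concurrent -- neither update happened-before the other
a.sync(b)               # both vectors become {a: 1, b: 1}
print(a.compare(b))     # identical
```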
Version vectors or variants are used to track updates in many distributed file systems, such as Coda (file system) and Ficus, and are the main data structure behind optimistic replication.
Other mechanisms
Hash Histories avoid the use of counters by keeping a set of hashes of each updated version and comparing those sets by set inclusion. However this mechanism can only give probabilistic guarantees.
Concise Version Vectors allow significant space savings when handling multiple replicated items, such as in directory structures in filesystems.
Version Stamps allow tracking of a variable number of replicas and do not resort to counters. This mechanism can exhibit scalability problems in some settings, but can be replaced by Interval Tree Clocks.
Interval Tree Clocks generalize version vectors and vector clocks and allows dynamic numbers of replicas/processes.
Bounded Version Vectors allow a bounded implementation, with bounded size counters, as long as replica pairs can be atomically synchronized.
Dotted Version Vectors address scalability with a small set of servers mediating replica access by a large number of concurrent clients.
References
External links
Why Logical Clocks are Easy (Compares Causal Histories, Vector Clocks and Version Vectors)
Data synchronization
Logical clock algorithms
Distributed computing problems | Version vector | [
"Physics",
"Mathematics"
] | 585 | [
"Physical quantities",
"Time",
"Distributed computing problems",
"Computational problems",
"Spacetime",
"Mathematical problems",
"Logical clock algorithms"
] |
24,365,117 | https://en.wikipedia.org/wiki/Thermodynamic%20square | The thermodynamic square (also known as the thermodynamic wheel, Guggenheim scheme or Born square) is a mnemonic diagram attributed to Max Born and used to help determine thermodynamic relations. Born presented the thermodynamic square in a 1929 lecture. The symmetry of thermodynamics appears in a paper by F.O. Koenig. The corners represent common conjugate variables while the sides represent thermodynamic potentials. The placement and relation among the variables serves as a key to recall the relations they constitute.
A mnemonic used by students to remember the Maxwell relations (in thermodynamics) is "Good Physicists Have Studied Under Very Fine Teachers", which helps them remember the order of the variables in the square, in clockwise direction. Another mnemonic used here is "Valid Facts and Theoretical Understanding Generate Solutions to Hard Problems", which gives the letters in the normal left-to-right writing direction. Both times A has to be identified with F, another common symbol for Helmholtz free energy. To prevent the need for this switch the following mnemonic is also widely used: "Good Physicists Have Studied Under Very Ambitious Teachers"; another one is Good Physicists Have SUVAT, in reference to the equations of motion. One other useful variation of the mnemonic when the symbol E is used for internal energy instead of U is the following: "Some Hard Problems Go To Finish Very Easy".
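The layout these mnemonics encode can be written out explicitly. The following LaTeX sketch shows one common orientation of the square, with the conjugate variables S, V, P, T at the corners and the potentials U, A, G, H on the sides; it assumes the orientation in which S and P sit on the left-hand side, which is what the sign rule used below presupposes, and other sources rotate or mirror the diagram.

```latex
% One common orientation of the Born square: variables at the corners,
% potentials on the sides.  Reading the border clockwise from G gives
% G, P, H, S, U, V, A, T ("Good Physicists Have Studied Under Very Ambitious Teachers").
\begin{array}{ccc}
S & \;U\; & V \\[4pt]
H &       & A \\[4pt]
P & \;G\; & T
\end{array}
```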
Use
Derivatives of thermodynamic potentials
The thermodynamic square is mostly used to compute the derivative of any thermodynamic potential of interest. Suppose for example one desires to compute the derivative of the internal energy U. The following procedure should be considered:
Place oneself in the thermodynamic potential of interest, namely one of U, H, G or A. In our example, that would be U.
The two opposite corners of the potential of interest represent the coefficients of the overall result. If the coefficient lies on the left hand side of the square, a negative sign should be added. In our example, an intermediate result would be dU = −P(…) + T(…), with the differentials still to be filled in.
In the opposite corner of each coefficient, you will find the associated differential. In our example, the opposite corner to −P would be V (volume) and the opposite corner to T would be S (entropy). In our example, an interim result would be: dU = −P dV + T dS. Notice that the sign convention will affect only the coefficients, not the differentials.
Finally, always add μ dN, where μ denotes the chemical potential. Therefore, we would have: dU = −P dV + T dS + μ dN. (The differentials of all four potentials, obtained in the same way, are collected below.)
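Applied in the same way to each of the four potentials on the sides of the square, the procedure reproduces the standard differential forms (written here for a one-component system; the μ dN term is dropped when the particle number is fixed):

```latex
\begin{aligned}
dU &= T\,dS - P\,dV + \mu\,dN\\
dH &= T\,dS + V\,dP + \mu\,dN\\
dG &= -S\,dT + V\,dP + \mu\,dN\\
dA &= -S\,dT - P\,dV + \mu\,dN
\end{aligned}
```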
The Gibbs–Duhem equation can be derived by using this technique. Notice though that the final addition of the differential of the chemical potential has to be generalized.
Maxwell relations
The thermodynamic square can also be used to find the first-order derivatives in the common Maxwell relations. The following procedure should be considered:
Look at the four corners of the square and make an L-shape with the quantities of interest.
Read the shape in two different ways by seeing it as L and ⅃. The L will give one side of the relation and the ⅃ will give the other. Note that the partial derivative is taken along the vertical stem of L (and ⅃) while the last corner is held constant.
Use L to find (∂V/∂S)_P.
Similarly, use ⅃ to find (∂T/∂P)_S. Again, notice that the sign convention affects only the variable held constant in the partial derivative, not the differentials.
Finally, use the above equations to get the Maxwell relation: (∂V/∂S)_P = (∂T/∂P)_S.
By rotating the shape (for example by 90 degrees counterclockwise) other relations such as (∂S/∂V)_T = (∂P/∂T)_V
can be found; all four relations obtained in this way are collected below.
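For reference, reading the L/⅃ shape in each of its four possible orientations yields the four common Maxwell relations, one for each potential on the sides of the square:

```latex
\begin{aligned}
\left(\frac{\partial T}{\partial V}\right)_{S} &= -\left(\frac{\partial P}{\partial S}\right)_{V} && \text{(from } U\text{)}\\
\left(\frac{\partial T}{\partial P}\right)_{S} &= \left(\frac{\partial V}{\partial S}\right)_{P} && \text{(from } H\text{)}\\
\left(\frac{\partial S}{\partial P}\right)_{T} &= -\left(\frac{\partial V}{\partial T}\right)_{P} && \text{(from } G\text{)}\\
\left(\frac{\partial S}{\partial V}\right)_{T} &= \left(\frac{\partial P}{\partial T}\right)_{V} && \text{(from } A\text{)}
\end{aligned}
```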
Natural variables of thermodynamic potentials
Finally, the potential at the center of each side is a natural function of the variables at the corners of that side. So, U is a natural function of S and V, and G is a natural function of P and T.
Further reading
Bejan, Adrian. Advanced Engineering Thermodynamics, John Wiley & Sons, 3rd ed., 2006, p. 231 ("star diagram").
References
Science mnemonics
Thermodynamics | Thermodynamic square | [
"Physics",
"Chemistry",
"Mathematics"
] | 838 | [
"Thermodynamics",
"Dynamical systems"
] |
24,366,867 | https://en.wikipedia.org/wiki/Dispensation%20%28theology%29 | In theology, one meaning of the term dispensation is as a distinctive arrangement or period in history that forms the framework through which God relates to mankind.
Baháʼí dispensations
In the Baháʼí Faith, a dispensation is a period of progressive revelation relating to the major religions of humanity, usually with a prophet accompanying it.
The faith's founder Bahá'u'lláh advanced the concept that dispensations tend to be millennial, mentioning in the Kitáb-i-Íqán that God will renew the "City of God" about every thousand years, and specifically mentioned that a new Manifestation of God would not appear within 1,000 years (1852–2852) of the inaugurating moment of Bahá'u'lláh's Dispensation, but that the authority of Bahá'u'lláh's message could last up to 500,000 years.
Latter Day Saint dispensations
In the Latter Day Saint movement, a dispensation is a period of time in which God gave priesthood authority to men on the Earth through prophetic callings. Between each dispensation is an apostasy where the priesthood is at least partially absent. The LDS Bible Dictionary says
Plymouth Brethren dispensations
The Plymouth Brethren systematized dispensationalism, which has since been adopted by other groups, including certain Baptists and Pentecostals. The concept of a dispensation – the arrangement of divisions in biblical history – dates back to Irenaeus in the second century. Other Christian writers and leaders since then, such as Augustine of Hippo and Joachim of Fiore (1135–1202), have also offered their own dispensation arrangements of history. Below is a table comparing some of the various dispensational schemes:
Although the divine revelation unfolds progressively, the deposit of truth in earlier time-periods is not discarded, rather it is cumulative. Thus conscience (moral responsibility) is an abiding truth in human life (Ro. 2:15; 9:1; 2 Co. 1:12; 4:2), although it does not continue as a dispensation. Similarly, the saved of this present dispensation are "not under law" as a specific test of obedience to divine revelation (Gal. 5:18; cp. Gal 2:16; 3:11), yet the law remains an integral part of Dispensational teaching. The Law clarifies that, although Christ fulfilled the law for us, by it we have had the knowledge of sin (Rom 7:7), and it is an integral part of the Holy Scriptures, which, to the redeemed, are profitable for "training in righteousness" (2 Ti. 3:16–17; cp. Ro. 15:4). The purpose of each dispensation, then, is to place man under a specific rule of conduct, but such stewardship is not a condition of salvation. In every past dispensation unregenerate man has failed, much like he is failing in the present dispensation, and will fail in the future until Eternity arrives. Salvation has been and will continue to be available to everyone by God's grace through faith.
See also
Dispensationalist theology
References
Bahá'í terminology
Christian terminology
Systematic theology
Time in religion | Dispensation (theology) | [
"Physics"
] | 705 | [
"Spacetime",
"Time in religion",
"Physical quantities",
"Time"
] |
24,369,901 | https://en.wikipedia.org/wiki/Network%20covalent%20bonding | A network solid or covalent network solid (also called atomic crystalline solids or giant covalent structures) is a chemical compound (or element) in which the atoms are bonded by covalent bonds in a continuous network extending throughout the material. In a network solid there are no individual molecules, and the entire crystal or amorphous solid may be considered a macromolecule. Formulas for network solids, like those for ionic compounds, are simple ratios of the component atoms represented by a formula unit.
Examples of network solids include diamond with a continuous network of carbon atoms and silicon dioxide or quartz with a continuous three-dimensional network of SiO2 units. Graphite and the mica group of silicate minerals structurally consist of continuous two-dimensional sheets covalently bonded within the layer, with other bond types holding the layers together. Disordered network solids are termed glasses. These are typically formed on rapid cooling of melts so that little time is left for atomic ordering to occur.
Properties
Hardness: Very hard, due to the strong covalent bonds throughout the lattice (deformation can be easier, however, in directions that do not require the breaking of any covalent bonds, as with flexing or sliding of sheets in graphite or mica).
Melting point: High, since melting means breaking covalent bonds (rather than merely overcoming weaker intermolecular forces).
Solid-phase electrical conductivity: Variable, depending on the nature of the bonding: network solids in which all electrons are used for sigma bonds (e.g. diamond, quartz) are poor conductors, as there are no delocalized electrons. However, network solids with delocalized pi bonds (e.g. graphite) or dopants can exhibit metal-like conductivity.
Liquid-phase electrical conductivity: Low, as the macromolecule consists of neutral atoms, meaning that melting does not free up any new charge carriers (as it would for an ionic compound).
Solubility: Generally insoluble in any solvent due to the difficulty of solvating such a large molecule.
Examples
Boron nitride (BN)
Diamond (carbon, C)
Quartz (SiO2)
Rhenium diboride (ReB2)
Silicon carbide (moissanite, carborundum, SiC)
Silicon (Si)
Germanium (Ge)
Aluminium nitride (AlN)
See also
Molecular solid
References
Chemical bonding | Network covalent bonding | [
"Physics",
"Chemistry",
"Materials_science"
] | 495 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
27,637,144 | https://en.wikipedia.org/wiki/Carbonyl%20oxidation%20with%20hypervalent%20iodine%20reagents | Carbonyl oxidation with hypervalent iodine reagents involves the functionalization of the α position
of carbonyl compounds through the intermediacy of a hypervalent iodine(III) enolate species. This electrophilic
intermediate may be attacked by a variety of nucleophiles or undergo rearrangement or elimination.
Introduction
Hypervalent iodine(III) compounds are attractive oxidizing agents because of their stability and selectivity. In
the presence of enolizable carbonyl compounds, they are able to accomplish oxidative functionalization of the
α position. A key iodine(III) enolate intermediate forms, which then undergoes either nucleophilic substitution
(α-functionalization), elimination (dehydrogenation), or rearrangement. Common hypervalent iodine reagents used to effect these transformations include iodosylbenzene (PhIO), Koser's reagent (PhI(OTs)OH), and (dichloroiodo)benzene (PhICl2).
Mechanism and stereochemistry
Prevailing mechanism
The mechanism of carbonyl oxidation by iodine(III) reagents varies as a function of substrate structure and
reaction conditions, but some generalizations are possible. Under basic conditions, the active iodinating species are iodine(III) compounds in which any relatively acidic ligands on iodine (such as acetate) have been replaced by alkoxide. In all cases, the α carbon forms a bond to
iodine. Reduction of iodine(III) to iodine(I) then occurs via attack of a nucleophile on the now electrophilic α
carbon. Under basic conditions, nucleophilic attack at the carbonyl carbon is faster than attack at the α carbon. Iodine
displacement is actually accomplished intramolecularly by the carbonyl oxygen, which becomes the α-hydroxyl oxygen in the
product.
Rearrangements of the iodine(III) enolate species have been observed. Under acidic conditions, oxidations of aryl enol ethers
lead to α-aryl esters via 1,2-aryl migration. Ring-contractive Favorskii rearrangements may take place under basic
conditions (see below).
Stereochemistry
Using a chromium carbonyl complex, it was shown that displacement of iodine likely occurs with inversion
of configuration. Iodine approaches on the side opposite the chromium tricarbonyl unit due to steric hindrance. Invertive
displacement leads to a syn relationship between chromium and the α hydroxyl group.
Studies on the oxidation of unsaturated carbonyl compounds also provide stereochemical insight.
Only the isomer with a syn relationship between the α-hydroxy and β-methoxy groups was observed.
After nucleophilic attack by methoxide, iodine approaches the face opposite the methoxide. Invertive displacement
by hydroxide then leads to the syn isomer.
Scope and limitations
Under protic conditions, ketones undergo α-hydroxylation and dimethyl acetal formation.
Both iodosylbenzene and iodobenzene diacetate (IBD) can effect this transformation. This method can be used to synthesize α-hydroxy ketones after acidic hydrolysis of the ketal functionality.
In the presence of diaryliodonium salts, enolates undergo α-arylation. Bulky diaryliodoniums react
more slowly, and enolate homocoupling (see below) begins to compete as the aromatic ring is substituted.
α-Oxytosylation facilitates the elaboration of carbonyl compounds into a variety of α-functionalized
products. The α-tosyloxycarbonyl compounds that result are more stable than α-halocarbonyl compounds and
are not lachrymators.
Silyl enol ethers undergo many of the same reactions as carbonyl compounds in the presence of iodine(III)
reagents. α-Alkoxylation is possible in the presence of an external alcohol nucleophile, although yields are somewhat variable.
When no external or internal nucleophile is present, oxidative homocoupling occurs, yielding 1,4-dicarbonyl compounds.
Intramolecularly tethered nucleophiles may displace iodobenzene to afford lactones or other heterocycles. If acidic hydrogens are present in the cyclic product, overoxidation can occur under the reaction conditions.
In some cases, rearrangements complicate hypervalent iodine oxidations of carbonyl compounds. Aryl
migration may occur under acidic conditions, yielding α-aryl esters from enol ethers. Favorskii rearrangements
have also been observed, and these have been particularly useful for steroid synthesis.
Synthetic applications
Oxidative functionalization of silyl enol ethers in low concentration (to avoid homocoupling) without an external nucleophile leads to dehydrogenation. This can be a useful
way to generate α,β-unsaturated carbonyl compounds in the absence of functional handles. For instance, dehydrogenation is employed in steroid synthesis to form unsaturated ketones.
Comparison with other methods
Few compounds that oxidize carbonyl compounds rival the safety, selectivity, and versatility of hypervalent iodine reagents. Other methods for the α-hydroxylation of carbonyl compounds may employ toxic metal-based reagents (such as lead tetraacetate or osmium tetroxide). One alternative to hypervalent iodine oxidation that does not employ heavy metals is the attack of a metal enolate on dioxygen, followed by reduction of the resulting peroxide. The most popular method for the α-hydroxylation of carbonyl compounds is the Rubottom oxidation, which employs silyl enol ethers as substrates and peracids as oxidants.
Oxidative rearrangements are generally easier to accomplish using hypervalent iodine reagents than other oxidizing agents. The Willgerodt-Kindler reaction of alkyl aryl ketones, for instance, requires forcing conditions and often gives low yields of amide products.
References
Organic oxidation reactions | Carbonyl oxidation with hypervalent iodine reagents | [
"Chemistry"
] | 1,329 | [
"Organic oxidation reactions",
"Organic reactions"
] |
27,639,095 | https://en.wikipedia.org/wiki/Antimatter%20tests%20of%20Lorentz%20violation | High-precision experiments could reveal
small previously unseen differences between the behavior
of matter and antimatter.
This prospect is appealing to physicists because it may
show that nature is not Lorentz symmetric.
Introduction
Ordinary matter is made up of protons, electrons, and neutrons.
The quantum behavior of these particles can be predicted with excellent accuracy
using the Dirac equation, named after P.A.M. Dirac.
One of the triumphs of the Dirac equation is
its prediction of the existence of antimatter particles.
Antiprotons, positrons, and antineutrons
are now well understood,
and can be created and studied in experiments.
High-precision experiments have been unable to
detect any difference between the masses
of particles and
those of the corresponding antiparticles.
They also have been unable to detect any difference between the magnitudes of
the charges,
or between the lifetimes,
of particles and antiparticles.
These mass, charge, and lifetime symmetries
are required in a Lorentz and CPT symmetric universe,
but are only a small number of the properties that need to match
if the universe is Lorentz and CPT symmetric.
The Standard-Model Extension (SME),
a comprehensive theoretical framework for Lorentz and CPT violation,
makes specific predictions
about how particles and antiparticles
would behave differently in a universe
that is very close to,
but not exactly,
Lorentz symmetric.
In loose terms,
the SME can be visualized
as being constructed from
fixed background fields
that interact weakly, but differently,
with particles and antiparticles.
The behavioral differences between
matter and antimatter
are specific to each individual experiment.
Factors that determine the behavior include
the particle species involved,
and the electromagnetic, gravitational, and nuclear fields controlling the system.
Furthermore,
for any Earth-bound experiment,
the rotational and orbital motion of the Earth is important,
leading to sidereal and seasonal signals.
For experiments conducted in space, the orbital motion of the craft
is an important factor in determining the signals
of Lorentz violation that might arise.
To harness the predictive power of the SME in any specific system,
a calculation has to be performed
so that all these factors can be accounted for.
These calculations are facilitated by the reasonable assumption that Lorentz
violations, if they exist,
are small. This makes it possible to use perturbation theory to obtain results
that would otherwise be extremely difficult to find.
The SME generates a modified Dirac equation
that breaks Lorentz symmetry
for some types of particle motions, but not others.
It therefore holds important information
about how Lorentz violations might have been hidden
in past experiments,
or might be revealed in future ones.
Lorentz violation tests with Penning Traps
A Penning trap
is a research apparatus
capable of trapping individual charged particles
and their antimatter counterparts.
The trapping mechanism is
a strong magnetic field that keeps the particles near a central axis,
and an electric field that turns the particles around
when they stray too far along the axis.
The motional frequencies of the trapped particle
can be monitored and measured with astonishing precision.
One of these frequencies is the anomaly frequency,
which has played an important role in the measurement
of the gyromagnetic ratio of the electron.
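As a point of reference (a textbook relation that ignores small relativistic and trap-field corrections, not the analysis of any particular experiment), the anomaly frequency is the difference between the spin-precession and cyclotron frequencies and fixes the g-factor:

\nu_a = \nu_s - \nu_c = \left(\tfrac{g}{2} - 1\right)\nu_c, \qquad \frac{g}{2} = 1 + \frac{\nu_a}{\nu_c}.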
The first calculations of SME effects
in Penning traps
were published in 1997
and 1998.
They showed that,
in identical Penning traps,
if the
anomaly frequency of an electron was increased,
then the anomaly frequency of a positron
would be decreased.
The size of the increase or decrease
in the frequency
would be a measure of
the strength of one of the SME background fields.
More specifically,
it is a measure
of the component of the background field
along the direction of the axial magnetic field.
In tests of Lorentz symmetry,
the noninertial nature of the laboratory
due to the rotational and orbital motion of the Earth
has to be taken into account.
Each Penning-trap measurement
is the projection of the background SME fields
along the axis of the experimental magnetic field
at the time of the experiment.
This is further complicated if the experiment takes
hours, days, or longer to perform.
One approach is to seek instantaneous differences,
by comparing anomaly frequencies
for a particle and an antiparticle
measured at the same time on different days.
Another approach is to seek
sidereal variations,
by continuously monitoring
the anomaly frequency for just one species of particle
over an extended time.
Each offers different challenges.
For example,
instantaneous comparisons
require the electric field in the trap to be
precisely reversed,
while sidereal tests are limited
by the stability of the magnetic field.
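As a schematic illustration of the sidereal-variation approach (not the pipeline of any group discussed here; the data are synthetic and the function names are invented for this sketch), one can fold the measurements on the sidereal day and fit a sinusoid by least squares:

import numpy as np

SIDEREAL_DAY_S = 86164.0905  # mean sidereal day in seconds

def sidereal_fit(times_s, anomaly_freq_hz):
    """Least-squares fit of a constant plus a sidereal-period sinusoid.

    Returns (offset, amplitude); the amplitude bounds any sidereal
    modulation of the measured anomaly frequency.
    """
    phase = 2.0 * np.pi * (times_s % SIDEREAL_DAY_S) / SIDEREAL_DAY_S
    # Design matrix: constant term + cosine + sine of the sidereal phase.
    design = np.column_stack([np.ones_like(phase), np.cos(phase), np.sin(phase)])
    coeffs, *_ = np.linalg.lstsq(design, anomaly_freq_hz, rcond=None)
    offset, a_cos, a_sin = coeffs
    return offset, np.hypot(a_cos, a_sin)

# Synthetic example: a constant 185 MHz anomaly frequency with 0.2 Hz noise
# and no injected sidereal signal, sampled over a few weeks.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 30 * 86400.0, size=2000))
f = 185e6 + rng.normal(0.0, 0.2, size=t.size)
offset, amplitude = sidereal_fit(t, f)
print(f"mean frequency ~ {offset:.1f} Hz, sidereal amplitude ~ {amplitude:.3f} Hz")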
An experiment conducted by the physicist Gerald Gabrielse of Harvard University involved two particles confined in a Penning trap. The idea was to compare a proton and an antiproton, but to overcome the technicalities of having opposite charges,
a negatively charged hydrogen ion was used in place of the proton. The ion, which consists of a proton electrostatically bound to two electrons, carries the same charge as the antiproton, so the two species can be trapped simultaneously. This design allows quick interchange of the ion and the antiproton, so an instantaneous-type Lorentz test can be performed. The cyclotron frequencies of the two trapped particles
were about 90 MHz, and the apparatus was capable of resolving differences
in these of about 1.0 Hz. The absence of Lorentz violating effects of this type
placed a limit on combinations of -type SME coefficients that had not been accessed in other experiments. The results
appeared in Physical Review Letters in 1999.
The Penning-trap group at the University of Washington, headed by the Nobel Laureate Hans Dehmelt, conducted a search for sidereal variations in the anomaly frequency of a trapped electron. The results were extracted from an experiment that ran for several weeks, and the analysis required splitting the data into "bins" according to the orientation of the apparatus in the inertial reference frame of the Sun. At a resolution of 0.20 Hz, they were unable to discern any sidereal variations in the anomaly frequency, which runs around 185,000,000 Hz. Translating this into an upper bound on the relevant
SME background field, places a bound of about
10−24 GeV on a -type electron coefficient.
This work
was published in Physical Review Letters in 1999.
Another experimental result from the Dehmelt group involved a comparison of the instantaneous type. Using data from a single trapped electron
and a single trapped positron, they again found no difference
between the two anomaly frequencies at a resolution of about 0.2 Hz.
This result placed a bound on a simpler combination of
-type coefficients at a level of about 10−24 GeV.
In addition to being a limit on Lorentz violation,
this also limits the CPT violation.
This result
appeared in Physical Review Letters in 1999.
Lorentz violation in antihydrogen
The antihydrogen atom is
the antimatter counterpart of the hydrogen atom.
It has a negatively charged antiproton
at the nucleus
that attracts a positively charged positron
orbiting around it.
The spectral lines of hydrogen have frequencies
determined by the energy differences
between the quantum-mechanical orbital states
of the electron.
These lines
have been studied in thousands of spectroscopic experiments
and are understood in great detail.
The quantum mechanics of the positron orbiting an antiproton
in the antihydrogen atom is expected to be very similar
to that of the hydrogen atom.
In fact,
conventional physics predicts that the spectrum of antihydrogen
is identical to that of regular hydrogen.
In the presence of the background fields of the SME,
the spectra of hydrogen and antihydrogen
are expected to show tiny differences
in some lines,
and no differences in others.
Calculations of these SME effects
in antihydrogen and hydrogen
were published
in Physical Review Letters
in 1999.
One of the main results found
is that hyperfine transitions
are sensitive to Lorentz breaking effects.
Several experimental groups at CERN are working on producing antihydrogen: AEGIS, ALPHA, ASACUSA, ATRAP, and GBAR.
Creating trapped antihydrogen
in sufficient quantities
to do spectroscopy
is an enormous experimental challenge.
Signatures of Lorentz violation
are similar to those expected in Penning traps.
There would be sidereal effects
causing variations in the spectral frequencies
as the experimental laboratory turns with the Earth.
There would also be the possibility of finding instantaneous
Lorentz breaking signals
when antihydrogen spectra are compared directly with conventional hydrogen spectra.
In October 2017, the BASE experiment at CERN reported a measurement of the antiproton magnetic moment to a precision of 1.5 parts per billion. It is consistent with the most precise measurement of the proton magnetic moment (also made by BASE in 2014), which supports the hypothesis of CPT symmetry. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter.
Lorentz violation with muons
The muon and its positively charged antiparticle
have been used to perform tests of Lorentz symmetry.
Since the lifetime of the muon is only a few microseconds,
the experiments are quite different
from ones with electrons and positrons.
Calculations for muon experiments
aimed at probing Lorentz violation
in the SME
were first published in the year 2000.
In the year 2001,
Hughes and collaborators published their results
from a search for sidereal signals in the spectrum
of muonium,
an atom consisting of an electron bound to a positively charged antimuon.
Their data,
taken over a two-year period,
showed no evidence for Lorentz violation.
This placed a stringent constraint on
a combination of -type coefficients in the SME,
published in Physical Review Letters.
In 2008,
the Muon g−2 Collaboration at the Brookhaven National Laboratory published results after searching for signals of Lorentz violation with muons and antimuons.
In one type of analysis, they compared the anomaly frequencies
for the muon and its antiparticle. In another, they looked for sidereal variations by allocating their data into one-hour "bins" according to the orientation of the Earth relative to the Sun-centered inertial reference frame.
Their results, published in Physical Review Letters in 2008,
show no signatures of Lorentz violation at the resolution of the Brookhaven experiment.
Experimental results in all sectors of the
SME are summarized in the Data Tables for Lorentz and CPT violation.
See also
AEGIS Experiment
ALPHA Experiment
ASACUSA Experiment
ATRAP Experiment
Lorentz-violating electrodynamics
Lorentz-violating neutrino oscillations
Standard-Model Extension
Tests of special relativity
Test theories of special relativity
References
External links
Background information on Lorentz and CPT violation
Data Tables for Lorentz and CPT Violation
AEGIS Experiment
ALPHA Experiment
ASACUSA Experiment
ATRAP Experiment
Muon Experiment
Antimatter | Antimatter tests of Lorentz violation | [
"Physics"
] | 2,201 | [
"Antimatter",
"Matter"
] |
27,642,888 | https://en.wikipedia.org/wiki/Electric%20utility | An electric utility, or a power company, is a company in the electric power industry (often a public utility) that engages in electricity generation and distribution of electricity for sale generally in a regulated market. The electrical utility industry is a major provider of energy in most countries.
Electric utilities include investor owned, publicly owned, cooperatives, and nationalized entities. They may be engaged in all or only some aspects of the industry. Electricity markets are also considered electric utilities—these entities buy and sell electricity, acting as brokers, but usually do not own or operate generation, transmission, or distribution facilities. Utilities are regulated by local and national authorities.
Electric utilities are facing increasing demands including aging infrastructure, reliability, and regulation.
In 2009, the French company EDF was the world's largest producer of electricity.
Organization
Power transactions
An electric power system is a group of generation, transmission, distribution, communication, and other facilities that are physically connected. The flow of electricity within the system is maintained and controlled by dispatch centers which can buy and sell electricity based on system requirements.
Executive compensation
The executive compensation received by the executives in utility companies often receives the most scrutiny in the review of operating expenses. Just as regulated utilities and their governing bodies struggle to maintain a balance between keeping consumer costs reasonable and being profitable enough to attract investors, they must also compete with private companies for talented executives and then be able to retain those executives.
Regulated companies are less likely to use incentive-based remuneration in addition to base salaries. Executives in regulated electric utilities are less likely to be paid for their performance in bonuses or stock options. They are less likely to approve compensation policies that include incentive-based pay. The compensation for electric utility executives will be the lowest in regulated utilities that have an unfavorable regulatory environment. These companies have more political constraints than those in a favorable regulatory environment and are less likely to have a positive response to requests for rate increases.
Just as increased constraints from regulation drive compensation down for executives in electric utilities, deregulation has been shown to increase remuneration. The need to encourage risk-taking behavior in seeking new investment opportunities while keeping costs under control requires deregulated companies to offer performance-based incentives to their executives. It has been found that increased compensation is also more likely to attract executives experienced in working in competitive environments.
In the United States, the Energy Policy Act of 1992 removed previous barriers to wholesale competition in the electric utility industry. Roughly two dozen jurisdictions allow for deregulated electric utilities, including Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, Texas, Virginia, Arizona, Arkansas, California, Connecticut, Delaware, Illinois, Maine, Maryland, Massachusetts, Michigan, Montana, New Hampshire, New Jersey, New Mexico, New York, and Washington, D.C. As electric utility monopolies have been increasingly broken up into deregulated businesses, executive compensation has risen, particularly incentive compensation.
Oversight
Oversight is typically carried out at the national level; however, it varies depending on financial support and external influences. There is no influential international energy oversight organization. A World Energy Council does exist, but its mission is mostly to advise and share information; it holds no legislative or executive power.
Alternative energy promotion
Alternative energy has become more and more prevalent in recent times and as it is inherently independent of more traditional sources of energy, the market seems to have a very different structure. In the United States, to promote the production and development of alternative energies, there are many subsidies, rewards, and incentives that encourage companies to take up the challenge themselves. There is precedent for such a system working in countries like Nicaragua. In 2005, Nicaragua gave renewable energy companies tax and duty exemptions, which spurred a great deal of private investment.
The success in Nicaragua may not be easily replicated, however. Germany's analogous effort, known as the Energiewende, is generally considered a failure for many reasons. A primary reason was that it was poorly timed, having been proposed during a period in which the country's energy economy was under greater competitive pressure.
Globally, the transition of electric utilities to renewables remains slow, hindered by concurrent continued investment in the expansion of fossil fuel capacity.
Nuclear energy
Nuclear energy may be classified as a green source depending on the country. Although there used to be much more privatization in this energy sector, after the 2011 Fukushima Daiichi nuclear power plant disaster in Japan there has been a move away from nuclear energy, especially from privately owned nuclear power plants. The criticism is that privatized companies tend to cut corners and costs in pursuit of profit, which has proven disastrous in worst-case scenarios. The disaster also placed a strain on many other countries, as foreign governments felt pressured to close nuclear power plants in response to public concerns. Nuclear energy nevertheless still plays a major part in many communities around the world.
Customer expectations
Utilities have found that it isn't simple to meet the unique needs of individual customers, whether residential, corporate, industrial, government, military, or otherwise. Customers in the twenty-first century have new and urgent expectations that demand a transformation of the electric grid. They want a system that gives them new tools, better data to help manage energy usage, advanced protections against cyberattacks, and a system that minimizes outage times and quickens power restoration.
See also
Consumer Advocate for Customers of Public Utilities
Electricity transmission
Rate Case
Rate base (energy)
References
Electric power
Public utilities
Economics of transport and utility industries | Electric utility | [
"Physics",
"Engineering"
] | 1,105 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
25,778,520 | https://en.wikipedia.org/wiki/Cosmic%20background%20radiation | Cosmic background radiation is electromagnetic radiation that fills all space. The origin of this radiation depends on the region of the spectrum that is observed. One component is the cosmic microwave background. This component is redshifted photons that have freely streamed from an epoch when the Universe became transparent for the first time to radiation. Its discovery and detailed observations of its properties are considered one of the major confirmations of the Big Bang. The discovery (by chance in 1965) of the cosmic background radiation suggests that the early universe was dominated by a radiation field, a field of extremely high temperature and pressure.
In the Sunyaev–Zel'dovich effect, the cosmic background radiation interacts with clouds of hot electrons, which distorts the spectrum of the radiation.
There is also background radiation in the infrared, x-rays, etc., with different causes, and they can sometimes be resolved into an individual source. See cosmic infrared background and X-ray background. See also cosmic neutrino background and extragalactic background light.
Timeline of significant events
1896:
Charles Édouard Guillaume estimates the "radiation of the stars" to be 5.6 K.
1926:
Sir Arthur Eddington estimates the non-thermal radiation of starlight in the galaxy has an effective temperature of 3.2 K.
1930s:
Erich Regener calculates that the non-thermal spectrum of cosmic rays in the galaxy has an effective temperature of 2.8 K.
1931:
The term microwave first appears in print: "When trials with wavelengths as low as 18 cm were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon." (Telegraph & Telephone Journal XVII, 179/1)
1938:
Walther Nernst re-estimates the cosmic ray temperature as 0.75 K.
1946:
The term "microwave" is first used in print in an astronomical context in an article "Microwave Radiation from the Sun and Moon" by Robert Dicke and Robert Beringer.
1946:
Robert Dicke predicts a microwave background radiation temperature of 20 K (ref: Helge Kragh)
1946:
Robert Dicke predicts a microwave background radiation temperature of "less than 20 K" but later revised to 45 K (ref: Stephen G. Brush).
1946:
George Gamow estimates a temperature of 50 K.
1948:
Ralph Alpher and Robert Herman re-estimate Gamow's estimate at 5 K.
1949:
Ralph Alpher and Robert Herman re-re-estimate Gamow's estimate at 28 K.
1960s:
Robert Dicke re-estimates a MBR (microwave background radiation) temperature of 40 K (ref: Helge Kragh).
1965:
Arno Penzias and Robert Woodrow Wilson measure the temperature to be approximately 3 K. Robert Dicke, P. J. E. Peebles, P. G. Roll and D. T. Wilkinson interpret this radiation as a signature of the Big Bang.
See also
Hot dark matter
Irradiation
Unruh effect
References
External links
The Diffuse X-ray and Gamma-ray Background & Deep Fields
Observational astronomy
Physical cosmology
Concepts in astronomy
Electromagnetic radiation | Cosmic background radiation | [
"Physics",
"Astronomy"
] | 644 | [
"Physical phenomena",
"Astronomical sub-disciplines",
"Concepts in astronomy",
"Electromagnetic radiation",
"Theoretical physics",
"Observational astronomy",
"Astrophysics",
"Radiation",
"Physical cosmology"
] |
25,779,373 | https://en.wikipedia.org/wiki/RISSP | RISSP stands for Record of Inter System Safety Precautions
. It is a written record of inter-system safety precautions to be compiled
in accordance with the provisions of Operating Code no. 8 (OC8). Where a High Voltage electrical boundary occurs, for instance between a power station and electrical utility, the safety controllers each side of the boundary must co-ordinate their activities. For the electrical transmission system in England, Wales and Scotland, the Ofgem-defined industry standard document OC8 of the Grid code defines how safety precautions can be managed with the Record of Inter System Safety Precautions (RISSP) independently from the safety rules of the connected parties. The purpose of the RISSP is to guarantee that safety precautions provided by a third-party can be quoted in a safety document so that work can take place.
Identifying number
The RISSP form must have a uniquely identifying number, provided by the party requesting safety precautions. To achieve this, all parties connected to the national grid must have a RISSP prefix code. These are, in general, abbreviations of the company name at the time it applied for its code. All RISSPs have the suffix 'R'. The requester states the location and equipment that requires the RISSP. The implementer then states the location and nature of the precautions provided for adequate safety of that equipment.
Cascading RISSP
Some interfaces may include more than two parties, for instance on substation busbars (BB). A request from one party on the BB may then have to go via the BB controller to the other parties. It is then for the BB controller to obtain all required safety precautions from the other parties and cascade them on to the initial requesting controller. This is sometimes referred to as a cascading RISSP. Where a third party requesting safety precautions controls some of the other infeeds to the interface concerned (such as a distribution network operator who has more than one circuit on a BB but is requesting safety for just one of those circuits), it is up to the requester to control the linked safety precautions and so ensure safety at the interface. This is often referred to as linkage. It is possible to have both linkage and cascading in a single job, requiring the use of RISSP forms with multiple users.
Multiple directions
In addition, it is possible to request safety in multiple directions, which results in controllers having both requesting and implementing RISSPs in force simultaneously. These RISSPs may involve cascading and/or linkage. It is not possible to have RISSPs going in multiple directions if HV testing is taking place across a boundary; in these circumstances only a single safety document can be in force at any one time, and any RISSPs in place can only be to secure the safety precautions for that document.
Control boundaries
RISSPs do not have to be from one company to another, only one control boundary to another. Thus two distribution network operators owned by the same parent company but with different geographical areas of responsibility use RISSPs with each other. Also, the boundary between a High Voltage controller and a Low Voltage controller can be controlled by the RISSP process. This most commonly occurs with the fuses/links between an auxiliary transformer and a substation's 415 volt AC supplies.
Authorisation
RISSPs can only be agreed to and signed by a suitably authorised person. Any company that is connected to the grid must have somebody authorised available or on call at all times.
References
Electric power
National Grid (Great Britain) | RISSP | [
"Physics",
"Engineering"
] | 736 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
34,494,017 | https://en.wikipedia.org/wiki/Fracking%20proppants | A proppant is a solid material, typically sand, treated sand or man-made ceramic materials, designed to keep an induced hydraulic fracture open, during or following a fracturing treatment, most commonly for unconventional reservoirs. It is added to a fracking fluid which may vary in composition depending on the type of fracturing used, and can be gel, foam or slickwater-based. In addition, there may be unconventional fracking fluids. Fluids make tradeoffs in such material properties as viscosity, where more viscous fluids can carry more concentrated proppant; the energy or pressure demands to maintain a certain flux pump rate (flow velocity) that will conduct the proppant appropriately; pH, various rheological factors, among others. In addition, fluids may be used in low-volume well stimulation of high-permeability sandstone wells ( per well) to the high-volume operations such as shale gas and tight gas that use millions of gallons of water per well.
Conventional wisdom has often vacillated about the relative superiority of gel, foam and slickwater fluids with respect to each other, which is in turn related to proppant choice. For example, Zuber, Kuskraa and Sawyer (1988) found that gel-based fluids seemed to achieve the best results for coalbed methane operations, but as of 2012, slickwater treatments are more popular.
Large-mesh proppants have greater permeability than small-mesh proppants at low closure stresses, but will mechanically fail (i.e. get crushed) and produce very fine particulates ("fines") at high closure stresses, such that smaller-mesh proppants overtake large-mesh proppants in permeability above a certain threshold stress.
Though sand is a common proppant, untreated sand is prone to significant fines generation (often measured as wt% of the initial feed). One manufacturer has claimed untreated sand fines production of 23.9%, compared with 8.2% for lightweight ceramic and 0.5% for their product. One way to maintain an ideal mesh size (i.e. permeability) under load is to choose a proppant of sufficient strength: sand might be coated with resin to form curable or pre-cured resin-coated sands, and in certain situations a different proppant material might be chosen altogether; popular alternatives include ceramics and sintered bauxite.
Proppant weight and strength
Increased strength often comes at a cost of increased density, which in turn demands higher flow rates, viscosities or pressures during fracturing, which translates to increased fracturing costs, both environmentally and economically. Lightweight proppants conversely are designed to break the strength-density trend, or even afford greater gas permeability. Proppant geometry is also important; certain shapes or forms amplify stress on proppant particles making them especially vulnerable to crushing (a sharp discontinuity can classically allow infinite stresses in linear elastic materials).
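As a back-of-the-envelope illustration of the density and viscosity tradeoff (a Stokes-settling estimate only; the particle sizes, densities, and viscosities below are indicative assumptions, not industry figures):

def stokes_settling_velocity(d_m, rho_p, rho_f, mu):
    """Terminal settling velocity (m/s) of a small sphere in a viscous fluid,
    from Stokes' law: v = (rho_p - rho_f) * g * d^2 / (18 * mu)."""
    g = 9.81  # gravitational acceleration, m/s^2
    return (rho_p - rho_f) * g * d_m**2 / (18.0 * mu)

# Indicative values only: a 0.5 mm grain of quartz sand (~2650 kg/m^3) versus
# sintered bauxite (~3500 kg/m^3) in slickwater (~0.001 Pa*s) and a gel (~0.1 Pa*s).
for name, rho_p in [("sand", 2650.0), ("sintered bauxite", 3500.0)]:
    for fluid, mu in [("slickwater", 1e-3), ("gel", 1e-1)]:
        v = stokes_settling_velocity(0.5e-3, rho_p, 1000.0, mu)
        print(f"{name:16s} in {fluid:10s}: settles at ~{v*1000:.2f} mm/s")

The denser proppant settles out of the thin fluid far faster, which is why it demands higher viscosity or pump rate to stay suspended.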
Proppant deposition and post-treatment behaviours
Proppant mesh size also affects fracture length: proppants can be "bridged out" if the fracture width decreases to less than twice the proppant diameter. As proppants are deposited in a fracture, they can resist further fluid flow or the flow of other proppants, inhibiting further growth of the fracture. In addition, closure stresses (once external fluid pressure is released) may cause proppants to reorganise or be squeezed out, even if no fines are generated, resulting in a smaller effective fracture width and decreased permeability. Some companies try to induce weak bonding at rest between proppant particles in order to prevent such reorganisation. The modelling of the fluid dynamics and rheology of fracturing fluid and its carried proppants is a subject of active research by the industry.
Proppant costs
Though good proppant choice positively impacts output rate and overall ultimate recovery of a well, commercial proppants are also constrained by cost. Transport costs from supplier to site form a significant component of the cost of proppants.
Other components of fracturing fluids
Other than proppant, slickwater fracturing fluids are mostly water, generally 99% or more by volume, but gel-based fluids can see polymers and surfactants comprising as much as 7 vol%, ignoring other additives. Other common additives include hydrochloric acid (low pH can etch certain rocks, dissolving limestone for instance), friction reducers, guar gum, biocides, emulsion breakers, emulsifiers, and 2-Butoxyethanol.
Radioactive tracer isotopes are sometimes included in the hydrofracturing fluid to determine the injection profile and location of fractures created by hydraulic fracturing. Patents describe in detail how several tracers are typically used in the same well. Wells are hydraulically fractured in different stages. Tracers with different half-lives are used for each stage. Their half-lives range from 40.2 hours (lanthanum-140) to 5.27 years (cobalt-60). Amounts per injection of radionuclide are listed in The US Nuclear Regulatory Commission (NRC) guidelines. The NRC guidelines also list a wide range of radioactive materials in solid, liquid and gaseous forms that are used as field flood or enhanced oil and gas recovery study applications tracers used in single and multiple wells.
In the US, the use of fracturing fluids in hydraulic fracturing operations was explicitly excluded from regulation under the Safe Drinking Water Act in 2005, except for diesel-based fluids, which the Environmental Protection Agency notes have a higher proportion of volatile organic compounds and carcinogenic BTEX; the exclusion has since attracted controversy as the product of special-interest lobbying.
See also
List of additives for hydraulic fracturing
Hydraulic fracturing and radionuclides
References
Petroleum engineering
Hydraulic fracturing | Fracking proppants | [
"Chemistry",
"Engineering"
] | 1,323 | [
"Petroleum engineering",
"Petroleum technology",
"Energy engineering",
"Natural gas technology",
"Hydraulic fracturing"
] |
34,494,696 | https://en.wikipedia.org/wiki/ShelXle | The program ShelXle is a graphical user interface for the structure refinement program SHELXL. ShelXle combines an editor with syntax highlighting for the SHELXL-associated (input) and (output) files with an interactive graphical display for visualization of a three-dimensional structure including the electron density (Fo) and difference density (Fo-Fc) maps.
Overview
ShelXle can display electron density maps like the macromolecular program Coot but is more intended for smaller molecules.
A number of excellent graphical user interfaces (GUIs) exist for small-molecule crystal structure refinement with SHELX (e.g., WINGX, Olex2, XSEED, PLATON and SYSTEM-S, and the Bruker programs XP and XSHELL).
ShelXle is free software, distributed under the GNU LGPL. It is available from the ShelXle website or from SourceForge. Binaries are available for Windows, macOS and the Linux distributions SuSE, Debian and Ubuntu.
The Windows binary is distributed with the NSIS Installer.
Features
Editor featuring syntax highlighting and code completion for the SHELX instructions.
Clicking on an atom in the structure view sets the text cursor to the line that contains this atom.
Locating atoms in structure view from the editor.
Rename mode with support of residues and disordered parts and free variables.
Program architecture
ShelXle uses the Qt framework. It is written entirely in C++ and does not use any scripting language. For the refinement it calls the external SHELXL binary, which may also be SHELXH or SHELXLMP from George M. Sheldrick, or XL from Bruker.
SHELX
SHELX has been developed by George M. Sheldrick since the late 1960s. Important releases are SHELX76 and SHELX97. It is still developed, but releases usually come only after ten years of testing.
Academic users can download the SHELX programs freely after registration.
External links
ShelXle web site
ShelXle support forum at xrayforum.co.uk
ShelXle at Sourceforge
References
Crystallography software
Molecular modelling software
Free science software | ShelXle | [
"Chemistry",
"Materials_science"
] | 478 | [
"Molecular modelling software",
"Computational chemistry software",
"Molecular modelling",
"Crystallography",
"Crystallography software"
] |
34,500,031 | https://en.wikipedia.org/wiki/Glucose%20phosphate%20broth | Glucose phosphate broth is used to perform methyl red (MR) test and Voges–Proskauer test (VP).
Contents
Glucose – 5 g/L
Dipotassium phosphate – 5 g/L
Proteose Peptone – 5 g/L
Distilled water – 1000 mL
pH – 6.9
Methyl red test
Principle
It is used to determine the ability of an organism to produce mixed acids by fermentation of glucose and to overcome the buffering capacity of the medium.
Procedure
Inoculate glucose phosphate broth with a pure culture of the test organism. Incubate the broth at 35 °C for 48–72 hours.
After incubation add 5 drops of methyl red directly into the broth, through the sides of the tube.
Interpretation
The development of a stable red color at the surface of the medium indicates sufficient acid production to lower the pH to 4.4 and constitutes a positive test. Since other organisms may produce smaller quantities of acid from the test substrate, an intermediate orange color between yellow and red may develop; this does not indicate a positive test.
Controls
Positive and negative controls should be run after preparation of each lot of medium.
Positive control: Escherichia coli
Negative control: Klebsiella
Voges Proskauer test
Principle
It is used to determine the ability of some organisms to produce a neutral end product, acetyl methyl carbinol (acetoin), from glucose fermentation. Members such as Klebsiella and Enterobacter produce acetoin as the chief end product of glucose metabolism and form only small quantities of mixed acids. In the presence of atmospheric oxygen and 40% KOH, acetoin is converted to diacetyl, and α-naphthol serves as a catalyst to bring out a red color complex.
Media
Glucose Phosphate Broth
Reagents
A: α-naphthol – 5 g in 100 mL absolute ethyl alcohol (use 0.6 mL per test; 3 parts)
B: KOH – 40 g in 100 mL distilled water (use 0.2 mL per test; 1 part)
Procedure
Inoculate a tube of glucose phosphate broth with a pure inoculum of test organism and incubate at 35 °C for 24 hours.
To 1 mL of this broth add 0.6 mL of 5% α-Naphthol followed by 0.2 mL of 40% KOH. Shake the tube gently to expose the medium to atmospheric oxygen and allow the tube to remain undisturbed for 10–15 minutes.
Interpretation
A positive test is indicated by the development of a red color within 15 minutes or more after addition of the reagents, showing the presence of diacetyl, the oxidation product of acetoin. The result should be read within one hour of adding the reagents, because on longer standing negative VP cultures may develop a copper-like colour from the action of the reagents themselves, which can be misread as a positive test.
Controls
Positive and negative controls should be run after preparation of each lot of medium.
Positive control: Klebsiella
Negative control: Escherichia coli
References
PH indicators
Microbiology techniques | Glucose phosphate broth | [
"Chemistry",
"Materials_science",
"Biology"
] | 649 | [
"Titration",
"PH indicators",
"Chromism",
"Chemical tests",
"Equilibrium chemistry",
"Microbiology techniques"
] |
22,843,059 | https://en.wikipedia.org/wiki/Slater%20integrals | In mathematics and mathematical physics, Slater integrals are certain integrals of products of three spherical harmonics. They occur naturally when applying an orthonormal basis of functions on the unit sphere that transform in a particular way under rotations in three dimensions. Such integrals are particularly useful when computing properties of atoms which have natural spherical symmetry. These integrals are defined below along with some of their mathematical properties.
Formulation
In connection with the quantum theory of atomic structure, John C. Slater defined the integral of three spherical harmonics as a coefficient. These coefficients are essentially the product of two Wigner 3jm symbols.
These integrals are useful and necessary when doing atomic calculations of the Hartree–Fock variety where matrix elements of the Coulomb operator and Exchange operator are needed. For an explicit formula, one can use Gaunt's formula for associated Legendre polynomials.
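For reference, one common form of this integral (with the Condon–Shortley phase convention; normalisation conventions vary between texts) is

\int Y_{l_1 m_1}(\theta,\varphi)\, Y_{l_2 m_2}(\theta,\varphi)\, Y_{l_3 m_3}(\theta,\varphi)\, d\Omega
= \sqrt{\frac{(2l_1+1)(2l_2+1)(2l_3+1)}{4\pi}}
\begin{pmatrix} l_1 & l_2 & l_3 \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} l_1 & l_2 & l_3 \\ m_1 & m_2 & m_3 \end{pmatrix}.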
Note that the product of two spherical harmonics can be written in terms of these coefficients. Expanding such a product over a spherical harmonic basis of the same order, multiplying by the conjugate harmonic, and integrating (using the conjugate property and being careful with phases and normalisations) expresses the expansion coefficients as integrals of the above form. The coefficients obey a number of identities, including the selection rules inherited from the Wigner 3jm symbols: the integral vanishes unless the three orders satisfy the triangle inequality and their sum is even.
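A quick numerical check is possible with SymPy's sympy.physics.wigner module (assuming its conventions match the formula above); the specific quantum numbers below are chosen only for illustration:

from sympy import sqrt, pi, simplify
from sympy.physics.wigner import gaunt, wigner_3j

# Gaunt coefficient: the integral of Y_{1,0} * Y_{1,0} * Y_{2,0} over the unit sphere.
lhs = gaunt(1, 1, 2, 0, 0, 0)

# The same quantity assembled from two Wigner 3jm symbols, as in the formula above.
rhs = (sqrt((2*1 + 1) * (2*1 + 1) * (2*2 + 1) / (4 * pi))
       * wigner_3j(1, 1, 2, 0, 0, 0)
       * wigner_3j(1, 1, 2, 0, 0, 0))

print(lhs, rhs, simplify(lhs - rhs) == 0)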
References
Atomic physics
Quantum chemistry
Rotational symmetry | Slater integrals | [
"Physics",
"Chemistry"
] | 253 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
"Atomic physics",
" molecular",
" and optical physics",
"Atomic",
"Physical chemistry stubs",
"Symmetry",
"Rotational symmetry"
] |
22,844,355 | https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Rado%20theorem | In partition calculus, part of combinatorial set theory, a branch of mathematics, the Erdős–Rado theorem is a basic result extending Ramsey's theorem to uncountable sets. It is named after Paul Erdős and Richard Rado. It is sometimes also attributed to Đuro Kurepa who proved it under the additional assumption of the generalised continuum hypothesis, and hence the result is sometimes also referred to as the Erdős–Rado–Kurepa theorem.
Statement of the theorem
If r ≥ 0 is finite and κ is an infinite cardinal, then

exp_r(κ)^+ → (κ^+)^{r+1}_κ,

where exp_0(κ) = κ and inductively exp_{r+1}(κ) = 2^{exp_r(κ)}. This is sharp in the sense that exp_r(κ)^+ cannot be replaced by exp_r(κ) on the left hand side.
The above partition symbol describes the following statement: if f is a coloring of the (r+1)-element subsets of a set of cardinality exp_r(κ)^+ in κ many colors, then there is a homogeneous set of cardinality κ^+ (a set all of whose (r+1)-element subsets get the same f-value).
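For example (a standard special case, stated here for illustration), taking r = 1 gives (2^κ)^+ → (κ^+)^2_κ: any coloring of the pairs from a set of cardinality (2^κ)^+ with κ colors admits a homogeneous set of cardinality κ^+; in particular, (2^{ℵ_0})^+ → (ℵ_1)^2_{ℵ_0}.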
Notes
References
Set theory
Theorems in combinatorics
Rado theorem | Erdős–Rado theorem | [
"Mathematics"
] | 261 | [
"Theorems in combinatorics",
"Set theory",
"Mathematical logic",
"Combinatorics",
"Theorems in discrete mathematics"
] |