id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
6,748,786 | https://en.wikipedia.org/wiki/FLARM | FLARM is a proprietary electronic system used to selectively alert pilots to potential collisions between aircraft. It is not formally an implementation of ADS-B, as it is optimized for the specific needs of light aircraft, not for long-range communication or ATC interaction. FLARM is a portmanteau of "flight" and "alarm". The installation of all physical FLARM devices is approved as a "Standard Change", and the PowerFLARM Core specifically as a "Minor Change", by the European Union Aviation Safety Agency; in addition, the Minor Change approves the PowerFLARM Core for use under IFR and at night.
Operation
FLARM obtains its position and altitude readings from an internal GPS and a barometric sensor and then broadcasts this together with forecast data about the future 3D flight track. At the same time, its receiver listens for other FLARM devices within range and processes the information received. Advanced motion prediction algorithms predict potential conflicts for up to 50 other aircraft and alert the pilot using visual and aural warnings. FLARM has an integrated obstacle collision warning system together with an obstacle database. The database includes both point and segmented obstacles, such as split power lines and cableways.
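As an illustration of the kind of motion-prediction geometry involved (this is not FLARM's proprietary algorithm, whose details are not public), the sketch below estimates the time and distance of closest approach for two aircraft under the simplifying assumption of straight-line flight at constant velocity; all names and thresholds are illustrative.

```python
import numpy as np

def closest_approach(p_own, v_own, p_other, v_other):
    """Time (s) and distance (m) of closest approach of two aircraft,
    assuming both continue on straight 3D tracks at constant velocity."""
    dp = np.asarray(p_other, float) - np.asarray(p_own, float)   # relative position (m)
    dv = np.asarray(v_other, float) - np.asarray(v_own, float)   # relative velocity (m/s)
    dv2 = np.dot(dv, dv)
    t_cpa = 0.0 if dv2 == 0 else max(0.0, -np.dot(dp, dv) / dv2)  # only look forward in time
    d_cpa = np.linalg.norm(dp + dv * t_cpa)
    return t_cpa, d_cpa

# Illustrative check: two converging gliders; alert if the predicted miss
# distance is below 100 m within the next 30 seconds.
t, d = closest_approach([0, 0, 1000], [40, 0, 0], [2000, 50, 1000], [-40, 0, 0])
if t < 30 and d < 100:
    print(f"conflict in {t:.0f} s, predicted miss distance {d:.0f} m")
```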
Unlike conventional transponders, FLARM has low power consumption and is relatively inexpensive to purchase and install. Furthermore, conventional Airborne Collision Avoidance Systems (ACAS) are not effective in preventing light aircraft from colliding with each other as light aircraft can be close to each other without danger of collision. ACAS would issue continuous and unnecessary warnings about all aircraft in the vicinity, whereas FLARM only issues selective warnings about collision risks.
Appraisal and attention
FLARM Technology and the inventors of FLARM have won several awards. The Swiss Federal Office of Civil Aviation (FOCA) also published in December 2010: "The rapid distribution of such systems only a few months after their introduction was not accomplished through regulatory measures, but rather on a voluntary basis and as a result of the wish on the part of the involved players to contribute towards the reduction of collision risk. The FOCA recommends that glider tow planes and helicopters that operate in lower airspace should also use collision warning systems."
In addition, FLARM is mandatory on gliders in several countries including France, and the Soaring Society of America (SSA) strongly recommends FLARM in lieu of ADS-B Out.
Versions
Versions are sold for use in light aircraft, helicopters, and gliders. Newer PowerFLARM models extend the FLARM range to over 10 km. They also have an integrated ADS-B and transponder Mode-C/S receiver, making it possible to also avoid mid-air collisions with large aircraft.
Newer devices can also act as authorized flight recorders by producing files in the IGC format defined by the FAI Gliding Commission. All FLARM devices can be connected to FLARM displays or compatible avionics (EFIS, moving map, etc.) to give visual and audio warnings and also to show the intruder's position on the map. Licensed manufacturers produce integrated FLARM devices in different avionics products. FLARM devices can issue spoken warnings similar to TCAS.
Hardware
A typical FLARM system consists of the following hardware components:
Central microcontroller for data processing, e.g. Atmel AVR
ISM/SRD band transceiver, e.g. NRF905 (Europe: 868 MHz)
GPS module, e.g. U-blox LEA-4S
Barometric pressure sensor, which measures cabin pressure to estimate the altitude (not used for collision avoidance, which uses GPS altitude)
Traffic and collision warning display, e.g. light emitting diodes or LC display and a buzzer (not installed in case of special remote units)
(micro)SD card slot for configuration, logging and firmware updates
RS-232 interface for external displays and firmware updates
Protocol and criticism
The FLARM radio protocol has always been encrypted, which the manufacturer justifies as necessary to ensure the integrity of the system and for privacy and security reasons. Version 4, used in 2008, and Version 6, used in 2015, were reverse engineered despite the encryption. However, FLARM changes the protocol on a regular basis.
The decryption of the FLARM radio protocol might be illegal, especially in EU countries. It has, however, been argued that traffic advisory data may legally be decrypted by third parties solely for the purpose of nearby traffic advisory and collision avoidance, which is the intended use of the system.
The radio protocol has been criticised for its proprietary encryption, including a petition encouraging a change to an open protocol. It has been argued that encryption increases processing time and contradicts the goal to increase aviation safety due to a closed monopoly market, because an open protocol could enable third-party manufacturers to develop compatible devices, spreading the use of interoperable traffic advisory systems.
FLARM Technology has disputed these claims, as published on the petition page, and published a white paper explaining the design of the system. They offer the technology to third parties, which requires the implementation of an OEM circuit board in compatible devices. Radio protocol specifications and encryption keys are not shared with third-party manufacturers.
While the FLARM serial data protocol is public, the prediction engine of FLARM is proprietary and was patented by Onera (France). It is licensed to manufacturers by FLARM Technology in Switzerland. The patent expired in 2019.
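The public serial (data port) side of FLARM devices uses NMEA 0183-style sentences; the checksum framing of such sentences is standard and can be verified as below. The sentence shown is purely illustrative and its field values are invented, not taken from the official specification.

```python
def nmea_checksum(sentence: str) -> str:
    """NMEA 0183 checksum: XOR of all characters between '$' and '*', as two hex digits."""
    body = sentence.lstrip("$").split("*")[0]
    csum = 0
    for ch in body:
        csum ^= ord(ch)
    return f"{csum:02X}"

def is_valid(sentence: str) -> bool:
    """Check that the hex digits after '*' match the checksum of the body."""
    if "*" not in sentence:
        return False
    return sentence.rsplit("*", 1)[1].strip().upper() == nmea_checksum(sentence)

msg = "$PFLAU,2,1,2,1,0,-30,2,-45,230*"   # illustrative sentence, made-up field values
print(is_valid(msg + nmea_checksum(msg)))  # True
```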
Company
FLARM was founded by Urs Rothacher and Andrea Schlapbach in 2003, who were later joined by Urban Mäder in 2004. First sales were made in early 2004. Currently there are nearly 30,000 FLARM-compatible devices (around half of them produced by FLARM Technology, the rest by licensed manufacturers who have now overtaken FLARM in current sales) in use mainly in Switzerland, Germany, France, Austria, Italy, UK, the Benelux, Scandinavia, Hungary, Israel, Australia, New Zealand and South Africa.
FLARM's technology is also used in ground-based vehicles including vehicles used in surface-mining. These products are designed and produced by the Swiss company SAFEmine, now owned by Swedish Hexagon Group.
References
External links
System Design and Compatibility
Overview of collision avoidance systems
Comparison of Mode A/C, S, FLARM and ADS-B
Enhancing the efficacy of Flarm radio communication protocol by computer simulation (English, German)
Interview with Gerhard Wesp, Development Manager Avionics at Flarm Technology GmbH, March 2014
Avionics
Aircraft collision avoidance systems
Gliding technology
Warning systems | FLARM | [
"Technology",
"Engineering"
] | 1,327 | [
"Safety engineering",
"Avionics",
"Measuring instruments",
"Aircraft collision avoidance systems",
"Aircraft instruments",
"Warning systems"
] |
21,341,034 | https://en.wikipedia.org/wiki/Ekeland%27s%20variational%20principle | In mathematical analysis, Ekeland's variational principle, discovered by Ivar Ekeland, is a theorem that asserts that there exist nearly optimal solutions to some optimization problems.
Ekeland's principle can be used when the lower level set of a minimization problem is not compact, so that the Bolzano–Weierstrass theorem cannot be applied. The principle relies on the completeness of the metric space.
The principle has been shown to be equivalent to completeness of metric spaces.
In proof theory, it is equivalent to $\Pi^1_1\text{-CA}_0$ over $\mathrm{RCA}_0$, i.e. relatively strong.
It also leads to a quick proof of the Caristi fixed point theorem.
History
Ekeland was associated with the Paris Dauphine University when he proposed this theorem.
Ekeland's variational principle
Preliminary definitions
A function $f : X \to \mathbb{R} \cup \{-\infty, +\infty\}$ valued in the extended real numbers is said to be bounded below if $\inf_{x \in X} f(x) > -\infty$, and it is called proper if it has a non-empty effective domain, which by definition is the set $\operatorname{dom} f = \{x \in X : f(x) \neq +\infty\}$,
and it is never equal to $-\infty$. In other words, a map is proper if it is valued in $\mathbb{R} \cup \{+\infty\}$ and not identically $+\infty$.
The map $f$ is proper and bounded below if and only if $-\infty < \inf_{x \in X} f(x) < +\infty$, or equivalently, if and only if $\inf_{x \in X} f(x)$ is a real number.
A function $f$ is lower semicontinuous at a given $x_0 \in X$ if for every real $y < f(x_0)$ there exists a neighborhood $U$ of $x_0$ such that $f(u) > y$ for all $u \in U$.
A function is called lower semicontinuous if it is lower semicontinuous at every point of $X$, which happens if and only if $\{x \in X : f(x) > y\}$ is an open set for every $y \in \mathbb{R}$, or equivalently, if and only if all of its lower level sets $\{x \in X : f(x) \leq y\}$ are closed.
Statement of the theorem
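One standard formulation of the principle, in the $\varepsilon$–$\lambda$ form that the remarks below assume, is the following:

```latex
\textbf{Theorem (Ekeland).} Let $(X,d)$ be a complete metric space and let
$f : X \to \mathbb{R} \cup \{+\infty\}$ be a proper, lower semicontinuous function
that is bounded below. Let $\varepsilon > 0$ and $x_0 \in X$ satisfy
\[
  f(x_0) \;\le\; \varepsilon + \inf_{x \in X} f(x).
\]
Then for every $\lambda > 0$ there exists a point $v \in X$ such that
\[
  f(v) \le f(x_0), \qquad d(x_0, v) \le \lambda, \qquad
  f(x) > f(v) - \frac{\varepsilon}{\lambda}\, d(v, x)
  \quad \text{for all } x \in X \setminus \{v\}.
\]
```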
For example, if $f$, $\varepsilon$, and $x_0$ are as in the theorem's statement and if $x_0$ happens to be a global minimum point of $f$, then the point $v$ from the theorem's conclusion can be taken to be $v = x_0$.
Corollaries
The principle could be thought of as follows: for any point $x_0$ which nearly realizes the infimum, there exists another point $v$ which is at least as good as $x_0$ (that is, $f(v) \leq f(x_0)$), which is close to $x_0$, and such that the perturbed function $x \mapsto f(x) + \frac{\varepsilon}{\lambda}\, d(v, x)$ has a unique minimum at $v$.
A good compromise is to take $\lambda = \sqrt{\varepsilon}$ in the preceding result.
See also
References
Bibliography
Convex analysis
Theorems in functional analysis
Variational analysis
Variational principles | Ekeland's variational principle | [
"Mathematics"
] | 420 | [
"Mathematical principles",
"Theorems in mathematical analysis",
"Variational principles",
"Theorems in functional analysis"
] |
2,026,889 | https://en.wikipedia.org/wiki/Garbage%20disposal%20unit | A garbage disposal unit (also known as a waste disposal unit, food waste disposer (FWD), in-sink macerator, garbage disposer, or garburator) is a device, usually electrically powered, installed under a kitchen sink between the sink's drain and the trap. The device shreds food waste into pieces small enough to pass through plumbing.
History
The garbage disposal unit was invented in 1927 by John W. Hammes, an architect working in Racine, Wisconsin. He applied for a patent in 1933 that was issued in 1935. His InSinkErator company put his disposer on the market in 1940.
Hammes' claim is disputed, as General Electric introduced a garbage disposal unit in 1935, known as the Disposall.
In many cities in the United States in the 1930s and the 1940s, the municipal sewage system had regulations prohibiting placing food waste (garbage) into the system. InSinkErator spent considerable effort, and was highly successful in convincing many localities to rescind these prohibitions.
Many localities in the United States prohibited the use of disposers. For many years, garbage disposers were illegal in New York City because of a perceived threat of damage to the city's sewer system. After a 21-month study with the NYC Department of Environmental Protection, the ban was rescinded in 1997 by local law 1997/071, which amended section 24-518.1, NYC Administrative Code.
In 2008, the city of Raleigh, North Carolina attempted a ban on the replacement and installation of garbage disposers, which also extended to outlying towns sharing the city's municipal sewage system, but rescinded the ban one month later.
Adoption and bans
In the United States, 50% of homes had disposal units as of 2009, compared with only 6% in the United Kingdom and 3% in Canada.
In Britain, Worcestershire County Council and Herefordshire Council started to subsidize the purchase of garbage disposal units in 2005, in order to reduce the amount of waste going to landfill and the carbon footprint of garbage runs. However, the use of macerators was banned for non-household premises in Scotland in 2016 in non-rural areas where food waste collection is available, and banned in Northern Ireland in 2017. They are expected to be banned for businesses in England and Wales in 2023. The intention is to reduce water use.
Many other countries in Europe have banned or intend to ban macerators. The intention is to realise the resource value of food waste, and reduce sewer blockages.
Rationale
Food scraps range from 10% to 20% of household waste, and are a problematic component of municipal waste, creating public health, sanitation and environmental problems at each step, beginning with internal storage and followed by truck-based collection. When burned in waste-to-energy facilities, the high water content of food scraps means that heating and burning them consumes more energy than it generates; buried in landfills, food scraps decompose and generate methane gas, a greenhouse gas that contributes to climate change.
The premise behind the proper use of a disposer is to effectively regard food scraps as liquid (averaging 70% water, like human waste), and use existing infrastructure (underground sewers and wastewater treatment plants) for its management. Modern wastewater plants are effective at processing organic solids into fertilizer products (known as biosolids), with advanced anaerobic digestion facilities also capturing methane (biogas) for energy production.
Operation
A high-torque, insulated electric motor spins a circular turntable mounted horizontally above it. Induction motors rotate at 1,400–2,800 rpm and have a range of starting torques, depending on the method of starting used. The added weight and size of induction motors may be of concern, depending on the available installation space and construction of the sink bowl. Universal motors, also known as series-wound motors, rotate at higher speeds, have high starting torque, and are usually lighter, but are noisier than induction motors, partially due to the higher speeds and partially because the commutator brushes rub on the slotted commutator.
Inside the grinding chamber there is a rotating metal turntable onto which the food waste drops. Two swiveling and two fixed metal impellers mounted on top of the plate near the edge then fling the food waste against the grind ring repeatedly. Sharp cutting edges in the grind ring break down the waste until it is small enough to pass through openings in the ring. Sometimes the waste goes through a third stage where an undercutter disc further chops it, whereupon it is flushed down the drain.
Usually, there is a partial rubber closure, known as a splashguard, on the top of the disposal unit to prevent food waste from flying back up out of the grinding chamber. It may also be used to attenuate noise from the grinding chamber for quieter operation.
There are two main types of garbage disposers—continuous feed and batch feed. Continuous feed models are used by feeding in waste after being started and are more common. Batch feed units are used by placing waste inside the unit before being started. These types of units are started by placing a specially designed cover over the opening. Some covers manipulate a mechanical switch while others allow magnets in the cover to align with magnets in the unit. Small slits in the cover allow water to flow through. Batch feed models are considered safer, since the top of the disposal is covered during operation, preventing foreign objects from falling in.
Waste disposal units may jam, but can usually be cleared either by forcing the turntable round from above or by turning the motor using a hex-key wrench inserted into the motor shaft from below. Especially hard objects accidentally or deliberately introduced, such as metal cutlery, can damage the waste disposal unit and become damaged themselves, although recent advances, such as swivel impellers, have been made to minimize such damage.
Some higher-end units have an automatic reversing jam clearing feature. By using a slightly more complicated centrifugal starting switch, the split-phase motor rotates in the opposite direction from the previous run each time it is started. This can clear minor jams, but is claimed to be unnecessary by some manufacturers: Since the early 1960s, many disposal units have utilized swivel impellers which make reversing unnecessary.
Some other kinds of garbage disposal units are powered by water pressure, rather than electricity. Instead of the turntable and grind ring described above, this alternative design has a water-powered unit with an oscillating piston with blades attached to chop the waste into fine pieces. Because of this cutting action, they can handle fibrous waste. Water-powered units take longer than electric ones for a given amount of waste and need fairly high water pressure to function properly.
Environmental impact
Kitchen waste disposal units increase the load of organic matter that reaches the water treatment plant, which in turn increases the consumption of oxygen. Metcalf and Eddy quantified this impact as an additional load of biochemical oxygen demand per person per day where disposers are used. An Australian study that compared in-sink food processing to composting alternatives via a life-cycle assessment found that while the in-sink disposer performed well with respect to climate change, acidification, and energy usage, it did contribute to eutrophication and toxicity potentials.
This may result in higher costs for energy needed to supply oxygen in secondary operations. However, if the waste water treatment is finely controlled, the organic carbon in the food may help to keep the bacterial decomposition running, as carbon may be deficient in that process. This increased carbon serves as an inexpensive and continuous source of carbon necessary for biologic nutrient removal.
One result is larger amounts of solid residue from the waste-water treatment process. According to a study at the East Bay Municipal Utility District's wastewater treatment plant funded by the EPA, food waste produces three times the biogas as compared to municipal sewage sludge. The value of the biogas produced from anaerobic digestion of food waste appears to exceed the cost of processing the food waste and disposing of the residual biosolids (based on a LAX Airport proposal to divert 8,000 tons/year of bulk food waste).
In a study at the Hyperion sewage treatment plant in Los Angeles, disposer use showed minimal to no impact on the total biosolids byproduct from sewage treatment and similarly minimal impact on handling processes as the high volatile solids destruction (VSD) from food waste yield a minimum amount of solids in residue.
Power usage is typically 500–1,500 W, comparable to an electric iron, but only for a very short time, totaling approximately 3–4 kWh of electricity per household per year. Daily water usage varies, but is typically comparable to an additional toilet flush per person per day. One survey of these food processing units found a slight increase in household water use.
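As a rough sanity check of those figures (the daily run time below is an assumption, not a measured value), a short calculation:

```python
# Rough annual-energy estimate for a disposer; the daily run time is assumed.
power_w = 1000             # within the 500-1,500 W range quoted above
run_seconds_per_day = 30   # assumed: roughly half a minute of grinding per day
days_per_year = 365

kwh_per_year = power_w * run_seconds_per_day * days_per_year / 3_600_000
print(f"{kwh_per_year:.1f} kWh per year")  # ~3.0 kWh, consistent with the 3-4 kWh figure
```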
References
20th-century inventions
American inventions
Food waste
Home appliances
Products introduced in 1935
Waste treatment technology | Garbage disposal unit | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,862 | [
"Machines",
"Water treatment",
"Physical systems",
"Home appliances",
"Environmental engineering",
"Waste treatment technology"
] |
2,027,249 | https://en.wikipedia.org/wiki/Oval | An oval () is a closed curve in a plane which resembles the outline of an egg. The term is not very specific, but in some areas (projective geometry, technical drawing, etc.) it is given a more precise definition, which may include either one or two axes of symmetry of an ellipse. In common English, the term is used in a broader sense: any shape which reminds one of an egg. The three-dimensional version of an oval is called an ovoid.
Oval in geometry
The term oval when used to describe curves in geometry is not well-defined, except in the context of projective geometry. Many distinct curves are commonly called ovals or are said to have an "oval shape". Generally, to be called an oval, a plane curve should resemble the outline of an egg or an ellipse. In particular, these are common traits of ovals:
they are differentiable (smooth-looking), simple (not self-intersecting), convex, closed, plane curves;
their shape does not depart much from that of an ellipse, and
an oval would generally have an axis of symmetry, but this is not required.
Here are examples of ovals described elsewhere:
Cassini ovals
portions of some elliptic curves
Moss's egg
superellipse
Cartesian oval
stadium
An ovoid is the surface in 3-dimensional space generated by rotating an oval curve about one of its axes of symmetry.
The adjectives ovoidal and ovate mean having the characteristic of being an ovoid, and are often used as synonyms for "egg-shaped".
Projective geometry
In a projective plane a set $\Omega$ of points is called an oval, if:
Any line $l$ meets $\Omega$ in at most two points, and
For any point $P \in \Omega$ there exists exactly one tangent line $t$ through $P$, i.e., $t \cap \Omega = \{P\}$.
For finite planes (i.e. the set of points is finite) there is a more convenient characterization:
For a finite projective plane of order $n$ (i.e. any line contains $n + 1$ points) a set $\Omega$ of points is an oval if and only if $|\Omega| = n + 1$ and no three points are collinear (on a common line).
An ovoid in a projective space is a set $\Omega$ of points such that:
Any line intersects $\Omega$ in at most 2 points,
The tangents at a point cover a hyperplane (and nothing more), and
$\Omega$ contains no lines.
In the finite case, ovoids exist only for dimension 3. A convenient characterization is:
In a 3-dimensional finite projective space of order $n > 2$, any pointset $\Omega$ is an ovoid if and only if $|\Omega| = n^2 + 1$ and no three points are collinear.
Egg shape
The shape of an egg is approximated by the "long" half of a prolate spheroid, joined to a "short" half of a roughly spherical ellipsoid, or even a slightly oblate spheroid. These are joined at the equator and share a principal axis of rotational symmetry, as illustrated above. Although the term egg-shaped usually implies a lack of reflection symmetry across the equatorial plane, it may also refer to true prolate ellipsoids. It can also be used to describe the 2-dimensional figure that, if revolved around its major axis, produces the 3-dimensional surface.
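A minimal two-dimensional sketch of that construction, assuming the profile is a half-ellipse with a longer semi-axis joined at the equator to a half with a shorter semi-axis (the specific dimensions below are arbitrary):

```python
import math

def egg_profile(a_long, a_short, b, n=360):
    """Closed egg-like curve: a half-ellipse with semi-axis a_long (y >= 0) joined at
    the equator y = 0 to a half with semi-axis a_short (y < 0); b is the shared
    equatorial semi-axis, so the two halves meet with a common tangent."""
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        a = a_long if math.sin(t) >= 0 else a_short
        pts.append((b * math.cos(t), a * math.sin(t)))
    return pts

profile = egg_profile(a_long=3.0, a_short=2.0, b=2.0)  # list of (x, y) points
```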
Technical drawing
In technical drawing, an oval is a figure that is constructed from two pairs of arcs, with two different radii (see image on the right). The arcs are joined at a point in which lines tangential to both joining arcs lie on the same line, thus making the joint smooth. Any point of an oval belongs to an arc with a constant radius (shorter or longer), but in an ellipse, the radius is continuously changing.
In common speech
In common speech, "oval" means a shape rather like an egg or an ellipse, which may be two-dimensional or three-dimensional. It also often refers to a figure that resembles two semicircles joined by a rectangle, like a cricket infield, speed skating rink or an athletics track. However, this is most correctly called a stadium.
The term "ellipse" is often used interchangeably with oval, but it has a more specific mathematical meaning. The term "oblong" is also used to mean oval, though in geometry an oblong refers to rectangle with unequal adjacent sides, not a curved figure.
See also
Ellipse
Ellipsoidal dome
Stadium (geometry)
Vesica piscis – a pointed oval
Symbolism of domes
Notes
Plane curves
Elementary shapes | Oval | [
"Mathematics"
] | 927 | [
"Planes (geometry)",
"Euclidean plane geometry",
"Plane curves"
] |
2,028,895 | https://en.wikipedia.org/wiki/Weber%20number | The Weber number (We) is a dimensionless number in fluid mechanics that is often useful in analysing fluid flows where there is an interface between two different fluids, especially for multiphase flows with strongly curved surfaces. It is named after Moritz Weber (1871–1951). It can be thought of as a measure of the relative importance of the fluid's inertia compared to its surface tension. The quantity is useful in analyzing thin film flows and the formation of droplets and bubbles.
Mathematical expression
The Weber number may be written as:
$$\mathrm{We} = \frac{\rho\, v^2\, l}{\sigma} = \frac{\rho\, v^2}{\sigma / l},$$
where
$\rho$ is the density of the fluid (kg/m³).
$v$ is its velocity (m/s).
$l$ is its characteristic length, typically the droplet diameter (m).
$\sigma$ is the surface tension (N/m).
$\rho v^2$ is the inertial or dynamic pressure scale.
$\sigma / l$ is the Laplace pressure scale.
The above is the force perspective used to define the Weber number. It can also be defined from an energy perspective, as the ratio of the kinetic energy of a droplet on impact to its surface energy,
$$\mathrm{We} \sim \frac{E_{\text{kin}}}{E_{\text{surf}}},$$
where, for a spherical droplet of diameter $l$,
$$E_{\text{kin}} = \frac{1}{2} m v^2 = \frac{\pi}{12}\,\rho\, l^3 v^2$$
and
$$E_{\text{surf}} = \pi\, l^2\, \sigma,$$
so that $E_{\text{kin}}/E_{\text{surf}} = \rho v^2 l/(12\sigma)$ is proportional to the Weber number.
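A direct numerical illustration of the force-based definition; the droplet parameters below are plausible example values for a millimetre-scale water drop, not data from any particular experiment:

```python
def weber_number(rho, v, length, sigma):
    """We = inertia / surface tension = rho * v**2 * length / sigma."""
    return rho * v**2 * length / sigma

# Example: a 2 mm water droplet moving at 5 m/s through air.
rho = 1000.0    # water density, kg/m^3
v = 5.0         # droplet speed, m/s
d = 2e-3        # droplet diameter (characteristic length), m
sigma = 0.072   # surface tension of water against air, N/m

print(f"We = {weber_number(rho, v, d, sigma):.0f}")  # ~694: inertia dominates surface tension
```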
Appearance in the Navier-Stokes equations
The Weber number appears in the incompressible Navier-Stokes equations through a free surface boundary condition.
For a fluid of constant density $\rho$ and dynamic viscosity $\mu$, at the free surface interface there is a balance between the normal stress and the curvature force associated with the surface tension:
$$\mathbf{n} \cdot \mathbf{T} \cdot \mathbf{n} = \sigma\, (\nabla \cdot \mathbf{n}),$$
where $\mathbf{n}$ is the unit normal vector to the surface, $\mathbf{T}$ is the Cauchy stress tensor, and $\nabla\cdot$ is the divergence operator. The Cauchy stress tensor for an incompressible fluid takes the form:
$$\mathbf{T} = -p\,\mathbf{I} + \mu\left(\nabla \mathbf{u} + (\nabla \mathbf{u})^{\mathsf{T}}\right).$$
Introducing the dynamic pressure scale $\rho U^2$ and, assuming high Reynolds number flow, it is possible to nondimensionalize the variables with the scalings:
$$\mathbf{x}^* = \frac{\mathbf{x}}{l}, \qquad \mathbf{u}^* = \frac{\mathbf{u}}{U}, \qquad t^* = \frac{U}{l}\, t, \qquad p^* = \frac{p}{\rho U^2},$$
where $l$ is a characteristic length and $U$ a characteristic velocity.
The free surface boundary condition in the nondimensionalized variables then involves three dimensionless groups: the Froude number $\mathrm{Fr}$, the Reynolds number $\mathrm{Re}$, and the Weber number $\mathrm{We}$. The influence of the Weber number can then be quantified relative to gravitational and viscous forces.
Applications
One application of the Weber number is the study of heat pipes. When the momentum flux in the vapor core of the heat pipe is high, there is a possibility that the shear stress exerted on the liquid in the wick can be large enough to entrain droplets into the vapor flow. The Weber number is the dimensionless parameter that determines the onset of this phenomenon, called the entrainment limit (Weber number greater than or equal to 1). In this case the Weber number is defined as the ratio of the momentum flux in the vapor layer to the surface tension force restraining the liquid, where the characteristic length is the surface pore size.
References
Further reading
Weast, R. Lide, D. Astle, M. Beyer, W. (1989–1990). CRC Handbook of Chemistry and Physics. 70th ed. Boca Raton, Florida: CRC Press, Inc.. F-373,376.
Fluid dynamics
Dimensionless numbers of fluid mechanics | Weber number | [
"Chemistry",
"Engineering"
] | 601 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
2,029,635 | https://en.wikipedia.org/wiki/Variational%20perturbation%20theory | In mathematics, variational perturbation theory (VPT) is a mathematical method to convert divergent power series in a small expansion parameter, say
,
into a convergent series in powers
,
where is a critical exponent (the so-called index of "approach to scaling" introduced by Franz Wegner). This is possible with the help of variational parameters, which are determined by optimization order by order in . The partial sums are converted to convergent partial sums by a method developed in 1992.
Most perturbation expansions in quantum mechanics are divergent for any small coupling strength $g$. They can be made convergent by VPT (for details see the first textbook cited below). The convergence is exponentially fast.
After its success in quantum mechanics, VPT has been developed further to become an important mathematical tool in quantum field theory with its anomalous dimensions. Applications focus on the theory of critical phenomena. It has led to the most accurate predictions of critical exponents.
References
External links
Kleinert H., Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3. Auflage, World Scientific (Singapore, 2004) (readable online here) (see Chapter 5)
Kleinert H. and Verena Schulte-Frohlinde, Critical Properties of φ4-Theories, World Scientific (Singapur, 2001); Paperback (readable online here) (see Chapter 19)
Asymptotic analysis
Perturbation theory | Variational perturbation theory | [
"Physics",
"Mathematics"
] | 313 | [
"Mathematical analysis",
"Asymptotic analysis",
"Quantum mechanics",
"Perturbation theory"
] |
2,029,677 | https://en.wikipedia.org/wiki/Critical%20exponent | Critical exponents describe the behavior of physical quantities near continuous phase transitions. It is believed, though not proven, that they are universal, i.e. they do not depend on the details of the physical system, but only on some of its general features. For instance, for ferromagnetic systems at thermal equilibrium, the critical exponents depend only on:
the dimension of the system
the range of the interaction
the spin dimension
These properties of critical exponents are supported by experimental data. Analytical results can be theoretically achieved in mean field theory in high dimensions or when exact solutions are known such as the two-dimensional Ising model. The theoretical treatment in generic dimensions requires the renormalization group approach or, for systems at thermal equilibrium, the conformal bootstrap techniques.
Phase transitions and critical exponents appear in many physical systems such as water at the critical point, in magnetic systems, in superconductivity, in percolation and in turbulent fluids.
The critical dimension above which mean field exponents are valid varies with the systems and can even be infinite.
Definition
The control parameter that drives phase transitions is often temperature but can also be other macroscopic variables like pressure or an external magnetic field. For simplicity, the following discussion works in terms of temperature; the translation to another control parameter is straightforward. The temperature at which the transition occurs is called the critical temperature $T_c$. We want to describe the behavior of a physical quantity $f$ in terms of a power law around the critical temperature; we introduce the reduced temperature
$$\tau := \frac{T - T_c}{T_c},$$
which is zero at the phase transition, and define the critical exponent $k$ as:
$$k := \lim_{\tau \to 0} \frac{\ln|f(\tau)|}{\ln|\tau|}.$$
This results in the power law we were looking for:
$$f(\tau) \propto |\tau|^{k}, \quad \tau \to 0.$$
It is important to remember that this represents the asymptotic behavior of the function $f(\tau)$ as $\tau \to 0$.
More generally one might expect
$$f(\tau) = A\,|\tau|^{k}\left(1 + b\,|\tau|^{k_1} + \cdots\right).$$
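Numerically, an exponent of this kind is usually extracted as the slope of a log–log fit close to the transition. A minimal sketch on synthetic data (the exponent 0.5, the amplitude, and the noise level are arbitrary choices for the demonstration):

```python
import numpy as np

# Synthetic data obeying f(tau) = A * tau**k with k = 0.5, plus a little noise.
rng = np.random.default_rng(0)
tau = np.logspace(-4, -1, 30)                              # reduced temperatures
f = 2.0 * tau**0.5 * (1 + 0.01 * rng.standard_normal(tau.size))

# The critical exponent is the slope of log f versus log tau as tau -> 0.
k_est, log_amplitude = np.polyfit(np.log(tau), np.log(f), 1)
print(f"estimated exponent k = {k_est:.3f}")               # close to 0.5
```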
Main exponents
Let us assume that the system at thermal equilibrium has two different phases characterized by an order parameter $\Psi$, which vanishes at and above the critical temperature $T_c$.
Consider the disordered phase ($\tau > 0$), ordered phase ($\tau < 0$) and critical temperature ($\tau = 0$) phases separately. Following the standard convention, the critical exponents related to the ordered phase are primed. It is also another standard convention to use superscript/subscript + (−) for the disordered (ordered) state. In general spontaneous symmetry breaking occurs in the ordered phase.
The following entries are evaluated at $J = 0$ (except for the $\delta$ entry).
The critical exponents can be derived from the specific free energy $f(J, T)$ as a function of the source $J$ and temperature $T$. The correlation length can be derived from the free energy regarded as a functional of a spatially varying source. In many cases, the critical exponents defined in the ordered and disordered phases are identical.
When the upper critical dimension is four, these relations are accurate close to the critical point in two- and three-dimensional systems. In four dimensions, however, the power laws are modified by logarithmic factors. These do not appear in dimensions arbitrarily close to but not exactly four, which can be used as a way around this problem.
Mean field critical exponents of Ising-like systems
The classical Landau theory (also known as mean field theory) values of the critical exponents for a scalar field (of which the Ising model is the prototypical example) are given by
$$\alpha = 0, \qquad \beta = \tfrac{1}{2}, \qquad \gamma = 1, \qquad \delta = 3.$$
If we add derivative terms turning it into a mean field Ginzburg–Landau theory, we get in addition
$$\nu = \tfrac{1}{2}, \qquad \eta = 0.$$
One of the major discoveries in the study of critical phenomena is that mean field theory of critical points is only correct when the space dimension of the system is higher than a certain dimension called the upper critical dimension which excludes the physical dimensions 1, 2 or 3 in most cases. The problem with mean field theory is that the critical exponents do not depend on the space dimension. This leads to a quantitative discrepancy below the critical dimensions, where the true critical exponents differ from the mean field values. It can even lead to a qualitative discrepancy at low space dimension, where a critical point in fact can no longer exist, even though mean field theory still predicts there is one. This is the case for the Ising model in dimension 1 where there is no phase transition. The space dimension where mean field theory becomes qualitatively incorrect is called the lower critical dimension.
Experimental values
The most accurately measured value of $\alpha$ is −0.0127(3) for the phase transition of superfluid helium (the so-called lambda transition). The value was measured on a space shuttle to minimize pressure differences in the sample. This value is in a significant disagreement with the most precise theoretical determinations coming from high temperature expansion techniques, Monte Carlo methods and the conformal bootstrap.
Theoretical predictions
Critical exponents can be evaluated via Monte Carlo methods of lattice models. The accuracy of this first principle method depends on the available computational resources, which determine the ability to go to the infinite volume limit and to reduce statistical errors. Other techniques rely on theoretical understanding of critical fluctuations. The most widely applicable technique is the renormalization group. The conformal bootstrap is a more recently developed technique, which has achieved unsurpassed accuracy for the Ising critical exponents.
Scaling functions
In light of the critical scalings, we can reexpress all thermodynamic quantities in terms of dimensionless quantities. Close enough to the critical point, everything can be reexpressed in terms of certain ratios of the powers of the reduced quantities. These are the scaling functions.
The origin of scaling functions can be seen from the renormalization group. The critical point is an infrared fixed point. In a sufficiently small neighborhood of the critical point, we may linearize the action of the renormalization group. This basically means that rescaling the system by a factor of $b$ will be equivalent to rescaling operators and source fields by a factor of $b^{\Delta}$ for some exponent $\Delta$. So, we may reparameterize all quantities in terms of rescaled scale independent quantities.
Scaling relations
It was believed for a long time that the critical exponents were the same above and below the critical temperature, e.g. $\gamma = \gamma'$ or $\nu = \nu'$. It has now been shown that this is not necessarily true: when a continuous symmetry is explicitly broken down to a discrete symmetry by irrelevant (in the renormalization group sense) anisotropies, then the exponents above and below the transition need not be identical.
Critical exponents are denoted by Greek letters. They fall into universality classes and obey the scaling and hyperscaling relations
$$2 - \alpha = 2\beta + \gamma, \qquad \gamma = \beta\,(\delta - 1), \qquad \gamma = \nu\,(2 - \eta), \qquad \nu d = 2 - \alpha,$$
where $d$ is the spatial dimension (the last relation is the hyperscaling relation).
These equations imply that there are only two independent exponents, e.g., $\nu$ and $\eta$. All this follows from the theory of the renormalization group.
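As a concrete check, the exactly known exponents of the two-dimensional Ising model ($\alpha = 0$, $\beta = 1/8$, $\gamma = 7/4$, $\delta = 15$, $\nu = 1$, $\eta = 1/4$) satisfy all four relations with $d = 2$:

```latex
\alpha + 2\beta + \gamma = 0 + \tfrac{1}{4} + \tfrac{7}{4} = 2, \qquad
\gamma = \beta(\delta - 1) = \tfrac{1}{8}\cdot 14 = \tfrac{7}{4}, \qquad
\gamma = \nu(2 - \eta) = 1\cdot\tfrac{7}{4} = \tfrac{7}{4}, \qquad
\nu d = 2 = 2 - \alpha .
```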
Percolation theory
Phase transitions and critical exponents also appear in percolation processes where the concentration of "occupied" sites or links of a lattice are the control parameter of the phase transition (compared to temperature in classical phase transitions in physics). One of the simplest examples is Bernoulli percolation in a two dimensional square lattice. Sites are randomly occupied with probability $p$. A cluster is defined as a collection of nearest neighbouring occupied sites. For small values of $p$ the occupied sites form only small local clusters. At the percolation threshold $p_c$ (also called critical probability) a spanning cluster that extends across opposite sites of the system is formed, and we have a second-order phase transition that is characterized by universal critical exponents. For percolation the universality class is different from the Ising universality class. For example, the correlation length critical exponent is $\nu = 4/3$ for 2D Bernoulli percolation compared to $\nu = 1$ for the 2D Ising model. For a more detailed overview, see Percolation critical exponents.
Anisotropy
There are some anisotropic systems where the correlation length is direction dependent.
Directed percolation can be also regarded as anisotropic percolation. In this case the critical exponents are different and the upper critical dimension is 5.
Multicritical points
More complex behavior may occur at multicritical points, at the border or on intersections of critical manifolds. They can be reached by tuning the value of two or more parameters, such as temperature and pressure.
Static versus dynamic properties
The above examples exclusively refer to the static properties of a critical system. However dynamic properties of the system may become critical, too. Especially, the characteristic time $\tau_{\text{char}}$ of a system diverges as $\tau_{\text{char}} \propto \xi^{z}$, where $\xi$ is the correlation length, with a dynamical exponent $z$. Moreover, the large static universality classes of equivalent models with identical static critical exponents decompose into smaller dynamical universality classes, if one demands that also the dynamical exponents are identical.
The equilibrium critical exponents can be computed from conformal field theory.
See also anomalous scaling dimension.
Self-organized criticality
Critical exponents also exist for self organized criticality for dissipative systems.
See also
Universality class for the numerical values of critical exponents
Complex networks
Random graphs
Rushbrooke inequality
Widom scaling
Conformal bootstrap
Ising critical exponents
Percolation critical exponents
Network science
Percolation theory
Graph theory
External links and literature
Hagen Kleinert and Verena Schulte-Frohlinde, Critical Properties of φ4-Theories, World Scientific (Singapore, 2001); Paperback
Toda, M., Kubo, R., N. Saito, Statistical Physics I, Springer-Verlag (Berlin, 1983); Hardcover
J.M.Yeomans, Statistical Mechanics of Phase Transitions, Oxford Clarendon Press
H. E. Stanley Introduction to Phase Transitions and Critical Phenomena, Oxford University Press, 1971
Universality classes from Sklogwiki
Zinn-Justin, Jean (2002). Quantum field theory and critical phenomena, Oxford, Clarendon Press (2002),
Zinn-Justin, J. (2010). "Critical phenomena: field theoretical approach" Scholarpedia article Scholarpedia, 5(5):8346.
D. Poland, S. Rychkov, A. Vichi, "The Conformal Bootstrap: Theory, Numerical Techniques, and Applications", Rev.Mod.Phys. 91 (2019) 015002, http://arxiv.org/abs/1805.04405
F. Leonard and B. Delamotte Critical exponents can be different on the two sides of a transition: A generic mechanism, Phys. Rev. Lett. 115, 200601 (2015), https://arxiv.org/abs/1508.07852,
References
Phase transitions
Critical phenomena
Renormalization group | Critical exponent | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 2,150 | [
"Physical phenomena",
"Phase transitions",
"Critical phenomena",
"Phases of matter",
"Renormalization group",
"Condensed matter physics",
"Statistical mechanics",
"Matter",
"Dynamical systems"
] |
2,029,743 | https://en.wikipedia.org/wiki/Minimum%20information%20about%20a%20microarray%20experiment | Minimum information about a microarray experiment (MIAME) is a standard created by the FGED Society for reporting microarray experiments.
MIAME is intended to specify all the information necessary to interpret the results of the experiment unambiguously and to potentially reproduce the experiment. While the standard defines the content required for compliant reports, it does not specify the format in which this data should be presented. MIAME describes the minimum information required to ensure that microarray data can be easily interpreted and that results derived from its analysis can be independently verified. There are a number of file formats used to represent this data, as well as both public and subscription-based repositories for such experiments. Additionally, software exists to aid the preparation of MIAME-compliant reports.
MIAME revolves around six key components: raw data, normalized data, sample annotations, experimental design, array annotations, and data protocols.
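A toy illustration of what a completeness check over those six components could look like; this is not an official MIAME validation tool, and the field names are invented for the example:

```python
REQUIRED_COMPONENTS = {
    "raw_data", "normalized_data", "sample_annotations",
    "experimental_design", "array_annotations", "data_protocols",
}

def missing_miame_components(submission: dict) -> set:
    """Return the MIAME components that are absent or empty in a submission record."""
    return {key for key in REQUIRED_COMPONENTS if not submission.get(key)}

submission = {"raw_data": "CEL files", "normalized_data": "matrix.tsv",
              "sample_annotations": "samples.csv"}
print(missing_miame_components(submission))
# -> {'experimental_design', 'array_annotations', 'data_protocols'} (set order may vary)
```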
References
Biochemistry detection methods
Genetics techniques
Microarrays | Minimum information about a microarray experiment | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 200 | [
"Biochemistry methods",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Genetic engineering",
"Chemical tests",
"Bioinformatics",
"Molecular biology techniques",
"Biochemistry detection methods"
] |
2,030,045 | https://en.wikipedia.org/wiki/Specific%20energy | Specific energy or massic energy is energy per unit mass. It is also sometimes called gravimetric energy density, which is not to be confused with energy density, which is defined as energy per unit volume. It is used to quantify, for example, stored heat and other thermodynamic properties of substances such as specific internal energy, specific enthalpy, specific Gibbs free energy, and specific Helmholtz free energy. It may also be used for the kinetic energy or potential energy of a body. Specific energy is an intensive property, whereas energy and mass are extensive properties.
The SI unit for specific energy is the joule per kilogram (J/kg). Other units still in use worldwide in some contexts are the kilocalorie per gram (Cal/g or kcal/g), mostly in food-related topics, and watt-hours per kilogram (W⋅h/kg) in the field of batteries. In some countries the Imperial unit BTU per pound (Btu/lb) is used in some engineering and applied technical fields.
Specific energy has the same units as specific strength, which is related to the maximum specific energy of rotation an object can have without flying apart due to centrifugal force.
The concept of specific energy is related to but distinct from the notion of molar energy in chemistry, that is energy per mole of a substance, which uses units such as joules per mole, or the older but still widely used calories per mole.
Table of some non-SI conversions
The following table shows the factors for conversion to J/kg of some non-SI units:
For a table giving the specific energy of many different fuels as well as batteries, see the article Energy density.
Ionising radiation
For ionising radiation, the gray is the SI unit of specific energy absorbed by matter known as absorbed dose, from which the SI unit the sievert is calculated for the stochastic health effect on tissues, known as dose equivalent. The International Committee for Weights and Measures states: "In order to avoid any risk of confusion between the absorbed dose D and the dose equivalent H, the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed dose D and the name sievert instead of joules per kilogram for the unit of dose equivalent H."
Energy density of food
Energy density is the amount of energy per mass or volume of food. The energy density of a food can be determined from the label by dividing the energy per serving (usually in kilojoules or food calories) by the serving size (usually in grams, milliliters or fluid ounces). An energy unit commonly used in nutritional contexts within non-metric countries (e.g. the United States) is the "dietary calorie," "food calorie," or "Calorie" with a capital "C" and is commonly abbreviated as "Cal." A nutritional Calorie is equivalent to a thousand chemical or thermodynamic calories (abbreviated "cal" with a lower case "c") or one kilocalorie (kcal). Because food energy is commonly measured in Calories, the energy density of food is commonly called "caloric density". In the metric system, the energy unit commonly used on food labels is the kilojoule (kJ) or megajoule (MJ). Energy density is thus commonly expressed in metric units of cal/g, kcal/g, J/g, kJ/g, MJ/kg, cal/mL, kcal/mL, J/mL, or kJ/mL.
Energy density measures the energy released when the food is metabolized by a healthy organism when it ingests the food (see food energy for calculation). In aerobic environments, this typically requires oxygen as an input and generates waste products such as carbon dioxide and water. Besides alcohol, the only sources of food energy are carbohydrates, fats and proteins, which make up ninety percent of the dry weight of food. Therefore, water content is the most important factor in computing energy density. In general, proteins have lower energy densities (≈16 kJ/g) than carbohydrates (≈17 kJ/g), whereas fats provide much higher energy densities (≈38 kJ/g), more than twice as much energy. Fats contain more carbon-carbon and carbon-hydrogen bonds than carbohydrates or proteins, yielding higher energy density. Foods that derive most of their energy from fat have a much higher energy density than those that derive most of their energy from carbohydrates or proteins, even if the water content is the same. Nutrients with a lower absorption, such as fiber or sugar alcohols, lower the energy density of foods as well. A moderate energy density would be 1.6 to 3 calories per gram (7–13 kJ/g); salmon, lean meat, and bread would fall in this category. Foods with high energy density have more than three calories per gram (>13 kJ/g) and include crackers, cheese, chocolate, nuts, and fried foods like potato or tortilla chips.
Fuel
Energy density is sometimes more useful than specific energy for comparing fuels. For example, liquid hydrogen fuel has a higher specific energy (energy per unit mass) than gasoline does, but a much lower volumetric energy density.
Astrodynamics
Specific mechanical energy, rather than simply energy, is often used in astrodynamics, because gravity changes the kinetic and potential specific energies of a vehicle in ways that are independent of the mass of the vehicle, consistent with the conservation of energy in a Newtonian gravitational system.
The specific energy of an object such as a meteoroid falling on the Earth from outside the Earth's gravitational well is at least one half the square of the escape velocity of 11.2 km/s. This comes to 63 MJ/kg (15 kcal/g, or 15 tonnes TNT equivalent per tonne). Comets have even more energy, typically moving with respect to the Sun, when in our vicinity, at about the square root of two times the speed of the Earth. This comes to 42 km/s, or a specific energy of 882 MJ/kg. The speed relative to the Earth may be more or less, depending on direction. Since the speed of the Earth around the Sun is about 30 km/s, a comet's speed relative to the Earth can range from 12 to 72 km/s, the latter corresponding to 2592 MJ/kg. If a comet with this speed fell to the Earth it would gain another 63 MJ/kg, yielding a total of 2655 MJ/kg with a speed of 72.9 km/s. Since the equator is moving at about 0.5 km/s, the impact speed has an upper limit of 73.4 km/s, giving an upper limit for the specific energy of a comet hitting the Earth of about 2690 MJ/kg.
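The numbers in that paragraph follow directly from the specific kinetic energy $\tfrac{1}{2}v^2$; a quick check:

```python
def specific_kinetic_energy_mj_per_kg(speed_km_s):
    """0.5 * v**2, converted to MJ/kg for a speed given in km/s."""
    v = speed_km_s * 1000.0      # m/s
    return 0.5 * v**2 / 1e6      # J/kg -> MJ/kg

for label, v in [("escape velocity", 11.2), ("typical comet", 42.0), ("head-on comet", 72.0)]:
    print(f"{label:16s} {v:5.1f} km/s -> {specific_kinetic_energy_mj_per_kg(v):6.0f} MJ/kg")
# escape velocity   11.2 km/s ->     63 MJ/kg
# typical comet     42.0 km/s ->    882 MJ/kg
# head-on comet     72.0 km/s ->   2592 MJ/kg
```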
If the Hale-Bopp comet (50 km in diameter) had hit Earth, it would have vaporized the oceans and sterilized the surface of Earth.
Miscellaneous
Kinetic energy per unit mass: $\tfrac{1}{2}v^2$, where $v$ is the speed (giving J/kg when $v$ is in m/s). See also kinetic energy per unit mass of projectiles.
Potential energy with respect to gravity, close to Earth, per unit mass: $gh$, where $g$ is the acceleration due to gravity (standardized as ≈9.8 m/s²) and $h$ is the height above the reference level (giving J/kg when $g$ is in m/s² and $h$ is in m).
Heat: energies per unit mass are specific heat capacity times temperature difference, and specific melting heat, and specific heat of vaporization
See also
Energy density, which has tables of specific energies of devices and materials
Power-to-weight ratio
Heat of combustion
Specific orbital energy
Orders of magnitude (energy)
References
Energy (physics)
Thermodynamic properties
E | Specific energy | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,685 | [
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Mass",
"Intensive quantities",
"Energy (physics)",
"Thermodynamics",
"Wikipedia categories named after physical quantities",
"Mass-specific quantities",
"Matter"
] |
2,031,353 | https://en.wikipedia.org/wiki/Schanuel%27s%20conjecture | In mathematics, specifically transcendental number theory, Schanuel's conjecture is a conjecture about the transcendence degree of certain field extensions of the rational numbers , which would establish the transcendence of a large class of numbers, for which this is currently unknown. It is due to Stephen Schanuel and was published by Serge Lang in 1966.
Statement
Schanuel's conjecture can be given as follows:
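In the form usually quoted in the literature:

```latex
\textbf{Schanuel's conjecture.} Given any $n$ complex numbers $z_1, \ldots, z_n$
that are linearly independent over the rational numbers $\mathbb{Q}$, the field
extension
\[
  \mathbb{Q}\bigl(z_1, \ldots, z_n,\; e^{z_1}, \ldots, e^{z_n}\bigr)
\]
has transcendence degree at least $n$ over $\mathbb{Q}$.
```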
Consequences
Schanuel's conjecture, if proven, would generalize most known results in transcendental number theory and establish a large class of numbers transcendental. Special cases of Schanuel's conjecture include:
Lindemann-Weierstrass theorem
Considering Schanuel's conjecture for $n = 1$ only gives that for a nonzero complex number $z$, at least one of the numbers $z$ and $e^{z}$ must be transcendental. This was proved by Ferdinand von Lindemann in 1882.
If the numbers $z_1, \ldots, z_n$ are taken to be all algebraic and linearly independent over $\mathbb{Q}$, then the result is that $e^{z_1}, \ldots, e^{z_n}$ are transcendental and algebraically independent over $\mathbb{Q}$. The first proof for this more general result was given by Carl Weierstrass in 1885.
This so-called Lindemann–Weierstrass theorem implies the transcendence of the numbers $e$ and $\pi$. It also follows that for algebraic numbers $\alpha$ not equal to 0 or 1, both $e^{\alpha}$ and $\ln \alpha$ are transcendental. It further gives the transcendence of the trigonometric functions at nonzero algebraic values.
Baker's theorem
Another special case was proved by Alan Baker in 1966: If complex numbers $\lambda_1, \ldots, \lambda_n$ are chosen to be linearly independent over the rational numbers such that $e^{\lambda_1}, \ldots, e^{\lambda_n}$ are algebraic, then $\lambda_1, \ldots, \lambda_n$ are also linearly independent over the algebraic numbers.
Schanuel's conjecture would strengthen this result, implying that $\lambda_1, \ldots, \lambda_n$ would also be algebraically independent over $\mathbb{Q}$ (and equivalently over the algebraic numbers $\overline{\mathbb{Q}}$).
Gelfond-Schneider theorem
In 1934 it was proved by Aleksander Gelfond and Theodor Schneider that if $a$ and $b$ are two algebraic complex numbers with $a \neq 0, 1$ and $b$ irrational, then $a^{b}$ is transcendental.
This establishes the transcendence of numbers like Hilbert's constant $2^{\sqrt{2}}$ and Gelfond's constant $e^{\pi}$.
The Gelfond–Schneider theorem follows from Schanuel's conjecture by setting $z_1 = \ln a$ and $z_2 = b \ln a$. It also would follow from the strengthened version of Baker's theorem above.
Four exponentials conjecture
The currently unproven four exponentials conjecture would also follow from Schanuel's conjecture: If $x_1, x_2$ and $y_1, y_2$ are two pairs of complex numbers, with each pair being linearly independent over the rational numbers, then at least one of the following four numbers is transcendental:
$$e^{x_1 y_1}, \quad e^{x_1 y_2}, \quad e^{x_2 y_1}, \quad e^{x_2 y_2}.$$
The four exponentials conjecture would imply that for any irrational number $t$, at least one of the numbers $2^{t}$ and $3^{t}$ is transcendental. It also implies that if $t$ is a positive real number such that both $2^{t}$ and $3^{t}$ are integers, then $t$ itself must be an integer. The related six exponentials theorem has been proven.
Other consequences
Schanuel's conjecture, if proved, would also establish many nontrivial combinations of $e$, $\pi$, algebraic numbers and elementary functions to be transcendental, such as $e + \pi$ and $e\pi$ (whose transcendence is currently unknown).
In particular it would follow that $e$ and $\pi$ are algebraically independent simply by setting $z_1 = 1$ and $z_2 = \pi i$.
Euler's identity states that $e^{i\pi} + 1 = 0$. If Schanuel's conjecture is true then this is, in some precise sense involving exponential rings, the only non-trivial relation between $e$, $\pi$, and $i$ over the complex numbers.
Related conjectures and results
The converse Schanuel conjecture is the following statement:
Suppose F is a countable field with characteristic 0, and e : F → F is a homomorphism from the additive group (F,+) to the multiplicative group (F,·) whose kernel is cyclic. Suppose further that for any n elements x1,...,xn of F which are linearly independent over ℚ, the extension field ℚ(x1,...,xn,e(x1),...,e(xn)) has transcendence degree at least n over ℚ. Then there exists a field homomorphism h : F → ℂ such that h(e(x)) = exp(h(x)) for all x in F.
A version of Schanuel's conjecture for formal power series, also by Schanuel, was proven by James Ax in 1971. It states:
Given any n formal power series f1,...,fn in tℂ[[t]] which are linearly independent over ℚ, then the field extension ℂ(t,f1,...,fn,exp(f1),...,exp(fn)) has transcendence degree at least n over ℂ(t).
Although ostensibly a problem in number theory, Schanuel's conjecture has implications in model theory as well. Angus Macintyre and Alex Wilkie, for example, proved that the theory of the real field with exponentiation, exp, is decidable provided Schanuel's conjecture is true. In fact, to prove this result, they only needed the real version of the conjecture, which is as follows:
Suppose x1,...,xn are real numbers and the transcendence degree of the field ℚ(x1,...,xn, exp(x1),...,exp(xn)) is strictly less than n, then there are integers m1,...,mn, not all zero, such that m1x1 +...+ mnxn = 0.
This would be a positive solution to Tarski's exponential function problem.
A related conjecture called the uniform real Schanuel's conjecture essentially says the same but puts a bound on the integers mi. The uniform real version of the conjecture is equivalent to the standard real version. Macintyre and Wilkie showed that a consequence of Schanuel's conjecture, which they dubbed the Weak Schanuel's conjecture, was equivalent to the decidability of the real exponential field. This conjecture states that there is a computable upper bound on the norm of non-singular solutions to systems of exponential polynomials; this is, non-obviously, a consequence of Schanuel's conjecture for the reals.
It is also known that Schanuel's conjecture would be a consequence of conjectural results in the theory of motives. In this setting Grothendieck's period conjecture for an abelian variety A states that the transcendence degree of its period matrix is the same as the dimension of the associated Mumford–Tate group, and what is known by work of Pierre Deligne is that the dimension is an upper bound for the transcendence degree. Bertolin has shown how a generalised period conjecture includes Schanuel's conjecture.
Zilber's pseudo-exponentiation
While a proof of Schanuel's conjecture seems a long way off, connections with model theory have prompted a surge of research on the conjecture.
In 2004, Boris Zilber systematically constructed exponential fields Kexp that are algebraically closed and of characteristic zero, and such that one of these fields exists for each uncountable cardinality. He axiomatised these fields and, using Hrushovski's construction and techniques inspired by work of Shelah on categoricity in infinitary logics, proved that this theory of "pseudo-exponentiation" has a unique model in each uncountable cardinal. Schanuel's conjecture is part of this axiomatisation, and so the natural conjecture that the unique model of cardinality continuum is actually isomorphic to the complex exponential field implies Schanuel's conjecture. In fact, Zilber showed that this conjecture holds if and only if both Schanuel's conjecture and the Exponential-Algebraic Closedness conjecture hold. As this construction can also give models with counterexamples of Schanuel's conjecture, this method cannot prove Schanuel's conjecture.
See also
Four exponentials conjecture
Algebraic independence
List of unsolved problems in mathematics
Existential Closedness conjecture
Zilber-Pink conjecture
Pregeometry
References
Sources
External links
Conjectures
Unsolved problems in number theory
Exponentials
Transcendental numbers | Schanuel's conjecture | [
"Mathematics"
] | 1,672 | [
"Unsolved problems in mathematics",
"Unsolved problems in number theory",
"E (mathematical constant)",
"Conjectures",
"Exponentials",
"Mathematical problems",
"Number theory"
] |
2,031,383 | https://en.wikipedia.org/wiki/Affinity%20maturation | In immunology, affinity maturation is the process by which TFH cell-activated B cells produce antibodies with increased affinity for antigen during the course of an immune response. With repeated exposures to the same antigen, a host will produce antibodies of successively greater affinities. A secondary response can elicit antibodies with several fold greater affinity than in a primary response. Affinity maturation primarily occurs on membrane immunoglobulin of germinal center B cells and as a direct result of somatic hypermutation (SHM) and selection by TFH cells.
In vivo
The process is thought to involve two interrelated processes, occurring in the germinal centers of the secondary lymphoid organs:
Somatic hypermutation: Mutations in the variable, antigen-binding coding sequences (known as complementarity-determining regions (CDR)) of the immunoglobulin genes. The mutation rate is up to 1,000,000 times higher than in cell lines outside the lymphoid system. Although the exact mechanism of the SHM is still not known, a major role for the activation-induced (cytidine) deaminase has been discussed. The increased mutation rate results in 1-2 mutations per CDR and, hence, per cell generation. The mutations alter the binding specificity and binding affinities of the resultant antibodies.
Clonal selection: B cells that have undergone SHM must compete for limiting growth resources, including the availability of antigen and paracrine signals from TFH cells. The follicular dendritic cells (FDCs) of the germinal centers present antigen to the B cells, and the B cell progeny with the highest affinities for antigen, having gained a competitive advantage, are favored for positive selection leading to their survival. Positive selection is based on steady cross-talk between TFH cells and their cognate antigen presenting GC B cell. Because a limited number of TFH cells reside in the germinal center, only highly competitive B cells stably conjugate with TFH cells and thus receive T cell-dependent survival signals. B cell progeny that have undergone SHM, but bind antigen with lower affinity will be out-competed, and be deleted. Over several rounds of selection, the resultant secreted antibodies produced will have effectively increased affinities for antigen.
In vitro
Like the natural prototype, in vitro affinity maturation is based on the principles of mutation and selection. It has successfully been used to optimize antibodies, antibody fragments, or other peptide molecules such as antibody mimetics. Random mutations inside the CDRs are introduced using radiation, chemical mutagens, or error-prone PCR. In addition, genetic diversity can be increased by chain shuffling. Two or three rounds of mutation and selection using display methods such as phage display usually result in antibody fragments with affinities in the low nanomolar range.
References
Immunology | Affinity maturation | [
"Biology"
] | 615 | [
"Immunology"
] |
2,821,615 | https://en.wikipedia.org/wiki/Electrochemical%20gradient | An electrochemical gradient is a gradient of electrochemical potential, usually for an ion that can move across a membrane. The gradient consists of two parts:
The chemical gradient, or difference in solute concentration across a membrane.
The electrical gradient, or difference in charge across a membrane.
If there are unequal concentrations of an ion across a permeable membrane, the ion will move across the membrane from the area of higher concentration to the area of lower concentration through simple diffusion. Ions also carry an electric charge that forms an electric potential across a membrane. If there is an unequal distribution of charges across the membrane, then the difference in electric potential generates a force that drives ion diffusion until the charges are balanced on both sides of the membrane.
Electrochemical gradients are essential to the operation of batteries and other electrochemical cells, photosynthesis and cellular respiration, and certain other biological processes.
Overview
Electrochemical energy is one of the many interchangeable forms of potential energy through which energy may be conserved. It appears in electroanalytical chemistry and has industrial applications such as batteries and fuel cells. In biology, electrochemical gradients allow cells to control the direction ions move across membranes. In mitochondria and chloroplasts, proton gradients generate a chemiosmotic potential used to synthesize ATP, and the sodium-potassium gradient helps neural synapses quickly transmit information.
An electrochemical gradient has two components: a differential concentration of electric charge across a membrane and a differential concentration of chemical species across that same membrane. In the former effect, the concentrated charge attracts charges of the opposite sign; in the latter, the concentrated species tends to diffuse across the membrane to equalize concentrations. The combination of these two phenomena determines the thermodynamically preferred direction for an ion's movement across the membrane.
The combined effect can be quantified as a gradient in the thermodynamic electrochemical potential:
\bar{\mu} = \mu + zF\phi
with
\mu, the chemical potential of the ion species
z, the charge per ion of the species
F, the Faraday constant (the electrochemical potential is implicitly measured on a per-mole basis)
\phi, the local electric potential.
Sometimes, the term "electrochemical potential" is abused to describe the electric potential generated by an ionic concentration gradient; that is, \phi.
An electrochemical gradient is analogous to the water pressure across a hydroelectric dam. Routes unblocked by the membrane (e.g. membrane transport protein or electrodes) correspond to turbines that convert the water's potential energy to other forms of physical or chemical energy, and the ions that pass through the membrane correspond to water traveling into the lower river. Conversely, energy can be used to pump water up into the lake above the dam, and chemical energy can be used to create electrochemical gradients.
Chemistry
The term typically applies in electrochemistry, when electrical energy in the form of an applied voltage is used to modulate the thermodynamic favorability of a chemical reaction. In a battery, an electrochemical potential arising from the movement of ions balances the reaction energy of the electrodes. The maximum voltage that a battery reaction can produce is sometimes called the standard electrochemical potential of that reaction.
Biological context
The generation of a transmembrane electrical potential through ion movement across a cell membrane drives biological processes like nerve conduction, muscle contraction, hormone secretion, and sensation. By convention, physiological voltages are measured relative to the extracellular region; a typical animal cell has an internal electrical potential of (−70)–(−50) mV.
An electrochemical gradient is essential to mitochondrial oxidative phosphorylation. The final step of cellular respiration is the electron transport chain, composed of four complexes embedded in the inner mitochondrial membrane. Complexes I, III, and IV pump protons from the matrix to the intermembrane space (IMS); for every electron pair entering the chain, ten protons translocate into the IMS. The result is a substantial electric potential across the inner membrane. The energy resulting from the flux of protons back into the matrix is used by ATP synthase to combine inorganic phosphate and ADP.
Similar to the electron transport chain, the light-dependent reactions of photosynthesis pump protons into the thylakoid lumen of chloroplasts to drive the synthesis of ATP. The proton gradient can be generated through either noncyclic or cyclic photophosphorylation. Of the proteins that participate in noncyclic photophosphorylation, photosystem II (PSII), plastoquinone, and the cytochrome b6f complex directly contribute to generating the proton gradient. For each four photons absorbed by PSII, eight protons are pumped into the lumen.
Several other transporters and ion channels play a role in generating a proton electrochemical gradient. One is TPK3, a potassium channel that is activated by Ca2+ and conducts K+ from the thylakoid lumen to the stroma, which helps establish the electric field. On the other hand, the electro-neutral K+ efflux antiporter (KEA3) transports K+ into the thylakoid lumen and H+ into the stroma, which helps establish the pH gradient.
Ion gradients
Since the ions are charged, they cannot pass through cellular membranes via simple diffusion. Two different mechanisms can transport the ions across the membrane: active or passive transport.
An example of active transport of ions is the Na+-K+-ATPase (NKA). NKA is powered by the hydrolysis of ATP into ADP and an inorganic phosphate; for every molecule of ATP hydrolyzed, three Na+ are transported outside and two K+ are transported inside the cell. This makes the inside of the cell more negative than the outside and, more specifically, contributes to a negative membrane potential Vmembrane.
An example of passive transport is ion fluxes through Na+, K+, Ca2+, and Cl− channels. Unlike active transport, passive transport is powered by the arithmetic sum of osmosis (a concentration gradient) and an electric field (the transmembrane potential). Formally, the molar Gibbs free energy change associated with successful transport into the cell is
\Delta G = RT \ln\frac{c_\mathrm{in}}{c_\mathrm{out}} + zFV_\mathrm{membrane},
where R represents the gas constant, T represents absolute temperature, z is the charge per ion, and F represents the Faraday constant.
In the example of Na+, both terms tend to support transport: the negative electric potential inside the cell attracts the positive ion, and since Na+ is concentrated outside the cell, osmosis supports diffusion through the Na+ channel into the cell. In the case of K+, the effect of osmosis is reversed: although external ions are attracted by the negative intracellular potential, entropy seeks to diffuse the ions already concentrated inside the cell. The converse phenomenon (osmosis supports transport, electric potential opposes it) can be achieved for Na+ in cells with abnormal transmembrane potentials: at the Na+ reversal potential the influx halts, and at higher potentials it becomes an efflux.
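As a worked illustration of this formula, the transmembrane potential at which the net driving force on Na+ vanishes (its reversal potential) can be found by setting \Delta G = 0. The concentrations used below, roughly 145 mM outside and 12 mM inside the cell, are representative textbook assumptions rather than measurements from any particular cell type:
\Delta G = RT \ln\frac{c_\mathrm{in}}{c_\mathrm{out}} + zFV_\mathrm{membrane} = 0
\quad\Rightarrow\quad
V_\mathrm{rev} = \frac{RT}{zF}\ln\frac{c_\mathrm{out}}{c_\mathrm{in}} \approx (26.7\ \mathrm{mV})\,\ln\frac{145}{12} \approx +66\ \mathrm{mV}
for z = +1 at T = 310 K. This is consistent with the statement above: only when the membrane potential is raised past this value does the Na+ influx turn into an efflux.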
Proton gradients
Proton gradients in particular are important in many types of cells as a form of energy storage. The gradient is usually used to drive ATP synthase, flagellar rotation, or metabolite transport. This section will focus on three processes that help establish proton gradients in their respective cells: bacteriorhodopsin, noncyclic photophosphorylation, and oxidative phosphorylation.
Bacteriorhodopsin
The way bacteriorhodopsin generates a proton gradient in Archaea is through a proton pump. The proton pump relies on proton carriers to drive protons from the side of the membrane with a low H+ concentration to the side of the membrane with a high H+ concentration. In bacteriorhodopsin, the proton pump is activated by absorption of photons of 568nm wavelength, which leads to isomerization of the Schiff base (SB) in retinal forming the K state. This moves SB away from Asp85 and Asp212, causing H+ transfer from the SB to Asp85 forming the M1 state. The protein then shifts to the M2 state by separating Glu204 from Glu194 which releases a proton from Glu204 into the external medium. The SB is reprotonated by Asp96 which forms the N state. It is important that the second proton comes from Asp96 since its deprotonated state is unstable and rapidly reprotonated with a proton from the cytosol. The protonation of Asp85 and Asp96 causes re-isomerization of the SB, forming the O state. Finally, bacteriorhodopsin returns to its resting state when Asp85 releases its proton to Glu204.
Photophosphorylation
PSII also relies on light to drive the formation of proton gradients in chloroplasts, however, PSII utilizes vectorial redox chemistry to achieve this goal. Rather than physically transporting protons through the protein, reactions requiring the binding of protons will occur on the extracellular side while reactions requiring the release of protons will occur on the intracellular side. Absorption of photons of 680nm wavelength is used to excite two electrons in P680 to a higher energy level. These higher energy electrons are transferred to protein-bound plastoquinone (PQA) and then to unbound plastoquinone (PQB). This reduces plastoquinone (PQ) to plastoquinol (PQH2) which is released from PSII after gaining two protons from the stroma. The electrons in P680 are replenished by oxidizing water through the oxygen-evolving complex (OEC). This results in release of O2 and H+ into the lumen, for a total reaction of
4 hν + 2 H2O + 2 PQ + 4 H+ (stroma) → O2 + 2 PQH2 + 4 H+ (lumen)
After being released from PSII, PQH2 travels to the cytochrome b6f complex, which then transfers two electrons from PQH2 to plastocyanin in two separate reactions. The process that occurs is similar to the Q-cycle in Complex III of the electron transport chain. In the first reaction, PQH2 binds to the complex on the lumen side and one electron is transferred to the iron-sulfur center which then transfers it to cytochrome f which then transfers it to plastocyanin. The second electron is transferred to heme bL which then transfers it to heme bH which then transfers it to PQ. In the second reaction, a second PQH2 gets oxidized, adding an electron to another plastocyanin and PQ. Both reactions together transfer four protons into the lumen.
Oxidative phosphorylation
In the electron transport chain, complex I (CI) catalyzes the reduction of ubiquinone (UQ) to ubiquinol (UQH2) by the transfer of two electrons from reduced nicotinamide adenine dinucleotide (NADH), which translocates four protons from the mitochondrial matrix to the IMS:
NADH + UQ + 5 H+ (matrix) → NAD+ + UQH2 + 4 H+ (IMS)
Complex III (CIII) catalyzes the Q-cycle. The first step involves the transfer of two electrons from the UQH2 reduced by CI to two molecules of oxidized cytochrome c at the Qo site. In the second step, two more electrons reduce UQ to UQH2 at the Qi site. The total reaction is:
UQH2 + 2 cytochrome c (oxidized) + 2 H+ (matrix) → UQ + 2 cytochrome c (reduced) + 4 H+ (IMS)
Complex IV (CIV) catalyzes the transfer of two electrons from the cytochrome c reduced by CIII to one half of a full oxygen. Utilizing one full oxygen in oxidative phosphorylation requires the transfer of four electrons. The oxygen will then consume four protons from the matrix to form water, while another four protons are pumped into the IMS, to give the total reaction
4 cytochrome c (reduced) + O2 + 8 H+ (matrix) → 4 cytochrome c (oxidized) + 2 H2O + 4 H+ (IMS)
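Summing these contributions reproduces the bookkeeping stated earlier, that ten protons reach the IMS for every electron pair entering the chain from NADH: complex I contributes four, complex III four per UQH2, and complex IV two per electron pair (four per full O2):
\underbrace{4}_{\mathrm{CI}} + \underbrace{4}_{\mathrm{CIII}} + \underbrace{2}_{\mathrm{CIV}} = 10\ \mathrm{H^{+}}\ \text{per NADH (per electron pair)}.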
See also
Concentration cell
Transmembrane potential difference
Action potential
Cell potential
Electrodiffusion
Galvanic cell
Electrochemical cell
Proton exchange membrane
Reversal potential
References
Stephen T. Abedon, "Important words and concepts from Chapter 8, Campbell & Reece, 2002 (1/14/2005)", for Biology 113 at the Ohio State University
Cellular respiration
Electrochemical concepts
Electrophysiology
Membrane biology
Physical quantities
Thermodynamics | Electrochemical gradient | [
"Physics",
"Chemistry",
"Mathematics",
"Biology"
] | 2,591 | [
"Physical phenomena",
"Cellular respiration",
"Physical quantities",
"Membrane biology",
"Quantity",
"Electrochemical concepts",
"Electrochemistry",
"Thermodynamics",
"Metabolism",
"Molecular biology",
"Biochemistry",
"Physical properties",
"Dynamical systems"
] |
2,822,022 | https://en.wikipedia.org/wiki/Sigma%20Gamma%20Tau | Sigma Gamma Tau () is the American honor society in aerospace engineering. The society formed from the merger of Tau Omega and Gamma Alpha Rho in 1953. It has chartered more than fifty chapters in the United States.
History
Sigma Gamma Tau was founded on the campus of Purdue University in West Lafayette, Indiana, on February 28, 1953. The new society was formed by the merger of two existing aeronautical honor societies, Tau Omega, and Gamma Alpha Rho. Tau Omega was established in 1927 at the University of Oklahoma. Gamma Alpha Rho was founded in 1945 at Rensselaer Polytechnic Institute.
Sigma Gamma Tau was created to recognize academic and professional achievement in aeronautical engineering and to foster ethics and professional practices within the field. With the merger of the two societies, it started with fourteen chapters, representing 1,900 initiates. The society was incorporated in Oklahoma. It held its first national convention in 1953 at Purdue University. Conventions are held every three years, often in conjunction with the American Institute of Aeronautics and Astronautics's Science and Technology Forum. Officers are elected at the convention, including the national president, national first vice-president, national second vice-president, and the national secretary-treasurer.
Sigma Gamma Tau joined the Association of College Honor Societies but has since left that organization. By June 1966, it had nineteen chapters with 2,300 members. In 1991, it had chartered 46 chapters with 12,000 members. Sigma Gamma Tau has chartered 54 collegiate chapters and has initiated more than 30,000 members. Its activities include a mentorship program, test reviews, tutoring, and social events.
Sigma Gamma Tau's national headquarters is located at the Aerospace Engineering Department of Wichita State University.
Symbols
The name of Sigma Gamma Tau was selected by combining the Greek letter Sigma, indicating sum, with Gamma and Tau, taken from the initial letters of the predecessor organizations, Gamma Alpha Rho and Tau Omega.
The society's insignia is a key with the Greek letters ΣΓΤ. Its colors are red and white. Its publications are Contact and Mach.
Membership
Sigma Gamma Tau's collegiate chapters elect annually to membership those students, alumni, and professionals who, by conscientious attention to their studies or professional duties, uphold this high standard for the betterment of their profession.
Chapters
Sigma Gamma Tau has chartered 54 chapters and has 40 active chapters as of 2024.
Notable members
Robert J. Cenker, astronaut and engineer
Roger B. Chaffee, astronaut and pilot
Julie Wertz Chen, aerospace engineer
Robert Crippen, astronaut
Brian Gyetko, professional tennis player
Fred Haise, astronaut
Gregory J. Harbaugh, astronaut and engineer
Monroe W. Hatch Jr., United States Air Force general
Edgar Mitchell, astronaut and lunar explorer
Steven R. Nagel, test pilot, astronaut, and engineer
David Scott, astronaut and lunar explorer
Diana Trujillo, aerospace engineer
See also
Honor society
Professional fraternities and sororities
References
External links
Student societies in the United States
Engineering honor societies
Student organizations established in 1953
1953 establishments in Indiana
Former members of Association of College Honor Societies
Aviation organizations based in the United States | Sigma Gamma Tau | [
"Engineering"
] | 629 | [
"Engineering societies",
"Engineering honor societies"
] |
2,823,145 | https://en.wikipedia.org/wiki/Virtually%20fibered%20conjecture | In the mathematical subfield of 3-manifolds, the virtually fibered conjecture, formulated by American mathematician William Thurston, states that every closed, irreducible, atoroidal 3-manifold with infinite fundamental group has a finite cover which is a surface bundle over the circle.
A 3-manifold which has such a finite cover is said to virtually fiber. If M is a Seifert fiber space, then M virtually fibers if and only if the rational Euler number of the Seifert fibration or the (orbifold) Euler characteristic of the base space is zero.
The hypotheses of the conjecture are satisfied by hyperbolic 3-manifolds. In fact, given that the geometrization conjecture is now settled, the only case needed to be proven for the virtually fibered conjecture is that of hyperbolic 3-manifolds.
The original interest in the virtually fibered conjecture (as well as its weaker cousins, such as the virtually Haken conjecture) stemmed from the fact that any of these conjectures, combined with Thurston's hyperbolization theorem, would imply the geometrization conjecture. However, in practice all known attacks on the "virtual" conjecture take geometrization as a hypothesis, and rely on the geometric and group-theoretic properties of hyperbolic 3-manifolds.
The virtually fibered conjecture was not actually conjectured by Thurston. Rather, he posed it as a question, writing only that "[t]his dubious-sounding question seems to have a definite chance for a positive answer".
The conjecture was finally settled in the affirmative in a series of papers from 2009 to 2012. In a posting on the arXiv on 25 Aug 2009, Daniel Wise implied (by referring to a then-unpublished longer manuscript) that he had proven the conjecture for the case where the 3-manifold is closed, hyperbolic, and Haken. This was followed by a survey article in Electronic Research Announcements in Mathematical Sciences. Several other articles have followed, including the aforementioned longer manuscript by Wise. In March 2012, during a conference at Institut Henri Poincaré in Paris, Ian Agol announced he could prove the virtually Haken conjecture for closed hyperbolic 3-manifolds. Taken together with Daniel Wise's results, this implies the virtually fibered conjecture for all closed hyperbolic 3-manifolds.
See also
Virtually Haken conjecture
Surface subgroup conjecture
Ehrenpreis conjecture
Notes
References
D. Gabai, On 3-manifold finitely covered by surface bundles, Low Dimensional Topology and Kleinian Groups (ed: D.B.A. Epstein), London Mathematical Society Lecture Note Series vol 112 (1986), p. 145-155.
External links
3-manifolds
Conjectures | Virtually fibered conjecture | [
"Mathematics"
] | 564 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Conjectures"
] |
2,823,319 | https://en.wikipedia.org/wiki/HNN%20extension | In mathematics, the HNN extension is an important construction of combinatorial group theory.
Introduced in a 1949 paper Embedding Theorems for Groups by Graham Higman, Bernhard Neumann, and Hanna Neumann, it embeds a given group G into another group G' , in such a way that two given isomorphic subgroups of G are conjugate (through a given isomorphism) in G' .
Construction
Let G be a group with presentation G = ⟨S | R⟩, and let α : H → K be an isomorphism between two subgroups H and K of G. Let t be a new symbol not in S, and define
G∗α = ⟨S, t | R, tht−1 = α(h), ∀h ∈ H⟩.
The group G∗α is called the HNN extension of G relative to α. The original group G is called the base group for the construction, while the subgroups H and K are the associated subgroups. The new generator t is called the stable letter.
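A standard concrete illustration of the construction is the family of Baumslag–Solitar groups, obtained by taking the base group to be infinite cyclic (written below in LaTeX notation):
\mathrm{BS}(m,n) = \langle a, t \mid t a^{m} t^{-1} = a^{n} \rangle,
the HNN extension of G = \langle a \rangle \cong \mathbb{Z} relative to the isomorphism \alpha \colon \langle a^{m} \rangle \to \langle a^{n} \rangle with \alpha(a^{m}) = a^{n}. For m = n = 1 this is simply \mathbb{Z} \times \mathbb{Z}.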
Key properties
Since the presentation for contains all the generators and relations from the presentation for G, there is a natural homomorphism, induced by the identification of generators, which takes G to . Higman, Neumann, and Neumann proved that this morphism is injective, that is, an embedding of G into . A consequence is that two isomorphic subgroups of a given group are always conjugate in some overgroup; the desire to show this was the original motivation for the construction.
Britton's Lemma
A key property of HNN-extensions is a normal form theorem known as Britton's Lemma. Let G∗α be as above and let w be the following product in G∗α:
w = g0 tε1 g1 tε2 ⋯ tεn gn,  with each gi ∈ G and each εi = ±1.
Then Britton's Lemma can be stated as follows:
Britton's Lemma. If w = 1 in G∗α then
either n = 0 and g0 = 1 in G
or n > 0 and for some i ∈ {1, ..., n−1} one of the following holds:
εi = 1, εi+1 = −1, gi ∈ H,
εi = −1, εi+1 = 1, gi ∈ K.
In contrapositive terms, Britton's Lemma takes the following form:
Britton's Lemma (alternate form). If w is such that
either n = 0 and g0 ≠ 1 in G,
or n > 0 and the product w does not contain substrings of the form tht−1, where h ∈ H, nor of the form t−1kt, where k ∈ K,
then w ≠ 1 in G∗α.
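As a small worked application of the lemma, consider the Baumslag–Solitar group BS(1,2) described above, i.e. the HNN extension of G = ⟨a⟩ with associated subgroups H = ⟨a⟩ and K = ⟨a2⟩ and stable letter t satisfying tat−1 = a2. Take the word
w = t−1 a t a−1,
so that n = 2, ε1 = −1, ε2 = 1 and g1 = a. A pinch would require g1 ∈ K = ⟨a2⟩, which fails, so Britton's Lemma gives w ≠ 1 in BS(1,2); in other words, t−1at ≠ a. The same argument with a−1 replaced by any power a−k shows that t−1at lies outside ⟨a⟩, even though (t−1at)2 = t−1a2t = a.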
Consequences of Britton's Lemma
Most basic properties of HNN-extensions follow from Britton's Lemma. These consequences include the following facts:
The natural homomorphism from G to G∗α is injective, so that we can think of G∗α as containing G as a subgroup.
Every element of finite order in G∗α is conjugate to an element of G.
Every finite subgroup of G∗α is conjugate to a finite subgroup of G.
If G contains an element g such that no non-zero power of g is contained in either H or K, then G∗α contains a subgroup isomorphic to a free group of rank two.
Applications and generalizations
Applied to algebraic topology, the HNN extension constructs the fundamental group of a topological space X that has been 'glued back' on itself by a mapping f : X → X (see e.g. Surface bundle over the circle). Thus, HNN extensions describe the fundamental group of a self-glued space in the same way that free products with amalgamation do for two spaces X and Y glued along a connected common subspace, as in the Seifert-van Kampen theorem. The HNN extension is a natural analogue of the amalgamated free product, and comes up in determining the fundamental group of a union when the intersection is not connected. These two constructions allow the description of the fundamental group of any reasonable geometric gluing. This is generalized into the Bass–Serre theory of groups acting on trees, constructing fundamental groups of graphs of groups.
HNN-extensions play a key role in Higman's proof of the Higman embedding theorem which states that every finitely generated recursively presented group can be homomorphically embedded in a finitely presented group. Most modern proofs of the Novikov–Boone theorem about the existence of a finitely presented group with algorithmically undecidable word problem also substantially use HNN-extensions.
The idea of HNN extension has been extended to other parts of abstract algebra, including Lie algebra theory.
See also
Group extension
References
Group theory
Combinatorics on words | HNN extension | [
"Mathematics"
] | 899 | [
"Group theory",
"Fields of abstract algebra",
"Combinatorics on words",
"Combinatorics"
] |
2,823,375 | https://en.wikipedia.org/wiki/Lacida | The Lacida, also called LCD, was a Polish rotor cipher machine. It was designed and produced before World War II by Poland's Cipher Bureau for prospective wartime use by Polish military higher commands. Lacida was also known as Crypto Machine during a TNMOC Virtual Talk.
History
The machine's name derived from the surname initials of Gwido Langer, Maksymilian Ciężki and Ludomir Danilewicz and / or his younger brother, Leonard Danilewicz. It was built in Warsaw, to the Cipher Bureau's specifications, by the AVA Radio Company.
In anticipation of war, before the September 1939 invasion of Poland, two LCDs were sent to France. From spring 1941, an LCD was used by the Polish Team Z at the Polish-, Spanish- and French-manned Cadix radio-intelligence and decryption center at Uzès, near France's Mediterranean coast.
Prior to the machine's production, it had never been subjected to rigorous decryption attempts. Now it was decided to remedy this oversight. In early July 1941, Polish cryptologists Marian Rejewski and Henryk Zygalski received LCD-enciphered messages that had earlier been transmitted to the staff of the Polish Commander-in-Chief, based in London. Breaking the first message, given to the two cryptologists on July 3, took them only a couple of hours. Further tests yielded similar results. Colonel Langer suspended the use of LCD at Cadix.
In 1974, Rejewski explained that the LCD had two serious flaws. It lacked a commutator ("plugboard"), which was one of the strong points of the German military Enigma machine. The LCD's other weakness involved the reflector and wiring. These shortcomings did not imply that the LCD, somewhat larger than the Enigma and more complicated (e.g., it had a switch for resetting to deciphering), was easy to solve. Indeed, the likelihood of its being broken by the German E-Dienst was judged slight; theoretically, however, the possibility did exist.
See also
Biuro Szyfrów (Cipher Bureau)
References
Further reading
Władysław Kozaczuk, Enigma: How the German Machine Cipher Was Broken, and How It Was Read by the Allies in World War Two, edited and translated by Christopher Kasparek, Frederick, MD, University Publications of America, 1984, .
K. Gaj, "Polish Cipher Machine - Lacida," Cryptologia, 16 (1), January 1992, pp. 73–80.
Rotor machines
Cipher Bureau (Poland)
Cryptographic hardware
Polish inventions
Science and technology in Poland | Lacida | [
"Physics",
"Technology"
] | 549 | [
"Physical systems",
"Machines",
"Rotor machines"
] |
2,824,030 | https://en.wikipedia.org/wiki/In-system%20programming | In-system programming (ISP), also called in-circuit serial programming (ICSP), is the ability of some programmable logic devices, microcontrollers, chipsets and other embedded devices to be programmed while installed in a complete system, rather than requiring the chip to be programmed prior to installing it into the system. It also allows firmware updates to be delivered to the on-chip memory of microcontrollers and related processors without requiring specialist programming circuitry on the circuit board, and simplifies design work.
Overview
There is no standard for in-system programming protocols for programming microcontroller devices. Almost all manufacturers of microcontrollers support this feature, but all have implemented their own protocols, which often differ even for different devices from the same manufacturer. Up to 4 pins may be required for implementing a JTAG standard interface. In general, modern protocols try to keep the number of pins used low, typically to 2 pins. Some ISP interfaces manage to achieve the same with just a single pin. Newer ATtiny microcontrollers with UPDI can even reuse that programming pin also as a general-purpose input/output.
The primary advantage of in-system programming is that it allows manufacturers of electronic devices to integrate programming and testing into a single production phase, and save money, rather than requiring a separate programming stage prior to assembling the system. This may allow manufacturers to program the chips in their own system's production line instead of buying pre-programmed chips from a manufacturer or distributor, making it feasible to apply code or design changes in the middle of a production run. The other advantage is that production can always use the latest firmware, and new features as well as bug fixes can be implemented and put into production without the delay occurring when using pre-programmed microcontrollers.
Microcontrollers are typically soldered directly to a printed circuit board and usually do not have the circuitry or space for a large external programming cable to another computer.
Typically, chips supporting ISP have internal circuitry to generate any necessary programming voltage from the system's normal supply voltage, and communicate with the programmer via a serial protocol. Most programmable logic devices use a variant of the JTAG protocol for ISP, in order to facilitate easier integration with automated testing procedures. Other devices usually use proprietary protocols or protocols defined by older standards. In systems complex enough to require moderately large glue logic, designers may implement a JTAG-controlled programming subsystem for non-JTAG devices such as flash memory and microcontrollers, allowing the entire programming and test procedure to be accomplished under the control of a single protocol.
History
Starting from the early 1990s, an important technological evolution in the architecture of microcontrollers took place. At first, they were realized with either OTP (one-time programmable) or EPROM memories. With EPROM, the memory-erasing process requires the chip to be exposed to ultraviolet light through a specific window above the package. In 1993 Microchip Technology introduced the first microcontroller with EEPROM memory, the PIC16C84. EEPROM memories can be electrically erased. This feature lowered production costs by removing the erasing window above the package and opened the way to in-system programming technology: with ISP, the flashing process can be performed directly on the board at the end of the production process. This evolution made it possible to unify the programming and functional-test phases in production environments and to start preliminary production of boards even before firmware development was complete, so that bugs could be corrected or changes made at a later time. In the same year, Atmel developed the first microcontroller with flash memory, which is easier and faster to program and has a much longer life cycle than EEPROM.
Microcontrollers that support ISP are usually provided with pins used by the serial communication peripheral to interface with the programmer, a flash/EEPROM memory and the circuitry used to supply the voltage necessary to program the microcontroller. The communication peripheral is in turn connected to a programming peripheral which provides commands to operate on the flash or EEPROM memory.
When designing electronic boards for ISP programming, it is necessary to follow some guidelines so that the programming phase is as reliable as possible. Some microcontrollers with a low pin count share the programming lines with the I/O lines. This can be a problem if the necessary precautions are not taken in the design of the board: the I/O components can be damaged during programming. Moreover, it is important to connect the ISP lines to high-impedance circuitry, both to avoid damage to the components by the programmer and because the microcontroller often cannot supply enough current to drive the line. Many microcontrollers need a dedicated RESET line to enter Programming Mode. It is necessary to pay attention to the current available for driving this line and to check for watchdogs connected to the RESET line, which can generate an unwanted reset and thus cause a programming failure. Moreover, some microcontrollers need a higher voltage to enter Programming Mode, so it is necessary to check that this voltage is not attenuated and is not forwarded to other components on the board.
Industrial application
The in-system programming process takes place during the final stage of production and can be performed in two different ways, depending on the production volumes.
In the first method, a connector is manually connected to the programmer. This solution requires a human operator to connect the programmer to the electronic board with a cable, and is therefore meant for low production volumes.
The second method uses test points on the board. These are specific areas on the printed circuit board (PCB) that are electrically connected to some of the electronic components on the board. Test points are used to perform functional tests on components mounted on the board and, since they are connected directly to some microcontroller pins, they are very effective for ISP. For medium and high production volumes, using test points is the best solution since it allows the programming phase to be integrated into an assembly line.
In production lines, boards are placed on a bed-of-nails fixture. Fixtures are integrated, depending on the production volumes, into semiautomatic or automatic test systems called ATE (automatic test equipment). Fixtures are specifically designed for each board, or at most for a few models similar to the board they were designed for, and are therefore interchangeable within the test system into which they are integrated. Once the board and the fixture are in position, the test system has a mechanism to bring the needles of the fixture into contact with the test points on the board under test. The system is connected to, or directly integrates, an ISP programmer, which programs the device or devices mounted on the board: for example, a microcontroller and/or a serial memory.
Microchip ICSP
For most Microchip microcontrollers, ICSP programming is performed using two pins, clock (PGC) and data (PGD), while a high voltage (12 V) is present on the Vpp/MCLR pin. Low voltage programming (5 V or 3.3 V) dispenses with the high voltage, but reserves exclusive use of an I/O pin. However, for newer microcontrollers, specifically the PIC18F6XJXX/8XJXX microcontroller families from Microchip Technology, entering ICSP mode is slightly different. Entering ICSP Program/Verify mode requires the following three steps (sketched in the example after this list):
Voltage is briefly applied to the MCLR (master clear) pin.
A 32-bit key sequence is presented on PGD.
Voltage is reapplied to MCLR.
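The following host-side sketch illustrates this entry sequence. It is only a sketch: set_pin() and delay_us() are assumed GPIO helper functions rather than part of any real programmer library, and the 32-bit key value, bit order and timings shown are placeholders that would have to be taken from the programming specification of the specific PIC device.

import time

KEY_SEQUENCE = 0x4D434850  # placeholder 32-bit key; the real value is device-specific

def set_pin(name, level):
    # Stub: replace with real GPIO output driving MCLR, PGC or PGD.
    print(f"{name} <- {level}")

def delay_us(us):
    time.sleep(us / 1_000_000)

def clock_bit_out(bit):
    # Present one bit on PGD and pulse PGC; the PIC latches it on the falling edge.
    set_pin("PGD", bit)
    set_pin("PGC", 1)
    delay_us(1)
    set_pin("PGC", 0)
    delay_us(1)

def enter_icsp_mode():
    # Step 1: briefly apply voltage to MCLR, then release it.
    set_pin("MCLR", 1)
    delay_us(100)
    set_pin("MCLR", 0)
    delay_us(100)
    # Step 2: shift the 32-bit key sequence out on PGD (assumed MSB-first here).
    for i in range(31, -1, -1):
        clock_bit_out((KEY_SEQUENCE >> i) & 1)
    # Step 3: reapply voltage to MCLR to complete entry into Program/Verify mode.
    set_pin("MCLR", 1)

if __name__ == "__main__":
    enter_icsp_mode()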
A separate piece of hardware, called a programmer, is required to connect to an I/O port of a PC on one side and to the PIC on the other side. The features of each major programming type are:
Parallel port - large bulky cable, most computers have only one port and it may be inconvenient to swap the programming cable with an attached printer. Most laptops newer than 2010 do not support this port. Parallel port programming is very fast.
Serial port (COM port) - At one time the most popular method. Serial ports usually lack adequate circuit programming supply voltage. Most computers and laptops newer than 2010 lack support for this port.
Socket (in or out of circuit) - the CPU must be either removed from circuit board, or a clamp must be attached to the chip-making access an issue.
USB cable - Small and light weight, has support for voltage source and most computers have extra ports available. The distance between the circuit to be programmed and the computer is limited by the length of USB cable - it must usually be less than 180 cm. This can make programming devices deep in machinery or cabinets a problem.
ICSP programmers have many advantages, with size, computer port availability, and power source being major features. Due to variations in the interconnect scheme and the target circuit surrounding a micro-controller, there is no programmer that works with all possible target circuits or interconnects. Microchip Technology provides a detailed ICSP programming guide. Many sites provide programming and circuit examples.
PICs are programmed using five signals (a sixth pin 'aux' is provided but not used). The data is transferred using a two-wire synchronous serial scheme, three more wires provide programming and chip power. The clock signal is always controlled by the programmer.
Signals and pinout
Vpp - Programming mode voltage. This must be connected to the MCLR pin, or the Vpp pin of the optional ICSP port available on some large-pin-count PICs. To put the PIC into programming mode, this line must be in a specified range that varies from PIC to PIC. For 5V PICs, this is always some amount above Vdd, and can be as high as 13.5 V. The 3.3 V only PICs like the 18FJ, 24H, and 33F series use a special signature to enter programming mode and Vpp is a digital signal that is either at ground or Vdd. There is no one Vpp voltage that is within the valid Vpp range of all PICs. In fact, the minimum required Vpp level for some PICs can damage other PICs.
Vdd - This is the positive power input to the PIC. Some programmers require this to be provided by the circuit (circuit must be at least partially powered up), some programmers expect to drive this line themselves and require the circuit to be off, while others can be configured either way (like the Microchip ICD2). The Embed Inc programmers expect to drive the Vdd line themselves and require the target circuit to be off during programming.
Vss - Negative power input to the PIC and the zero volts reference for the remaining signals. Voltages of the other signals are implicitly with respect to Vss.
ICSPCLK - Clock line of the serial data interface. This line swings from GND to Vdd and is always driven by the programmer. Data is transferred on the falling edge.
ICSPDAT - Serial data line. The serial interface is bi-directional, so this line can be driven by either the programmer or the PIC depending on the current operation. In either case this line swings from GND to Vdd. A bit is transferred on the falling edge of PGC (see the sketch after this list).
AUX/PGM - Newer PIC controllers use this pin to enable low voltage programming (LVP). By holding PGM high, the micro-controller will enter LVP mode. PIC micro-controllers are shipped with LVP enabled - so if you use a brand new chip you can use it in LVP mode. The only way to change the mode is by using a high voltage programmer. If you program the micro controller with no connection to this pin, the mode is left unchanged.
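Building on the sketch above, the fragment below shows how data bits could be shifted in and out over PGC/PGD, with each bit latched or sampled around the falling edge of PGC. As before, set_pin(), read_pin() and delay_us() are hypothetical GPIO helpers, and the LSB-first bit order is an assumption; real command formats and setup/hold times must come from the device's programming specification.

def read_pin(name):
    # Stub: replace with a real GPIO read of the PGD line.
    return 0

def write_bits(value, nbits):
    # Programmer drives PGD: one bit per PGC pulse, LSB-first by assumption.
    for i in range(nbits):
        set_pin("PGD", (value >> i) & 1)
        set_pin("PGC", 1)
        delay_us(1)
        set_pin("PGC", 0)   # the target latches the bit on this falling edge
        delay_us(1)

def read_bits(nbits):
    # Target drives PGD while the programmer still generates the clock.
    value = 0
    for i in range(nbits):
        set_pin("PGC", 1)
        delay_us(1)
        set_pin("PGC", 0)
        value |= read_pin("PGD") << i   # sample after the falling edge
        delay_us(1)
    return value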
RJ11 pinout
An industry standard for using RJ11 sockets with an ICSP programmer is supported by Microchip, based on information provided in their data sheets. However, there is room for confusion: the PIC data sheets show an inverted socket and do not provide a pictorial view of the pinouts, so it is unclear on which side of the socket Pin 1 is located. The pinout described here is untested but follows the phone industry standard (the RJ11 plug/socket was originally developed for wired desktop phones).
References
See also
Device Programmers
Digital electronics
Microcontrollers | In-system programming | [
"Engineering"
] | 2,644 | [
"Electronic engineering",
"Digital electronics"
] |
2,824,495 | https://en.wikipedia.org/wiki/Fred%20Baker%20%28engineer%29 | Frederick J. Baker (born February 28, 1952), is an American engineer, specializing in developing computer network protocols for the Internet.
Biography
Baker attended the New Mexico Institute of Mining and Technology from 1970 to 1973. He developed computer network technology starting in 1978 at Control Data Corporation (CDC), Vitalink Communications Corporation, and Advanced Computer Communications.
He joined Cisco Systems in 1994.
He became a Cisco Fellow in 1998, working in university relations and as a research ambassador, and in the IETF. He left Cisco Systems in 2016.
Since 1989, Baker has been involved with the Internet Engineering Task Force (IETF), the body that develops standards for the Internet.
He chaired a number of IETF working groups, including several that specified the management information bases (MIB) used to manage network bridges and popular telecommunications links.
Baker served as IETF chair from 1996 to 2001, when he was succeeded by Harald Tveit Alvestrand.
He served on the Internet Architecture Board from 1996 through 2002. He has co-authored or edited over fifty Request for Comments (RFC) documents on Internet protocols and contributed to others. The subjects covered include network management, Open Shortest Path First (OSPF) and Routing Information Protocol (RIPv2) routing, quality of service (using both the Integrated services and Differentiated Services models), Lawful Interception, precedence-based services on the Internet, and others.
In addition, he served as a member of the Board of Trustees of the Internet Society 2002 through 2008, and as its chair from 2002 through 2006. He was a member of the Technical Advisory Council of the US Federal Communications Commission from 2005 through 2009.
He has worked as liaison to other standards organizations such as the ITU-T.
In 2009 he became chair of the RFC Series Oversight Committee.
He co-chaired the IPv6 Operations Working Group in the IETF, represented the IETF on the National Institute of Standards and Technology Smart Grid Interoperability Panel and Architecture Committee (until 2013), and was Cisco's representative to a Broadband Internet Technical Advisory Group.
Baker also has several patents.
References
External links
The Debate Over Internet Governance: A Snapshot in the Year 2000, The Berkman Center for Internet & Society at Harvard Law School. Retrieved on August 6, 2007.
1952 births
Living people
People in information technology
Internet Society people | Fred Baker (engineer) | [
"Technology"
] | 475 | [
"People in information technology",
"Information technology",
"Computer specialist stubs",
"Computing stubs"
] |
2,824,875 | https://en.wikipedia.org/wiki/Termination%20factor | In molecular biology, a termination factor is a protein that mediates the termination of RNA transcription by recognizing a transcription terminator and causing the release of the newly made mRNA. This is part of the process that regulates RNA transcription to preserve gene expression integrity; termination factors are present in both eukaryotes and prokaryotes, although the process in bacteria is more widely understood. The most extensively studied and detailed transcriptional termination factor is the Rho (ρ) protein of E. coli.
Prokaryotic
Prokaryotes use one type of RNA polymerase, transcribing mRNAs that code for more than one type of protein. Transcription, translation and mRNA degradation all happen simultaneously. Transcription termination is essential to define boundaries in transcriptional units, a function necessary to maintain the integrity of the strands and provide quality control. Termination in E. coli may be Rho dependent, utilizing Rho factor, or Rho independent, also known as intrinsic termination. Although most operons in DNA are Rho independent, Rho dependent termination is also essential to maintain correct transcription.
ρ factor
The Rho protein is an RNA translocase that recognizes a cytosine-rich region of the elongating mRNA, but the exact features of the recognized sequences and how the cleaving takes place remain unknown. Rho forms a ring-shaped hexamer and, hydrolyzing ATP, advances along the mRNA toward RNA polymerase (5' to 3' with respect to the mRNA). When the Rho protein reaches the RNA polymerase complex, transcription is terminated by dissociation of the RNA polymerase from the DNA. The structure and activity of the Rho protein are similar to those of the F1 subunit of ATP synthase, supporting the theory that the two share an evolutionary link.
Rho factor is widely present in different bacterial sequences and is responsible for the genetic polarity in E. coli. It works as a sensor of translational status, inhibiting non-productive transcriptions, suppressing antisense transcriptions and resolving conflicts that happen between transcription and replication. The process of termination by Rho factor is regulated by attenuation and antitermination mechanisms, competing with elongation factors for overlapping utilization sites (ruts and nuts), and depends on how fast Rho can move during the transcription to catch up with the RNA polymerase and activate the termination process.
Inhibition of Rho dependent termination by bicyclomycin is used to treat bacterial infections. The use of this mechanism along with other classes of antibiotics is being studied as a way to address antibiotic resistance, by suppressing the protective factors in RNA transcription while working in synergy with other inhibitors of gene expression such as tetracycline or rifampicin.
Eukaryotic
The process of transcriptional termination is less understood in eukaryotes, which have extensive post-transcriptional RNA processing, and each of the three types of eukaryotic RNA polymerase have a different termination system.
In RNA polymerase I, Transcription termination factor, RNA polymerase I binds downstream of the pre-rRNA coding regions, causing the dissociation of the RNA polymerase from the template and the release of the new RNA strand.
In RNA polymerase II, the termination occurs via a polyadenylation/cleaving complex. The 3' tail on the ending of the strand is bound at the polyadenylation site, but the strand will continue to code. The newly synthesised ribonucleotides are removed one at a time by the cleavage factors CSTF and CPSF, in a process that is still not fully understood. The remainder of the strand is disengaged by a 5′-exonuclease when the transcription is finished.
RNA polymerase III terminates after a series of uracil polymerization residues in the transcribed mRNA. Unlike in bacteria and in polymerase I, the termination RNA hairpin needs to be upstream to allow for correct cleaving.
See also
Rho factor
References
Molecular genetics
Gene expression | Termination factor | [
"Chemistry",
"Biology"
] | 827 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
2,824,931 | https://en.wikipedia.org/wiki/Hepatocyte%20growth%20factor%20receptor | Hepatocyte growth factor receptor (HGF receptor) is a protein that in humans is encoded by the MET gene. The protein possesses tyrosine kinase activity. The primary single chain precursor protein is post-translationally cleaved to produce the alpha and beta subunits, which are disulfide linked to form the mature receptor.
HGF receptor is a single pass tyrosine kinase receptor essential for embryonic development, organogenesis and wound healing. Hepatocyte growth factor/Scatter Factor (HGF/SF) and its splicing isoform (NK1, NK2) are the only known ligands of the HGF receptor. MET is normally expressed by cells of epithelial origin, while expression of HGF/SF is restricted to cells of mesenchymal origin. When HGF/SF binds its cognate receptor MET it induces its dimerization through a not yet completely understood mechanism leading to its activation.
MET is sometimes misunderstood as an abbreviation of mesenchymal-epithelial transition; this is incorrect. The three letters of MET come from N-methyl-N'-nitro-N-nitrosoguanidine (MNNG).
Abnormal MET activation in cancer correlates with poor prognosis, where aberrantly active MET triggers tumor growth, formation of new blood vessels (angiogenesis) that supply the tumor with nutrients, and cancer spread to other organs (metastasis). MET is deregulated in many types of human malignancies, including cancers of kidney, liver, stomach, breast, and brain. Normally, only stem cells and progenitor cells express MET, which allows these cells to grow invasively in order to generate new tissues in an embryo or regenerate damaged tissues in an adult. However, cancer stem cells are thought to hijack the ability of normal stem cells to express MET, and thus become the cause of cancer persistence and spread to other sites in the body. Both the overexpression of Met/HGFR, as well as its autocrine activation by co-expression of its hepatocyte growth factor ligand, have been implicated in oncogenesis.
Various mutations in the MET gene are associated with papillary renal carcinoma.
Gene
MET proto-oncogene (GeneID: 4233) has a total length of 125,982 bp, and it is located in the 7q31 locus of chromosome 7. MET is transcribed into a 6,641 bp mature mRNA, which is then translated into a 1,390 amino-acid MET protein.
Protein
MET is a receptor tyrosine kinase (RTK) that is produced as a single-chain precursor. The precursor is proteolytically cleaved at a furin site to yield a highly glycosylated extracellular α-subunit and a transmembrane β-subunit, which are linked together by a disulfide bridge.
Extracellular
Region of homology to semaphorins (Sema domain), which includes the full α-chain and the N-terminal part of the β-chain
Cysteine-rich MET-related sequence (MRS domain)
Glycine-proline-rich repeats (G-P repeats)
Four immunoglobulin-like structures (Ig domains), a typical protein-protein interaction region.
Intracellular
A juxtamembrane segment that contains:
A serine residue (Ser 985), which inhibits the receptor kinase activity upon phosphorylation
A tyrosine residue (Tyr 1003), which is responsible for MET polyubiquitination, endocytosis, and degradation upon interaction with the ubiquitin ligase CBL
Tyrosine kinase domain, which mediates MET biological activity. Following MET activation, transphosphorylation occurs on Tyr 1234 and Tyr 1235
C-terminal region contains two crucial tyrosines (Tyr 1349 and Tyr 1356), which are inserted into the multisubstrate docking site, capable of recruiting downstream adapter proteins with Src homology-2 (SH2) domains. The two tyrosines of the docking site have been reported to be necessary and sufficient for signal transduction both in vitro and in vivo.
MET signaling pathway
MET activation by its ligand HGF induces MET kinase catalytic activity, which triggers transphosphorylation of the tyrosines Tyr 1234 and Tyr 1235. These two tyrosines engage various signal transducers, thus initiating a whole spectrum of biological activities driven by MET, collectively known as the invasive growth program. The transducers interact with the intracellular multisubstrate docking site of MET either directly, such as GRB2, SHC, SRC, and the p85 regulatory subunit of phosphatidylinositol-3 kinase (PI3K), or indirectly through the scaffolding protein GAB1.
Tyr 1349 and Tyr 1356 of the multisubstrate docking site are both involved in the interaction with GAB1, SRC, and SHC, while only Tyr 1356 is involved in the recruitment of GRB2, phospholipase C γ (PLC-γ), p85, and SHP2.
GAB1 is a key coordinator of the cellular responses to MET and binds the MET intracellular region with high avidity, but low affinity. Upon interaction with MET, GAB1 becomes phosphorylated on several tyrosine residues which, in turn, recruit a number of signalling effectors, including PI3K, SHP2, and PLC-γ. GAB1 phosphorylation by MET results in a sustained signal that mediates most of the downstream signaling pathways.
Activation of signal transduction
MET engagement activates multiple signal transduction pathways:
The RAS pathway mediates HGF-induced scattering and proliferation signals, which lead to branching morphogenesis. Of note, HGF, differently from most mitogens, induces sustained RAS activation, and thus prolonged MAPK activity.
The PI3K pathway is activated in two ways: PI3K can be either downstream of RAS, or it can be recruited directly through the multifunctional docking site. Activation of the PI3K pathway is currently associated with cell motility through remodeling of adhesion to the extracellular matrix as well as localized recruitment of transducers involved in cytoskeletal reorganization, such as RAC1 and PAK. PI3K activation also triggers a survival signal due to activation of the AKT pathway.
The STAT pathway, together with the sustained MAPK activation, is necessary for the HGF-induced branching morphogenesis. MET activates the STAT3 transcription factor directly, through an SH2 domain.
The beta-catenin pathway, a key component of the Wnt signaling pathway, translocates into the nucleus following MET activation and participates in transcriptional regulation of numerous genes.
The Notch pathway, through transcriptional activation of Delta ligand (see DLL3).
Role in development
MET mediates a complex program known as invasive growth. Activation of MET triggers mitogenesis, and morphogenesis.
During embryonic development, transformation of the flat, two-layer germinal disc into a three-dimensional body depends on transition of some cells from an epithelial phenotype to spindle-shaped cells with motile behaviour, a mesenchymal phenotype. This process is referred to as epithelial-mesenchymal transition (EMT). Later in embryonic development, MET is crucial for gastrulation, angiogenesis, myoblast migration, bone remodeling, and nerve sprouting among others. MET is essential for embryogenesis, because MET −/− mice die in utero due to severe defects in placental development. Along with Ectodysplasin A, it has been shown to be involved in the differentiation of anatomical placodes, precursors of scales, feathers and hair follicles in vertebrates. Furthermore, MET is required for such critical processes as liver regeneration and wound healing during adulthood.
HGF/MET axis is also involved in myocardial development. Both HGF and MET receptor mRNAs are co-expressed in cardiomyocytes from E7.5, soon after the heart has been determined, to E9.5. Transcripts for HGF ligand and receptor are first detected before the occurrence of cardiac beating and looping, and persist throughout the looping stage, when heart morphology begins to elaborate. In avian studies, HGF was found in the myocardial layer of the atrioventricular canal, in a developmental stage in which the epithelial to mesenchymal transformation (EMT) of the endocardial cushion occurs. However, MET is not essential for heart development, since α-MHCMet-KO mice show normal heart development.
Expression
Tissue distribution
MET is normally expressed by epithelial cells. However, MET is also found on endothelial cells, neurons, hepatocytes, hematopoietic cells, melanocytes and neonatal cardiomyocytes. HGF expression is restricted to cells of mesenchymal origin.
Transcriptional control
MET transcription is activated by HGF and several growth factors. MET promoter has four putative binding sites for Ets, a family of transcription factors that control several invasive growth genes. ETS1 activates MET transcription in vitro. MET transcription is activated by hypoxia-inducible factor 1 (HIF1), which is activated by low concentration of intracellular oxygen. HIF1 can bind to one of the several hypoxia response elements (HREs) in the MET promoter. Hypoxia also activates transcription factor AP-1, which is involved in MET transcription.
Clinical significance
Role in cancer
MET pathway plays an important role in the development of cancer through:
activation of key oncogenic pathways (RAS, PI3K, STAT3, beta-catenin);
angiogenesis (sprouting of new blood vessels from pre-existing ones to supply a tumor with nutrients);
scatter (cells dissociation due to metalloprotease production), which often leads to metastasis.
Coordinated down-regulation of both MET and its downstream effector extracellular signal-regulated kinase 2 (ERK2) by miR-199a* may be effective in inhibiting not only cell proliferation but also motility and invasive capabilities of tumor cells.
MET amplification has emerged as a potential biomarker of the clear cell tumor subtype.
The amplification of the cell surface receptor MET often drives resistance to anti-EGFR therapies in colorectal cancer.
Role in autism
The SFARIgene database lists MET with an autism score of 2.0, which indicates that it is a strong candidate for playing a role in cases of autism. The database also identifies at least one study that found a role for MET in cases of schizophrenia. The gene was first implicated in autism in a study that identified a polymorphism in the promoter of the MET gene. The polymorphism reduces transcription by 50%. Further, the association of this variant with autism risk has been replicated, and the variant has been shown to be enriched in children with autism and co-occurring gastrointestinal disturbances. A rare mutation has been found that appears in two family members, one with autism and the other with a social and communication disorder. The role of the receptor in brain development is distinct from its role in other developmental processes. Activation of the MET receptor regulates synapse formation and can impact the development and function of circuits involved in social and emotional behavior.
Role in heart function
In adult mice, MET is required to protect cardiomyocytes by preventing age-related oxidative stress, apoptosis, fibrosis and cardiac dysfunction. Moreover, MET inhibitors, such as crizotinib or PF-04254644, have been tested by short-term treatments in cellular and preclinical models, and have been shown to induce cardiomyocyte death through ROS production, activation of caspases, metabolism alteration and blockage of ion channels.
In the injured heart, HGF/MET axis plays important roles in cardioprotection by promoting pro-survival (anti-apoptotic and anti-autophagic) effects in cardiomyocytes, angiogenesis, inhibition of fibrosis, anti-inflammatory and immunomodulatory signals, and regeneration through activation of cardiac stem cells.
Interaction with tumour suppressor genes
PTEN
PTEN (phosphatase and tensin homolog) is a tumor suppressor gene encoding a protein PTEN, which possesses lipid and protein phosphatase-dependent as well as phosphatase-independent activities. PTEN protein phosphatase is able to interfere with MET signaling by dephosphorylating either PIP3 generated by PI3K, or the p52 isoform of SHC. SHC dephosphorylation inhibits recruitment of the GRB2 adapter to activated MET.
VHL
There is evidence of correlation between inactivation of VHL tumor suppressor gene and increased MET signaling in renal cell carcinoma (RCC) and also in malignant transformations of the heart.
Cancer therapies targeting HGF/MET
Since tumor invasion and metastasis are the main cause of death in cancer patients, interfering with MET signaling appears to be a promising therapeutic approach. A comprehensive list of HGF and MET targeted experimental therapeutics for oncology now in human clinical trials can be found here.
MET kinase inhibitors
Kinase inhibitors are low molecular weight molecules that prevent ATP binding to MET, thus inhibiting receptor transphosphorylation and recruitment of the downstream effectors. The limitations of kinase inhibitors include the facts that they only inhibit kinase-dependent MET activation, and that none of them is fully specific for MET.
K252a (Fermentek Biotechnology) is a staurosporine analogue isolated from Nocardiopsis sp. soil fungi, and it is a potent inhibitor of all receptor tyrosine kinases (RTKs). At nanomolar concentrations, K252a inhibits both the wild type and the mutant (M1268T) MET function.
SU11274 (SUGEN) specifically inhibits MET kinase activity and its subsequent signaling. SU11274 is also an effective inhibitor of the M1268T and H1112Y MET mutants, but not the L1213V and Y1248H mutants. SU11274 has been demonstrated to inhibit HGF-induced motility and invasion of epithelial and carcinoma cells.
PHA-665752 (Pfizer) specifically inhibits MET kinase activity, and it has been demonstrated to repress both HGF-dependent and constitutive MET phosphorylation. Furthermore, some tumors harboring MET amplifications are highly sensitive to treatment with PHA-665752.
Tivantinib (ArQule) is a promising selective inhibitor of MET, which entered a phase 2 clinical trial in 2008. (Failed a phase 3 in 2017)
Foretinib (XL880, Exelixis) targets multiple receptor tyrosine kinases (RTKs) with growth-promoting and angiogenic properties. The primary targets of foretinib are MET, VEGFR2, and KDR. Foretinib has completed phase 2 clinical trials with indications for papillary renal cell carcinoma, gastric cancer, and head and neck cancer.
SGX523 (SGX Pharmaceuticals) specifically inhibits MET at low nanomolar concentrations.
MP470 (SuperGen) is a novel inhibitor of c-KIT, MET, PDGFR, Flt3, and AXL. Phase I clinical trial of MP470 had been announced in 2007.
Vebreltinib, approved in China for the treatment of non-small-cell lung cancer.
HGF inhibitors
Since HGF is the only known ligand of MET, blocking the formation of a HGF:MET complex blocks MET biological activity. For this purpose, truncated HGF, anti-HGF neutralizing antibodies, and an uncleavable form of HGF have been utilized so far. The major limitation of HGF inhibitors is that they block only HGF-dependent MET activation.
NK4 competes with HGF as it binds MET without inducing receptor activation, thus behaving as a full antagonist. NK4 is a molecule bearing the N-terminal hairpin and the four kringle domains of HGF. Moreover, NK4 is structurally similar to angiostatins, which is why it possesses anti-angiogenic activity.
Neutralizing anti-HGF antibodies were initially tested in combination, and it was shown that at least three antibodies, acting on different HGF epitopes, are necessary to prevent MET tyrosine kinase activation. More recently, it has been demonstrated that fully human monoclonal antibodies can individually bind and neutralize human HGF, leading to regression of tumors in mouse models. Two anti-HGF antibodies are currently available: the humanized AV299 (AVEO), and the fully human AMG102 (Amgen).
Uncleavable HGF is an engineered form of pro-HGF carrying a single amino-acid substitution, which prevents the maturation of the molecule. Uncleavable HGF is capable of blocking MET-induced biological responses by binding MET with high affinity and displacing mature HGF. Moreover, uncleavable HGF competes with the wild-type endogenous pro-HGF for the catalytic domain of proteases that cleave HGF precursors. Local and systemic expression of uncleavable HGF inhibits tumor growth and, more importantly, prevents metastasis.
Decoy MET
Decoy MET refers to a soluble truncated MET receptor. Decoys are able to inhibit MET activation mediated by both HGF-dependent and independent mechanisms, as decoys prevent both the ligand binding and the MET receptor homodimerization. CGEN241 (Compugen) is a decoy MET that is highly efficient in inhibiting tumor growth and preventing metastasis in animal models.
Immunotherapy targeting MET
Drugs used for immunotherapy can act either passively by enhancing the immunologic response to MET-expressing tumor cells, or actively by stimulating immune cells and altering differentiation/growth of tumor cells.
Passive immunotherapy
Administering monoclonal antibodies (mAbs) is a form of passive immunotherapy. MAbs facilitate destruction of tumor cells by complement-dependent cytotoxicity (CDC) and antibody-dependent cell-mediated cytotoxicity (ADCC). In CDC, mAbs bind to a specific antigen, leading to activation of the complement cascade, which in turn leads to formation of pores in tumor cells. In ADCC, the Fab domain of a mAb binds to a tumor antigen, and the Fc domain binds to Fc receptors present on effector cells (phagocytes and NK cells), thus forming a bridge between an effector and a target cell. This induces effector cell activation, leading to phagocytosis of the tumor cell by neutrophils and macrophages. Furthermore, NK cells release cytotoxic molecules, which lyse tumor cells.
DN30 is a monoclonal anti-MET antibody that recognizes the extracellular portion of MET. DN30 induces both shedding of the MET ectodomain and cleavage of the intracellular domain, which is subsequently degraded by the proteasome machinery. As a consequence, on the one hand MET is inactivated, and on the other hand the shed portion of extracellular MET hampers activation of other MET receptors, acting as a decoy. DN30 inhibits tumour growth and prevents metastasis in animal models.
OA-5D5 is a one-armed monoclonal anti-MET antibody that was demonstrated to inhibit orthotopic pancreatic and glioblastoma tumor growth and to improve survival in tumor xenograft models. OA-5D5 is produced as a recombinant protein in Escherichia coli. It is composed of murine variable domains for the heavy and light chains with human IgG1 constant domains. The antibody blocks HGF binding to MET in a competitive fashion.
Active immunotherapy
Active immunotherapy to MET-expressing tumors can be achieved by administering cytokines, such as interferons (IFNs) and interleukins (IL-2), which trigger non-specific stimulation of numerous immune cells. IFNs have been tested as therapies for many types of cancers and have demonstrated therapeutic benefits. IL-2 has been approved by the U.S. Food and Drug Administration (FDA) for the treatment of renal cell carcinoma and metastatic melanoma, which often have deregulated MET activity.
Interactions
Met has been shown to interact with:
CDH1,
Cbl gene,
GLMN,
Grb2,
Hepatocyte growth factor,
PTPmu, and
RANBP9
See also
c-Met inhibitors
Tpr-met fusion protein
References
Further reading
External links
UniProtKB/Swiss-Prot entry P08581: MET_HUMAN, ExPASy (Expert Protein Analysis System) proteomics server of the Swiss Institute of Bioinformatics (SIB)
A table with references to significant roles of MET in cancer
Tyrosine kinase receptors
EC 2.7.10 | Hepatocyte growth factor receptor | [
"Chemistry"
] | 4,511 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |
2,826,492 | https://en.wikipedia.org/wiki/RNA%20editing | RNA editing (also RNA modification) is a molecular process through which some cells can make discrete changes to specific nucleotide sequences within an RNA molecule after it has been generated by RNA polymerase. It occurs in all living organisms and is one of the most evolutionarily conserved properties of RNAs. RNA editing may include the insertion, deletion, and base substitution of nucleotides within the RNA molecule. RNA editing is relatively rare, with common forms of RNA processing (e.g. splicing, 5'-capping, and 3'-polyadenylation) not usually considered as editing. It can affect the activity, localization as well as stability of RNAs, and has been linked with human diseases.
RNA editing has been observed in some tRNA, rRNA, mRNA, or miRNA molecules of eukaryotes and their viruses, archaea, and prokaryotes. RNA editing occurs in the cell nucleus, as well as within mitochondria and plastids. In vertebrates, editing is rare and usually consists of a small number of changes to the sequence of the affected molecules. In other organisms, such as squids, extensive editing (pan-editing) can occur; in some cases the majority of nucleotides in an mRNA sequence may result from editing. More than 160 types of RNA modifications have been described so far.
RNA-editing processes show great molecular diversity, and some appear to be evolutionarily recent acquisitions that arose independently. The diversity of RNA editing phenomena includes nucleobase modifications such as cytidine (C) to uridine (U) and adenosine (A) to inosine (I) deaminations, as well as non-template nucleotide additions and insertions. RNA editing in mRNAs effectively alters the amino acid sequence of the encoded protein so that it differs from that predicted by the genomic DNA sequence.
Detection of RNA editing
Next generation sequencing
To identify diverse post-transcriptional modifications of RNA molecules and determine the transcriptome-wide landscape of RNA modifications by means of next generation RNA sequencing, many studies have recently developed conventional or specialised sequencing methods. Examples of specialised methods are MeRIP-seq, m6A-seq, PA-m5C-seq, methylation-iCLIP, m6A-CLIP, Pseudo-seq, Ψ-seq, CeU-seq, Aza-IP and RiboMeth-seq. Many of these methods are based on specific capture of the RNA species containing the specific modification, for example through antibody binding coupled with sequencing of the captured reads. After sequencing, these reads are mapped against the whole transcriptome to see where they originate from. Generally, with this kind of approach it is possible to see the location of the modifications together with possible identification of some consensus sequences that might help identification and mapping further on. One example of these specialised methods is PA-m5C-seq. This method was further developed from the PA-m6A-seq method to identify m5C modifications on mRNA instead of the original target N6-methyladenosine. The easy switch between different target modifications is made possible by a simple change of the capturing antibody from m6A-specific to m5C-specific. Application of these methods has identified various modifications (e.g. pseudouridine, m6A, m5C, 2′-O-Me) within coding genes and non-coding genes (e.g. tRNA, lncRNAs, microRNAs) at single nucleotide or very high resolution.
Mass Spectrometry
Mass spectrometry is a way to quantify RNA modifications. More often than not, modifications cause an increase in mass for a given nucleoside. This gives a characteristic readout for the nucleoside and the modified counterpart. Moreover, mass spectrometry allows the investigation of modification dynamics by labelling RNA molecules with stable (non-radioactive) heavy isotopes in vivo. Due to the defined mass increase of heavy isotope-labeled nucleosides, they can be distinguished from their respective unlabelled isotopomers by mass spectrometry. This method, called NAIL-MS (nucleic acid isotope labelling coupled mass spectrometry), enables a variety of approaches to investigate RNA modification dynamics.
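As a simple illustration of the mass readout, a single methylation replaces a hydrogen atom with a methyl group and therefore adds the mass of one CH2 unit to the nucleoside. The following minimal Python sketch only shows this arithmetic with standard monoisotopic atomic masses; it is not tied to any particular instrument or software mentioned above:

# Monoisotopic atomic masses in daltons (Da)
MASS_C = 12.000000
MASS_H = 1.007825

# A methylation replaces one H with CH3, i.e. it adds a net CH2 group.
methyl_shift = MASS_C + 2 * MASS_H   # about 14.0157 Da

print(f"Expected mass shift for a single methylation: {methyl_shift:.4f} Da")
# A nucleoside observed at (unmodified mass + ~14.016 Da) is therefore a
# candidate mono-methylated species, e.g. m6A relative to adenosine.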
Types of RNA
Messenger RNA modification
Recently, functional experiments have revealed many novel functional roles of RNA modifications. Most of the RNA modifications are found on transfer-RNA and ribosomal-RNA, but eukaryotic mRNA has also been shown to be modified with multiple different modifications. Seventeen naturally occurring modifications on mRNA have been identified, of which N6-methyladenosine is the most abundant and the best studied. mRNA modifications are linked to many functions in the cell. They ensure the correct maturation and function of the mRNA, but at the same time act as part of the cell's immune system. Certain modifications, like 2′-O-methylated nucleotides, have been associated with the cell's ability to distinguish its own mRNA from foreign RNA. For example, m6A has been predicted to affect protein translation and localization, mRNA stability, alternative polyA choice and stem cell pluripotency. Pseudouridylation of nonsense codons suppresses translation termination both in vitro and in vivo, suggesting that RNA modification may provide a new way to expand the genetic code. 5-methylcytosine on the other hand has been associated with mRNA transport from the nucleus to the cytoplasm and enhancement of translation. These functions of m5C are not fully known and proven, but one strong argument towards these functions in the cell is the observed localization of m5C to the translation initiation site. Importantly, many modification enzymes are dysregulated and genetically mutated in many disease types. For example, genetic mutations in pseudouridine synthases cause mitochondrial myopathy, lactic acidosis and sideroblastic anemia (MLASA), and dyskeratosis congenita.
Compared to the modifications identified in other RNA species like tRNA and rRNA, the number of identified modifications on mRNA is very small. One of the biggest reasons why mRNA modifications are not so well known is the lack of suitable research techniques. In addition to the small number of identified modifications, knowledge of the associated proteins also lags behind that for other RNA species. Modifications are the result of specific enzyme interactions with the RNA molecule. For mRNA modifications, most of the known related enzymes are the writer enzymes that add the modification to the mRNA. The additional groups of enzymes, the readers and erasers, are for most modifications either poorly characterized or not known at all. For these reasons there has been huge interest during the past decade in studying these modifications and their functions.
Transfer RNA modifications
Transfer RNA or tRNA is the most abundantly modified type of RNA. Modifications in tRNA play crucial roles in maintaining translation efficiency through supporting structure, anticodon-codon interactions, and interactions with enzymes.
Anticodon modifications are important for proper decoding of mRNA: because the genetic code is degenerate, they are needed for codons to be read correctly. In particular, the wobble position of the anticodon determines how the codons are read. For example, in eukaryotes an adenosine at position 34 of the anticodon can be converted to inosine. Inosine is a modification that is able to base-pair with cytosine, adenine, and uridine.
Another commonly modified base in tRNA is the position adjacent to the anticodon. Position 37 is often hypermodified with bulky chemical modifications. These modifications prevent frameshifting and increase anticodon-codon binding stability through stacking interactions.
Ribosomal RNA modification
Ribosomal RNA (rRNA) is essential to the makeup of ribosomes and peptide transfer during translation processes. Ribosomal RNA modifications are made throughout ribosome synthesis, and often occur during and/or after translation. Modifications primarily play a role in the structure of the rRNA in order to protect translational efficiency. Chemical modification in rRNA consists of methylation of ribose sugars, isomerization of uridines, and methylation and acetylation of individual bases.
Methylation
Methylation of rRNA upholds structural rigidity by blocking base pair stacking and by shielding the 2′-OH group to block hydrolysis. It occurs at specific parts of eukaryotic rRNA. The template for methylation consists of 10-21 nucleotides. 2'-O-methylation of the ribose sugar is one of the most common rRNA modifications. Methylation is primarily introduced by small nucleolar ribonucleoproteins (snoRNPs). There are two classes of snoRNPs that target methylation sites, and they are referred to as box C/D and box H/ACA. One type of methylation, 2′-O-methylation, contributes to helical stabilization.
Isomerization
The isomerization of uridine to pseudouridine is the second most common rRNA modification. These pseudouridines are also introduced by the same classes of snoRNPs that participate in methylation. Pseudouridine synthases are the major participating enzymes in the reaction. The H/ACA box snoRNPs introduce guide sequences that are about 14-15 nucleotides long. Pseudouridylation is triggered in numerous places of rRNAs at once to preserve the thermal stability of RNA. Pseudouridine allows for increased hydrogen bonding and alters translation in rRNA and tRNA. It alters translation by increasing the affinity of the ribosome subunit to specific mRNAs.
Base editing
Base editing is the third major class of rRNA modification, specifically in eukaryotes. There are 8 categories of base edits that can occur at the gap between the small and large ribosomal subunits. RNA methyltransferases are the enzymes that introduce base methylation. Acetyltransferases are the enzymes responsible for acetylation of cytosine in rRNA. Base methylation plays a role in translation. These base modifications all work in conjunction with the two other main classes of modification to contribute to RNA structural stability. An example of this occurs in N7-methylation, which increases the nucleotide's charge to increase ionic interactions of proteins attaching to the RNA before translation.
Editing by insertion or deletion
RNA editing through the addition and deletion of uracil has been found in kinetoplasts from the mitochondria of Trypanosoma brucei.
Because this may involve a large fraction of the sites in a gene, it is sometimes called "pan-editing" to distinguish it from topical editing of one or a few sites.
Pan-editing starts with the base-pairing of the unedited primary transcript with a guide RNA (gRNA), which contains complementary sequences to the regions around the insertion/deletion points. The newly formed double-stranded region is then enveloped by an editosome, a large multi-protein complex that catalyzes the editing. The editosome opens the transcript at the first mismatched nucleotide and starts inserting uridines. The inserted uridines will base-pair with the guide RNA, and insertion will continue as long as A or G is present in the guide RNA and will stop when a C or U is encountered. The inserted nucleotides cause a frameshift, and result in a translated protein that differs from its gene.
The mechanism of the editosome involves an endonucleolytic cut at the mismatch point between the guide RNA and the unedited transcript. The next step is catalyzed by one of the enzymes in the complex, a terminal U-transferase, which adds Us from UTP at the 3' end of the mRNA. The opened ends are held in place by other proteins in the complex. Another enzyme, a U-specific exoribonuclease, removes the unpaired Us. After editing has made mRNA complementary to gRNA, an RNA ligase rejoins the ends of the edited mRNA transcript. As a consequence, the editosome can edit only in a 3' to 5' direction along the primary RNA transcript. The complex can act on only a single guide RNA at a time. Therefore, an RNA transcript requiring extensive editing will need more than one guide RNA and editosome complex.
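The insertion rule described above (uridines are added opposite guide-RNA purines, and insertion stops when the guide presents a pyrimidine) can be illustrated with a deliberately simplified toy model. The sequences and the editing-site position below are invented for illustration only, and the snippet does not attempt to model the editosome's enzymology:

def insert_uridines(guide_segment):
    """Return the run of U's dictated by one editing site.
    U's are added for as long as the guide presents A or G (which pair with
    the inserted U's); a C or U in the guide stops insertion."""
    inserted = []
    for base in guide_segment:
        if base in ("A", "G"):
            inserted.append("U")
        else:          # C or U encountered: insertion stops
            break
    return "".join(inserted)

# Hypothetical pre-edited mRNA and the guide bases opposite the editing site
pre_edited = "GGCUAG"
guide_at_site = "AAGCU"        # three purines, then a pyrimidine
edit_site = 3                  # insert after the third nucleotide (invented position)

edited = pre_edited[:edit_site] + insert_uridines(guide_at_site) + pre_edited[edit_site:]
print(edited)   # "GGC" + "UUU" + "UAG" -> "GGCUUUUAG"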
Editing by deamination
C-to-U editing
C-to-U editing involves a cytidine deaminase that deaminates a cytidine base into a uridine base. An example of C-to-U editing is found in the apolipoprotein B gene in humans. Apo B100 is expressed in the liver and apo B48 is expressed in the intestines. In the intestines, the mRNA has a CAA codon edited to UAA, a stop codon, thus producing the shorter B48 form.
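The apolipoprotein B case can be made concrete with a short illustrative snippet. The nine-nucleotide fragment and the tiny codon table below are invented for illustration; only the CAA-to-UAA edit itself reflects the editing event described above:

CODON_TABLE = {"CAA": "Gln", "UAA": "STOP", "AAA": "Lys", "GGC": "Gly"}

def translate(mrna):
    """Translate codon by codon, stopping at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if CODON_TABLE.get(codon) == "STOP":
            break
        peptide.append(CODON_TABLE.get(codon, "Xaa"))
    return peptide

liver_mrna = "GGCCAAAAA"                               # ...Gly-Gln-Lys... (toy fragment)
intestine_mrna = liver_mrna.replace("CAA", "UAA", 1)   # C-to-U editing of one codon

print(translate(liver_mrna))       # ['Gly', 'Gln', 'Lys'] -> full-length, apoB100-like
print(translate(intestine_mrna))   # ['Gly']               -> truncated, apoB48-like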
C-to-U editing often occurs in the mitochondrial RNA of flowering plants. Different plants have different degrees of C-to-U editing; for example, eight editing events occur in mitochondria of the moss Funaria hygrometrica, whereas over 1,700 editing events occur in the lycophyte Isoetes engelmannii. C-to-U editing is performed by members of the pentatricopeptide repeat (PPR) protein family. Angiosperms have large PPR families, acting as trans-factors for cis-elements lacking a consensus sequence; Arabidopsis has around 450 members in its PPR family. There have been a number of discoveries of PPR proteins in both plastids and mitochondria.
A-to-I editing
Adenosine-to-inosine (A-to-I) modifications contribute to nearly 90% of all editing events in RNA. The deamination of adenosine is catalyzed by the double-stranded RNA-specific adenosine deaminase (ADAR), which typically acts on pre-mRNAs. The deamination of adenosine to inosine disrupts and destabilizes the dsRNA base pairing, therefore rendering that particular dsRNA less able to produce siRNA, which interferes with the RNAi pathway.
The wobble base pairing causes deaminated RNA to have a unique but different structure, which may be related to the inhibition of the initiation step of RNA translation. Studies have shown that I-RNA (RNA with many repeats of the I-U base pair) recruits methylases that are involved in the formation of heterochromatin and that this chemical modification heavily interferes with miRNA target sites. There is active research into the importance of A-to-I modifications and their purpose in the novel concept of epitranscriptomics, in which modifications are made to RNA that alter their function. A long-established consequence of A-to-I in mRNA is the interpretation of I as a G, therefore leading to functional A-to-G substitution, e.g. in the interpretation of the genetic code by ribosomes. Newer studies, however, have weakened this correlation by showing that inosines can also be decoded by the ribosome (although to a lesser extent) as adenosines or uracils. Furthermore, it was shown that inosines lead to the stalling of ribosomes on the I-rich mRNA.
The development of high-throughput sequencing in recent years has allowed for the development of extensive databases for different modifications and edits of RNA. RADAR (Rigorously Annotated Database of A-to-I RNA editing) was developed in 2013 to catalog the vast variety of A-to-I sites and tissue-specific levels present in humans, mice, and flies. The addition of novel sites and overall edits to the database are ongoing. The level of editing for specific editing sites, e.g. in the filamin A transcript, is tissue-specific. The efficiency of mRNA-splicing is a major factor controlling the level of A-to-I RNA editing. Interestingly, ADAR1 and ADAR2 also affect alternative splicing via both A-to-I editing ability and dsRNA binding ability.
Alternative mRNA editing
Alternative U-to-C mRNA editing was first reported in WT1 (Wilms Tumor-1) transcripts, and non-classic G-A mRNA changes were first observed in HNRNPK (heterogeneous nuclear ribonucleoprotein K) transcripts in both malignant and normal colorectal samples. The latter changes were also later seen alongside non-classic U-to-C alterations in brain cell TPH2 (tryptophan hydroxylase 2) transcripts. Although the reverse amination might be the simplest explanation for U-to-C changes, transamination and transglycosylation mechanisms have been proposed for plant U-to-C editing events in mitochondrial transcripts. A recent study reported novel G-to-A mRNA changes in WT1 transcripts at two hotspots, proposing the APOBEC3A (apolipoprotein B mRNA editing enzyme, catalytic polypeptide 3A) as the enzyme implicated in this class of alternative mRNA editing. It was also shown that alternative mRNA changes were associated with canonical WT1 splicing variants, indicating their functional significance.
RNA editing in plant mitochondria and plastids
It has been shown in previous studies that the only types of RNA editing seen in plant mitochondria and plastids are conversion of C-to-U and U-to-C (very rare). RNA-editing sites are found mainly in the coding regions of mRNA, introns, and other non-translated regions. In fact, RNA editing can restore the functionality of tRNA molecules. The editing sites are found primarily upstream of mitochondrial or plastid RNAs. While the specific positions for C to U RNA editing events have been fairly well studied in both the mitochondrion and plastid, the identity and organization of all proteins comprising the editosome have yet to be established. Members of the expansive PPR protein family have been shown to function as trans-acting factors for RNA sequence recognition. Specific members of the MORF (Multiple Organellar RNA editing Factor) family are also required for proper editing at several sites. As some of these MORF proteins have been shown to interact with members of the PPR family, it is possible MORF proteins are components of the editosome complex. An enzyme responsible for the transamination or deamination of the RNA transcript remains elusive, though it has been proposed that the PPR proteins may serve this function as well.
RNA editing is essential for the normal functioning of the plant's translation and respiration activity. Editing can restore the essential base-pairing sequences of tRNAs, restoring functionality. It has also been linked to the production of RNA-edited proteins that are incorporated into the polypeptide complexes of the respiration pathway. Therefore, it is highly probable that polypeptides synthesized from unedited RNAs would not function properly and hinder the activity of both mitochondria and plastids.
C-to-U RNA editing can create start and stop codons, but it cannot destroy existing start and stop codons. A cryptic start codon is created when the codon ACG is edited to be AUG.
RNA editing in viruses
Viruses (e.g., measles, mumps, or parainfluenza), especially viruses that have an RNA genome, have been shown to have evolved to utilize RNA modifications in many ways when taking over the host cell. Viruses are known to utilize RNA modifications in different parts of their infection cycle, from immune evasion to protein translation enhancement. RNA editing is used for stability and generation of protein variants. Viral RNAs are transcribed by a virus-encoded RNA-dependent RNA polymerase, which is prone to pausing and "stuttering" at certain nucleotide combinations. In addition, up to several hundred non-templated A's are added by the polymerase at the 3' end of nascent mRNA. These As help stabilize the mRNA. Furthermore, the pausing and stuttering of the RNA polymerase allows the incorporation of one or two Gs or As upstream of the translational codon. The addition of the non-templated nucleotides shifts the reading frame, which generates a different protein.
Additionally, the RNA modifications are shown to have both positive and negative effects on the replication and translation efficiency depending on the virus. For example, Courtney et al. showed that an RNA modification called 5-methylcytosine is added to the viral mRNA in infected host cells in order to enhance the protein translation of HIV-1 virus. The inhibition of the m5C modification on viral mRNA results in significant reduction in viral protein translation, but interestingly it has no effect on the expression of viral mRNAs in the cell. On the other hand, Lichinchi et al. showed that the N6-methyladenosine modification on ZIKV mRNA inhibits the viral replication.
Origin and Evolution of RNA editing
The RNA-editing system seen in the animal may have evolved from mononucleotide deaminases, which have led to larger gene families that include the apobec-1 and adar genes. These genes share close identity with the bacterial deaminases involved in nucleotide metabolism. The adenosine deaminase of E. coli cannot deaminate a nucleoside in the RNA; the enzyme's reaction pocket is too small for the RNA strand to bind to. However, this active site is widened by amino acid changes in the corresponding human analog genes, APOBEC1 and ADAR, allowing deamination.
The gRNA-mediated pan-editing in trypanosome mitochondria, involving templated insertion of U residues, is an entirely different biochemical reaction. The enzymes involved have been shown in other studies to be recruited and adapted from different sources. But the specificity of nucleotide insertion via the interaction between the gRNA and mRNA is similar to the tRNA editing processes in the animal and Acanthamoeba mitochondria. Eukaryotic ribose methylation of rRNAs by guide RNA molecules is a similar form of modification.
Thus, RNA editing evolved more than once. Several adaptive rationales for editing have been suggested. Editing is often described as a mechanism of correction or repair to compensate for defects in gene sequences. However, in the case of gRNA-mediated editing, this explanation does not seem possible because if a defect happens first, there is no way to generate an error-free gRNA-encoding region, which presumably arises by duplication of the original gene region. A more plausible alternative for the evolutionary origins of this system is through constructive neutral evolution, where the order of steps is reversed, with the gratuitous capacity for editing preceding the "defect".
Therapeutic mRNA Editing
Directing edits to correct mutated sequences was first proposed and demonstrated in 1995. This initial work used synthetic RNA antisense oligonucleotides complementary to a premature stop codon mutation in a dystrophin sequence to activate A-to-I editing of the stop codon to a read-through codon in a model Xenopus cell system. While this also led to nearby inadvertent A-to-I transitions, A to I (read as G) transitions can correct all three stop codons, but cannot create a stop codon. Therefore, the changes led to >25% correction of the targeted stop codon with read-through to a downstream luciferase reporter sequence. Follow-on work by Rosenthal achieved editing of a mutated mRNA sequence in mammalian cell culture by directing an oligonucleotide linked to a cytidine deaminase to correct a mutated cystic fibrosis sequence. More recently, CRISPR-Cas13 fused to deaminases has been employed to direct mRNA editing.
In 2022, therapeutic RNA editing using Cas7-11 was reported. It enables sufficiently targeted cuts, and an early version of it was used for in vitro editing in 2021.
Comparison to DNA editing
Unlike DNA editing, which is permanent, the effects of RNA editing (including potential off-target mutations in RNA) are transient and are not inherited. RNA editing is therefore considered to be less risky. Furthermore, it may only require a guide RNA by using the ADAR protein already found in humans and many other eukaryotes' cells instead of needing to introduce a foreign protein into the body.
See also
DNA editing
Epigenome editing
NcRNA therapy
References
Gene expression
RNA splicing
RNA | RNA editing | [
"Chemistry",
"Engineering",
"Biology"
] | 5,115 | [
"Biological engineering",
"Gene expression",
"Genetic engineering",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
2,826,751 | https://en.wikipedia.org/wiki/Project%20portfolio%20management | Project portfolio management (PPM) is the centralized management of the processes, methods, and technologies used by project managers and project management offices (PMOs) to analyze and collectively manage current or proposed projects based on numerous key characteristics. The objectives of PPM are to determine the optimal resource mix for delivery and to schedule activities to best achieve an organization's operational and financial goals, while honouring constraints imposed by customers, strategic objectives, or external real-world factors. Standards for Portfolio Management include Project Management Institute's framework for project portfolio management, Management of Portfolios by Office of Government Commerce and the PfM² Portfolio Management Methodology by the PM² Foundation.
Key capabilities
PPM provides program and project managers in large, program/project-driven organizations with the capabilities needed to manage the time, resources, skills, and budgets necessary to accomplish all interrelated tasks. It provides a framework for issue resolution and risk mitigation, as well as the centralized visibility to help planning and scheduling teams to identify the fastest, cheapest, or most suitable approach to deliver projects and programs.
Pipeline management
Pipeline management involves steps to ensure that an adequate number of project proposals are generated and evaluated to determine whether (and how) a set of projects in the portfolio can be executed with finite development resources in a specified time. There are three major sub-components to pipeline management: ideation, work intake processes, and Phase-Gate reviews. Fundamental to pipeline management is the ability to align the decision-making process for estimating and selecting new capital investment projects with the strategic plan.
Resource manager
The focus on the efficient and effective deployment of an organization's resources where and when they are needed. These can include financial resources, inventory, human resources, technical skills, production, and design. In addition to project-level resource allocation, users can also model 'what-if' resource scenarios, and extend this view across the portfolio.
Change control
The capture and prioritization of change requests that can include new requirements, features, functions, operational constraints, regulatory demands, and technical enhancements. PPM provides a central repository for these change requests and the ability to match available resources to evolving demand within the financial and operational constraints of individual projects.
Financial management
With PPM, the Office of Finance can improve its accuracy in estimating and managing the financial resources of a project or group of projects. In addition, the value of projects can be demonstrated in relation to the strategic objectives and priorities of the organization through financial controls, and progress can be assessed through earned value and other project financial techniques.
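The earned value techniques mentioned above boil down to a few standard ratios, such as the cost performance index (CPI = EV/AC) and the schedule performance index (SPI = EV/PV). A minimal Python sketch with invented figures:

planned_value = 120_000   # budgeted cost of work scheduled to date (invented)
earned_value = 100_000    # budgeted cost of work actually performed (invented)
actual_cost = 110_000     # actual cost incurred to date (invented)

cost_performance_index = earned_value / actual_cost        # below 1.0 means over budget
schedule_performance_index = earned_value / planned_value  # below 1.0 means behind schedule

print(f"CPI = {cost_performance_index:.2f}, SPI = {schedule_performance_index:.2f}")
# CPI = 0.91, SPI = 0.83 -> this project is both over budget and behind schedule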
Risk management
An analysis of the risk sensitivities residing within each project, as the basis for determining confidence levels across the portfolio. The integration of cost and schedule risk management with techniques for determining contingency and risk response plans, enable organizations to gain an objective view of project uncertainties. At the portfolio level, risk management enables organizations to protect portfolio investments and balance the level of risk in the portfolio.
The history of project portfolio management
The roots of project portfolio management can be traced back to financial theories that emerged in the 1950s, often linked with the pioneering work of Harry Markowitz, which was later recognized with a Nobel Prize. In essence, portfolio theories underline the importance of coordinating diverse elements to mitigate collective investment risks. These theories enable the optimization of portfolio benefits, the effective utilization and cultivation of limited resources, and the proper consideration of portfolio stakeholders.
Enterprise project portfolio management
Enterprise project portfolio management (EPPM) is a top-down approach to managing all project-intensive work and resources across the enterprise. This contrasts with the traditional approach of combining manual processes, desktop project tools, and PPM applications for each project portfolio environment.
Business drivers for EPPM
The PPM landscape is evolving rapidly as a result of the growing preference for managing multiple capital investment initiatives from a single, enterprise-wide system. This more centralized approach, and resulting 'single version of the truth' for project and project portfolio information, provides the transparency of performance needed by management to monitor progress versus the strategic plan.
The key aims of EPPM can be summarized as follows:
Prioritize the right projects and programs: EPPM can guide decision-makers to strategically prioritize, plan, and control enterprise portfolios. It also ensures the organization continues to increase productivity and on-time delivery - adding value, strengthening performance, and improving results.
Eliminate surprises: formal portfolio project oversight provides managers and executives with a process to identify potential problems earlier in the project lifecycle, and the visibility to take corrective action before they impact financial results.
Build contingencies into the overall portfolio: flexibility often exists within individual projects but, by integrating contingency planning across the entire portfolio of investments, organizations can have greater flexibility around how, where, and when they need to allocate resources, alongside the flexibility to adjust those resources in response to a crisis.
Maintain response flexibility: with in-depth visibility into resource allocation, organizations can quickly respond to escalating emergencies by maneuvering resources from other activities, while calculating the impact this will have on the wider business.
Do more with less: organizations can systematically review project management processes, cut out inefficiencies, automate workflows, and ensure a consistent approach to all projects, programs, and portfolios while reducing costs.
Ensure informed decisions and governance: by bringing together all project collaborators, data points, and processes in a single, integrated solution, a unified view of project, program, and portfolio status can be achieved within a framework of rigorous control and governance to ensure all projects consistently adhere to business objectives.
Extend best practice enterprise-wide: organizations can continuously vet project management processes and capture best practices, providing efficiency as a result.
Understand future resource needs: by aligning the right resources to the right projects at the right time, organizations can ensure individual resources are fully leveraged and requirements are clearly understood. EPPM software also allows an organization to establish complete project capacity.
Project portfolio optimization
A key result of PPM is to decide which projects to fund in an optimal manner. Project Portfolio Optimization (PPO) is the effort to make the best decisions possible under these conditions.
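One simple way to formalise this funding decision is as a 0/1 knapsack problem: choose the subset of candidate projects that maximises expected value without exceeding the budget. The sketch below uses brute-force enumeration and invented project data; real PPO models usually add further constraints such as shared resources, dependencies, and risk:

from itertools import combinations

# (name, cost, expected value) -- all figures invented for illustration
projects = [("A", 40, 70), ("B", 30, 45), ("C", 25, 40), ("D", 50, 60)]
budget = 80

best_value, best_set = 0, ()
for r in range(len(projects) + 1):
    for subset in combinations(projects, r):
        cost = sum(p[1] for p in subset)
        value = sum(p[2] for p in subset)
        if cost <= budget and value > best_value:
            best_value, best_set = value, subset

print([p[0] for p in best_set], best_value)   # ['A', 'B'] 115 -> fund A and B within the 80-unit budget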
See also
Aggregate project plan
Comparison of project-management software
Project management
Project management software
Project management simulation
References
Further reading
Fister Gale, Sarah (2011), Prepare for the Unexpected: Investment Planning in Asset-Intensive Industries, Economist Intelligence Unit.
Management Square, What is Project Portfolio Management ?
Skaf, Mazen A. "Portfolio management in an upstream oil and gas organization." Interfaces 29.6 (1999): 84-104.
Brand management
Information technology management
Product management
Project management by type
Corporate development
Management cybernetics
pt:Gerenciamento de programas de projetos#Gerenciamento de Portfólio de Projetos | Project portfolio management | [
"Technology"
] | 1,382 | [
"Information technology",
"Information technology management"
] |
2,826,888 | https://en.wikipedia.org/wiki/Zinc%20iodide | Zinc iodide is the inorganic compound with the formula ZnI2. It exists both in anhydrous form and as a dihydrate. Both are white and readily absorb water from the atmosphere. It has no major application.
Preparation
It can be prepared by the direct reaction of zinc and iodine in water or refluxing ether:
Zn + I2 → ZnI2
Absent a solvent, the elements do not combine directly at room temperature.
Structure as solid, gas, and in solution
The structure of solid ZnI2 is unusual relative to the dichloride. While zinc centers are tetrahedrally coordinated, as in ZnCl2, groups of four of these tetrahedra share three vertices to form “super-tetrahedra” of composition {Zn4I10}, which are linked by their vertices to form a three-dimensional structure. These "super-tetrahedra" are similar to the P4O10 structure.
Molecular ZnI2 is linear as predicted by VSEPR theory with a Zn-I bond length of 238 pm.
In aqueous solution the following have been detected: Zn(H2O)62+, [ZnI(H2O)5]+, tetrahedral ZnI2(H2O)2, ZnI3(H2O)−, and ZnI42−.
Applications
Zinc iodide is often used as an X-ray-opaque penetrant in industrial radiography to improve the contrast between damaged and intact regions of a composite.
United States patent 4,109,065 describes a rechargeable aqueous zinc-halogen cell that includes an aqueous electrolytic solution containing a zinc salt selected from the class consisting of zinc bromide, zinc iodide, and mixtures thereof, in both positive and negative electrode compartments.
In combination with osmium tetroxide, ZnI2 is used as a stain in electron microscopy.
As a Lewis acid, zinc iodide catalyzes the conversion of methanol to triptane and hexamethylbenzene.
It can be used as a topical antiseptic.
References
zinc
Metal halides
iodide | Zinc iodide | [
"Chemistry"
] | 460 | [
"Inorganic compounds",
"Metal halides",
"Salts"
] |
2,827,226 | https://en.wikipedia.org/wiki/Arbiter%20%28electronics%29 | Arbiters are electronic devices that allocate access to shared resources.
Bus arbiter
There are multiple ways to perform a computer bus arbitration, with the most popular varieties being:
dynamic centralized parallel where one central arbiter is used for all masters as discussed in this article;
centralized serial (or "daisy chain") where, upon accessing the bus, the active master passes the opportunity to the next one. In essence, each connected master contains its own arbiter;
distributed arbitration by self-selection (distributed bus arbitration) where the access is self-granted based on the decision made locally by using information from other masters;
distributed arbitration by collision detection where each master tries to access the bus on its own, but detects conflicts and retries the failed operations.
A bus arbiter is a device used in a multi-master bus system to decide which bus master will be allowed to control the bus for each bus cycle.
The most common kind of bus arbiter is the memory arbiter in a system bus system.
A memory arbiter is a device used in a shared memory system to decide, for each memory cycle, which CPU will be allowed to access that shared memory.
Some atomic instructions depend on the arbiter to prevent other CPUs from reading memory "halfway through" atomic read-modify-write instructions.
A memory arbiter is typically integrated into the memory controller/DMA controller.
Some systems, such as conventional PCI, have a single centralized bus arbitration device that one can point to as "the" bus arbiter, which is usually integrated into the chipset.
Other systems use decentralized bus arbitration, where all the devices cooperate to decide who goes next.
When every CPU connected to the memory arbiter has synchronized memory access cycles, the memory arbiter can be designed as a synchronous arbiter.
Otherwise the memory arbiter must be designed as an asynchronous arbiter.
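A behavioural model helps make centralized arbitration concrete. The Python sketch below implements a simple synchronous round-robin policy, which is one common choice (the text above does not prescribe a particular policy); it grants at most one requester per cycle:

class RoundRobinArbiter:
    """Grants the bus to one of n requesters per cycle, rotating priority so that
    the most recently granted requester has the lowest priority next time."""

    def __init__(self, n):
        self.n = n
        self.last_granted = n - 1   # so requester 0 has highest priority first

    def arbitrate(self, requests):
        # requests is a list of booleans, one per requester
        for offset in range(1, self.n + 1):
            candidate = (self.last_granted + offset) % self.n
            if requests[candidate]:
                self.last_granted = candidate
                return candidate
        return None   # no requester this cycle

arb = RoundRobinArbiter(3)
print(arb.arbitrate([True, True, False]))   # 0
print(arb.arbitrate([True, True, False]))   # 1
print(arb.arbitrate([True, True, False]))   # 0 again (requester 2 is idle)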
Asynchronous arbiters
An important form of arbiter is used in asynchronous circuits to select the order of access to a shared resource among asynchronous requests. Its function is to prevent two operations from occurring at once when they should not. For example, in a computer that has multiple CPUs or other devices accessing computer memory, and has more than one clock, the possibility exists that requests from two unsynchronized sources could come in at nearly the same time. "Nearly" can be very close in time, in the sub-femtosecond range. The memory arbiter must then decide which request to service first. Unfortunately, it is not possible to do this in a fixed time [Anderson 1991].
Asynchronous arbiters and metastability
Arbiters break ties. Like a flip-flop circuit, an arbiter has two stable states corresponding to the two choices. If two requests arrive at an arbiter within a few picoseconds (today, femtoseconds) of each other, the circuit may become meta-stable before reaching one of its stable states to break the tie. Classical arbiters are specially designed not to oscillate wildly when meta-stable and to decay from a meta-stability as rapidly as possible, typically by using extra power. The probability of not having reached a stable state decreases exponentially with time after inputs have been provided.
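This exponential decay is commonly summarised by a first-order synchronizer model in which the mean time between failures grows as MTBF = exp(tr/τ) / (Tw · fc · fd), where τ is the settling time constant, Tw the metastability capture window, fc the clock frequency and fd the asynchronous event rate. The parameter values in the Python sketch below are invented round numbers, used only to show the arithmetic:

import math

tau = 20e-12       # metastability settling time constant, seconds (invented value)
t_window = 30e-12  # metastability capture window Tw, seconds (invented value)
f_clock = 500e6    # clock frequency, Hz (invented value)
f_data = 100e6     # asynchronous event rate, Hz (invented value)

def mtbf(resolution_time):
    """Mean time between synchronizer failures for a given settling time allowance."""
    return math.exp(resolution_time / tau) / (t_window * f_clock * f_data)

for t_r in (0.5e-9, 1.0e-9, 2.0e-9):
    print(f"t_r = {t_r * 1e9:.1f} ns -> MTBF ~ {mtbf(t_r):.3e} s")
# Each extra nanosecond of settling time multiplies the MTBF by exp(1e-9/20e-12), about 5e21,
# which is why waiting a few gate delays makes failures vanishingly rare in practice.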
A reliable solution to this problem was found in the mid-1970s. Although an arbiter that makes a decision in a fixed time is not possible, one that sometimes takes a little longer in the hard case (close calls) can be made to work. It is necessary to use a multistage synchronization circuit that detects that the arbiter has not yet settled into a stable state. The arbiter then delays processing until a stable state has been achieved. In theory, the arbiter can take an arbitrarily long time to settle (see Buridan's principle), but in practice, it seldom takes more than a few gate delay times. The classic paper is [Kinniment and Woods 1976], which describes how to build a "3 state flip flop" to solve this problem, and [Ginosar 2003], a caution to engineers on common mistakes in arbiter design.
This result is of considerable practical importance, as multiprocessor computers would not work reliably without it. The first multiprocessor computers date from the late 1960s, predating the development of reliable arbiters. Some early multiprocessors with independent clocks for each processor suffered from arbiter race conditions, and thus unreliability. Today, this is no longer a problem.
Synchronous arbiters
Arbiters are used in synchronous contexts as well in order to allocate access to a shared resource. A wavefront arbiter is an example of a synchronous arbiter that is present in one type of large network switch.
References
Sources
D.J. Kinniment and J.V. Woods. Synchronization and arbitration circuits in digital systems. Proceedings IEE. October 1976.
Carver Mead and Lynn Conway. Introduction to VLSI Systems Addison-Wesley. 1979.
Ran Ginosar. "Fourteen Ways to Fool Your Synchronizer" ASYNC 2003.
J. Anderson and M. Gouda, "A New Explanation of the Glitch Phenomenon ", Acta Informatica, Vol. 28, No. 4, pp. 297–309, April 1991.
External links
Digital Logic Metastability
Metastability Performance of Clocked FIFOs
The 'Asynchronous' Bibliography
Efficient Self-Timed Interfaces for Crossing Clock Domains
Electrical circuits | Arbiter (electronics) | [
"Engineering"
] | 1,197 | [
"Electrical engineering",
"Electronic engineering",
"Electrical circuits"
] |
2,827,371 | https://en.wikipedia.org/wiki/Immerman%E2%80%93Szelepcs%C3%A9nyi%20theorem | In computational complexity theory, the Immerman–Szelepcsényi theorem states that nondeterministic space complexity classes are closed under complementation. It was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function s(n) ≥ log n. The result is equivalently stated as NL = co-NL; although this is the special case when s(n) = log n, it implies the general theorem by a standard padding argument. The result solved the second LBA problem.
In other words, if a nondeterministic machine can solve a problem, another machine with the same resource bounds can solve its complement problem (with the yes and no answers reversed) in the same asymptotic amount of space. No similar result is known for the time complexity classes, and indeed it is conjectured that NP is not equal to co-NP.
The principle used to prove the theorem has become known as inductive counting. It has also been used to prove other theorems in computational complexity, including the closure of LOGCFL under complementation and the existence of error-free randomized logspace algorithms for USTCON.
Proof
We prove here that NL = co-NL. The theorem is obtained from this special case by a padding argument.
The st-connectivity problem asks, given a digraph G and two vertices s and t, whether there is a directed path from s to t in G. This problem is NL-complete, therefore its complement st-non-connectivity is co-NL-complete. It suffices to show that st-non-connectivity is in NL. This proves co-NL ⊆ NL, and by complementation, NL ⊆ co-NL.
We fix a digraph G, a source vertex s, and a target vertex t. We denote by Rk the set of vertices which are reachable from s in at most k steps. Note that if t is reachable from s, it is reachable in at most n-1 steps, where n is the number of vertices, therefore we are reduced to testing whether t ∉ Rn-1.
We remark that R0 = { s }, and Rk+1 is the set of vertices v which are either in Rk, or the target of an edge w → v where w is in Rk. This immediately gives an algorithm to decide t ∈ Rn-1, by successively computing R1, …, Rn-1. However, this algorithm uses too much space to solve the problem in NL, since storing a set Rk requires one bit per vertex.
The crucial idea of the proof is that instead of computing Rk+1 from Rk, it is possible to compute the size of Rk+1 from the size of Rk, with the help of non-determinism. We iterate over vertices and increment a counter for each vertex that is found to belong to Rk+1. The problem is how to determine whether v ∈ Rk+1 for a given vertex v, when we only have the size of Rk available.
To this end, we iterate over vertices w, and for each w, we non-deterministically guess whether w ∈ Rk. If we guess w ∈ Rk, and v = w or there is an edge w → v, then we determine that v belongs to Rk+1. If this fails for all vertices w, then v does not belong to Rk+1.
Thus, the computation that determines whether v belongs to Rk+1 splits into branches for the different guesses of which vertices belong to Rk. A mechanism is needed to make all of these branches abort (reject immediately), except the one where all the guesses were correct. For this, when we have made a “yes-guess” that w ∈ Rk, we check this guess, by non-deterministically looking for a path from s to w of length at most k. If this check fails, we abort the current branch. If it succeeds, we increment a counter of “yes-guesses”. On the other hand, we do not check the “no-guesses” that w ∉ Rk (this would require solving st-non-connectivity, which is precisely the problem that we are solving in the first place). However, at the end of the loop over w, we check that the counter of “yes-guesses” matches the size of Rk, which we know. If there is a mismatch, we abort. Otherwise, all the “yes-guesses” were correct, and there was exactly the right number of them, thus all “no-guesses” were correct as well.
This concludes the computation of the size of Rk+1 from the size of Rk. Iteratively, we compute the sizes of R1, R2, …, Rn-2. Finally, we check whether t ∈ Rn-1, which is possible from the size of Rn-2 by the sub-algorithm that is used inside the computation of the size of Rk+1.
The following pseudocode summarizes the algorithm:
function verify_reachable(G, s, w, k)
// Verifies that w ∈ Rk. If this is not the case, aborts
// the current computation branch, rejecting the input.
if s = w then
return
c ← s
repeat k times
// Aborts if there is no edge from c, otherwise
// non-deterministically branches
guess an edge c → d in G
c ← d
if c = w then
return
// We did not guess a path.
reject
function is_reachable(G, s, v, k, S)
// Assuming that Rk has size S, determines whether v ∈ Rk+1.
reachable ← false
yes_guesses ← 0 // counter of yes-guesses w ∈ Rk
for each vertex w of G do
// Guess whether w ∈ Rk
guess a boolean b
if b then
verify_reachable(G, s, w, k)
yes_guesses += 1
if v = w or there is an edge w → v in G then
reachable ← true
if yes_guesses ≠ S then
reject // wrong number of yes-guesses
return reachable
function st_non_connectivity(G, s, t)
n ← vertex_count(G)
// Size of Rk, initially 1 because R0 = {s}
S ← 1
for k from 0 to n-3 do
S' ← 0 // size of Rk+1
for each vertex v of G do
if is_reachable(G, s, v, k, S) then
S' += 1
S ← S'
return not is_reachable(G, s, t, n-2, S)
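The pseudocode above is inherently non-deterministic and cannot be run as written, but its inductive-counting skeleton can be illustrated deterministically by replacing every guess with an explicit bounded-depth search. The Python sketch below mirrors the structure (computing |R0|, |R1|, … and finally testing membership of t); it is only an illustration: a deterministic program has no need for the counting trick, and unlike the NL algorithm it does not run in logarithmic space:

def reachable_within(graph, s, w, k):
    """Deterministic stand-in for verify_reachable: is there a path s -> w of
    length at most k? (graph maps each vertex to its list of successors)"""
    frontier = {s}
    for _ in range(k):
        if w in frontier:
            return True
        frontier = frontier | {d for c in frontier for d in graph[c]}
    return w in frontier

def st_non_connectivity(graph, s, t):
    vertices = list(graph)
    n = len(vertices)
    size = 1                       # |R0| = |{s}|
    for k in range(n - 2):         # compute |R1|, ..., |R(n-2)|
        new_size = 0
        for v in vertices:
            # v is in R(k+1) iff some w in Rk has w == v or an edge w -> v
            if any(reachable_within(graph, s, w, k) and (v == w or v in graph[w])
                   for w in vertices):
                new_size += 1
        size = new_size            # carried along only to mirror S in the pseudocode
    return not reachable_within(graph, s, t, n - 1)

g = {"s": ["a"], "a": ["b"], "b": [], "t": ["s"]}   # no path from s to t
print(st_non_connectivity(g, "s", "t"))             # True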
Logspace hierarchy
As a corollary, in the same article, Immerman proved that, using descriptive complexity's equality between NL and FO(Transitive Closure), the logarithmic hierarchy, i.e. the languages decided by an alternating Turing machine in logarithmic space with a bounded number of alternations, is the same class as NL.
See also
Notes
References
External links
Lance Fortnow, Foundations of Complexity, Lesson 19: The Immerman–Szelepcsenyi Theorem. Accessed 09/09/09.
Structural complexity theory
Mathematical theorems in theoretical computer science
Articles containing proofs | Immerman–Szelepcsényi theorem | [
"Mathematics"
] | 1,601 | [
"Articles containing proofs",
"Mathematical theorems",
"Mathematical problems",
"Mathematical theorems in theoretical computer science"
] |
28,868,152 | https://en.wikipedia.org/wiki/Jacobi%20operator | A Jacobi operator, also known as Jacobi matrix, is a symmetric linear operator acting on sequences which is given by an infinite tridiagonal matrix. It is commonly used to specify systems of orthonormal polynomials over a finite, positive Borel measure. This operator is named after Carl Gustav Jacob Jacobi.
The name derives from a theorem from Jacobi, dating to 1848, stating that every symmetric matrix over a principal ideal domain is congruent to a tridiagonal matrix.
Self-adjoint Jacobi operators
The most important case is the one of self-adjoint Jacobi operators acting on the Hilbert space of square summable sequences over the positive integers, ℓ²(ℕ). In this case it is given by
(Jψ)(n) = a(n)ψ(n+1) + b(n)ψ(n) + a(n−1)ψ(n−1) for n > 1, and (Jψ)(1) = a(1)ψ(2) + b(1)ψ(1),
where the coefficients are assumed to satisfy a(n) > 0 and b(n) ∈ ℝ.
The operator will be bounded if and only if the coefficients are bounded.
There are close connections with the theory of orthogonal polynomials. In fact, the solution p(x, n) of the recurrence relation
a(n) p(x, n) = (x − b(n)) p(x, n−1) − a(n−1) p(x, n−2), with p(x, −1) = 0 and p(x, 0) = 1,
is a polynomial of degree n and these polynomials are orthonormal with respect to the spectral measure corresponding to the first basis vector δ1.
This recurrence relation is also commonly written as
x p(x, n) = a(n+1) p(x, n+1) + b(n+1) p(x, n) + a(n) p(x, n−1).
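As a concrete illustration of the recurrence, take the constant coefficients a(n) = 1/2 and b(n) = 0 (an arbitrary choice made only for this example). The resulting polynomials are the Chebyshev polynomials of the second kind, which are orthonormal with respect to the semicircle weight (2/π)√(1 − x²) on [−1, 1]. A short numerical check in Python:

import numpy as np

# Free Jacobi matrix: a(n) = 1/2, b(n) = 0 (an assumption made only for this example).
a, b = 0.5, 0.0

def recurrence_polys(n_max):
    """Generate p_0, ..., p_{n_max} from x*p_n = a*p_{n+1} + b*p_n + a*p_{n-1}."""
    x = np.polynomial.Polynomial([0.0, 1.0])
    polys = [np.polynomial.Polynomial([1.0])]          # p_0 = 1
    polys.append((x - b) * polys[0] / a)               # p_1
    for _ in range(1, n_max):
        polys.append(((x - b) * polys[-1] - a * polys[-2]) / a)
    return polys

polys = recurrence_polys(4)

# Check orthonormality against the semicircle weight by simple numerical quadrature.
xs = np.linspace(-1.0, 1.0, 200001)
weight = (2.0 / np.pi) * np.sqrt(1.0 - xs**2)
gram = np.array([[np.trapz(p(xs) * q(xs) * weight, xs) for q in polys] for p in polys])
print(np.round(gram, 3))   # approximately the identity matrix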
Applications
It arises in many areas of mathematics and physics. The case a(n) = 1 is known as the discrete one-dimensional Schrödinger operator. It also arises in:
The Lax pair of the Toda lattice.
The three-term recurrence relationship of orthogonal polynomials, orthogonal over a positive and finite Borel measure.
Algorithms devised to calculate Gaussian quadrature rules, derived from systems of orthogonal polynomials (see the sketch following this list).
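The quadrature connection can be sketched explicitly: in the Golub–Welsch approach, the eigenvalues of the n×n truncated Jacobi matrix give the quadrature nodes, and the weights are proportional to the squared first components of the corresponding eigenvectors. The example below uses the Gauss–Hermite case (weight exp(−x²), off-diagonal entries √(k/2)), chosen only for illustration, and checks the result against NumPy's built-in rule:

import numpy as np

def gauss_hermite_from_jacobi(n):
    """Golub-Welsch: nodes and weights of the n-point Gauss-Hermite rule
    (weight exp(-x^2)) from the truncated Jacobi matrix with b(k)=0, a(k)=sqrt(k/2)."""
    off_diag = np.sqrt(np.arange(1, n) / 2.0)
    jacobi = np.diag(off_diag, 1) + np.diag(off_diag, -1)
    nodes, vectors = np.linalg.eigh(jacobi)
    weights = np.sqrt(np.pi) * vectors[0, :] ** 2   # mu_0 = integral of exp(-x^2) = sqrt(pi)
    return nodes, weights

nodes, weights = gauss_hermite_from_jacobi(5)
print(np.allclose(nodes, np.polynomial.hermite.hermgauss(5)[0]))    # True
print(np.allclose(weights, np.polynomial.hermite.hermgauss(5)[1]))  # True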
Generalizations
When one considers Bergman space, namely the space of square-integrable holomorphic functions over some domain, then, under general circumstances, one can give that space a basis of orthogonal polynomials, the Bergman polynomials. In this case, the analog of the tridiagonal Jacobi operator is a Hessenberg operator – an infinite-dimensional Hessenberg matrix. The system of orthogonal polynomials is given by
z p(z, n) = Σ_{k=0}^{n+1} D(k, n) p(z, k)
and p(z, 0) = 1. Here, D is the Hessenberg operator that generalizes the tridiagonal Jacobi operator J for this situation. Note that D is the right-shift operator on the Bergman space: that is, it is given by
(D f)(z) = z f(z).
The zeros of the Bergman polynomial p(z, n) correspond to the eigenvalues of the n×n principal submatrix of D. That is, the Bergman polynomials are the characteristic polynomials of the principal submatrices of the shift operator.
See also
Hankel matrix
References
External links
Operator theory
Hilbert spaces
Recurrence relations | Jacobi operator | [
"Physics",
"Mathematics"
] | 507 | [
"Hilbert spaces",
"Mathematical relations",
"Quantum mechanics",
"Recurrence relations"
] |
3,787,507 | https://en.wikipedia.org/wiki/Ferroics | In physics, ferroics is the generic name given to the study of ferromagnets, ferroelectrics, and ferroelastics.
Overview
The basis of ferroics is to understand the large changes in physical characteristics that occur over a very narrow temperature range. The changes in physical characteristics occur when phase transitions take place around some critical temperature value, normally denoted by Tc. Above this critical temperature, the crystal is in a nonferroic state and does not exhibit the physical characteristic of interest. Upon cooling the material down below Tc, it undergoes a spontaneous phase transition. Such a phase transition typically results in only a small deviation from the nonferroic crystal structure, but in altering the shape of the unit cell the point symmetry of the material is reduced. This breaking of symmetry is physically what allows the formation of the ferroic phase.
In ferroelectrics, upon lowering the temperature below Tc, a spontaneous dipole moment is induced along an axis of the unit cell. Although individual dipole moments can sometimes be small, the combined effect of many unit cells gives rise to an electric field over the bulk substance that is not insignificant. An important point about ferroelectrics is that they cannot exist in a centrosymmetric crystal. A centrosymmetric crystal is one where a lattice point at position r can be mapped onto a lattice point at −r.
Ferromagnetism is a phenomenon that most people are familiar with, and, as with ferroelectrics, the spontaneous magnetization of a ferromagnet can be attributed to a breaking of point symmetry in switching from the paramagnetic to the ferromagnetic phase. In this case, Tc is normally known as the Curie temperature.
In ferroelastic crystals, in going from the nonferroic (or prototypic phase) to the ferroic phase, a spontaneous strain is induced. An example of a ferroelastic phase transition is when the crystal structure spontaneously changes from a tetragonal structure (a square prism shape) to a monoclinic structure (a general parallelepiped). Here the shapes of the unit cell before and after the phase transition are different, and hence a strain is induced within the bulk.
In recent years, multiferroics have been attracting increased interest. These materials exhibit more than one ferroic property simultaneously in a single phase. A fourth ferroic order termed ferrotoroidic order has also been proposed.
See also
Piezoelectricity
Pyroelectricity
References
Condensed matter physics
Magnetic ordering
Phases of matter
Hysteresis | Ferroics | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 530 | [
"Physical phenomena",
"Phases of matter",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Condensed matter physics",
"Hysteresis",
"Matter"
] |
3,790,812 | https://en.wikipedia.org/wiki/Kochen%E2%80%93Specker%20theorem | In quantum mechanics, the Kochen–Specker (KS) theorem, also known as the Bell–KS theorem, is a "no-go" theorem proved by John S. Bell in 1966 and by Simon B. Kochen and Ernst Specker in 1967. It places certain constraints on the permissible types of hidden-variable theories, which try to explain the predictions of quantum mechanics in a context-independent way. The version of the theorem proved by Kochen and Specker also gave an explicit example for this constraint in terms of a finite number of state vectors.
The Kochen–Specker theorem is a complement to Bell's theorem. While Bell's theorem established nonlocality to be a feature of any hidden-variable theory that recovers the predictions of quantum mechanics, the Kochen–Specker theorem established contextuality to be an inevitable feature of such theories.
The theorem proves that there is a contradiction between two basic assumptions of the hidden-variable theories intended to reproduce the results of quantum mechanics: that all hidden variables corresponding to quantum-mechanical observables have definite values at any given time, and that the values of those variables are intrinsic and independent of the device used to measure them. The contradiction is caused by the fact that quantum-mechanical observables need not be commutative. It turns out to be impossible to simultaneously embed all the commuting subalgebras of the algebra of these observables in one commutative algebra, assumed to represent the classical structure of the hidden-variables theory, if the Hilbert space dimension is at least three.
The Kochen–Specker theorem excludes hidden-variable theories that assume that elements of physical reality can all be consistently represented simultaneously by the quantum mechanical Hilbert space formalism disregarding the context of a particular framework (technically, a projective decomposition of the identity operator) related to the experiment or analytical viewpoint under consideration. As succinctly worded by Isham and Butterfield, (under the assumption of a universal probabilistic sample space as in non-contextual hidden-variable theories) the Kochen–Specker theorem "asserts the impossibility of assigning values to all physical quantities whilst, at the same time, preserving the functional relations between them".
History
The KS theorem is an important step in the debate on the (in)completeness of quantum mechanics, boosted in 1935 by the criticism of the Copenhagen assumption of completeness in the article by Einstein, Podolsky and Rosen, creating the so-called EPR paradox. This paradox is derived from the assumption that a quantum-mechanical measurement result is generated in a deterministic way as a consequence of the existence of an element of physical reality assumed to be present before the measurement as a property of the microscopic object. In the EPR article it was assumed that the measured value of a quantum-mechanical observable can play the role of such an element of physical reality. As a consequence of this metaphysical supposition, the EPR criticism was not taken very seriously by the majority of the physics community. Moreover, in his reply Bohr had pointed to an ambiguity in the EPR article: it assumes that the distant measurement results would remain unchanged when the local measurement basis is changed, even though the overall experimental context would then be different.
Taking into account the contextuality stemming from the measurement arrangement would, according to Bohr, make invalid the EPR reasoning. It was subsequently observed by Einstein that Bohr's reliance on contextuality implies nonlocality ("spooky action at a distance"), and that, in consequence, one would have to accept incompleteness if one wanted to avoid nonlocality.
In the 1950s and 1960s two lines of development were open for those not averse to metaphysics, both lines improving on a "no-go" theorem presented by von Neumann, purporting to prove the impossibility of the hidden-variable theories yielding the same results as quantum mechanics. First, Bohm developed an interpretation of quantum mechanics, generally accepted as a hidden-variable theory underpinning quantum mechanics. The nonlocality of Bohm's theory induced Bell to assume that quantum reality is nonlocal, and that probably only local hidden-variable theories are in disagreement with quantum mechanics. More importantly, Bell managed to lift the problem from the level of metaphysics to physics by deriving an inequality, the Bell inequality, that is capable of being experimentally tested.
A second line is the Kochen–Specker one. The essential difference from Bell's approach is that the possibility of underpinning quantum mechanics by a hidden-variable theory is dealt with independently of any reference to locality or nonlocality, but instead a stronger restriction than locality is made, namely that hidden variables are exclusively associated with the quantum system being measured; none are associated with the measurement apparatus. This is called the assumption of non-contextuality. Contextuality is related here with incompatibility of quantum-mechanical observables, incompatibility being associated with mutual exclusiveness of measurement arrangements. The Kochen–Specker theorem states that no non-contextual hidden-variable model can reproduce the predictions of quantum theory when the dimension of the Hilbert space is three or more.
Bell published a proof of the Kochen–Specker theorem in 1966, in an article which had been submitted to a journal earlier than his famous Bell-inequality article, but was lost on an editor's desk for two years. Considerably simpler proofs than the Kochen–Specker one were given later, amongst others, by Mermin and by Peres. However, many simpler proofs only establish the theorem for Hilbert spaces of higher dimension, e.g., from dimension four.
The first experimental test of contextuality was performed in 2000, and a version without detection, sharpness and compatibility loopholes was achieved in 2022.
Overview
The KS theorem explores whether it is possible to embed the set of quantum-mechanical observables into a set of classical quantities, in spite of the fact that all classical quantities are mutually compatible.
The first observation made in the Kochen–Specker article is that this is possible in a trivial way, namely, by ignoring the algebraic structure of the set of quantum-mechanical observables. Indeed, let pA(ak) be the probability that observable A has value ak; then the product ΠA pA(ak), taken over all possible observables A, is a valid joint probability distribution, yielding all probabilities of quantum-mechanical observables by taking marginals. Kochen and Specker note that this joint probability distribution is not acceptable, however, since it ignores all correlations between the observables. Thus, in quantum mechanics A² has value ak² if A has value ak, implying that the values of A and A² are highly correlated.
More generally, it is required by Kochen and Specker that for an arbitrary function f the value v(f(A)) of the observable f(A) satisfies
v(f(A)) = f(v(A)).
If A1 and A2 are compatible (commeasurable) observables, then, by the same token, we should have the following two equalities:
v(c1A1 + c2A2) = c1 v(A1) + c2 v(A2), with c1 and c2 real, and
v(A1A2) = v(A1) v(A2).
The first of these is a considerable weakening compared to von Neumann's assumption that this equality should hold independently of whether A1 and A2 are compatible or incompatible. Kochen and Specker were capable of proving that a value assignment is not possible even on the basis of these weaker assumptions. In order to do so, they restricted the observables to a special class, namely, so-called yes–no observables, having only values 0 and 1, corresponding to projection operators on the eigenvectors of certain orthogonal bases of a Hilbert space.
As long as the Hilbert space is at least three-dimensional, they were able to find a set of 117 such projection operators for which it is not possible to attribute to each of them, in an unambiguous way, either the value 0 or the value 1. Instead of the rather involved proof by Kochen and Specker, it is more illuminating to reproduce here one of the much simpler proofs given much later, which employs a smaller number of projection operators but only proves the theorem when the dimension of the Hilbert space is at least 4. It turns out that it is possible to obtain a similar result on the basis of a set of only 18 projection operators.
In order to do so, it is sufficient to realize that if u1, u2, u3 and u4 are the four orthogonal vectors of an orthogonal basis in the four-dimensional Hilbert space, then the projection operators P1, P2, P3, P4 on these vectors are all mutually commuting (and, hence, correspond to compatible observables, allowing a simultaneous attribution of values 0 or 1). Since
P1 + P2 + P3 + P4 = I,
it follows that
v(P1) + v(P2) + v(P3) + v(P4) = v(I) = 1.
But since each v(Pi) = 0 or 1, it follows from v(P1) + v(P2) + v(P3) + v(P4) = 1 that out of the four values one must be 1, while the other three must be 0.
Cabello, extending an argument developed by Kernaghan, considered 9 orthogonal bases, each basis corresponding to a column of the following table, in which the basis vectors are explicitly displayed. The bases are chosen in such a way that each projector appears in exactly two contexts, thus establishing functional relations between contexts.
Now the "no-go" theorem follows by making sure that the following is impossible: to place a value, either a 1 or a 0, into each compartment of the table above in such a way that:
(a) the value 1 appears exactly once per column, the other entries in the column being 0;
(b) equally colored compartments contain the same value – either both contain 1 or both contain 0.
As it happens, all we have to do now is ask the question, how many times should the value 1 appear in the table? On the one hand, (a) implies that 1 should appear 9 times: there are 9 columns and (a) says that 1 should appear exactly once per column. On the other hand, (b) implies that 1 should appear an even number of times: the compartments all come in equally colored pairs, and (b) says that if one member of a pair contains 1, then the other member must contain 1 as well. To repeat, (a) says that 1 appears 9 times, while (b) says that it appears an even number of times. Since 9 is not even, it follows that (a) and (b) are mutually contradictory; no distribution of 1s and 0s into the compartments could possibly satisfy both.
The usual proof of Bell's theorem (CHSH inequality) can also be converted into a simple proof of the KS theorem in dimension at least 4. Bell's setup involves four measurements with four outcomes (four pairs of a simultaneous binary measurement in each wing of the experiment) and four with two outcomes (the two binary measurements in each wing of the experiment, unaccompanied), thus 24 projection operators.
Remarks
Contextuality
In the Kochen–Specker article the possibility is discussed that the value attribution may be context-dependent, i.e. observables corresponding to equal vectors in different columns of the table need not have equal values because different columns correspond to different measurement arrangements. Since subquantum reality (as described by the hidden-variable theory) may be dependent on the measurement context, it is possible that relations between quantum-mechanical observables and hidden variables are just homomorphic rather than isomorphic. This would make obsolete the requirement of a context-independent value attribution. Hence, the KS theorem only excludes noncontextual hidden-variable theories. The possibility of contextuality has given rise to the so-called modal interpretations of quantum mechanics.
Different levels of description
By the KS theorem the impossibility is proven of Einstein's assumption that an element of physical reality is represented by a value of a quantum-mechanical observable. The value of a quantum-mechanical observable refers in the first place to the final position of the pointer of a measuring instrument, which comes into being only during the measurement, and which, for this reason, cannot play the role of an element of physical reality. Elements of physical reality, if existing, would seem to need a subquantum (hidden-variable) theory for their description rather than quantum mechanics. In later publications the Bell inequalities are discussed on the basis of hidden-variable theories in which the hidden variable is supposed to refer to a subquantum property of the microscopic object different from the value of a quantum-mechanical observable. This opens up the possibility of distinguishing different levels of reality described by different theories, which had already been practised by Louis de Broglie. For such more general theories the KS theorem is applicable only if the measurement is assumed to be a faithful one, in the sense that there is a deterministic relation between a subquantum element of physical reality and the value of the observable found on measurement.
See also
Quantum foundations
Quantum indeterminacy
References
External links
Carsten Held, The Kochen–Specker Theorem, Stanford Encyclopedia of Philosophy *
S. Kochen and E. P. Specker, The problem of hidden variables in quantum mechanics, Full text
Hidden variable theory
Theorems in quantum mechanics
No-go theorems | Kochen–Specker theorem | [
"Physics",
"Mathematics"
] | 2,750 | [
"Theorems in quantum mechanics",
"No-go theorems",
"Equations of physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Physics theorems"
] |
3,790,883 | https://en.wikipedia.org/wiki/Critical%20resolved%20shear%20stress | In materials science, critical resolved shear stress (CRSS) is the component of shear stress, resolved in the direction of slip, necessary to initiate slip in a grain. Resolved shear stress (RSS) is the shear component of an applied tensile or compressive stress resolved along a slip plane that is other than perpendicular or parallel to the stress axis. The RSS is related to the applied stress by a geometrical factor, , typically the Schmid factor:
where is the magnitude of the applied tensile stress, is the angle between the normal of the slip plane and the direction of the applied force, and is the angle between the slip direction and the direction of the applied force. The Schmid factor is most applicable to FCC single-crystal metals, but for polycrystal metals the Taylor factor has been shown to be more accurate. The CRSS is the value of resolved shear stress at which yielding of the grain occurs, marking the onset of plastic deformation. CRSS, therefore, is a material property and is not dependent on the applied load or grain orientation. The CRSS is related to the observed yield strength of the material by the maximum value of the Schmid factor:
CRSS is a constant for crystal families. Hexagonal close-packed crystals, for example, have three main families - basal, prismatic, and pyramidal - with different values for the critical resolved shear stress.
Slip systems and resolved shear stress
In crystalline metals, slip occurs in specific directions on crystallographic planes, and each combination of slip direction and slip plane will have its own Schmid factor. As an example, for a face-centered cubic (FCC) system the primary slip plane is {111} and primary slip directions exist within the <110> permutation families. The Schmid Factor for an axial applied stress in the direction, along the primary slip plane of , with the critical applied shear stress acting in the direction can be calculated by quickly determining if any of the dot product between the axial applied stress and slip plane, or dot product of axial applied stress and shear stress direction equal to zero. For the example cited above, the dot product of axial applied stress in the direction and shear stress resulting from the former in the direction yields a zero. For such a case, it is suitable to find a permutation of the family of the <110> direction. For the example completed below, the permutation direction for the shear stress slip direction has been chosen:
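As an illustration of this calculation, the following NumPy sketch computes the Schmid factor cos(φ)·cos(λ) for an FCC crystal loaded along the [001] axis, with slip on the (111) plane in the [-101] direction; these particular indices are chosen for illustration.

import numpy as np

def schmid_factor(loading, plane_normal, slip_dir):
    # cos(phi): angle between loading axis and slip-plane normal
    # cos(lambda): angle between loading axis and slip direction
    l, n, d = (np.asarray(v, float) for v in (loading, plane_normal, slip_dir))
    cos_phi = np.dot(l, n) / (np.linalg.norm(l) * np.linalg.norm(n))
    cos_lam = np.dot(l, d) / (np.linalg.norm(l) * np.linalg.norm(d))
    return abs(cos_phi * cos_lam)

# FCC example: tension along [001], slip system (111)[-101]
m = schmid_factor([0, 0, 1], [1, 1, 1], [-1, 0, 1])
print(round(m, 3))   # 0.408, i.e. 1/sqrt(6)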
In a single crystal sample, the macroscopic yield stress will be determined by the Schmid factor of the single grain. Thus, in general, different yield strengths will be observed for applied stresses along different crystallographic directions. In polycrystalline specimens, the yield strength of each grain is different depending on its maximum Schmid factor, which indicates the operational slip system(s). The macroscopically observed yield stress will be related to the material's CRSS by an average Schmid factor, which is roughly 1/3.06 for FCC and 1/2.75 for body-centered cubic (BCC) structures.
The onset of plasticity in polycrystals is influenced by the number of available slip systems to accommodate incompatibilities at the grain boundaries. In the case of two adjacent, randomly oriented grains, one grain will have a larger Schmid factor and thus a smaller yield stress. Under load, this "weaker" grain will yield prior to the "stronger" grain, and as it deforms a stress concentration will build up in the stronger grain near the boundary between them. This stress concentration will activate dislocation motion in the available glide planes. These dislocations are geometrically necessary to ensure that the strain in each grain is equivalent at the grain boundary, so that the compatibility criteria are satisfied. G. I. Taylor showed that a minimum of five active slip systems are required to accommodate an arbitrary deformation. In crystal structures with fewer than 5 active slip systems, such as hexagonal close-packed (HCP) metals, the specimen will exhibit brittle failure instead of plastic deformation.
Effects of temperature and solid solution strengthening
At lower temperatures, more energy (i.e. - larger applied stress) is required to activate some slip systems. This is particularly evident in BCC materials, in which not all 5 independent slip systems are thermally activated at temperatures below the ductile-to-brittle transition temperature, or DBTT, so BCC specimens therefore become brittle. In general BCC metals have higher critical resolved shear stress values compared to FCC. However, the relationship between the CRSS and temperature and strain rate is worth examining further.
To understand the observed relationship between stress and temperature, we first divide the critical resolved shear stress into the sum of two components: an athermal term τa and a thermally dependent term τ*, so that τCRSS = τa + τ*, where
τa can be attributed to the stresses involved with dislocation motion while dislocations move in long-range internal stress fields. These long-range stresses arise from the presence of other dislocations. τ*, however, is attributed to short-range internal stress fields that arise from defect atoms or precipitates within the lattice that are obstacles for dislocation glide. With increasing temperature, the dislocations within the material have sufficient energy to overcome these short-range stresses. This explains the trend in region I, where stress decreases with temperature. At the boundary between regions I and II, the term τ* is effectively zero and the critical resolved shear stress is completely described by the athermal term, i.e. long-range internal stress fields are still significant. In the third region, diffusive processes begin to play a significant role in plastic deformation of the material and so the critical resolved shear stress decreases once again with temperature. Within region three, the equation suggested earlier no longer applies. Region I has a temperature upper bound of approximately , while region III occurs at values , where Tm is the melting temperature of the material. Increasing the strain rate generally increases the critical resolved shear stress at constant temperature, as this increases the dislocation density in the material. Note that for intermediate temperatures, i.e. region II, there is a range where the strain rate has no effect on the stress. Increasing the strain rate does shift the curve to the right, as more energy is needed to balance the short-range stresses with the resulting increased dislocation density.
The thermal component, can be expressed in the following manner.
Where is the thermal component at 0 K and is the temperature at which the thermal energy is sufficient to overcome the obstacles causing stress, i.e. the temperature at the transition from 1 to 2. The above equation has been verified experimentally. In general, the CRSS increases as the homologous temperature decreases because it becomes energetically more costly to activate the slip systems, although this effect is much less pronounced in FCC.
Solid solution strengthening also increases the CRSS compared to a pure single component material because the solute atoms distort the lattice, preventing the dislocation motion necessary for plasticity. With dislocation motion inhibited, it becomes harder to activate the necessary 5 independent slip systems, so the material becomes stronger and more brittle.
References
Metallurgy
Continuum mechanics
Deformation (mechanics) | Critical resolved shear stress | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,487 | [
"Continuum mechanics",
"Deformation (mechanics)",
"Metallurgy",
"Classical mechanics",
"Materials science",
"nan"
] |
3,791,224 | https://en.wikipedia.org/wiki/Little%20hierarchy%20problem | In particle physics the little hierarchy problem in the Minimal Supersymmetric Standard Model (MSSM) is a refinement of the hierarchy problem. According to quantum field theory, the mass of the Higgs boson must be rather light for the electroweak theory to work. However, the loop corrections to the mass are naturally much greater; this is known as the hierarchy problem. New physical effects such as supersymmetry may in principle reduce the size of the loop corrections, making the theory natural. However, it is known from experiments that new physics such as superpartners does not occur at very low energy scales, so even if these new particles reduce the loop corrections, they do not reduce them enough to make the renormalized Higgs mass completely natural. The expected value of the Higgs mass is about 10% of the size of the loop corrections which shows that a certain "little" amount of fine-tuning seems necessary.
Particle physicists have different opinions as to whether the little hierarchy problem is serious.
Overview
By supersymmetrizing the Standard Model, one arrives at a hypothesized solution to the gauge hierarchy, or big hierarchy, problem in that supersymmetry guarantees cancellation of quadratic divergences to all orders in perturbation theory. The simplest supersymmetrization of the SM leads to the Minimal Supersymmetric Standard Model or MSSM. In the MSSM, each SM particle has a partner particle known as a super-partner or
sparticle. For instance, the left- and right-electron helicity components have scalar partner selectrons ẽ_L and ẽ_R respectively, whilst the eight colored gluons have eight colored spin-1/2 gluino superpartners. The MSSM Higgs sector must necessarily be expanded to include two doublets rather than one, leading to five physical Higgs particles h, H, A, H+ and H−, whilst three of the eight Higgs component fields are absorbed by the W and Z bosons to make them massive. The MSSM is actually
supported by three different sets of measurements which test for the presence of virtual superpartners:
the celebrated weak-scale measurements of the three gauge coupling strengths are just what is needed for gauge coupling unification at a scale Q ≈ 2×10^16 GeV,
the value of the top-quark mass mt ≈ 173 GeV falls squarely in the range needed to trigger a radiatively driven breakdown of electroweak symmetry, and
the measured value of mh ≈ 125 GeV falls within the narrow window of allowed values for the MSSM.
Nonetheless, verification of weak scale SUSY (WSS, SUSY with superpartner masses at or around the weak scale as characterized by m(W, Z, h) ≈ 100 GeV) requires the direct observation of at least some of the superpartners in sufficiently energetic colliding beam experiments. As recently as 2017, the CERN Large Hadron Collider, a p–p collider operating at centre-of-mass energy 13 TeV, had not found any evidence for superpartners. This has led to mass limits on the gluino, m(gluino) > 2 TeV, and on the lighter top squark, m(stop) > 1 TeV (within the context of certain simplified models that are assumed to make the experimental analysis more tractable). Along with these limits, the rather large measured value of mh ≈ 125 GeV seems to require TeV-scale, highly mixed top squarks. These combined measurements have raised concern about an emerging Little Hierarchy problem, characterized by m(weak) ≪ m(sparticle). Under the Little Hierarchy, one might expect the now log-divergent light Higgs mass to blow up to the sparticle mass scale unless one fine-tunes.
Status
In the MSSM, the light Higgs mass is calculated to be
where the mixing and loop contributions are below mh, but where, in most models, the soft SUSY breaking up-Higgs mass term m(Hu)^2 is driven to large, TeV-scale negative values (in order to break electroweak symmetry). Then, to maintain the measured value of mh = 125 GeV, one must tune the superpotential mass term μ to some large positive value. Alternatively, for natural SUSY, one may expect that m(Hu)^2 runs to small negative values, in which case both μ and m(Hu)^2 are of order the weak scale. This already leads to a prediction: since μ is supersymmetric and feeds mass to both SM particles (W, Z, h) and superpartners (higgsinos), it is expected from the natural MSSM that light higgsinos exist nearby to the weak scale. This simple realization has profound implications for WSS collider and dark matter searches.
Naturalness in the MSSM has historically been expressed in terms of the Z-boson mass, and indeed this approach leads to more stringent upper bounds on sparticle masses. By minimizing the (Coleman–Weinberg) scalar potential of the MSSM, one may relate the measured value of mZ = 91.2 GeV to the SUSY Lagrangian parameters:
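In the notation standard in the naturalness literature, this minimization condition is usually written as follows (the Σ terms denote the radiative corrections described just below):
\[
\frac{m_Z^2}{2} \;=\; \frac{m_{H_d}^2 + \Sigma_d^d - \left(m_{H_u}^2 + \Sigma_u^u\right)\tan^2\beta}{\tan^2\beta - 1} \;-\; \mu^2 .
\]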
Here, tan β ≈ 5–50 is the ratio vu/vd of the Higgs field vacuum expectation values and m(Hd)^2 is the down-Higgs soft breaking mass term. The Σu^u and Σd^d contain a variety of loop corrections labelled by indices i and j, the most important of which typically comes from the top squarks.
See also
MSSM Higgs mass
Mu problem
References
Supersymmetric quantum field theory | Little hierarchy problem | [
"Physics"
] | 1,150 | [
"Supersymmetric quantum field theory",
"Particle physics",
"Particle physics stubs",
"Supersymmetry",
"Symmetry"
] |
3,791,271 | https://en.wikipedia.org/wiki/DGP%20model | The Dvali–Gabadadze–Porrati(DGP) model is a model of gravity proposed by Gia Dvali, Gregory Gabadadze, and Massimo Porrati in 2000. The model is popular among some model builders, but has resisted being embedded into string theory.
Overview
The DGP model assumes the existence of a 4+1-dimensional Minkowski space, within which ordinary 3+1-dimensional Minkowski space is embedded. The model assumes an action consisting of two terms: One term is the usual Einstein–Hilbert action, which involves only the 4-D spacetime dimensions. The other term is the equivalent of the Einstein–Hilbert action, as extended to all 5 dimensions. The 4-D term dominates at short distances, and the 5-D term dominates at long distances.
The model was proposed in part in order to reproduce the cosmic acceleration of dark energy without any need for a small but non-zero vacuum energy density. But critics argue that this branch of the theory is unstable. However, the theory remains interesting because of Dvali's claim that the unusual structure of the graviton propagator makes non-perturbative effects important in a seemingly linear regime, such as the solar system. Because there is no four-dimensional, linearized effective theory that reproduces the DGP model for weak-field gravity, the theory avoids the vDVZ discontinuity that otherwise plagues attempts to write down a theory of massive gravity.
In 2008, Fang et al. argued that recent cosmological observations (including measurements of baryon acoustic oscillations by the Sloan Digital Sky Survey, and measurements of the cosmic microwave background and type Ia supernovae) are in direct conflict with the DGP cosmology unless a cosmological constant or some other form of dark energy is added. However, this negates the appeal of the DGP cosmology, which accelerates without needing to add dark energy.
See also
Kaluza–Klein theory
Randall–Sundrum model
Large extra dimensions
References
Theories of gravity
Quantum gravity | DGP model | [
"Physics"
] | 430 | [
"Theoretical physics",
"Unsolved problems in physics",
"Theories of gravity",
"Quantum gravity",
"Relativity stubs",
"Theory of relativity",
"Physics beyond the Standard Model"
] |
3,793,084 | https://en.wikipedia.org/wiki/POVM | In functional analysis and quantum information science, a positive operator-valued measure (POVM) is a measure whose values are positive semi-definite operators on a Hilbert space. POVMs are a generalization of projection-valued measures (PVM) and, correspondingly, quantum measurements described by POVMs are a generalization of quantum measurement described by PVMs (called projective measurements).
In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system (see purification of quantum state); analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system.
POVMs are the most general kind of measurement in quantum mechanics, and can also be used in quantum field theory. They are extensively used in the field of quantum information.
Definition
Let H denote a Hilbert space and (X, M) a measurable space, with M a Borel σ-algebra on X. A POVM is a function F defined on M whose values are positive bounded self-adjoint operators on H such that for every ψ ∈ H the map
E ↦ ⟨F(E)ψ, ψ⟩ is a non-negative countably additive measure on the σ-algebra M, and F(X) is the identity operator.
In quantum mechanics, the key property of a POVM is that it determines a probability measure on the outcome space, so that ⟨F(E)ψ, ψ⟩ can be interpreted as the probability of the event E when measuring a normalized quantum state ψ.
In the simplest case, in which X is a finite set, M is the power set of X and H is finite-dimensional, a POVM is equivalently a set of positive semi-definite Hermitian matrices {Fi} that sum to the identity matrix, Σi Fi = I.
A POVM differs from a projection-valued measure in that, for projection-valued measures, the values of are required to be orthogonal projections.
In the discrete case, the POVM element Fi is associated with the measurement outcome i, such that the probability of obtaining it when making a quantum measurement on the quantum state ρ is given by
Prob(i) = tr(ρ Fi),
where tr is the trace operator. When the quantum state being measured is a pure state |ψ⟩ this formula reduces to
Prob(i) = tr(|ψ⟩⟨ψ| Fi) = ⟨ψ|Fi|ψ⟩.
The discrete case of a POVM generalizes the simplest case of a PVM, which is a set {Pi} of orthogonal projectors that sum to the identity matrix: Σi Pi = I, with Pi Pj = δij Pi.
The probability formulas for a PVM are the same as for the POVM. An important difference is that the elements of a POVM are not necessarily orthogonal. As a consequence, the number of elements of the POVM can be larger than the dimension of the Hilbert space they act in. On the other hand, the number of elements of the PVM is at most the dimension of the Hilbert space.
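As a concrete numerical illustration, the following NumPy sketch builds the three-element "trine" POVM for a qubit (a standard example, chosen here for illustration), checks that its elements are positive semi-definite and sum to the identity, and evaluates the outcome probabilities tr(ρ Fi) for a pure test state:

import numpy as np

def ket(theta):
    # qubit state cos(theta)|0> + sin(theta)|1>
    return np.array([np.cos(theta), np.sin(theta)])

# Three symmetric "trine" states, equally spaced around a great circle of the Bloch sphere
trine = [ket(k * 2 * np.pi / 3) for k in range(3)]
# POVM elements F_k = (2/3)|psi_k><psi_k|  (rank-1, but not orthogonal projectors)
F = [(2 / 3) * np.outer(v, v) for v in trine]

assert np.allclose(sum(F), np.eye(2))                                # completeness: sum F_k = I
assert all(np.all(np.linalg.eigvalsh(Fk) >= -1e-12) for Fk in F)     # positivity of each element

rho = np.outer(ket(0.3), ket(0.3))                                   # a pure test state as a density matrix
probs = [np.trace(rho @ Fk).real for Fk in F]
print(np.round(probs, 3), "sum =", round(sum(probs), 3))             # probabilities tr(rho F_k), summing to 1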
Naimark's dilation theorem
Note: An alternate spelling of this is "Neumark's Theorem"
Naimark's dilation theorem shows how POVMs can be obtained from PVMs acting on a larger space. This result is of critical importance in quantum mechanics, as it gives a way to physically realize POVM measurements.
In the simplest case, of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, Naimark's theorem says that if {Fi} is a POVM acting on a Hilbert space HA, then there exists a PVM {Pi} acting on a larger Hilbert space HA′ and an isometry V from HA to HA′ such that for all i, Fi = V†PiV.
For the particular case of a rank-1 POVM, i.e., when for some (unnormalized) vectors , this isometry can be constructed as
and the PVM is given simply by . Note that here .
In the general case, the isometry and PVM can be constructed by defining , , and
Note that here , so this is a more wasteful construction.
In either case, the probability of obtaining outcome with this PVM, and the state suitably transformed by the isometry, is the same as the probability of obtaining it with the original POVM:
This construction can be turned into a recipe for a physical realisation of the POVM by extending the isometry into a unitary , that is, finding such that
for from 1 to . This can always be done.
The recipe for realizing the POVM described by on a quantum state is then to embed the quantum state in the Hilbert space , evolve it with the unitary , and make the projective measurement described by the PVM .
Post-measurement state
The post-measurement state is not determined by the POVM itself, but rather by the PVM that physically realizes it. Since there are infinitely many different PVMs that realize the same POVM, the operators alone do not determine what the post-measurement state will be. To see that, note that for any unitary the operators
will also have the property that , so that using the isometry
in the second construction above will also implement the same POVM. In the case where the state being measured is in a pure state , the resulting unitary takes it together with the ancilla to state
and the projective measurement on the ancilla will collapse to the state
on obtaining result . When the state being measured is described by a density matrix , the corresponding post-measurement state is given by
.
We see therefore that the post-measurement state depends explicitly on the unitary . Note that while is always Hermitian, generally, does not have to be Hermitian.
Another difference from the projective measurements is that a POVM measurement is in general not repeatable. If on the first measurement result was obtained, the probability of obtaining a different result on a second measurement is
,
which can be nonzero if and are not orthogonal. In a projective measurement these operators are always orthogonal and therefore the measurement is always repeatable.
An example: unambiguous quantum state discrimination
Suppose you have a quantum system with a 2-dimensional Hilbert space that you know is in either the state or the state , and you want to determine which one it is. If and are orthogonal, this task is easy: the set will form a PVM, and a projective measurement in this basis will determine the state with certainty. If, however, and are not orthogonal, this task is impossible, in the sense that there is no measurement, either PVM or POVM, that will distinguish them with certainty. The impossibility of perfectly discriminating between non-orthogonal states is the basis for quantum information protocols such as quantum cryptography, quantum coin flipping, and quantum money.
The task of unambiguous quantum state discrimination (UQSD) is the next best thing: to never make a mistake about whether the state is or , at the cost of sometimes having an inconclusive result. It is possible to do this with projective measurements. For example, if you measure the PVM , where is the quantum state orthogonal to , and obtain result , then you know with certainty that the state was . If the result was , then it is inconclusive. The analogous reasoning holds for the PVM , where is the state orthogonal to .
This is unsatisfactory, though, as you can't detect both and with a single measurement, and the probability of getting a conclusive result is smaller than with POVMs. The POVM that gives the highest probability of a conclusive outcome in this task is given by
where
Note that , so when outcome is obtained we are certain that the quantum state is , and when outcome is obtained we are certain that the quantum state is .
The probability of having a conclusive outcome is given by
when the quantum system is in state or with the same probability. This result is known as the Ivanović-Dieks-Peres limit, named after the authors who pioneered UQSD research.
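The following NumPy sketch makes the construction concrete; the two non-orthogonal states and the equal priors are illustrative choices, and the POVM elements follow the unambiguous-discrimination form summarized above:

import numpy as np

psi = np.array([1.0, 0.0])                           # first state
phi = np.array([np.cos(0.4), np.sin(0.4)])           # second, non-orthogonal state
overlap = abs(psi @ phi)

def perp(v):
    # qubit state orthogonal to v
    return np.array([-v[1], v[0]])

# Unambiguous-discrimination POVM: E_psi never fires on phi, E_phi never fires on psi
E_psi = np.outer(perp(phi), perp(phi)) / (1 + overlap)
E_phi = np.outer(perp(psi), perp(psi)) / (1 + overlap)
E_inc = np.eye(2) - E_psi - E_phi                    # inconclusive outcome

assert np.allclose(E_psi + E_phi + E_inc, np.eye(2))
assert min(np.linalg.eigvalsh(E_inc)) >= -1e-12      # all three elements positive semi-definite

print(round(float(phi @ E_psi @ phi), 12))           # 0: phi is never mistaken for psi
print(round(float(psi @ E_phi @ psi), 12))           # 0: psi is never mistaken for phi
success = 1 - (psi @ E_inc @ psi + phi @ E_inc @ phi) / 2
print(round(float(success), 3))                      # equals 1 - |<psi|phi>|, the IDP limit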
Since the POVMs are rank-1, we can use the simple case of the construction above to obtain a projective measurement that physically realises this POVM. Labelling the three possible states of the enlarged Hilbert space as , , and , we see that the resulting unitary takes the state to
and similarly it takes the state to
A projective measurement then gives the desired results with the same probabilities as the POVM.
This POVM has been used to experimentally distinguish non-orthogonal polarisation states of a photon. The realisation of the POVM with a projective measurement was slightly different from the one described here.
See also
SIC-POVM
Quantum measurement
Mathematical formulation of quantum mechanics
Density matrix
Quantum operation
Projection-valued measure
Vector measure
References
POVMs
K. Kraus, States, Effects, and Operations, Lecture Notes in Physics 190, Springer (1983).
A.S. Holevo, Probabilistic and statistical aspects of quantum theory, North-Holland Publ. Cy., Amsterdam (1982).
External links
Interactive demonstration about quantum state discrimination
Quantum information theory
Quantum measurement | POVM | [
"Physics"
] | 1,805 | [
"Quantum measurement",
"Quantum mechanics"
] |
30,340,342 | https://en.wikipedia.org/wiki/Bounded%20type%20%28mathematics%29 | In mathematics, a function defined on a region of the complex plane is said to be of bounded type if it is equal to the ratio of two analytic functions bounded in that region. But more generally, a function is of bounded type in a region if and only if is analytic on and has a harmonic majorant on where . Being the ratio of two bounded analytic functions is a sufficient condition for a function to be of bounded type (defined in terms of a harmonic majorant), and if is simply connected the condition is also necessary.
The class of all such on is commonly denoted and is sometimes called the Nevanlinna class for . The Nevanlinna class includes all the Hardy classes.
Functions of bounded type are not necessarily bounded, nor do they have a property called "type" which is bounded. The reason for the name is probably that when defined on a disc, the Nevanlinna characteristic (a function of distance from the centre of the disc) is bounded.
Clearly, if a function f is the ratio of two bounded functions, then it can be expressed as the ratio of two functions P and Q which are bounded by 1:
f = P/Q, with |P| ≤ 1 and |Q| ≤ 1 in the region. The logarithms of 1/|P| and of 1/|Q| are non-negative in the region, so
log|f| = log|P| − log|Q| ≤ log(1/|Q|).
The latter is the real part of an analytic function and is therefore harmonic, showing that log|f| has a harmonic majorant on Ω.
For a given region, sums, differences, and products of functions of bounded type are of bounded type, as is the quotient of two such functions as long as the denominator is not identically zero.
Examples
Polynomials are of bounded type in any bounded region. They are also of bounded type in the upper half-plane (UHP), because a polynomial f(z) of degree n can be expressed as a ratio of two analytic functions bounded in the UHP:
f(z) = P(z)/Q(z)
with
P(z) = f(z)/(z + i)^n and Q(z) = 1/(z + i)^n, both of which are bounded in the UHP since |z + i| ≥ 1 there.
The inverse of a polynomial is also of bounded type in a region, as is any rational function.
The function e^(iaz) is of bounded type in the UHP if and only if a is real. If a is positive the function itself is bounded in the UHP (so we can use Q(z) = 1), and if a is negative then the function equals 1/Q(z) with Q(z) = e^(−iaz).
Sine and cosine are of bounded type in the UHP. Indeed,
sin z = P(z)/Q(z)
with
P(z) = (e^(2iz) − 1)/(2i) and Q(z) = e^(iz) (and similarly cos z = ((e^(2iz) + 1)/2)/e^(iz)),
both of which are bounded in the UHP.
All of the above examples are of bounded type in the lower half-plane as well, using different P and Q functions. But the region mentioned in the definition of the term "bounded type" cannot be the whole complex plane unless the function is constant because one must use the same P and Q over the whole region, and the only entire functions (that is, analytic in the whole complex plane) which are bounded are constants, by Liouville's theorem.
Another example in the upper half-plane is a "Nevanlinna function", that is, an analytic function that maps the UHP to the closed UHP. If f(z) is of this type, then
f = P/Q,
where P and Q are the bounded functions:
P(z) = f(z)/(f(z) + i) and Q(z) = 1/(f(z) + i).
(This obviously applies as well to , that is, a function whose real part is non-negative in the UHP.)
Properties
For a given region, the sum, product, or quotient of two (non-null) functions of bounded type is also of bounded type. The set of functions of bounded type is an algebra over the complex numbers and is in fact a field.
Any function of bounded type in the upper half-plane (with a finite number of roots in some neighborhood of 0) can be expressed as a Blaschke product (an analytic function, bounded in the region, which factors out the zeros) multiplying the quotient where and are bounded by 1 and have no zeros in the UHP. One can then express this quotient as
where and are analytic functions having non-negative real part in the UHP. Each of these in turn can be expressed by a Poisson representation (see Nevanlinna functions):
where c and d are imaginary constants, p and q are non-negative real constants, and μ and ν are non-decreasing functions of a real variable (well behaved so the integrals converge). The difference q−p has been given the name "mean type" by Louis de Branges and describes the growth or decay of the function along the imaginary axis:
The mean type in the upper half-plane is the limit of a weighted average of the logarithm of the function's absolute value divided by distance from zero, normalized in such a way that the value for is 1:
If an entire function is of bounded type in both the upper and the lower half-plane then it is of exponential type equal to the higher of the two respective "mean types" (and the higher one will be non-negative). An entire function of order greater than 1 (which means that in some direction it grows faster than a function of exponential type) cannot be of bounded type in any half-plane.
We may thus produce a function of bounded type by using an appropriate exponential of z and exponentials of arbitrary Nevanlinna functions multiplied by i, for example:
Concerning the examples given above, the mean type of polynomials or their inverses is zero. The mean type of e^(iaz) in the upper half-plane is −a, while in the lower half-plane it is a. The mean type of sin z and cos z in both half-planes is 1.
Functions of bounded type in the upper half-plane with non-positive mean type and having a continuous, square-integrable extension to the real axis have the interesting property (useful in applications) that the integral (along the real axis)
(1/(2πi)) ∫ f(t)/(t − z) dt
equals f(z) if z is in the upper half-plane and zero if z is in the lower half-plane. This may be termed the Cauchy formula for the upper half-plane.
See also
De Branges space
Rolf Nevanlinna
References
Complex analysis
Special functions
Types of functions | Bounded type (mathematics) | [
"Mathematics"
] | 1,222 | [
"Functions and mappings",
"Special functions",
"Mathematical objects",
"Combinatorics",
"Mathematical relations",
"Types of functions"
] |
30,342,860 | https://en.wikipedia.org/wiki/Tangent%20half-angle%20substitution | In integral calculus, the tangent half-angle substitution is a change of variables used for evaluating integrals, which converts a rational function of trigonometric functions of into an ordinary rational function of by setting . This is the one-dimensional stereographic projection of the unit circle parametrized by angle measure onto the real line. The general transformation formula is:
The tangent of half an angle is important in spherical trigonometry and was sometimes known in the 17th century as the half tangent or semi-tangent. Leonhard Euler used it to evaluate the integral in his 1768 integral calculus textbook, and Adrien-Marie Legendre described the general method in 1817.
The substitution is described in most integral calculus textbooks since the late 19th century, usually without any special name. It is known in Russia as the universal trigonometric substitution, and also known by variant names such as half-tangent substitution or half-angle substitution. It is sometimes misattributed as the Weierstrass substitution. Michael Spivak called it the "world's sneakiest substitution".
The substitution
Introducing a new variable t = tan(x/2), sines and cosines can be expressed as rational functions of t, and dx can be expressed as the product of dt and a rational function of t, as follows:
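With t = tan(x/2), the standard forms of these expressions are:
\[
\sin x = \frac{2t}{1+t^2}, \qquad \cos x = \frac{1-t^2}{1+t^2}, \qquad dx = \frac{2}{1+t^2}\,dt .
\]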
Similar expressions can be written for tan x, cot x, sec x, and csc x.
Derivation
Using the double-angle formulas and and introducing denominators equal to one by the Pythagorean identity results in
Finally, since t = tan(x/2), differentiation rules imply
and thus
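Written out in full, with t = tan(x/2), these steps are:
\[
\sin x = 2\sin\tfrac{x}{2}\cos\tfrac{x}{2}
       = \frac{2\sin\tfrac{x}{2}\cos\tfrac{x}{2}}{\cos^2\tfrac{x}{2}+\sin^2\tfrac{x}{2}}
       = \frac{2\tan\tfrac{x}{2}}{1+\tan^2\tfrac{x}{2}}
       = \frac{2t}{1+t^2},
\qquad
\cos x = \cos^2\tfrac{x}{2}-\sin^2\tfrac{x}{2}
       = \frac{\cos^2\tfrac{x}{2}-\sin^2\tfrac{x}{2}}{\cos^2\tfrac{x}{2}+\sin^2\tfrac{x}{2}}
       = \frac{1-t^2}{1+t^2},
\]
\[
\frac{dt}{dx} = \tfrac{1}{2}\sec^2\tfrac{x}{2} = \frac{1+t^2}{2},
\qquad\text{so}\qquad dx = \frac{2\,dt}{1+t^2}.
\]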
Examples
Antiderivative of cosecant
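Applying the substitution formulas above (csc x = (1 + t²)/(2t) when t = tan(x/2)) gives:
\[
\int \csc x \, dx = \int \frac{1+t^2}{2t}\cdot\frac{2\,dt}{1+t^2}
 = \int \frac{dt}{t}
 = \ln|t| + C = \ln\left|\tan\tfrac{x}{2}\right| + C .
\]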
We can confirm the above result using a standard method of evaluating the cosecant integral by multiplying the numerator and denominator by csc x − cot x and performing the substitution u = csc x − cot x, which gives ∫ csc x dx = ln|csc x − cot x| + C.
These two answers are the same because csc x − cot x = (1 − cos x)/sin x = tan(x/2).
The secant integral may be evaluated in a similar manner.
A definite integral
In the first line, one cannot simply substitute for both limits of integration. The singularity (in this case, a vertical asymptote) of at must be taken into account. Alternatively, first evaluate the indefinite integral, then apply the boundary values.
By symmetry,
which is the same as the previous answer.
Third example: both sine and cosine
if
Geometry
As x varies, the point (cos x, sin x) winds repeatedly around the unit circle centered at (0, 0). The point
((1 − t²)/(1 + t²), 2t/(1 + t²))
goes only once around the circle as t goes from −∞ to +∞, and never reaches the point (−1, 0), which is approached as a limit as t approaches ±∞. As t goes from −∞ to −1, the point determined by t goes through the part of the circle in the third quadrant, from (−1, 0) to (0, −1). As t goes from −1 to 0, the point follows the part of the circle in the fourth quadrant from (0, −1) to (1, 0). As t goes from 0 to 1, the point follows the part of the circle in the first quadrant from (1, 0) to (0, 1). Finally, as t goes from 1 to +∞, the point follows the part of the circle in the second quadrant from (0, 1) to (−1, 0).
Here is another geometric point of view. Draw the unit circle, and let P be the point . A line through P (except the vertical line) is determined by its slope. Furthermore, each of the lines (except the vertical line) intersects the unit circle in exactly two points, one of which is P. This determines a function from points on the unit circle to slopes. The trigonometric functions determine a function from angles to points on the unit circle, and by combining these two functions we have a function from angles to slopes.
Hyperbolic functions
As with other properties shared between the trigonometric functions and the hyperbolic functions, it is possible to use hyperbolic identities to construct a similar form of the substitution, t = tanh(x/2):
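With t = tanh(x/2), the corresponding expressions are:
\[
\sinh x = \frac{2t}{1-t^2}, \qquad \cosh x = \frac{1+t^2}{1-t^2}, \qquad dx = \frac{2}{1-t^2}\,dt .
\]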
Similar expressions can be written for tanh x, coth x, sech x, and csch x. Geometrically, this change of variables is a one-dimensional stereographic projection of the hyperbolic line onto the real interval, analogous to the Poincaré disk model of the hyperbolic plane.
Alternatives
There are other approaches to integrating trigonometric functions. For example, it can be helpful to rewrite trigonometric functions in terms of e^(ix) and e^(−ix) using Euler's formula.
See also
Rational curve
Stereographic projection
Tangent half-angle formula
Trigonometric substitution
Euler substitution
Further reading
Second edition 1916, pp. 52–62
Notes and references
External links
Weierstrass substitution formulas at PlanetMath
Integral calculus | Tangent half-angle substitution | [
"Mathematics"
] | 942 | [
"Integral calculus",
"Calculus"
] |
30,350,056 | https://en.wikipedia.org/wiki/Auslander%E2%80%93Reiten%20theory | In algebra, Auslander–Reiten theory studies the representation theory of Artinian rings using techniques such as Auslander–Reiten sequences (also called almost split sequences) and Auslander–Reiten quivers. Auslander–Reiten theory was introduced by and developed by them in several subsequent papers.
For survey articles on Auslander–Reiten theory see , , , and the book . Many of the original papers on Auslander–Reiten theory are reprinted in .
Almost-split sequences
Suppose that R is an Artin algebra. A sequence
0→ A → B → C → 0
of finitely generated left modules over R is called an almost-split sequence (or Auslander–Reiten sequence) if it has the following properties:
The sequence is not split
C is indecomposable and any homomorphism from an indecomposable module to C that is not an isomorphism factors through B.
A is indecomposable and any homomorphism from A to an indecomposable module that is not an isomorphism factors through B.
For any finitely generated left module C that is indecomposable but not projective there is an almost-split sequence as above, which is unique up to isomorphism. Similarly for any finitely generated left module A that is indecomposable but not injective there is an almost-split sequence as above, which is unique up to isomorphism.
The module A in the almost split sequence is isomorphic to D Tr C, the dual of the transpose of C.
Example
Suppose that R is the ring k[x]/(x^n) for a field k and an integer n ≥ 1. The indecomposable modules are isomorphic to one of k[x]/(x^m) for 1 ≤ m ≤ n, and the only projective one has m = n. The almost split sequences are isomorphic to
0 → k[x]/(x^m) → k[x]/(x^(m+1)) ⊕ k[x]/(x^(m−1)) → k[x]/(x^m) → 0
for 1 ≤ m < n. The first morphism takes a to (xa, a) and the second takes (b, c) to b − xc.
Auslander-Reiten quiver
The Auslander-Reiten quiver of an Artin algebra has a vertex for each indecomposable module and an arrow between vertices if there is an irreducible morphism between the corresponding modules. It has a map τ = D Tr called the translation from the non-projective vertices to the non-injective vertices, where D is the dual and Tr the transpose.
References
External links
Representation theory | Auslander–Reiten theory | [
"Mathematics"
] | 520 | [
"Representation theory",
"Fields of abstract algebra"
] |
30,350,487 | https://en.wikipedia.org/wiki/Auslander%20algebra | In mathematics, the Auslander algebra of an algebra A is the endomorphism ring of the sum of the indecomposable modules of A. It was introduced by .
An Artin algebra Γ is called an Auslander algebra if gl dim Γ ≤ 2 and if 0→Γ→I→J→K→0 is a minimal injective resolution of Γ then I and J are projective Γ-modules.
References
Representation theory | Auslander algebra | [
"Mathematics"
] | 92 | [
"Representation theory",
"Fields of abstract algebra"
] |
30,351,807 | https://en.wikipedia.org/wiki/AREsite | AREsite is a database of AU-rich elements (ARE) in vertebrate mRNA 3'-untranslated regions (UTRs). AU-rich elements are involved in the control of gene expression. They are the most common determinant of RNA stability in mammalian cells. The most recent version of AREsite is called AREsite 2. It represents an update that allows for more detailed analysis of ARE, GRE, and URE (AU, GU, and U-rich elements).
See also
AU-rich elements
References
External links
http://rna.tbi.univie.ac.at/AREsite
Biological databases
RNA
Gene expression
Cis-regulatory RNA elements | AREsite | [
"Chemistry",
"Biology"
] | 143 | [
"Gene expression",
"Bioinformatics",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Biological databases"
] |
30,351,997 | https://en.wikipedia.org/wiki/ASPicDB | ASPicDB is a database of human protein variants generated by alternative splicing, a process by which the exons of the RNA produced by transcription of a gene are reconnected in multiple ways during RNA splicing.
See also
Alternative splicing
Alternative splicing annotation project
EDAS
References
External links
https://web.archive.org/web/20150131060605/http://srv00.ibbe.cnr.it/ASPicDB/
Gene expression
Spliceosome
RNA splicing
Biological databases | ASPicDB | [
"Chemistry",
"Biology"
] | 119 | [
"Gene expression",
"Bioinformatics",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Biological databases"
] |
30,352,425 | https://en.wikipedia.org/wiki/BISC%20%28database%29 | Binary subcomplexes in proteins database (BISC) is a protein–protein interaction database about binary subcomplexes.
References
External links
Biochemistry databases
Proteomics
Biophysics organizations
Systems biology | BISC (database) | [
"Chemistry",
"Biology"
] | 41 | [
"Biochemistry",
"Biochemistry databases",
"Systems biology"
] |
30,357,017 | https://en.wikipedia.org/wiki/Conformational%20dynamics%20data%20bank | The conformational dynamics data bank (CDDB) is a database about conformational dynamics of heavy proteins and protein assemblies. The CDDB is useful when used alongside static structural data to aid research into protein function. It is also helpful in identifying protein assemblies that are essential to cell function.
Analysis is carried out by coarse-grained computation of the structures present in the electron microscopy data bank (EMDB). This analysis shows equilibrium thermal fluctuations and elastic strain energy distributions, which allows for identification of rigid and flexible protein domains. The results also provide information on correlations in molecular motions which can be used to identify molecular regions that are highly coupled dynamically.
References
External links
Data bank website
Biological databases
Protein structure | Conformational dynamics data bank | [
"Chemistry"
] | 145 | [
"Protein structure",
"Structural biology"
] |
20,304,049 | https://en.wikipedia.org/wiki/Harris%E2%80%93Benedict%20equation | The Harris–Benedict equation (also called the Harris-Benedict principle) is a method used to estimate an individual's basal metabolic rate (BMR).
The estimated BMR value may be multiplied by a number that corresponds to the individual's activity level; the resulting number is the approximate daily kilocalorie intake to maintain current body weight.
The Harris–Benedict equation may be used to assist weight loss by reducing the daily kilocalorie intake below the estimated maintenance intake given by the equation.
Calculating the Harris-Benedict BMR
The original Harris–Benedict equations were published in 1918 and 1919.
The Harris–Benedict equations were revised by Roza and Shizgal in 1984.
The 95% confidence range for men is ±213.0 kcal/day, and ±201.0 kcal/day for women.
The Harris–Benedict equations were revised again by Mifflin and St Jeor in 1990; a commonly cited form is given below.
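As commonly cited in the nutrition literature (coefficients may vary slightly between sources), the Mifflin–St Jeor form is
\[
\mathrm{BMR} \;=\; 10.0\,m \;+\; 6.25\,h \;-\; 5.0\,a \;+\; s \quad \text{kcal/day},
\]
where m is body mass in kilograms, h is height in centimetres, a is age in years, and s is +5 for men and −161 for women.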
History
The Harris-Benedict equation sprang from a study by James Arthur Harris and Francis Gano Benedict, which was published in 1919 by the Carnegie Institution of Washington in the monograph A Biometric Study Of Basal Metabolism In Man. A 1984 revision improved its accuracy. Mifflin et al. published an equation more predictive for modern lifestyles in 1990. Later work produced BMR estimators that accounted for lean body mass.
Issues in dietary use
As the BMR equations do not attempt to take into account body composition, identical results can be calculated for a very muscular person and an overweight person who are both the same height, weight, age, and gender. As muscle and fat require differing amounts of calories to maintain, the total energy expenditure (TEE) estimates will not be accurate for such cases.
The paper behind the latest update (Mifflin et al.) to the BMR formula states that all participants in the study fell within the 'normal' and 'overweight' body mass index (BMI) categories, and so the results do not necessarily apply to those in the 'underweight' or 'obese' BMI categories.
See also
Food energy
Resting metabolic rate
Institute of Medicine Equation
Schofield equation
Cited sources
External links
About.com's BMR Calculator
Calorie Calculator
Nutrition
Equations
Mathematics in medicine | Harris–Benedict equation | [
"Mathematics"
] | 465 | [
"Applied mathematics",
"Mathematics in medicine",
"Mathematical objects",
"Equations"
] |
20,309,035 | https://en.wikipedia.org/wiki/Super%20QCD | In theoretical physics, super QCD is a supersymmetric gauge theory which resembles quantum chromodynamics (QCD) but contains additional particles and interactions which render it supersymmetric.
The most commonly used version of super QCD is in 4 dimensions and contains one Majorana spinor supercharge. The particle content consists of vector supermultiplets, which include gluons and gluinos and also chiral supermultiplets which contain quarks and squarks transforming in the fundamental representation of the gauge group. This theory has many features in common with real world QCD, for example in some phases it manifests confinement and chiral symmetry breaking. The supersymmetry of this theory means that, unlike QCD, one may use nonrenormalization theorems to analytically demonstrate the existence of these phenomena and even calculate the condensate which breaks the chiral symmetry.
Phases of super QCD
Consider 4-dimensional SQCD with gauge group SU(N) and M flavors of chiral multiplets. The vacuum structure depends on M and N. The (spin-zero) squarks may be reorganized into hadrons, and the moduli space of vacua of the theory may be parametrized by their vacuum expectation values. On most of the moduli space the Higgs mechanism makes all of the fields massive, and so they may be integrated out. Classically, the resulting moduli space is singular. The singularities correspond to points where some gluons are massless, and so could not be integrated out. In the full quantum theory the moduli space is nonsingular, and its structure depends on the relative values of M and N. For example, when M is less than or equal to N+1, the theory exhibits confinement.
When M is less than N, the effective action differs from the classical action. More precisely, while the perturbative nonrenormalization theorem forbids any perturbative correction to the superpotential, the superpotential receives nonperturbative corrections. When N=M+1, these corrections result from a single instanton. For larger values of N the instanton calculation suffers from infrared divergences, however the correction may nonetheless be determined precisely from the gaugino condensation. The quantum correction to the superpotential was calculated in The Massless Limit of Supersymmetric QCD. If the chiral multiplets are massless, the resulting potential energy has no minimum and so the full quantum theory has no vacuum. Instead the fields roll forever to larger values.
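A hedged sketch of the dynamically generated superpotential in this regime (M < N), written in terms of the gauge-invariant meson matrix T^i_j = Q^i Q̃_j and the dynamical scale Λ, and valid only up to a convention-dependent normalization, is
\[
W_{\mathrm{dyn}} \;\propto\; (N-M)\left(\frac{\Lambda^{\,3N-M}}{\det T}\right)^{\frac{1}{N-M}} .
\]
Its runaway form, decreasing monotonically as det T grows, illustrates why the massless theory has no vacuum.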
When M is equal to or greater than N, the classical superpotential is exact. When M is equal to N, however, the moduli space receives quantum corrections from a single instanton. This correction renders the moduli space nonsingular, and also leads to chiral symmetry breaking. Then M is equal to N+1 the moduli space is not modified and so there is no chiral symmetry breaking, however there is still confinement.
When M is greater than N+1 but less than 3N/2, the theory is asymptotically free. However at low energies the theory becomes strongly coupled, and is better described by a Seiberg dual description in terms of magnetic variables with the same global flavor symmetry group but a new gauge symmetry SU(M-N). Notice that the gauge group is not an observable, but simply reflects the redundancy of a description and so may well differ in various dual theories, as it does in this case. On the other hand, the global symmetry group is an observable so it is essential that it is the same, SU(M), in both descriptions. The dual magnetic theory is free in the infrared, the coupling constant shrinks logarithmically, and so by the Dirac quantization condition the electric coupling constant grows logarithmically in the infrared. This implies that the potential between two electric charges, at long distances, scales as the logarithm of their distance divided by the distance.
When M is between 3N/2 and 3N, the theory has an infrared fixed point where it becomes a nontrivial conformal field theory. The potential between electric charges obeys the usual Coulomb law: it is inversely proportional to the distance between the charges.
When M is greater than 3N, the theory is free in the infrared, and so the force between two charges is inversely proportional to the product of the distance times the logarithm of the distance between the charges. However the theory is ill-defined in the ultraviolet, unless one includes additional heavy degrees of freedom which lead, for example, to a Seiberg dual theory of the type described above at N+1<M<3N/2.
References
Lectures on supersymmetric gauge theories and electric-magnetic duality by Nathan Seiberg and Kenneth Intriligator.
Supersymmetric quantum field theory
Quantum chromodynamics | Super QCD | [
"Physics"
] | 1,040 | [
"Supersymmetric quantum field theory",
"Supersymmetry",
"Symmetry"
] |
20,310,362 | https://en.wikipedia.org/wiki/Stellar%20kinematics | In astronomy, stellar kinematics is the observational study or measurement of the kinematics or motions of stars through space.
Stellar kinematics encompasses the measurement of stellar velocities in the Milky Way and its satellites as well as the internal kinematics of more distant galaxies. Measurement of the kinematics of stars in different subcomponents of the Milky Way including the thin disk, the thick disk, the bulge, and the stellar halo provides important information about the formation and evolutionary history of our Galaxy. Kinematic measurements can also identify exotic phenomena such as hypervelocity stars escaping from the Milky Way, which are interpreted as the result of gravitational encounters of binary stars with the supermassive black hole at the Galactic Center.
Stellar kinematics is related to but distinct from the subject of stellar dynamics, which involves the theoretical study or modeling of the motions of stars under the influence of gravity. Stellar-dynamical models of systems such as galaxies or star clusters are often compared with or tested against stellar-kinematic data to study their evolutionary history and mass distributions, and to detect the presence of dark matter or supermassive black holes through their gravitational influence on stellar orbits.
Space velocity
The component of stellar motion toward or away from the Sun, known as radial velocity, can be measured from the spectrum shift caused by the Doppler effect. The transverse, or proper motion must be found by taking a series of positional determinations against more distant objects. Once the distance to a star is determined through astrometric means such as parallax, the space velocity can be computed. This is the star's actual motion relative to the Sun or the local standard of rest (LSR). The latter is typically taken as a position at the Sun's present location that is following a circular orbit around the Galactic Center at the mean velocity of those nearby stars with low velocity dispersion. The Sun's motion with respect to the LSR is called the "peculiar solar motion".
The components of space velocity in the Milky Way's Galactic coordinate system are usually designated U, V, and W, given in km/s, with U positive in the direction of the Galactic Center, V positive in the direction of galactic rotation, and W positive in the direction of the North Galactic Pole. The peculiar motion of the Sun with respect to the LSR is
(U, V, W) = (11.1, 12.24, 7.25) km/s,
with statistical uncertainty (+0.69−0.75, +0.47−0.47, +0.37−0.36) km/s and systematic uncertainty (1, 2, 0.5) km/s. (Note that V is 7 km/s larger than estimated in 1998 by Dehnen et al.)
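As an illustrative sketch of how these observables combine: for a star at distance d (in parsecs) with total proper motion μ (in arcseconds per year) and radial velocity v_r, the tangential velocity and total space velocity are approximately
\[
v_t \;\approx\; 4.74\,\mu\,d \ \ \mathrm{km/s}, \qquad v \;=\; \sqrt{v_r^2 + v_t^2},
\]
where the factor 4.74 converts one astronomical unit per year into kilometres per second.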
Use of kinematic measurements
Stellar kinematics yields important astrophysical information about stars, and the galaxies in which they reside. Stellar kinematics data combined with astrophysical modeling produces important information about the galactic system as a whole. Measured stellar velocities in the innermost regions of galaxies including the Milky Way have provided evidence that many galaxies host supermassive black holes at their center. In farther out regions of galaxies such as within the galactic halo, velocity measurements of globular clusters orbiting in these halo regions of galaxies provides evidence for dark matter. Both of these cases derive from the key fact that stellar kinematics can be related to the overall potential in which the stars are bound. This means that if accurate stellar kinematics measurements are made for a star or group of stars orbiting in a certain region of a galaxy, the gravitational potential and mass distribution can be inferred given that the gravitational potential in which the star is bound produces its orbit and serves as the impetus for its stellar motion. Examples of using kinematics combined with modeling to construct an astrophysical system include:
Rotation of the Milky Way's disc: From the proper motions and radial velocities of stars within the Milky way disc one can show that there is differential rotation. When combining these measurements of stars' proper motions and their radial velocities, along with careful modeling, it is possible to obtain a picture of the rotation of the Milky Way disc. The local character of galactic rotation in the solar neighborhood is encapsulated in the Oort constants.
Structural components of the Milky Way: Using stellar kinematics, astronomers construct models which seek to explain the overall galactic structure in terms of distinct kinematic populations of stars. This is possible because these distinct populations are often located in specific regions of galaxies. For example, within the Milky Way, there are three primary components, each with its own distinct stellar kinematics: the disc, halo and bulge or bar. These kinematic groups are closely related to the stellar populations in the Milky Way, forming a strong correlation between the motion and chemical composition, thus indicating different formation mechanisms. For the Milky Way, disk stars orbit with a characteristic circular speed about the Galactic Center and a comparatively small RMS (root mean square) velocity relative to that speed. For bulge population stars, the velocities are randomly oriented, with a larger relative RMS velocity and no net circular velocity. The Galactic stellar halo consists of stars with orbits that extend to the outer regions of the galaxy. Some of these stars will continually orbit far from the galactic center, while others are on trajectories which bring them to various distances from the galactic center. These stars have little to no average rotation. Many stars in this group belong to globular clusters which formed long ago and thus have a distinct formation history, which can be inferred from their kinematics and poor metallicities. The halo may be further subdivided into an inner and outer halo, with the inner halo having a net prograde motion with respect to the Milky Way and the outer a net retrograde motion.
External galaxies: Spectroscopic observations of external galaxies make it possible to characterize the bulk motions of the stars they contain. While these stellar populations in external galaxies are generally not resolved to the level where one can track the motion of individual stars (except for the very nearest galaxies) measurements of the kinematics of the integrated stellar population along the line of sight provides information including the mean velocity and the velocity dispersion which can then be used to infer the distribution of mass within the galaxy. Measurement of the mean velocity as a function of position gives information on the galaxy's rotation, with distinct regions of the galaxy that are redshifted / blueshifted in relation to the galaxy's systemic velocity.
Mass distributions: Through measurement of the kinematics of tracer objects such as globular clusters and the orbits of nearby satellite dwarf galaxies, we can determine the mass distribution of the Milky Way or other galaxies. This is accomplished by combining kinematic measurements with dynamical modeling.
Recent advancements due to Gaia
In 2018, the Gaia Data Release 2 (GAIA DR2) marked a significant advancement in stellar kinematics, offering a rich dataset of precise measurements. This release included detailed stellar kinematic and stellar parallax data, contributing to a more nuanced understanding of the Milky Way's structure. Notably, it facilitated the determination of proper motions for numerous celestial objects, including the absolute proper motions of 75 distant globular clusters. Furthermore, Gaia's comprehensive dataset enabled the measurement of absolute proper motions in nearby dwarf spheroidal galaxies, serving as crucial indicators for understanding the mass distribution within the Milky Way. GAIA DR3 improved the quality of previously published data by providing detailed astrophysical parameters. While the complete GAIA DR4 is yet to be unveiled, the latest release offers enhanced insights into white dwarfs, hypervelocity stars, cosmological gravitational lensing, and the merger history of the Galaxy.
Stellar kinematic types
Stars within galaxies may be classified based on their kinematics. For example, the stars in the Milky Way can be subdivided into two general populations, based on their metallicity, or proportion of elements with atomic numbers higher than helium. Among nearby stars, it has been found that population I stars with higher metallicity are generally located in the stellar disk while older population II stars are in random orbits with little net rotation. The latter have elliptical orbits that are inclined to the plane of the Milky Way. Comparison of the kinematics of nearby stars has also led to the identification of stellar associations. These are most likely groups of stars that share a common point of origin in giant molecular clouds.
There are many additional ways to classify stars based on their measured velocity components, and this provides detailed information about the nature of the star's formation time, its present location, and the general structure of the galaxy. As a star moves in a galaxy, the smoothed out gravitational potential of all the other stars and other mass within the galaxy plays a dominant role in determining the stellar motion. Stellar kinematics can provide insights into the location of where the star formed within the galaxy. Measurements of an individual star's kinematics can identify stars that are peculiar outliers such as a high-velocity star moving much faster than its nearby neighbors.
High-velocity stars
Depending on the definition, a high-velocity star is a star moving faster than 65 km/s to 100 km/s relative to the average motion of the other stars in the star's neighborhood. The velocity is also sometimes defined as supersonic relative to the surrounding interstellar medium. The three types of high-velocity stars are: runaway stars, halo stars and hypervelocity stars. High-velocity stars were studied by Jan Oort, who used their kinematic data to predict that high-velocity stars have very little tangential velocity.
Runaway stars
A runaway star is one that is moving through space with an abnormally high velocity relative to the surrounding interstellar medium. The proper motion of a runaway star often points exactly away from a stellar association, of which the star was formerly a member, before it was hurled out.
Mechanisms that may give rise to a runaway star include:
Gravitational interactions between stars in a stellar system can result in large accelerations of one or more of the involved stars. In some cases, stars may even be ejected. This can occur in seemingly stable star systems of only three stars, as described in studies of the three-body problem in gravitational theory.
A collision or close encounter between stellar systems, including galaxies, may result in the disruption of both systems, with some of the stars being accelerated to high velocities, or even ejected. A large-scale example is the gravitational interaction between the Milky Way and the Large Magellanic Cloud.
A supernova explosion in a multiple star system can accelerate both the supernova remnant and remaining stars to high velocities.
Multiple mechanisms may accelerate the same runaway star. For example, a massive star that was originally ejected due to gravitational interactions with its stellar neighbors may itself go supernova, producing a remnant with a velocity modulated by the supernova kick. If this supernova occurs in the very nearby vicinity of other stars, it is possible that it may produce more runaways in the process.
An example of a related set of runaway stars is the case of AE Aurigae, 53 Arietis and Mu Columbae, all of which are moving away from each other at velocities of over 100 km/s (for comparison, the Sun moves through the Milky Way at about 20 km/s faster than the local average). Tracing their motions back, their paths intersect near to the Orion Nebula about 2 million years ago. Barnard's Loop is believed to be the remnant of the supernova that launched the other stars.
Another example is the X-ray object Vela X-1, where photodigital techniques reveal the presence of a typical supersonic bow shock hyperbola.
Halo stars
Halo stars are very old stars that have a low metallicity and do not follow circular orbits around the center of the Milky Way within its disk. Instead, the halo stars travel in elliptical orbits, often inclined to the disk, which take them well above and below the plane of the Milky Way. Although their orbital velocities relative to the Milky Way may be no faster than disk stars, their different paths result in high relative velocities.
Typical examples are the halo stars passing through the disk of the Milky Way at steep angles. One of the nearest 45 stars, called Kapteyn's Star, is an example of the high-velocity stars that lie near the Sun: its observed radial velocity is −245 km/s.
Hypervelocity stars
Hypervelocity stars (designated as HVS or HV in stellar catalogues) have substantially higher velocities than the rest of the stellar population of a galaxy. Some of these stars may even exceed the escape velocity of the galaxy. In the Milky Way, stars usually have velocities on the order of 100 km/s, whereas hypervelocity stars typically have velocities on the order of 1000 km/s. Most of these fast-moving stars are thought to be produced near the center of the Milky Way, where there is a larger population of these objects than further out. One of the fastest known stars in our Galaxy is the O-class sub-dwarf US 708, which is moving away from the Milky Way with a total velocity of around 1200 km/s.
Jack G. Hills first predicted the existence of HVSs in 1988. This was later confirmed in 2005 by Warren Brown, Margaret Geller, Scott Kenyon, and Michael Kurtz. 10 unbound HVSs were known, one of which is believed to have originated from the Large Magellanic Cloud rather than the Milky Way. Further measurements placed its origin within the Milky Way. Due to uncertainty about the distribution of mass within the Milky Way, determining whether a HVS is unbound is difficult. A further five known high-velocity stars may be unbound from the Milky Way, and 16 HVSs are thought to be bound. The nearest currently known HVS (HVS2) is about 19 kpc from the Sun.
To date, there have been roughly 20 observed hypervelocity stars. Though most of these were observed in the Northern Hemisphere, the possibility remains that there are HVSs only observable from the Southern Hemisphere.
It is believed that about 1,000 HVSs exist in the Milky Way. Considering that there are around 100 billion stars in the Milky Way, this is a minuscule fraction (~0.000001%). Results from the second data release of Gaia (DR2) show that most high-velocity late-type stars have a high probability of being bound to the Milky Way. However, distant hypervelocity star candidates are more promising.
In March 2019, LAMOST-HVS1 was reported to be a confirmed hypervelocity star ejected from the stellar disk of the Milky Way.
In July 2019, astronomers reported finding an A-type star, S5-HVS1, traveling faster than any other star detected so far. The star is in the Grus (or Crane) constellation in the southern sky. It may have been ejected from the Milky Way after interacting with Sagittarius A*, the supermassive black hole at the center of the galaxy.
Origin of hypervelocity stars
HVSs are believed to predominantly originate by close encounters of binary stars with the supermassive black hole in the center of the Milky Way. One of the two partners is gravitationally captured by the black hole (in the sense of entering orbit around it), while the other escapes with high velocity, becoming a HVS. Such maneuvers are analogous to the capture and ejection of interstellar objects by a star.
Supernova-induced HVSs may also be possible, although they are presumably rare. In this scenario, a HVS is ejected from a close binary system as a result of the companion star undergoing a supernova explosion. Ejection velocities up to 770 km/s, as measured from the galactic rest frame, are possible for late-type B-stars. This mechanism can explain the origin of HVSs which are ejected from the galactic disk.
Known HVSs are main-sequence stars with masses a few times that of the Sun. HVSs with smaller masses are also expected and G/K-dwarf HVS candidates have been found.
Some HVSs may have originated from a disrupted dwarf galaxy. When it made its closest approach to the center of the Milky Way, some of its stars broke free and were thrown into space, due to the slingshot-like effect of the boost.
Some neutron stars are inferred to be traveling with similar speeds. This could be related to HVSs and the HVS ejection mechanism. Neutron stars are the remnants of supernova explosions, and their extreme speeds are very likely the result of an asymmetric supernova explosion or the loss of a nearby partner during the supernova explosion that forms them. The neutron star RX J0822-4300, which was measured to move at a record speed of over 1,500 km/s (0.5% of the speed of light) in 2007 by the Chandra X-ray Observatory, is thought to have been produced in the first way.
One theory regarding the ignition of Type Ia supernovae invokes the onset of a merger between two white dwarfs in a binary star system, triggering the explosion of the more massive white dwarf. If the less massive white dwarf is not destroyed during the explosion, it will no longer be gravitationally bound to its destroyed companion, causing it to leave the system as a hypervelocity star with its pre-explosion orbital velocity of 1000–2500 km/s. In 2018, three such stars were discovered using data from the Gaia satellite.
Partial list of HVSs
As of 2014, twenty HVS were known.
HVS 1 – (SDSS J090744.99+024506.8) (a.k.a. The Outcast Star) – the first hypervelocity star to be discovered
HVS 2 – (SDSS J093320.86+441705.4 or US 708)
HVS 3 – (HE 0437-5439) – possibly from the Large Magellanic Cloud
HVS 4 – (SDSS J091301.00+305120.0)
HVS 5 – (SDSS J091759.42+672238.7)
HVS 6 – (SDSS J110557.45+093439.5)
HVS 7 – (SDSS J113312.12+010824.9)
HVS 8 – (SDSS J094214.04+200322.1)
HVS 9 – (SDSS J102137.08-005234.8)
HVS 10 – (SDSS J120337.85+180250.4)
Kinematic groups
A set of stars with similar space motion and ages is known as a kinematic group. These are stars that could share a common origin, such as the evaporation of an open cluster, the remains of a star forming region, or collections of overlapping star formation bursts at differing time periods in adjacent regions. Most stars are born within molecular clouds known as stellar nurseries. The stars formed within such a cloud compose gravitationally bound open clusters containing dozens to thousands of members with similar ages and compositions. These clusters dissociate with time. Groups of young stars that escape a cluster, or are no longer bound to each other, form stellar associations. As these stars age and disperse, their association is no longer readily apparent and they become moving groups of stars.
Astronomers are able to determine if stars are members of a kinematic group because they share the same age, metallicity, and kinematics (radial velocity and proper motion). As the stars in a moving group formed in proximity and at nearly the same time from the same gas cloud, although later disrupted by tidal forces, they share similar characteristics.
Stellar associations
A stellar association is a very loose star cluster, whose stars share a common origin and are still moving together through space, but have become gravitationally unbound. Associations are primarily identified by their common movement vectors and ages. Identification by chemical composition is also used to factor in association memberships.
Stellar associations were first discovered by the Armenian astronomer Viktor Ambartsumian in 1947. The conventional name for an association uses the names or abbreviations of the constellation (or constellations) in which they are located; the association type, and, sometimes, a numerical identifier.
Types
Viktor Ambartsumian first categorized stellar associations into two groups, OB and T, based on the properties of their stars. A third category, R, was later suggested by Sidney van den Bergh for associations that illuminate reflection nebulae. The OB, T, and R associations form a continuum of young stellar groupings. But it is currently uncertain whether they are an evolutionary sequence, or represent some other factor at work. Some groups also display properties of both OB and T associations, so the categorization is not always clear-cut.
OB associations
Young associations will contain 10 to 100 massive stars of spectral class O and B, and are known as OB associations. In addition, these associations also contain hundreds or thousands of low- and intermediate-mass stars. Association members are believed to form within the same small volume inside a giant molecular cloud. Once the surrounding dust and gas is blown away, the remaining stars become unbound and begin to drift apart. It is believed that the majority of all stars in the Milky Way were formed in OB associations. O-class stars are short-lived, and will expire as supernovae after roughly one million years. As a result, OB associations are generally only a few million years in age or less. The O-B stars in the association will have burned all their fuel within ten million years. (Compare this to the current age of the Sun at about five billion years.)
The Hipparcos satellite provided measurements that located a dozen OB associations within 650 parsecs of the Sun. The nearest OB association is the Scorpius–Centaurus association, located about 400 light-years from the Sun.
OB associations have also been found in the Large Magellanic Cloud and the Andromeda Galaxy. These associations can be quite sparse, spanning 1,500 light-years in diameter.
T associations
Young stellar groups can contain a number of infant T Tauri stars that are still in the process of entering the main sequence. These sparse populations of up to a thousand T Tauri stars are known as T associations. The nearest example is the Taurus-Auriga T association (Tau–Aur T association), located at a distance of 140 parsecs from the Sun. Other examples of T associations include the R Corona Australis T association, the Lupus T association, the Chamaeleon T association and the Velorum T association. T associations are often found in the vicinity of the molecular cloud from which they formed. Some, but not all, include O–B class stars. Group members have the same age and origin, the same chemical composition, and the same amplitude and direction in their vector of velocity.
R associations
Associations of stars that illuminate reflection nebulae are called R associations, a name suggested by Sidney van den Bergh after he discovered that the stars in these nebulae had a non-uniform distribution. These young stellar groupings contain main sequence stars that are not sufficiently massive to disperse the interstellar clouds in which they formed. This allows the properties of the surrounding dark cloud to be examined by astronomers. Because R associations are more plentiful than OB associations, they can be used to trace out the structure of the galactic spiral arms. An example of an R association is Monoceros R2, located 830 ± 50 parsecs from the Sun.
Moving groups
If the remnants of a stellar association drift through the Milky Way as a somewhat coherent assemblage, then they are termed a moving group or kinematic group. Moving groups can be old, such as the HR 1614 moving group at two billion years, or young, such as the AB Dor Moving Group at only 120 million years.
Moving groups were studied intensely by Olin Eggen in the 1960s. A list of the nearest young moving groups has been compiled by López-Santiago et al. The closest is the Ursa Major Moving Group which includes all of the stars in the Plough / Big Dipper asterism except for Dubhe and Alkaid. This is sufficiently close that the Sun lies in its outer fringes, without being part of the group. Hence, although members are concentrated at declinations near 60°N, some outliers are as far away across the sky as Triangulum Australe at 70°S.
The list of young moving groups is constantly evolving. The Banyan Σ tool currently lists 29 nearby young moving groups. Recent additions to nearby moving groups are the Volans-Carina Association (VCA), discovered with Gaia, and the Argus Association (ARG), confirmed with Gaia. Moving groups can sometimes be further subdivided into smaller distinct groups. The Great Austral Young Association (GAYA) complex was found to be subdivided into the moving groups Carina, Columba, and Tucana-Horologium. The three associations are not very distinct from each other and have similar kinematic properties.
Young moving groups have well known ages and can support the characterization of objects with hard-to-estimate ages, such as brown dwarfs. Members of nearby young moving groups are also candidates for directly imaged protoplanetary disks, such as TW Hydrae or directly imaged exoplanets, such as Beta Pictoris b or GU Psc b.
Stellar streams
A stellar stream is an association of stars orbiting a galaxy that was once a globular cluster or dwarf galaxy that has now been torn apart and stretched out along its orbit by tidal forces.
Known kinematic groups
Some nearby kinematic groups include:
Local Association (Pleiades moving group)
AB Doradus moving group
Alpha Persei moving cluster
Beta Pictoris moving group
Castor moving group
Corona Australis association
Eta Chamaeleontis cluster
Hercules-Lyra association
Hercules stream
Hyades Stream
IC 2391 supercluster (Argus Association)
Kapteyn group
MBM 12 association
TW Hydrae association
Ursa Major Moving Group
Wolf 630 moving group
Zeta Herculis moving group
Pisces-Eridanus stellar stream
Tucana-Horologium association
See also
Astrometry
Gaia (spacecraft)
Hipparcos
n-body problem
Open cluster remnant
List of nearby stellar associations and moving groups
Stellar association
References
Further reading
External links
ESO press release about runaway stars
Entry in the Encyclopedia of Astrobiology, Astronomy, and Spaceflight
Two Exiled Stars Are Leaving Our Galaxy Forever
Young stellar kinematic groups, David Montes, Departamento de Astrofísica, Universidad Complutense de Madrid.
Kinematics
Galactic astronomy
Concepts in stellar astronomy | Stellar kinematics | [
"Physics",
"Astronomy"
] | 5,683 | [
"Concepts in astrophysics",
"Galactic astronomy",
"Concepts in stellar astronomy",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
25,731,059 | https://en.wikipedia.org/wiki/OECD%20Guidelines%20for%20the%20Testing%20of%20Chemicals | OECD Guidelines for the Testing of Chemicals (OECD TG) are a set of internationally accepted specifications for the testing of chemicals decided on by the Organisation for Economic Co-operation and Development (OECD). They were first published in 1981. They are split into five sections:
Section 1: Physical Chemical Properties
Section 2: Effects on Biotic Systems
Section 3: Environmental Fate and Behaviour
Section 4: Health Effects
Section 5: Other Test Guidelines
Guidelines are numbered with three digit numbers, the section number being the first number. Sometimes guidelines are suffixed with a letter.
Guidelines are under constant review, with guidelines being periodically updated, new guidelines being adopted, and guidelines being withdrawn. Previous guidelines are maintained on the website for reference purposes. Animal welfare concerns are dealt with by ensuring that animal tests are only permitted where necessary.
The guidelines are available in both English and French.
List of guidelines
Section 1: Physical Chemical Properties
Section 2: Effects on Biotic Systems
Section 3: Environmental Fate and Behaviour
Section 4: Health Effects
Section 5: Other Test Guidelines
External links
OECD Guidelines for the Testing of Chemicals
OECD
Toxicology
Regulation of chemicals | OECD Guidelines for the Testing of Chemicals | [
"Chemistry",
"Environmental_science"
] | 233 | [
"Regulation of chemicals",
"Toxicology"
] |
27,599,234 | https://en.wikipedia.org/wiki/Millennium%20Prize%20Problems | The Millennium Prize Problems are seven well-known complex mathematical problems selected by the Clay Mathematics Institute in 2000. The Clay Institute has pledged a US $1 million prize for the first correct solution to each problem.
The Clay Mathematics Institute officially designated the title Millennium Problem for the seven unsolved mathematical problems, the Birch and Swinnerton-Dyer conjecture, Hodge conjecture, Navier–Stokes existence and smoothness, P versus NP problem, Riemann hypothesis, Yang–Mills existence and mass gap, and the Poincaré conjecture at the Millennium Meeting held on May 24, 2000. Thus, on the official website of the Clay Mathematics Institute, these seven problems are officially called the Millennium Problems.
To date, the only Millennium Prize problem to have been solved is the Poincaré conjecture. The Clay Institute awarded the monetary prize to Russian mathematician Grigori Perelman in 2010. However, he declined the award as it was not also offered to Richard S. Hamilton, upon whose work Perelman built.
Overview
The Clay Institute was inspired by a set of twenty-three problems organized by the mathematician David Hilbert in 1900 which were highly influential in driving the progress of mathematics in the twentieth century. The seven selected problems span a number of mathematical fields, namely algebraic geometry, arithmetic geometry, geometric topology, mathematical physics, number theory, partial differential equations, and theoretical computer science. Unlike Hilbert's problems, the problems selected by the Clay Institute were already renowned among professional mathematicians, with many actively working towards their resolution.
The seven problems were officially announced by John Tate and Michael Atiyah during a ceremony held on May 24, 2000 (at the amphithéâtre Marguerite de Navarre) in the Collège de France in Paris.
Grigori Perelman, who had begun work on the Poincaré conjecture in the 1990s, released his proof in 2002 and 2003. His refusal of the Clay Institute's monetary prize in 2010 was widely covered in the media. The other six Millennium Prize Problems remain unsolved, despite a large number of unsatisfactory proofs by both amateur and professional mathematicians.
Andrew Wiles, as part of the Clay Institute's scientific advisory board, hoped that the choice of US$1 million prize money would popularize, among general audiences, both the selected problems as well as the "excitement of mathematical endeavor". Another board member, Fields medalist Alain Connes, hoped that the publicity around the unsolved problems would help to combat the "wrong idea" among the public that mathematics would be "overtaken by computers".
Some mathematicians have been more critical. Anatoly Vershik characterized their monetary prize as "show business" representing the "worst manifestations of present-day mass culture", and thought that there are more meaningful ways to invest in public appreciation of mathematics. He viewed the superficial media treatments of Perelman and his work, with disproportionate attention being placed on the prize value itself, as unsurprising. By contrast, Vershik praised the Clay Institute's direct funding of research conferences and young researchers. Vershik's comments were later echoed by Fields medalist Shing-Tung Yau, who was additionally critical of the idea of a foundation taking actions to "appropriate" fundamental mathematical questions and "attach its name to them".
Solved problem
Poincaré conjecture
In the field of geometric topology, a two-dimensional sphere is characterized by the fact that it is the only closed and simply-connected two-dimensional surface. In 1904, Henri Poincaré posed the question of whether an analogous statement holds true for three-dimensional shapes. This came to be known as the Poincaré conjecture, the precise formulation of which states: Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere.
Although the conjecture is usually stated in this form, it is equivalent (as was discovered in the 1950s) to pose it in the context of smooth manifolds and diffeomorphisms.
A proof of this conjecture, together with the more powerful geometrization conjecture, was given by Grigori Perelman in 2002 and 2003. Perelman's solution completed Richard Hamilton's program for the solution of the geometrization conjecture, which he had developed over the course of the preceding twenty years. Hamilton and Perelman's work revolved around Hamilton's Ricci flow, which is a complicated system of partial differential equations defined in the field of Riemannian geometry.
For his contributions to the theory of Ricci flow, Perelman was awarded the Fields Medal in 2006. However, he declined to accept the prize. For his proof of the Poincaré conjecture, Perelman was awarded the Millennium Prize on March 18, 2010. However, he declined the award and the associated prize money, stating that Hamilton's contribution was no less than his own.
Unsolved problems
Birch and Swinnerton-Dyer conjecture
The Birch and Swinnerton-Dyer conjecture deals with certain types of equations: those defining elliptic curves over the rational numbers. The conjecture is that there is a simple way to tell whether such equations have a finite or infinite number of rational solutions. More specifically, the Millennium Prize version of the conjecture is that, if the elliptic curve E has rank r, then the L-function L(E, s) associated with it vanishes to order r at s = 1.
Hilbert's tenth problem dealt with a more general type of equation, and in that case it was proven that there is no algorithmic way to decide whether a given equation even has any solutions.
The official statement of the problem was given by Andrew Wiles.
Hodge conjecture
The Hodge conjecture is that for projective algebraic varieties, Hodge cycles are rational linear combinations of algebraic cycles.
The group of rational cohomology classes of type (k, k), namely Hdg^k(X) = H^{2k}(X, ℚ) ∩ H^{k,k}(X), is called the group of Hodge classes of degree 2k on X.
The modern statement of the Hodge conjecture is:
Let X be a non-singular complex projective variety. Then every Hodge class on X is a linear combination with rational coefficients of the cohomology classes of complex subvarieties of X.
The official statement of the problem was given by Pierre Deligne.
Navier–Stokes existence and smoothness
The Navier–Stokes equations describe the motion of fluids, and are one of the pillars of fluid mechanics. However, theoretical understanding of their solutions is incomplete, despite its importance in science and engineering. For the three-dimensional system of equations, and given some initial conditions, mathematicians have not yet proven that smooth solutions always exist. This is called the Navier–Stokes existence and smoothness problem.
The problem, restricted to the case of an incompressible flow, is to prove either that smooth, globally defined solutions exist that meet certain conditions, or that they do not always exist and the equations break down. The official statement of the problem was given by Charles Fefferman.
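For reference, a sketch of the incompressible Navier–Stokes system that the problem statement concerns, with the fluid density normalized to one, velocity field u, pressure p, kinematic viscosity ν, and external force f, is
\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} \;=\; -\nabla p + \nu\,\Delta \mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} \;=\; 0 .
\]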
P versus NP
The question is whether or not, for all problems for which an algorithm can verify a given solution quickly (that is, in polynomial time), an algorithm can also find that solution quickly. Since the former describes the class of problems termed NP, while the latter describes P, the question is equivalent to asking whether all problems in NP are also in P. This is generally considered one of the most important open questions in mathematics and theoretical computer science as it has far-reaching consequences to other problems in mathematics, to biology, philosophy and to cryptography (see P versus NP problem proof consequences). A common example of an NP problem not known to be in P is the Boolean satisfiability problem.
Most mathematicians and computer scientists expect that P ≠ NP; however, it remains unproven.
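As an illustrative sketch (not part of the official problem statement), the following Python snippet checks a candidate truth assignment against a Boolean formula in conjunctive normal form in time linear in the size of the formula; this kind of fast verification is what places Boolean satisfiability in NP:

    def verify_cnf_assignment(clauses, assignment):
        """Check a CNF formula against a candidate assignment in linear time.

        clauses: list of clauses; each clause is a list of nonzero integers,
        where the literal k means variable k is true and -k means it is false
        (DIMACS-style encoding).
        assignment: dict mapping each variable number to True or False.
        """
        for clause in clauses:
            # A clause is satisfied if at least one of its literals is true.
            if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
                return False
        return True

    # Example: (x1 OR NOT x2) AND (x2 OR x3)
    formula = [[1, -2], [2, 3]]
    print(verify_cnf_assignment(formula, {1: True, 2: False, 3: True}))  # True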
The official statement of the problem was given by Stephen Cook.
Riemann hypothesis
The Riemann zeta function ζ(s) is a function whose arguments may be any complex number other than 1, and whose values are also complex. Its analytical continuation has zeros at the negative even integers; that is, ζ(s) = 0 when s is one of −2, −4, −6, .... These are called its trivial zeros. However, the negative even integers are not the only values for which the zeta function is zero. The other ones are called nontrivial zeros. The Riemann hypothesis is concerned with the locations of these nontrivial zeros, and states that:
The real part of every nontrivial zero of the Riemann zeta function is 1/2.
The Riemann hypothesis is that all nontrivial zeros of the analytical continuation of the Riemann zeta function have a real part of 1/2. A proof or disproof of this would have far-reaching implications in number theory, especially for the distribution of prime numbers. This was Hilbert's eighth problem, and is still considered an important open problem a century later.
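For reference, on the half-plane where it converges the zeta function referred to above is given by the Dirichlet series
\[
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \operatorname{Re}(s) > 1,
\]
and the hypothesis concerns the zeros of its analytic continuation to the rest of the complex plane.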
The problem has been well-known ever since it was originally posed by Bernhard Riemann in 1859. The Clay Institute's exposition of the problem was given by Enrico Bombieri.
Yang–Mills existence and mass gap
In quantum field theory, the mass gap is the difference in energy between the vacuum and the next lowest energy state. The energy of the vacuum is zero by definition, and assuming that all energy states can be thought of as particles in plane-waves, the mass gap is the mass of the lightest particle.
For a given real field φ(x), we can say that the theory has a mass gap if the two-point function has the property
\[
\langle \phi(0,t)\,\phi(0,0) \rangle \;\sim\; \sum_{n} A_n \exp(-\Delta_n t),
\]
with Δ_0 > 0 being the lowest energy value in the spectrum of the Hamiltonian and thus the mass gap. This quantity, easy to generalize to other fields, is what is generally measured in lattice computations.
Quantum Yang–Mills theory is the current grounding for the majority of theoretical applications of thought to the reality and potential realities of elementary particle physics. The theory is a generalization of the Maxwell theory of electromagnetism where the chromo-electromagnetic field itself carries charge. As a classical field theory it has solutions which travel at the speed of light so that its quantum version should describe massless particles (gluons). However, the postulated phenomenon of color confinement permits only bound states of gluons, forming massive particles. This is the mass gap. Another aspect of confinement is asymptotic freedom which makes it conceivable that quantum Yang-Mills theory exists without restriction to low energy scales. The problem is to establish rigorously the existence of the quantum Yang–Mills theory and a mass gap.
Prove that for any compact simple gauge group G, a non-trivial quantum Yang–Mills theory exists on ℝ^4 and has a mass gap Δ > 0. Existence includes establishing axiomatic properties at least as strong as those cited in Streater & Wightman (1964), Osterwalder & Schrader (1973), and Osterwalder & Schrader (1975).
The official statement of the problem was given by Arthur Jaffe and Edward Witten.
See also
Beal conjecture
Hilbert's problems
List of mathematics awards
List of unsolved problems in mathematics
Smale's problems
Paul Wolfskehl (offered a cash prize for the solution to Fermat's Last Theorem)
abc conjecture
References
Further reading
External links
The Millennium Prize Problems
Challenge awards
Unsolved problems in mathematics | Millennium Prize Problems | [
"Mathematics"
] | 2,262 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Millennium Prize Problems"
] |
27,601,402 | https://en.wikipedia.org/wiki/Dimethylmagnesium | Dimethylmagnesium is an organomagnesium compound. It is a white pyrophoric solid. Dimethylmagnesium is used in the synthesis of organometallic compounds.
Preparation
Like other dialkylmagnesium compounds, dimethylmagnesium is prepared by adding dioxane to a solution of methylmagnesium halide:
2 CH3MgX + 2 dioxane → (CH3)2Mg + MgX2(μ-dioxane)2↓
In such procedures, the dimethylmagnesium exists as the ether adduct, not the polymer.
Addition of 1,4-dioxane causes precipitation of solid MgX2(μ-dioxane)2, a coordination polymer. This precipitation drives the Schlenk equilibrium toward (CH3)2Mg. Related methods have been applied to other dialkylmagnesium compounds.
Dimethylmagnesium can also be prepared by combining dimethylmercury and magnesium.
Properties
The structure of this compound has been determined by X-ray crystallography. The material is a polymer with the same connectivity as silicon disulfide, featuring tetrahedral magnesium centres, each surrounded by bridging methyl groups. The Mg-C distances are 224 pm.
Related compounds
The linear chain structure seen for dimethylmagnesium is also observed for diethylmagnesium and dimethylberyllium. Di(tert-butyl)magnesium is however a dimer.
References
Organomagnesium compounds
Pyrophoric materials
Polymers | Dimethylmagnesium | [
"Chemistry",
"Materials_science",
"Technology"
] | 333 | [
"Polymers",
"Organomagnesium compounds",
"Polymer chemistry",
"Reagents for organic chemistry"
] |
27,606,794 | https://en.wikipedia.org/wiki/Chemerin%20peptide | Chemerin peptides are short peptides (on the order of 9 amino acids) that are produced from the carboxyl terminus of the chemokine chemerin. They display the same activities as chemerin, although at higher efficacy and potency.
A particular synthetic chemerin-derived peptide, termed C15, was developed at Oxford University. It showed anti-inflammatory activities. Intraperitoneal administration of C15 (0.32 ng/kg) to mice before zymosan challenge conferred significant protection against zymosan-induced peritonitis, suppressing neutrophil (63%) and monocyte (62%) recruitment with a concomitant reduction in proinflammatory mediator expression.
C15 was found to promote phagocytosis and efferocytosis in peritoneal macrophages at picomolar concentrations. C15 enhanced macrophage clearance of microbial particles and apoptotic cells by a factor of 360% in vitro.
References
See also
Chemerin
CMKLR1
Chemokine
Efferocytosis | Chemerin peptide | [
"Chemistry",
"Biology"
] | 234 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
27,606,828 | https://en.wikipedia.org/wiki/4-5%20kisrhombille | In geometry, the 4-5 kisrhombille or order-4 bisected pentagonal tiling is a semiregular dual tiling of the hyperbolic plane. It is constructed by congruent right triangles with 4, 8, and 10 triangles meeting at each vertex.
The name 4-5 kisrhombille is due to Conway, who sees it as a 4-5 rhombic tiling divided by a kis operator, which adds a center point to each rhombus and divides it into four triangles.
The image shows a Poincaré disk model projection of the hyperbolic plane.
It is labeled V4.8.10 because each right triangle face has three types of vertices: one with 4 triangles, one with 8 triangles, and one with 10 triangles.
Dual tiling
It is the dual tessellation of the truncated tetrapentagonal tiling which has one square and one octagon and one decagon at each vertex.
Related polyhedra and tilings
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
Hexakis triangular tiling
List of uniform tilings
Uniform tilings in hyperbolic plane
Hyperbolic tilings
Isohedral tilings
Semiregular tilings
John Horton Conway | 4-5 kisrhombille | [
"Physics"
] | 304 | [
"Semiregular tilings",
"Tessellation",
"Hyperbolic tilings",
"Isohedral tilings",
"Symmetry"
] |
6,750,203 | https://en.wikipedia.org/wiki/Spinor%20spherical%20harmonics | In quantum mechanics, the spinor spherical harmonics (also known as spin spherical harmonics, spinor harmonics and Pauli spinors) are special functions defined over the sphere. The spinor spherical harmonics are the natural spinor analog of the vector spherical harmonics. While the standard spherical harmonics are a basis for the angular momentum operator, the spinor spherical harmonics are a basis for the total angular momentum operator (angular momentum plus spin). These functions are used in analytical solutions to the Dirac equation in a radial potential. The spinor spherical harmonics are sometimes called Pauli central field spinors, in honor of Wolfgang Pauli who employed them in the solution of the hydrogen atom with spin–orbit interaction.
Properties
The spinor spherical harmonics are the spinor eigenstates of the total angular momentum operator squared:
\[
\mathbf{J}^2\,\Omega \;=\; j(j+1)\,\Omega,
\]
where \mathbf{J} = \mathbf{L} + \mathbf{S}, where \mathbf{J}, \mathbf{L}, and \mathbf{S} are the (dimensionless) total, orbital and spin angular momentum operators, j is the total azimuthal quantum number and m is the total magnetic quantum number.
Under a parity operation, the spinor spherical harmonics acquire the factor (−1)^ℓ of their constituent spherical harmonics, where ℓ is the orbital quantum number.
For spin-1/2 systems, they can be written explicitly as two-component column vectors in terms of the usual spherical harmonics Y_ℓ^m; one common convention is sketched below.
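A sketch of the explicit two-component forms in one common phase convention (signs and normalizations vary between references) is
\[
\Omega_{j=\ell+\frac12,\,m} \;=\; \frac{1}{\sqrt{2\ell+1}}
\begin{pmatrix} \sqrt{\ell+m+\tfrac12}\;\, Y_\ell^{\,m-1/2} \\[2pt] \sqrt{\ell-m+\tfrac12}\;\, Y_\ell^{\,m+1/2} \end{pmatrix},
\qquad
\Omega_{j=\ell-\frac12,\,m} \;=\; \frac{1}{\sqrt{2\ell+1}}
\begin{pmatrix} -\sqrt{\ell-m+\tfrac12}\;\, Y_\ell^{\,m-1/2} \\[2pt] \sqrt{\ell+m+\tfrac12}\;\, Y_\ell^{\,m+1/2} \end{pmatrix}.
\]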
References
Spinors
Rotational symmetry
Special functions | Spinor spherical harmonics | [
"Physics",
"Mathematics"
] | 250 | [
"Symmetry",
"Special functions",
"Quantum mechanics",
"Combinatorics",
"Quantum physics stubs",
"Rotational symmetry"
] |
6,751,583 | https://en.wikipedia.org/wiki/Bochner%20space | In mathematics, Bochner spaces are a generalization of the concept of L^p spaces to functions whose values lie in a Banach space which is not necessarily the space ℝ or ℂ of real or complex numbers.
The space L^p(X; B) consists of (equivalence classes of) all Bochner measurable functions f with values in the Banach space B whose norm ‖f(·)‖_B lies in the standard L^p space. Thus, if B is the set of complex numbers, it is the standard Lebesgue space L^p.
Almost all standard results on L^p spaces do hold on Bochner spaces too; in particular, the Bochner spaces are Banach spaces for 1 ≤ p ≤ ∞.
Bochner spaces are named for the mathematician Salomon Bochner.
Definition
Given a measure space (X, Σ, μ), a Banach space (B, ‖·‖_B), and 1 ≤ p ≤ ∞, the Bochner space L^p(X; B) is defined to be the Kolmogorov quotient (by equality almost everywhere) of the space of all Bochner measurable functions u : X → B such that the corresponding norm is finite:
\[
\|u\|_{L^p(X;B)} \;:=\; \left( \int_X \|u(x)\|_B^{p} \,\mathrm{d}\mu(x) \right)^{1/p} \;<\; \infty \qquad (1 \le p < \infty),
\]
with the usual essential-supremum norm when p = ∞.
In other words, as is usual in the study of L^p spaces, L^p(X; B) is a space of equivalence classes of functions, where two functions are defined to be equivalent if they are equal everywhere except upon a μ-measure zero subset of X. As is also usual in the study of such spaces, it is usual to abuse notation and speak of a "function" in L^p(X; B) rather than an equivalence class (which would be more technically correct).
Applications
Bochner spaces are often used in the functional analysis approach to the study of partial differential equations that depend on time, e.g. the heat equation: if the temperature g(t, x) is a scalar function of time and space, one can write (g(t))(x) := g(t, x) to make a family g(t) (parametrized by time) of functions of space, possibly in some Bochner space.
Application to PDE theory
Very often, the space X is an interval of time over which we wish to solve some partial differential equation, and μ will be one-dimensional Lebesgue measure. The idea is to regard a function of time and space as a collection of functions of space, this collection being parametrized by time. For example, in the solution of the heat equation on a region Ω in ℝ^n and an interval of time [0, T], one seeks solutions u ∈ L^2([0, T]; H_0^1(Ω))
with time derivative ∂u/∂t ∈ L^2([0, T]; H^{-1}(Ω)).
Here H_0^1(Ω) denotes the Sobolev Hilbert space of once-weakly differentiable functions with first weak derivative in L^2(Ω) that vanish at the boundary of Ω (in the sense of trace, or, equivalently, are limits of smooth functions with compact support in Ω); H^{-1}(Ω) denotes the dual space of H_0^1(Ω).
(The "partial derivative" with respect to time above is actually a total derivative, since the use of Bochner spaces removes the space-dependence.)
See also
References
Functional analysis
Partial differential equations
Sobolev spaces
Lp spaces | Bochner space | [
"Mathematics"
] | 523 | [
"Functional analysis",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
6,752,192 | https://en.wikipedia.org/wiki/South%E2%80%93North%20Water%20Transfer%20Project | The South–North Water Transfer Project, also translated as the South-to-North Water Diversion Project, is a multi-decade infrastructure mega-project in China that aims to channel 44.8 cubic kilometers (44.8 billion cubic meters) of fresh water each year from the Yangtze River in southern China to the more arid and industrialized north through three canal systems:
The Eastern Route through the course of the Grand Canal;
The Central Route from the upper reaches of the Han River (a tributary of the Yangtze) via the Grand Aqueduct to Beijing and Tianjin;
The Western Route, which goes from three tributaries of the Yangtze near Bayankala Mountain to the provinces of Qinghai, Gansu, Shaanxi, Shanxi, Inner Mongolia, and Ningxia.
History
Mao Zedong discussed the idea for a mass engineering project as an answer to China's water problems as early as 1952. He reportedly said, "there's plenty of water in the south, not much water in the north. If at all possible, borrowing some water would be good".
Engineer Wang Mengshu initially proposed transferring water from the Songhua River in Jilin, around 900 km from Beijing. Before the construction, it was predicted that the development of the Bohai Economic Rim was significantly constrained by lack of water resources. The decision to start the project was also based on the strategic need to safeguard Beijing's water supply, which could theoretically also be met at similar cost through desalinization. In addition, water has been strategically diverted to Beijing from the surrounding regions in Hebei, which themselves lack water resources.
Construction of the project began in 2003.
In 2024 it was reported that 76.7 km³ of water had been transported in the ten years since operations began.
East route
The Eastern Route Project (ERP), or Jiangdu Hydro Project, consists of an upgrade to the Grand Canal and will be used to divert a fraction of the total flow of the Yangtze River to northern China. According to local hydrologists, the entire flow of the Yangtze at the point of its discharge into the East China Sea is, on average, 956 km3 per year; the annual flow does not fall below approximately 600 km3 per year, even in the driest years. As the project progresses, the amount of water to be diverted to the north will increase from 8.9 km3/year to 10.6 km3/year to 14.8 km3/year.
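A quick arithmetic check of how small the diverted fraction is relative to the river's flow, using only the figures quoted above (a rough sketch; the numbers are the article's, the script itself is illustrative):

yangtze_mean_flow = 956.0   # km^3/year, average discharge into the East China Sea
yangtze_dry_flow = 600.0    # km^3/year, approximate dry-year minimum
diversion_stages = [8.9, 10.6, 14.8]  # km^3/year, planned eastern-route stages

for q in diversion_stages:
    print(f"{q} km^3/yr = {q / yangtze_mean_flow:.1%} of mean flow, "
          f"{q / yangtze_dry_flow:.1%} of dry-year flow")
# Even the final stage is only about 1.5% of mean flow (~2.5% in a dry year).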
Water from the Yangtze River will be drawn into the canal in Jiangdu, where a giant 400 m3/s (12.6 km3/year if operated continuously) pumping station was built in the 1980s. The water will then be pumped by stations along the Grand Canal and through a tunnel under the Yellow River and down an aqueduct to reservoirs near Tianjin. Construction on the Eastern route officially began on 27 December 2002, and water was expected to reach Tianjin by 2013. However, in addition to construction delays, water pollution has affected the viability of the route. Initially, the route was expected to provide water for the provinces of Shandong, Jiangsu, and Hebei, with trial operations to begin in mid-2013. Water started arriving in Shandong in 2014, and it is expected one cubic kilometer of water will have been transferred in 2018.
As of October 2017, water had reached Tianjin. Tianjin is expected to receive 1 km3/year. The Eastern route is not expected to supply Beijing, which is to be supplied by the central route.
The completed line will be slightly over 1,152 km (716 miles) long, equipped with 23 pumping stations with a power capacity of 454 megawatts.
An important element of the Eastern Route will be a tunnel crossing under the Yellow River, on the border of Dongping and Dong'e counties of Shandong Province. The crossing will consist of two 9.3 m diameter horizontal tunnels, positioned 70 m under the bed of the Yellow River.
Due to the topography of the Yangtze Plain and the North China Plain, pumping stations will be needed to raise water from the Yangtze to the Yellow River crossing; farther north, the water will be flowing downhill in an aqueduct.
Central route
The central route, known colloquially as the Grand Aqueduct, runs from Danjiangkou Reservoir on the Han River, a tributary of the Yangtze, to Beijing. This project involved raising the height of the Danjiangkou Dam by increasing the dam's crest elevation from 162 m to 176.6 m above sea level. This addition to the dam's height allows the water level in the reservoir to rise from 157 m to 170 m above sea level and thus permits the flow into the water diversion canal to begin downhill, pulled by gravity into the lower elevation of the canals.
The central route crosses the North China Plain. The canal was constructed to create a continuous downhill flow all the way from the Danjiangkou Reservoir to Beijing without the need for pumping stations. The greatest engineering challenge of the route was building two tunnels under the Yellow River to carry the canal's flow. Construction on the central route began in 2004. In 2008, the 307 km-long northern stretch of the central route was completed at a cost of $2 billion. Water in that stretch of the canal does not come from the Han River but from reservoirs in Hebei Province, south of Beijing. Farmers and industries in Hebei had to cut back on water consumption to allow for water to be transferred to Beijing.
On mapping services, one can see the canal's intake at the Danjiangkou Reservoir; its crossing of the Baihe River north of Nanyang, Henan; the Shahe River in Lushan County; the Ying River in Yuzhou; and the Yellow River northeast of Zhengzhou; as well as its entrance into the southwestern suburbs of Beijing at the Juma River in Zhuozhou, Hebei.
The whole project was expected to be completed around 2010. Final completion was on 12 December 2014, to allow for more environmental protection along the route. One problem was the impact of the project on the Han River below the Danjiangkou Dam, from which approximately one-third of the route's total water is diverted. To mitigate this, another canal is being built to divert water from the Three Gorges Reservoir to the Danjiangkou Reservoir. Construction of this project, named the Yinjiangbuhan tunnel, began in July 2022. It is set to take an estimated ten years to complete.
Another major challenge was the resettlement of around 330,000 people who lived near Danjiangkou Reservoir at its former lower elevation and along the route of the project. On 18 October 2009, Chinese officials began to relocate residents from the areas of Hubei and Henan provinces that would be affected by the project. The completed route of the Grand Aqueduct is about 1,264 km long and initially provided 9.5 km3 of water annually. By 2030, the project is slated to increase this transfer to 12–13 km3 per year. Although the transfer will be lower in dry years, it is projected that it will be able to provide a flow of at least 6.2 km3/year at all times with 95% confidence.
Industries are prohibited from locating on the reservoir's watershed to keep its water drinkable.
West route
There are long-standing plans to divert about 200 cubic kilometers of water per year from the upstream sections of six rivers in southwestern China, including the Mekong (Lancang River), the Yarlung Zangbo (called Brahmaputra further downstream), and the Salween (Nu River), to the Yangtze River, the Yellow River, and ultimately to the dry areas of northern China through a system of reservoirs, tunnels, and natural rivers.
Financing
In 2008, construction costs for the eastern and central routes were estimated to be 254.6 billion yuan ($37.44 billion). The government had budgeted only 53.87 billion yuan ($7.9 billion), less than a quarter of the total cost, at that time. This included 26 billion from the central government and special accounts, 8 billion from local governments, and almost 20 billion in loans. As of 2008, around 30 billion yuan had been spent on the construction of the eastern (5.66 billion yuan) and central routes (24.82 billion yuan). Costs of the projects have increased significantly.
By 2014, more than 208.2 billion RMB (34 billion USD) had been spent, with construction on the western route not yet started. In 2024, 500 billion RMB had been spent on the project.
Criticism
The project required resettling at least 330,000 people in central China. Critics have warned the water diversion will cause environmental damage, and some villagers said officials had forced them to sign agreements to relocate.
In 2013, Radio Free Asia reported that fish farmers on Dongping Lake, on the project's eastern route, in Shandong, claimed that the polluted Yangtze River water entering the lake was killing their fish. Subsequent scientific research showed that the water diversion improved the water environment of Dongping Lake.
Scientists have been concerned that the project will increase water evaporation loss. The exact amount of evaporation loss is not known, but it may be improved in the future as more water is transferred and the flow rate increases.
Engineer Wang Mengshu noted that a tunnel structure would have reduced the project's cost, as the ground-level canal required more excavation and land acquisition as well as the construction of 1,300 bridges.
See also
Water resources of China
Meng Xuenong, the project's deputy director (2003–2007)
Irtysh–Karamay–Ürümqi Canal, in Xinjiang province
Central Yunnan Water Diversion Project, similar project under construction in southern China
References
External links
Official website: 中国南水北调 (in Chinese) / South-to-North Water Diversion (in English)
"Thirsty China to divert the mighty Yangtze", CNN, 15 November 2001
Water industry article
"Beneath Booming Cities, China's Future Is Drying Up", The New York Times, 28 September 2007
Map of the South–North Water Transfer Project at The New York Times
Aqueducts in China
Proposed infrastructure in China
Proposed canals
Interbasin transfer
Environmental issues in China
Irrigation in China
Canals in China
Macro-engineering
Megaprojects | South–North Water Transfer Project | [
"Engineering",
"Environmental_science"
] | 2,116 | [
"Interbasin transfer",
"Hydrology",
"Macro-engineering",
"Megaprojects"
] |
6,752,609 | https://en.wikipedia.org/wiki/Underwater%20vision | Underwater vision is the ability to see objects underwater, and this is significantly affected by several factors. Underwater, objects are less visible because of lower levels of natural illumination caused by rapid attenuation of light with distance passed through the water. They are also blurred by scattering of light between the object and the viewer, also resulting in lower contrast. These effects vary with wavelength of the light, and color and turbidity of the water. The vertebrate eye is usually either optimised for underwater vision or air vision, as is the case in the human eye. The visual acuity of the air-optimised eye is severely adversely affected by the difference in refractive index between air and water when immersed in direct contact. Provision of an airspace between the cornea and the water can compensate, but has the side effect of scale and distance distortion. The diver learns to compensate for these distortions. Artificial illumination is effective to improve illumination at short range.
Stereoscopic acuity, the ability to judge relative distances of different objects, is considerably reduced underwater, and this is affected by the field of vision. A narrow field of vision caused by a small viewport in a helmet results in greatly reduced stereoacuity, and associated loss of hand-eye coordination. At very short range in clear water, distance is underestimated, in accordance with magnification due to refraction through the flat lens of the mask, but at greater distances (beyond arm's reach) the distance tends to be overestimated, to a degree influenced by turbidity. Both relative and absolute depth perception are reduced underwater. Loss of contrast results in overestimation, and magnification effects account for underestimation at short range. Divers can to a large extent adapt to these effects over time and with practice.
Light rays bend when they travel from one medium to another; the amount of bending is determined by the refractive indices of the two media. If one medium has a particular curved shape, it functions as a lens. The cornea, humours, and crystalline lens of the eye together form a lens that focuses images on the retina. The eye of most land animals is adapted for viewing in air. Water, however, has approximately the same refractive index as the cornea (both about 1.33), effectively eliminating the cornea's focusing properties. When the eye is immersed in water, images are focused behind the retina instead of on it, resulting in an extremely blurred image from hypermetropia. This is largely avoided by having an air space between the water and the cornea, trapped inside the mask or helmet.
Water attenuates light due to absorption and as light passes through water colour is selectively absorbed by the water. Color absorption is also affected by turbidity of the water and dissolved material. Water preferentially absorbs red light, and to a lesser extent, yellow, green and violet light, so the color that is least absorbed by water is blue light. Particulates and dissolved materials may absorb different frequencies, and this will affect the color at depth, with results such as the typically green color in many coastal waters, and the dark red-brown color of many freshwater rivers and lakes due to dissolved organic matter.
Visibility is a term which generally predicts the ability of some human, animal, or instrument to optically detect an object in the given environment, and may be expressed as a measure of the distance at which an object or light can be discerned. Factors affecting visibility include illumination, length of the light path, particles which cause scattering, dissolved pigments which absorb specific colours, and salinity and temperature gradients which affect refractive index. Visibility can be measured in any arbitrary direction, and for various colour targets, but horizontal visibility of a black target reduces the variables and meets the requirements for a straight-forward and robust parameter for underwater visibility. Instruments are available for field estimates of visibility from the surface, which can inform the dive team on probable complications.
Illumination
Illumination of underwater environment is limited by the characteristics of the water.
Sources of underwater illumination include daylight, light emitted by organisms (bioluminescence), and artificial lighting.
Artificial illumination
Backscatter has a greater effect with artificial illumination than with natural light, because the light source is usually much closer to the viewer.
Focus
Water has a significantly different refractive index to air, and this affects the focusing of the eye. Most animals' eyes are adapted to either underwater or air vision, and do not focus properly when in the other environment.
Fish
The crystalline lenses of fishes' eyes are extremely convex, almost spherical, and their refractive indices are the highest of all the animals. These properties enable proper focusing of the light rays and in turn proper image formation on the retina. This convex lens gives the name to the fisheye lens in photography.
Humans
By wearing a flat diving mask, humans can see clearly underwater. The mask's flat window separates the eyes from the surrounding water by a layer of air. Light rays entering from water into the flat parallel window change their direction minimally within the window material itself. But when these rays exit the window into the air space between the flat window and the eye, the refraction is quite noticeable. The view paths refract (bend) in a manner similar to viewing fish kept in an aquarium. Linear polarizing filters decrease visibility underwater by limiting ambient light and dimming artificial light sources.
While wearing a flat scuba mask or goggles, objects underwater will appear 33% bigger (34% bigger in salt water) or 25% closer than they actually are. Also pincushion distortion and lateral chromatic aberration are noticeable. Double-dome masks restore natural sized underwater vision and field of view, with certain limitations.
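A minimal sketch of the flat-mask magnification described above, using the small-angle (paraxial) approximation in which apparent distance scales with the ratio of refractive indices; the index values are assumed round figures, not measured data:

def flat_mask_view(true_distance_m, n_water=1.33, n_air=1.00):
    """Apparent distance and angular magnification through a flat mask window."""
    apparent_distance = true_distance_m * n_air / n_water  # object seems closer
    magnification = n_water / n_air                        # object seems larger
    return apparent_distance, magnification

for n, label in [(1.33, "fresh water"), (1.34, "salt water")]:
    d, m = flat_mask_view(2.0, n_water=n)
    print(f"{label}: a 2.0 m distant object appears ~{d:.2f} m away, ~{(m - 1) * 100:.0f}% larger")
# fresh water: ~1.50 m away and ~33% larger; salt water: slightly closer and larger still.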
Optical correction
Divers can wear contact lenses under the diving mask or helmet. The risk of loss depends on the security of the mask or helmet, and is very low with a helmet. Framed lenses are available for wear in some helmets and full-face masks, but they can be difficult to defog if there is no fresh, dry, gas flow over them. The frame may be mounted to the helmet or mask, or worn on the head in the usual way, but they cannot be adjusted during a dive if they move out of position.
Glasses worn outside the mask will have different refraction out of the water to underwater, because of the different refractive indices of air and water in contact with the lens surfaces.
Diving masks can be fitted with lenses for divers needing optical correction to improve vision. Corrective lenses are ground flat on one side and optically cemented to the inside face of the mask lens. This provides the same amount of correction above and below the surface of the water as the curved surface of the lens is in contact with air in both cases. Bifocal lenses are also available for this application. Some masks are made with removable lenses, and a range of standard corrective lenses are available which can be fitted. Plastic self-adhesive lenses that can be applied to the inside of the mask may fall off if the mask is flooded for a significant period. Contact lenses may be worn under a mask or helmet, but there is some risk of losing them if the mask floods.
Physiological variations
A very near-sighted person can see more or less normally underwater. Scuba divers with an interest in underwater photography may notice presbyopic changes while diving before they recognize the symptoms in their normal routines, due to the near focus in low light conditions.
The Moken people of South-East Asia are able to focus underwater to pick up tiny shellfish and other food items. Gislén et al. have compared Moken and untrained European children and found that the underwater visual acuity of the Moken was twice that of their untrained European counterparts. European children after 1 month of training also showed the same level of underwater visual acuity.
This is due to the contraction of the pupil, instead of the usual dilation (mydriasis) that is undergone when a normal, untrained eye, accustomed to viewing in air, is submerged.
Color vision
Water attenuates light due to absorption which varies as a function of frequency. In other words, as light passes through a greater distance of water color is selectively absorbed by the water. Color absorption is also affected by turbidity of the water and dissolved material.
Water preferentially absorbs red light, and to a lesser extent, yellow, green and violet light, so the color that is least absorbed by water is blue light. Particulates and dissolved materials may absorb different frequencies, and this will affect the color at depth, with results such as the typically green color in many coastal waters, and the dark red-brown color of many freshwater rivers and lakes due to dissolved organic matter.
Fluorescent paints absorb higher frequency light to which the human eye is relatively insensitive and emit lower frequencies, which are more easily detected. The emitted light and the reflected light combine and may be considerably more visible than the original light. The most visible frequencies are also those most rapidly attenuated in water, so the effect is for greatly increased colour contrast over a short range, until the longer wavelengths are attenuated by the water.
The best colors to use for visibility in water was shown by Luria et al. and quoted from Adolfson and Berghage below:
A. For murky, turbid water of low visibility (rivers, harbors, etc.)
1. With natural illumination:
a. Fluorescent yellow, orange, and red.
b. Regular yellow, orange, and white.
2. With incandescent illumination:
a. Fluorescent and regular yellow, orange, red and white.
3. With a mercury light source:
a. Fluorescent yellow-green and yellow-orange.
b. Regular yellow and white.
B. For moderately turbid water (sounds, bays, coastal water).
1. With natural illumination or incandescent light source:
a. Any fluorescent in the yellows, oranges, and reds.
b. Regular yellow, orange, and white.
2. With a mercury light source:
a. Fluorescent yellow-green and yellow-orange.
b. Regular yellow and white.
C. For clear water (southern water, deep water offshore, etc.).
1. With any type of illumination fluorescent paints are superior.
a. With long viewing distances, fluorescent green and yellow-green.
b. With short viewing distances, fluorescent orange is excellent.
2. With natural illumination:
a. Fluorescent paints.
b. Regular yellow, orange, and white.
3. With incandescent light source:
a. Fluorescent paints.
b. Regular yellow, orange, and white.
4. With a mercury light source:
a. Fluorescent paints.
b. Regular yellow, white.
The most difficult colors at the limits of visibility with a water background are dark colors such as gray or black.
Visibility
Visibility is a term which generally predicts the ability of some human or instrument to detect an object in the given environment, and may be expressed as a measure of the distance at which an object or light can be discerned. The theoretical black body visibility of pure water based on the values for the optical properties of water for light of 550 nm has been estimated at 74 m. For the case of a relatively large object, sufficiently illuminated by daylight, the horizontal visibility of the object is a function of the photopic beam attenuation coefficient (spectral sensitivity of the eye). This function has been reported as 4.6 divided by the photopic beam attenuation coefficient.
Factors affecting visibility include: particles in the water (turbidity), salinity gradients (haloclines), temperature gradients (thermoclines) and dissolved organic matter.
Reduction of contrast with distance in a horizontal plane at a specific wavelength has been found to depend directly on the beam attenuation coefficient for that wavelength. The inherent contrast of a black target is -1, so the visibility of a black target in the horizontal direction depends on a single parameter, which is not the case for any other colour or direction, making horizontal visibility of a black target the simplest case, and for this reason it has been proposed as a standard for underwater visibility, as it can be measured with reasonably simple instrumentation.
The photopic beam attenuation coefficient, on which diver visibility depends, is the attenuation of natural light as perceived by the human eye, but in practice it is simpler and more usual to measure the attenuation coefficient for one or more wavelength bands. It has been shown that the function 4.8 divided by the photopic beam attenuation coefficient, as derived by Davies-Colley, gives a value for visibility with an average error of less than 10% for a large range of typical coastal and inland water conditions and viewing conditions, and the beam attenuation coefficients for a single wavelength band at about 530 nm peak is a suitable proxy for the full visible spectrum for many practical purposes with some small adjustments.
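A minimal sketch of the visibility estimate described above, treating horizontal black-target visibility as a constant divided by the photopic beam attenuation coefficient; the attenuation values below, and the choice between the 4.6 and 4.8 constants quoted in this article, are illustrative assumptions:

def black_target_visibility(beam_attenuation_per_m, constant=4.8):
    """Estimated horizontal sighting range (m) of a black target."""
    return constant / beam_attenuation_per_m

# Illustrative attenuation coefficients (1/m), not measured values:
for c, water in [(0.065, "very clear water"), (0.5, "coastal water"), (2.0, "turbid harbour")]:
    print(f"{water}: ~{black_target_visibility(c):.0f} m visibility")
# Roughly 74 m, 10 m and 2 m respectively; the first figure is comparable to the
# theoretical pure-water value of about 74 m quoted earlier.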
Measurement of visibility
The standard measurement for underwater visibility is the distance at which a Secchi disc can be seen.
The range of underwater vision is usually limited by turbidity. In very clear water visibility may extend as far as about 80m, and a record Secchi depth of 79 m has been reported from a coastal polynya of the Eastern Weddell Sea, Antarctica. In other sea waters, Secchi depths in the 50 to 70 m range have occasionally been recorded, including a 1985 record of 53 m in the Eastern and up to 62 m in the tropical Pacific Ocean. This level of visibility is seldom found in surface freshwater. Crater Lake, Oregon, is often cited for clarity, but the maximum recorded Secchi depth using a 2 m disc is 44 m. The lakes of the McMurdo Dry Valleys of Antarctica and Silfra in Iceland have also been reported as exceptionally clear.
Visibility can be measured in an arbitrary direction, and of various colour targets, but horizontal visibility of a black target reduces the variables and meets the requirements for a straight-forward and robust parameter for underwater visibility, which can be used to make operational decisions for mine hunters and explosive ordnance disposal teams.
An instrument for measuring underwater visibility basically measures light transmission through the water between the target and the observer, to calculate the loss, and is called a transmissometer. By measuring the amount of light which is transmitted from a light source of known strength and wavelength distribution, through a known distance of water to a calibrated light meter, the clarity of water can be objectively quantified. A wavelength of 532 nm (green) aligns well with the peak of the human visual perception spectrum, but other wavelengths may be used. Transmissometers are more sensitive at low particulate concentration and are better suited for measuring relatively clear water.
Measurement of turbidity
Nephelometers are used for measuring suspended particles in turbid waters where they have a more linear response than transmissometers. Turbidity, or cloudiness, of water is a relative measure. It is an apparent optical property which varies depending on the properties of the suspended particles, illumination, and instrument characteristics. Turbidity is measured in nephelometer units referenced to a turbidity standard or in Formazin Turbidity Units.
Nephelometers measure the light scattered by suspended particles and respond mainly to the first-order effects of particle size and concentration. Depending on manufacturer, nephelometers measure scattered light in the range between about 90° to 165° off the axis of the beam, and usually use infra-red light with a wavelength of around 660 nm because this wavelength is rapidly absorbed by water, so there is very little contamination of the source due to ambient daylight except near to the surface.
Low visibility
Low visibility refers to a diving environment where the diving medium is turbid and objects cannot be seen clearly at short range even with artificial illumination. The term is not usually used to refer to a simple lack of illumination when the medium is clear. Zero visibility is used to describe conditions when the diver can effectively see nothing outside the mask or helmet, a light must be put against the viewport to see whether it is switched on, and it is not possible for a person with normal vision to read normal instruments (some mask-integrated head-up displays may be legible).
Low visibility is defined by NOAA for operational purposes as: "When visual contact with the dive buddy can no longer be maintained."
DAN-Southern Africa suggest that limited visibility is when a "buddy cannot be discerned at a distance greater than 3 metres."
See also
References
Further reading
Vision
Underwater diving physics | Underwater vision | [
"Physics"
] | 3,423 | [
"Applied and interdisciplinary physics",
"Underwater diving physics"
] |
6,754,094 | https://en.wikipedia.org/wiki/Recognition%20sequence | A recognition sequence is a DNA sequence to which a structural motif of a DNA-binding domain exhibits binding specificity. Recognition sequences are palindromes.
The transcription factor Sp1 for example, binds the sequences 5'-(G/T)GGGCGG(G/A)(G/A)(C/T)-3', where (G/T) indicates that the domain will bind a guanine or thymine at this position.
The restriction endonuclease PstI recognizes, binds, and cleaves the sequence 5'-CTGCAG-3'.
A recognition sequence is different from a recognition site. A given recognition sequence can occur one or more times, or not at all, on a specific DNA fragment. A recognition site is specified by the position of the site. For example, there are two PstI recognition sites in the following DNA sequence fragment, starting at base 9 and 31 respectively. A recognition sequence is a specific sequence, usually very short (less than 10 bases). Depending on the degree of specificity of the protein, a DNA-binding protein can bind to more than one specific sequence. For PstI, which has a single sequence specificity, it is 5'-CTGCAG-3'. It is always the same whether at the first recognition site or the second in the following example sequence. For Sp1, which has multiple (16) sequence specificity as shown above, the two recognition sites in the following example sequence fragment are at 18 and 32, and their respective recognition sequences are 5'-GGGGCGGAGC-3' and 5'-TGGGCGGAAC-3'.
5'-AACGTTAGCTGCAGTCGGGGCGGAGCTAGGCTGCAGGAATTGGGCGGAACCT-3'
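A short sketch showing how the PstI recognition sites in the example fragment above can be located programmatically (positions reported 1-based, matching the positions 9 and 31 mentioned in the text):

seq = "AACGTTAGCTGCAGTCGGGGCGGAGCTAGGCTGCAGGAATTGGGCGGAACCT"
site = "CTGCAG"  # PstI recognition sequence

# Scan every window of the fragment for an exact match to the recognition sequence.
positions = [i + 1 for i in range(len(seq) - len(site) + 1) if seq[i:i + len(site)] == site]
print(positions)  # [9, 31]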
See also
DNA-binding domain
Transcription factor#Classes, for more examples
References
Genetics
Genome editing
Protein structural motifs | Recognition sequence | [
"Engineering",
"Biology"
] | 398 | [
"Genetics techniques",
"Genetics",
"Genome editing",
"Genetic engineering",
"Protein classification",
"Protein structural motifs"
] |
6,754,224 | https://en.wikipedia.org/wiki/Autologous%20endometrial%20coculture | Autologous Endometrial Coculture is a technique of assisted reproductive technology. It involves placing a patient’s fertilized eggs on top of a layer of cells from her own uterine lining, creating a more natural environment for embryo development and maximizing the chance for an in vitro fertilization (IVF) pregnancy.
How Coculture is performed
A typical Coculture cycle consists of the following steps:
1. Once a patient has been deemed an appropriate candidate for the procedure, she undergoes an endometrial biopsy during which a small piece of her uterine lining is removed.
2. The uterine lining sample is sent to a research lab, where it is treated, purified and frozen.
3. The patient then undergoes a typical IVF cycle and is given medication to stimulate egg growth in her ovaries.
4. The patient’s eggs are retrieved and mixed with the sperm. At this time, the lab begins thawing and growing her endometrial cells.
5. Once fertilization is confirmed, the patient’s embryos are placed on top of her own (and now thawed) endometrial cells.
6. Over the next two days, the embryos are closely monitored for growth and development.
7. The patient’s embryos are transferred into her uterus for implantation and pregnancy.
The potential candidate
Coculture can be an effective treatment for patients who have failed previous IVF cycles or who have poor embryo quality.
Advantages
A study of 12,377 embryo cultures showed that endometrial coculture is significantly better than sequential culture media; the rates (fraction) reaching blastocyst stage were 56% versus 46% in the coculture versus the sequential system, respectively, with own oocytes. With eggs from ovum donations, the rates were 71% versus 56%, respectively. Pregnancy rates were 39% vs. 28% and implantation rates were 33% vs. 21%.
In addition to being noninvasive and relatively pain free, Coculture can be performed during a short office visit. The procedure also can improve embryo quality and stimulate embryo growth.
Risks
The risks of Coculture are minimal. The procedure has been performed in over 1000 patients with no reported detrimental effects on embryo growth. Complications involving uterine infection or damage caused by endometrial biopsy are extremely rare.
References
Assisted reproductive technology | Autologous endometrial coculture | [
"Biology"
] | 488 | [
"Assisted reproductive technology",
"Medical technology"
] |
6,755,482 | https://en.wikipedia.org/wiki/Membrane%20reactor | A membrane reactor is a physical device that combines a chemical conversion process with a membrane separation process to add reactants or remove products of the reaction.
Chemical reactors making use of membranes are usually referred to as membrane reactors. The membrane can be used for different tasks:
Separation
Selective extraction of products
Retention of the catalyst
Distribution/dosing of a reactant
Catalyst support (often combined with distribution of reactants)
Membrane reactors are an example of the combination of two unit operations in one step, e.g., membrane filtration with the chemical reaction. The integration of the reaction section with selective extraction of a product allows an enhancement of the conversion compared to the equilibrium value. This characteristic makes membrane reactors suitable for performing equilibrium-limited endothermic reactions.
Benefits and critical issues
Selective membranes inside the reactor lead to several benefits: the reactor section replaces several downstream process steps, and removing a product makes it possible to exceed thermodynamic limitations. In this way, it is possible to reach higher conversions of the reactants or to obtain the same conversion at a lower temperature.
Reversible reactions are usually limited by thermodynamics: when the direct and reverse reactions, whose rates depend on the reactant and product concentrations, are balanced, a chemical equilibrium state is achieved. If temperature and pressure are fixed, this equilibrium state constrains the ratio of product to reactant concentrations, limiting the conversion that can be reached.
This limit can be overcome by removing a product of the reaction: in this way, the system cannot reach equilibrium and the reaction continues, reaching higher conversions (or same conversion at lower temperature).
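A toy numeric sketch of this effect for an isomerization A ⇌ B with equilibrium constant K (the reaction, the K value and the removal fractions are illustrative assumptions, not data from the article): if a membrane continuously removes a fraction f of the product from the reaction zone, the pseudo-equilibrium constraint (1 − f)·x/(1 − x) = K gives a conversion x = K / (K + 1 − f), which approaches complete conversion as f approaches 1.

def conversion_with_removal(K, f):
    """Conversion of A <-> B when a fraction f of product B is removed from the reaction zone."""
    return K / (K + 1.0 - f)

K = 1.0  # assumed equilibrium constant
for f in (0.0, 0.5, 0.9, 0.99):
    print(f"product removal {f:.0%}: conversion {conversion_with_removal(K, f):.1%}")
# 0% removal -> 50% conversion; 99% removal -> ~99% conversion.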
Nevertheless, there are several hurdles to industrial commercialization due to the technical difficulty of designing membranes with long-term stability and due to the high cost of membranes. Moreover, there is as yet no flagship process driving the technology, even though in recent years it has been successfully applied to hydrogen production and hydrocarbon dehydrogenation.
Reactor configurations
Generally, membrane reactors can be classified based on the membrane position and reactor configuration. Usually there is a catalyst inside: if the catalyst is installed inside the membrane, the reactor is called catalytic membrane reactor (CMR); if the catalyst (and the support) are packed and fixed inside, the reactor is called packed bed membrane reactor; if the speed of the gas is high enough, and the particle size is small enough, fluidization of the bed occurs and the reactor is called fluidized bed membrane reactor. Other types of reactor take the name from the membrane material, e.g., zeolite membrane reactor.
Among these configurations, higher attention in recent years, particularly in hydrogen production, is given to fixed bed and fluidized bed: in these cases the standard reactor is simply integrated with membranes inside reaction space.
Membrane reactors for hydrogen production
Today hydrogen is mainly used in chemical industry as a reactant in ammonia production and methanol synthesis, and in refinery processes for hydrocracking. Moreover, there is a growing interest in its use as energy carrier and as fuel in fuel cells.
More than 50% of hydrogen is currently produced from steam reforming of natural gas, due to low costs and the fact that it is a mature technology. Traditional processes are composed of a steam reforming section, to produce syngas from natural gas, two water gas shift reactors which enrich the syngas in hydrogen, and a pressure swing adsorption unit for hydrogen purification. Membrane reactors provide process intensification by combining all these sections in a single unit, with both economic and environmental benefits.
Membranes for hydrogen production
To be suitable for the hydrogen production industry, membranes must have a high flux, high selectivity towards hydrogen, low cost and high stability. Among membranes, dense inorganic membranes are the most suitable, having a selectivity orders of magnitude higher than porous ones. Among dense membranes, metallic ones are the most used due to their higher fluxes compared to ceramic ones.
The most used material in hydrogen separation membranes is palladium, particularly its alloy with silver. This metal, even though it is more expensive than other metals, shows a very high solubility for hydrogen.
The transport mechanism of hydrogen inside palladium membranes follows a solution/diffusion mechanism: the hydrogen molecule is adsorbed onto the surface of the membrane, where it dissociates into hydrogen atoms; these atoms cross the membrane by diffusion (see palladium hydride) and recombine into hydrogen molecules on the low-pressure side of the membrane, from which they desorb.
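Because the rate-limiting step is usually atomic diffusion through the metal, the hydrogen flux through a palladium membrane is commonly described by Sieverts'-law behaviour, with the driving force proportional to the difference of the square roots of the hydrogen partial pressures. A minimal sketch, assuming an illustrative permeability, thickness and operating pressures (not values from the article):

import math

def sieverts_flux(permeability, thickness_m, p_retentate_pa, p_permeate_pa):
    """Hydrogen flux (mol m^-2 s^-1) through a dense Pd membrane, Sieverts'-law form."""
    return (permeability / thickness_m) * (math.sqrt(p_retentate_pa) - math.sqrt(p_permeate_pa))

permeability = 1.0e-8   # mol m^-1 s^-1 Pa^-0.5, assumed order of magnitude for a Pd-Ag membrane
thickness = 5.0e-6      # m, assumed membrane thickness
flux = sieverts_flux(permeability, thickness, p_retentate_pa=10e5, p_permeate_pa=1e5)
print(f"H2 flux ~ {flux:.2f} mol m^-2 s^-1")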
In recent years, several works were performed to study the integration of palladium membranes inside fluidized bed membrane reactors for hydrogen production.
Other applications
Membrane bioreactors for wastewater treatment
Submerged and sidestream membrane bioreactors in wastewater treatment plants are the most developed filtration based membrane reactors.
Electrochemical membrane reactors ecMR
The production of chlorine (Cl2) and caustic soda (NaOH) from NaCl is carried out industrially by the chlor-alkali process using a cation-conducting polyelectrolyte membrane. It is used on a large scale and has replaced diaphragm electrolysis. Nafion has been developed as a bilayer membrane to withstand the harsh conditions during the chemical conversion.
Biological systems
In biological systems, membranes fulfill a number of essential functions. The compartmentalization of biological cells is achieved by membranes. The semi-permeability allows to separate reactions and reaction environments. A number of enzymes are membrane bound and often mass transport through the membrane is active rather than passive as in artificial membranes, allowing the cell to keep up gradients for example by using active transport of protons or water.
The use of a natural membrane was the first example of the utilization of a membrane for a chemical reaction. By using the selective permeability of a pig's bladder, water could be removed from a condensation reaction to shift the equilibrium position of the reaction towards the condensation products, according to Le Chatelier's principle.
Size exclusion: Enzyme Membrane Reactor
As enzymes are macromolecules and often differ greatly in size from reactants, they can be separated by size exclusion membrane filtration with ultra- or nanofiltration artificial membranes. This is used on industrial scale for the production of enantiopure amino acids by kinetic racemic resolution of chemically derived racemic amino acids. The most prominent example is the production of L-methionine on a scale of 400t/a. The advantage of this method over other forms of immobilization of the catalyst is that the enzymes are not altered in activity or selectivity as it remains solubilized.
The principle can be applied to all macromolecular catalysts which can be separated from the other reactants by means of filtration. So far, only enzymes have been used to a significant extent.
Reaction combined with pervaporation
In pervaporation, dense membranes are used for separation. For dense membranes the separation is governed by the difference of the chemical potential of the components in the membrane. The selectivity of the transport through the membrane depends on the difference in solubility of the components in the membrane and their diffusivity through it. An example is the selective removal of water using hydrophilic (water-selective) membranes, which can be used to overcome thermodynamic limitations of condensation reactions, e.g., esterifications, by removing water.
Dosing: Partial oxidation of methane to methanol
In the STAR process, methane from natural gas is catalytically converted with oxygen from air to methanol by the partial oxidation 2CH4 + O2 → 2CH3OH.
The partial pressure of oxygen has to be low to prevent the formation of explosive mixtures and to suppress the successive reaction to carbon monoxide, carbon dioxide and water. This is achieved by using a tubular reactor with an oxygen-selective membrane. The membrane allows the uniform distribution of oxygen as the driving force for the permeation of oxygen through the membrane is the difference in partial pressures on the air side and the methane side.
Notes
References
External links
European project Fuelcell website, about membrane reactors application for bio-ethanol conversion
European project Bionico website, about membrane reactors application in hydrogen production from biogas
European project Macbeth website, about various applications of membrane reactors and their industrialization
Chemical reactors
Membrane technology
Industrial water treatment | Membrane reactor | [
"Chemistry",
"Engineering"
] | 1,698 | [
"Chemical reaction engineering",
"Separation processes",
"Chemical reactors",
"Water treatment",
"Chemical equipment",
"Industrial water treatment",
"Membrane technology"
] |
31,456,567 | https://en.wikipedia.org/wiki/Flow%20conditioning | Flow conditioning ensures that the "real world" environment closely resembles the "laboratory" environment for proper performance of inferential flowmeters like orifice, turbine, coriolis, ultrasonic etc.
Types of flow
Flow in pipes can be classified as follows:
Fully developed flow (found in world-class flow laboratories)
Pseudo-fully developed flow
Non-swirling, non-symmetrical flow
Moderate swirling, non-symmetrical flow
High swirling, symmetrical flow
Types of flow conditioners
Flow conditioners, shown in fig.(a), can be grouped into the following three types:
Those that eliminate swirl only (tube bundles)
Those that eliminate swirl and non-symmetry, but do not produce pseudo fully developed flow
Those that eliminate swirl and non-symmetry and produce pseudo fully developed flow (high-performance flow conditioners)
Straightening devices such as honeycombs and vanes inserted upstream of the flow meter can reduce the length of straight pipe required. However, they produce only marginal improvements in measurement accuracy and may still require significant length of straight pipe, which a cramped installation site may not permit.
A flow straightener, sometimes called a honeycomb, is a device used to straighten the air flow in a wind tunnel. It is a passage of ducts laid along the axis of the main air stream to minimize the lateral velocity components caused by swirling motion in the air flow during entry. The cross-section shapes of these "honeycombs" may be square, circular or regular hexagonal cells.
A low-cost handmade flow straightener
A low-cost flow straightener can be constructed using drinking straws, as they have low cost and good efficiency. The MythBusters television show used such a construction for their wind tunnel, as did an experimental wind tunnel at MIT (Maniet). The straws should be cut to equal size and placed in a frame.
Effectiveness of honeycomb
The effectiveness of a honeycomb in reducing the swirl and turbulence level can be studied by simulating the flow field using a standard k-ε turbulence model in commercial computational fluid dynamics (CFD) software. CFD is a precise and economical approach to estimating the effectiveness of a honeycomb.
Computational model
A computational domain of the honeycomb is created, as shown in Fig. 1.
Computationally, it is very difficult to provide the realistic non-uniform flow at the entry of the honeycomb that is experienced in experiments. Such random inlet conditions would essentially simulate the realistic case in which air can enter the honeycomb from any direction and at any level of turbulence. Therefore, a special domain is designed for introducing a practical inlet condition.
Meshing of Computational Models
The solid model of the honeycomb is meshed in GAMBIT 2.3.16, as shown in Fig. 2. A structured rectangular mesh is used for the simulation with the square honeycomb configuration. Governing equations for mass and momentum conservation for subsonic flow, along with the equations for turbulence and porous flow, are solved for the honeycomb using commercial CFD software. A RANS-type RNG k-ε model is used for the turbulence modelling.
Boundary Conditions
The separate domain created upstream of the honeycomb is provided with various inlet conditions to produce a disorderly motion at its exit, which is then given as the inlet to the honeycomb cells. This essentially simulates the more realistic case in which the flow can enter the honeycomb from any direction. Specifications of this inlet, along with the other necessary boundary conditions, are given below. Flow at the inlet of the honeycomb necessarily has turbulent and swirling motion; in order to incorporate these requirements, a separate fluid domain is constructed.
The top and bottom circular faces are treated as inlets to this domain in order to obtain a flow field with a higher magnitude of lateral velocity. This domain is provided with vertical and horizontal cylinders as obstructions near the inlet to produce sufficient swirl at the exit of this section. A tetrahedral mesh, as shown in Fig. 3, is generated for this geometry, with 147,666 nodes. Three faces of this configuration are specified as inlets with velocity boundary conditions. The fluid velocity at these inlet faces is chosen so that the averaged mean velocity at the outlet is 1 m/s, as in the operational wind tunnel.
A pressure outlet boundary condition is used at the exit of the settling chamber, where the gauge pressure at the outlet is set to zero.
It is always possible to predict the entire flow field by meshing the whole fluid domain; however, the flow field can also be predicted using a symmetry boundary condition, which reduces the mesh requirement and the computational effort. Therefore, a symmetry boundary is used at the periphery of the computational domain.
All the solid boundaries in the computational domain are specified as viscous walls with a no-slip wall boundary condition.
The turbulence intensity profile at the exit of the upstream turbulence-generating domain is shown in Fig. 4: the turbulence intensity is a maximum of about 30% at the centre and around 16-18% at the walls. This profile is then applied at the honeycomb inlet as shown in Fig. 2, and the turbulence intensity profile leaving the honeycomb is shown in Fig. 5. In this profile the turbulence intensity is reduced from 30% to 1.2% at the centre and from 16% to 3.5% at the walls, meaning that the effectiveness of the honeycomb is very high, around 96%.
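A small sketch of the effectiveness figure quoted above, computed simply as the fractional reduction in turbulence intensity across the honeycomb (the numbers are those given in the text):

def honeycomb_effectiveness(ti_in, ti_out):
    """Fractional reduction in turbulence intensity across the honeycomb."""
    return 1.0 - ti_out / ti_in

print(f"centre: {honeycomb_effectiveness(30.0, 1.2):.0%}")     # ~96%
print(f"near wall: {honeycomb_effectiveness(16.0, 3.5):.0%}")  # ~78%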
Natural gas measurement
Natural gas that carries a significant amount of liquids with it is known as wet gas, whereas natural gas that is produced without liquids is known as dry gas. Dry gas may also be gas that has been treated to remove all liquids. The effect of flow conditioning on various popular meters used in gas measurement is explained below.
Pipe flow conditions
The most important, as well as the most difficult to measure, aspects of flow measurement are the flow conditions within the pipe upstream of a meter. Flow conditions mainly refer to the flow velocity profile, irregularities in the profile, varying turbulence levels within the flow (the turbulence intensity profile), swirl and any other fluid flow characteristics which will cause the meter to register a flow different from that expected. Such conditions change the meter's behaviour from the original calibration state, referred to as reference conditions, which are free of installation effects.
Installation effects
Installation effects such as insufficient straight pipe, exceptional pipe roughness or smoothness, elbows, valves, tees and reducers cause the flow conditions within the pipe to vary from the reference conditions. How these installation effects impact the meter is very important, since devices which create upstream installation effects are common components of any standard metering design. Flow conditioning refers to the process of artificially generating a reference, fully developed flow profile and is essential to enable accurate measurement while maintaining a cost-competitive meter standard design. The meter calibration factors are valid only if geometric and dynamic similarity exists between the metering and calibration conditions. In fluid mechanics, this is commonly referred to as the Law of Similarity.
Law of similarity
The principle of Law of Similarity is used extensively for theoretical and experimental fluid machines. With respect to calibration of flowmeters, the Law of Similarity is the foundation for flow measurement standards. To satisfy the Law of Similarity, the central facility concept requires geometric and dynamic similarity between the laboratory meter and the installed conditions of this same meter over the entire custody transfer period. This approach assumes that the selected technology does not exhibit any significant sensitivity to operating or mechanical variations between calibrations. The meter factor determined at the time of calibration is valid if both dynamic and geometric similarity exists between the field installation and the laboratory installation of the artifact.
A proper manufacturer's experimental pattern locates sensitive regions to explore, measure and empirically adjust. The manufacturer's recommended correlation method is a rational basis for performance prediction provided the physics do not change. For instance, the physics are different between subsonic and sonic flow. To satisfy the Law of Similarity the in situ calibration concept requires geometric and dynamic similarity between the calibrated meter and the installed conditions of this same meter over the entire custody transfer period. This approach assumes that the selected technology does not exhibit any significant sensitivity to operating or mechanical variations between calibrations. The meter factor determined at the time of calibration is valid if both dynamic and geometric similarity exists in the "field meter installation" over the entire custody transfer period.
Velocity flow profile
The most commonly used description of flow conditions within the pipe is the flow velocity profile. Fig.(1) shows the typical flow velocity profile for natural gas measurement. The shape of the flow velocity profile can be described by a power law of the form $\frac{u(r)}{u_{max}} = \left(1 - \frac{r}{R}\right)^{1/n}$ ---- (1), where $u(r)$ is the axial velocity at radius $r$, $u_{max}$ is the centreline velocity and $R$ is the pipe radius.
The value of n determines the shape of the flow velocity profile. Eq.(1) can be used to determine the flow profile's shape within the pipe by fitting the curve to experimentally measured velocity data. In 1993, transverse flow velocities were measured within the high-pressure natural gas environment using hot-wire technology to accomplish the data fit. A fully developed flow profile was used as the reference state for meter calibration and determination of the coefficient of discharge (Cd). For a fully developed profile in a smooth pipe, n is approximately 7.5 at moderate Reynolds numbers and approximately 10.0 at higher Reynolds numbers. Since n is a function of Reynolds number and friction factor, more accurate values of n can be estimated by using eq.(2),
$n = \frac{1}{\sqrt{f}}$ ---- (2)
where f is the friction factor. A good estimate of a fully developed velocity profile can be used by those without adequate equipment to actually measure the flow velocities within the pipe. The following straight pipe equivalent length, eq.(3), was utilized to ensure that a fully developed flow profile exists.
---- (3)
The pipe lengths required by eq.(3) are significant, hence devices are needed that can condition the flow over a shorter pipe length, allowing metering packages to be both cost-competitive and accurate. The velocity flow profile is in general three-dimensional. Normally its description requires no indication of axial orientation unless the profile is asymmetric; if asymmetry exists, then the axial orientation with respect to some suitable plane of reference is required. Asymmetry exists downstream of installation effects such as elbows or tees. Usually, the velocity flow profile is described on two planes 90° apart. Using the latest software technology, a full pipe cross-sectional description of the velocity profile is possible provided sufficient data points are available.
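A minimal sketch of the power-law profile of eq.(1), evaluated for the two n values quoted above (the radial stations and centreline velocity are illustrative assumptions):

def power_law_velocity(r_over_R, n, u_max=1.0):
    """Axial velocity at normalized radius r/R for the power-law profile u/u_max = (1 - r/R)^(1/n)."""
    return u_max * (1.0 - r_over_R) ** (1.0 / n)

for n in (7.5, 10.0):
    profile = [round(power_law_velocity(x, n), 3) for x in (0.0, 0.5, 0.9, 0.99)]
    print(f"n = {n}: u/u_max at r/R = 0, 0.5, 0.9, 0.99 -> {profile}")
# Larger n gives a flatter profile across most of the pipe with a steeper drop near the wall.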
Turbulence intensity
The second description of the flow field state within the pipe is the turbulence intensity. Experiments reported in 1994 showed that metering errors may exist even when the velocity flow profile is fully developed under otherwise perfect pipe flow conditions. Conversely, zero metering error was sometimes found when the velocity profile was not fully developed. This behaviour was attributed to the turbulence intensity of the gas flow, which can cause metering bias error, and it accounts in part for the less than adequate performance of the conventional tube bundle.
Swirl
The third description of the flow field's state is swirl. Swirl is the tangential component of the flow velocity vector; the velocity profile discussed above refers to the axial component, since the velocity vector can be resolved into three mutually orthogonal components. Fig.(2), showing the swirl angle, explains the definition of flow swirl and swirl angle. Note that swirl is usually referenced to full-body rotation (in which the full pipeline flow follows one axis of swirl). In real pipeline conditions, such as downstream of elbows, two or more mechanisms of swirl may be present.
Effects on flow measurement devices
The condition of a flow can affect the performance and accuracy of devices that measure the flow.
Effects of flow conditioning on Orifice meter
The basic orifice mass flow equation provided by API 14.3 and ISO 5167 is given as,
$q_m = C_d \, E_v \, Y \, \frac{\pi}{4} d^2 \sqrt{2 \rho_f \, \Delta P}$ ---- (4)
Where,
$q_m$ = mass flow
$C_d$ = coefficient of discharge
$E_v$ = velocity of approach factor
Y = expansion factor
d = orifice diameter
$\rho_f$ = density of the fluid
$\Delta P$ = differential pressure
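A minimal sketch evaluating eq.(4), with the velocity-of-approach factor expressed in terms of the beta ratio; all numeric inputs (bore, pipe diameter, density, differential pressure, Cd, Y) are illustrative assumptions rather than values taken from any standard:

import math

def orifice_mass_flow(cd, y, d_bore_m, d_pipe_m, density_kg_m3, dp_pa):
    """Mass flow (kg/s) from the orifice equation q_m = Cd*Ev*Y*(pi/4)*d^2*sqrt(2*rho*dP)."""
    beta = d_bore_m / d_pipe_m
    ev = 1.0 / math.sqrt(1.0 - beta ** 4)   # velocity of approach factor
    area = math.pi / 4.0 * d_bore_m ** 2    # orifice bore area
    return cd * ev * y * area * math.sqrt(2.0 * density_kg_m3 * dp_pa)

q = orifice_mass_flow(cd=0.60, y=0.99, d_bore_m=0.05, d_pipe_m=0.10,
                      density_kg_m3=40.0, dp_pa=25_000.0)
print(f"mass flow ~ {q:.2f} kg/s")  # roughly 1.7 kg/s with these assumed inputs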
Now, to use eq.(4), the flow field entering the orifice plate must be free of swirl and exhibit a fully developed flow profile. API 14.3 (1990) and the ISO standards determined the coefficient of discharge by completing numerous calibration tests in which the indicated mass flow was compared to the actual mass flow. In all testing the common requirement was a fully developed flow profile entering the orifice plate. Accurate, standard-compliant meter designs must therefore ensure that a swirl-free, fully developed flow profile is impinging on the orifice plate. There are numerous methods available to accomplish this, commonly known as "flow conditioning".
The first installation option is to use no flow conditioning, but adequate pipe lengths, as given by the straight pipe equivalent length equation above, must then be provided. This generally makes the manufacturing costs for a flow measurement facility unrealistic due to excessively long meter tubes; imagine meter tubes 75 diameters long.
The second and most well known option is the 19-tube tube-bundle flow conditioner. The majority of flow installations in North America contain the tube bundle. With the help of hot-wire, pitot tube and laser-based computerized measurement systems, which allow detailed measurement of the velocity profile and turbulence intensity, it is now known that the tube bundle does not provide fully developed flow and therefore causes biased orifice flow measurement. As a result of these findings, few tube bundles are now specified for flow measurement, and the use of the device is declining. Numerous references are available providing performance results indicating less than acceptable meter performance when using the conventional 19-tube tube bundle. The individual results should be reviewed to ascertain details such as beta ratio, meter tube lengths, Reynolds number and test conditions.
The general indications are that the conventional tube bundle will cause the orifice installation to over-register flow by up to 1.5% when the tube bundle is between 1 and approximately 11 pipe diameters from the orifice plate. This is caused by a flat velocity profile that creates higher differential pressures than a fully developed profile. There is a crossover region from approximately 10 to 15 pipe diameters where the error band is approximately zero. A slight under-registration of flow then occurs for distances between approximately 15 and 25 pipe diameters, due to a peaked velocity profile that creates lower differential pressures than a fully developed profile. At distances greater than 25 pipe diameters the error asymptotes to zero. Fig.(3), showing conventional tube bundle performance, illustrates this typical characteristic behaviour of the popular 19-tube tube bundle. An additional drawback of the conventional 19-tube tube bundle is variation in sizing.
The conventional tube bundle produces errors that depend strongly on installation details, that is, the elbows in and out of plane, tees, valves and the distances from the last pipe fitting to the conditioner and from the conditioner to the orifice plate. These errors are significant; therefore, the latest findings regarding conventional tube bundle performance should be reviewed prior to meter station design and installation.
The final installation option for orifice metering is the perforated plate flow conditioner. A variety of perforated plates have entered the market. These devices are generally designed to rectify the drawbacks of the conventional tube bundle (insufficient accuracy and repeatability). The reader is cautioned to review the performance of the chosen perforated plate carefully prior to installation. A flow conditioner performance test guideline should be used to determine performance. The key elements of a flow conditioner test are:
Perform a baseline calibration test with an upstream length of 70 to 100 pipe diameters of straight meter tube. The baseline coefficient of discharge values should be within the 95% confidence interval for the Reader-Harris/Gallagher (RG) orifice equation (i.e. the coefficient of discharge equation as provided by AGA-3).
Select values of upstream meter tube length, and flow conditioner location, to be used for the performance evaluation. Install the flow conditioner at the desired location. First, perform a test for either the two 90° elbows out-of-plane installation or the high swirl installation, for β = 0.40 and for β = 0.67. This test will show whether the flow conditioner removes swirl from the disturbed flow. If the Cd is within the acceptable region for both values of β (i.e. 0.40 and 0.67), and if the Cd results vary consistently with β, then the conditioner is successful in removing swirl. The tests for the other three installations (namely good flow conditions, partly closed valve and highly disturbed flow) may then be performed for β = 0.67, and the results for other β ratios predicted from the correlation. Otherwise, the tests should be performed for a range of β ratios between 0.20 and 0.75.
Perform the test and determine the flow conditioner performance with the flow conditioner installed in good flow conditions, downstream of a half-closed valve, and for either the double 90° elbow out-of-plane or the high-swirl installation.
Effects of flow conditioning on turbine meter
The turbine meter is available in various manufacturers' configurations of a common theme: turbine blades mounted on a rotor. These devices are designed such that when a gas stream passes through them they spin in proportion to the amount of gas passing over the blades, in a repeatable fashion. Accuracy is then ensured by a calibration, at various Reynolds numbers, that establishes the relationship between rotational speed and volume. The fundamental difference between the orifice meter and the turbine meter is the derivation of the flow equation. The orifice meter flow calculation is based on fluid-flow fundamentals (a first-law-of-thermodynamics derivation using the pipe diameter and vena contracta diameter in the continuity equation). Deviations from theoretical expectation are absorbed into the coefficient of discharge. Thus, one can manufacture an orifice meter of known uncertainty with only the measurement standard in hand and access to a machine shop. The need for flow conditioning, and hence a fully developed velocity flow profile, stems from the original determination of Cd, which used fully developed or 'reference' profiles as explained above.
Conversely, the operation of the turbine meter is not rooted as deeply in the fundamentals of thermodynamics. This is not to say that the turbine meter is in any way an inferior device; sound engineering principles provide its theoretical background. It is essentially an extremely repeatable device whose accuracy is assured via calibration. The calibration, which provides the accuracy, is carried out in good flow conditions (flow free of swirl, with a uniform velocity profile) for every meter manufactured. Deviations from the as-calibrated conditions are considered installation effects, and the sensitivity of the turbine meter to these installation effects is of interest. The need for flow conditioning is driven by the sensitivity of the meter to deviations from the as-calibrated conditions of swirl and velocity profile.
Generally, recent research indicates that turbine meters are sensitive to swirl but not to the shape of the velocity profile. A uniform velocity profile is recommended, but no strict requirements for fully developed flow profiles are indicated. Also, no significant errors are evident when installing single or dual rotor turbine meters downstream of two elbows out-of-plane without flow conditioning devices.
Effects of flow conditioning on ultrasonic meter
Because multipath ultrasonic metering is a comparatively recent technology, it may be useful to discuss its operation in order to illustrate the effects of flow profile distortion and swirl. There are various types of flow measurement utilizing high-frequency sound; the custody-transfer measurement devices available today use the transit-time concept. The time of flight of a sound pulse travelling with the flow is compared to the time of flight against the flow, and this difference is used to infer the average flow velocity along the sound path. Fig. 5 (ultrasonic meter sound path, no flow) illustrates this concept.
The resulting flow equation for the mean velocity experienced by the sound path can be written, in the standard transit-time notation (path length L, path angle θ to the pipe axis, and downstream and upstream transit times t_d and t_u), as

$$\bar{v} = \frac{L}{2\cos\theta}\left(\frac{1}{t_d} - \frac{1}{t_u}\right) \qquad (5)$$
Setting eq. (5) to zero gives the case of no flow, i.e. the actual path of the sound when there is zero flow. For a theoretical flow profile, say a uniform velocity profile in which the no-slip condition at the pipe walls is not applied, Fig. 6 (ultrasonic meter sound path, uniform velocity profile) illustrates the resultant sound path.
A theoretical derivation of the mean-velocity equation for this sound path becomes much more complicated. For a realistic, fully developed velocity profile, Fig. 7 shows a possible sound path resulting from an installation in a real flow.
Here, too, a mathematical derivation for the ultrasonic meter becomes very complicated, and developing a robust flow algorithm to calculate the mean flow velocity along the sound path is correspondingly difficult. Add to this sound-path reflection from the pipe wall, multiple paths to add degrees of freedom, swirl, and departure from an axisymmetric fully developed flow profile, and integrating the actual velocity profile to yield the volume flow rate becomes a considerable accomplishment. Hence the real performance of ultrasonic meters downstream of perturbations matters, and calibration is required.
Effects of flow conditioning on Coriolis meter
The Coriolis meter, shown in Fig. 8, is very accurate in single-phase conditions but inaccurate for measuring two-phase flows, where it poses a complex fluid-structure interaction problem. There is a scarcity of theoretical models available to predict the errors reported by Coriolis meters in such conditions. Flow conditioners have no effect on meter accuracy in wet gas because of the annular flow regime, which is not strongly affected by flow conditioners. In single-phase conditions the Coriolis meter gives accurate measurements even in the presence of severe flow disturbances, so there is no need for flow conditioning upstream of the meter, as there is with other metering technologies such as orifice and turbine meters. In two-phase flows, on the other hand, the meter consistently gives negative errors. The use of flow conditioners clearly affects the reading of the meter in aerated liquids; this phenomenon can be used to obtain a fairly accurate estimate of the flow rate in liquid flows with a low gas volume fraction.
Liquid flow measurement
Flow conditioning has a large effect on the accuracy of liquid turbine meters operating in the presence of flow disturbances. Such disturbances are caused mainly by debris on strainer screens, and their effect has been studied for various upstream piping geometries and different types of flow conditioners.
The effectiveness of a flow conditioner can be indicated by the following two key measurements:
Percentage variation of the average meter factor over the defined range of flow disturbances, for a given flow rate and inlet piping geometry. The smaller this variation, the better the performance of the flow conditioner.
Percentage meter factor repeatability for each flow disturbance, at a given flow rate and inlet piping geometry. The smaller the repeatability figure at a given set of installation and operating conditions, the better the performance of the flow conditioner. A computational sketch of both measures follows.
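The two measures can be made concrete with a short computational sketch. Both the formulas used (range divided by mean, expressed as a percentage) and the meter-factor values below are assumptions chosen purely for illustration; they are not drawn from any standard or data set.

```python
# Illustrative calculation of the two flow-conditioner effectiveness measures.
meter_factors = {                      # disturbance -> repeated meter factors
    "baseline":         [1.0012, 1.0010, 1.0011],
    "half-open valve":  [1.0031, 1.0029, 1.0033],
    "blocked strainer": [0.9978, 0.9981, 0.9975],
}

def pct_variation_of_averages(data):
    """Percentage variation of the average meter factor across disturbances."""
    averages = [sum(v) / len(v) for v in data.values()]
    mean = sum(averages) / len(averages)
    return 100.0 * (max(averages) - min(averages)) / mean

def pct_repeatability(values):
    """Percentage spread of repeated meter factors for one disturbance."""
    mean = sum(values) / len(values)
    return 100.0 * (max(values) - min(values)) / mean

print(f"variation of averages: {pct_variation_of_averages(meter_factors):.3f}%")
for name, values in meter_factors.items():
    print(f"repeatability ({name}): {pct_repeatability(values):.3f}%")
```

The smaller both printed figures are, the better the conditioner is performing under the stated definitions.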
See also
Flow measurement
Orifice meter
Turbine meter
Ultrasonic flow meter
Coriolis meter
Fluid dynamics
Wet gas
Dry gas
Orifice plate
Mass flow meter
Mass flow rate
Volumetric flow rate
References
Bibliography
ANSYS Inc., 2007. Release 11 Documentation for ANSYS Workbench.
Cermak, J.E., 2003. Wind-tunnel development and trends in applications to civil engineering. J. Wind Eng. Ind. Aerodyn. 91 (3), 355–370.
Cermak, J.E., Cochran, L.S., 1992. Physical modeling of the atmospheric surface layer. J. Wind Eng. Ind. Aerodyn. 41–44, 935–946.
Collar, A.R., 1939. The effect of a gauze on velocity distribution in a uniform duct. Aeronaut. Res. Counc. Rep. Memo No. 1867.
Desai, S.S., 2003. Relative roles of computational fluid dynamics and wind tunnel testing in the development of aircraft. Curr. Sci. 84 (1), 49–64.
Derbunovich, G.I., Zemskaya, A.S., Repik, E.U., Sosedko, Y.P., 1993. Optimum Conditions of Turbulence Reduction with Screens, Mechanics of Nonuniform and Turbulent Flows. Nauka, Moscow, pp. 35.
Dryden, H.I., Schubauer, G.B., 1947. The use of damping screens for the reduction of wind tunnel turbulence. J. Aeronautical Sci. 14, 221–228.
Farell, C., Youssef, S., 1996. Experiments on turbulence management using screens and honeycombs. ASME J. Fluids Eng. 118, 26–32.
Ghani, S.A.A.A., Aroussi, A., Rice, E., 2001. Simulation of road vehicle natural environment in a climatic wind tunnel. Simul. Pract. Theory 8 (6–7), 359–375.
Gordon, R., Imbabi, M.S., 1998. CFD simulation and experimental validation of a new closed circuit wind/water tunnel design. J. Fluids Eng. Trans. ASME 120 (2), 311–318.
Groth, J., Johansson, A., 1988. Turbulence reduction by screens. J. Fluids Mech. 197, 139–155.
Hansen, S.O., Sorensen, E.G., 1985. A new boundary-layer wind tunnel at the Danish Maritime Institute. J. Wind Eng. Ind. Aerodyn. 18, 213–224.
Continuum mechanics
Aerodynamics
Chemical process engineering
Fluid dynamics
Fluid mechanics
Piping | Flow conditioning | [
"Physics",
"Chemistry",
"Engineering"
] | 5,276 | [
"Continuum mechanics",
"Building engineering",
"Chemical engineering",
"Classical mechanics",
"Aerodynamics",
"Civil engineering",
"Mechanical engineering",
"Aerospace engineering",
"Piping",
"Chemical process engineering",
"Fluid mechanics",
"Fluid dynamics"
] |
31,457,489 | https://en.wikipedia.org/wiki/Villa%20Girasole | The Villa Girasole (il girasole meaning ‘the sunflower’ in Italian) is a house constructed in the 1930s in Marcellise, northern Italy, near Verona. The conception of architect Angelo Invernizzi, the Girasole rotates to follow the sun as it moves, just as a sunflower opens up and turns to follow the sun. This is how the unique house got its name.
Architect
Angelo Invernizzi, a wealthy Italian engineer of Genoa, Italy, dreamed of building a house that would “maximize the health properties of the sun by rotating to follow it”. He designed the house for himself with the help of Romolo Carapacchi, a mechanical engineer; Fausto Saccorotti, an interior decorator; and Ettore Fagiuoli, an architect. As Invernizzi’s daughter, Lidia Invernizzi, recounts in the 17-minute film “Il girasole: una casa vicino a Verona” by Marcel Meili and Christoph Schaub, Invernizzi could have built the house himself, but he instead invited many people to participate in its creation: painters, sculptors, furniture makers, and more. “People who believed in a new era: nothing should be built as before.” Although he worked and lived in Genoa, Invernizzi had a family connection to Marcellise and wanted to build the house there, amid its hilly splendour and its memories of a simpler life.
History and construction
Invernizzi first began drawing designs for his rotating house in 1929, but construction started in 1931 and took place only during the summer months. Invernizzi and his team used the project as a means to experiment with new materials, such as concrete and fibre cement. "In keeping with the project's experimental nature, a considerable amount of adaptation and refinement accompanied construction". They ended up using aluminium sheeting to replace the concrete on the outside walls because the concrete had developed cracks. At first, Invernizzi expected the house to make only a 180-degree turn, but after seeing it do so he "decided to make the complete turn" of 360 degrees. The project was completed in 1935, after four years.
Interior/Exterior
The Girasole has two storeys and is shaped like the letter "L". It sits on a circular base more than 44 metres across, with a 42-metre-tall tower at the centre, around which the house rotates by means of motors. The "L" rotates over three circular tracks on which 15 trolleys slide the 5,000-cubic-metre building at a speed of 4 mm per second; a full rotation takes 9 hours and 20 minutes. A manual control panel located in the moving part of the house is used to control its rotation. The first floor of the moving part is known as the "day zone" and includes the dining room, the music room, and Mr. and Mrs. Invernizzi's studies, with the kitchen, pantry, and toilet located next to the central tower. An assortment of bedrooms and bathrooms is found on the second floor. Villa Girasole's interior design is such that one experiences a number of different progressions of light throughout the day: "Though the views from either wing differ at any given moment, they share a general orientation to the sun, reducing the chance for conflict over which direction the house should point. All rooms could share an equal amount of daylight or shade".
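A rough consistency check of these figures, assuming (neither point is stated explicitly above) that the 44-metre figure is the diameter of the circular base and that the trolleys run close to its rim:

$$\text{track length} \approx \pi \times 44\ \text{m} \approx 138\ \text{m}, \qquad \frac{138\ \text{m}}{4\ \text{mm/s}} \approx 34{,}600\ \text{s} \approx 9.6\ \text{hours},$$

which is of the same order as the quoted 9 hours 20 minutes (about 33,600 s); the small difference would correspond to the trolleys running on a track slightly inside the outer edge.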
Machinery
Villa Girasole runs on two diesel fuel motors which move the house over circular tracks and allow trolleys to slide the house along. It has been suggested that because the front of the house faces the sun all day, installing solar panels on the roof would be beneficial: gaining and storing energy for those times when the sun is not out.
References
External links
(with images)
Villa Girasole on Architectuul
Girasole
Solar design
Houses completed in 1930 | Villa Girasole | [
"Engineering"
] | 808 | [
"Solar design",
"Energy engineering"
] |
32,949,621 | https://en.wikipedia.org/wiki/DI%20Herculis | DI Herculis is an Algol-type eclipsing binary star in the constellation of Hercules. The system has an apparent magnitude of about +8.5 and consists of two young blue stars of spectral type B5 and B4. It is about two thousand light years from Earth.
The orbit of the stars around their mutual centre of gravity is very elliptical, with an eccentricity of 0.489 and a semi-major axis of 0.201 astronomical units, resulting in an extremely close approach of the two stars at periastron.
Stellar masses of 5.15 and 4.52 solar masses lead to a theoretical precession of 4.27 degrees per century, at odds with the observed precession. However, detailed observations reveal an unexpectedly extreme obliquity of the spin axes of the two stars. One of the two stars is tipped over by at least 70 degrees from the vertical, and the other is tipped the opposite way by more than 80 degrees. Incorporating the effect of oblateness of the stars due to the unusually tilted axes, the predicted precession is consistent with general relativity.
Precession of periastron
The precession of the periastron of the orbit of the stars serves as a test of the predictions of Einstein's general theory of relativity.
The known orbital separation, eccentricity, and stellar masses allow a theoretical prediction of the precession of 4.27 degrees per century (1.93 degrees from classical effects and 2.34 degrees from general relativistic effects). However, the observed precession can be measured from eclipse timing, leading to an original measurement of 1.04 degrees per century and a more precise recent measurement of 1.39 degrees per century.
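The relativistic share of this prediction can be reproduced with the standard periastron-advance formula. The sketch below uses the masses, semi-major axis and eccentricity quoted above; the orbital period of roughly 10.55 days is not quoted in this article and is included here as an assumption, as are the rounded physical constants.

```python
# Periastron advance per orbit from general relativity:
#   d_omega = 6*pi*G*(M1 + M2) / (c^2 * a * (1 - e^2)),
# converted to degrees per century using the orbital period.
import math

G, c = 6.674e-11, 2.998e8          # SI units
M_sun, AU = 1.989e30, 1.496e11

M = (5.15 + 4.52) * M_sun          # total mass quoted above
a = 0.201 * AU                     # semi-major axis quoted above
e = 0.489                          # eccentricity quoted above
P_days = 10.55                     # orbital period (assumed, not quoted above)

d_omega = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 100 * 365.25 / P_days
rate = math.degrees(d_omega) * orbits_per_century

print(f"GR periastron advance ≈ {rate:.2f} degrees per century")
# ≈ 2.3 degrees per century, in line with the relativistic term quoted above.
```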
This discrepancy between theory and experiment has led to extensive studies of the bright binary system in the last thirty years; solutions discussed included
new theories of gravitation such as MOND
tidal forces (perhaps due to unusual internal structure in the stars) leading to a circularisation of the elliptical orbit
a third body in the system
presence of a circumstellar cloud between the two components
unusual rotation axes of the stars
After observations of the Rossiter–McLaughlin effect in 2009, it emerged that the rotation axes of the two stars lay roughly in the orbital plane of the system. When this is taken into account in calculating the rate of precession, the difference between expected and observed precession disappears, so DI Herculis is no longer a test case for a possible falsification of general relativity. However, a more recent research article shows that the 2009 study leaves many questions unanswered regarding the solution for the axes. For example, orbital effects caused by the tilting of the axes have not been observed; also, the stars' rotation axes themselves may also be precessing.
Journal references
References
External links
DI Herculis – Ein ungewöhnlich exzentrisches Algolsystem; Astronomische Nachrichten, volume 265, p.101, 1938
Algol variables
Eclipsing binaries
Hercules (constellation)
B-type giants
Herculis, DI
175227
92708
BD+24 3568
Tests of general relativity | DI Herculis | [
"Astronomy"
] | 661 | [
"Hercules (constellation)",
"Constellations"
] |
32,954,209 | https://en.wikipedia.org/wiki/Guanylate-binding%20protein | In molecular biology, the guanylate-binding proteins family is a family of GTPases that is induced by interferon (IFN)-gamma. GTPases induced by IFN-gamma (Interferon-inducible GTPase) are key to the protective immunity against microbial and viral pathogens. These GTPases are classified into three groups: the small 47-KD immunity-related GTPases (IRGs), the Mx proteins (MX1, MX2), and the large 65- to 67-kd GTPases. Guanylate-binding proteins (GBP) fall into the last class.
Genetic Information
GBP genes have been identified in mammalian genomes as well as in most other vertebrate genomes. A single cluster of seven human GBP genes (GBP1-GBP7) is found on chromosome 1q22.2. Unlike in humans, in genetically tractable disease models such as mice and zebrafish the members of the GBP gene family are organized in more than one cluster, comprising 11 genes (Gbp2b-Gbp11) and 4 genes (gbp1-gbp4), respectively. Examination of GBP-related sequences has shown that zebrafish gbp3 and gbp4 contain additional function-to-find (FIIND) and caspase recruitment (CARD) domains that resemble those found in the inflammasome-related proteins apoptosis-associated speck-like protein containing a CARD (PYCARD) and NLR family pyrin domain containing 1 (NLRP1).
Structure
Structurally, GBPs consist of two domains: a globular N-terminal domain harboring the GTPase function, and an extended C-terminal helical domain. In addition, some members of the GBP family harbor motifs (e.g., CaaX motifs) or additional domains that are thought to operate in protein-protein or protein-membrane interactions.
Activity
Some GBPs can bind and hydrolyze not only guanosine triphosphate (GTP), producing guanosine diphosphate (GDP), but also GDP, producing guanosine monophosphate (GMP), with equimolar affinity and high intrinsic rates of hydrolysis. The physiological relevance of the GBPs' GDPase activity might yield important insights for elucidating the GBP-specific defensive profile relative to other IFN-induced GTPases (e.g. IRGs). Evidence suggests that GBPs are important players in a variety of disease conditions ranging from infectious and metabolic inflammatory diseases to cancer.
In the context of cell protection against bacteria, early loss-of-function studies revealed reduced host resistance to several pathogens when GBPs were lacking. More recent studies have indicated that GBPs disturb the structural integrity of bacteria, stimulate inflammasome signaling, form complexes on pathogen-containing vesicles in infected cells, and foster autophagy and oxidative mechanisms that help clear pathogens.
Human GBP1 is secreted from cells without the need of a leader peptide, and has been shown to exhibit antiviral activity against Vesicular stomatitis virus and Encephalomyocarditis virus, as well as being able to regulate the inhibition of proliferation and invasion of endothelial cells in response to IFN-gamma.
GBP1, the most widely studied GBP, has been studied for its antimicrobial properties. It can effectively polymerize and target the lipopolysaccharide cell wall of gram-negative bacteria. In said bacteria type, the GBP1 polymer coating alters the lipopolysaccharide membrane, allowing access to other parts of the membrane by other innate antimicrobial agents within the cell to cause pathogen cell death. Besides detecting pathogens and causing bacterial cell lysis, GBP1 can also cause host-programmed cell death.
The GBP family of proteins is highly conserved among many different phyla. GBPs are believed to be a shared gene family used to fight off mostly viral, parasitic, and bacterial infections. Accordingly, the expression of GBPs is noted to increase in humans once the body detects many different types of disease, ranging from the infections listed above to cancer. Because of the similarity between murine and human GBPs, mouse knockout studies have been used to investigate the different roles GBPs play in fighting off other diseases. These studies have confirmed that knocking out different GBPs has different effects on combating different infections.
References
Protein domains | Guanylate-binding protein | [
"Biology"
] | 956 | [
"Protein domains",
"Protein classification"
] |
32,959,753 | https://en.wikipedia.org/wiki/HVTN%20505 | HVTN 505 is a clinical trial testing an HIV vaccine regimen on research participants. The trial is conducted by the HIV Vaccine Trials Network and sponsored by the National Institute of Allergy and Infectious Diseases. Vaccinations were stopped in April 2013 due to initial results showing that the vaccine was ineffective in preventing HIV infections and lowering viral load among those participants who had become infected with HIV. All study participants will continue to be monitored for safety and any long-term effects.
Organizers
The study is sponsored by the National Institute of Allergy and Infectious Diseases (NIAID) and the HIV Vaccine Trials Network (HVTN) is conducting the trial. The Vaccine Research Center (VRC) developed the vaccines being researched in the trial. The research sites were in the following places:
Annandale, Virginia
Atlanta
Aurora, Colorado
Bethesda, Maryland
Birmingham, Alabama
Boston
Chicago
Cleveland
Dallas
Decatur, Georgia
Houston
Los Angeles
Orlando
Nashville
New York City
Philadelphia
Rochester, New York
San Francisco
Seattle
Purpose
HVTN 505 is being conducted to determine the safety and efficacy of a Vaccine Research Center DNA/rAd5 vaccine regimen in healthy males and male-to-female transgender persons who have sex with men. All participants must be fully circumcised, and must have no evidence of previous infection with Adenovirus 5, which is a common virus that causes colds and respiratory infections. Potential participants were tested for antibodies to Adenovirus 5 as part of the screening process to determine their eligibility.
When the study began, the primary outcome being measured was whether the vaccine decreased the viral load, which is the amount of HIV in the blood of study participants who received the vaccine and later became infected with HIV. At that time, researchers stated that the vaccine was very unlikely to provide any protection from HIV infection. In August 2011, because of new data from other clinical trials, NIAID shifted the focus of the study to determine whether vaccination was also able to prevent HIV infection. As a result of this change to the research questions, NIAID also announced an expansion of the desired enrollment to a total of 2,200 participants. The study was further expanded to 2,500 participants in 2012 to ensure that there would be enough data to meaningfully answer the research questions.
Study design
HVTN 505 is a phase IIB, randomized, placebo-controlled, double-blind clinical trial. The original 2009 design was for 1,350 volunteers to participate and for half to get the experimental vaccine and half to get placebo. The study's enrollment target was expanded to 2,200 in 2011 to gather additional data which would allow researchers to determine the extent to which the vaccine regimen also protected against infection. When the vaccinations were stopped on April 23, 2013, the study had enrolled 2,504 volunteers at 21 sites in 19 cities in the United States.
Volunteers wanting to join the trial had to meet the following criteria: must be a man who has sex with men or a trans woman (with or without sex reassignment surgery) who has sex with men, between 18 and 50 years old, HIV negative, fully circumcised, and without detectable antibodies to adenovirus type 5 (which would mean that the person had no evidence of prior adenovirus type 5 infection). The criteria about circumcision and adenovirus antibodies were added as a precaution in light of the results of the prior STEP study. In STEP, uncircumcised men with Ad 5 antibodies contracted HIV more often than the control group, and HVTN 505 researchers responded by only recruiting circumcised men with no Ad 5 antibodies.
The study regimen started with a set of three immunizations over eight weeks. These three injections were of a DNA vaccine intended to prime the immune system; this vaccine contained genetic material artificially modeled after, but not containing or derived from, surface and internal structures of HIV. Twenty-four weeks (six months) after a volunteer began the study regimen, that person received a single injection of the second study vaccine, a recombinant adenovirus 5 (rAd5) live-vector vaccine carrying artificial genetic material matching HIV antigens of the three major HIV subtypes.
Vaccinations stopped
On 22 April 2013 the independent data safety monitoring board (DSMB) conducted a scheduled interim review of unblinded data from the study. They concluded that the vaccine regimen had met the definitions for futility that were stated in the study protocol. As a result, they recommend that researchers should no longer administer any study injections and the HVTN and NIAID agreed. Vaccinations were halted the following day, April 23, 2013. In addition, HVTN and NIAID felt that the participants should be told whether they had received the experimental vaccine regimen (unblinded), and the study sites began contacting participants on April 26 to provide this information.
The DSMB review also noted that more persons who received the vaccine became infected than in the control group: 41 among the vaccinated and 30 among the placebo recipients. Considering only participants who were diagnosed after having been in the study for 28 weeks, the time required for the vaccine regimen to reach its potential, the vaccine group had 27 HIV infections compared to 21 in the placebo group. These differences are not statistically significant, but all participants were asked to remain in the study for the full time planned so that researchers could monitor their safety and continue to learn as much as possible.
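The statement that these differences are not statistically significant can be illustrated with a standard two-by-two test on the reported counts. The arm sizes used below (roughly 1,252 per arm, i.e. half of the 2,504 enrolled) are an assumption rather than a reported figure, so this is only a sketch of the kind of check involved, not the trial's actual statistical analysis.

```python
# Fisher's exact test on the reported infection counts (41 vaccine vs 30
# placebo), assuming approximately equal arm sizes. Illustrative only.
from scipy.stats import fisher_exact

vaccine_infected, placebo_infected = 41, 30
per_arm = 2504 // 2   # assumed 1:1 randomization

table = [
    [vaccine_infected, per_arm - vaccine_infected],
    [placebo_infected, per_arm - placebo_infected],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio ≈ {odds_ratio:.2f}, two-sided p ≈ {p_value:.2f}")
# A p-value well above 0.05 would be consistent with "not statistically significant".
```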
Reaction
When the results were announced, persons who called the result disappointing include Anthony Fauci, Mitchell Warren of the AIDS Vaccine Advocacy Coalition, and HVTN 505 study leader Scott Hammer.
One of the participants, Josh Robbins, who became infected during the study, reported that he was happy to have been in the study because it allowed him to be diagnosed and receive treatment more quickly than is typical. He also released a statement saying that this was just a finding, not a failure.
Some media organizations speculated that a possible cause for the failure in the study is its use of an adenovirus 5 booster, which was also used in an earlier trial called the STEP study. In the STEP study, participants who received the vaccine contracted HIV at a rate that was significantly higher than placebo recipients. In the HVTN 505 study, participants who received the vaccine contracted HIV at a rate that was slightly (i.e. not statistically significantly) higher than those in the placebo arm of the study. The vaccines used in HVTN 505 are different from the ones used in the Step Study. Additionally, the studies were done with different populations and in different geographical areas.
Scientific results
Even though the vaccines did not work the way the study designers hoped, valuable scientific information continues to be gleaned from the data and specimens gathered as part of this study. For example, people with one particularly strong type of immune response to the vaccines seemed to be less likely to become HIV infected than those who did not have this kind of immune response. This suggests that the vaccines did have some effect, even though they did not prevent HIV infection overall.
The raw datasets used in published analysis from HVTN 505 are now publicly available. These include data used in the original efficacy analysis, as well as recent studies describing the effects of the vaccines used on the immune responses of participants.
References
External links
official page at ClinicalTrials.gov
HVTN's 505 flyer
a page of Frequently Asked Questions about the study
HIV vaccine research
Clinical trials related to HIV
Clinical trials sponsored by NIAID
2010s in science | HVTN 505 | [
"Chemistry"
] | 1,576 | [
"HIV vaccine research",
"Drug discovery"
] |
32,960,675 | https://en.wikipedia.org/wiki/Laser-assisted%20water%20condensation | Laser-assisted water condensation is an experimental technique for artificially causing rainfall. This technique was developed in 2011 by scientists from the University of Geneva. It is related to cloud seeding.
The technique works by using laser pulses to create nitric acid particles in the clouds, which act as nuclei for water condensation.
References
Weather modification | Laser-assisted water condensation | [
"Engineering"
] | 64 | [
"Planetary engineering",
"Weather modification"
] |
5,133,160 | https://en.wikipedia.org/wiki/Glucosepane | Glucosepane is a lysine-arginine protein cross-linking product and advanced glycation end product (AGE) derived from D-glucose. It is an irreversible, covalent cross-link product that has been found to make intermolecular and intramolecular cross-links in the collagen of the extracellular matrix (ECM) and crystallin of the eyes. Covalent protein cross-links irreversibly link proteins together in the ECM of tissues. Glucosepane is present in human tissues at levels 10 to 1000 times higher than any other cross-linking AGE, and is currently considered to be the most important cross-linking AGE.
Role in aging
Aging leads to progressive loss of elasticity and stiffening of tissues rich in the ECM such as joints, cartilage, arteries, lungs and skin. It has been shown that these effects are brought about by the accumulation of cross-links in the ECM on long-lived proteins. Studies done on glucosepane by the Monnier group have shown that the level of glucosepane cross-links in human collagen in the ECM increases progressively with age and at a more rapid pace in people with diabetes, thus suggesting the role of glucosepane in the long-term effects associated with diabetes and aging such as arteriosclerosis, joint stiffening and skin wrinkling. In fact, they report that in the ECM of the skin of a non-diabetic 90-year-old, glucosepane accounts for about 50 times the protein cross-linking as all other forms of protein cross-linking. Further, the build up of cross-links such as glucosepane within and between proteins is shown to reduce proteolytic degradation in the ECM. This leads to increased cross-link accumulation and is thought to be linked to the thickening of basement membranes in capillaries, glomeruli, lens, and lungs.
Atomic-force microscopy experiments identified nanoscale morphologic differences in collagen fibril structures as a function of ageing in skin. A decrease in the Young's modulus of the transverse fibril was observed. These changes are thought to be due to the accumulation of glucosepane in tissue. It is proposed that this is due to a change in the fibril density caused by age-related differences in water retention. Computational studies using all-atom simulations revealed that glucosepane results in a less tightly held helical structure in the collagen molecule and increased porosity to water. This was confirmed with water content measurements that showed higher content in Achilles and anterior tibialis tendon tissue from older individuals compared to young people.
Formation
As an AGE, the reaction pathway that leads to glucosepane formation is known as the Maillard Reaction, or non-enzymatic browning. Glucosepane is found to form through a non-oxidative path. The exact mechanism leading to glucosepane has been a challenge for researchers to determine. However, it is currently well characterized up to the ring formation.
The formation of glucosepane within connective tissues has been shown to be site-specific. For example, studies using Molecular Dynamics simulations of a complete collagen fibril revealed energetically favourable locations, particularly within the collagen fibril gap-region. This may be due to the lower protein density and higher intra-fibrillar water content within the gap-region.
Overall reaction pathway
The overall pathway of glucosepane formation starts with lysine attacking the reducing sugar D-glucose to form the unstable imine known as a Schiff base, which then rearranges to form the more stable aminoketose Amadori product. From there, the stable Amadori Product slowly degrades to form glucosepane through an α-dicarbonyl intermediate.
Mechanism of α-dicarbonyl formation from the Amadori product
The particular reaction path proceeding from the Amadori product to the α-dicarbonyl intermediate that will yield glucosepane was difficult to determine. Initially, researchers hypothesized an α-dicarbonyl intermediate in which the carbonyls were located on C-2 and C-3 of D-glucose. However, by using glucose with C-1, the carbonyl carbon, labelled with the isotope 13C in the reaction, researchers found that the α-dicarbonyl formed has the carbonyls located at C-5 and C-6 of the original glucose backbone. The best proposed mechanism is that the α-dicarbonyl N6-(2,3-dihydroxy-5,6-dioxohexyl)-L-lysinate, a key intermediate in the glucosepane reaction, forms from the Amadori product through a carbonyl shift all the way down the six-carbon sugar backbone by keto-enol tautomerism and the elimination of the C-4 hydroxyl. Further evidence for the extent of the hypothesized carbonyl shift was obtained by using heavy hydrogen in the solvent water, D2O: researchers found that all the H-C-OH positions of the carbon backbone were converted to D-C-OH after the reaction, demonstrating that all of these hydrogens were exchanged through keto-enol tautomerism, and thus that the carbonyl shift went all the way down the backbone, finally eliminating the C-4 hydroxy group.
Ring closure to arginine cross-linking
It is still relatively unclear how and when the ring is formed. One article suggests, and it seems to be the current belief, that the ring must form in the step after the α-dicarbonyl is formed. The study hypothesized, and another found corroborating evidence, that the most likely route from the α-dicarbonyl to glucosepane is through the intramolecular aldimine 6-(3,4-dihydroxy-6-oxo-3,4,5,6-tetrahydro-2H-azepinium-1-yl)norleucine. The ring is hypothesized to form by nucleophilic attack of the lysine nitrogen on the C-6 carbonyl, followed by elimination of water. This intermediate then condenses with the arginine side chain to yield glucosepane, through nucleophilic addition-elimination reactions of the arginine nitrogens with the electrophilic carbonyls on the ring, eliminating two further molecules of water.
Accumulation
Glycation processes that lead to AGEs particularly affect long-lived proteins in the human body, such as collagen in the skin and crystallin in the eyes. Skin collagen, for instance, has a half-life of fifteen years. Because these proteins do not degrade as quickly as other proteins in the body, the Amadori product, which is stable and thus transforms very slowly, has enough time to convert into glucosepane. It has been estimated that 50-60% of the steady-state level of Amadori product is converted into glucosepane in old age. A suspected reason for the prevalence of the glucosepane cross-link product as opposed to others is that the α-dicarbonyl from which it forms, N6-(2,3-dihydroxy-5,6-dioxohexyl)-L-lysinate, is a persistent glycating agent because it is irreversibly bound through lysine to a protein. It is therefore not easily degraded and thus is more commonly available to form a cross-link with arginine, unlike other cross-link α-dicarbonyl intermediates, which occur both bound and free and so are more susceptible to degradation by enzymes in the ECM.
Prospects for inhibition or removal
Because of the important role glucosepane has been found to play in many pathologies of aging, many researchers have been investigating ways in which the levels of glucosepane could be reduced in tissues. Various methods of doing so have been examined.
α-Dicarbonyl trap
One method attempted for inhibiting glucosepane formation is the use of an α-dicarbonyl trap molecule, aminoguanidine (AG). AG reacts with the α-dicarbonyl intermediate with a higher affinity than arginine, thus blocking the cross-link. While this method has shown some success, it did not greatly interfere with the normal aging of rats.
Thiazolium salts
Another method that has been investigated is the use of thiazolium salts to break the α-dicarbonyl intermediate, therefore cutting off the reaction pathway that leads to glucosepane. These compounds are thought to act as bidentate nucleophiles that attack the adjacent carbonyls in the alpha-dicarbonyl intermediate, which then leads to the cleaving of the C-C bond between the carbonyls. However, an alternate hypothesis as to how they work is that they act as chelating agents. Two thiazolium molecules, PTB (N-phenacylthiazolium bromide) and ALT-711, have demonstrated success at reducing glucosepane levels in rats.
ECM turnover
A completely different approach that has been proposed for reducing cross-links is enhancing ECM turnover processes, which would force the degradation of cross-linked proteins and replace them with new ones. A potential downside, however, would be leaky blood vessels resulting from excessively enhanced turnover.
See also
Glycation
Glycosylation
Advanced glycation end product
Maillard Reaction
References
Post-translational modification | Glucosepane | [
"Chemistry"
] | 2,002 | [
"Post-translational modification",
"Gene expression",
"Biochemical reactions"
] |
5,133,456 | https://en.wikipedia.org/wiki/Haven%20%28graph%20theory%29 | In graph theory, a haven is a certain type of function on sets of vertices in an undirected graph. If a haven exists, it can be used by an evader to win a pursuit–evasion game on the graph, by consulting the function at each step of the game to determine a safe set of vertices to move into. Havens were first introduced by as a tool for characterizing the treewidth of graphs. Their other applications include proving the existence of small separators on minor-closed families of graphs, and characterizing the ends and clique minors of infinite graphs.
Definition
If G is an undirected graph, and X is a set of vertices, then an X-flap is a nonempty connected component of the subgraph of G formed by deleting X. A haven of order k in G is a function β that assigns an X-flap β(X) to every set X of fewer than k vertices. This function must also satisfy additional constraints which are given differently by different authors.
The number k is called the order of the haven.
In the original definition of Seymour and Thomas, a haven is required to satisfy the property that every two flaps β(X) and β(Y) must touch each other: either they share a common vertex or there exists an edge with one endpoint in each flap. In the definition used later by Alon, Seymour, and Thomas, havens are instead required to satisfy a weaker monotonicity property: if X ⊆ Y, and both X and Y have fewer than k vertices, then β(Y) ⊆ β(X). The touching property implies the monotonicity property, but not necessarily vice versa. However, it follows from the results of Seymour and Thomas that, in finite graphs, if a haven with the monotonicity property exists, then one with the same order and the touching property also exists.
Havens with the touching definition are closely related to brambles, families of connected subgraphs of a given graph that all touch each other. The order of a bramble is the minimum number of vertices needed in a set of vertices that hits all of the subgraphs in the family. The set of flaps β(X) for a haven of order k (with the touching definition) forms a bramble of order at least k, because any set X of fewer than k vertices fails to hit the subgraph β(X). Conversely, from any bramble of order k, one may construct a haven of the same order, by defining β(X) (for each choice of X) to be the X-flap that includes all of the subgraphs in the bramble that are disjoint from X. The requirement that the subgraphs in the bramble all touch each other can be used to show that this X-flap exists, and that all of the flaps β(X) chosen in this way touch each other. Thus, a graph has a bramble of order k if and only if it has a haven of order k.
Example
As an example, let G be a nine-vertex grid graph (a 3 × 3 grid). Define a haven of order 4 in G, mapping each set X of three or fewer vertices to an X-flap β(X), as follows:
If there is a unique X-flap that is larger than any of the other X-flaps, let β(X) be that unique large X-flap.
Otherwise, choose β(X) arbitrarily to be any X-flap.
It is straightforward to verify by a case analysis that this function β satisfies the required monotonicity property of a haven. If X ⊆ Y and X has fewer than two vertices, or X has two vertices that are not the two neighbors of a corner vertex of the grid, then there is only one X-flap and it contains every Y-flap. In the remaining case, X consists of the two neighbors of a corner vertex and has two X-flaps: one consisting of that corner vertex, and another (chosen as β(X)) consisting of the six remaining vertices. No matter which vertex is added to X to form Y, there will be a Y-flap with at least four vertices, which must be the unique largest flap since it contains more than half of the vertices not in Y. This large Y-flap will be chosen as β(Y) and will be a subset of β(X). Thus in each case monotonicity holds.
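The case analysis above can also be checked mechanically. The sketch below is illustrative only: the grid representation and the tie-breaking rule are implementation choices, not part of the definition. It builds the 3 × 3 grid, defines β(X) as a largest X-flap, and tests the monotonicity property over all pairs X ⊆ Y of fewer than four vertices.

```python
# Verify the order-4 haven on the 3x3 grid described above.
from itertools import combinations

vertices = [(r, c) for r in range(3) for c in range(3)]

def neighbours(v):
    r, c = v
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [w for w in candidates if w in vertices]

def flaps(X):
    """Connected components (as frozensets) of the grid with X deleted."""
    remaining, components = set(vertices) - set(X), []
    while remaining:
        stack, comp = [next(iter(remaining))], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for w in neighbours(v) if w in remaining and w not in comp)
        remaining -= comp
        components.append(frozenset(comp))
    return components

def beta(X):
    # The unique largest X-flap if one exists; ties are broken arbitrarily.
    return max(flaps(X), key=len)

subsets = [X for k in range(4) for X in combinations(vertices, k)]
violations = [(X, Y) for X in subsets for Y in subsets
              if set(X) <= set(Y) and not beta(Y) <= beta(X)]
print("monotonicity violations:", len(violations))   # expected: 0
```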
Pursuit–evasion
Havens model a certain class of strategies for an evader in a pursuit–evasion game in which fewer than k pursuers attempt to capture a single evader, the pursuers and evader are both restricted to the vertices of a given undirected graph, and the positions of the pursuers and evader are known to both players. At each move of the game, a new pursuer may be added to an arbitrary vertex of the graph (as long as fewer than k pursuers are placed on the graph at any time) or one of the already-added pursuers may be removed from the graph. However, before a new pursuer is added, the evader is first informed of its new location and may move along the edges of the graph to any unoccupied vertex. While moving, the evader may not pass through any vertex that is already occupied by any of the pursuers.
If a haven of order k (with the monotonicity property) exists, then the evader may avoid being captured indefinitely, and win the game, by always moving to a vertex of β(X), where X is the set of vertices that will be occupied by pursuers at the end of the move. The monotonicity property of a haven guarantees that, when a new pursuer is added to a vertex of the graph, the vertices in β(X) are always reachable from the current position of the evader.
For instance, an evader can win this game against three pursuers on a grid by following this strategy with the haven of order 4 described in the example. However, on the same graph, four pursuers can always capture the evader, by first moving onto three vertices that split the grid into two three-vertex paths, then moving into the center of the path containing the evader, forcing the evader into one of the corner vertices, and finally removing one of the pursuers that is not adjacent to this corner and placing it onto the evader. Therefore, the grid can have no haven of order 5.
Havens with the touching property allow the evader to win the game against more powerful pursuers that may simultaneously jump from one set of occupied vertices to another.
Connections to treewidth, separators, and minors
Havens may be used to characterize the treewidth of graphs: a graph has a haven of order k if and only if it has treewidth at least k − 1. A tree decomposition may be used to describe a winning strategy for the pursuers in the same pursuit–evasion game, so it is also true that a graph has a haven of order k if and only if the evader wins with best play against fewer than k pursuers. In games won by the evader, there is always an optimal strategy in the form described by a haven, and in games won by the pursuer, there is always an optimal strategy in the form described by a tree decomposition. For instance, because the grid has a haven of order 4, but does not have a haven of order 5, it must have treewidth exactly 3. The same min-max theorem can be generalized to infinite graphs of finite treewidth, with a definition of treewidth in which the underlying tree is required to be rayless (that is, having no ends).
Havens are also closely related to the existence of separators, small sets X of vertices in an n-vertex graph such that every X-flap has at most n/2 vertices. If a graph G does not have a k-vertex separator, then every set X of at most k vertices has a (unique) X-flap with more than n/2 vertices. In this case, G has a haven of order k + 1, in which β(X) is defined to be this unique large X-flap. That is, every graph has either a small separator or a haven of high order.
If a graph G has a haven of order k, with k ≥ h^{3/2}n^{1/2} for some integer h, then G must also have a complete graph K_h as a minor. In other words, the Hadwiger number of an n-vertex graph with a haven of order k is at least k^{2/3}n^{−1/3}. As a consequence, the K_h-minor-free graphs have treewidth less than h^{3/2}n^{1/2} and separators of size less than h^{3/2}n^{1/2}. More generally, an O(√n) bound on treewidth and separator size holds for any nontrivial family of graphs that can be characterized by forbidden minors, because for any such family there is a constant h such that the family does not include K_h.
In infinite graphs
If a graph G contains a ray, a semi-infinite simple path with a starting vertex but no ending vertex, then it has a haven of order ℵ0: that is, a function β that maps each finite set X of vertices to an X-flap, satisfying the consistency condition for havens. Namely, define β(X) to be the unique X-flap that contains infinitely many vertices of the ray. Thus, in the case of infinite graphs the connection between treewidth and havens breaks down: a single ray, despite itself being a tree, has havens of all finite orders and, even more strongly, a haven of order ℵ0. Two rays of an infinite graph are considered to be equivalent if there is no finite set of vertices that separates infinitely many vertices of one ray from infinitely many vertices of the other ray; this is an equivalence relation, and its equivalence classes are called ends of the graph.
The ends of any graph are in one-to-one correspondence with its havens of order ℵ0. For, every ray determines a haven, and every two equivalent rays determine the same haven. Conversely, every haven of order ℵ0 is determined by a ray in this way, as can be shown by the following case analysis:
If the haven β has the property that the intersection S = ⋂ β(X) (where the intersection ranges over all finite sets X) is itself an infinite set, then every finite simple path that ends in a vertex of S can be extended to reach an additional vertex of S, and repeating this extension process produces a ray passing through infinitely many vertices of S. This ray determines the given haven.
On the other hand, if S is finite, then (by working in the subgraph formed by deleting S) it can be assumed to be empty. In this case, for each finite set X of vertices there is a finite set Y with the property that β(Y) is disjoint from X. If a robber follows the evasion strategy determined by the haven, and the police follow a strategy given by this sequence of sets, then the path followed by the robber forms a ray that determines the haven.
Thus, every equivalence class of rays defines a unique haven, and every haven is defined by an equivalence class of rays.
For any uncountable cardinal number κ, an infinite graph G has a haven of order κ if and only if it has a clique minor of order κ. That is, for uncountable cardinalities, the largest order of a haven in G is the Hadwiger number of G.
References
Graph theory objects
Graph minor theory
Pursuit–evasion | Haven (graph theory) | [
"Mathematics"
] | 2,187 | [
"Mathematical relations",
"Graph minor theory",
"Graph theory",
"Graph theory objects"
] |
5,135,155 | https://en.wikipedia.org/wiki/High-ozone%20shock%20treatment | High ozone shock treatment or ozone blasting is a process for removing unwanted odour, and killing mold, vermin and microorganisms in commercial and residential buildings. The treatment is less expensive than some alternative methods of sterilizing indoor spaces - cleaning or removal of building material, or in extreme cases the abandonment of sick buildings.
Process
High ozone shock treatment involves using an ozone generator with a timer to create lethal levels of ozone in an enclosed odour ridden or mold-affected room or building for a short period of time, between one and several hours. For safety reasons, the affected area must be evacuated of people, animals and live plants for the duration of the exposure, and for a long enough period afterwards to allow the ozone to dissipate.
Results
Exposure to high levels of ozone kills living organisms and weakens odours.
By killing microorganisms and mold, ozone treatment slows ripening and reduces spoilage of stored fruit.
Concerns
Critics point to a 1997 study which found exposure to high levels of ozone ineffective at mold decontamination, and to the lack of studies showing high ozone shock treatment to be effective. They also point out that killing mold inside walls does not remove the mold, and that dead mold may continue to have adverse health effects on building inhabitants.
The Federal Trade Commission ruled in 1996 against a manufacturer of an ozone generator, ordering it to cease representing its product's ability to "eliminate, remove, clean or clear any indoor air pollutant from a user's environment".
Ozone is a powerful oxidizing agent which could damage rubber and other materials, and ozone reactions with other material present in buildings could lead to increased levels of noxious chemicals such as formaldehyde.
References
"Ozone and Mold", Jim Holland
Ozone
Building biology
Indoor air pollution | High-ozone shock treatment | [
"Chemistry",
"Engineering"
] | 365 | [
"Ozone",
"Building biology",
"Oxidizing agents",
"Building engineering"
] |
5,135,754 | https://en.wikipedia.org/wiki/Fej%C3%A9r%27s%20theorem | In mathematics, Fejér's theorem, named after Hungarian mathematician Lipót Fejér, states the following:
Explanation of Fejér's Theorem
Explicitly, we can write the Fourier series of f as
$$f(x) = \sum_{n=-\infty}^{\infty} c_n \, e^{inx},$$
where the nth partial sum of the Fourier series of f may be written as
$$s_n(f,x) = \sum_{k=-n}^{n} c_k \, e^{ikx},$$
where the Fourier coefficients $c_k$ are
$$c_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\, e^{-ikt}\, dt.$$
Then, we can define
$$\sigma_n(f,x) = \frac{1}{n}\sum_{k=0}^{n-1} s_k(f,x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-t)\, F_n(t)\, dt,$$
with $F_n$ being the nth order Fejér kernel.
Then, Fejér's theorem asserts that
$$\sigma_n(f) \to f$$
with uniform convergence. With the convergence written out explicitly, the above statement becomes
$$\sup_{x \in [-\pi,\pi]} \left| \sigma_n(f,x) - f(x) \right| \to 0 \quad \text{as } n \to \infty.$$
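The uniform convergence asserted here can be observed numerically. The sketch below is illustrative only: it approximates the Fourier coefficients of the continuous, 2π-periodic test function f(x) = |x| by a Riemann sum on a uniform grid and prints the sup-norm error of the Cesàro mean σ_n, which should shrink as n grows.

```python
# Numerical illustration of Fejér's theorem for f(x) = |x| on [-pi, pi).
import numpy as np

xs = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
f = np.abs(xs)

def fourier_coeff(k):
    # c_k = (1/2pi) * integral of f(t) e^{-ikt} dt, approximated by a mean.
    return np.mean(f * np.exp(-1j * k * xs))

def partial_sum(n):
    # s_n(f, x) = sum over k = -n..n of c_k e^{ikx}
    return sum(fourier_coeff(k) * np.exp(1j * k * xs) for k in range(-n, n + 1))

def cesaro_mean(n):
    # sigma_n(f, x) = (s_0 + s_1 + ... + s_{n-1}) / n
    return sum(partial_sum(k) for k in range(n)) / n

for n in (2, 8, 32):
    err = np.max(np.abs(cesaro_mean(n).real - f))
    print(f"n = {n:3d}:  sup |sigma_n(f, x) - f(x)| ≈ {err:.4f}")
```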
Proof of Fejér's Theorem
We first prove the following lemma:
Lemma 1. For every x,
$$s_n(f,x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-t)\, D_n(t)\, dt,$$
where $D_n$ denotes the nth Dirichlet kernel.
Proof: Recall the definition of $D_n(x)$, the Dirichlet kernel:
$$D_n(x) = \sum_{k=-n}^{n} e^{ikx}.$$
We substitute the integral form of the Fourier coefficients into the formula for $s_n(f,x)$ above:
$$s_n(f,x) = \sum_{k=-n}^{n} c_k e^{ikx} = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\sum_{k=-n}^{n} e^{ik(x-t)}\, dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\, D_n(x-t)\, dt.$$
Using the change of variables $t \mapsto x - t$ (and the 2π-periodicity of the integrand) we get
$$s_n(f,x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-t)\, D_n(t)\, dt.$$
This completes the proof of Lemma 1.
We next prove the following lemma:
Lemma 2. For every x,
$$\sigma_n(f,x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-t)\, F_n(t)\, dt,$$
where $F_n$ denotes the nth Fejér kernel.
Proof: Recall the definition of the Fejér kernel $F_n(x)$:
$$F_n(x) = \frac{1}{n}\sum_{k=0}^{n-1} D_k(x).$$
As in the case of Lemma 1, we substitute the integral form of the Fourier coefficients into the formula for $\sigma_n(f,x)$:
$$\sigma_n(f,x) = \frac{1}{n}\sum_{k=0}^{n-1} s_k(f,x) = \frac{1}{n}\sum_{k=0}^{n-1}\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-t)\, D_k(t)\, dt = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-t)\, F_n(t)\, dt.$$
This completes the proof of Lemma 2.
We next prove the third lemma:
Lemma 3. The Fejér kernel has the following properties:
a) $\displaystyle \frac{1}{2\pi}\int_{-\pi}^{\pi} F_n(x)\, dx = 1$
b) $\displaystyle F_n(x) = \frac{1}{n}\left(\frac{\sin(nx/2)}{\sin(x/2)}\right)^{2} \ge 0$
c) for all fixed $\delta > 0$, $\displaystyle \int_{\delta \le |x| \le \pi} F_n(x)\, dx \to 0$ as $n \to \infty$.
Proof: a) Given that $F_n$ is the mean of $D_0, D_1, \dots, D_{n-1}$, the normalized integral $\frac{1}{2\pi}\int_{-\pi}^{\pi} D_k(x)\, dx$ of each of which is 1, by linearity the normalized integral of $F_n$ is also equal to 1.
b) As $D_k(x)$ is a geometric sum, we get a simple formula for $D_k(x)$ and then for $F_n(x)$, using De Moivre's formula:
$$D_k(x) = \frac{\sin\big((k+\tfrac12)x\big)}{\sin(x/2)}, \qquad F_n(x) = \frac{1}{n}\left(\frac{\sin(nx/2)}{\sin(x/2)}\right)^{2} \ge 0.$$
c) For all fixed $\delta > 0$, since $\sin^2(x/2) \ge \sin^2(\delta/2)$ on the region $\delta \le |x| \le \pi$,
$$\int_{\delta \le |x| \le \pi} F_n(x)\, dx \;\le\; \frac{2\pi}{n\,\sin^{2}(\delta/2)}.$$
This shows that the integral converges to zero as n goes to infinity.
This completes the proof of Lemma 3.
We are now ready to prove Fejér's Theorem. First, let us recall the statement we are trying to prove:
$$\lim_{n\to\infty}\; \sup_{x\in[-\pi,\pi]} \left|\sigma_n(f,x) - f(x)\right| = 0.$$
We want to find an expression for $\left|\sigma_n(f,x) - f(x)\right|$. We begin by invoking Lemma 2:
$$\sigma_n(f,x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x-t)\, F_n(t)\, dt.$$
By Lemma 3a we know that
$$\sigma_n(f,x) - f(x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \big(f(x-t) - f(x)\big)\, F_n(t)\, dt.$$
Applying the triangle inequality yields
$$\left|\sigma_n(f,x) - f(x)\right| \le \frac{1}{2\pi}\int_{-\pi}^{\pi} \left|f(x-t) - f(x)\right|\left|F_n(t)\right|\, dt,$$
and by Lemma 3b, we get
$$\left|\sigma_n(f,x) - f(x)\right| \le \frac{1}{2\pi}\int_{-\pi}^{\pi} \left|f(x-t) - f(x)\right| F_n(t)\, dt.$$
We now split the integral into two parts, integrating over the two regions $|t| \le \delta$ and $\delta \le |t| \le \pi$:
$$\frac{1}{2\pi}\int_{|t|\le\delta} \left|f(x-t)-f(x)\right| F_n(t)\, dt \;+\; \frac{1}{2\pi}\int_{\delta\le|t|\le\pi} \left|f(x-t)-f(x)\right| F_n(t)\, dt.$$
The motivation for doing so is that we want to prove that $\lim_{n\to\infty}\sup_x \left|\sigma_n(f,x)-f(x)\right| = 0$. We can do this by showing that the first integral can be made arbitrarily small (uniformly in x and n) and that the second integral goes to zero as n grows. This is precisely what we will do in the next step.
We first note that the function f is continuous on [−π,π]. We invoke the theorem that every periodic function that is continuous on [−π,π] is also bounded and uniformly continuous. This means that for every $\varepsilon > 0$ there exists $\delta > 0$ such that $\left|f(x-t) - f(x)\right| \le \varepsilon$ for all x and all $|t| \le \delta$. Hence we can bound integral 1 as follows:
$$\frac{1}{2\pi}\int_{|t|\le\delta} \left|f(x-t)-f(x)\right| F_n(t)\, dt \;\le\; \frac{1}{2\pi}\int_{|t|\le\delta} \varepsilon\, F_n(t)\, dt.$$
Because $F_n \ge 0$ and by Lemma 3a, we then get, for all n,
$$\frac{1}{2\pi}\int_{|t|\le\delta} \varepsilon\, F_n(t)\, dt \;\le\; \varepsilon \cdot \frac{1}{2\pi}\int_{-\pi}^{\pi} F_n(t)\, dt \;=\; \varepsilon.$$
This gives the desired bound for integral 1, which we can exploit in the final step.
For integral 2, we note that since f is bounded, say $|f| \le M$, we can write this bound as
$$\frac{1}{2\pi}\int_{\delta\le|t|\le\pi} \left|f(x-t)-f(x)\right| F_n(t)\, dt \;\le\; \frac{2M}{2\pi}\int_{\delta\le|t|\le\pi} F_n(t)\, dt \;=\; \frac{M}{\pi}\int_{\delta\le|t|\le\pi} F_n(t)\, dt.$$
We are now ready to prove that $\lim_{n\to\infty}\sup_x\left|\sigma_n(f,x)-f(x)\right| = 0$. We begin by writing
$$\sup_{x}\left|\sigma_n(f,x)-f(x)\right| \;\le\; \varepsilon \;+\; \frac{M}{\pi}\int_{\delta\le|t|\le\pi} F_n(t)\, dt.$$
By Lemma 3c we know that the integral goes to 0 as n goes to infinity, so
$$\limsup_{n\to\infty}\; \sup_x\left|\sigma_n(f,x)-f(x)\right| \;\le\; \varepsilon,$$
and because $\varepsilon > 0$ is arbitrary, the limit must equal 0. Hence
$$\lim_{n\to\infty}\; \sup_x\left|\sigma_n(f,x)-f(x)\right| = 0,$$
which completes the proof.
Modifications and Generalisations of Fejér's Theorem
In fact, Fejér's theorem can be modified to hold for pointwise convergence.
However, the theorem does not hold in general when we replace the sequence of Cesàro means $\sigma_n(f,x)$ with the sequence of partial sums $s_n(f,x)$. This is because there exist continuous functions whose Fourier series fails to converge at some point. However, the set of points at which the Fourier series of a function in $L^2(-\pi,\pi)$ diverges has to be of measure zero. This fact, called Lusin's conjecture or Carleson's theorem, was proven in 1966 by L. Carleson. We can however prove a corollary relating the two modes of convergence, which goes as follows: if $s_n(f,x) \to c$ as $n \to \infty$ at some point x, then $\sigma_n(f,x) \to c$ as well.
A more general form of the theorem applies to functions which are not necessarily continuous. Suppose that f is in $L^1(-\pi,\pi)$. If the left and right limits $f(x_0 \pm 0)$ of f(x) exist at $x_0$, or if both limits are infinite of the same sign, then
$$\sigma_n(f, x_0) \to \tfrac{1}{2}\big(f(x_0+0) + f(x_0-0)\big).$$
Existence or divergence to infinity of the Cesàro mean is also implied. By a theorem of Marcel Riesz, Fejér's theorem holds precisely as stated if the (C, 1) mean $\sigma_n$ is replaced with the (C, α) mean of the Fourier series, for any α > 0.
References
Fourier series
Theorems in approximation theory | Fejér's theorem | [
"Mathematics"
] | 845 | [
"Theorems in approximation theory",
"Theorems in mathematical analysis"
] |
5,138,563 | https://en.wikipedia.org/wiki/Satisfiability%20modulo%20theories | In computer science and mathematical logic, satisfiability modulo theories (SMT) is the problem of determining whether a mathematical formula is satisfiable. It generalizes the Boolean satisfiability problem (SAT) to more complex formulas involving real numbers, integers, and/or various data structures such as lists, arrays, bit vectors, and strings. The name is derived from the fact that these expressions are interpreted within ("modulo") a certain formal theory in first-order logic with equality (often disallowing quantifiers). SMT solvers are tools that aim to solve the SMT problem for a practical subset of inputs. SMT solvers such as Z3 and cvc5 have been used as a building block for a wide range of applications across computer science, including in automated theorem proving, program analysis, program verification, and software testing.
Since Boolean satisfiability is already NP-complete, the SMT problem is typically NP-hard, and for many theories it is undecidable. Researchers study which theories or subsets of theories lead to a decidable SMT problem and the computational complexity of decidable cases. The resulting decision procedures are often implemented directly in SMT solvers; see, for instance, the decidability of Presburger arithmetic. SMT can be thought of as a constraint satisfaction problem and thus a certain formalized approach to constraint programming.
Terminology and examples
Formally speaking, an SMT instance is a formula in first-order logic, where some function and predicate symbols have additional interpretations, and SMT is the problem of determining whether such a formula is satisfiable. In other words, imagine an instance of the Boolean satisfiability problem (SAT) in which some of the binary variables are replaced by predicates over a suitable set of non-binary variables. A predicate is a binary-valued function of non-binary variables. Example predicates include linear inequalities (e.g., 3x + 2y - z ≥ 4) or equalities involving uninterpreted terms and function symbols (e.g., f(f(u, v), v) = f(u, v), where f is some unspecified function of two arguments). These predicates are classified according to each respective theory assigned. For instance, linear inequalities over real variables are evaluated using the rules of the theory of linear real arithmetic, whereas predicates involving uninterpreted terms and function symbols are evaluated using the rules of the theory of uninterpreted functions with equality (sometimes referred to as the empty theory). Other theories include the theories of arrays and list structures (useful for modeling and verifying computer programs), and the theory of bit vectors (useful in modeling and verifying hardware designs). Subtheories are also possible: for example, difference logic is a sub-theory of linear arithmetic in which each inequality is restricted to have the form x - y ≤ c for variables x and y and constant c.
The examples above show the use of Linear Integer Arithmetic over inequalities. Other examples include:
Satisfiability: Determine if is satisfiable.
Array access: Find a value for array A such that A[0] = 5.
Bit vector arithmetic: Determine if x and y are distinct 3-bit numbers.
Uninterpreted functions: Find values for x and y such that and .
Most SMT solvers support only quantifier-free fragments of their logics.
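As a concrete illustration, the following is a minimal sketch using the Python bindings of the Z3 solver mentioned above; the specific constraints, variable names, and the uninterpreted function f are invented for this example rather than taken from a particular benchmark.

```python
from z3 import Int, Function, IntSort, Solver, sat

x, y = Int('x'), Int('y')
f = Function('f', IntSort(), IntSort(), IntSort())  # uninterpreted 2-argument function

s = Solver()
s.add(3 * x + 2 * y >= 4)      # linear integer arithmetic atom
s.add(f(x, y) == f(y, x))      # constraint on the uninterpreted function
s.add(x != y)

if s.check() == sat:           # satisfiability is decided modulo both theories
    print(s.model())           # one satisfying assignment for x, y and f
```

Each atom is handled by the decision procedure of its own theory, while the Boolean structure connecting the atoms is handled by SAT-style search, which is the combination described in the following sections.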
Relationship to automated theorem proving
There is substantial overlap between SMT solving and automated theorem proving (ATP). Generally, automated theorem provers focus on supporting full first-order logic with quantifiers, whereas SMT solvers focus more on supporting various theories (interpreted predicate symbols). ATPs excel at problems with lots of quantifiers, whereas SMT solvers do well on large problems without quantifiers. The line is blurry enough that some ATPs participate in SMT-COMP, while some SMT solvers participate in CASC.
Expressive power
An SMT instance is a generalization of a Boolean SAT instance in which various sets of variables are replaced by predicates from a variety of underlying theories. SMT formulas provide a much richer modeling language than is possible with Boolean SAT formulas. For example, an SMT formula allows one to model the datapath operations of a microprocessor at the word rather than the bit level.
By comparison, answer set programming is also based on predicates (more precisely, on atomic sentences created from atomic formulas). Unlike SMT, answer-set programs do not have quantifiers, and cannot easily express constraints such as linear arithmetic or difference logic—answer set programming is best suited to Boolean problems that reduce to the free theory of uninterpreted functions. Implementing 32-bit integers as bitvectors in answer set programming suffers from most of the same problems that early SMT solvers faced: "obvious" identities such as x+y=y+x are difficult to deduce.
Constraint logic programming does provide support for linear arithmetic constraints, but within a completely different theoretical framework. SMT solvers have also been extended to solve formulas in higher-order logic.
Solver approaches
Early attempts for solving SMT instances involved translating them to Boolean SAT instances (e.g., a 32-bit integer variable would be encoded by 32 single-bit variables with appropriate weights and word-level operations such as 'plus' would be replaced by lower-level logic operations on the bits) and passing this formula to a Boolean SAT solver. This approach, which is referred to as the eager approach (or bitblasting), has its merits: by pre-processing the SMT formula into an equivalent Boolean SAT formula, existing Boolean SAT solvers can be used "as-is" and their performance and capacity improvements leveraged over time. On the other hand, the loss of the high-level semantics of the underlying theories means that the Boolean SAT solver has to work a lot harder than necessary to discover "obvious" facts (such as x + y = y + x for integer addition). This observation led to the development of a number of SMT solvers that tightly integrate the Boolean reasoning of a DPLL-style search with theory-specific solvers (T-solvers) that handle conjunctions (ANDs) of predicates from a given theory. This approach is referred to as the lazy approach.
Dubbed DPLL(T), this architecture gives the responsibility of Boolean reasoning to the DPLL-based SAT solver which, in turn, interacts with a solver for theory T through a well-defined interface. The theory solver only needs to worry about checking the feasibility of conjunctions of theory predicates passed on to it from the SAT solver as it explores the Boolean search space of the formula. For this integration to work well, however, the theory solver must be able to participate in propagation and conflict analysis, i.e., it must be able to infer new facts from already established facts, as well as to supply succinct explanations of infeasibility when theory conflicts arise. In other words, the theory solver must be incremental and backtrackable.
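The following self-contained Python sketch caricatures this division of labour: a brute-force search over the Boolean abstraction plays the role of the SAT solver, and a toy difference-logic checker plays the role of the T-solver. All names and constraints are invented, and real DPLL(T) implementations are incremental and far more sophisticated.

```python
# Toy "lazy" SMT check: Boolean search plus a theory check per Boolean model.
from itertools import product

atoms = {                      # propositional abstraction of three theory atoms
    'a': ('x', 'y', -1),       # a  <->  x - y <= -1
    'b': ('y', 'z', -1),       # b  <->  y - z <= -1
    'c': ('z', 'x', -1),       # c  <->  z - x <= -1
}
clauses = [['a'], ['b'], ['c']]          # Boolean skeleton: a AND b AND c

def theory_consistent(true_atoms):
    """Feasibility of a conjunction of difference constraints (x - y <= c)."""
    edges = [atoms[name] for name in true_atoms]
    nodes = {v for x, y, _ in edges for v in (x, y)}
    dist = {v: 0 for v in nodes}
    for _ in range(len(nodes)):                    # Bellman-Ford-style relaxation
        for x, y, c in edges:                      # x - y <= c: edge y -> x, weight c
            dist[x] = min(dist[x], dist[y] + c)
    # a still-relaxable edge after |V| rounds signals a negative cycle (infeasible)
    return all(dist[x] <= dist[y] + c for x, y, c in edges)

result = 'unsat'
for values in product([False, True], repeat=len(atoms)):
    assignment = dict(zip(atoms, values))
    if all(any(assignment[lit] for lit in clause) for clause in clauses):
        # (a real lazy solver would also account for atoms assigned False)
        if theory_consistent([name for name, v in assignment.items() if v]):
            result = 'sat'
            break
print(result)    # -> unsat: the three constraints form a negative cycle
```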
Decidable theories
Researchers study which theories or subsets of theories lead to a decidable SMT problem and the computational complexity of decidable cases. Since full first-order logic is only semidecidable, one line of research attempts to find efficient decision procedures for fragments of first-order logic such as effectively propositional logic.
Another line of research involves the development of specialized decidable theories, including linear arithmetic over rationals and integers, fixed-width bitvectors, floating-point arithmetic (often implemented in SMT solvers via bit-blasting, i.e., reduction to bitvectors), strings, (co)-datatypes, sequences (used to model dynamic arrays), finite sets and relations, separation logic, finite fields, and uninterpreted functions among others.
Boolean monotonic theories are a class of theories that support efficient theory propagation and conflict analysis, enabling practical use within DPLL(T) solvers. Monotonic theories support only Boolean variables (Boolean is the only sort), and all their functions and predicates obey the axiom
Examples of monotonic theories include graph reachability, collision detection for convex hulls, minimum cuts, and computation tree logic. Every Datalog program can be interpreted as a monotonic theory.
SMT for undecidable theories
Most of the common SMT approaches support decidable theories. However, many real-world systems, such as an aircraft and its behavior, can only be modelled by means of non-linear arithmetic over the real numbers involving transcendental functions. This fact motivates an extension of the SMT problem to non-linear theories, such as determining whether the following equation is satisfiable:
where
Such problems are, however, undecidable in general. (On the other hand, the theory of real closed fields, and thus the full first order theory of the real numbers, are decidable using quantifier elimination. This is due to Alfred Tarski.) The first order theory of the natural numbers with addition (but not multiplication), called Presburger arithmetic, is also decidable. Since multiplication by constants can be implemented as nested additions, the arithmetic in many computer programs can be expressed using Presburger arithmetic, resulting in decidable formulas.
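A minimal sketch of this last point, using Z3's Python API (the concrete constraint is invented for illustration): multiplication by the constant 3 is unfolded into additions, which keeps the formula within Presburger arithmetic.

```python
from z3 import Int, Solver, sat

n = Int('n')
s = Solver()
s.add(n + n + n == 12)   # stands for "3*n == 12" without using multiplication
if s.check() == sat:
    print(s.model())     # n = 4
```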
Examples of SMT solvers addressing Boolean combinations of theory atoms from undecidable arithmetic theories over the reals are ABsolver, which employs a classical DPLL(T) architecture with a non-linear optimization package as (necessarily incomplete) subordinate theory solver, iSAT, building on a unification of DPLL SAT-solving and interval constraint propagation called the iSAT algorithm, and cvc5.
Solvers
The table below summarizes some of the features of the many available SMT solvers. The column "SMT-LIB" indicates compatibility with the SMT-LIB language; many systems marked 'yes' may support only older versions of SMT-LIB, or offer only partial support for the language. The column "CVC" indicates support for the CVC language. The column "DIMACS" indicates support for the DIMACS format.
Projects differ not only in features and performance, but also in the viability of the surrounding community, its ongoing interest in a project, and its ability to contribute documentation, fixes, tests and enhancements.
Standardization and the SMT-COMP solver competition
There are multiple attempts to describe a standardized interface to SMT solvers (and automated theorem provers, a term often used synonymously). The most prominent is the SMT-LIB standard, which provides a language based on S-expressions. Other standardized formats commonly supported are the DIMACS format supported by many Boolean SAT solvers, and the CVC format used by the CVC automated theorem prover.
The SMT-LIB format also comes with a number of standardized benchmarks and has enabled a yearly competition between SMT solvers called SMT-COMP. Initially, the competition took place during the Computer Aided Verification conference (CAV), but as of 2020 the competition is hosted as part of the SMT Workshop, which is affiliated with the International Joint Conference on Automated Reasoning (IJCAR).
Applications
SMT solvers are useful both for verification (proving the correctness of programs and supporting software testing based on symbolic execution) and for synthesis (generating program fragments by searching over the space of possible programs). Outside of software verification, SMT solvers have also been used for type inference and for modelling theoretic scenarios, including modelling actor beliefs in nuclear arms control.
Verification
Computer-aided verification of computer programs often uses SMT solvers. A common technique is to translate preconditions, postconditions, loop conditions, and assertions into SMT formulas in order to determine if all properties can hold.
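For instance, a simple verification condition can be discharged by asserting its negation and checking for unsatisfiability. The sketch below uses Z3's Python bindings; the tiny program and its contract are invented for illustration.

```python
from z3 import Int, Solver, And, Not, Implies, unsat

# Hoare triple to check: {x >= 0}  y := x + 1  {y > 0}
x, y = Int('x'), Int('y')
vc = Implies(And(x >= 0, y == x + 1), y > 0)   # verification condition

s = Solver()
s.add(Not(vc))            # look for a counterexample
print("verified" if s.check() == unsat else f"counterexample: {s.model()}")
```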
There are many verifiers built on top of the Z3 SMT solver. Boogie is an intermediate verification language that uses Z3 to automatically check simple imperative programs. The VCC verifier for concurrent C uses Boogie, as well as Dafny for imperative object-based programs, Chalice for concurrent programs, and Spec# for C#. F* is a dependently typed language that uses Z3 to find proofs; the compiler carries these proofs through to produce proof-carrying bytecode. The Viper verification infrastructure encodes verification conditions to Z3. The sbv library provides SMT-based verification of Haskell programs, and lets the user choose among a number of solvers such as Z3, ABC, Boolector, cvc5, MathSAT and Yices.
There are also many verifiers built on top of the Alt-Ergo SMT solver. Here is a list of mature applications:
Why3, a platform for deductive program verification, uses Alt-Ergo as its main prover;
CAVEAT, a C-verifier developed by CEA and used by Airbus; Alt-Ergo was included in the qualification DO-178C of one of its recent aircraft;
Frama-C, a framework to analyse C-code, uses Alt-Ergo in the Jessie and WP plugins (dedicated to "deductive program verification");
SPARK uses CVC4 and Alt-Ergo (behind GNATprove) to automate the verification of some assertions in SPARK 2014;
Atelier-B can use Alt-Ergo instead of its main prover (increasing success from 84% to 98% on the ANR Bware project benchmarks);
Rodin, a B-method framework developed by Systerel, can use Alt-Ergo as a back-end;
Cubicle, an open source model checker for verifying safety properties of array-based transition systems.
EasyCrypt, a toolset for reasoning about relational properties of probabilistic computations with adversarial code.
Many SMT solvers implement a common interface format called SMTLIB2 (such files usually have the extension ".smt2"). The LiquidHaskell tool implements a refinement type based verifier for Haskell that can use any SMTLIB2 compliant solver, e.g. cvc5, MathSat, or Z3.
Symbolic-execution based analysis and testing
An important application of SMT solvers is symbolic execution for analysis and testing of programs (e.g., concolic testing), aimed particularly at finding security vulnerabilities. Example tools in this category include SAGE from Microsoft Research, KLEE, S2E, and Triton. SMT solvers that have been used for symbolic-execution applications include Z3, STP, the Z3str family of solvers, and Boolector.
Interactive theorem proving
SMT solvers have been integrated with proof assistants, including Coq and Isabelle/HOL.
See also
Answer set programming
Automated theorem proving
SAT solver
First-order logic
Theory of pure equality
Notes
References
SMT-LIB: The Satisfiability Modulo Theories Library
SMT-COMP: The Satisfiability Modulo Theories Competition
Decision procedures - an algorithmic point of view
This article was originally adapted from a column in the ACM SIGDA e-newsletter by Prof. Karem A. Sakallah. Original text is available here
Constraint programming
Electronic design automation
Formal methods
Logic in computer science
NP-complete problems
Satisfiability problems | Satisfiability modulo theories | [
"Mathematics",
"Engineering"
] | 3,260 | [
"Logic in computer science",
"Automated theorem proving",
"Mathematical logic",
"Computational problems",
"Software engineering",
"Mathematical problems",
"Formal methods",
"NP-complete problems",
"Satisfiability problems"
] |
5,139,283 | https://en.wikipedia.org/wiki/Geometric%20albedo | In astronomy, the geometric albedo of a celestial body is the ratio of its actual brightness as seen from the light source (i.e. at zero phase angle) to that of an idealized flat, fully reflecting, diffusively scattering (Lambertian) disk with the same cross-section. (This phase angle refers to the direction of the light paths and is not a phase angle in its normal meaning in optics or electronics.)
Diffuse scattering implies that radiation is reflected isotropically with no memory of the location of the incident light source. Zero phase angle corresponds to looking along the direction of illumination. For Earth-bound observers, this occurs when the body in question is at opposition and on the ecliptic.
The visual geometric albedo refers to the geometric albedo quantity when accounting for only electromagnetic radiation in the visible spectrum.
Airless bodies
The surface materials (regoliths) of airless bodies (in fact, the majority of bodies in the Solar System) are strongly non-Lambertian and exhibit the opposition effect, which is a strong tendency to reflect light straight back to its source, rather than scattering light diffusely.
The geometric albedo of these bodies can be difficult to determine because of this, as their reflectance is strongly peaked for a small range of phase angles near zero. The strength of this peak differs markedly between bodies, and can only be found by making measurements at small enough phase angles. Such measurements are usually difficult due to the necessary precise placement of the observer very close to the incident light. For example, the Moon is never seen from the Earth at exactly zero phase angle, because then it is being eclipsed. Other Solar System bodies are not in general seen at exactly zero phase angle even at opposition, unless they are also simultaneously located at the ascending or descending node of their orbit, and hence lie on the ecliptic. In practice, measurements at small nonzero phase angles are used to derive the parameters which characterize the directional reflectance properties for the body (Hapke parameters). The reflectance function described by these can then be extrapolated to zero phase angle to obtain an estimate of the geometric albedo.
For very bright, solid, airless objects such as Saturn's moons Enceladus and Tethys, whose total reflectance (Bond albedo) is close to one, a strong opposition effect combines with the high Bond albedo to give them a geometric albedo above unity (1.4 in the case of Enceladus). Light is preferentially reflected straight back to its source even at low angle of incidence such as on the limb or from a slope, whereas a Lambertian surface would scatter the radiation much more broadly. A geometric albedo above unity means that the intensity of light scattered back per unit solid angle towards the source is higher than is possible for any Lambertian surface.
Stars
Stars shine intrinsically, but they can also reflect light. In a close binary star system polarimetry can be used to measure the light reflected from one star off another (and vice versa) and therefore also the geometric albedos of the two stars. This task has been accomplished for the two components of the Spica system, with the geometric albedo of Spica A and B being measured as 0.0361 and 0.0136 respectively. The geometric albedos of stars are in general small, for the Sun a value of 0.001 is expected, but for hotter or lower-gravity (i.e. giant) stars the amount of reflected light is expected to be several times that of the stars in the Spica system.
Equivalent definitions
For the hypothetical case of a plane surface, the geometric albedo is the albedo of the surface when the illumination is provided by a beam of radiation that comes in perpendicular to the surface.
Examples
The geometric albedo may be greater or smaller than the Bond albedo, depending on surface and atmospheric properties of the body in question. Some examples:
See also
Albedo
Anisotropy
Bond albedo
Lambertian reflectance
References
Further reading
NASA JPL glossary
K.P. Seidelmann, Ed. (1992) Explanatory Supplement to the Astronomical Almanac, University Science Books, Mill Valley, California.
Observational astronomy
Radiometry
Scattering, absorption and radiative transfer (optics) | Geometric albedo | [
"Chemistry",
"Astronomy",
"Engineering"
] | 884 | [
"Telecommunications engineering",
" absorption and radiative transfer (optics)",
"Observational astronomy",
"Scattering",
"Astronomical sub-disciplines",
"Radiometry"
] |
5,139,503 | https://en.wikipedia.org/wiki/Optical%20scalars | In general relativity, optical scalars refer to a set of three scalar functions (expansion), (shear) and (twist/rotation/vorticity) describing the propagation of a geodesic null congruence.
In fact, these three scalars can be defined for both timelike and null geodesic congruences in an identical spirit, but they are called "optical scalars" only for the null case. Also, it is their tensorial predecessors that are adopted in tensorial equations, while the scalars mainly show up in equations written in the language of Newman–Penrose formalism.
Definitions: expansion, shear and twist
For geodesic timelike congruences
Denote the tangent vector field of an observer's worldline (in a timelike congruence) as , and then one could construct induced "spatial metrics" that
where works as a spatially projecting operator. Use to project the coordinate covariant derivative and one obtains the "spatial" auxiliary tensor ,
where represents the four-acceleration, and is purely spatial in the sense that . Specifically for an observer with a geodesic timelike worldline, we have
Now decompose into its symmetric and antisymmetric parts and ,
is trace-free () while has nonzero trace, . Thus, the symmetric part can be further rewritten into its trace and trace-free part,
Hence, all in all we have
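Since the displayed formulas in this section did not survive text extraction, the following block restates the decomposition in one common textbook convention (metric signature (−,+,+,+), geodesic timelike congruence with tangent u^a); the symbols h, B, θ, σ, ω follow that convention and may differ from the article's own notation.

```latex
\begin{aligned}
h_{ab} &= g_{ab} + u_a u_b, \\
B_{ab} &= h_a{}^{c}\, h_b{}^{d}\, \nabla_d u_c, \\
\theta &= B^{a}{}_{a} = \nabla_a u^a, \qquad
\sigma_{ab} = B_{(ab)} - \tfrac{1}{3}\,\theta\, h_{ab}, \qquad
\omega_{ab} = B_{[ab]}, \\
B_{ab} &= \tfrac{1}{3}\,\theta\, h_{ab} + \sigma_{ab} + \omega_{ab}.
\end{aligned}
```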
For geodesic null congruences
Now, consider a geodesic null congruence with tangent vector field . Similar to the timelike situation, we also define
which can be decomposed into
where
Here, "hatted" quantities are utilized to stress that these quantities for null congruences are two-dimensional as opposed to the three-dimensional timelike case. However, if we only discuss null congruences in a paper, the hats can be omitted for simplicity.
Definitions: optical scalars for null congruences
The optical scalars come straightforwardly from "scalarization" of the tensors in Eq(9).
The expansion of a geodesic null congruence is defined by (where for clearance we will adopt another standard symbol "" to denote the covariant derivative )
Comparison with the "expansion rates of a null congruence": As shown in the article "Expansion rate of a null congruence", the outgoing and ingoing expansion rates, denoted by and respectively, are defined by
where represents the induced metric. Also, and can be calculated via
where and are respectively the outgoing and ingoing non-affinity coefficients defined by
Moreover, in the language of Newman–Penrose formalism with the convention , we have
As we can see, for a geodesic null congruence, the optical scalar plays the same role with the expansion rates and . Hence, for a geodesic null congruence, will be equal to either or .
The shear of a geodesic null congruence is defined by
The twist of a geodesic null congruence is defined by
In practice, a geodesic null congruence is usually defined by either its outgoing () or ingoing () tangent vector field (which are also its null normals). Thus, we obtain two sets of optical scalars and , which are defined with respect to and , respectively.
Applications in decomposing the propagation equations
For a geodesic timelike congruence
The propagation (or evolution) of for a geodesic timelike congruence along respects the following equation,
Take the trace of Eq(13) by contracting it with , and Eq(13) becomes
in terms of the quantities in Eq(6). Moreover, the trace-free, symmetric part of Eq(13) is
Finally, the antisymmetric component of Eq(13) yields
For a geodesic null congruence
A (generic) geodesic null congruence obeys the following propagation equation,
With the definitions summarized in Eq(9), Eq(14) could be rewritten into the following componential equations,
For a restricted geodesic null congruence
For a geodesic null congruence restricted on a null hypersurface, we have
Spin coefficients, Raychaudhuri's equation and optical scalars
For a better understanding of the previous section, we will briefly review the meanings of relevant NP spin coefficients in depicting null congruences. The tensor form of Raychaudhuri's equation governing null flows reads
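The displayed equation is missing here; for reference, the familiar scalar form of Raychaudhuri's equation for an affinely parametrized geodesic null congruence (tangent l^a, affine parameter λ), quoted from common conventions rather than from this article, is:

```latex
\frac{d\theta}{d\lambda}
  = -\frac{1}{2}\,\theta^{2}
    - \sigma_{ab}\,\sigma^{ab}
    + \omega_{ab}\,\omega^{ab}
    - R_{ab}\, l^{a} l^{b}.
```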
where is defined such that . The quantities in Raychaudhuri's equation are related with the spin coefficients via
where Eq(24) follows directly from and
See also
Raychaudhuri equation
Congruence (general relativity)
References
General relativity | Optical scalars | [
"Physics"
] | 996 | [
"General relativity",
"Theory of relativity"
] |
5,140,039 | https://en.wikipedia.org/wiki/Hapticity | In coordination chemistry, hapticity is the coordination of a ligand to a metal center via an uninterrupted and contiguous series of atoms. The hapticity of a ligand is described with the Greek letter η ('eta'). For example, η2 describes a ligand that coordinates through 2 contiguous atoms. In general the η-notation only applies when multiple atoms are coordinated (otherwise the κ-notation is used). In addition, if the ligand coordinates through multiple atoms that are contiguous then this is considered denticity (not hapticity), and the κ-notation is used once again. When naming complexes care should be taken not to confuse η with μ ('mu'), which relates to bridging ligands.
History
The need for additional nomenclature for organometallic compounds became apparent in the mid-1950s when Dunitz, Orgel, and Rich described the structure of the "sandwich complex" ferrocene by X-ray crystallography where an iron atom is "sandwiched" between two parallel cyclopentadienyl rings. Cotton later proposed the term hapticity derived from the adjectival prefix hapto (from the Greek haptein, to fasten, denoting contact or combination) placed before the name of the olefin, where the Greek letter η (eta) is used to denote the number of contiguous atoms of a ligand that bind to a metal center. The term is usually employed to refer to ligands containing extended π-systems or where agostic bonding is not obvious from the formula.
Historically important compounds where the ligands are described with hapticity
Ferrocene: bis(η5-cyclopentadienyl)iron
Uranocene: bis(η8-1,3,5,7-cyclooctatetraene)uranium
W(CO)3(PPri3)2(η2-H2): the first compound to be synthesized with a dihydrogen ligand.
IrCl(CO)[P(C6H5)3]2(η2-O2): the dioxygen derivative which forms reversibly upon oxygenation of Vaska's complex.
Examples
The η-notation is encountered in many coordination compounds:
Side-on bonding of molecules containing σ-bonds like H2:
W(CO)3(PiPr3)2(η2-H2)
Side-on bonded ligands containing multiple bonded atoms, e.g. ethylene in Zeise's salt or with fullerene, which is bonded through donation of the π-bonding electrons:
K[PtCl3(η2-C2H4)].H2O
Related complexes containing bridging π-ligands:
(μ-η2:η2-C2H2)Co2(CO)6 and (Cp*2Sm)2(μ-η2:η2-N2)
Dioxygen in bis{(trispyrazolylborato)copper(II)}(μ-η2:η2-O2),
Note that with some bridging ligands, an alternative bridging mode is observed, e.g. κ1,κ1, like in (Me3SiCH2)3V(μ-N2-κ1(N),κ1(N′))V(CH2SiMe3)3 contains a bridging dinitrogen molecule, where the molecule is end-on coordinated to the two metal centers (see hapticity vs. denticity).
The bonding of π-bonded species can be extended over several atoms, e.g. in allyl, butadiene ligands, but also in cyclopentadienyl or benzene rings can share their electrons.
Apparent violations of the 18-electron rule sometimes are explicable in compounds with unusual hapticities:
The 18-VE complex (η5-C5H5)Fe(η1-C5H5)(CO)2 contains one η5 bonded cyclopentadienyl, and one η1 bonded cyclopentadienyl.
Reduction of the 18-VE compound [Ru(η6-C6Me6)2]2+ (where both aromatic rings are bonded in an η6-coordination), results in another 18-VE compound: [Ru(η6-C6Me6)(η4-C6Me6)].
Examples of polyhapto coordinated heterocyclic and inorganic rings: Cr(η5-C4H4S)(CO)3 contains the sulfur heterocycle thiophene and Cr(η6-B3N3Me6)(CO)3 contains a coordinated inorganic ring (B3N3 ring).
Electrons donated by "π-ligands" versus hapticity
Changes in hapticity
The hapticity of a ligand can change in the course of a reaction. E.g. in a redox reaction:
Here one of the η6-benzene rings changes to a η4-benzene.
Similarly hapticity can change during a substitution reaction:
Here the η5-cyclopentadienyl changes to an η3-cyclopentadienyl, giving room on the metal for an extra 2-electron donating ligand 'L'. Removal of one molecule of CO and again donation of two more electrons by the cyclopentadienyl ligand restores the η5-cyclopentadienyl. The so-called indenyl effect also describes changes in hapticity in a substitution reaction.
Hapticity vs. denticity
Hapticity must be distinguished from denticity. Polydentate ligands coordinate via multiple coordination sites within the ligand. In this case the coordinating atoms are identified using the κ-notation, as for example seen in coordination of 1,2-bis(diphenylphosphino)ethane (Ph2PCH2CH2PPh2), to NiCl2 as dichloro[ethane-1,2-diylbis(diphenylphosphane)-κ2P]nickel(II). If the coordinating atoms are contiguous (connected to each other), the η-notation is used, as e.g. in titanocene dichloride: dichlorobis(η5-2,4-cyclopentadien-1-yl)titanium.
Hapticity and fluxionality
Molecules with polyhapto ligands are often fluxional, also known as stereochemically non-rigid. Two classes of fluxionality are prevalent for organometallic complexes of polyhapto ligands:
Case 1, typically: when the hapticity value is less than the number of sp2 carbon atoms. In such situations, the metal will often migrate from carbon to carbon, maintaining the same net hapticity. The η1-C5H5 ligand in (η5-C5H5)Fe( η1-C5H5)(CO)2 rearranges rapidly in solution such that Fe binds alternatingly to each carbon atom in the η1-C5H5 ligand. This reaction is degenerate and, in the jargon of organic chemistry, it is an example of a sigmatropic rearrangement. A related example is Bis(cyclooctatetraene)iron, in which the η4- and η6-C8H8 rings interconvert.
Case 2, typically: complexes containing cyclic polyhapto ligands with maximized hapticity. Such ligands tend to rotate. A famous example is ferrocene, Fe(η5-C5H5)2, wherein the Cp rings rotate with a low energy barrier about the principal axis of the molecule that "skewers" each ring (see rotational symmetry). This "ring torsion" explains, inter alia, why only one isomer can be isolated for Fe(η5-C5H4Br)2 since the torsional barrier is very low.
References
Coordination chemistry | Hapticity | [
"Chemistry"
] | 1,702 | [
"Coordination chemistry"
] |
22,802,888 | https://en.wikipedia.org/wiki/Biexciton | In condensed matter physics, biexcitons are created from two free excitons, analogous to di-positronium in vacuum.
Formation of biexcitons
In quantum information and computation, it is essential to construct coherent combinations of quantum states.
The basic quantum operations can be performed on a sequence of pairs of physically distinguishable quantum bits and, therefore, can be illustrated by a simple four-level system.
In an optically driven system where the and states can be directly excited, direct excitation of the upper level from the ground state is usually forbidden and the most efficient alternative is coherent nondegenerate two-photon excitation, using or as an intermediate state.
Observation of biexcitons
Three possibilities of observing biexcitons exist:
(a) excitation from the one-exciton band to the biexciton band (pump-probe experiments);
(b) two-photon absorption of light from the ground state to the biexciton state;
(c) luminescence from a biexciton state made up from two free excitons in a dense exciton system.
Binding energy of biexcitons
The biexciton is a quasi-particle formed from two excitons, and its energy is expressed as E_XX = 2E_X − E_b, where E_XX is the biexciton energy, E_X is the exciton energy, and E_b is the biexciton binding energy.
When a biexciton is annihilated, it disintegrates into a free exciton and a photon. The energy of the photon is smaller than that of the exciton by the biexciton binding energy, ħω = E_X − E_b,
so the biexciton luminescence peak appears on the low-energy side of the exciton peak.
The biexciton binding energy in semiconductor quantum dots has been the subject of extensive theoretical study. Because a biexciton is a composite of two electrons and two holes, we must solve a four-body problem under spatially restricted conditions. The biexciton binding energies for CuCl quantum dots, as measured by the site selective luminescence method, increased with decreasing quantum dot size. The data were well fitted by the function
where is biexciton binding energy, is the radius of the quantum dots, is the binding energy of bulk crystal, and and are fitting parameters.
A simple model for describing binding energy of biexcitons
In the effective-mass approximation, the Hamiltonian of the system consisting of two electrons (1, 2) and two holes (a, b) is given by
where and are the effective masses of electrons and holes, respectively, and
where denotes the Coulomb interaction between the charged particles and ( denote the two electrons and two holes in the biexciton) given by
where is the dielectric constant of the material.
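As the displayed operators were lost in extraction, the following is the standard textbook form of this four-particle effective-mass Hamiltonian (electrons 1, 2 and holes a, b, in Gaussian-style units with dielectric constant ε); it is given here as a hedged reconstruction rather than a quotation of the article.

```latex
H = -\frac{\hbar^{2}}{2 m_{e}}\left(\nabla_{1}^{2}+\nabla_{2}^{2}\right)
    -\frac{\hbar^{2}}{2 m_{h}}\left(\nabla_{a}^{2}+\nabla_{b}^{2}\right)
    +\frac{e^{2}}{\epsilon}\left(
      \frac{1}{r_{12}}+\frac{1}{r_{ab}}
      -\frac{1}{r_{1a}}-\frac{1}{r_{1b}}
      -\frac{1}{r_{2a}}-\frac{1}{r_{2b}}\right).
```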
Denoting and are the c.m. coordinate and the relative coordinate of the biexciton, respectively, and is the effective mass of the exciton, the Hamiltonian becomes
where ; and are the Laplacians with respect to relative coordinates between electron and hole, respectively.
And is that with respect to relative coordinate between the c. m. of excitons, and is that with respect to the c. m. coordinate of the system.
In the units of the exciton Rydberg and Bohr radius, the Hamiltonian can be written in dimensionless form
where the kinetic energy operator of the c.m. motion has been neglected, and can be written as
To solve the problem of the bound states of the biexciton complex, it is required to find the wave functions satisfying the wave equation
If the eigenvalue can be obtained, the binding energy of the biexciton can be also acquired
where is the binding energy of the biexciton and is the energy of exciton.
Numerical calculations of the binding energies of biexcitons
The diffusion Monte Carlo (DMC) method provides a straightforward means of calculating the binding energies of biexcitons within the effective mass approximation. For a biexciton composed of four distinguishable particles (e.g., a spin-up electron, a spin-down electron, a spin-up hole and a spin-down hole), the ground-state wave function is nodeless and hence the DMC method is exact. DMC calculations have been used to calculate the binding energies of biexcitons in which the charge carriers interact via the Coulomb interaction in two and three dimensions, indirect biexcitons in coupled quantum wells, and biexcitons in monolayer transition metal dichalcogenide semiconductors.
Binding energy in nanotubes
Biexcitons, bound complexes formed by two excitons, are predicted to be surprisingly stable for carbon nanotubes over a wide diameter range.
Thus, a biexciton binding energy exceeding the inhomogeneous exciton line width is predicted for a wide range of nanotubes.
The biexciton binding energy in carbon nanotube is quite accurately approximated by an inverse dependence on , except perhaps for the smallest values of .
The actual biexciton binding energy is inversely proportional to the physical nanotube radius.
Experimental evidence of biexcitons in carbon nanotubes was found in 2012.
Binding energy in quantum dots
The binding energy of biexcitons in a quantum dot decreases with size. In CuCl, the biexciton's size dependence and bulk value are well represented by the expression
(meV)
where the effective radius of the microcrystallites is given in nm. The enhanced Coulomb interaction in microcrystallites still increases the biexciton binding energy in the large-size regime, where the quantum confinement energy of excitons is not significant.
References
Spintronics
Quasiparticles | Biexciton | [
"Physics",
"Materials_science"
] | 1,209 | [
"Matter",
"Spintronics",
"Condensed matter physics",
"Quasiparticles",
"Subatomic particles"
] |
22,809,096 | https://en.wikipedia.org/wiki/United%20Heavy%20Machinery | United Heavy Machinery or Uralmash-Izhora Group, (, OMZ) is a large Russia-based international heavy industry and manufacturing conglomerate. OMZ manufactures a wide range of steel, custom and industrial components for nuclear power plants, petrochemical and mining operations and utilities. In particular OMZ is a manufacturer of reactor pressure vessels for the VVER type of nuclear reactors and the manufacturer of EKG open-cut mining power shovels.
As a Russian open joint-stock company, shares in OMZ may be publicly traded subject to terms of constitutive documents and merger agreements.
OMZ was formed in 1996 through the incorporation of Ural Machine-Building Plants with ZSMK. Izhora Plants merged with OMZ in 1999 and the company was renamed OMZ (Uralmash-Izhora Group). In 2003 the company combined with Pilsen Steel and Škoda JS, the former steel and nuclear subsidiaries of Škoda Works. In 2008 CHETENG Engineering also joined. OMZ is a 50% owner of the Uralmash Machine-Building Corporation, formed in a 2007 agreement with Metalloinvest.
The company's shares were delisted from the Moscow and London stock exchanges in 2014 "due to the economic inexpedience of supporting the insignificant free-float of less than 0.33% of the capital."
Operation
PJSC OMZ is engaged in industrial technology development and invests in technological assets with growth potential in strategic industries.
Financial indicators
The company's revenue under IFRS in 2023 amounted to 26,007,318 thousand rubles, and net profit amounted to 3,462,836 thousand rubles.
Owners and management
The company is controlled by Gazprombank JSC. The general director of the company is Roman Sergeevich Kuvshinov.
See also
Uralmash
Izhorsky Zavod
Kakha Bendukidze
Gazprombank
External links
OMZ company website
Financial information
Skoda JS
Pilsen Steel
CHETENG s.r.o
Uralmash Machine-Building Joint Venture
References
Manufacturing companies of Russia
Engineering companies of Russia
Nuclear technology companies of Russia
Manufacturing companies established in 1996
Companies listed on the Moscow Exchange
Multinational companies headquartered in Russia
Conglomerate companies of Russia
Industrial machine manufacturers
Manufacturing companies based in Moscow | United Heavy Machinery | [
"Engineering"
] | 482 | [
"Industrial machine manufacturers",
"Industrial machinery"
] |
22,809,664 | https://en.wikipedia.org/wiki/Sparse%20ruler | A sparse ruler is a ruler in which some of the distance marks may be missing. More abstractly, a sparse ruler of length with marks is a sequence of integers where . The marks and correspond to the ends of the ruler. In order to measure the distance , with there must be marks and such that .
A complete sparse ruler allows one to measure any integer distance up to its full length. A complete sparse ruler is called minimal if there is no complete sparse ruler of length L with m − 1 marks. In other words, if any of the marks is removed one can no longer measure all of the distances, even if the marks could be rearranged. A complete sparse ruler is called maximal if there is no complete sparse ruler of length L + 1 with m marks. Complete minimal rulers of length 135 and 136 require one more mark than those of lengths 124-134, 137 and 138. A sparse ruler is called optimal if it is both minimal and maximal.
Since the number of distinct pairs of marks is m(m − 1)/2, this is an upper bound on the length of any maximal sparse ruler with m marks. This upper bound can be achieved only for 2, 3 or 4 marks. For larger numbers of marks, the difference between the optimal length and the bound grows gradually, and unevenly.
For example, for 6 marks the upper bound is 15, but the maximal length is 13. There are 3 different configurations of sparse rulers of length 13 with 6 marks. One is {0, 1, 2, 6, 10, 13}. To measure a length of 7, say, with this ruler one would take the distance between the marks at 6 and 13.
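Completeness claims of this kind are easy to check mechanically; the short Python snippet below verifies the example ruler (the code is illustrative and not part of the original article).

```python
# Check that the 6-mark ruler {0, 1, 2, 6, 10, 13} is complete up to 13.
marks = [0, 1, 2, 6, 10, 13]
diffs = {b - a for a in marks for b in marks if b > a}
print(sorted(diffs))                       # every distance from 1 to 13
print(set(range(1, 14)) <= diffs)          # True
```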
A Golomb ruler is a sparse ruler that requires all of the differences be distinct. In general, a Golomb ruler with m marks will be considerably longer than an optimal sparse ruler with m marks, since m(m − 1)/2 is a lower bound for the length of a Golomb ruler. A long Golomb ruler will have gaps, that is, it will have distances which it cannot measure. For example, the optimal Golomb ruler {0, 1, 4, 10, 12, 17} has length 17, but cannot measure lengths of 14 or 15.
Wichmann rulers
As found by Brian Wichmann, many optimal rulers are of the form W(r, s) = 1^r, r+1, (2r+1)^r, (4r+3)^s, (2r+2)^(r+1), 1^r, where a^b represents b consecutive segments of length a. Thus, if r = 1 and s = 2, then W(1, 2) has (in order):
1 segment of length 1,
1 segment of length 2,
1 segment of length 3,
2 segments of length 7,
2 segments of length 4,
1 segment of length 1.
A minor variant is , with a length one less than .
W(1, 2) gives the ruler {0, 1, 3, 6, 13, 20, 24, 28, 29}, while the variant gives {0, 1, 3, 6, 9, 16, 23, 27, 28}. The length of a Wichmann ruler W(r, s) is 4r(r + s + 2) + 3(s + 1) and the number of marks is 4r + s + 3. Note that not all Wichmann rulers are optimal and not all optimal rulers can be generated this way. None of the optimal rulers of length 1, 13, 17, 23 and 58 follow this pattern. That sequence ends with 58 if the Optimal Ruler Conjecture of Peter Luschny is correct. The conjecture is known to be true to length 213.
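A hedged Python sketch of the construction as described above; the function names and the completeness check are invented for illustration.

```python
# Wichmann ruler W(r, s): segment lengths
#   1 (r times), r+1, 2r+1 (r times), 4r+3 (s times), 2r+2 (r+1 times), 1 (r times).
def wichmann(r, s):
    segments = ([1] * r + [r + 1] + [2 * r + 1] * r
                + [4 * r + 3] * s + [2 * r + 2] * (r + 1) + [1] * r)
    marks, pos = [0], 0
    for seg in segments:
        pos += seg
        marks.append(pos)
    return marks

def is_complete(marks):
    diffs = {b - a for a in marks for b in marks if b > a}
    return diffs >= set(range(1, marks[-1] + 1))

print(wichmann(1, 2))               # [0, 1, 3, 6, 13, 20, 24, 28, 29] as in the text
print(is_complete(wichmann(1, 2)))  # True
```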
Asymptotics
For every let be the smallest number of marks for a ruler of length . For example, . The asymptotics of this function were studied by Erdős and Gál (1948) and continued by Leech (1956), who proved that the limit exists and is bounded below and above by
Much better upper bounds exist for -perfect rulers. Those are subsets of such that each positive number can be written as a difference for some . For every number let be the smallest cardinality of an -perfect ruler. It is clear that . The asymptotics of this sequence were studied by Rédei and Rényi (1949) and then by Leech (1956) and Golay (1972). Due to their efforts, the following upper and lower bounds were obtained:
Define the excess as . In 2020, Pegg proved by construction that the excess is at most 1 for all lengths . If the Optimal Ruler Conjecture is true, then for all , leading to the "dark mills" pattern when arranged in columns, OEIS A326499. All of the windows in the dark mills pattern are Wichmann rulers. None of the best known sparse rulers are proven minimal as of September 2020. Many of the current best known constructions for are believed to be non-minimal, especially the "cloud" values.
Examples
The following are examples of minimal sparse rulers. Optimal rulers are highlighted. When there are too many to list, not all are included. Mirror images are not shown.
Incomplete sparse rulers
A few incomplete rulers can fully measure up to a longer distance than an optimal sparse ruler with the same number of marks. For example, four incomplete rulers with 7 marks can each measure up to 18, while an optimal sparse ruler with 7 marks can measure only up to 17. The table below lists these rulers, up to rulers with 13 marks. Mirror images are not shown. Rulers that can fully measure up to a longer distance than any shorter ruler with the same number of marks are highlighted.
See also
Gauge block
Golomb ruler
Perfect ruler
References
http://www.luschny.de/math/rulers/prulers.html
http://oeis.org/wiki/User:Peter_Luschny/PerfectRulers
http://www.iwriteiam.nl/Ha_sparse_rulers.html
http://www.maa.org/editorial/mathgames/mathgames_11_15_04.html
http://www.contestcen.com/scale.htm
http://members.cox.net/wnmyers/sparse_rulers.txt
Number theory
Combinatorics
Length, distance, or range measuring devices | Sparse ruler | [
"Mathematics"
] | 1,199 | [
"Discrete mathematics",
"Number theory",
"Combinatorics"
] |
907,108 | https://en.wikipedia.org/wiki/Alfv%C3%A9n%20wave | In plasma physics, an Alfvén wave, named after Hannes Alfvén, is a type of plasma wave in which ions oscillate in response to a restoring force provided by an effective tension on the magnetic field lines.
Definition
An Alfvén wave is a low-frequency (compared to the ion gyrofrequency) travelling oscillation of the ions and magnetic field in a plasma. The ion mass density provides the inertia and the magnetic field line tension provides the restoring force. Alfvén waves propagate in the direction of the magnetic field, and the motion of the ions and the perturbation of the magnetic field are transverse to the direction of propagation. However, Alfvén waves existing at oblique incidences will smoothly change into magnetosonic waves when the propagation is perpendicular to the magnetic field.
Alfvén waves are dispersionless.
Alfvén velocity
The low-frequency relative permittivity of a magnetized plasma is given by
where is the magnetic flux density, is the speed of light, is the permeability of the vacuum, and the mass density is the sum
over all species of charged plasma particles (electrons as well as all types of ions).
Here each species has its own number density and mass per particle.
The phase velocity of an electromagnetic wave in such a medium is
For the case of an Alfvén wave
where
is the Alfvén wave group velocity.
(The formula for the phase velocity assumes that the plasma particles are moving at non-relativistic speeds, the mass-weighted particle velocity is zero in the frame of reference, and the wave is propagating parallel to the magnetic field vector.)
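The displayed formulas above were lost in extraction; in standard textbook notation (with B the magnetic flux density, ρ the mass density, μ0 the vacuum permeability and c the speed of light) they read as follows, quoted from common usage rather than verbatim from this article:

```latex
\varepsilon_r \;=\; 1 + \frac{c^{2}\,\mu_0\,\rho}{B^{2}} \;=\; 1 + \frac{c^{2}}{v_A^{2}},
\qquad
v_{\mathrm{ph}} \;=\; \frac{c}{\sqrt{\varepsilon_r}},
\qquad
v_A \;=\; \frac{B}{\sqrt{\mu_0\,\rho}} .
```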
If , then . On the other hand, when , . That is, at high field or low density, the group velocity of the Alfvén wave approaches the speed of light, and the Alfvén wave becomes an ordinary electromagnetic wave.
Neglecting the contribution of the electrons to the mass density, ρ ≈ n_i m_i, where n_i is the ion number density and m_i is the mean ion mass per particle, so that v_A ≈ B/√(μ0 n_i m_i).
Alfvén time
In plasma physics, the Alfvén time τ_A is an important timescale for wave phenomena. It is related to the Alfvén velocity by τ_A = a / v_A,
where a denotes the characteristic scale of the system. For example, a could be the minor radius of the torus in a tokamak.
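As a rough numerical illustration, the Python snippet below evaluates these expressions; the field strength, density, and length scale are invented order-of-magnitude values for a corona-like plasma, not quantities quoted from this article.

```python
import math

mu0 = 4e-7 * math.pi          # vacuum permeability, H/m
B   = 1e-3                    # magnetic flux density, T (10 gauss)
n_i = 1e15                    # ion number density, m^-3
m_i = 1.67e-27                # proton mass, kg

rho   = n_i * m_i                     # mass density (electron contribution neglected)
v_A   = B / math.sqrt(mu0 * rho)      # Alfvén speed, roughly 7e5 m/s here
tau_A = 1e7 / v_A                     # Alfvén time for a 10,000 km length scale

print(f"v_A   = {v_A:.3e} m/s")
print(f"tau_A = {tau_A:.1f} s")
```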
Relativistic case
The Alfvén wave velocity in relativistic magnetohydrodynamics is
where is the total energy density of plasma particles, is the total plasma pressure, and
is the magnetic pressure. In the non-relativistic limit, where , this formula reduces to the one given previously.
History
The coronal heating problem
The study of Alfvén waves began from the coronal heating problem, a longstanding question in heliophysics. It was unclear why the temperature of the solar corona is hot (about one million kelvins) compared to its surface (the photosphere), which is only a few thousand kelvins. Intuitively, it would make sense to see a decrease in temperature when moving away from a heat source, but this does not seem to be the case even though the photosphere is denser and would generate more heat than the corona.
In 1942, Hannes Alfvén proposed in Nature the existence of an electromagnetic-hydrodynamic wave which would carry energy from the photosphere to heat up the corona and the solar wind. He claimed that the sun had all the necessary criteria to support these waves and they may in turn be responsible for sun spots. He stated:
If a conducting liquid is placed in a constant magnetic field, every motion of the liquid gives rise to an E.M.F. which produces electric currents. Owing to the magnetic field, these currents give mechanical forces which change the state of motion of the liquid. Thus a kind of combined electromagnetic–hydrodynamic wave is produced.
This would eventually turn out to be Alfvén waves. He received the 1970 Nobel Prize in Physics for this discovery.
Experimental studies and observations
The convection zone of the Sun, the region beneath the photosphere in which energy is transported primarily by convection, is sensitive to the motion of the core due to the rotation of the Sun. Together with varying pressure gradients beneath the surface, electromagnetic fluctuations produced in the convection zone induce random motion on the photospheric surface and produce Alfvén waves. The waves then leave the surface, travel through the chromosphere and transition zone, and interact with the ionized plasma. The wave itself carries energy and some of the electrically charged plasma.
In the early 1990s, de Pontieu and Haerendel suggested that Alfvén waves may also be associated with the plasma jets known as spicules. It was theorized these brief spurts of superheated gas were carried by the combined energy and momentum of their own upward velocity, as well as the oscillating transverse motion of the Alfvén waves.
In 2007, Alfvén waves were reportedly observed for the first time traveling towards the corona by Tomczyk et al., but their predictions could not conclude that the energy carried by the Alfvén waves was sufficient to heat the corona to its enormous temperatures, for the observed amplitudes of the waves were not high enough. However, in 2011, McIntosh et al. reported the observation of highly energetic Alfvén waves combined with energetic spicules which could sustain heating the corona to its million-kelvin temperature. These observed amplitudes (20.0 km/s against 2007's observed 0.5 km/s) contained over one hundred times more energy than the ones observed in 2007. The short period of the waves also allowed more energy transfer into the coronal atmosphere. The 50,000 km-long spicules may also play a part in accelerating the solar wind past the corona. Alfvén waves are routinely observed in solar wind, in particular in fast solar wind streams. The role of Alfvénic oscillations in the interaction between fast solar wind and the Earth's magnetosphere is currently under debate.
However, the above-mentioned discoveries of Alfvén waves in the complex Sun's atmosphere, starting from the Hinode era in 2007 for the next 10 years, mostly fall in the realm of Alfvénic waves essentially generated as a mixed mode due to transverse structuring of the magnetic and plasma properties in the localized flux tubes. In 2009, Jess et al. reported the periodic variation of H-alpha line-width as observed by Swedish Solar Telescope (SST) above chromospheric bright-points. They claimed first direct detection of the long-period (126–700 s), incompressible, torsional Alfvén waves in the lower solar atmosphere.
After the seminal work of Jess et al. (2009), in 2017 Srivastava et al. detected the existence of high-frequency torsional Alfvén waves in the Sun's chromospheric fine-structured flux tubes. They discovered that these high-frequency waves carry substantial energy capable of heating the Sun's corona and also originating the supersonic solar wind. In 2018, using spectral imaging observations, non-LTE (local thermodynamic equilibrium) inversions and magnetic field extrapolations of sunspot atmospheres, Grant et al. found evidence for elliptically polarized Alfvén waves forming fast-mode shocks in the outer regions of the chromospheric umbral atmosphere. They provided quantification of the degree of physical heat provided by the dissipation of such Alfvén wave modes above active region spots.
In 2024, a paper was published in the journal Science detailing a set of observations of what turned out to be the same jet of solar wind made by Parker Solar Probe and Solar Orbiter in February 2022, and implying Alfvén waves were what kept the jet's energy high enough to match the observations.
Historical timeline
1942: Alfvén suggests the existence of electromagnetic-hydromagnetic waves in a paper published in Nature 150, 405–406 (1942).
1949: Laboratory experiments by S. Lundquist produce such waves in magnetized mercury, with a velocity that approximated Alfvén's formula.
1949: Enrico Fermi uses Alfvén waves in his theory of cosmic rays.
1950: Alfvén publishes the first edition of his book, Cosmical Electrodynamics, detailing hydromagnetic waves, and discussing their application to both laboratory and space plasmas.
1952: Additional confirmation appears in experiments by Winston Bostick and Morton Levine with ionized helium.
1954: Bo Lehnert produces Alfvén waves in liquid sodium.
1958: Eugene Parker suggests hydromagnetic waves in the interstellar medium.
1958: Berthold, Harris, and Hope detect Alfvén waves in the ionosphere after the Argus nuclear test, generated by the explosion, and traveling at speeds predicted by Alfvén formula.
1958: Eugene Parker suggests hydromagnetic waves in the Solar corona extending into the Solar wind.
1959: D. F. Jephcott produces Alfvén waves in a gas discharge.
1959: C. H. Kelley and J. Yenser produce Alfvén waves in the ambient atmosphere.
1960: Coleman et al. report the measurement of Alfvén waves by the magnetometer aboard the Pioneer and Explorer satellites.
1961: Sugiura suggests evidence of hydromagnetic waves in the Earth's magnetic field.
1961: Normal Alfvén modes and resonances in liquid sodium are studied by Jameson.
1966: R. O. Motz generates and observes Alfvén waves in mercury.
1970: Hannes Alfvén wins the 1970 Nobel Prize in Physics for "fundamental work and discoveries in magneto-hydrodynamics with fruitful applications in different parts of plasma physics".
1973: Eugene Parker suggests hydromagnetic waves in the intergalactic medium.
1974: J. V. Hollweg suggests the existence of hydromagnetic waves in interplanetary space.
1977: Mendis and Ip suggest the existence of hydromagnetic waves in the coma of Comet Kohoutek.
1984: Roberts et al. predict the presence of standing MHD waves in the solar corona and opens the field of coronal seismology.
1999: Aschwanden et al. and Nakariakov et al. report the detection of damped transverse oscillations of solar coronal loops observed with the extreme ultraviolet (EUV) imager on board the Transition Region And Coronal Explorer (TRACE), interpreted as standing kink (or "Alfvénic") oscillations of the loops. This confirms the theoretical prediction of Roberts et al. (1984).
2007: Tomczyk et al. reported the detection of Alfvénic waves in images of the solar corona with the Coronal Multi-Channel Polarimeter (CoMP) instrument at the National Solar Observatory, New Mexico. However, these observations turned out to be kink waves of coronal plasma structures (doi:10.1051/0004-6361/200911840).
2007: A special issue on the Hinode space observatory was released in the journal Science. Alfvén wave signatures in the coronal atmosphere were observed by Cirtain et al., Okamoto et al., and De Pontieu et al. By estimating the observed waves' energy density, De Pontieu et al. have shown that the energy associated with the waves is sufficient to heat the corona and accelerate the solar wind.
2008: Kaghashvili et al. uses driven wave fluctuations as a diagnostic tool to detect Alfvén waves in the solar corona.
2009: Jess et al. detect torsional Alfvén waves in the structured Sun's chromosphere using the Swedish Solar Telescope.
2011: Alfvén waves are shown to propagate in a liquid metal alloy made of Gallium.
2017: 3D numerical modelling performed by Srivastava et al. show that the high-frequency (12–42 mHz) Alfvén waves detected by the Swedish Solar Telescope can carry substantial energy to heat the Sun's inner corona.
2018: Using spectral imaging observations, non-LTE inversions and magnetic field extrapolations of sunspot atmospheres, Grant et al. found evidence for elliptically polarized Alfvén waves forming fast-mode shocks in the outer regions of the chromospheric umbral atmosphere. For the first time, these authors provided quantification of the degree of physical heat provided by the dissipation of such Alfvén wave modes.
2024: Alfvén waves are implied to be behind a smaller than expected energy loss in solar wind jets out as far as Venus' orbit, based on Parker Solar Probe and Solar Orbiter observations only two days apart.
See also
Alfvén surface
Computational magnetohydrodynamics
Electrohydrodynamics
Electromagnetic pump
Ferrofluid
Magnetic flow meter
Magnetohydrodynamic turbulence
MHD generator
MHD sensor
Molten salt
Plasma stability
Shocks and discontinuities (magnetohydrodynamics)
References
Further reading
External links
Mysterious Solar Ripples Detected Dave Mosher 2 September 2007 Space.com
EurekAlert! notification of 7 December 2007 Science special issue
EurekAlert! notification: "Scientists find solution to solar puzzle"
Waves in plasmas | Alfvén wave | [
"Physics"
] | 2,706 | [
"Waves in plasmas",
"Waves",
"Physical phenomena",
"Plasma phenomena"
] |
907,139 | https://en.wikipedia.org/wiki/Agmatine | Agmatine, also known as 4-aminobutyl-guanidine, was discovered in 1910 by Albrecht Kossel. It is a chemical substance which is naturally created from the amino acid arginine. Agmatine has been shown to exert modulatory action at multiple molecular targets, notably: neurotransmitter systems, ion channels, nitric oxide (NO) synthesis, and polyamine metabolism and this provides bases for further research into potential applications.
History
The term agmatine stems from A- (for amino-) + g- (from guanidine) + -ma- (from ptomaine) + -in (German)/-ine (English) suffix, with insertion of -t- apparently for euphony. A year after its discovery, it was found that agmatine could increase blood flow in rabbits; however, the physiological relevance of these findings was questioned given the high concentrations (high μM range) required. In the 1920s, researchers in the diabetes clinic of Oskar Minkowski showed that agmatine can exert mild hypoglycemic effects. In 1994, endogenous agmatine synthesis in mammals was discovered.
Metabolic pathways
Agmatine is a cationic amine formed by decarboxylation of L-arginine by the mitochondrial enzyme arginine decarboxylase (ADC). Agmatine degradation occurs mainly by hydrolysis, catalyzed by agmatinase into urea and putrescine, the diamine precursor of polyamine biosynthesis. An alternative pathway, mainly in peripheral tissues, is by diamine oxidase-catalyzed oxidation into agmatine-aldehyde, which is in turn converted by aldehyde dehydrogenase into guanidinobutyrate and secreted by the kidneys.
Mechanisms of action
Agmatine was found to exert modulatory actions directly and indirectly at multiple key molecular targets underlying cellular control mechanisms of cardinal importance in health and disease. It is considered capable of exerting its modulatory actions simultaneously at multiple targets. The following outline indicates the categories of control mechanisms, and identifies their molecular targets:
Neurotransmitter receptors and receptor ionophores. Nicotinic, imidazoline I1 and I2, α2-adrenergic, glutamate NMDAr, and serotonin 5-HT2A and 5HT-3 receptors.
Ion channels. Including: ATP-sensitive K+ channels, voltage-gated Ca2+ channels, and acid-sensing ion channels (ASICs).
Membrane transporters. Agmatine specific-selective uptake sites, organic cation transporters (mostly OCT2 subtype), extraneuronal monoamine transporters (ENT), polyamine transporters, and mitochondrial agmatine specific-selective transport system.
Nitric oxide (NO) synthesis modulation. Both differential inhibition and activation of NO synthase (NOS) isoforms is reported.
Polyamine metabolism. Agmatine is a precursor for polyamine synthesis, competitive inhibitor of polyamine transport, inducer of spermidine/spermine acetyltransferase (SSAT), and inducer of antizyme.
Protein ADP-ribosylation. Inhibition of protein arginine ADP-ribosylation.
Matrix metalloproteases (MMPs). Indirect down-regulation of the enzymes MMP 2 and 9.
Advanced glycation end product (AGE) formation. Direct blockade of AGEs formation.
NADPH oxidase. Activation of the enzyme leading to H2O2 production.
Food consumption
Agmatine sulfate injection can increase food intake with carbohydrate preference in satiated, but not hungry, rats and this effect may be mediated by neuropeptide Y. However, supplementation in rat drinking water results in slight reductions in water intake, body weight, and blood pressure. In addition, force feeding with agmatine leads to a reduction in body weight gain during rat development. It is also found that many fermented foods contain agmatine.
Pharmacokinetics
Agmatine is present in small amounts in plant-, animal-, and fish-derived foodstuffs, and gut microbial production is an additional source. Oral agmatine is absorbed from the gastrointestinal tract and readily distributed throughout the body. Rapid elimination of ingested (un-metabolized) agmatine from non-brain organs by the kidneys has indicated a blood half-life of about 2 hours.
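As a rough illustration of what that figure implies, a minimal sketch assuming idealized first-order (exponential) elimination, which is a simplification of real pharmacokinetics:

```python
def fraction_remaining(t_hours, half_life_hours=2.0):
    """Fraction of the initial blood concentration remaining after t_hours,
    assuming idealized first-order (exponential) elimination."""
    return 0.5 ** (t_hours / half_life_hours)

# With a ~2 hour half-life, only about 6% of the initial level is left after 8 hours.
print(round(fraction_remaining(8.0), 3))  # 0.062
```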
Research
A number of potential medical uses for agmatine have been suggested.
Cardiovascular
Agmatine produces mild reductions in heart rate and blood pressure, apparently by activating both central and peripheral control systems via modulation of several of its molecular targets including: imidazoline receptors subtypes, norepinephrine release and NO production.
Glucose regulation
Agmatine hypoglycemic effects are the result of simultaneous modulation of several molecular mechanisms involved in blood glucose regulation.
Kidney functions
Agmatine has been shown to enhance glomerular filtration rate (GFR) and to exert nephroprotective effects.
Neurotransmission
Agmatine has been discussed as a putative neurotransmitter. It is synthesized in the brain, stored in synaptic vesicles, accumulated by uptake, released by membrane depolarization, and inactivated by agmatinase. Agmatine binds to α2-adrenergic receptor and imidazoline receptor binding sites, and blocks NMDA receptors and other cation ligand-gated channels. However, while agmatine binds to α2-adrenergic receptors, it exerts neither an agonistic nor an antagonistic effect on these receptors, lacking any intrinsic activity. Short only of identifying specific ("own") post-synaptic receptors, agmatine fulfills Henry Dale's criteria for a neurotransmitter and is hence considered a neuromodulator and co-transmitter. The existence of agmatinergic neuronal systems has not yet been demonstrated, although such receptors are suggested by agmatine's prominent involvement in both the central and peripheral nervous systems. Research into agmatine-specific receptors and transmission pathways continues.
Due to its ability to pass through open cationic channels, agmatine has also been used as a surrogate metric of integrated ionic flux into neural tissue upon stimulation. When neural tissue is incubated in agmatine and an external stimulus is applied, only cells with open channels will be filled with agmatine, allowing identification of which cells are sensitive to that stimuli and the degree to which they opened their cationic channels during the stimulation period.
Opioid liability
Systemic agmatine can potentiate opioid analgesia and prevent tolerance to chronic morphine in laboratory rodents. Cumulative evidence further shows that agmatine inhibits opioid dependence and relapse in several animal species.
See also
Agmatine deiminase
Agmatinase
References
Further reading
Amines
Guanidines
Metabolism
Imidazoline agonists
Neurotransmitters
NMDA receptor antagonists | Agmatine | [
"Chemistry",
"Biology"
] | 1,519 | [
"Guanidines",
"Neurotransmitters",
"Functional groups",
"Amines",
"Cellular processes",
"Biochemistry",
"Neurochemistry",
"Bases (chemistry)",
"Metabolism"
] |
908,261 | https://en.wikipedia.org/wiki/International%20Society%20of%20Electrochemistry | The International Society of Electrochemistry (ISE) is a global scientific society founded in 1949. The Head Office of ISE is now located in Lausanne, Switzerland. ISE is a Member Organization of IUPAC. The Society now has more than 1900 Individual Members, 15 Corporate Members (Universities and non-profit research organizations from Belgium, Croatia, Finland, Germany, India, Italy, New Zealand, Poland, Spain, Switzerland and Serbia) and 16 Corporate Sustaining Members. ISE also has 8 Divisions and Regional Representatives.
ISE's objectives are:
to advance electrochemical science and technology
to disseminate scientific and technological knowledge
to promote international cooperation in electrochemistry
to maintain a high professional standard among its members.
See also
Electrochemistry
Quantum electrochemistry
Revaz Dogonadze
Rudolph A. Marcus
External links
International Society of Electrochemistry (ISE)
IUPAC
Electrochemistry
International scientific organizations | International Society of Electrochemistry | [
"Chemistry"
] | 188 | [
"Electrochemistry",
"Physical chemistry stubs",
"Electrochemistry stubs"
] |
908,518 | https://en.wikipedia.org/wiki/Interchangeable%20parts | Interchangeable parts are parts (components) that are identical for practical purposes. They are made to specifications that ensure that they are so nearly identical that they will fit into any assembly of the same type. One such part can freely replace another, without any custom fitting, such as filing. This interchangeability allows easy assembly of new devices, and easier repair of existing devices, while minimizing both the time and skill required of the person doing the assembly or repair.
The concept of interchangeability was crucial to the introduction of the assembly line at the beginning of the 20th century, and has become an important element of some modern manufacturing but is missing from other important industries.
Interchangeability of parts was achieved by combining a number of innovations and improvements in machining operations and the invention of several machine tools, such as the slide rest lathe, screw-cutting lathe, turret lathe, milling machine and metal planer. Additional innovations included jigs for guiding the machine tools, fixtures for holding the workpiece in the proper position, and blocks and gauges to check the accuracy of the finished parts. Electrification allowed individual machine tools to be powered by electric motors, eliminating line shaft drives from steam engines or water power and allowing higher speeds, making modern large-scale manufacturing possible. Modern machine tools often have numerical control (NC) which evolved into CNC (computerized numeric control) when microprocessors became available.
Methods for industrial production of interchangeable parts in the United States were first developed in the nineteenth century. The term American system of manufacturing was sometimes applied to them at the time, in distinction from earlier methods. Within a few decades such methods were in use in various countries, so American system is now a term of historical reference rather than current industrial nomenclature.
First use
Evidence of the use of interchangeable parts can be traced back over two thousand years to Carthage in the First Punic War. Carthaginian ships had standardized, interchangeable parts that even came with assembly instructions akin to "tab A into slot B" marked on them.
Origins of the modern concept
In the late-18th century, French General Jean-Baptiste Vaquette de Gribeauval promoted standardized weapons in what became known as the Système Gribeauval after it was issued as a royal order in 1765. (At the time the system focused on artillery more than on muskets or handguns.) One of the accomplishments of the system was that solid-cast cannons were bored to precise tolerances, which allowed the walls to be thinner than cannons poured with hollow cores. However, because cores were often off-center, the wall thickness determined the size of the bore. Standardized boring made for shorter cannons without sacrificing accuracy and range because of the tighter fit of the shells; it also allowed standardization of the shells.
Before the 18th century, devices such as guns were made one at a time by gunsmiths in a unique manner. If one single component of a firearm needed a replacement, the entire firearm either had to be sent to an expert gunsmith for custom repairs, or discarded and replaced by another firearm. During the 18th and early-19th centuries, the idea of replacing these methods with a system of interchangeable manufacture gradually developed. The development took decades and involved many people.
Gribeauval provided patronage to Honoré Blanc, who attempted to implement the system at the musket level. By around 1778, Honoré Blanc began producing some of the first firearms with interchangeable flintlock mechanisms, although they were carefully made by craftsmen. Blanc demonstrated in front of a committee of scientists that his muskets could be fitted with flintlock mechanisms picked at random from a pile of parts.
In 1785 muskets with interchangeable locks caught the attention of the United States' Ambassador to France, Thomas Jefferson, through the efforts of Honoré Blanc. Jefferson tried unsuccessfully to persuade Blanc to move to America, then wrote to the American Secretary of War with the idea, and when he returned to the USA he worked to fund its development. President George Washington approved of the concept, and in 1798 Eli Whitney signed a contract to mass-produce 12,000 muskets built under the new system.
Between 4th July 1793 and 25th November 1795, the London gunsmith Henry Nock delivered 12,010 'screwless' or 'Duke's' locks to the British Board of Ordnance. These locks were intended to be interchangeable, being manufactured in large volumes in a steam-powered factory using gauges and lathes. Subsequent experiments have suggested that the lock's components were interchangeable at a higher rate than those of the later British New Land Pattern musket and the American M1816 musket.
Louis de Tousard, who fled the French Revolution, joined the U.S. Corps of Artillerists in 1795 and wrote an influential artillerist's manual that stressed the importance of standardization.
Implementation
Numerous inventors began to try to implement the principle Blanc had described. The development of the machine tools and manufacturing practices required would be a great expense to the U.S. Ordnance Department, and for some years while trying to achieve interchangeability, the firearms produced cost more to manufacture. By 1853, there was evidence that interchangeable parts, then perfected by the Federal Armories, led to savings. The Ordnance Department freely shared the techniques used with outside suppliers.
Eli Whitney and an early attempt
In the US, Eli Whitney saw the potential benefit of developing "interchangeable parts" for the firearms of the United States military. In July 1801 he built ten guns, all containing the same exact parts and mechanisms, then disassembled them before the United States Congress. He placed the parts in a mixed pile and, with help, reassembled all of the firearms in front of Congress, much as Blanc had done some years before.
The Congress was captivated and ordered a standard for all United States equipment. The use of interchangeable parts removed the problems of earlier eras concerning the difficulty or impossibility of producing new parts for old equipment. If one firearm part failed, another could be ordered, and the firearm would not need to be discarded. The catch was that Whitney's guns were costly and handmade by skilled workmen.
Charles Fitch credited Whitney with successfully executing a firearms contract with interchangeable parts using the American System, but historians Merritt Roe Smith and Robert B. Gordon have since determined that Whitney never actually achieved interchangeable parts manufacturing. His family's arms company, however, did so after his death.
Brunel's sailing blocks
Mass production using interchangeable parts was first achieved in 1803 by Marc Isambard Brunel in cooperation with Henry Maudslay and Simon Goodrich, under the management of (and with contributions by) Brigadier-General Sir Samuel Bentham, the Inspector General of Naval Works at Portsmouth Block Mills, Portsmouth Dockyard, Hampshire, England. At the time, the Napoleonic War was at its height, and the Royal Navy was in a state of expansion that required 100,000 pulley blocks to be manufactured a year. Bentham had already achieved remarkable efficiency at the docks by introducing power-driven machinery and reorganising the dockyard system.
Marc Brunel, a pioneering engineer, and Maudslay, a founding father of machine tool technology who had developed the first industrially practical screw-cutting lathe in 1800 which standardized screw thread sizes for the first time, collaborated on plans to manufacture block-making machinery; the proposal was submitted to the Admiralty who agreed to commission his services. By 1805, the dockyard had been fully updated with the revolutionary, purpose-built machinery at a time when products were still built individually with different components. A total of 45 machines were required to perform 22 processes on the blocks, which could be made in three different sizes. The machines were almost entirely made of metal, thus improving their accuracy and durability. The machines would make markings and indentations on the blocks to ensure alignment throughout the process. One of the many advantages of this new method was the increase in labour productivity due to the less labour-intensive requirements of managing the machinery. Richard Beamish, assistant to Brunel's son and engineer, Isambard Kingdom Brunel, wrote:
So that ten men, by the aid of this machinery, can accomplish with uniformity, celerity and ease, what formerly required the uncertain labour of one hundred and ten.
By 1808, annual production had reached 130,000 blocks and some of the equipment was still in operation as late as the mid-twentieth century.
Terry's clocks: success in wood
Eli Terry was producing interchangeable parts with a milling machine as early as 1800. Ward Francillon, a horologist, concluded in a study that Terry had already accomplished interchangeable parts as early as 1800. The study examined several of Terry's clocks produced between 1800 and 1807. The parts were labelled and interchanged as needed. The study concluded that all clock pieces were interchangeable.
The very first mass production using interchangeable parts in America was Eli Terry's 1806 Porter Contract, which called for the production of 4000 clocks in three years. During this contract, Terry crafted four-thousand wooden gear tall case movements, at a time when the annual average was about a dozen. Unlike Eli Whitney, Terry manufactured his products without government funding. Terry saw the potential of clocks becoming a household object. With the use of a milling machine, Terry was able to mass-produce clock wheels and plates a few dozen at the same time. Jigs and templates were used to make uniform pinions, so that all parts could be assembled using an assembly line.
North and Hall: success in metal
The crucial step toward interchangeability in metal parts was taken by Simeon North, working only a few miles from Eli Terry. North created one of the world's first true milling machines to do metal shaping that had been done by hand with a file. Diana Muir believes that North's milling machine was online around 1816. Muir, Merritt Roe Smith, and Robert B. Gordon all agree that before 1832 both Simeon North and John Hall were able to mass-produce complex machines with moving parts (guns) using a system that entailed the use of rough-forged parts, with a milling machine that milled the parts to near-correct size, and that were then "filed to gage by hand with the aid of filing jigs."
Historians differ over the question of whether Hall or North made the crucial improvement. Merritt Roe Smith believes that it was done by Hall. Muir demonstrates the close personal ties and professional alliances between Simeon North and neighbouring mechanics mass-producing wooden clocks to argue that the process for manufacturing guns with interchangeable parts was most probably devised by North in emulation of the successful methods used in mass-producing clocks. It may not be possible to resolve the question with absolute certainty unless documents now unknown should surface in the future.
Late 19th and early 20th centuries: dissemination throughout manufacturing
Skilled engineers and machinists, many with armoury experience, spread interchangeable manufacturing techniques to other American industries, including clockmakers and sewing machine manufacturers Wilcox and Gibbs and Wheeler and Wilson, who used interchangeable parts before 1860. Late to adopt the interchangeable system were the Singer Corporation's sewing machines (1860s–70s), reaper manufacturer McCormick Harvesting Machine Company (1870s–1880s) and several large steam engine manufacturers such as Corliss (mid-1880s), as well as locomotive makers. Typewriters followed some years later. Then large-scale production of bicycles in the 1880s began to use the interchangeable system.
During these decades, true interchangeability grew from a scarce and difficult achievement into an everyday capability throughout the manufacturing industries. In the 1950s and 1960s, historians of technology broadened the world's understanding of the history of the development. Few people outside that academic discipline knew much about the topic until as recently as the 1980s and 1990s, when the academic knowledge began finding wider audiences. As recently as the 1960s, when Alfred P. Sloan published his famous memoir and management treatise, My Years with General Motors, even the long-time president and chair of the largest manufacturing enterprise that had ever existed knew very little about the history of the development, other than to say that:
[Henry M. Leland was], I believe, one of those mainly responsible for bringing the technique of interchangeable parts into automobile manufacturing. […] It has been called to my attention that Eli Whitney, long before, had started the development of interchangeable parts in connection with the manufacture of guns, a fact which suggests a line of descent from Whitney to Leland to the automobile industry.
One of the better-known books on the subject, which was first published in 1984 and has enjoyed a readership beyond academia, has been David A. Hounshell's From the American System to Mass Production, 1800–1932: The Development of Manufacturing Technology in the United States.
See also
Allowance (engineering)
Configuration management
Engineering fit
Engineering tolerance
Fungibility
Just-in-time (business)
Louis de Tousard
Modular design
Preferred numbers
References
Bibliography
.
.
.
Traces in detail the ideal of interchangeable parts, from its origins in 18th-century France, through the gradual development of its practical application via the armory practice ("American system") of the 19th century, to its apex in true mass production beginning in the early 20th century.
A seminal classic of machine tool history. Extensively cited by later works.
.
.
. (Copies available from the British Thesis service of the British Library).
Further reading
External links
Origins of interchangeable parts
History of science and technology in the United States
Manufacturing
Second Industrial Revolution
Industrial design
Interoperability | Interchangeable parts | [
"Engineering"
] | 2,760 | [
"Industrial design",
"Design engineering",
"Telecommunications engineering",
"Manufacturing",
"Interoperability",
"Mechanical engineering",
"Design"
] |
910,107 | https://en.wikipedia.org/wiki/Biological%20small-angle%20scattering | Biological small-angle scattering is a small-angle scattering method for structure analysis of biological materials. Small-angle scattering is used to study the structure of a variety of objects such as solutions of biological macromolecules, nanocomposites, alloys, and synthetic polymers. Small-angle X-ray scattering (SAXS) and small-angle neutron scattering (SANS) are the two complementary techniques known jointly as small-angle scattering (SAS). SAS is an analogous method to X-ray and neutron diffraction, wide angle X-ray scattering, as well as to static light scattering. In contrast to other X-ray and neutron scattering methods, SAS yields information on the sizes and shapes of both crystalline and non-crystalline particles. When used to study biological materials, which are very often in aqueous solution, the scattering pattern is orientation averaged.
SAS patterns are collected at small angles of a few degrees. SAS is capable of delivering structural information in the resolution range between 1 and 25 nm, and of repeat distances in partially ordered systems of up to 150 nm in size. Ultra small-angle scattering (USAS) can resolve even larger dimensions. Grazing-incidence small-angle scattering (GISAS) is a powerful technique for studying biological molecule layers on surfaces.
In biological applications SAS is used to determine the structure of a particle in terms of average particle size and shape. One can also get information on the surface-to-volume ratio. Typically, the biological macromolecules are dispersed in a liquid. The method is accurate, mostly non-destructive and usually requires only a minimum of sample preparation. However, biological molecules are always susceptible to radiation damage.
In comparison to other structure determination methods, such as solution NMR or X-ray crystallography, SAS allows one to overcome some restrictions. For example, solution NMR is limited by protein size, whereas SAS can be used for small molecules as well as for large multi-molecular assemblies. Solid-state NMR is still an indispensable tool for determining atomic level information of macromolecules greater than 40 kDa or non-crystalline samples such as amyloid fibrils. Structure determination by X-ray crystallography may take several weeks or even years, whereas SAS measurements take days. SAS can also be coupled to other analytical techniques like size-exclusion chromatography to study heterogeneous samples. However, with SAS it is not possible to measure the positions of the atoms within the molecule.
Method
Conceptually, small-angle scattering experiments are simple: the sample is exposed to X-rays or neutrons and the scattered radiation is registered by a detector. As the SAS measurements are performed very close to the primary beam ("small angles"), the technique needs a highly collimated or focused X-ray or neutron beam. The biological small-angle X-ray scattering is often performed at synchrotron radiation sources, because biological molecules normally scatter weakly and the measured solutions are dilute. The biological SAXS method profits from the high intensity of X-ray photon beams provided by the synchrotron storage rings. The X-ray or neutron scattering curve (intensity versus scattering angle) is used to create a low-resolution model of a protein. One can further use the X-ray or neutron scattering data and fit separate domains (X-ray or NMR structures) into the "SAXS envelope".
In a scattering experiment, a solution of macromolecules is exposed to X-rays (with wavelength λ typically around 0.15 nm) or thermal neutrons (λ≈0.5 nm). The scattered intensity I(s) is recorded as a function of momentum transfer s (s=4πsinθ/λ, where 2θ is the angle between the incident and scattered radiation). From the intensity of the solution the scattering from only the solvent is subtracted. The random positions and orientations of particles result in an isotropic intensity distribution which, for monodisperse non-interacting particles, is proportional to the scattering from a single particle averaged over all orientations. The net particle scattering is proportional to the squared difference in scattering length density (electron density for X-rays and nuclear/spin density for neutrons) between particle and solvent – the so-called contrast. The contrast can be varied in neutron scattering using H2O/D2O mixtures or selective deuteration to yield additional information. The information content of SAS data can be illustrated by comparing X-ray scattering patterns from proteins with different folds and molecular masses. At low angles (2-3 nm resolution) the curves are rapidly decaying functions of s essentially determined by the particle shape, which clearly differ. At medium resolution (2 to 0.5 nm) the differences are already less pronounced and above 0.5 nm resolution all curves are very similar. SAS thus contains information about the gross structural features – shape, quaternary and tertiary structure – but is not suitable for the analysis of the atomic structure.
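As a rough illustration of these quantities, a minimal sketch (the wavelength and angle are arbitrary example values, not taken from any particular instrument):

```python
import math

def momentum_transfer(two_theta_deg, wavelength_nm=0.15):
    """Momentum transfer s = 4*pi*sin(theta)/lambda for a scattering angle 2*theta."""
    theta = math.radians(two_theta_deg / 2.0)
    return 4.0 * math.pi * math.sin(theta) / wavelength_nm

# A scattering angle 2*theta of about 1.7 degrees with lambda = 0.15 nm gives
# s of about 1.24 1/nm, i.e. real-space distances d = 2*pi/s of roughly 5 nm.
s = momentum_transfer(two_theta_deg=1.7)
print(round(s, 2), round(2.0 * math.pi / s, 1))
```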
History
First applications date back to the late 1930s when the main principles of SAXS were developed in the fundamental work of Guinier following his studies of metallic alloys. In the first monograph on SAXS by Guinier and Fournet it was already demonstrated that the method yields not only information on the sizes and shapes of particles but also on the internal structure of disordered and partially ordered systems.
In the 1960s, the method became increasingly important in the study of biological macromolecules in solution as it allowed one to get low-resolution structural information on the overall shape and internal structure in the absence of crystals. A breakthrough in SAXS and SANS experiments came in the 1970s, thanks to the availability of synchrotron radiation and neutron sources, the latter paving the way for contrast variation by solvent exchange of H2O for D2O and specific deuteration methods. It was realised that scattering studies on solution provide, at a minimal investment of time and effort, useful insights into the structure of non-crystalline biochemical systems. Moreover, SAXS/SANS also made possible real time investigations of intermolecular interactions, including assembly and large-scale conformational changes in macromolecular assemblies.
The main difficulty of SAS as a structural method is to extract the three-dimensional structural information of the object from the one-dimensional experimental data. In the past, only overall particle parameters (e.g. volume, radius of gyration) of the macromolecules were directly determined from the experimental data, whereas the analysis in terms of three-dimensional models was limited to simple geometrical bodies (e.g. ellipsoids, cylinders, etc.) or was performed on an ad hoc trial-and-error basis. Electron microscopy was often used as a constraint in building consensus models. In the 1980s, progress in other structural methods led to a decline of the interest of biochemists in SAS studies, which drew structural conclusions from just a couple of overall parameters or were based on trial-and-error models.
The 1990s brought a breakthrough in SAXS/SANS data analysis methods, which opened the way for reliable ab initio modelling of macromolecular complexes, including detailed determination of shape and domain structure and application of rigid body refinement techniques. This progress was accompanied by further advances in instrumentation, allowing sub-ms time resolutions to be achieved on third generation SR sources in the studies of protein and nucleic acid folding.
In 2005, a four-year project, the Small-Angle X-ray Scattering Initiative for Europe (SAXIER), was started with the goal of combining SAXS methods with other analytical techniques and creating automated software to rapidly analyse large quantities of data. The project created a unified European SAXS infrastructure, using the most advanced methods available.
Data analysis
In a good quality SAS experiment, several solutions with different concentrations of the macromolecule under investigation are measured. By extrapolating the scattering curves measured at varying concentrations to zero concentration, one is able to get a scattering curve that represents infinite dilution. Then concentration effects should not affect the scattering curve. Data analysis of the extrapolated scattering curve begins with the inspection of the start of the scattering curve in the region around s = 0. If the region follows the Guinier approximation (also known as Guinier law), the sample is not aggregated. Then the shape of the particle in question can be determined by various methods, of which some are described in the following reference.
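For illustration, a minimal sketch of a Guinier analysis (assuming the standard Guinier law I(s) ≈ I(0)·exp(−s²Rg²/3), valid roughly for s·Rg below about 1.3; the function and array names are placeholders, not taken from any particular software package):

```python
import numpy as np

def guinier_fit(s, intensity, s_rg_max=1.3):
    """Estimate I(0) and the radius of gyration Rg from the low-angle part
    of a scattering curve via a linear fit of ln I(s) against s**2."""
    mask = np.ones_like(s, dtype=bool)
    for _ in range(3):
        # Fit, estimate Rg, then restrict the range to s*Rg <= s_rg_max and refit.
        slope, intercept = np.polyfit(s[mask] ** 2, np.log(intensity[mask]), 1)
        rg = np.sqrt(-3.0 * slope)
        mask = s * rg <= s_rg_max
    return np.exp(intercept), rg

# Synthetic example: an ideal Guinier curve for a particle with Rg = 3 nm.
s = np.linspace(0.05, 0.4, 50)
i = 100.0 * np.exp(-(s ** 2) * 3.0 ** 2 / 3.0)
i0, rg = guinier_fit(s, i)
print(round(i0, 1), round(rg, 2))  # ~100.0, ~3.0
```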
Indirect Fourier transform
The first step is usually to compute a Fourier transform of the scattering curve. The transformed curve can be interpreted as the distance distribution function inside a particle. This transformation also has the benefit of regularizing the input data.
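A sketch of the forward relation underlying this step (for an isotropic solution the intensity is related to the pair-distance distribution function p(r) by I(s) = 4π ∫ p(r)·sin(sr)/(sr) dr; real indirect-transform programs such as GNOM solve the regularized inverse problem, which this sketch does not attempt):

```python
import numpy as np

def intensity_from_pr(s_values, r, p_r):
    """Forward transform for an isotropic solution:
    I(s) = 4*pi * integral of p(r) * sin(s*r)/(s*r) dr.
    Recovering p(r) from a measured I(s) is the ill-posed inverse problem
    that indirect Fourier transform programs solve with regularization."""
    dr = r[1] - r[0]                   # assumes a uniform r grid
    out = []
    for s in s_values:
        sinc = np.sinc(s * r / np.pi)  # sin(s*r)/(s*r), with the r = 0 limit handled
        out.append(4.0 * np.pi * np.sum(p_r * sinc) * dr)
    return np.array(out)
```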
Low-resolution models
One problem in SAS data analysis is to get a three-dimensional structure from a one-dimensional scattering pattern. The SAS data does not imply a single solution. Many different proteins, for example, may have the same scattering curve. Reconstruction of the 3D structure might result in a large number of different models. To avoid this problem a number of simplifications need to be considered.
An additional approach is to combine small-angle X-ray and neutron scattering data and to model them with the program MONSA.
Freely available SAS analysis computer programs have been intensively developed at EMBL. In the first general ab initio approach, an angular envelope function of the particle r=F(ω), where (r,ω) are spherical coordinates, is described by a series of spherical harmonics. The low resolution shape is thus defined by a few parameters – the coefficients of this series – which fit the scattering data. The approach was further developed and implemented in the computer program SASHA (Small Angle Scattering Shape Determination). It was demonstrated that under certain circumstances a unique envelope can be extracted from the scattering data. This method is only applicable to globular particles with relatively simple shapes and without significant internal cavities. To overcome these limitations, another approach was developed which uses different types of Monte-Carlo searches. DALAI_GA is an elegant program, which takes a sphere with diameter equal to the maximum particle size Dmax, which is determined from the scattering data, and fills it with beads. Each bead belongs either to the particle (index=1) or to the solvent (index=0). The shape is thus described by a binary string of length M. Starting from a random string, a genetic algorithm searches for a model that fits the data. Compactness and connectivity constraints are imposed in the search, implemented in the program DAMMIN. If the particle symmetry is known, SASHA and DAMMIN can utilise it as useful constraints. The 'give-n-take' procedure SAXS3D and the program SASMODEL, based on interconnected ellipsoids, are ab initio Monte Carlo approaches without limitation in the search space.
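One common way to compute the scattering from a set of beads is the Debye formula; a minimal sketch for identical point-like beads (a generic illustration, not the specific algorithm used by DAMMIN or GASBOR):

```python
import numpy as np

def debye_intensity(s_values, coords):
    """Debye formula for N identical beads at positions coords (an N x 3 array):
    I(s) = sum over i,j of sin(s*r_ij)/(s*r_ij), with the i = j terms equal to 1."""
    diff = coords[:, None, :] - coords[None, :, :]
    r_ij = np.sqrt((diff ** 2).sum(axis=-1))
    # np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc(s*r/pi) = sin(s*r)/(s*r).
    return np.array([np.sinc(s * r_ij / np.pi).sum() for s in s_values])
```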
An approach that uses an ensemble of Dummy Residues (DRs) and simulated annealing to build a locally "chain-compatible" DR-model inside a sphere of diameter Dmax lets one extract more details from SAXS data. This method is implemented in the program GASBOR.
Solution scattering patterns of multi-domain proteins and macromolecular complexes can also be fitted using models built from high resolution (NMR or X-ray) structures of individual domains or subunits assuming that their tertiary structure is preserved. Depending on the complexity of the object, different approaches are employed for the global search of the optimum configuration of subunits fitting the experimental data.
Consensus model
The Monte-Carlo based models contain hundreds or thousands of parameters, and caution is required to avoid overinterpretation. A common approach is to align a set of models resulting from independent shape reconstruction runs to obtain an average model retaining the most persistent (and conceivably also most reliable) features (e.g. using the program SUPCOMB).
Adding missing loops
Disordered surface amino acids ("loops") are frequently unobserved in NMR and crystallographic studies, and may be left missing in the reported models. Such disordered elements contribute to the scattering intensity and their probable locations can be found by fixing the known part of the structure and adding the missing parts to fit the SAS pattern from the entire particle. The Dummy Residue approach was extended and the algorithms for adding missing loops or domains were implemented in the program suite CREDO.
Hybrid methods
Recently a few methods have been proposed that use SAXS data as constraints. The authors aimed to improve results of fold recognition and de novo protein structure prediction methods. SAXS data provide the Fourier transform of the histogram of atomic pair distances (pair distribution function) for a given protein. This can serve as a structural constraint on methods used to determine the native conformational fold of the protein. Threading or fold recognition assumes that 3D structure is more conserved than sequence. Thus, very divergent sequences may have similar structure. Ab initio methods, on the other hand, challenge one of the biggest problems in molecular biology, namely, to predict the folding of a protein "from scratch", using no homologous sequences or structures. Using the "SAXS filter", the authors were able to purify the set of de novo protein models significantly. This was further proved by structure homology searches. It was also shown that the combination of SAXS scores with scores used in threading methods significantly improves the performance of fold recognition. On one example it was demonstrated how the approximate tertiary structure of modular proteins can be assembled from high resolution NMR structures of domains, using SAXS data, confining the translational degrees of freedom. Another example shows how the SAXS data can be combined together with NMR, X-ray crystallography and electron microscopy to reconstruct the quaternary structure of a multidomain protein.
Flexible systems
An elegant method to tackle the problem of intrinsically disordered or multi-domain proteins with flexible linkers was proposed recently. It allows coexistence of different conformations of a protein, which together contribute to the average experimental scattering pattern. Initially, EOM (ensemble optimization method) generates a pool of models covering the protein configuration space. The scattering curve is then calculated for each model. In the second step, the program selects subsets of protein models. Average scattering is calculated for each subset and fitted to the experimental SAXS data. If the best fit is not found, models are reshuffled between different subsets and a new average scattering calculation and fitting to the experimental data is performed. This method has been tested on two proteins: denatured lysozyme and Bruton's protein kinase. It gave some interesting and promising results.
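A highly simplified sketch of the selection step (a greedy search over precomputed per-model curves, standing in for the genetic algorithm EOM actually uses; chi-square serves as the fit score, and all names are placeholders):

```python
import numpy as np

def select_ensemble(model_curves, experimental, errors, ensemble_size=3):
    """Greedily pick models whose averaged curve best fits the data.
    model_curves: (n_models, n_points); experimental and errors: (n_points,)."""
    chosen, best = [], None
    for _ in range(ensemble_size):
        best = None
        for k in range(len(model_curves)):
            # Score the candidate ensemble formed by adding model k.
            avg = model_curves[chosen + [k]].mean(axis=0)
            chi2 = float((((avg - experimental) / errors) ** 2).mean())
            if best is None or chi2 < best[0]:
                best = (chi2, k)
        chosen.append(best[1])
    return chosen, best[0]
```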
Biological molecule layers and GISAS
Coatings of biomolecules can be studied with grazing-incidence X-ray and neutron scattering. IsGISAXS (grazing incidence small angle X-ray scattering) is a software program dedicated to the simulation and analysis of GISAXS from nanostructures. IsGISAXS only encompasses the scattering by nanometre-sized particles which are buried in a matrix subsurface, supported on a substrate, or buried in a thin layer on a substrate. The case of holes is also handled. The geometry is restricted to a plane of particles. The scattering cross section is decomposed in terms of interference function and particle form factor. The emphasis is put on the grazing incidence geometry which induces a "beam refraction effect". The particle form factor is calculated within the distorted wave Born approximation (DWBA), starting from an unperturbed state with sharp interfaces or with the actual perpendicular profile of refraction index. Various kinds of simple geometrical shapes are available with a full account of size and shape distributions in the Decoupling Approximation (DA), in the local monodisperse approximation (LMA) and also in the size-spacing correlation approximation (SSCA). Both disordered systems of particles, defined by their particle–particle pair correlation function, and two-dimensional crystals or paracrystals are considered.
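As an example of such a form factor, the homogeneous sphere has a well-known closed form; a minimal sketch (a generic small-angle scattering expression, not IsGISAXS's DWBA calculation):

```python
import numpy as np

def sphere_form_factor(q, radius):
    """Normalized form-factor amplitude of a homogeneous sphere:
    F(q) = 3 * (sin(qR) - qR*cos(qR)) / (qR)**3, with F(0) = 1."""
    qr = np.asarray(q, dtype=float) * radius
    qr = np.where(qr == 0.0, 1e-12, qr)   # avoid 0/0 at q = 0
    return 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr ** 3

# The intensity from a dilute, monodisperse sphere solution is proportional to F(q)**2.
q = np.linspace(0.01, 3.0, 300)
intensity = sphere_form_factor(q, radius=2.0) ** 2
```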
See also
Anton Paar
Bruker
Electron microscopy
Fluctuation X-ray scattering (FXS)
Grazing-incidence small-angle X-ray scattering (GISAXS)
Homology modeling
Neutron spin echo
Protein Data Bank
Protein dynamics
Protein folding
Protein threading
Rigaku
Rosetta@home
X-ray crystallography
References
Further reading
External links
SAXS/WAXS Beamline Australian Synchrotron, Melbourne, Australia
SIBYLS – beamline at Advanced Light Source, Berkeley, USA
SAXS – beamline at ELETTRA Synchrotron Light Laboratory, Trieste, Italy
X33 – beamline at DESY, Hamburg, Germany
D11A – beamline at Brazilian Synchrotron Light Laboratory, Campinas, Brazil
X21 and X9 – beamlines at National Synchrotron Light Source at Brookhaven National Laboratory, Upton, USA
F2 and G1 – beamlines at Cornell Laboratory for Accelerator-based Sciences and Education, Ithaca, USA
Bio-SANS – beamline at High Flux Isotope Reactor at Oak Ridge National Laboratory, Oak Ridge, TN, USA
X-rays
Small-angle scattering
Polymer physics | Biological small-angle scattering | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,532 | [
"Polymer physics",
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Polymer chemistry"
] |
910,234 | https://en.wikipedia.org/wiki/Cut-elimination%20theorem | The cut-elimination theorem (or Gentzen's Hauptsatz) is the central result establishing the significance of the sequent calculus. It was originally proved by Gerhard Gentzen in part I of his landmark 1935 paper "Investigations in Logical Deduction" for the systems LJ and LK formalising intuitionistic and classical logic respectively. The cut-elimination theorem states that any judgement that possesses a proof in the sequent calculus making use of the cut rule also possesses a cut-free proof, that is, a proof that does not make use of the cut rule.
The cut rule
A sequent is a logical expression relating multiple formulas, in the form Γ ⊢ Δ, where Γ and Δ are finite sequences of formulas. It is to be read as "If all of the formulas in Γ hold, then at least one of the formulas in Δ must hold", or (as Gentzen glossed): "If (A1 and A2 and …) then (B1 or B2 or …)." Note that the left-hand side (LHS) is a conjunction (and) and the right-hand side (RHS) is a disjunction (or).
The LHS may have arbitrarily many or few formulae; when the LHS is empty, the RHS is a tautology. In LK, the RHS may also have any number of formulae—if it has none, the LHS is a contradiction, whereas in LJ the RHS may only have one formula or none: here we see that allowing more than one formula in the RHS is equivalent, in the presence of the right contraction rule, to the admissibility of the law of the excluded middle. However, the sequent calculus is a fairly expressive framework, and there have been sequent calculi for intuitionistic logic proposed that allow many formulae in the RHS. From Jean-Yves Girard's logic LC it is easy to obtain a rather natural formalisation of classical logic where the RHS contains at most one formula; it is the interplay of the logical and structural rules that is the key here.
"Cut" is a rule of inference in the normal statement of the sequent calculus, and equivalent to a variety of rules in other proof theories, which, given
and
allows one to infer
That is, it "cuts" the occurrences of the formula out of the inferential relation.
Cut elimination
The cut-elimination theorem states that (for a given system) any sequent provable using the rule Cut can be proved without use of this rule.
For sequent calculi that have only one formula in the RHS, the "Cut" rule reads, given
Γ ⊢ A
and
A, Π ⊢ B
allows one to infer
Γ, Π ⊢ B
If we think of B as a theorem, then cut-elimination in this case simply says that a lemma A used to prove this theorem can be inlined. Whenever the theorem's proof mentions lemma A, we can substitute the occurrences with the proof of A. Consequently, the cut rule is admissible.
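As a toy illustration (an illustrative example, not one taken from Gentzen's paper), here is a derivation of A ∧ B ⊢ A ∨ C that uses Cut on the lemma A ∧ B ⊢ A, followed by the cut-free derivation obtained by inlining that lemma:

```latex
% Requires amsmath. With Cut: prove the lemma and the step that uses it, then cut on A.
\[
\dfrac{\dfrac{A \vdash A}{A \wedge B \vdash A}\;(\wedge L)
       \qquad
       \dfrac{A \vdash A}{A \vdash A \vee C}\;(\vee R)}
      {A \wedge B \vdash A \vee C}\;(\text{Cut on } A)
\]
% Cut-free: the same end-sequent derived directly, with the lemma inlined.
\[
\dfrac{\dfrac{A \vdash A}{A \vdash A \vee C}\;(\vee R)}
      {A \wedge B \vdash A \vee C}\;(\wedge L)
\]
```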
Consequences of the theorem
For systems formulated in the sequent calculus, analytic proofs are those proofs that do not use Cut. Typically such a proof will be longer, of course, and not necessarily trivially so. In his essay "Don't Eliminate Cut!" George Boolos demonstrated that there was a derivation that could be completed in a page using cut, but whose analytic proof could not be completed in the lifespan of the universe.
The theorem has many, rich consequences:
A system is inconsistent if it admits a proof of the absurd. If the system has a cut elimination theorem, then if it has a proof of the absurd, or of the empty sequent, it should also have a proof of the absurd (or the empty sequent), without cuts. It is typically very easy to check that there are no such proofs. Thus, once a system is shown to have a cut elimination theorem, it is normally immediate that the system is consistent.
Normally also the system has, at least in first-order logic, the subformula property, an important property in several approaches to proof-theoretic semantics.
Cut elimination is one of the most powerful tools for proving interpolation theorems. The possibility of carrying out proof search based on resolution, the essential insight leading to the Prolog programming language, depends upon the admissibility of Cut in the appropriate system.
For proof systems based on higher-order typed lambda calculus through a Curry–Howard isomorphism, cut elimination algorithms correspond to the strong normalization property (every proof term reduces in a finite number of steps into a normal form).
See also
Deduction theorem
Gentzen's consistency proof for Peano's axioms
Notes
References
External links
Theorems in the foundations of mathematics
Proof theory | Cut-elimination theorem | [
"Mathematics"
] | 959 | [
"Mathematical theorems",
"Foundations of mathematics",
"Proof theory",
"Mathematical logic",
"Mathematical problems",
"Theorems in the foundations of mathematics"
] |
910,263 | https://en.wikipedia.org/wiki/Hawaiian%20earring | In mathematics, the Hawaiian earring is the topological space defined by the union of circles in the Euclidean plane with center (1/n, 0) and radius 1/n for n = 1, 2, 3, …, endowed with the subspace topology.
The space is homeomorphic to the one-point compactification of the union of a countable family of disjoint open intervals.
The Hawaiian earring is a one-dimensional, compact, locally path-connected metrizable space. Although is locally homeomorphic to at all non-origin points, is not semi-locally simply connected at . Therefore, does not have a simply connected covering space and is usually given as the simplest example of a space with this complication.
The Hawaiian earring looks very similar to the wedge sum of countably infinitely many circles; that is, the rose with infinitely many petals, but these two spaces are not homeomorphic. The difference between their topologies is seen in the fact that, in the Hawaiian earring, every open neighborhood of the point of intersection of the circles contains all but finitely many of the circles (an -ball around contains every circle whose radius is less than ); in the rose, a neighborhood of the intersection point might not fully contain any of the circles. Additionally, the rose is not compact: the complement of the distinguished point is an infinite union of open intervals; to those add a small open neighborhood of the distinguished point to get an open cover with no finite subcover.
Fundamental group
The Hawaiian earring is neither simply connected nor semilocally simply connected since, for all the loop parameterizing the th circle is not homotopic to a trivial loop. Thus, has a nontrivial fundamental group sometimes referred to as the Hawaiian earring group. The Hawaiian earring group is uncountable, and it is not a free group. However, is locally free in the sense that every finitely generated subgroup of is free.
The homotopy classes of the individual loops generate the free group on a countably infinite number of generators, which forms a proper subgroup of . The uncountably many other elements of arise from loops whose image is not contained in finitely many of the Hawaiian earring's circles; in fact, some of them are surjective. For example, the path that on the interval circumnavigates the th circle. More generally, one may form infinite products of the loops indexed over any countable linear order provided that for each , the loop and its inverse appear within the product only finitely many times.
It is a result of John Morgan and Ian Morrison that embeds into the inverse limit of the free groups with generators, , where the bonding map from to simply kills the last generator of . However, is a proper subgroup of the inverse limit since each loop in may traverse each circle of only finitely many times. An example of an element of the inverse limit that does not correspond an element of is an infinite product of commutators , which appears formally as the sequence in the inverse limit .
First singular homology
Katsuya Eda and Kazuhiro Kawamura proved that the abelianisation of and therefore the first singular homology group is isomorphic to the group
The first summand is the direct product of infinitely many copies of the infinite cyclic group (the Baer–Specker group). This factor represents the singular homology classes of loops that do not have winding number around every circle of and is precisely the first Cech Singular homology group . Additionally, may be considered as the infinite abelianization of , since every element in the kernel of the natural homomorphism is represented by an infinite product of commutators. The second summand of consists of homology classes represented by loops whose winding number around every circle of is zero, i.e. the kernel of the natural homomorphism . The existence of the isomorphism with is proven abstractly using infinite abelian group theory and does not have a geometric interpretation.
Higher dimensions
It is known that is an aspherical space, i.e. all higher homotopy and homology groups of are trivial.
The Hawaiian earring can be generalized to higher dimensions. Such a generalization was used by Michael Barratt and John Milnor to provide examples of compact, finite-dimensional spaces with nontrivial singular homology groups in dimensions larger than that of the space. The -dimensional Hawaiian earring is defined as
Hence, is a countable union of -spheres which have one single point in common, and the topology is given by a metric in which the sphere's diameters converge to zero as Alternatively, may be constructed as the Alexandrov compactification of a countable union of disjoint s. Recursively, one has that consists of a convergent sequence, is the original Hawaiian earring, and is homeomorphic to the reduced suspension .
For , the -dimensional Hawaiian earring is a compact, -connected and locally -connected. For , it is known that is isomorphic to the Baer–Specker group
For and Barratt and Milnor showed that the singular homology group is a nontrivial uncountable group for each such .
See also
List of topologies
References
Further reading
.
.
.
.
.
.
Topological spaces | Hawaiian earring | [
"Mathematics"
] | 1,066 | [
"Topological spaces",
"Mathematical structures",
"Topology",
"Space (mathematics)"
] |
910,281 | https://en.wikipedia.org/wiki/Variscite | Variscite is a hydrated aluminium phosphate mineral (). It is a relatively rare phosphate mineral. It is sometimes confused with turquoise; however, variscite is usually greener in color. The green color results from the presence of small amounts of trivalent chromium ().
Geology
Variscite is a secondary mineral formed by direct deposition from phosphate-bearing water which has reacted with aluminium-rich rocks in a near-surface environment. It occurs as fine-grained masses in nodules, cavity fillings, and crusts. Variscite often contains white veins of the calcium aluminium phosphate mineral crandallite.
It was first described in 1837 and named for the locality of Variscia, the historical name of the Vogtland, in Germany. At one time, variscite was called Utahlite. At times, materials which may be turquoise or may be variscite have been marketed as "variquoise". Appreciation of the color ranges typically found in variscite has made it a popular gem in recent years.
Variscite from Nevada typically contains black spiderwebbing in the matrix and is often confused with green turquoise. Most of the Nevada variscite recovered in recent decades has come from mines located in Lander County and Esmeralda County, specifically in the Candelaria Hills.
Notable localities are Lucin, Snowville, and Fairfield in Utah, United States. Most recently found in Wyoming as well. It is also found in Germany, Australia, Poland, Spain, Italy (Sardinia), and Brazil.
Jewelry
Variscite has been used in Europe to make personal ornaments, especially beads, since Neolithic times. Its use continued during the Bronze Age and in Roman times although it was not until the 19th century that it was determined that all variscite used in Europe came from three sites in Spain, Gavá (Barcelona), Palazuelo de las Cuevas (Zamora), and Encinasola (Huelva).
Variscite is sometimes used as a semi-precious stone. It is popular for carvings and ornamental use due to its beautiful and intense green color, and is commonly used in silversmithing in place of turquoise. Variscite is rarer than turquoise, but because it is not as commonly available as turquoise or as well known to the general public, raw variscite tends to be less expensive than turquoise.
Gallery
See also
(same etymology, as named from the ancient locality of Variscia in Germany)
List of minerals
References
Aluminium minerals
Phosphate minerals
Orthorhombic minerals
Minerals in space group 61
Luminescent minerals
Gemstones
Dihydrate minerals
Minerals described in 1837 | Variscite | [
"Physics",
"Chemistry"
] | 546 | [
"Luminescence",
"Luminescent minerals",
"Materials",
"Gemstones",
"Matter"
] |
910,505 | https://en.wikipedia.org/wiki/Proof-theoretic%20semantics | Proof-theoretic semantics is an approach to the semantics of logic that attempts to locate the meaning of propositions and logical connectives not in terms of interpretations, as in Tarskian approaches to semantics, but in the role that the proposition or logical connective plays within a system of inference.
Overview
Gerhard Gentzen is the founder of proof-theoretic semantics, providing the formal basis for it in his account of cut-elimination for the sequent calculus, and some provocative philosophical remarks about locating the meaning of logical connectives in their introduction rules within natural deduction. The history of proof-theoretic semantics since then has been devoted to exploring the consequences of these ideas.
Dag Prawitz extended Gentzen's notion of analytic proof to natural deduction, and suggested that the value of a proof in natural deduction may be understood as its normal form. This idea lies at the basis of the Curry–Howard isomorphism, and of intuitionistic type theory. His inversion principle lies at the heart of most modern accounts of proof-theoretic semantics.
Michael Dummett introduced the very fundamental idea of logical harmony, building on a suggestion of Nuel Belnap. In brief, a language, which is understood to be associated with certain patterns of inference, has logical harmony if it is always possible to recover analytic proofs from arbitrary demonstrations, as can be shown for the sequent calculus by means of cut-elimination theorems and for natural deduction by means of normalisation theorems. A language that lacks logical harmony will suffer from the existence of incoherent forms of inference: it will likely be inconsistent.
See also
Inferential role semantics
Truth-conditional semantics
References
Proof-Theoretic Semantics, at the Stanford Encyclopedia of Philosophy
Logical Consequence, Deductive-Theoretic Conceptions, at the Internet Encyclopedia of Philosophy.
Nissim Francez, "On a Distinction of Two Facets of Meaning and its Role in Proof-theoretic Semantics", Logica Universalis 9, 2015.
Thomas Piecha, Peter Schroeder-Heister (eds), "Advances in Proof-Theoretic Semantics", Trends in Logic 43, Springer, 2016.
External links
Arché Bibliography on Proof-Theoretic Semantics.
Proof-Theoretic Semantics Network
Mathematical logic
Philosophical logic
Proof theory
Semantics | Proof-theoretic semantics | [
"Mathematics"
] | 479 | [
"Mathematical logic",
"Proof theory"
] |
34,437,234 | https://en.wikipedia.org/wiki/Energy%20Fair | Energy Fair is a United Kingdom group of six people leading a campaign that claims that the nuclear power industry receives unfair subsidies. The group consists of:
Dörte Fouquet, senior partner of the law firm Becker Büttner Held (BBH) and Director of the European Renewable Energies Federation.
Antony Froggatt, energy policy consultant, senior research fellow at Chatham House.
David Lowry, research policy consultant, specialising in nuclear issues.
Pete Roche, energy consultant, policy adviser to the Scottish Nuclear Free Local Authorities, and the National Steering Committee of United Kingdom Nuclear Free Local Authorities.
Stephen Thomas, energy policy researcher, University of Greenwich Business School.
Gerry Wolff, coordinator, Desertec-UK and the Kyoto2 Support Group.
In February 2011 and January 2012, the group, supported by other organisations and environmentalists, lodged formal complaints with the European Union's Directorate General for Competition, alleging that the Government was providing unlawful State aid in the form of subsidies for the nuclear power industry, in breach of European Union competition law.
One of the largest subsidies is the cap on liabilities for nuclear accidents which the nuclear power industry has negotiated with governments. “Like car drivers, the operators of nuclear plants should be properly insured,” said Gerry Wolff, coordinator of the Energy Fair group. The group calculates that, "if nuclear operators were fully insured against the cost of nuclear disasters like those at Chernobyl and Fukushima, the price of nuclear electricity would rise by at least €0.14 per kWh and perhaps as much as €2.36, depending on assumptions made".
See also
The World Nuclear Industry Status Report
Nuclear power in the United Kingdom
Anti-nuclear movement in the United Kingdom
Energy subsidies
Nuclear or Not?
References
External links
The nuclear industry's secret subsidies
Political advocacy groups in the United Kingdom
Anti-nuclear organizations
Anti-nuclear movement in the United Kingdom | Energy Fair | [
"Engineering"
] | 377 | [
"Nuclear organizations",
"Anti-nuclear organizations"
] |
34,439,024 | https://en.wikipedia.org/wiki/California%20Building%20Standards%20Code | The California Building Standards Code is the building code for California, and Title 24 of the California Code of Regulations (CCR). It is maintained by the California Building Standards Commission which is granted the authority to oversee processes related to the California building codes by California Building Standards Law. Code amendments are proposed by the California Department of Housing and Community Development. The California building codes under Title 24 are established based on several criteria: standards adopted by states based on national model codes, national model codes adapted to meet California conditions, and standards passed by the California legislature that address concerns specific to California.
Title 24 of the California Code of Regulations consists of 13 parts:
Part 1-California Administrative Code
Part 2-California Building Code
Part 2.5-California Residential Code
Part 3-California Electrical Code
Part 4-California Mechanical Code
Part 5-California Plumbing Code
Part 6-California Energy Code (this section is commonly known as “Title 24” in the construction trade)
Part 7-Reserved
Part 8-California Historical Building Code
Part 9-California Fire Code
Part 10-California Existing Building Code
Part 11-California Green Building Standards Code (also referred to as CALGreen)
Part 12-California Referenced Standards Code
Portions of editions of the California building codes are published by the International Code Council (ICC), National Fire Protection Association (NFPA), International Association of Plumbing and Mechanical Officials (IAPMO), and BNi Building News. As they are, in effect, amended versions of copyright works such as the International Building Code (IBC) maintained by the International Code Council (ICC), the regulations have substantial portions under copyright, and hence may be withheld from the public or individuals, but still have the force of law. In 2008, Carl Malamud published the California Building Standards Code on Public.Resource.Org for free.
Code adoption cycle
New editions of the California Building Standards Code are published every three years in a triennial cycle with supplemental information published during other years. Publication of triennial editions of the CCR began in 1989. The most recent version of the code was the 2019 edition published January 1, 2020. Changes made to each edition are based on proposals made by state agencies. Proposals are presented to the California Building Standards Commission and must provide thorough justification for proposed changes. Proposals go through multiple phases during the adoption cycle.
References
External links
California Building Standards Code
Building codes
Building Standards Code
Government of California
Standards of the United States | California Building Standards Code | [
"Engineering"
] | 490 | [
"Building engineering",
"Building codes"
] |
34,439,226 | https://en.wikipedia.org/wiki/Jacobi%20ellipsoid | A Jacobi ellipsoid is a triaxial (i.e. scalene) ellipsoid under hydrostatic equilibrium which arises when a self-gravitating, fluid body of uniform density rotates with a constant angular velocity. It is named after the German mathematician Carl Gustav Jacob Jacobi.
History
Before Jacobi, the Maclaurin spheroid, which was formulated in 1742, was considered to be the only type of ellipsoid which can be in equilibrium. Lagrange in 1811 considered the possibility of a tri-axial ellipsoid being in equilibrium, but concluded that the two equatorial axes of the ellipsoid must be equal, leading back to the solution of the Maclaurin spheroid. But Jacobi realized that Lagrange's demonstration gave only a sufficient condition, not a necessary one. He remarked:
"One would make a grave mistake if one supposed that the spheroids of revolution are the only admissible figures of equilibrium even under the restrictive assumption of second-degree surfaces" (...) "In fact a simple consideration shows that ellipsoids with three unequal axes can very well be figures of equilibrium; and that one can assume an ellipse of arbitrary shape for the equatorial section and determine the third axis (which is also the least of the three axes) and the angular velocity of rotation such that the ellipsoid is a figure of equilibrium."
Jacobi formula
For an ellipsoid with equatorial semi-principal axes $a$, $b$ and polar semi-principal axis $c$, the angular velocity $\Omega$ about the polar axis is given by
$$\Omega^2 = 2\pi G\rho\, abc \int_0^\infty \frac{u\,du}{(a^2+u)(b^2+u)\,\Delta(u)}, \qquad \Delta(u) = \sqrt{(a^2+u)(b^2+u)(c^2+u)},$$
where $\rho$ is the density and $G$ is the gravitational constant, subject to the condition
$$a^2 b^2 \int_0^\infty \frac{du}{(a^2+u)(b^2+u)\,\Delta(u)} = c^2 \int_0^\infty \frac{du}{(c^2+u)\,\Delta(u)}.$$
For fixed values of $a$ and $b$, the above condition has a solution for $c$ such that
$$\frac{1}{c^2} > \frac{1}{a^2} + \frac{1}{b^2}.$$
The integrals can be expressed in terms of incomplete elliptic integrals. In terms of the Carlson symmetric form elliptic integral $R_J$, the formula for the angular velocity becomes
$$\Omega^2 = \frac{4\pi G\rho\, abc}{3(b^2-a^2)}\left[b^2 R_J(a^2,b^2,c^2,b^2) - a^2 R_J(a^2,b^2,c^2,a^2)\right]$$
and the condition on the relative size of the semi-principal axes is
$$\frac{a^2 b^2}{b^2-a^2}\left[R_J(a^2,b^2,c^2,a^2) - R_J(a^2,b^2,c^2,b^2)\right] = c^2 R_J(a^2,b^2,c^2,c^2).$$
The angular momentum $L$ of the Jacobi ellipsoid about its rotation axis is
$$L = \tfrac{1}{5} M \left(a^2 + b^2\right) \Omega,$$
where $M$ is the mass of the ellipsoid; it is often quoted in units of $\sqrt{GM^3\bar{r}}$, where $\bar{r}$ is the mean radius, the radius of a sphere of the same volume as the ellipsoid.
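The condition and formula above lend themselves to a quick numerical check. The following is a minimal Python sketch (not part of the article; function names, the sample axis ratio and the density are illustrative) that, assuming the integral forms quoted above, solves the condition for the polar semi-axis c with SciPy and then evaluates the Jacobi formula for the angular velocity. Because the combination abc times the integral is invariant under rescaling all lengths, only the axis ratio matters and the result is in SI units for a density in kg/m3.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

G = 6.674e-11  # gravitational constant, SI units

def delta(u, a, b, c):
    return np.sqrt((a**2 + u) * (b**2 + u) * (c**2 + u))

def jacobi_condition(c, a, b):
    """a^2 b^2 * I_ab - c^2 * I_c; this vanishes for a Jacobi ellipsoid."""
    i_ab, _ = quad(lambda u: 1 / ((a**2 + u) * (b**2 + u) * delta(u, a, b, c)), 0, np.inf)
    i_c, _ = quad(lambda u: 1 / ((c**2 + u) * delta(u, a, b, c)), 0, np.inf)
    return a**2 * b**2 * i_ab - c**2 * i_c

def polar_axis(a, b):
    """Solve the Jacobi condition for c; the root satisfies 1/c^2 > 1/a^2 + 1/b^2."""
    c_max = a * b / np.sqrt(a**2 + b**2)
    return brentq(jacobi_condition, 1e-3 * c_max, 0.999 * c_max, args=(a, b))

def angular_velocity(a, b, c, rho):
    """Omega from the Jacobi formula for a body of uniform density rho."""
    integral, _ = quad(lambda u: u / ((a**2 + u) * (b**2 + u) * delta(u, a, b, c)), 0, np.inf)
    return np.sqrt(2 * np.pi * G * rho * a * b * c * integral)

a, b, rho = 2.0, 1.0, 3000.0      # illustrative axis ratio and density (kg/m^3)
c = polar_axis(a, b)
print(c, angular_velocity(a, b, c, rho))   # polar axis (same units as a, b) and Omega in rad/s
```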
Relationship with Dedekind ellipsoid
The Jacobi and Dedekind ellipsoids are both equilibrium figures for a body of rotating homogeneous self-gravitating fluid. However, while the Jacobi ellipsoid spins bodily, with no internal flow of the fluid in the rotating frame, the Dedekind ellipsoid maintains a fixed orientation, with the constituent fluid circulating within it. This is a direct consequence of Dedekind's theorem.
For any given Jacobi ellipsoid, there exists a Dedekind ellipsoid with the same semi-principal axes and same mass and with a flow velocity field of
$$\mathbf{v} = \frac{\zeta}{a^2+b^2}\left(-a^2 y,\; b^2 x,\; 0\right),$$
where $(x, y, z)$ are Cartesian coordinates on axes aligned respectively with the $a$, $b$, $c$ axes of the ellipsoid. Here $\zeta$ is the vorticity, which is uniform throughout the spheroid ($\nabla\times\mathbf{v} = \zeta\,\hat{\mathbf{z}}$). The angular velocity $\Omega$ of the Jacobi ellipsoid and the vorticity $\zeta$ of the corresponding Dedekind ellipsoid are related by
$$\zeta = \frac{a^2+b^2}{ab}\,\Omega.$$
That is, each particle of the fluid of the Dedekind ellipsoid describes a similar elliptical circuit in the same period in which the Jacobi spheroid performs one rotation.
In the special case of $a = b$, the Jacobi and Dedekind ellipsoids (and the Maclaurin spheroid) become one and the same; bodily rotation and circular flow amount to the same thing. In this case $\zeta = 2\Omega$, as is always true for a rigidly rotating body.
In the general case, the Jacobi and Dedekind ellipsoids have the same energy, but the angular momentum of the Jacobi spheroid is the greater by a factor of
$$\frac{a^2+b^2}{2ab}.$$
See also
Maclaurin spheroid
Riemann ellipsoid
Roche ellipsoid
Dirichlet's ellipsoidal problem
Spheroid
Ellipsoid
References
Quadrics
Astrophysics
Fluid dynamics
Equations of astronomy | Jacobi ellipsoid | [
"Physics",
"Chemistry",
"Astronomy",
"Engineering"
] | 828 | [
"Concepts in astronomy",
"Chemical engineering",
"Astrophysics",
"Equations of astronomy",
"Piping",
"Astronomical sub-disciplines",
"Fluid dynamics"
] |
549,322 | https://en.wikipedia.org/wiki/Spatial%20light%20modulator | A spatial light modulator (SLM) is a device that can control the intensity, phase, or polarization of light in a spatially varying manner. A simple example is an overhead projector transparency. Usually when the term SLM is used, it means that the transparency can be controlled by a computer.
SLMs are primarily marketed for image projection, display devices, and maskless lithography. SLMs are also used in optical computing and holographic optical tweezers.
Usually, an SLM modulates the intensity of the light beam. However, it is also possible to produce devices that modulate the phase of the beam or both the intensity and the phase simultaneously. It is also possible to produce devices that modulate the polarization of the beam, and modulate the polarization, phase, and intensity simultaneously.
SLMs are used extensively in holographic data storage setups to encode information into a laser beam similarly to the way a transparency does for an overhead projector. They can also be used as part of a holographic display technology.
In the 1980s, large SLMs were placed on overhead projectors to project computer monitor contents to the screen. Since then, more modern projectors have been developed where the SLM is built inside the projector. These are commonly used in meetings for presentations.
Liquid crystal SLMs can help solve problems related to laser microparticle manipulation. In this case spiral beam parameters can be changed dynamically.
Electrically-addressed spatial light modulator (EASLM)
As its name implies, the image on an electrically addressed spatial light modulator is created and changed electronically, as in most electronic displays. EASLMs usually receive input via a conventional interface such as VGA or DVI input. They are available at resolutions up to QXGA (2048 × 1536). Unlike ordinary displays, they are usually much smaller (having an active area of about 2 cm²) as they are not normally meant to be viewed directly. An example of an EASLM is the digital micromirror device (DMD) at the heart of DLP displays or LCoS Displays using ferroelectric liquid crystals (FLCoS) or nematic liquid crystals (electrically controlled birefringence effect).
Spatial light modulators can be either reflective or transmissive depending on their design and purpose.
DMDs, short for digital micromirror devices, are spatial light modulators that specifically work with binary amplitude-only modulation. Each pixel on the SLM can only be in one of two states: "on" or "off". The main purpose of the SLM is to control and adjust the amplitude of the light.
Phase modulation can be achieved with a DMD by Lee holography techniques or by the superpixel method.
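As an illustration of binary amplitude encoding of phase, the following Python sketch (not from the article) generates a binary DMD pattern for a target phase map using one common form of Lee's method, in which a carrier grating is thresholded after being locally shifted by the desired phase; the carrier period and duty cycle are illustrative parameters, and real systems filter out all but one diffraction order of the displayed pattern.

```python
import numpy as np

def lee_hologram(phase, carrier_period_px=8, duty=0.5):
    """Binary amplitude pattern encoding `phase` (radians) on a DMD.

    A linear carrier grating along x is shifted locally by the desired phase and
    then thresholded; the first diffraction order of the pattern carries exp(i*phase).
    """
    h, w = phase.shape
    x = np.arange(w)[None, :]                      # column index, broadcast over rows
    carrier = 2 * np.pi * x / carrier_period_px    # linear carrier along x
    # 1 where the shifted fringe is "on"; duty controls the fraction of "on" mirrors
    return (np.cos(carrier + phase) > np.cos(np.pi * duty)).astype(np.uint8)

# example: encode a blazed (linear) phase ramp on a 768 x 1024 DMD frame
phase = np.tile(np.linspace(0, 4 * np.pi, 1024), (768, 1))
pattern = lee_hologram(phase)
```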
Optically-addressed spatial light modulator (OASLM)
The image on an optically addressed spatial light modulator, also known as a light valve, is created and changed by shining light encoded with an image on its front or back surface. A photosensor allows the OASLM to sense the brightness of each pixel and replicate the image using liquid crystals. As long as the OASLM is powered, the image is retained even after the light is extinguished. An electrical signal is used to clear the whole OASLM at once.
They are often used as the second stage of a very-high-resolution display, such as one for a computer-generated holographic display. In a process called active tiling, images displayed on an EASLM are sequentially transferred to different parts on an OASLM, before the whole image on the OASLM is presented to the viewer. As EASLMs can run as fast as 2500 frames per second, it is possible to tile around 100 copies of the image on the EASLM onto an OASLM while still displaying full-motion video on the OASLM. This potentially gives images with resolutions of above 100 megapixels.
Application in ultrafast pulse measuring and shaping
Multiphoton intrapulse interference phase scan (MIIPS) is a technique based on the computer-controlled phase scan of a linear-array spatial light modulator. By applying a phase scan to an ultrashort pulse, MIIPS can not only characterize but also manipulate the pulse to obtain the needed pulse shape at the target spot (such as a transform-limited pulse for optimized peak power, or other specific pulse shapes). The technique offers full calibration and control of the ultrashort pulse, with no moving parts and a simple optical setup. Linear-array SLMs that use nematic liquid crystal elements are available that can modulate amplitude, phase, or both simultaneously.
See also
Active filters in femtosecond pulse shaping
Photoelastic modulator
Waveplate
References
Larry J. Hornbeck (TI), Digital Light Processing for High-Brightness, High-Resolution Applications, 21st century Archives
Coomber, Stuart D.; Cameron, Colin D.; Hughes, Jonathon R.; Sheerin, David T.; Slinger, Christopher W.; Smith, Mark A.; Stanley, Maurice (QinetiQ), "Optically addressed spatial light modulators for replaying computer-generated holograms", Proc. SPIE Vol. '4457', p. 9-19 (2001)
Liquid Crystal Optically Addressed Spatial Light Modulator,
Slinger, C.; Cameron, C.; Stanley, M.; "Computer-Generated Holography as a Generic Display Technology", IEEE Computer, Volume 38, Issue 8, Aug. 2005, pp 46–53
External links
How to Shape Light with Spatial Light Modulators
SLM ToolBox A free Windows application for controlling phase-only spatial light modulators.
Phase calibration of a Spatial Light Modulator
Optical components
Display technology
Optical devices | Spatial light modulator | [
"Materials_science",
"Technology",
"Engineering"
] | 1,241 | [
"Glass engineering and science",
"Optical components",
"Optical devices",
"Electronic engineering",
"Display technology",
"Components"
] |
549,742 | https://en.wikipedia.org/wiki/Verdet%20constant | The Verdet constant is an optical property named after the French physicist Émile Verdet. It describes the strength of the Faraday effect for a particular material. For a constant magnetic field parallel to the path of the light, it can be calculated as
where is the angle between the starting and ending polarizations, is the Verdet constant, is the strength of the magnetic flux density, and is the path length in the material.
The Verdet constant of a material is wavelength-dependent and for most materials is extremely small. It is strongest in substances containing paramagnetic ions such as terbium. The highest Verdet constants in bulk media are found in terbium-doped dense flint glasses or in crystals of terbium gallium garnet (TGG). These materials have excellent transparency properties and high damage thresholds for laser radiation. Atomic vapours, however, can have Verdet constants which are orders of magnitude larger than TGG, but only over a very narrow wavelength range. Alkali vapours can therefore be used in optical isolators or as extremely sensitive magnetometers.
The Faraday effect is chromatic (i.e. it depends on wavelength), and therefore the Verdet constant is quite a strong function of wavelength. The Verdet constant of TGG is reported to be substantially larger at 632.8 nm than at 1064 nm. This behavior means that devices manufactured with a certain degree of rotation at one wavelength will produce much less rotation at longer wavelengths. Many Faraday rotators and isolators are adjustable by varying the degree to which the active TGG rod is inserted into the magnetic field of the device. In this way, the device can be tuned for use with a range of lasers within the design range of the device. Truly broadband sources (such as ultrashort-pulse lasers and tunable vibronic lasers) will not see the same rotation across the whole wavelength band.
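Because the rotation is simply the product of the Verdet constant, the flux density and the path length, the calculation is a one-liner. The following Python sketch (not from the article) uses purely illustrative values for the Verdet constant, field and rod length.

```python
import numpy as np

def faraday_rotation_angle(V, B, d):
    """Polarization rotation beta = V * B * d, for a Verdet constant V in rad/(T*m),
    an axial magnetic flux density B in tesla, and a path length d in metres."""
    return V * B * d

# illustrative values only (not measured data): a TGG-like Verdet constant in the visible,
# a 0.5 T axial field, and a 2 cm rod
beta = faraday_rotation_angle(-134.0, 0.5, 0.02)
print(np.degrees(beta))   # rotation in degrees
```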
References
Magneto-optic effects | Verdet constant | [
"Physics",
"Chemistry",
"Materials_science"
] | 410 | [
"Optical phenomena",
"Physical phenomena",
"Electric and magnetic fields in matter",
"Magneto-optic effects"
] |
549,787 | https://en.wikipedia.org/wiki/Fernico | Fernico describes a family of metal alloys made primarily of iron, nickel and cobalt. The family includes Kovar, FerNiCo I, FerNiCo II, and Dumet. The name is made up of the chemical symbols of its constituent three elements. "Dumet" is a portmanteau of "dual" and "metal," because it is a heterogeneous alloy, usually fabricated in the form of a wire with an alloy core and a copper cladding. These alloys possess the properties of electrical conductivity, minimal oxidation and formation of porous surfaces at working temperatures of glass and thermal coefficients of expansion which match glass closely. These requirements allow the alloys to be used in glass seals, such that the seal does not crack, fracture or leak with changes in temperature.
Dumet is most commonly used in seals where lead-in wires pass through the glass bulb wall of standard household electric lamps (light bulbs) among other things.
The two Fernico alloys both consist of iron (Fe), nickel (Ni), and cobalt (Co). Fernico is used at high temperatures and is identical to Kovar. Fernico II is used in the cryogenic temperature range. Both are used to create electrically conductive paths through the walls of sealed borosilicate glass containers. Dumet is used for a similar purpose, but is tailored for seals through soda lime and lead alkali silicate glasses.
These alloys adhere to lead-tin, tin, and silver solders. Other metals, including copper, molybdenum, nickel, and steel can be spot-welded to the FerNiCo alloys forming low resistance electrical connections.
Typical compositions
Given in weight %
FerNiCo I has the same linear coefficient of expansion as certain types of borosilicate ("hard") glass (circa 6.5 × 10−6 K−1), thus serving as an ideal material for the lead-out wires or other seal structures in light bulbs and thermionic valves. Dumet is also used for this purpose, but for passing through softer soda-lime and lead-alkali glasses. Dumet wire is often coated with a glass-like film of sodium metaborate (NaBO2), so the molten glass will "wet" and adhere to it. 25% by mass of the finished wire is copper. Cunife exhibits a similar property.
Uses
There are very few uses of Fernico. Some of them are:
It is used to seal metals and glass.
It is often used in the form of nanopowder.
See also
Copper-clad aluminium wire
References
External links
FerNiCo composition page
Ferrous alloys
Magnetic alloys
Ferromagnetic materials | Fernico | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 544 | [
"Ferrous alloys",
"Alloy stubs",
"Ferromagnetic materials",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic alloys",
"Materials",
"Alloys",
"Matter"
] |
550,137 | https://en.wikipedia.org/wiki/Elliptic%20operator | In the theory of partial differential equations, elliptic operators are differential operators that generalize the Laplace operator. They are defined by the condition that the coefficients of the highest-order derivatives be positive, which implies the key property that the principal symbol is invertible, or equivalently that there are no real characteristic directions.
Elliptic operators are typical of potential theory, and they appear frequently in electrostatics and continuum mechanics. Elliptic regularity implies that their solutions tend to be smooth functions (if the coefficients in the operator are smooth). Steady-state solutions to hyperbolic and parabolic equations generally solve elliptic equations.
Definitions
Let L be a linear differential operator of order m on a domain Ω in Rn given by
$$Lu = \sum_{|\alpha| \le m} a_\alpha(x)\, \partial^\alpha u,$$
where $\alpha = (\alpha_1, \dots, \alpha_n)$ denotes a multi-index, and $\partial^\alpha u$ denotes the partial derivative of order $|\alpha|$ in $x$.
Then L is called elliptic if for every x in Ω and every non-zero $\xi$ in Rn,
$$\sum_{|\alpha| = m} a_\alpha(x)\, \xi^\alpha \neq 0,$$
where $\xi^\alpha = \xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}$.
In many applications, this condition is not strong enough, and instead a uniform ellipticity condition may be imposed for operators of order m = 2k:
$$(-1)^k \sum_{|\alpha| = 2k} a_\alpha(x)\, \xi^\alpha \ge C\, |\xi|^{2k},$$
where C is a positive constant. Note that ellipticity only depends on the highest-order terms.
A nonlinear operator
$$L(u) = F\!\left(x, u, (\partial^\alpha u)_{|\alpha| \le m}\right)$$
is elliptic if its linearization is; i.e. the first-order Taylor expansion with respect to u and its derivatives about any point is an elliptic operator.
Example 1 The negative of the Laplacian in Rd given by
$$-\Delta u = -\sum_{i=1}^{d} \partial_i^2 u$$
is a uniformly elliptic operator. The Laplace operator occurs frequently in electrostatics. If ρ is the charge density within some region Ω, the potential Φ must satisfy the equation (in SI units)
$$-\Delta \Phi = \frac{\rho}{\varepsilon_0}.$$
Example 2 Given a matrix-valued function A(x) which is uniformly positive definite for every x, having components $a^{ij}$, the operator
$$Lu = -\sum_{i,j}\partial_i\!\left(a^{ij}(x)\, \partial_j u\right) + \sum_{i} b^{i}(x)\, \partial_i u + c(x)\, u$$
is elliptic. This is the most general form of a second-order divergence form linear elliptic differential operator. The Laplace operator is obtained by taking A = I and b = c = 0. These operators also occur in electrostatics in polarized media.
Example 3 For p a non-negative number, the p-Laplacian is a nonlinear elliptic operator defined by
$$L(u) = -\sum_{i=1}^{d}\partial_i\!\left(|\nabla u|^{p-2}\, \partial_i u\right).$$
A similar nonlinear operator occurs in glacier mechanics. The Cauchy stress tensor of ice, according to Glen's flow law, is given by for some constant B. The velocity of an ice sheet in steady state will then solve the nonlinear elliptic system where ρ is the ice density, g is the gravitational acceleration vector, p is the pressure and Q is a forcing term.
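For the divergence-form operator of Example 2, uniform ellipticity amounts to a uniform lower bound on the smallest eigenvalue of the (symmetrized) coefficient matrix A(x). The following Python sketch (not from the article; the coefficient matrix and sample points are illustrative) estimates such a bound numerically.

```python
import numpy as np

def uniform_ellipticity_constant(A, sample_points):
    """Estimate a constant C with  xi . A(x) xi >= C |xi|^2  for the divergence-form
    operator  Lu = -div(A(x) grad u), by taking the smallest eigenvalue of the
    symmetrized coefficient matrix over a set of sample points x."""
    c = np.inf
    for x in sample_points:
        M = np.asarray(A(x), dtype=float)
        M = 0.5 * (M + M.T)                      # only the symmetric part enters xi . A . xi
        c = min(c, np.linalg.eigvalsh(M).min())  # smallest eigenvalue = worst-case direction
    return c

# hypothetical coefficient matrix: identity plus a position-dependent perturbation
A = lambda x: np.eye(2) + 0.3 * np.array([[np.sin(x[0]) ** 2, 0.1],
                                          [0.1, np.cos(x[1]) ** 2]])
pts = np.random.default_rng(1).uniform(-3, 3, size=(200, 2))
print(uniform_ellipticity_constant(A, pts))   # positive => uniformly elliptic on the sample
```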
Elliptic regularity theorems
Let L be an elliptic operator of order 2k with coefficients having 2k continuous derivatives. The Dirichlet problem for L is to find a function u, given a function f and some appropriate boundary values, such that Lu = f and such that u has the appropriate boundary values and normal derivatives. The existence theory for elliptic operators, using Gårding's inequality, Lax–Milgram lemma and Fredholm alternative, states the sufficient condition for a weak solution u to exist in the Sobolev space Hk.
For example, for a second-order elliptic operator as in Example 2, consider the boundary value problem
$$Lu = f \ \text{in}\ U, \qquad u = 0 \ \text{on}\ \partial U. \qquad (1)$$
There is a number γ>0 such that for each μ>γ and each $f \in L^2(U)$, there exists a unique weak solution $u \in H_0^1(U)$ of the modified problem $Lu + \mu u = f$ in U with $u = 0$ on $\partial U$; this is based on the Lax–Milgram lemma.
Either (a) for any $f \in L^2(U)$, (1) has a unique weak solution, or (b) the homogeneous problem $Lu = 0$ has a nontrivial weak solution; this is based on the property of compact operators and the Fredholm alternative.
This situation is ultimately unsatisfactory, as the weak solution u might not have enough derivatives for the expression Lu to be well-defined in the classical sense.
The elliptic regularity theorem guarantees that, provided f is square-integrable, u will in fact have 2k square-integrable weak derivatives. In particular, if f is infinitely-often differentiable, then so is u.
For L as in Example 2,
Interior regularity: If m is a natural number, $f \in H^m(U)$ (2), and $u \in H^1(U)$ is a weak solution to (1), then for any open set V in U with compact closure, $u \in H^{m+2}(V)$ and
$$\|u\|_{H^{m+2}(V)} \le C\left(\|f\|_{H^m(U)} + \|u\|_{L^2(U)}\right), \qquad (3)$$
where C depends on U, V, L and m; this also holds if m is infinity, by the Sobolev embedding theorem.
Boundary regularity: (2) together with the assumption that $\partial U$ is $C^{m+2}$ indicates that (3) still holds after replacing V with U, i.e. $u \in H^{m+2}(U)$, which also holds if m is infinity.
Any differential operator exhibiting this property is called a hypoelliptic operator; thus, every elliptic operator is hypoelliptic. The property also means that every fundamental solution of an elliptic operator is infinitely differentiable in any neighborhood not containing 0.
As an application, suppose a function satisfies the Cauchy–Riemann equations. Since the Cauchy-Riemann equations form an elliptic operator, it follows that is smooth.
Properties
For L as in Example 2 on U, which is an open domain with C1 boundary, there is a number γ>0 such that for each μ>γ, the operator L + μ satisfies the assumptions of the Lax–Milgram lemma.
Invertibility: For each μ>γ, the operator L + μ admits a compact inverse.
Eigenvalues and eigenvectors: If A is symmetric and the coefficients $b^i$, c are zero, then (1) the eigenvalues of L are real, positive, countable, and unbounded, and (2) there is an orthonormal basis of L2(U) composed of eigenvectors of L. (See Spectral theorem.)
Generates a semigroup on L2(U): −L generates a semigroup $\{e^{-tL}\}_{t \ge 0}$ of bounded linear operators on L2(U) such that $e^{-tL}u \to u$ in the norm of L2(U) as $t \to 0^{+}$, for every $u \in L^2(U)$, by the Hille–Yosida theorem.
General definition
Let L be a (possibly nonlinear) differential operator between vector bundles of any rank. Take its principal symbol $\sigma_\xi(L)$ with respect to a one-form $\xi$. (Basically, what we are doing is replacing the highest-order covariant derivatives $\nabla$ by vector fields $\xi$.)
We say L is weakly elliptic if $\sigma_\xi(L)$ is a linear isomorphism for every non-zero $\xi$.
We say L is (uniformly) strongly elliptic if for some constant c > 0,
$$\left([\sigma_\xi(L)](v), v\right) \ge c\, \|v\|^{2}$$
for all $\|\xi\| = 1$ and all v.
The definition of ellipticity in the previous part of the article is strong ellipticity. Here $(\cdot,\cdot)$ is an inner product. Notice that the $\xi$ are covector fields or one-forms, but the v are elements of the vector bundle upon which L acts.
The quintessential example of a (strongly) elliptic operator is the Laplacian (or its negative, depending upon convention). It is not hard to see that L needs to be of even order for strong ellipticity to even be an option. Otherwise, just consider plugging in both $\xi$ and its negative. On the other hand, a weakly elliptic first-order operator, such as the Dirac operator, can square to become a strongly elliptic operator, such as the Laplacian. The composition of weakly elliptic operators is weakly elliptic.
Weak ellipticity is nevertheless strong enough for the Fredholm alternative, Schauder estimates, and the Atiyah–Singer index theorem. On the other hand, we need strong ellipticity for the maximum principle, and to guarantee that the eigenvalues are discrete, and their only limit point is infinity.
See also
Sobolev space
Hypoelliptic operator
Elliptic partial differential equation
Hyperbolic partial differential equation
Parabolic partial differential equation
Hopf maximum principle
Elliptic complex
Ultrahyperbolic wave equation
Semi-elliptic operator
Weyl's lemma
Notes
References
Review:
External links
Linear Elliptic Equations at EqWorld: The World of Mathematical Equations.
Nonlinear Elliptic Equations at EqWorld: The World of Mathematical Equations.
Differential operators | Elliptic operator | [
"Mathematics"
] | 1,554 | [
"Mathematical analysis",
"Differential operators"
] |
550,622 | https://en.wikipedia.org/wiki/Tetraquark | In particle physics, a tetraquark is an exotic meson composed of four valence quarks. A tetraquark state has long been suspected to be allowed by quantum chromodynamics, the modern theory of strong interactions. A tetraquark state is an example of an exotic hadron that lies outside the conventional quark model classification. A number of different types of tetraquark have been observed.
History and discoveries
Several tetraquark candidates have been reported by particle physics experiments in the 21st century. The quark contents of these states are almost all qQ, where q represents a light (up, down or strange) quark, Q represents a heavy (charm or bottom) quark, and antiquarks are denoted with an overline. The existence and stability of tetraquark states with the qq (or QQ) have been discussed by theoretical physicists for a long time, however these are yet to be reported by experiments.
Timeline
In 2003, a particle temporarily called X(3872), by the Belle experiment in Japan, was proposed to be a tetraquark candidate, as originally theorized. The name X is a temporary name, indicating that there are still some questions about its properties to be tested. The number following is the mass of the particle in .
In 2004, the DsJ(2632) state seen in Fermilab's SELEX was suggested as a possible tetraquark candidate.
In 2007, Belle announced the observation of the Z(4430) state, a tetraquark candidate. There are also indications that the Y(4660), also discovered by Belle in 2007, could be a tetraquark state.
In 2009, Fermilab announced that they have discovered a particle temporarily called Y(4140), which may also be a tetraquark.
In 2010, two physicists from DESY and a physicist from Quaid-i-Azam University re-analyzed former experimental data and announced that, in connection with the (5S) meson (a form of bottomonium), a well-defined tetraquark resonance exists.
In June 2013, the BES III experiment in China and the Belle experiment in Japan independently reported on Zc(3900), the first confirmed four-quark state.
In 2014, the Large Hadron Collider experiment LHCb confirmed the existence of the Z(4430) state with a significance of over 13.9 σ.
In February 2016, the DØ experiment reported evidence of a narrow tetraquark candidate, named X(5568), decaying to a Bs meson and a charged pion.
In December 2017, DØ also reported observing the X(5568) using a different final state.
However, it was not observed in searches by the LHCb, CMS, CDF, or ATLAS experiments.
In June 2016, LHCb announced the discovery of three additional tetraquark candidates, called X(4274), X(4500) and X(4700).
In 2020, LHCb announced the discovery of a fully charmed tetraquark: X(6900). In 2022, ATLAS also observed X(6900), and in 2023, CMS reported an observation of three such states, X(6600), X(6900), and X(7300).
In 2021, LHCb announced the discovery of four additional tetraquarks, including cu.
In 2022, LHCb announced the discovery of cu and cd.
See also
Color confinement
Double-charm tetraquark
Hadron
Pentaquark
Tetraneutron
References
External links
The Belle experiment (press release)
Mesons
Hypothetical composite particles
Nuclear physics
Quantum chromodynamics | Tetraquark | [
"Physics"
] | 786 | [
"Nuclear physics"
] |
551,359 | https://en.wikipedia.org/wiki/Magnetic%20quantum%20number | In atomic physics, a magnetic quantum number is a quantum number used to distinguish quantum states of an electron or other particle according to its angular momentum along a given axis in space. The orbital magnetic quantum number ( or ) distinguishes the orbitals available within a given subshell of an atom. It specifies the component of the orbital angular momentum that lies along a given axis, conventionally called the z-axis, so it describes the orientation of the orbital in space. The spin magnetic quantum number specifies the z-axis component of the spin angular momentum for a particle having spin quantum number . For an electron, is , and is either + or −, often called "spin-up" and "spin-down", or α and β. The term magnetic in the name refers to the magnetic dipole moment associated with each type of angular momentum, so states having different magnetic quantum numbers shift in energy in a magnetic field according to the Zeeman effect.
The four quantum numbers conventionally used to describe the quantum state of an electron in an atom are the principal quantum number n, the azimuthal (orbital) quantum number ℓ, and the magnetic quantum numbers ml and ms. Electrons in a given subshell of an atom (such as s, p, d, or f) are defined by values of ℓ (0, 1, 2, or 3). The orbital magnetic quantum number ml takes integer values in the range from −ℓ to +ℓ, including zero. Thus the s, p, d, and f subshells contain 1, 3, 5, and 7 orbitals each. Each of these orbitals can accommodate up to two electrons (with opposite spins), forming the basis of the periodic table.
Other magnetic quantum numbers are similarly defined, such as mj for the z-axis component of the total electronic angular momentum j, and mI for the nuclear spin I. Magnetic quantum numbers are capitalized to indicate totals for a system of particles, such as ML for the total z-axis orbital angular momentum of all the electrons in an atom.
Derivation
There is a set of quantum numbers associated with the energy states of the atom. The four quantum numbers n, ℓ, ml, and ms specify the complete quantum state of a single electron in an atom called its wavefunction or orbital. The Schrödinger equation for the wavefunction of an atom with one electron is a separable partial differential equation. (This is not the case for the neutral helium atom or other atoms with mutually interacting electrons, which require more sophisticated methods for solution.) This means that the wavefunction as expressed in spherical coordinates can be broken down into the product of three functions of the radius, colatitude (or polar) angle, and azimuth:
$$\psi(r, \theta, \varphi) = R(r)\, P(\theta)\, F(\varphi).$$
The differential equation for F can be solved in the form $F(\varphi) = A e^{\lambda\varphi}$. Because values of the azimuth angle φ differing by 2π radians (360 degrees) represent the same position in space, and the overall magnitude of F does not grow with arbitrarily large φ as it would for a real exponent, the coefficient λ must be quantized to integer multiples of i, producing an imaginary exponent: λ = i ml. These integers are the magnetic quantum numbers. The same constant appears in the colatitude equation, where larger values of ml² tend to decrease the magnitude of the solution, and values of |ml| greater than the azimuthal quantum number ℓ do not permit any solution for P(θ).
As a component of angular momentum
The axis used for the polar coordinates in this analysis is chosen arbitrarily. The quantum number ml refers to the projection of the angular momentum in this arbitrarily-chosen direction, conventionally called the z-direction or quantization axis. Lz, the magnitude of the angular momentum in the z-direction, is given by the formula:
$$L_z = m_l \hbar.$$
This is a component of the atomic electron's total orbital angular momentum L, whose magnitude is related to the azimuthal quantum number ℓ of its subshell by the equation:
$$|\mathbf{L}| = \hbar\sqrt{\ell(\ell+1)},$$
where ħ is the reduced Planck constant. Note that this magnitude is zero for ℓ = 0 and approximates ℓħ for high ℓ. It is not possible to measure the angular momentum of the electron along all three axes simultaneously. These properties were first demonstrated in the Stern–Gerlach experiment, by Otto Stern and Walther Gerlach.
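The two formulas above are easy to tabulate. The following Python sketch (not from the article) lists, for each subshell s, p, d, f, the magnitude of the orbital angular momentum and the number of allowed z-components ml·ħ.

```python
import numpy as np

hbar = 1.054571817e-34  # reduced Planck constant, J*s

def orbital_angular_momentum(l):
    """For azimuthal quantum number l, return |L| and the allowed z-components L_z = m_l * hbar."""
    L_mag = hbar * np.sqrt(l * (l + 1))
    m_l = np.arange(-l, l + 1)      # 2l + 1 integers, hence 1, 3, 5, 7 orbitals for s, p, d, f
    return L_mag, m_l * hbar

for l, name in zip(range(4), "spdf"):
    L_mag, Lz = orbital_angular_momentum(l)
    print(f"{name}: |L| = {L_mag:.3e} J*s, {len(Lz)} orbitals, m_l = {list(range(-l, l + 1))}")
```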
Effect in magnetic fields
The quantum number ml refers, loosely, to the direction of the angular momentum vector. The magnetic quantum number only affects the electron's energy if it is in a magnetic field because in the absence of one, all spherical harmonics corresponding to the different arbitrary values of ml are equivalent. The magnetic quantum number determines the energy shift of an atomic orbital due to an external magnetic field (the Zeeman effect) — hence the name magnetic quantum number. However, the actual magnetic dipole moment of an electron in an atomic orbital arises not only from the electron angular momentum but also from the electron spin, expressed in the spin quantum number.
Since each electron has a magnetic moment in a magnetic field, it will be subject to a torque which tends to make the vector parallel to the field, a phenomenon known as Larmor precession.
See also
Quantum number
Azimuthal quantum number
Principal quantum number
Spin quantum number
Total angular momentum quantum number
Electron shell
Basic quantum mechanics
Bohr atom
Schrödinger equation
Notes
References
Atomic physics
Rotational symmetry
Quantum numbers | Magnetic quantum number | [
"Physics",
"Chemistry"
] | 1,052 | [
"Quantum chemistry",
"Quantum mechanics",
"Quantum numbers",
"Rotational symmetry",
"Atomic physics",
" molecular",
"Atomic",
"Symmetry",
" and optical physics"
] |
552,234 | https://en.wikipedia.org/wiki/Fluctuation%E2%80%93dissipation%20theorem | The fluctuation–dissipation theorem (FDT) or fluctuation–dissipation relation (FDR) is a powerful tool in statistical physics for predicting the behavior of systems that obey detailed balance. Given that a system obeys detailed balance, the theorem is a proof that thermodynamic fluctuations in a physical variable predict the response quantified by the admittance or impedance (in their general sense, not only in electromagnetic terms) of the same physical variable (like voltage, temperature difference, etc.), and vice versa. The fluctuation–dissipation theorem applies both to classical and quantum mechanical systems.
The fluctuation–dissipation theorem was proven by Herbert Callen and Theodore Welton in 1951
and expanded by Ryogo Kubo. There are antecedents to the general theorem, including Einstein's explanation of Brownian motion
during his annus mirabilis and Harry Nyquist's explanation in 1928 of Johnson noise in electrical resistors.
Qualitative overview and examples
The fluctuation–dissipation theorem says that when there is a process that dissipates energy, turning it into heat (e.g., friction), there is a reverse process related to thermal fluctuations. This is best understood by considering some examples:
Drag and Brownian motion
If an object is moving through a fluid, it experiences drag (air resistance or fluid resistance). Drag dissipates kinetic energy, turning it into heat. The corresponding fluctuation is Brownian motion. An object in a fluid does not sit still, but rather moves around with a small and rapidly-changing velocity, as molecules in the fluid bump into it. Brownian motion converts heat energy into kinetic energy—the reverse of drag.
Resistance and Johnson noise
If electric current is running through a wire loop with a resistor in it, the current will rapidly go to zero because of the resistance. Resistance dissipates electrical energy, turning it into heat (Joule heating). The corresponding fluctuation is Johnson noise. A wire loop with a resistor in it does not actually have zero current, it has a small and rapidly-fluctuating current caused by the thermal fluctuations of the electrons and atoms in the resistor. Johnson noise converts heat energy into electrical energy—the reverse of resistance.
Light absorption and thermal radiation
When light impinges on an object, some fraction of the light is absorbed, making the object hotter. In this way, light absorption turns light energy into heat. The corresponding fluctuation is thermal radiation (e.g., the glow of a "red hot" object). Thermal radiation turns heat energy into light energy—the reverse of light absorption. Indeed, Kirchhoff's law of thermal radiation confirms that the more effectively an object absorbs light, the more thermal radiation it emits.
Examples in detail
The fluctuation–dissipation theorem is a general result of statistical thermodynamics that quantifies the relation between the fluctuations in a system that obeys detailed balance and the response of the system to applied perturbations.
Brownian motion
For example, Albert Einstein noted in his 1905 paper on Brownian motion that the same random forces that cause the erratic motion of a particle in Brownian motion would also cause drag if the particle were pulled through the fluid. In other words, the fluctuation of the particle at rest has the same origin as the dissipative frictional force one must do work against, if one tries to perturb the system in a particular direction.
From this observation Einstein was able to use statistical mechanics to derive the Einstein–Smoluchowski relation
$$D = \mu\, k_{\rm B} T,$$
which connects the diffusion constant D and the particle mobility μ, the ratio of the particle's terminal drift velocity to an applied force. kB is the Boltzmann constant, and T is the absolute temperature.
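The Einstein–Smoluchowski relation can be checked with a short simulation of free Brownian motion: the mean-square displacement after a time t should equal 2Dt with D = μkBT. The following Python sketch (not from the article) uses a purely illustrative mobility value.

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0              # temperature, K
mu = 6.0e7             # hypothetical mobility, m/(N*s): drift velocity per unit force
D = mu * kB * T        # Einstein-Smoluchowski relation

rng = np.random.default_rng(0)
n_particles, n_steps, dt = 5000, 1000, 1e-3
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_steps, n_particles))
x = np.cumsum(steps, axis=0)        # free one-dimensional Brownian trajectories
msd = (x[-1] ** 2).mean()           # mean-square displacement at t = n_steps * dt
print(msd, 2 * D * n_steps * dt)    # the two numbers should agree to within a few percent
```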
Thermal noise in a resistor
In 1928, John B. Johnson discovered and Harry Nyquist explained Johnson–Nyquist noise. With no applied current, the mean-square voltage depends on the resistance R, the temperature T, and the bandwidth Δν over which the voltage is measured:
$$\langle V^2 \rangle = 4 k_{\rm B} T R \, \Delta\nu.$$
This observation can be understood through the lens of the fluctuation-dissipation theorem. Take, for example, a simple circuit consisting of a resistor with a resistance R and a capacitor with a small capacitance C. Kirchhoff's voltage law yields
$$V = R\frac{dQ}{dt} + \frac{Q}{C},$$
and so the response function χ(ω) = Q(ω)/V(ω) for this circuit is
$$\chi(\omega) = \frac{C}{1 - i\omega R C}.$$
In the low-frequency limit ωRC ≪ 1, its imaginary part is simply
$$\operatorname{Im}\chi(\omega) \approx \omega R C^{2},$$
which then can be linked to the power spectral density function of the voltage via the fluctuation-dissipation theorem:
$$S_V(\omega) = \frac{S_Q(\omega)}{C^{2}} = \frac{2 k_{\rm B} T}{C^{2}\,\omega}\operatorname{Im}\chi(\omega) \approx 2 k_{\rm B} T R.$$
The Johnson–Nyquist voltage noise ⟨V²⟩ was observed within a small frequency bandwidth Δν centered around a frequency ω0. Hence
$$\langle V^2 \rangle \approx S_V(\omega_0) \times 2\,\Delta\nu \approx 4 k_{\rm B} T R \, \Delta\nu.$$
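The resulting noise formula is easy to evaluate. The following Python sketch (not from the article) computes the RMS Johnson–Nyquist voltage for an illustrative resistor, temperature and measurement bandwidth.

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K

def johnson_rms_voltage(R, T, bandwidth):
    """RMS Johnson-Nyquist voltage across a resistance R (ohm) at temperature T (K),
    measured over the given bandwidth (Hz): sqrt(<V^2>) = sqrt(4 kB T R * bandwidth)."""
    return np.sqrt(4 * kB * T * R * bandwidth)

# example: a 1 Mohm resistor at room temperature measured over a 10 kHz bandwidth
print(johnson_rms_voltage(1e6, 300.0, 1e4))  # roughly 13 microvolts
```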
General formulation
The fluctuation–dissipation theorem can be formulated in many ways; one particularly useful form is the following:
Let x(t) be an observable of a dynamical system with Hamiltonian H₀(x) subject to thermal fluctuations.
The observable x(t) will fluctuate around its mean value ⟨x⟩₀,
with fluctuations characterized by a power spectrum Sx(ω).
Suppose that we can switch on a time-varying, spatially constant field f(t) which alters the Hamiltonian
to H(x) = H₀(x) − f(t)x.
The response of the observable x(t) to a time-dependent field f(t) is
characterized to first order by the susceptibility or linear response function χ(t)
of the system
$$\langle x(t) \rangle = \langle x \rangle_0 + \int_{-\infty}^{t} f(\tau)\, \chi(t-\tau)\, d\tau,$$
where the perturbation is adiabatically (very slowly) switched on at τ = −∞.
The fluctuation–dissipation theorem relates the two-sided power spectrum (i.e. both positive and negative frequencies) Sx(ω) of x to the imaginary part of the Fourier transform χ̂(ω) of the susceptibility χ(t):
$$S_x(\omega) = \frac{2 k_{\rm B} T}{\omega} \operatorname{Im}\hat\chi(\omega),$$
which holds under the Fourier transform convention $\hat\chi(\omega) = \int_{-\infty}^{\infty} \chi(t)\, e^{i\omega t}\, dt$. The left-hand side describes fluctuations in x, the right-hand side is closely related to the energy dissipated by the system when pumped by an oscillatory field f(t). The spectrum of fluctuations reveals the linear response, because past fluctuations cause future fluctuations via a linear response upon itself.
This is the classical form of the theorem; quantum fluctuations are taken into account by replacing 2kBT/ω with ħ coth(ħω/2kBT) (whose limit for ħ → 0 is 2kBT/ω). A proof can be found by means of the LSZ reduction, an identity from quantum field theory.
The fluctuation–dissipation theorem can be generalized in a straightforward way to the case of space-dependent fields, to the case of several variables or to a quantum-mechanics setting.
Derivation
Classical version
We derive the fluctuation–dissipation theorem in the form given above, using the same notation.
Consider the following test case: the field f has been on for infinite time and is switched off at t=0,
$$f(t) = f_0\, \theta(-t),$$
where θ(t) is the Heaviside function.
We can express the expectation value of x(t) for t > 0 by the probability distribution W(x,0) and the transition probability P(x', t | x, 0):
$$\langle x(t) \rangle = \int dx' \int dx \; x' \, P(x', t \,|\, x, 0)\, W(x, 0).$$
The probability distribution function W(x,0) is an equilibrium distribution and hence
given by the Boltzmann distribution for the Hamiltonian H(x) = H₀(x) − x f₀:
$$W(x,0) = \frac{\exp(-\beta H(x))}{\int dx' \, \exp(-\beta H(x'))},$$
where β = 1/(kBT).
For a weak field βxf₀ ≪ 1, we can expand the right-hand side:
$$W(x,0) \approx W_0(x)\left[1 + \beta f_0 \left(x - \langle x \rangle_0\right)\right],$$
here W₀(x) is the equilibrium distribution in the absence of a field.
Plugging this approximation in the formula for ⟨x(t)⟩ yields
$$\langle x(t) \rangle = \langle x \rangle_0 + \beta f_0\, A(t), \qquad (*)$$
where A(t) is the auto-correlation function of x in the absence of a field:
$$A(t) = \langle [x(t) - \langle x \rangle_0]\,[x(0) - \langle x \rangle_0] \rangle_0.$$
Note that in the absence of a field the system is invariant under time-shifts.
We can rewrite ⟨x(t)⟩ using the susceptibility
of the system and hence find with the above equation (*)
$$\langle x(t) \rangle = \langle x \rangle_0 + f_0 \int_{t}^{\infty} d\tau \, \chi(\tau) = \langle x \rangle_0 + \beta f_0\, A(t).$$
Consequently,
$$A(t) = k_{\rm B} T \int_{t}^{\infty} d\tau \, \chi(\tau). \qquad (**)$$
To make a statement about frequency dependence, it is necessary to take the Fourier transform of equation (**). By integrating by parts, it is possible to show that
$$\hat\chi(\omega) = \beta A(0) + i\omega\beta \int_0^{\infty} A(t)\, e^{i\omega t}\, dt.$$
Since A(t) is real and symmetric, it follows that
$$\operatorname{Im}\hat\chi(\omega) = \frac{\omega\beta}{2} \int_{-\infty}^{\infty} A(t)\, e^{i\omega t}\, dt.$$
Finally, for stationary processes, the Wiener–Khinchin theorem states that the two-sided spectral density is equal to the Fourier transform of the auto-correlation function:
$$S_x(\omega) = \int_{-\infty}^{\infty} A(t)\, e^{i\omega t}\, dt.$$
Therefore, it follows that
$$S_x(\omega) = \frac{2 k_{\rm B} T}{\omega} \operatorname{Im}\hat\chi(\omega).$$
Quantum version
The fluctuation-dissipation theorem relates the correlation function of the observable of interest (a measure of fluctuation) to the imaginary part of the response function in the frequency domain (a measure of dissipation). A link between these quantities can be found through the so-called Kubo formula
which follows, under the assumptions of the linear response theory, from the time evolution of the ensemble average of the observable in the presence of a perturbing source. Once Fourier transformed, the Kubo formula allows writing the imaginary part of the response function as
In the canonical ensemble, the second term can be re-expressed as
where in the second equality we re-positioned using the cyclic property of trace. Next, in the third equality, we inserted next to the trace and interpreted as a time evolution operator with imaginary time interval . The imaginary time shift turns into a factor after Fourier transform
and thus the expression for can be easily rewritten as the quantum fluctuation-dissipation relation
where the power spectral density Sx(ω) is the Fourier transform of the auto-correlation and n(ω) = 1/(exp(ħω/kBT) − 1) is the Bose-Einstein distribution function. The same calculation also yields
thus, differently from what is obtained in the classical case, the power spectral density is not exactly frequency-symmetric in the quantum limit. Consistently, the auto-correlation function has an imaginary part originating from the commutation rules of operators. The additional "+1" term in the expression of the power spectral density at positive frequencies can also be thought of as linked to spontaneous emission. An often cited result is also the symmetrized power spectral density
The "" can be thought of as linked to quantum fluctuations, or to zero-point motion of the observable . At high enough temperatures, , i.e. the quantum contribution is negligible, and we recover the classical version.
Violations in glassy systems
While the fluctuation–dissipation theorem provides a general relation between fluctuations and response for systems obeying detailed balance, when detailed balance is violated the comparison of fluctuations to dissipation is more complex. Below the so-called glass temperature Tg, glassy systems are not equilibrated and slowly approach their equilibrium state. This slow approach to equilibrium is synonymous with the violation of detailed balance. Thus these systems require large time-scales to be studied while they slowly move toward equilibrium.
To study the violation of the fluctuation-dissipation relation in glassy systems, particularly spin glasses, researchers have performed numerical simulations of macroscopic systems (i.e. large compared to their correlation lengths) described by the three-dimensional Edwards-Anderson model using supercomputers. In their simulations, the system is initially prepared at a high temperature, rapidly cooled to a temperature T below the glass temperature Tg, and left to equilibrate for a very long time tw under a magnetic field H. Then, at a later time t + tw, two dynamical observables are probed, namely the response function
and the spin-temporal correlation function
where Si is the spin living on node i of the cubic lattice of volume V, and m is the magnetization density.
Their results confirm the expectation that as the system is left to equilibrate for longer times, the fluctuation-dissipation relation is closer to be satisfied.
In the mid-1990s, in the study of dynamics of spin glass models, a generalization of the fluctuation–dissipation theorem was discovered that holds for asymptotic non-stationary states, where the temperature appearing in the equilibrium relation is substituted by an effective temperature with a non-trivial dependence on the time scales. This relation is proposed to hold in glassy systems beyond the models for which it was initially found.
See also
Non-equilibrium thermodynamics
Green–Kubo relations
Onsager reciprocal relations
Equipartition theorem
Boltzmann distribution
Dissipative system
Notes
References
Further reading
Audio recording of a lecture by Prof. E. W. Carlson of Purdue University
Kubo's famous text: Fluctuation-dissipation theorem
Statistical mechanics
Non-equilibrium thermodynamics
Physics theorems
Statistical mechanics theorems | Fluctuation–dissipation theorem | [
"Physics",
"Mathematics"
] | 2,448 | [
"Theorems in dynamical systems",
"Equations of physics",
"Non-equilibrium thermodynamics",
"Statistical mechanics theorems",
"Theorems in mathematical physics",
"Dynamical systems",
"Statistical mechanics",
"Physics theorems"
] |
553,121 | https://en.wikipedia.org/wiki/Regulation%20of%20gene%20expression | Regulation of gene expression, or gene regulation, includes a wide range of mechanisms that are used by cells to increase or decrease the production of specific gene products (protein or RNA). Sophisticated programs of gene expression are widely observed in biology, for example to trigger developmental pathways, respond to environmental stimuli, or adapt to new food sources. Virtually any step of gene expression can be modulated, from transcriptional initiation, to RNA processing, and to the post-translational modification of a protein. Often, one gene regulator controls another, and so on, in a gene regulatory network.
Gene regulation is essential for viruses, prokaryotes and eukaryotes as it increases the versatility and adaptability of an organism by allowing the cell to express protein when needed. Although as early as 1951, Barbara McClintock showed interaction between two genetic loci, Activator (Ac) and Dissociator (Ds), in the color formation of maize seeds, the first discovery of a gene regulation system is widely considered to be the identification in 1961 of the lac operon, discovered by François Jacob and Jacques Monod, in which some enzymes involved in lactose metabolism are expressed by E. coli only in the presence of lactose and absence of glucose.
In multicellular organisms, gene regulation drives cellular differentiation and morphogenesis in the embryo, leading to the creation of different cell types that possess different gene expression profiles from the same genome sequence. Although this does not explain how gene regulation originated, evolutionary biologists include it as a partial explanation of how evolution works at a molecular level, and it is central to the science of evolutionary developmental biology ("evo-devo").
Regulated stages of gene expression
Any step of gene expression may be modulated, from signaling to transcription to post-translational modification of a protein. The following is a list of stages where gene expression is regulated, where the most extensively utilized point is transcription initiation, the first stage in transcription:
Signal transduction
Chromatin, chromatin remodeling, chromatin domains
Transcription
Post-transcriptional modification
RNA transport
Translation
mRNA degradation
Modification of DNA
In eukaryotes, the accessibility of large regions of DNA can depend on its chromatin structure, which can be altered as a result of histone modifications directed by DNA methylation, ncRNA, or DNA-binding protein. Hence these modifications may up or down regulate the expression of a gene. Some of these modifications that regulate gene expression are inheritable and are referred to as epigenetic regulation.
Structural
Transcription of DNA is dictated by its structure. In general, the density of its packing is indicative of the frequency of transcription. Octameric protein complexes called histones together with a segment of DNA wound around the eight histone proteins (together referred to as a nucleosome) are responsible for the amount of supercoiling of DNA, and these complexes can be temporarily modified by processes such as phosphorylation or more permanently modified by processes such as methylation. Such modifications are considered to be responsible for more or less permanent changes in gene expression levels.
Chemical
Methylation of DNA is a common method of gene silencing. DNA is typically methylated by methyltransferase enzymes on cytosine nucleotides in a CpG dinucleotide sequence (also called "CpG islands" when densely clustered). Analysis of the pattern of methylation in a given region of DNA (which can be a promoter) can be achieved through a method called bisulfite mapping. Methylated cytosine residues are unchanged by the treatment, whereas unmethylated ones are changed to uracil. The differences are analyzed by DNA sequencing or by methods developed to quantify SNPs, such as Pyrosequencing (Biotage) or MassArray (Sequenom), measuring the relative amounts of C/T at the CG dinucleotide. Abnormal methylation patterns are thought to be involved in oncogenesis.
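The logic of the bisulfite readout described above can be illustrated with a short simulation: cytosines that are not methylated are converted and subsequently read as thymine, while methylated cytosines are protected and still read as cytosine. The following Python sketch (not from the article; the sequence and methylated positions are hypothetical) performs that conversion.

```python
def bisulfite_convert(seq, methylated_positions):
    """Simulate the readout of bisulfite treatment followed by sequencing:
    unmethylated cytosines are deaminated to uracil and read as T,
    while methylated cytosines (5mC) are protected and still read as C.

    `methylated_positions` is a set of 0-based indices of methylated cytosines.
    """
    out = []
    for i, base in enumerate(seq.upper()):
        if base == "C" and i not in methylated_positions:
            out.append("T")        # unmethylated C -> U, sequenced as T
        else:
            out.append(base)       # methylated C (and A/G/T) unchanged
    return "".join(out)

# hypothetical example: a short CpG-containing fragment with one methylated cytosine
print(bisulfite_convert("ACGTCGATCG", methylated_positions={4}))  # -> "ATGTCGATTG"
```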
Histone acetylation is also an important process in transcription. Histone acetyltransferase enzymes (HATs) such as CREB-binding protein also dissociate the DNA from the histone complex, allowing transcription to proceed. Often, DNA methylation and histone deacetylation work together in gene silencing. The combination of the two seems to be a signal for DNA to be packed more densely, lowering gene expression.
Regulation of transcription
Regulation of transcription thus controls when transcription occurs and how much RNA is created. Transcription of a gene by RNA polymerase can be regulated by several mechanisms.
Specificity factors alter the specificity of RNA polymerase for a given promoter or set of promoters, making it more or less likely to bind to them (i.e., sigma factors used in prokaryotic transcription).
Repressors bind to the Operator, coding sequences on the DNA strand that are close to or overlapping the promoter region, impeding RNA polymerase's progress along the strand, thus impeding the expression of the gene. The image to the right demonstrates regulation by a repressor in the lac operon.
General transcription factors position RNA polymerase at the start of a protein-coding sequence and then release the polymerase to transcribe the mRNA.
Activators enhance the interaction between RNA polymerase and a particular promoter, encouraging the expression of the gene. Activators do this by increasing the attraction of RNA polymerase for the promoter, through interactions with subunits of the RNA polymerase or indirectly by changing the structure of the DNA.
Enhancers are sites on the DNA helix that are bound by activators in order to loop the DNA bringing a specific promoter to the initiation complex. Enhancers are much more common in eukaryotes than prokaryotes, where only a few examples exist (to date).
Silencers are regions of DNA sequences that, when bound by particular transcription factors, can silence expression of the gene.
Regulation by RNA
RNA can be an important regulator of gene activity, e.g. by microRNA (miRNA), antisense RNA, or long non-coding RNA (lncRNA). LncRNAs differ from mRNAs in the sense that they have specified subcellular locations and functions. They were first discovered in the nucleus and chromatin, and their known localizations and functions are now highly diverse. Some still reside in chromatin, where they interact with proteins. Some of these chromatin-associated lncRNAs ultimately affect gene expression in neuronal disorders such as Parkinson's, Huntington's, and Alzheimer's disease, while others, such as PNCTR (pyrimidine-rich non-coding transcript), play a role in lung cancer. Given their role in disease, lncRNAs are potential biomarkers and may be useful targets for drugs or gene therapy, although there are no approved drugs that target lncRNAs yet. The number of lncRNAs in the human genome remains poorly defined, but some estimates range from 16,000 to 100,000 lncRNA genes.
Epigenetic gene regulation
Epigenetics refers to modifications of gene activity that do not change the DNA or RNA sequence. Epigenetic modifications are a key factor in influencing gene expression. They occur on genomic DNA and histones, and these chemical modifications regulate gene expression efficiently. There are several modifications of DNA (usually methylation) and more than 100 modifications of RNA in mammalian cells. These modifications result in altered protein binding to DNA and changes in RNA stability and translation efficiency.
Special cases in human biology and disease
Regulation of transcription in cancer
In vertebrates, the majority of gene promoters contain a CpG island with numerous CpG sites. When many of a gene's promoter CpG sites are methylated the gene becomes silenced. Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs. In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-expressed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers).
Regulation of transcription in addiction
One of the cardinal features of addiction is its persistence. The persistent behavioral changes appear to be due to long-lasting changes, resulting from epigenetic alterations affecting gene expression, within particular regions of the brain. Drugs of abuse cause three types of epigenetic alteration in the brain. These are (1) histone acetylations and histone methylations, (2) DNA methylation at CpG sites, and (3) epigenetic downregulation or upregulation of microRNAs. (See Epigenetics of cocaine addiction for some details.)
Chronic nicotine intake in mice alters brain cell epigenetic control of gene expression through acetylation of histones. This increases expression in the brain of the protein FosB, important in addiction. Cigarette addiction was also studied in about 16,000 humans, including never smokers, current smokers, and those who had quit smoking for up to 30 years. In blood cells, more than 18,000 CpG sites (of the roughly 450,000 analyzed CpG sites in the genome) had frequently altered methylation among current smokers. These CpG sites occurred in over 7,000 genes, or roughly a third of known human genes. The majority of the differentially methylated CpG sites returned to the level of never-smokers within five years of smoking cessation. However, 2,568 CpGs among 942 genes remained differentially methylated in former versus never smokers. Such remaining epigenetic changes can be viewed as “molecular scars” that may affect gene expression.
In rodent models, drugs of abuse, including cocaine, methamphetamine, alcohol and tobacco smoke products, all cause DNA damage in the brain. During repair of DNA damages some individual repair events can alter the methylation of DNA and/or the acetylations or methylations of histones at the sites of damage, and thus can contribute to leaving an epigenetic scar on chromatin.
Such epigenetic scars likely contribute to the persistent epigenetic changes found in addiction.
Regulation of transcription in learning and memory
In mammals, methylation of cytosine in DNA is a major regulatory mediator. Methylated cytosines primarily occur in dinucleotide sequences where cytosine is followed by a guanine, a CpG site. The total number of CpG sites in the human genome is approximately 28 million, and generally about 70% of all CpG sites have a methylated cytosine.
In a rat, a painful learning experience, contextual fear conditioning, can result in a life-long fearful memory after a single training event. Cytosine methylation is altered in the promoter regions of about 9.17% of all genes in the hippocampus neuron DNA of a rat that has been subjected to a brief fear conditioning experience. The hippocampus is where new memories are initially stored.
Methylation of CpGs in a promoter region of a gene represses transcription while methylation of CpGs in the body of a gene increases expression. TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene.
When contextual fear conditioning is applied to a rat, more than 5,000 differentially methylated regions (DMRs) (of 500 nucleotides each) occur in the rat hippocampus neural genome both one hour and 24 hours after the conditioning in the hippocampus. This causes about 500 genes to be up-regulated (often due to demethylation of CpG sites in a promoter region) and about 1,000 genes to be down-regulated (often due to newly formed 5-methylcytosine at CpG sites in a promoter region). The pattern of induced and repressed genes within neurons appears to provide a molecular basis for forming the first transient memory of this training event in the hippocampus of the rat brain.
Post-transcriptional regulation
After the DNA is transcribed and mRNA is formed, there must be some sort of regulation on how much the mRNA is translated into proteins. Cells do this by modulating the capping, splicing, addition of a Poly(A) Tail, the sequence-specific nuclear export rates, and, in several contexts, sequestration of the RNA transcript. These processes occur in eukaryotes but not in prokaryotes. This modulation is a result of a protein or transcript that, in turn, is regulated and may have an affinity for certain sequences.
Three prime untranslated regions and microRNAs
Three prime untranslated regions (3'-UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression. Such 3'-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins. By binding to specific sites within the 3'-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3'-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of a mRNA.
The 3'-UTR often contains miRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3'-UTRs. Among all regulatory motifs within the 3'-UTRs (e.g. including silencer regions), MREs make up about half of the motifs.
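The seed-matching logic behind these binding sites can be sketched in a few lines of code. The fragment below is a minimal illustration only: the miRNA and 3'-UTR sequences are invented, and real target-prediction tools additionally weigh site type, conservation and sequence context.

```python
# Minimal sketch: scan a 3'-UTR for matches to a miRNA seed (positions 2-8).
# Both sequences below are invented for illustration.

def reverse_complement(rna: str) -> str:
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(rna))

def find_seed_matches(mirna: str, utr: str) -> list:
    """Return 0-based 3'-UTR positions complementary to the miRNA seed."""
    seed = mirna[1:8]                      # nucleotides 2-8 of the miRNA
    site = reverse_complement(seed)        # what the mRNA must contain
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UGGAAUGUAAAGAAGUAUGUAU"           # hypothetical miRNA, 5'->3'
utr = "AAGCACAUUCCAUUGUACAUUCCAGCA"        # hypothetical 3'-UTR fragment
print(find_seed_matches(mirna, utr))       # [4, 16]: two candidate sites
```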
As of 2014, the miRBase web site, an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biologic species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes). Friedman et al. estimate that >45,000 miRNA target sites within human mRNA 3'-UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.
Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold).
The effects of miRNA dysregulation of gene expression seem to be important in cancer. For instance, in gastrointestinal cancers, a 2015 paper identified nine miRNAs as epigenetically altered and effective in down-regulating DNA repair enzymes.
The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depressive disorder, Parkinson's disease, Alzheimer's disease and autism spectrum disorders.
Regulation of translation
The translation of mRNA can also be controlled by a number of mechanisms, mostly at the level of initiation. Recruitment of the small ribosomal subunit can indeed be modulated by mRNA secondary structure, antisense RNA binding, or protein binding. In both prokaryotes and eukaryotes, a large number of RNA binding proteins exist, which often are directed to their target sequence by the secondary structure of the transcript, which may change depending on certain conditions, such as temperature or presence of a ligand (aptamer). Some transcripts act as ribozymes and self-regulate their expression.
Examples of gene regulation
Enzyme induction is a process in which a molecule (e.g., a drug) induces (i.e., initiates or enhances) the expression of an enzyme.
The induction of heat shock proteins in the fruit fly Drosophila melanogaster.
The Lac operon is an interesting example of how gene expression can be regulated.
Viruses, despite having only a few genes, possess mechanisms to regulate their gene expression, typically into an early and late phase, using collinear systems regulated by anti-terminators (lambda phage) or splicing modulators (HIV).
Gal4 is a transcriptional activator that controls the expression of GAL1, GAL7, and GAL10 (all of which code for enzymes that metabolize galactose in yeast). The GAL4/UAS system has been used in a variety of organisms across various phyla to study gene expression.
Developmental biology
A large number of studied regulatory systems come from developmental biology. Examples include:
The colinearity of the Hox gene cluster with their nested antero-posterior patterning
Pattern generation of the hand (digits – interdigits): the gradient of sonic hedgehog (a secreted inducing factor) from the zone of polarizing activity in the limb creates a gradient of active Gli3, which activates Gremlin, which in turn inhibits BMPs also secreted in the limb, resulting in the formation of an alternating pattern of activity through this reaction–diffusion system.
Somitogenesis is the creation of segments (somites) from a uniform tissue (Pre-somitic Mesoderm). They are formed sequentially from anterior to posterior. This is achieved in amniotes possibly by means of two opposing gradients, Retinoic acid in the anterior (wavefront) and Wnt and Fgf in the posterior, coupled to an oscillating pattern (segmentation clock) composed of FGF + Notch and Wnt in antiphase.
Sex determination in the soma of a Drosophila requires sensing the ratio of autosomal genes to sex chromosome-encoded genes, which in females results in the production of the Sex-lethal (Sxl) splicing factor and, ultimately, the female isoform of doublesex.
Circuitry
Up-regulation and down-regulation
Up-regulation is a process which occurs within a cell triggered by a signal (originating internal or external to the cell), which results in increased expression of one or more genes and as a result the proteins encoded by those genes. Conversely, down-regulation is a process resulting in decreased gene and corresponding protein expression.
Up-regulation occurs, for example, when a cell is deficient in some kind of receptor. In this case, more receptor protein is synthesized and transported to the membrane of the cell and, thus, the sensitivity of the cell is brought back to normal, reestablishing homeostasis.
Down-regulation occurs, for example, when a cell is overstimulated by a neurotransmitter, hormone, or drug for a prolonged period of time, and the expression of the receptor protein is decreased in order to protect the cell (see also tachyphylaxis).
Inducible vs. repressible systems
Gene regulation can be summarized by the response of the respective system:
Inducible systems - An inducible system is off unless there is the presence of some molecule (called an inducer) that allows for gene expression. The molecule is said to "induce expression". The manner by which this happens is dependent on the control mechanisms as well as differences between prokaryotic and eukaryotic cells.
Repressible systems - A repressible system is on except in the presence of some molecule (called a corepressor) that suppresses gene expression. The molecule is said to "repress expression". The manner by which this happens is dependent on the control mechanisms as well as differences between prokaryotic and eukaryotic cells.
The GAL4/UAS system is an example of both an inducible and repressible system. Gal4 binds an upstream activation sequence (UAS) to activate the transcription of the GAL1/GAL7/GAL10 cassette. On the other hand, a MIG1 response to the presence of glucose can inhibit GAL4 and therefore stop the expression of the GAL1/GAL7/GAL10 cassette.
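The qualitative difference between the two kinds of system can be illustrated with simple Hill-function dose-response curves. The sketch below uses invented parameters (thresholds, Hill coefficients, basal and maximal rates) and is not a model of the GAL or MIG1 systems themselves.

```python
# Toy dose-response curves for an inducible and a repressible system.
# All parameter values are arbitrary illustrations.
import numpy as np

def inducible(inducer, K=1.0, n=2, basal=0.05, vmax=1.0):
    """Low expression without inducer, rising as inducer accumulates."""
    return basal + (vmax - basal) * inducer**n / (K**n + inducer**n)

def repressible(corepressor, K=1.0, n=2, basal=0.05, vmax=1.0):
    """High expression by default, falling as corepressor accumulates."""
    return basal + (vmax - basal) * K**n / (K**n + corepressor**n)

conc = np.logspace(-2, 2, 5)               # effector concentration (a.u.)
print(np.round(inducible(conc), 3))        # rises from ~basal toward vmax
print(np.round(repressible(conc), 3))      # falls from ~vmax toward basal
```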
Theoretical circuits
Repressor/Inducer: an activation of a sensor results in the change of expression of a gene
negative feedback: the gene product downregulates its own production directly or indirectly (a minimal simulation sketch follows this list), which can result in
keeping transcript levels constant/proportional to a factor
inhibition of run-away reactions when coupled with a positive feedback loop
creating an oscillator by taking advantage of the time delay between transcription and translation, given that the mRNA and protein half-lives are short
positive feedback: the gene product upregulates its own production directly or indirectly, which can result in
signal amplification
bistable switches when two genes inhibit each other and both have positive feedback
pattern generation
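As a minimal illustration of the negative-feedback case, the sketch below integrates a one-variable production–degradation model with and without autorepression. All rate constants are invented, and simple Euler integration is used only for brevity.

```python
# Toy comparison: constitutive expression vs negative autoregulation.
# dx/dt = production(x) - k_deg * x; all parameters are invented.

def simulate(production, k_deg=1.0, x0=0.0, dt=0.01, t_end=10.0):
    x, trajectory = x0, []
    for _ in range(int(t_end / dt)):
        x += dt * (production(x) - k_deg * x)
        trajectory.append(x)
    return trajectory

beta = 5.0   # maximal production rate
K = 1.0      # repression threshold of the autoregulated gene

constitutive = simulate(lambda x: beta)
autoregulated = simulate(lambda x: beta * K / (K + x))

# The constitutive gene settles at beta/k_deg; the autoregulated gene settles
# at the root of beta*K/(K + x) = k_deg*x, a lower level that is also less
# sensitive to fluctuations in beta.
print(round(constitutive[-1], 2), round(autoregulated[-1], 2))  # 5.0  ~1.79
```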
Study methods
In general, most experiments investigating differential expression use whole-cell extracts of RNA, called steady-state levels, to determine which genes changed and by how much. These measurements are, however, not informative of where the regulation has occurred and may mask conflicting regulatory processes (see post-transcriptional regulation), yet steady-state analysis remains the most common approach (quantitative PCR and DNA microarray).
When studying gene expression, there are several methods to look at the various stages. In eukaryotes these include:
The local chromatin environment of the region can be determined by ChIP-chip analysis by pulling down RNA Polymerase II, Histone 3 modifications, Trithorax-group protein, Polycomb-group protein, or any other DNA-binding element to which a good antibody is available.
Epistatic interactions can be investigated by synthetic genetic array analysis
Due to post-transcriptional regulation, transcription rates and total RNA levels differ significantly. To measure the transcription rates nuclear run-on assays can be done and newer high-throughput methods are being developed, using thiol labelling instead of radioactivity.
Only about 5% of the RNA polymerised in the nucleus actually exits it, because not only introns but also abortive products and nonsense transcripts are degraded there. The differences between nuclear and cytoplasmic levels can therefore be seen by separating the two fractions by gentle lysis.
Alternative splicing can be analysed with a splicing array or with a tiling array (see DNA microarray).
All in vivo RNA is complexed as RNPs. The quantity of transcripts bound to a specific protein can also be analysed by RIP-Chip. For example, pulling down DCP2 gives an indication of sequestered transcripts, while ribosome-bound RNA gives an indication of transcripts active in translation (although a more dated method, polysome fractionation, is still popular in some labs).
Protein levels can be analysed by mass spectrometry, which can be compared only to quantitative PCR data, as microarray data are relative and not absolute.
RNA and protein degradation rates are measured by means of transcription inhibitors (actinomycin D or α-amanitin) or translation inhibitors (cycloheximide), respectively (a worked decay-fit sketch follows this list).
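As a sketch of how a decay rate is extracted from such an experiment, the following fits an exponential to a simulated time course measured after transcription is blocked. SciPy is assumed to be available, and the data points are synthetic.

```python
# Estimate an mRNA half-life from abundance measured after a transcription
# block (e.g., actinomycin D). The time course here is simulated.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a0, k):
    return a0 * np.exp(-k * t)

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # hours after inhibitor
true_half_life = 2.0                                  # hours (simulation only)
abundance = decay(t, 100.0, np.log(2) / true_half_life)
abundance *= np.random.default_rng(0).normal(1.0, 0.05, t.size)  # 5% noise

(a0, k), _ = curve_fit(decay, t, abundance, p0=(abundance[0], 0.5))
print(f"estimated half-life: {np.log(2) / k:.2f} h")  # close to 2 h
```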
See also
Artificial transcription factors (small molecules that mimic transcription factor protein)
Cellular model
Conserved non-coding DNA sequence
Enhancer (genetics)
Gene structure
Spatiotemporal gene expression
Regulator gene glucosyltransferases (Rgg/SHP) systems
Notes and references
Bibliography
External links
Plant Transcription Factor Database and Plant Transcriptional Regulation Data and Analysis Platform
ChIPBase An open database for decoding the transcriptional regulatory networks of non-coding RNAs and protein-coding genes from ChIP-seq data.
Gene expression
DNA
RNA
Post-translational modification
Evolutionary developmental biology | Regulation of gene expression | [
"Chemistry",
"Biology"
] | 4,969 | [
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
553,182 | https://en.wikipedia.org/wiki/Activator%20%28genetics%29 | A transcriptional activator is a protein (transcription factor) that increases transcription of a gene or set of genes. Activators are considered to have positive control over gene expression, as they function to promote gene transcription and, in some cases, are required for the transcription of genes to occur. Most activators are DNA-binding proteins that bind to enhancers or promoter-proximal elements. The DNA site bound by the activator is referred to as an "activator-binding site". The part of the activator that makes protein–protein interactions with the general transcription machinery is referred to as an "activating region" or "activation domain".
Most activators function by binding sequence-specifically to a regulatory DNA site located near a promoter and making protein–protein interactions with the general transcription machinery (RNA polymerase and general transcription factors), thereby facilitating the binding of the general transcription machinery to the promoter. Other activators help promote gene transcription by triggering RNA polymerase to release from the promoter and proceed along the DNA. At times, RNA polymerase can pause shortly after leaving the promoter; activators also function to allow these "stalled" RNA polymerases to continue transcription.
The activity of activators can be regulated. Some activators have an allosteric site and can only function when a certain molecule binds to this site, essentially turning the activator on. Post-translational modifications to activators can also regulate activity, increasing or decreasing activity depending on the type of modification and activator being modified.
In some cells, usually eukaryotes, multiple activators can bind to the binding-site; these activators tend to bind cooperatively and interact synergistically.
Structure
Activator proteins consist of two main domains: a DNA-binding domain that binds to a DNA sequence specific to the activator, and an activation domain that functions to increase gene transcription by interacting with other molecules. Activator DNA-binding domains come in a variety of conformations, including the helix-turn-helix, zinc finger, and leucine zipper among others. These DNA-binding domains are specific to a certain DNA sequence, allowing activators to turn on only certain genes. Activation domains also come in a variety of types that are categorized based on the domain's amino acid sequence, including alanine-rich, glutamine-rich, and acidic domains. These domains are not as specific, and tend to interact with a variety of target molecules.
Activators can also have allosteric sites that are responsible for turning the activators themselves on and off.
Mechanism of action
Activator binding to regulatory sequences
Within the grooves of the DNA double helix, functional groups of the base pairs are exposed. The sequence of the DNA thus creates a unique pattern of surface features, including areas of possible hydrogen bonding, ionic bonding, as well as hydrophobic interactions. Activators also have unique sequences of amino acids with side chains that are able to interact with the functional groups in DNA. Thus, the pattern of amino acid side chains making up an activator protein will be complementary to the surface features of the specific DNA regulatory sequence it was designed to bind to. The complementary interactions between the amino acids of the activator protein and the functional groups of the DNA create an "exact-fit" specificity between the activator and its regulatory DNA sequence.
Most activators bind to the major grooves of the double helix, as these areas tend to be wider, but there are some that will bind to the minor grooves.
Activator-binding sites may be located very close to the promoter or numerous base pairs away. If the regulatory sequence is located far away, the DNA will loop over itself (DNA looping) in order for the bound activator to interact with the transcription machinery at the promoter site.
In prokaryotes, multiple genes can be transcribed together (operon), and are thus controlled under the same regulatory sequence. In eukaryotes, genes tend to be transcribed individually, and each gene is controlled by its own regulatory sequences. Regulatory sequences where activators bind are commonly found upstream from the promoter, but they can also be found downstream or even within introns in eukaryotes.
Functions to increase gene transcription
Binding of the activator to its regulatory sequence promotes gene transcription by enabling RNA polymerase activity. This is done through various mechanisms, such as recruiting transcription machinery to the promoter and triggering RNA polymerase to continue into elongation.
Recruitment
Activator-controlled genes require the binding of activators to regulatory sites in order to recruit the necessary transcription machinery to the promoter region.
Activator interactions with RNA polymerase are mostly direct in prokaryotes and indirect in eukaryotes. In prokaryotes, activators tend to make contact with the RNA polymerase directly in order to help bind it to the promoter. In eukaryotes, activators mostly interact with other proteins, and these proteins will then be the ones to interact with the RNA polymerase.
Prokaryotes
In prokaryotes, genes controlled by activators have promoters that are unable to strongly bind to RNA polymerase by themselves. Thus, activator proteins help to promote the binding of the RNA polymerase to the promoter. This is done through various mechanisms. Activators may bend the DNA in order to better expose the promoter so the RNA polymerase can bind more effectively. Activators may make direct contact with the RNA polymerase and secure it to the promoter.
Eukaryotes
In eukaryotes, activators have a variety of different target molecules that they can recruit in order to promote gene transcription. They can recruit other transcription factors and cofactors that are needed in transcription initiation.
Activators can recruit molecules known as coactivators. These coactivator molecules can then perform functions necessary for beginning transcription in place of the activators themselves, such as chromatin modifications.
DNA is much more condensed in eukaryotes; thus, activators tend to recruit proteins that are able to restructure the chromatin so the promoter is more easily accessible by the transcription machinery. Some proteins will rearrange the layout of nucleosomes along the DNA in order to expose the promoter site (ATP-dependent chromatin remodeling complexes). Other proteins affect the binding between histones and DNA via post-translational histone modifications, allowing the DNA tightly wrapped into nucleosomes to loosen.
All of these recruited molecules work together in order to ultimately recruit the RNA polymerase to the promoter site.
Release of RNA polymerase
Activators can promote gene transcription by signaling the RNA polymerase to move beyond the promoter and proceed along the DNA, initiating the beginning of transcription. The RNA polymerase can sometimes pause shortly after beginning transcription, and activators are required to release RNA polymerase from this “stalled” state. Multiple mechanisms exist for releasing these "stalled" RNA polymerases. Activators may act simply as a signal to trigger the continued movement of the RNA polymerase. If the DNA is too condensed to allow RNA polymerase to continue transcription, activators may recruit proteins that can restructure the DNA so any blocks are removed. Activators may also promote the recruitment of elongation factors, which are necessary for the RNA polymerase to continue transcription.
Regulation of activators
There are different ways in which the activity of activators themselves can be regulated, in order to ensure that activators are stimulating gene transcription at appropriate times and levels. Activator activity can increase or decrease in response to environmental stimuli or other intracellular signals.
Activation of activator proteins
Activators often must be "turned on" before they can promote gene transcription. The activity of activators is controlled by the ability of the activator to bind to its regulatory site along the DNA. The DNA-binding domain of the activator has an active form and an inactive form, which are controlled by the binding of molecules known as allosteric effectors to the allosteric site of the activator.
Activators in their inactive form are not bound to any allosteric effectors. When inactive, the activator is unable to bind to its specific regulatory sequence in the DNA, and thus has no regulatory effect on the transcription of genes.
When an allosteric effector binds to the allosteric site of an activator, a conformational change in the DNA-binding domain occurs, which allows the protein to bind to the DNA and increase gene transcription.
Post-translational modifications
Some activators are able to undergo post-translational modifications that have an effect on their activity within a cell. Processes such as phosphorylation, acetylation, and ubiquitination, among others, have been seen to regulate the activity of activators. Depending on the chemical group being added, as well as the nature of the activator itself, post-translational modifications can either increase or decrease the activity of an activator. For example, acetylation has been seen to increase the activity of some activators through mechanisms such as increasing DNA-binding affinity. On the other hand, ubiquitination decreases the activity of activators, as ubiquitin marks proteins for degradation after they have performed their respective functions.
Synergy
In prokaryotes, a lone activator protein is able to promote transcription. In eukaryotes, usually more than one activator assembles at the binding-site, forming a complex that acts to promote transcription. These activators bind cooperatively at the binding-site, meaning that the binding of one activator increases the affinity of the site to bind another activator (or in some cases another transcriptional regulator) thus making it easier for multiple activators to bind at the site. In these cases, the activators interact with each other synergistically, meaning that the rate of transcription that is achieved from multiple activators working together is much higher than the additive effects of the activators if they were working individually.
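Synergy can be illustrated with a standard statistical-mechanics toy model of two activator-binding sites; the numbers below are invented and are not taken from any particular study. If productive transcription requires both activators to be bound, a cooperativity factor greater than one raises the co-occupancy, and hence the output, far more than additively.

```python
# Toy model: probability that two activator sites are simultaneously occupied.
# q_a and q_b are binding weights (concentration / Kd); omega > 1 models
# cooperative (synergistic) binding. All values are invented.

def occupancy_both_bound(q_a: float, q_b: float, omega: float) -> float:
    """Probability of the doubly bound state among the four binding states."""
    partition = 1 + q_a + q_b + omega * q_a * q_b
    return omega * q_a * q_b / partition

for omega in (1.0, 5.0, 25.0):   # independent binding vs rising cooperativity
    print(omega, round(occupancy_both_bound(0.2, 0.2, omega), 3))
```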
Examples
Regulation of maltose catabolism
The breakdown of maltose in Escherichia coli is controlled by gene activation. The genes that code for the enzymes responsible for maltose catabolism can only be transcribed in the presence of an activator. The activator that controls transcription of the maltose enzymes is "off" in the absence of maltose. In its inactive form, the activator is unable to bind to DNA and promote transcription of the maltose genes.
When maltose is present in the cell, it binds to the allosteric site of the activator protein, causing a conformational change in the DNA-binding domain of the activator. This conformational change "turns on" the activator by allowing it to bind to its specific regulatory DNA sequence. Binding of the activator to its regulatory site promotes RNA polymerase binding to the promoter and thus transcription, producing the enzymes that are needed to break down the maltose that has entered the cell.
Regulation of the lac operon
The catabolite activator protein (CAP), otherwise known as cAMP receptor protein (CRP), activates transcription at the lac operon of the bacterium Escherichia coli. Cyclic adenosine monophosphate (cAMP) is produced during glucose starvation; this molecule acts as an allosteric effector that binds to CAP and causes a conformational change that allows CAP to bind to a DNA site located adjacent to the lac promoter. CAP then makes a direct protein–protein interaction with RNA polymerase that recruits RNA polymerase to the lac promoter.
See also
CRISPR activation
Bacterial transcription
Coactivator (genetics)
Eukaryotic transcription
Glossary of gene expression terms
Operon
Promoter (biology)
Regulation of gene expression
Repressor
Squelching
Transcription factor
References
Transcription factors | Activator (genetics) | [
"Chemistry",
"Biology"
] | 2,486 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
553,950 | https://en.wikipedia.org/wiki/X%20band | The X band is the designation for a band of frequencies in the microwave radio region of the electromagnetic spectrum. In some cases, such as in communication engineering, the frequency range of the X band is rather indefinitely set at approximately 7.0–11.2 GHz. In radar engineering, the frequency range is specified by the Institute of Electrical and Electronics Engineers (IEEE) as 8.0–12.0 GHz. The X band is used for radar, satellite communication, and wireless computer networks.
Radar
X band is used in radar applications, including continuous-wave, pulsed, single-polarization, dual-polarization, synthetic aperture radar, and phased arrays. X-band radar frequency sub-bands are used in civil, military, and government institutions for weather monitoring, air traffic control, maritime vessel traffic control, defense tracking, and vehicle speed detection for law enforcement.
X band is often used in modern radars. The shorter wavelengths of the X band provide higher-resolution imagery from high-resolution imaging radars for target identification and discrimination. X-band weather radars offer significant potential for short-range observations, but the loss of signal strength (attenuation) under rainy conditions limits their use at longer range.
Terrestrial communications and networking
The X band 10.15 to 10.7 GHz segment is used for terrestrial broadband in many countries, such as Brazil, Mexico, Saudi Arabia, Denmark, Ukraine, Spain and Ireland. Alvarion, CBNL, CableFree and Ogier make systems for this, though each has a proprietary airlink. DOCSIS (Data Over Cable Service Interface Specification), the standard used for providing cable internet to customers, uses some X band frequencies. The home / business customer-premises equipment (CPE) has a single coaxial cable with a power adapter connecting to an ordinary cable modem. The local oscillator is usually 9750 MHz, the same as for a Ku band satellite TV LNB. Two-way applications such as broadband typically use a 350 MHz TX offset.
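The downconversion arithmetic implied by the 9750 MHz local oscillator can be sketched as follows. The 10.45 GHz example channel is an assumption for illustration, and the direction of the 350 MHz duplex offset depends on the particular band plan.

```python
# IF = RF - LO for the 10.15-10.7 GHz terrestrial segment.
LO_MHZ = 9750.0        # local oscillator, as in a Ku-band satellite LNB
TX_OFFSET_MHZ = 350.0  # typical duplex offset quoted for two-way broadband

def downlink_if(rf_mhz: float, lo_mhz: float = LO_MHZ) -> float:
    """Intermediate frequency presented to the cable modem after the LNB."""
    return rf_mhz - lo_mhz

rf = 10450.0                                        # example channel in MHz
print(downlink_if(rf))                              # 700.0 MHz
print(downlink_if(10150.0), downlink_if(10700.0))   # band edges map to 400-950 MHz
print(rf - TX_OFFSET_MHZ, rf + TX_OFFSET_MHZ)       # possible paired TX channels
```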
Space communications
Space communications for science and research
Small portions of the X band are assigned by the International Telecommunication Union (ITU) exclusively for deep space telecommunications. The primary user of this allocation is the American NASA Deep Space Network (DSN). DSN facilities are in Goldstone, California (in the Mojave Desert), near Canberra, Australia, and near Madrid, Spain, and provide continual communications from the Earth to almost any point in the Solar System independent of Earth rotation. (DSN stations are also capable of using the older and lower S band deep-space radio communications allocations, and some higher frequencies on a more-or-less experimental basis, such as in the K band.)
Notable deep space probe programs that have employed X band communications include the Viking Mars landers; the Voyager missions to Jupiter, Saturn, and beyond; the Galileo Jupiter orbiter; the New Horizons mission to Pluto and the Kuiper belt, the Curiosity rover and the Cassini-Huygens Saturn orbiter.
An important use of the X band communications came with the two Viking program landers. When the planet Mars was passing near or behind the Sun, as seen from the Earth, a Viking lander would transmit two simultaneous continuous-wave carriers, one in the S band and one in the X band in the direction of the Earth, where they were picked up by DSN ground stations. By making simultaneous measurements at the two different frequencies, the resulting data enabled theoretical physicists to verify the mathematical predictions of Albert Einstein's General Theory of Relativity. These results are some of the best confirmations of the General Theory of Relativity.
The new European double Mars Mission ExoMars will also use X band communication, on the instrument LaRa, to study the internal structure of Mars, and to make precise measurements of the rotation and orientation of Mars by monitoring two-way Doppler frequency shifts between the surface platform and Earth. It will also detect variations in angular momentum due to the redistribution of masses, such as the migration of ice from the polar caps to the atmosphere.
X band NATO frequency requirements
The International Telecommunication Union (ITU), the international body which allocates radio frequencies for civilian use, is not authorised to allocate frequency bands for military radio communication. This is also the case pertaining to X band military communications satellites. However, in order to meet military radio spectrum requirements, e.g. for fixed-satellite service and mobile-satellite service, the NATO nations negotiated the NATO Joint Civil/Military Frequency Agreement (NJFA).
Amateur radio
The Radio Regulations of the International Telecommunication Union allow amateur radio operations in the frequency range 10.000 to 10.500 GHz, and amateur satellite operations are allowed in the range 10.450 to 10.500 GHz. This is known as the 3-centimeter band by amateurs and the X-band by AMSAT.
Other uses
Motion detectors often use 10.525 GHz. 10.4 GHz is proposed for traffic light crossing detectors. Comreg in Ireland has allocated 10.450 GHz for traffic sensors as SRD.
Many electron paramagnetic resonance (EPR) spectrometers operate near 9.8 GHz.
Particle accelerators may be powered by X-band RF sources. The frequencies are then standardized at 11.9942 GHz (Europe) or 11.424 GHz (US), which is the second harmonic of C-band and fourth harmonic of S-band. The European X-band frequency is used for the Compact Linear Collider (CLIC).
See also
Cassegrain reflector
Directional antenna
XTAR
Sea-based X band Radar
New Horizons telecommunications
Voyager program#Spacecraft design
Earth observation satellites transmission frequencies
TerraSAR-X: a German Earth observation satellite
References
External links
United States Frequency Allocations
10GHz wideband transceiver
Microwave bands
Radar
Radio frequency propagation | X band | [
"Physics"
] | 1,190 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
1,391,942 | https://en.wikipedia.org/wiki/Orientation%20%28vector%20space%29 | The orientation of a real vector space or simply orientation of a vector space is the arbitrary choice of which ordered bases are "positively" oriented and which are "negatively" oriented. In the three-dimensional Euclidean space, right-handed bases are typically declared to be positively oriented, but the choice is arbitrary, as they may also be assigned a negative orientation. A vector space with an orientation selected is called an oriented vector space, while one not having an orientation selected, is called .
In mathematics, orientability is a broader notion that, in two dimensions, allows one to say when a cycle goes around clockwise or counterclockwise, and in three dimensions when a figure is left-handed or right-handed. In linear algebra over the real numbers, the notion of orientation makes sense in arbitrary finite dimension, and is a kind of asymmetry that makes a reflection impossible to replicate by means of a simple displacement. Thus, in three dimensions, it is impossible to make the left hand of a human figure into the right hand of the figure by applying a displacement alone, but it is possible to do so by reflecting the figure in a mirror. As a result, in the three-dimensional Euclidean space, the two possible basis orientations are called right-handed and left-handed (or right-chiral and left-chiral).
Definition
Let V be a finite-dimensional real vector space and let b1 and b2 be two ordered bases for V. It is a standard result in linear algebra that there exists a unique linear transformation A : V → V that takes b1 to b2. The bases b1 and b2 are said to have the same orientation (or be consistently oriented) if A has positive determinant; otherwise they have opposite orientations. The property of having the same orientation defines an equivalence relation on the set of all ordered bases for V. If V is non-zero, there are precisely two equivalence classes determined by this relation. An orientation on V is an assignment of +1 to one equivalence class and −1 to the other.
Every ordered basis lives in one equivalence class or another. Thus any choice of a privileged ordered basis for V determines an orientation: the orientation class of the privileged basis is declared to be positive.
For example, the standard basis on Rn provides a standard orientation on Rn (in turn, the orientation of the standard basis depends on the orientation of the Cartesian coordinate system on which it is built). Any choice of a linear isomorphism between V and Rn will then provide an orientation on V.
The ordering of elements in a basis is crucial. Two bases with a different ordering will differ by some permutation. They will have the same/opposite orientations according to whether the signature of this permutation is ±1. This is because the determinant of a permutation matrix is equal to the signature of the associated permutation.
Similarly, let A be a nonsingular linear mapping of vector space Rn to Rn. This mapping is orientation-preserving if its determinant is positive. For instance, in R3 a rotation around the Z Cartesian axis by an angle α is orientation-preserving: its matrix has rows (cos α, −sin α, 0), (sin α, cos α, 0) and (0, 0, 1), and its determinant is +1. A reflection in the XY Cartesian plane is not orientation-preserving: its matrix is diag(1, 1, −1), and its determinant is −1.
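A quick numeric check of the two examples above, assuming NumPy is available; the sign of the determinant decides whether the map preserves orientation.

```python
# Orientation preservation is decided by the sign of the determinant.
import numpy as np

alpha = 0.7  # any angle; the determinant is +1 regardless of alpha

rotation_z = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
                       [np.sin(alpha),  np.cos(alpha), 0.0],
                       [0.0,            0.0,           1.0]])
reflection_xy = np.diag([1.0, 1.0, -1.0])

print(np.linalg.det(rotation_z))     # ~ +1.0 -> orientation-preserving
print(np.linalg.det(reflection_xy))  #   -1.0 -> orientation-reversing
```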
Zero-dimensional case
The concept of orientation degenerates in the zero-dimensional case. A zero-dimensional vector space has only a single point, the zero vector. Consequently, the only basis of a zero-dimensional vector space is the empty set ∅. Therefore, there is a single equivalence class of ordered bases, namely, the class whose sole member is the empty set. This means that an orientation of a zero-dimensional space is a function assigning either +1 or −1 to that single equivalence class.
It is therefore possible to orient a point in two different ways, positive and negative.
Because there is only a single ordered basis ∅, a zero-dimensional vector space is the same as a zero-dimensional vector space with an ordered basis. Choosing +1 or −1 therefore chooses an orientation of every basis of every zero-dimensional vector space. If all zero-dimensional vector spaces are assigned this orientation, then, because all isomorphisms among zero-dimensional vector spaces preserve the ordered basis, they also preserve the orientation. This is unlike the case of higher-dimensional vector spaces, where there is no way to choose an orientation so that it is preserved under all isomorphisms.
However, there are situations where it is desirable to give different orientations to different points. For example, consider the fundamental theorem of calculus as an instance of Stokes' theorem. A closed interval [a, b] is a one-dimensional manifold with boundary, and its boundary is the set {a, b}. In order to get the correct statement of the fundamental theorem of calculus, the point b should be oriented positively, while the point a should be oriented negatively.
On a line
The one-dimensional case deals with an oriented line or directed line, which may be traversed in one of two directions. In real coordinate space, an oriented line is also known as an axis. There are two orientations to a line just as there are two orientations to an oriented circle (clockwise and anti-clockwise). A semi-infinite oriented line is called a ray. In the case of a line segment (a connected subset of a line), the two possible orientations result in directed line segments.
On a surface
An orientable surface sometimes has the selected orientation indicated by the orientation of a surface normal.
An oriented plane can be defined by a pseudovector.
Alternate viewpoints
Multilinear algebra
For any n-dimensional real vector space V we can form the kth-exterior power of V, denoted ΛkV. This is a real vector space whose dimension is the binomial coefficient "n choose k". The vector space ΛnV (called the top exterior power) therefore has dimension 1. That is, ΛnV is just a real line. There is no a priori choice of which direction on this line is positive. An orientation is just such a choice. Any nonzero linear form ω on ΛnV determines an orientation of V by declaring that x is in the positive direction when ω(x) > 0. To connect with the basis point of view we say that the positively-oriented bases are those on which ω evaluates to a positive number (since ω is an n-form we can evaluate it on an ordered set of n vectors, giving an element of R). The form ω is called an orientation form. If {ei} is a privileged basis for V and {ei∗} is the dual basis, then the orientation form giving the standard orientation is e1∗ ∧ e2∗ ∧ ⋯ ∧ en∗.
The connection of this with the determinant point of view is: the determinant of an endomorphism can be interpreted as the induced action on the top exterior power.
Lie group theory
Let B be the set of all ordered bases for V. Then the general linear group GL(V) acts freely and transitively on B. (In fancy language, B is a GL(V)-torsor). This means that as a manifold, B is (noncanonically) homeomorphic to GL(V). Note that the group GL(V) is not connected, but rather has two connected components according to whether the determinant of the transformation is positive or negative (except for GL0, which is the trivial group and thus has a single connected component; this corresponds to the canonical orientation on a zero-dimensional vector space). The identity component of GL(V) is denoted GL+(V) and consists of those transformations with positive determinant. The action of GL+(V) on B is not transitive: there are two orbits which correspond to the connected components of B. These orbits are precisely the equivalence classes referred to above. Since B does not have a distinguished element (i.e. a privileged basis) there is no natural choice of which component is positive. Contrast this with GL(V) which does have a privileged component: the component of the identity. A specific choice of homeomorphism between B and GL(V) is equivalent to a choice of a privileged basis and therefore determines an orientation.
More formally, the set of connected components π0(GL(V)) = GL(V)/GL+(V) is isomorphic to {+1, −1}, and the Stiefel manifold of n-frames in V is a GL(V)-torsor, so its quotient by GL+(V) is a torsor over {+1, −1}; that quotient has exactly two points, and a choice of one of them is an orientation.
Geometric algebra
The various objects of geometric algebra are charged with three attributes or features: attitude, orientation, and magnitude. For example, a vector has an attitude given by a straight line parallel to it, an orientation given by its sense (often indicated by an arrowhead) and a magnitude given by its length. Similarly, a bivector in three dimensions has an attitude given by the family of planes associated with it (possibly specified by the normal line common to these planes), an orientation (sometimes denoted by a curved arrow in the plane) indicating a choice of sense of traversal of its boundary (its circulation), and a magnitude given by the area of the parallelogram defined by its two vectors.
Orientation on manifolds
Each point p on an n-dimensional differentiable manifold has a tangent space TpM which is an n-dimensional real vector space. Each of these vector spaces can be assigned an orientation. Some orientations "vary smoothly" from point to point. Due to certain topological restrictions, this is not always possible. A manifold that admits a smooth choice of orientations for its tangent spaces is said to be orientable.
See also
References
External links
Linear algebra
Analytic geometry
Orientation (geometry) | Orientation (vector space) | [
"Physics",
"Mathematics"
] | 1,945 | [
"Topology",
"Space",
"Geometry",
"Linear algebra",
"Spacetime",
"Orientation (geometry)",
"Algebra"
] |
1,392,242 | https://en.wikipedia.org/wiki/Energy%20tower%20%28downdraft%29 | The energy tower is a device for producing electrical power. The brainchild of Dr. Phillip Carlson, expanded by Professor Dan Zaslavsky from the Technion. Energy towers spray water on hot air at the top of the tower, making the cooled air fall through the tower and drive a turbine at the tower's bottom.
Concept
An energy tower (also known as a downdraft energy tower, because the air flows down the tower) is a tall (1,000 meters) and wide (400 meters) hollow cylinder with a water spray system at the top. Pumps lift the water to the top of the tower and then spray the water inside the tower. Evaporation of water cools the hot, dry air hovering at the top. The cooled air, now denser than the outside warmer air, falls through the cylinder, spinning a turbine at the bottom. The turbine drives a generator which produces the electricity.
The greater the temperature difference between the air and water, the greater the energy efficiency. Therefore, downdraft energy towers should work best in a hot dry climate. Energy towers require large quantities of water. Salt water is acceptable, although care must be taken to prevent corrosion; desalination can help solve this problem.
The energy that is extracted from the air is ultimately derived from the sun, so this can be considered a form of solar power. Energy production continues at night, because air retains some of the day's heat after dark. However, power generation by the energy tower is affected by the weather: it slows down each time the ambient humidity increases (such as during a rainstorm), or the temperature falls.
A related approach is the solar updraft tower, which heats air in glass enclosures at ground level and sends the heated air up a tower driving turbines at the base. Updraft towers do not pump water, which increases their efficiency, but do require large amounts of land for the collectors. Land acquisition and collector construction costs for updraft towers must be compared to pumping infrastructure costs for downdraft collectors. Operationally, maintaining the collector structures for updraft towers must be compared to pumping costs and pump infrastructure maintenance.
Cost/efficiency
Zaslavsky and other authors estimate that depending on the site and financing costs, energy could be produced in the range of 1-4 cents per kWh, well below alternative energy sources other than hydro. Pumping the water requires about 50% of the turbine's output. Zaslavsky claims that the Energy Tower would achieve up to 70-80% of the Carnot limit. If the conversion efficiency turns out to be much lower, it is expected to have an adverse impact on projections made for cost of energy.
Projections made by Altmann and by Czisch about conversion efficiency and about cost of energy (cents/kWh) are based only on model calculations, no data on a working pilot plant have ever been collected.
Actual measurements on the 50 kW Manzanares pilot solar updraft tower found a conversion efficiency of 0.53%, although SBP believe that this could be increased to 1.3% in a large and improved 100 MW unit. This amounts to about 10% of the theoretical limit for the Carnot cycle. It is important to note a significant difference between the updraft and downdraft proposals: the use of water as a working medium dramatically increases the potential for thermal energy capture, and hence electrical generation, due to its specific heat capacity. While the design may have its problems (see next section) and the stated efficiency claims have yet to be demonstrated, it would be an error to extrapolate performance from one to the other simply because of similarities in the name.
Potential problems
In salty humid air corrosion rates can be very high. This concerns the tower and the turbines.
The technology requires a hot and arid climate. Such locations include the coast of West Africa, Western Australia, northern Chile, Namibia, the Red Sea, Persian Gulf, and the Gulf of California. Most of these regions are remote and thinly populated, and would require power to be transported over long distances to where it is needed. Alternatively, such plants could provide captive power for nearby industrial uses such as desalination plants, aluminium production via the Hall-Héroult process, or to generate hydrogen for ammonia production.
Humidity as a result of plant operation may be an issue for nearby communities. A 400 meter diameter powerplant producing a wind velocity of 22 meters per second must add about 15 grams of water per kilogram of air processed. This is equal to about 41 tonnes of water per second. In terms of humid air, this is roughly 10 cubic kilometers of very humid air each hour. Thus, a community even 100 kilometers away may be unpleasantly affected.
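A back-of-the-envelope check of these figures, assuming an air density of roughly 1.0 kg/m³ for the hot intake air (the other numbers are taken from the paragraph above):

```python
# Rough check of the water requirement quoted above. The air density is an
# assumption; the diameter, wind speed and 15 g/kg figure are from the text.
import math

diameter_m = 400.0
wind_speed_ms = 22.0
water_per_kg_air = 0.015          # 15 g of water added per kg of air
air_density_kg_m3 = 1.0           # assumed density of hot, dry intake air

area_m2 = math.pi * (diameter_m / 2.0) ** 2          # ~125,700 m^2
air_volume_m3_s = area_m2 * wind_speed_ms            # ~2.8e6 m^3/s
air_mass_kg_s = air_volume_m3_s * air_density_kg_m3
water_tonnes_s = air_mass_kg_s * water_per_kg_air / 1000.0

print(round(water_tonnes_s))                         # ~41 tonnes of water per second
print(round(air_volume_m3_s * 3600 / 1e9, 1))        # ~10 km^3 of air per hour
```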
Brine is a problem in proportion to the humidity created: since water's vapor pressure decreases with salinity, it is reasonable to expect at least as much brine as evaporated water. This means that a river of brine of roughly 41 tonnes per second flows away from the powerplant, while a river of saline water of roughly 82 tonnes per second flows in.
Large industrial consumers often locate near cheap sources of electricity. However, many of these desert regions also lack necessary infrastructure, increasing capital requirements and overall risk.
Demonstration project
Maryland-based Solar Wind Energy, Inc. was developing a tower.
Under the most recent design specifications, the tower designed for a site near San Luis, Arizona, has a gross production capacity of up to 1,250 megawatt hours per hour. Due to lower capacities during winter days, the average hourly output available for sale to the grid over the entire year is approximately 435 megawatt hours.
See also
Psychrometrics (not to be confused with Psychometrics)
Solar updraft tower
References
Zaslavsky, Dan; Rami Guetta et al. (December 2001). . Technion Israel, Israel - India Steering Committee. Retrieved on 2007-03-15.
Zwirn, Michael J. (January 1997). Energy Towers: Pros and Cons of the Arubot Sharav Alternative Energy Proposal. Arava Institute for Environmental Studies. Retrieved on 2006-12-22.
Zaslavsky, Dan (November, 1996). "Solar Energy Without a Collector". The 3rd Sabin Conference.
External links
Energy Towers, A complete brochure by Dan Zaslavsky, updated for December 2009
SHPEGS "open source" energy tower concept similar in some ways to the downdraft tower.
Prof. Dan Zaslavsky on the Technion faculty page.
A commercial company set to build this type of tower
Electric power
Energy conversion
Power station technology
Sustainable energy
Sustainable technologies | Energy tower (downdraft) | [
"Physics",
"Engineering"
] | 1,387 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
1,392,377 | https://en.wikipedia.org/wiki/Catagenesis%20%28biology%29 | Catagenesis is a somewhat archaic term from evolutionary biology referring to evolutionary directions that were considered "retrogressive." It was a term used in contrast to anagenesis, which in present usage denotes the evolution of a single population into a new form without branching lines of descent.
The earliest written reference to catagenesis comes from Edward Drinker Cope, in his article, On Catagenesis, published in The American Naturalist in 1884. In this article, he defines the "primitive energy", which evolution through time has specialized. He defines catagenesis as a return to the "primitive energy".
See also
Evolutionary biology
References
Evolutionary biology | Catagenesis (biology) | [
"Biology"
] | 136 | [
"Evolutionary biology"
] |
1,392,495 | https://en.wikipedia.org/wiki/Stiefel%20manifold | In mathematics, the Stiefel manifold is the set of all orthonormal k-frames in That is, it is the set of ordered orthonormal k-tuples of vectors in It is named after Swiss mathematician Eduard Stiefel. Likewise one can define the complex Stiefel manifold of orthonormal k-frames in and the quaternionic Stiefel manifold of orthonormal k-frames in . More generally, the construction applies to any real, complex, or quaternionic inner product space.
In some contexts, a non-compact Stiefel manifold is defined as the set of all linearly independent k-frames in Rn, Cn, or Hn; this is homotopy equivalent to the more restrictive definition, as the compact Stiefel manifold is a deformation retract of the non-compact one, by employing the Gram–Schmidt process. Statements about the non-compact form correspond to those for the compact form, replacing the orthogonal group (or unitary or symplectic group) with the general linear group.
Topology
Let F stand for R, C, or H. The Stiefel manifold Vk(Fn) can be thought of as a set of n × k matrices by writing a k-frame as a matrix of k column vectors in Fn. The orthonormality condition is expressed by A*A = Ik, where A* denotes the conjugate transpose of A and Ik denotes the k × k identity matrix. We then have Vk(Fn) = {A : A*A = Ik}, the set of n × k matrices over F with orthonormal columns.
The topology on Vk(Fn) is the subspace topology inherited from the space of all n × k matrices over F. With this topology Vk(Fn) is a compact manifold whose dimension is given by dim Vk(Rn) = nk − k(k + 1)/2, dim Vk(Cn) = 2nk − k^2, and dim Vk(Hn) = 4nk − k(2k − 1).
As a homogeneous space
Each of the Stiefel manifolds can be viewed as a homogeneous space for the action of a classical group in a natural manner.
Every orthogonal transformation of a k-frame in results in another k-frame, and any two k-frames are related by some orthogonal transformation. In other words, the orthogonal group O(n) acts transitively on The stabilizer subgroup of a given frame is the subgroup isomorphic to O(n−k) which acts nontrivially on the orthogonal complement of the space spanned by that frame.
Likewise the unitary group U(n) acts transitively on with stabilizer subgroup U(n−k) and the symplectic group Sp(n) acts transitively on with stabilizer subgroup Sp(n−k).
In each case Vk(Fn) can be viewed as a homogeneous space:
Vk(Rn) = O(n)/O(n − k),
Vk(Cn) = U(n)/U(n − k),
Vk(Hn) = Sp(n)/Sp(n − k).
When k = n, the corresponding action is free so that the Stiefel manifold is a principal homogeneous space for the corresponding classical group.
When k is strictly less than n then the special orthogonal group SO(n) also acts transitively on Vk(Rn), with stabilizer subgroup isomorphic to SO(n − k), so that Vk(Rn) = SO(n)/SO(n − k) for k < n.
The same holds for the action of the special unitary group on Vk(Cn): Vk(Cn) = SU(n)/SU(n − k) for k < n.
Thus for k = n − 1, the Stiefel manifold is a principal homogeneous space for the corresponding special classical group.
Uniform measure
The Stiefel manifold Vk(Rn) can be equipped with a uniform measure, i.e. a Borel measure that is invariant under the action of the groups noted above. For example, V1(R2), which is isomorphic to the unit circle in the Euclidean plane, has as its uniform measure the natural uniform measure (arc length) on the circle. It is straightforward to sample this measure on Vk(Rn) using Gaussian random matrices: if A is an n × k random matrix with independent entries identically distributed according to the standard normal distribution on R and A = QR is the QR factorization of A, then the matrices Q and R are independent random variables and Q is distributed according to the uniform measure on Vk(Rn). This result is a consequence of the Bartlett decomposition theorem.
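The Gaussian–QR construction described above can be sketched as follows, assuming NumPy. Fixing the signs so that the diagonal of R is positive makes the factorization unique, and Q is then distributed according to the uniform measure.

```python
# Sample a uniformly distributed k-frame on the Stiefel manifold V_k(R^n)
# by QR-factorizing a Gaussian random matrix.
import numpy as np

def sample_stiefel(n: int, k: int, rng=np.random.default_rng()):
    a = rng.standard_normal((n, k))   # i.i.d. standard normal entries
    q, r = np.linalg.qr(a)            # a = q @ r, q has orthonormal columns
    q *= np.sign(np.diag(r))          # enforce diag(r) > 0 so q is Haar-uniform
    return q

q = sample_stiefel(5, 3)
print(np.allclose(q.T @ q, np.eye(3)))   # True: the columns form a 3-frame
```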
Special cases
A 1-frame in Fn is nothing but a unit vector, so the Stiefel manifold V1(Fn) is just the unit sphere in Fn. Therefore V1(Rn) = Sn−1, V1(Cn) = S2n−1 and V1(Hn) = S4n−1.
Given a 2-frame in Rn, let the first vector define a point in Sn−1 and the second a unit tangent vector to the sphere at that point. In this way, the Stiefel manifold V2(Rn) may be identified with the unit tangent bundle of Sn−1.
When k = n or n−1 we saw in the previous section that Vk(Fn) is a principal homogeneous space, and therefore diffeomorphic to the corresponding classical group: Vn(Rn) = O(n), Vn−1(Rn) = SO(n), Vn(Cn) = U(n), Vn−1(Cn) = SU(n), and Vn(Hn) = Sp(n).
Functoriality
Given an orthogonal inclusion between vector spaces the image of a set of k orthonormal vectors is orthonormal, so there is an induced closed inclusion of Stiefel manifolds, and this is functorial. More subtly, given an n-dimensional vector space X, the dual basis construction gives a bijection between bases for X and bases for the dual space which is continuous, and thus yields a homeomorphism of top Stiefel manifolds This is also functorial for isomorphisms of vector spaces.
As a principal bundle
There is a natural projection p : Vk(Fn) → Gk(Fn) from the Stiefel manifold to the Grassmannian of k-planes in Fn which sends a k-frame to the subspace spanned by that frame. The fiber over a given point P in Gk(Fn) is the set of all orthonormal k-frames contained in the space P.
This projection has the structure of a principal G-bundle where G is the associated classical group of degree k. Take the real case for concreteness. There is a natural right action of O(k) on Vk(Rn) which rotates a k-frame in the space it spans. This action is free but not transitive. The orbits of this action are precisely the orthonormal k-frames spanning a given k-dimensional subspace; that is, they are the fibers of the map p. Similar arguments hold in the complex and quaternionic cases.
We then have a sequence of principal bundles:
O(k) → Vk(Rn) → Gk(Rn),
U(k) → Vk(Cn) → Gk(Cn),
Sp(k) → Vk(Hn) → Gk(Hn).
The vector bundles associated to these principal bundles via the natural action of G on Fk are just the tautological bundles over the Grassmannians. In other words, the Stiefel manifold Vk(Fn) is the orthogonal, unitary, or symplectic frame bundle associated to the tautological bundle on a Grassmannian.
When one passes to the limit, these bundles become the universal bundles for the classical groups.
Homotopy
The Stiefel manifolds fit into a family of fibrations Vk−1(Rn−1) → Vk(Rn) → Sn−1, obtained by sending a k-frame to its last vector; thus the first non-trivial homotopy group of the space Vk(Rn) is in dimension n − k. Moreover, πn−k(Vk(Rn)) is isomorphic to Z if n − k is even or k = 1, and to Z/2Z if n − k is odd and k > 1.
This result is used in the obstruction-theoretic definition of Stiefel–Whitney classes.
See also
Flag manifold
Matrix Langevin distribution
References
Differential geometry
Homogeneous spaces
Fiber bundles
Manifolds | Stiefel manifold | [
"Physics",
"Mathematics"
] | 1,318 | [
"Group actions",
"Homogeneous spaces",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Manifolds",
"Geometry",
"Symmetry"
] |
1,392,630 | https://en.wikipedia.org/wiki/Basement%20membrane | The basement membrane, also known as base membrane, is a thin, pliable sheet-like type of extracellular matrix that provides cell and tissue support and acts as a platform for complex signalling. The basement membrane sits between epithelial tissues including mesothelium and endothelium, and the underlying connective tissue.
Structure
As seen with the electron microscope, the basement membrane is composed of two layers, the basal lamina and the reticular lamina. The underlying connective tissue attaches to the basal lamina with collagen VII anchoring fibrils and fibrillin microfibrils.
The basal lamina layer can further be subdivided into two layers based on their visual appearance in electron microscopy. The lighter-colored layer closer to the epithelium is called the lamina lucida, while the denser-colored layer closer to the connective tissue is called the lamina densa. The electron-dense lamina densa layer is about 30–70 nanometers thick and consists of an underlying network of reticular collagen IV fibrils which average 30 nanometers in diameter and 0.1–2 micrometers in thickness and are coated with the heparan sulfate-rich proteoglycan perlecan. In addition to collagen, this supportive matrix contains intrinsic macromolecular components. The lamina lucida layer is made up of laminin, integrins, entactins, and dystroglycans. Integrins are a key component of hemidesmosomes which serve to anchor the epithelium to the underlying basement membrane.
To represent the above in a visually organised manner, the basement membrane is organized as follows:
Epithelial/mesothelial/endothelial tissue (outer layer)
Basement membrane
Basal lamina
Lamina lucida
laminin
integrins (hemidesmosomes)
nidogens
dystroglycans
Lamina densa
collagen IV (coated with perlecan, rich in heparan sulfate)
Attaching proteins (between the basal and reticular laminae)
collagen VII (anchoring fibrils)
fibrillin (microfibrils)
Lamina reticularis
collagen III (as reticular fibers)
Connective tissue (Lamina propria)
Function
The primary function of the basement membrane is to anchor down the epithelium to its loose connective tissue (the dermis or lamina propria) underneath. This is achieved by cell-matrix adhesions through substrate adhesion molecules (SAMs).
The basement membrane acts as a mechanical barrier, preventing malignant cells from invading the deeper tissues. Early stages of malignancy that are thus limited to the epithelial layer by the basement membrane are called carcinoma in situ.
The basement membrane is also essential for angiogenesis (development of new blood vessels). Basement membrane proteins have been found to accelerate differentiation of endothelial cells.
The most notable examples of basement membranes are the glomerular basement membrane of the kidney, formed by the fusion of the basal lamina of the glomerular capillary endothelium with the podocyte basal lamina, and the basement membrane between lung alveoli and pulmonary capillaries, formed by the fusion of the basal lamina of the lung alveoli with that of the lung capillaries; the latter is where oxygen and carbon dioxide diffusion (gas exchange) occurs.
As of 2017, other roles for basement membrane include blood filtration and muscle homeostasis. Fractones may be a type of basement membrane, serving as a niche for stem cells.
Clinical significance
Some diseases result from a poorly functioning basement membrane. The cause can be genetic defects, injuries by the body's own immune system, or other mechanisms. Diseases involving basement membranes at multiple locations include:
Genetic defects in the collagen fibers of the basement membrane, including Alport syndrome and Knobloch syndrome
Autoimmune diseases targeting basement membranes. The non-collagenous domain of basement membrane collagen type IV is the autoantigen (target antigen) of the autoantibodies in the autoimmune disease Goodpasture's syndrome.
A group of diseases stemming from improper function of basement membrane zone are united under the name epidermolysis bullosa.
In histopathology, thickened basement membranes are found in several inflammatory diseases, such as lichen sclerosus, systemic lupus erythematosus or dermatomyositis in the skin, or collagenous colitis in the colon.
Evolutionary origin
Basement membranes are found only in diploblastic animals and in homoscleromorph sponges. Some studies recovered the homoscleromorphs as sister to the diploblasts, which would mean that the basement membrane originated only once in the history of life. More recent studies, however, have not supported a diploblast–homoscleromorph grouping, so either the other sponges lost the basement membrane (considered most probable) or it originated separately in the two groups.
See also
References
Further reading
Angiology
Tissues (biology)
Histology | Basement membrane | [
"Chemistry"
] | 1,064 | [
"Histology",
"Microscopy"
] |
1,392,969 | https://en.wikipedia.org/wiki/Solarization%20%28physics%29 | Solarization refers to a phenomenon in physics where a material undergoes a temporary change in color after being subjected to high-energy electromagnetic radiation, such as ultraviolet light or X-rays. Clear glass and many plastics will turn amber, green or other colors when subjected to X-radiation, and glass may turn blue after long-term solar exposure in the desert. It is believed that solarization is caused by the formation of internal defects, called color centers, which selectively absorb portions of the visible light spectrum. In glass, color center absorption can often be reversed by heating the glass to high temperatures (a process called thermal bleaching) to restore the glass to its initial transparent state. Solarization may also permanently degrade a material's physical or mechanical properties, and is one of the mechanisms involved in the breakdown of plastics within the environment.
Examples
In the field of clinical imaging, with sufficient exposure, solarization of certain screen-film systems can occur, obscuring details within the X-ray image and degrading the accuracy of the diagnosis. Although such degradation can occur, it was found to be a rare phenomenon.
See also
Photodegradation
Solarized architectural glass
Atomic, molecular, and optical physics
Chromism | Solarization (physics) | [
"Physics",
"Chemistry",
"Materials_science",
"Astronomy",
"Engineering"
] | 247 | [
"Spectroscopy stubs",
" and optical physics stubs",
"Spectrum (physical sciences)",
"Chromism",
"Astronomy stubs",
"Materials science",
" molecular",
"Atomic",
"Smart materials",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs",
" and optical physics"
] |