id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
766,244 | https://en.wikipedia.org/wiki/Lead%28II%29%20iodide | Lead(II) iodide (or lead iodide) is a chemical compound with the formula PbI2. At room temperature, it is a bright yellow odorless crystalline solid that becomes orange and red when heated. It was formerly called plumbous iodide.
The compound currently has a few specialized applications, such as the manufacture of solar cells and X-ray and gamma-ray detectors. Its preparation is an entertaining and popular demonstration in chemistry education, used to teach topics such as precipitation reactions and stoichiometry. It is decomposed by light at elevated temperatures, and this effect has been used in a patented photographic process.
Lead iodide was formerly employed as a yellow pigment in some paints, with the name iodide yellow. However, that use has been largely discontinued due to its toxicity and poor stability.
Preparation
PbI2 is commonly synthesized via a precipitation reaction between potassium iodide (KI) and lead(II) nitrate (Pb(NO3)2) in water solution:

Pb(NO3)2 + 2 KI -> PbI2 + 2 KNO3
While the potassium nitrate is soluble, the lead iodide is nearly insoluble at room temperature, and thus precipitates out.
Other soluble compounds containing lead(II) and iodide can be used instead, for example lead(II) acetate and sodium iodide.
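As an illustration of the stoichiometry taught with this demonstration, the following worked figures assume a 1.00 g sample of lead(II) nitrate and standard molar masses of about 331.2 g/mol for Pb(NO3)2, 166.0 g/mol for KI and 461.0 g/mol for PbI2 (values rounded):

```latex
n_{\mathrm{Pb(NO_3)_2}} = \frac{1.00\ \mathrm{g}}{331.2\ \mathrm{g\,mol^{-1}}} \approx 3.02\ \mathrm{mmol}
\qquad
m_{\mathrm{KI}} = 2 \times 3.02\ \mathrm{mmol} \times 166.0\ \mathrm{g\,mol^{-1}} \approx 1.00\ \mathrm{g}
\qquad
m_{\mathrm{PbI_2}} = 3.02\ \mathrm{mmol} \times 461.0\ \mathrm{g\,mol^{-1}} \approx 1.39\ \mathrm{g}
```

So roughly equal masses of the two reagents give a theoretical yield of about 1.4 g of lead iodide per gram of lead(II) nitrate.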
The compound can also be synthesized by reacting iodine vapor with molten lead between 500 and 700 °C.
A thin film of PbI2 can also be prepared by depositing a film of lead sulfide (PbS) and exposing it to iodine vapor, by the reaction

PbS + I2 -> PbI2 + S
The sulfur is then washed with dimethyl sulfoxide.
Crystallization
Lead iodide prepared from cold solutions usually consists of many small hexagonal platelets, giving the yellow precipitate a silky appearance. Larger crystals can be obtained by exploiting the fact that the solubility of lead iodide in water (like those of lead chloride and lead bromide) increases dramatically with temperature. The compound is colorless when dissolved in hot water, but crystallizes on cooling as thin but visibly larger bright yellow flakes that settle slowly through the liquid — a visual effect often described as "golden rain". Larger crystals can be obtained by autoclaving the PbI2 with water under pressure at 200 °C.
Even larger crystals can be obtained by slowing down the common reaction. A simple setup is to submerge two beakers containing the concentrated reactants in a larger container of water, taking care to avoid currents. As the two substances diffuse through the water and meet, they slowly react and deposit the iodide in the space between the beakers.
Another similar method is to react the two substances in a gel medium that slows down the diffusion and supports the growing crystal away from the container's walls. Patel and Rao used this method to grow crystals up to 30 mm in diameter and 2 mm thick.
The reaction can also be slowed by separating the two reagents with a permeable membrane. This approach, with a cellulose membrane, was used in September 1988 to study the growth of crystals in zero gravity, in an experiment flown on the Space Shuttle Discovery.
PbI2 can also be crystallized from powder by sublimation at 390 °C, in near vacuum or in a current of argon with some hydrogen.
Large high-purity crystals can be obtained by zone melting or by the Bridgman–Stockbarger technique. These processes can remove various impurities from commercial PbI2.
Applications
Lead iodide is a precursor material in the fabrication of highly efficient perovskite solar cells. Typically, a solution of PbI2 in an organic solvent, such as dimethylformamide or dimethyl sulfoxide, is applied over a titanium dioxide layer by spin coating. The layer is then treated with a solution of methylammonium iodide and annealed, turning it into the double salt methylammonium lead iodide CH3NH3PbI3, with a perovskite structure. The reaction changes the film's color from yellow to light brown.
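Schematically, this conversion corresponds to the overall reaction (a simplified equation for the methylammonium iodide route just described):

PbI2 + CH3NH3I -> CH3NH3PbI3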
PbI2 is also used as a high-energy photon detector for gamma-rays and X-rays, due to its wide band gap, which ensures low-noise operation.
Lead iodide was formerly used as a paint pigment under the name "iodine yellow". It was described by Prosper Mérimée (1830) as "not yet much known in commerce, is as bright as orpiment or chromate of lead. It is thought to be more permanent; but time only can prove its pretension to so essential a quality. It is prepared by precipitating a solution of acetate or nitrate of lead, with potassium iodide: the nitrate produces a more brilliant yellow color." However, due to the toxicity and instability of the compound it is no longer used as such. It may still be used in art for bronzing and in gold-like mosaic tiles.
Stability
Common material characterization techniques such as electron microscopy can damage samples of lead(II) iodide. Thin films of lead(II) iodide are unstable in ambient air, as atmospheric oxygen oxidizes the iodide into elemental iodine.
Toxicity
Lead iodide is very toxic to human health. Ingestion will cause many acute and chronic consequences characteristic of lead poisoning. Lead iodide has been found to be a carcinogen in animals suggesting the same may hold true in humans. Lead iodide is an inhalation hazard, and appropriate respirators should be used when handling powders of lead iodide.
Structure
The structure of PbI2, as determined by X-ray powder diffraction, is primarily a hexagonal close-packed system with alternating layers of lead atoms and iodide atoms, with largely ionic bonding. Weak van der Waals interactions have been observed between lead–iodide layers. The most common stacking forms are 2H and 4H. The 4H polymorph is most common in samples grown from the melt, by precipitation, or by sublimation, whereas the 2H polymorph is usually formed by sol-gel synthesis. The solid can also take an R6 rhombohedral structure.
See also
References
Cited sources
External links
Toxic Substances Portal – Lead
Iodides
Lead(II) compounds
Metal halides
Semiconductor materials | Lead(II) iodide | [
"Chemistry"
] | 1,274 | [
"Semiconductor materials",
"Inorganic compounds",
"Metal halides",
"Salts"
] |
766,409 | https://en.wikipedia.org/wiki/Network%20theory | In mathematics, computer science and network science, network theory is a part of graph theory. It defines networks as graphs where the vertices or edges possess attributes. Network theory analyses these networks over the symmetric relations or asymmetric relations between their (discrete) components.
Network theory has applications in many disciplines, including statistical physics, particle physics, computer science, electrical engineering, biology, archaeology, linguistics, economics, finance, operations research, climatology, ecology, public health, sociology, psychology, and neuroscience. Applications of network theory include logistical networks, the World Wide Web, Internet, gene regulatory networks, metabolic networks, social networks, epistemological networks, etc.; see List of network theory topics for more examples.
Euler's solution of the Seven Bridges of Königsberg problem is considered to be the first true proof in the theory of networks.
Network optimization
Network problems that involve finding an optimal way of doing something are studied as combinatorial optimization. Examples include network flow, shortest path problem, transport problem, transshipment problem, location problem, matching problem, assignment problem, packing problem, routing problem, critical path analysis, and program evaluation and review technique.
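As a minimal illustration of two of these problems, the sketch below uses the Python networkx library (an external library assumed here for illustration; the graph and its weights are invented) to solve a shortest path and a maximum flow instance.

```python
import networkx as nx

# Small directed example graph; the edge weights double as flow capacities here
G = nx.DiGraph()
for u, v, w in [("s", "a", 3), ("s", "b", 2), ("a", "b", 1), ("a", "t", 2), ("b", "t", 4)]:
    G.add_edge(u, v, weight=w, capacity=w)

# Shortest path problem: minimise the total edge weight from s to t
path = nx.shortest_path(G, "s", "t", weight="weight")

# Network flow problem: maximum flow from s to t subject to the capacities
flow_value, flow_dict = nx.maximum_flow(G, "s", "t")

print(path)        # ['s', 'a', 't']
print(flow_value)  # 5
```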
Network analysis
Electric network analysis
The analysis of electric power systems could be conducted using network theory from two main points of view:
An abstract perspective (i.e., as a graph consisting of nodes and edges), regardless of the electric power aspects (e.g., transmission line impedances). Most of these studies focus only on the abstract structure of the power grid using node degree distribution and betweenness distribution, which provides substantial insight regarding the vulnerability assessment of the grid. Through these types of studies, the category of the grid structure can be identified from the complex network perspective (e.g., single-scale, scale-free). This classification can help electric power system engineers in the planning stage or while upgrading the infrastructure (e.g., adding a new transmission line) to maintain a proper redundancy level in the transmission system.
Weighted graphs that blend an abstract understanding of complex network theories and electric power systems properties.
Social network analysis
Social network analysis examines the structure of relationships between social entities. These entities are often persons, but may also be groups, organizations, nation states, web sites, or scholarly publications.
Since the 1970s, the empirical study of networks has played a central role in social science, and many of the mathematical and statistical tools used for studying networks have been first developed in sociology. Amongst many other applications, social network analysis has been used to understand the diffusion of innovations, news and rumors. Similarly, it has been used to examine the spread of both diseases and health-related behaviors. It has also been applied to the study of markets, where it has been used to examine the role of trust in exchange relationships and of social mechanisms in setting prices. It has been used to study recruitment into political movements, armed groups, and other social organizations. It has also been used to conceptualize scientific disagreements as well as academic prestige. More recently, network analysis (and its close cousin traffic analysis) has gained a significant use in military intelligence, for uncovering insurgent networks of both hierarchical and leaderless nature.
Biological network analysis
With the recent explosion of publicly available high throughput biological data, the analysis of molecular networks has gained significant interest. The type of analysis in this context is closely related to social network analysis, but often focuses on local patterns in the network. For example, network motifs are small subgraphs that are over-represented in the network. Similarly, activity motifs are patterns in the attributes of nodes and edges in the network that are over-represented given the network structure. Using networks to analyze patterns in biological systems, such as food-webs, allows us to visualize the nature and strength of interactions between species. The analysis of biological networks with respect to diseases has led to the development of the field of network medicine. Recent examples of application of network theory in biology include applications to understanding the cell cycle as well as a quantitative framework for developmental processes.
Narrative network analysis
The automatic parsing of textual corpora has enabled the extraction of actors and their relational networks on a vast scale. The resulting narrative networks, which can contain thousands of nodes, are then analyzed by using tools from network theory to identify the key actors, the key communities or parties, and general properties such as robustness or structural stability of the overall network, or centrality of certain nodes. This automates the approach introduced by Quantitative Narrative Analysis, whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or with pairs formed by actor and object.
Link analysis
Link analysis is a subset of network analysis, exploring associations between objects. An example may be examining the addresses of suspects and victims, the telephone numbers they have dialed, and financial transactions that they have partaken in during a given timeframe, and the familial relationships between these subjects as part of a police investigation. Link analysis here provides the crucial relationships and associations between very many objects of different types that are not apparent from isolated pieces of information. Computer-assisted or fully automatic computer-based link analysis is increasingly employed by banks and insurance agencies in fraud detection, by telecommunication operators in telecommunication network analysis, by the medical sector in epidemiology and pharmacology, in law enforcement investigations, by search engines for relevance rating (and conversely by the spammers for spamdexing and by business owners for search engine optimization), and everywhere else where relationships between many objects have to be analyzed. Links can also be derived from the similarity of the time behavior of two nodes. Examples include climate networks, where the links between two locations (nodes) are determined, for example, by the similarity of the rainfall or temperature fluctuations in both sites.
Web link analysis
Several Web search ranking algorithms use link-based centrality metrics, including Google's PageRank, Kleinberg's HITS algorithm, the CheiRank and TrustRank algorithms. Link analysis is also conducted in information science and communication science in order to understand and extract information from the structure of collections of web pages. For example, the analysis might be of the interlinking between politicians' websites or blogs. Another use is for classifying pages according to their mention in other pages.
Centrality measures
Information about the relative importance of nodes and edges in a graph can be obtained through centrality measures, widely used in disciplines like sociology. For example, eigenvector centrality uses the eigenvectors of the adjacency matrix corresponding to a network, to determine nodes that tend to be frequently visited. Formally established measures of centrality are degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, subgraph centrality, and Katz centrality. The purpose or objective of analysis generally determines the type of centrality measure to be used. For example, if one is interested in dynamics on networks or the robustness of a network to node/link removal, often the dynamical importance of a node is the most relevant centrality measure.
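A minimal sketch of computing several of these measures, using the Python networkx library and its built-in Zachary karate club example graph (the library, graph, and parameter choices are illustrative, not prescriptive):

```python
import networkx as nx

# Zachary's karate club graph, a standard small social network example
G = nx.karate_club_graph()

# Several of the formally established centrality measures mentioned above
degree      = nx.degree_centrality(G)
closeness   = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)
eigenvector = nx.eigenvector_centrality(G)
subgraph    = nx.subgraph_centrality(G)
katz        = nx.katz_centrality(G, alpha=0.05)

# Node ranked most important by eigenvector centrality
print(max(eigenvector, key=eigenvector.get))
```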
Assortative and disassortative mixing
These concepts are used to characterize the linking preferences of hubs in a network. Hubs are nodes which have a large number of links. Some hubs tend to link to other hubs while others avoid connecting to hubs and prefer to connect to nodes with low connectivity. We say a hub is assortative when it tends to connect to other hubs. A disassortative hub avoids connecting to other hubs. If hubs have connections with the expected random probabilities, they are said to be neutral. There are three methods to quantify degree correlations.
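One standard way to quantify degree correlations is the degree assortativity coefficient; the sketch below computes it with the Python networkx library on a synthetic Barabási–Albert graph (the model and its parameters are illustrative assumptions).

```python
import networkx as nx

# Synthetic scale-free-like network (Barabási–Albert model)
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

# Degree assortativity coefficient: positive values indicate assortative mixing
# (hubs preferentially linked to hubs), negative values indicate disassortative
# mixing, and values near zero indicate neutral behaviour.
r = nx.degree_assortativity_coefficient(G)
print(round(r, 3))
```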
Recurrence networks
The recurrence matrix of a recurrence plot can be considered as the adjacency matrix of an undirected and unweighted network. This allows for the analysis of time series by network measures. Applications range from detection of regime changes over characterizing dynamics to synchronization analysis.
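A simplified sketch of this idea using numpy and networkx follows; note that practical recurrence analyses usually embed the time series in phase space first, and the series and threshold here are purely illustrative.

```python
import numpy as np
import networkx as nx

# Illustrative time series: a noisy sine wave
t = np.linspace(0, 8 * np.pi, 400)
x = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

# Recurrence matrix: two time points are "recurrent" if their values are close
eps = 0.2
R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
np.fill_diagonal(R, 0)  # ignore trivial self-recurrences

# Treat the recurrence matrix as the adjacency matrix of an undirected network
G = nx.from_numpy_array(R)
print(nx.transitivity(G))  # a network measure applied to the time series
```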
Spatial networks
Many real networks are embedded in space. Examples include transportation and other infrastructure networks, and brain neural networks. Several models for spatial networks have been developed.
Temporal networks
Other networks emphasise the evolution over time of systems of nodes and their interconnections. Temporal networks are used for example to study how financial risk has spread across countries. In this study, temporal networks are used to also visually trace the intricate dynamics of financial contagion during crises. Unlike traditional network approaches that aggregate or analyze static snapshots, the study uses a time-respecting path methodology to preserve the sequence and timing of financial crises contagion events. This enables the identification of nodes as sources, transmitters, or receivers of financial stress, avoiding mischaracterizations inherent in static or aggregated methods. Following this approach, banks are found to serve as key intermediaries in contagion paths, and temporal analysis pinpoints smaller countries like Greece and Italy as significant origins of shocks during crises—insights obscured by static approaches that overemphasize large economies like the US or Japan.
Temporal networks can also be used to explore how cooperation evolves in dynamic, real-world population structures where interactions are time-dependent. Here the authors find that network temporality enhances cooperation compared to static networks, even though "bursty" interaction patterns typically hinder it. This finding also shows how cooperation and other emergent behaviours can thrive in realistic, time-varying population structures, challenging conventional assumptions rooted in static models.
In psychology, temporal networks enable the understanding of psychological disorders by framing them as dynamic systems of interconnected symptoms rather than outcomes of a single underlying cause. Using "nodes" to represent symptoms and "edges" to signify their direct interactions, symptoms like insomnia and fatigue are shown to influence each other over time; also, disorders such as depression are shown not to be fixed entities but evolving networks, where identifying "bridge symptoms" like concentration difficulties can explain comorbidity between conditions such as depression and anxiety.
Lastly, temporal networks enable a better understanding and controlling of the spread of infectious diseases. Unlike traditional static networks, which assume continuous, unchanging connections, temporal networks account for the precise timing and duration of interactions between individuals. This dynamic approach reveals critical nuances, such as how diseases can spread via time-sensitive pathways that static models miss. Temporal data, such as interactions captured through Bluetooth sensors or in hospital wards, can improve predictions of outbreak speed and extent. Overlooking temporal correlations can lead to significant errors in estimating epidemic dynamics, emphasizing the need for a temporal framework to develop more accurate strategies for disease control.
Spread
Content in a complex network can spread via two major methods: conserved spread and non-conserved spread. In conserved spread, the total amount of content that enters a complex network remains constant as it passes through. The model of conserved spread can best be represented by a pitcher containing a fixed amount of water being poured into a series of funnels connected by tubes. Here, the pitcher represents the original source and the water is the content being spread. The funnels and connecting tubing represent the nodes and the connections between nodes, respectively. As the water passes from one funnel into another, the water disappears instantly from the funnel that was previously exposed to the water. In non-conserved spread, the amount of content changes as it enters and passes through a complex network. The model of non-conserved spread can best be represented by a continuously running faucet running through a series of funnels connected by tubes. Here, the amount of water from the original source is infinite. Also, any funnels that have been exposed to the water continue to experience the water even as it passes into successive funnels. The non-conserved model is the most suitable for explaining the transmission of most infectious diseases, neural excitation, information and rumors, etc.
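A toy sketch contrasting the two modes of spread on a small path graph, using the Python networkx library (the graph, quantities, and update rules are illustrative choices, not a standard model):

```python
import networkx as nx

G = nx.path_graph(5)  # five nodes in a line; node 0 is the source

# Conserved spread: a fixed amount of content is redistributed along edges,
# so the total over all nodes stays constant (the "pitcher of water").
content = {n: 0.0 for n in G}
content[0] = 1.0
for _ in range(10):
    new = {n: 0.0 for n in G}
    for n in G:
        share = content[n] / (G.degree(n) + 1)  # keep one share, pass one per neighbour
        new[n] += share
        for m in G.neighbors(n):
            new[m] += share
    content = new
print(round(sum(content.values()), 6))  # still 1.0

# Non-conserved spread: the source keeps injecting content and nodes retain
# what they pass on (the "running faucet"), so the total keeps growing.
content = {n: 0.0 for n in G}
for _ in range(10):
    content[0] += 1.0
    new = dict(content)
    for n in G:
        for m in G.neighbors(n):
            new[m] += 0.5 * content[n]
    content = new
print(round(sum(content.values()), 2))  # exceeds the total amount injected
```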
Network immunization
The question of how to efficiently immunize scale-free networks, which represent realistic networks such as the Internet and social networks, has been studied extensively. One such strategy is to immunize the largest degree nodes, i.e., targeted (intentional) attacks, since in this case the percolation threshold is relatively high and fewer nodes need to be immunized.
However, in most realistic networks the global structure is not available and the largest degree nodes are unknown.
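When the global structure is known, the targeted strategy can be sketched as follows with the Python networkx library (the network model, its parameters, and the fraction of immunized nodes are illustrative assumptions):

```python
import networkx as nx

# Scale-free-like network (Barabási–Albert model); parameters are illustrative
G = nx.barabasi_albert_graph(n=2000, m=2, seed=1)

def giant_component_size(graph):
    return max(len(c) for c in nx.connected_components(graph))

# Targeted immunization: remove (immunize) the highest-degree nodes first
H = G.copy()
hubs = sorted(H.degree, key=lambda kv: kv[1], reverse=True)
for node, _ in hubs[:100]:  # immunize the 100 highest-degree nodes (5%)
    H.remove_node(node)

print(giant_component_size(G))  # the original network is connected (2000 nodes)
print(giant_component_size(H))  # noticeably smaller once the hubs are removed
```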
See also
Complex network
Congestion game
Quantum complex network
Dual-phase evolution
Network partition
Network science
Network theory in risk assessment
Network topology
Network analyzer
Seven Bridges of Königsberg
Small-world networks
Social network
Scale-free networks
Network dynamics
Sequential dynamical systems
Pathfinder networks
Human disease network
Biological network
Network medicine
Graph partition
References
Books
External links
netwiki Scientific wiki dedicated to network theory
New Network Theory International Conference on 'New Network Theory'
Network Workbench: A Large-Scale Network Analysis, Modeling and Visualization Toolkit
Optimization of the Large Network doi:10.13140/RG.2.2.20183.06565/6
Network analysis of computer networks
Network analysis of organizational networks
Network analysis of terrorist networks
Network analysis of a disease outbreak
Link Analysis: An Information Science Approach (book)
Connected: The Power of Six Degrees (documentary)
A short course on complex networks
A course on complex network analysis by Albert-László Barabási
The Journal of Network Theory in Finance
Network theory in Operations Research from the Institute for Operations Research and the Management Sciences (INFORMS)
Networks
Graph theory
fi:Verkkoteoria | Network theory | [
"Mathematics"
] | 2,734 | [
"Discrete mathematics",
"Graph theory",
"Combinatorics",
"Network theory",
"Mathematical relations"
] |
766,619 | https://en.wikipedia.org/wiki/Cyclonic%20separation | Cyclonic separation is a method of removing particulates from an air, gas or liquid stream, without the use of filters, through vortex separation. When removing particulate matter from liquid, a hydrocyclone is used; while from gas, a gas cyclone is used. Rotational effects and gravity are used to separate mixtures of solids and fluids. The method can also be used to separate fine droplets of liquid from a gaseous stream.
Operation
A high-speed rotating (air)flow is established within a cylindrical or conical container called a cyclone. Air flows in a helical pattern, beginning at the top (wide end) of the cyclone and ending at the bottom (narrow) end before exiting the cyclone in a straight stream through the center of the cyclone and out the top. Larger (denser) particles in the rotating stream have too much inertia to follow the tight curve of the stream, and thus strike the outside wall, then fall to the bottom of the cyclone where they can be removed. In a conical system, as the rotating flow moves towards the narrow end of the cyclone, the rotational radius of the stream is reduced, thus separating smaller and smaller particles. The cyclone geometry, together with volumetric flow rate, defines the cut point of the cyclone. This is the size of particle that will be removed from the stream with a 50% efficiency. Particles larger than the cut point will be removed with a greater efficiency, and smaller particles with a lower efficiency as they separate with more difficulty or can be subject to re-entrainment when the air vortex reverses direction to move in the direction of the outlet.
An alternative cyclone design uses a secondary air flow within the cyclone to keep the collected particles from striking the walls, to protect them from abrasion. The primary air flow containing the particulates enters from the bottom of the cyclone and is forced into spiral rotation by stationary spinner vanes. The secondary air flow enters from the top of the cyclone and moves downward toward the bottom, intercepting the particulate from the primary air. The secondary air flow also allows the collector to optionally be mounted horizontally, because it pushes the particulate toward the collection area, and does not rely solely on gravity to perform this function.
Uses
Cyclone separators are found in all types of power and industrial applications, including pulp and paper plants, cement plants, steel mills, petroleum coke plants, metallurgical plants, saw mills and other kinds of facilities that process dust.
Large scale cyclones are used in sawmills to remove sawdust from extracted air. Cyclones are also used in oil refineries to separate oils and gases, and in the cement industry as components of kiln preheaters. Cyclones are increasingly used in the household, as the core technology in bagless types of portable vacuum cleaners and central vacuum cleaners. Cyclones are also used in industrial and professional kitchen ventilation for separating the grease from the exhaust air in extraction hoods. Smaller cyclones are used to separate airborne particles for analysis. Some are small enough to be worn clipped to clothing, and are used to separate respirable particles for later analysis.
Similar separators are used in the oil refining industry (e.g. for Fluid catalytic cracking) to achieve fast separation of the catalyst particles from the reacting gases and vapors.
Analogous devices for separating particles or solids from liquids are called hydrocyclones or hydroclones. These may be used to separate solid waste from water in wastewater and sewage treatment.
Types
The most common types of centrifugal, or inertial, collectors in use today are:
Single-cyclone separators
Single-cyclone separators create a dual vortex to separate coarse from fine dust. The main vortex spirals downward and carries most of the coarser dust particles. The inner vortex, created near the bottom of the cyclone, spirals upward and carries finer dust particles.
Multiple-cyclone separators
Multiple-cyclone separators consist of a number of small-diameter cyclones, operating in parallel and having a common gas inlet and outlet, as shown in the figure, and operate on the same principle as single cyclone separators—creating an outer downward vortex and an ascending inner vortex.
Multiple-cyclone separators remove more dust than single cyclone separators because the individual cyclones have a greater length and smaller diameter. The longer length provides longer residence time while the smaller diameter creates greater centrifugal force. These two factors result in better separation of dust particulates. The pressure drop of multiple-cyclone separators collectors is higher than that of single-cyclone separators, requiring more energy to clean the same amount of air. A single-chamber cyclone separator of the same volume is more economical, but doesn't remove as much dust.
Secondary-air-flow separators
This type of cyclone uses a secondary air flow, injected into the cyclone to accomplish several things. The secondary air flow increases the speed of the cyclonic action making the separator more efficient; it intercepts the particulate before it reaches the interior walls of the unit; and it forces the separated particulate toward the collection area. The secondary air flow protects the separator from particulate abrasion and allows the separator to be installed horizontally because gravity is not depended upon to move the separated particulate downward.
Cyclone theory
As the cyclone is essentially a two-phase particle-fluid system, fluid mechanics and particle transport equations can be used to describe the behaviour of a cyclone. The air in a cyclone is initially introduced tangentially into the cyclone at a given inlet velocity. Assuming that the particle is spherical, a simple analysis to calculate critical separation particle sizes can be established.
If one considers an isolated particle circling in the upper cylindrical component of the cyclone at some rotational radius from the cyclone's central axis, the particle is subjected to drag, centrifugal, and buoyant forces. Given that the fluid moves in a spiral, the gas velocity can be broken into two components: a tangential component and an outward radial velocity component. Assuming Stokes' law, the drag force in the outward radial direction, which opposes the outward velocity of any particle in the inlet stream, is proportional to the fluid viscosity, the particle radius, and the outward radial velocity.
Using the particle's density, the centrifugal force component in the outward radial direction is the particle's mass multiplied by the square of the tangential velocity and divided by the rotational radius.
The buoyant force component is in the inward radial direction. It is in the opposite direction to the particle's centrifugal force because it acts on the volume of fluid that the particle displaces. Using the density of the fluid, the buoyant force has the same form as the centrifugal force, but with the fluid's density in place of the particle's density.
In these force expressions, the relevant volume is that of the particle. The outward radial motion of each particle is found by setting Newton's second law of motion equal to the sum of these forces.
To simplify this, we can assume the particle under consideration has reached "terminal velocity", i.e., that its acceleration is zero. This occurs when the radial velocity has caused enough drag force to counter the centrifugal and buoyancy forces. Under this assumption, the force balance can be solved for the outward radial velocity, giving the particle's terminal radial velocity.
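A minimal reconstruction of this force balance, with notation chosen here for illustration (particle radius r_p, particle density ρ_p, fluid density ρ_f, fluid dynamic viscosity μ, tangential velocity V_t, outward radial velocity V_r, and rotational radius r), is:

```latex
F_d = -6\pi\mu r_p V_r, \qquad
F_c = \tfrac{4}{3}\pi r_p^{3}\rho_p \frac{V_t^{2}}{r}, \qquad
F_b = -\tfrac{4}{3}\pi r_p^{3}\rho_f \frac{V_t^{2}}{r}

% Setting F_d + F_c + F_b = 0 (terminal conditions) and solving for V_r:
V_r = \frac{2\, r_p^{2}\,(\rho_p - \rho_f)\, V_t^{2}}{9\,\mu\, r}
```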
Notice that if the density of the fluid is greater than the density of the particle, the motion is (-), toward the center of rotation and if the particle is denser than the fluid, the motion is (+), away from the center. In most cases, this solution is used as guidance in designing a separator, while actual performance is evaluated and modified empirically.
In non-equilibrium conditions, when the radial acceleration is not zero, the general equation from above must be solved. Since the radial velocity is the rate of change of the radial position with time, this becomes a second-order differential equation in the radial coordinate.
Experimentally, the tangential velocity component of the rotational flow is found to vary as a power of the radius. This means that the established feed velocity controls the vortex rate inside the cyclone, and therefore the tangential velocity at an arbitrary radius.
Subsequently, given a value for this velocity profile, possibly based upon the injection angle, and a cutoff radius, a characteristic particle filtering radius can be estimated, above which particles will be removed from the gas stream.
Alternative models
The above equations are limited in many regards. For example, the geometry of the separator is not considered, the particles are assumed to achieve a steady state and the effect of the vortex inversion at the base of the cyclone is also ignored, all behaviours which are unlikely to be achieved in a cyclone at real operating conditions.
More complete models exist, as many authors have studied the behaviour of cyclone separators. Simplified models allowing a quick calculation of the cyclone, with some limitations, have been developed for common applications in process industries. Numerical modelling using computational fluid dynamics has also been used extensively in the study of cyclonic behaviour. A major limitation of any fluid mechanics model for cyclone separators is the inability to predict the agglomeration of fine particles with larger particles, which has a great impact on cyclone collection efficiency.
See also
Centrifuge
Dust collector
Helikon vortex separation process
Hydrocyclone
Hydrodynamic separator
Spark arrestor
Spiral separator
Trickle valve
Notes
References
High Efficiency Horizontal Dust Collection
patent 2377524 (June 1945)
alternate link to cited patent
Solid-gas separation
Vacuum cleaners
Pollution control technologies
Air pollution control systems
Particulate control
Waste treatment technology
Gas technologies
Particle technology
Aerosols | Cyclonic separation | [
"Chemistry",
"Engineering"
] | 1,929 | [
"Separation processes by phases",
"Solid-gas separation",
"Water treatment",
"Chemical engineering",
"Colloids",
"Pollution control technologies",
"Aerosols",
"Environmental engineering",
"Particle technology",
"Waste treatment technology"
] |
766,953 | https://en.wikipedia.org/wiki/Videogrammetry | Videogrammetry is a measurement technology in which the three-dimensional coordinates of points on an object are determined by measurements made in two or more video images taken from different angles. Images can be obtained from two cameras which simultaneously view the object or from successive images captured by the same camera with a view of the object. Videogrammetry is typically used in manufacturing and construction.
See also
Motion capture
Stereophotogrammetry
Structure from motion
Photogrammetry
References
Measurement
Photogrammetry
Stereophotogrammetry
Video | Videogrammetry | [
"Physics",
"Mathematics"
] | 102 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
767,065 | https://en.wikipedia.org/wiki/Rain%20fade | Rain fade refers primarily to the absorption of a microwave radio frequency (RF) signal by atmospheric rain, snow, or ice; such losses are especially prevalent at frequencies above 11 GHz. It also refers to the degradation of a signal caused by the electromagnetic interference of the leading edge of a storm front. Rain fade can be caused by precipitation at the uplink or downlink location. It does not need to be raining at a location for it to be affected by rain fade, as the signal may pass through precipitation many miles away, especially if the satellite dish has a low look angle. From 5% to 20% of rain fade or satellite signal attenuation may also be caused by rain, snow, or ice on the uplink or downlink antenna reflector, radome, or feed horn. Rain fade is not limited to satellite uplinks or downlinks, as it can also affect terrestrial point-to-point microwave links (those on the Earth's surface).
Rain fade is usually estimated experimentally and also can be calculated theoretically using scattering theory of raindrops. Raindrop size distribution (DSD) is an important consideration for studying rain fade characteristics. Various mathematical forms such as Gamma function, lognormal or exponential forms are usually used to model the DSD. Mie or Rayleigh scattering theory with point matching or t-matrix approach is used to calculate the scattering cross section, and specific rain attenuation. Since rain is a non-homogeneous process in both time and space, specific attenuation varies with location, time and rain type.
Total rain attenuation also depends on the spatial structure of the rain field. The horizontal, as well as vertical, extent of rain varies for different rain types and locations. The limit of the vertical rain region is usually assumed to coincide with the 0° isotherm and is called the rain height. The melting layer height is also used as the limit of the rain region and can be estimated from the bright band signature of radar reflectivity. The horizontal rain structure is assumed to have a cellular form, called a rain cell. Rain cell sizes can vary from a few hundred meters to several kilometers, depending on the rain type and location. The existence of very small rain cells has recently been observed in tropical rain.
Rain attenuation on satellite communication links can be predicted using rain attenuation prediction models, which lead to a suitable selection of the Fade Mitigation Technique (FMT). These prediction models require rainfall rate data which, in turn, can be obtained either from predicted rainfall maps, which may yield an inaccurate prediction of rain performance, or from actual measured rainfall data, which gives a more accurate prediction and hence an appropriate selection of the FMT. The altitude of the site above sea level is also an essential factor affecting rain attenuation performance. Satellite system designers and channel providers should account for rain impairments in their channel setup.
Possible ways to overcome the effects of rain fade are site diversity, uplink power control, variable rate encoding, and receiving antennas larger than the requested size for normal weather conditions.
Uplink power control
The simplest way to compensate for the rain fade effect in satellite communications is to increase the transmission power: this dynamic fade countermeasure is called uplink power control (UPC). Until relatively recently, uplink power control had limited use, since it required more powerful transmitters – ones that could normally be run at lower levels and could be increased in power level on command (i.e., automatically). Also, uplink power control could not provide very large signal margins without compressing the transmitting amplifier. Modern amplifiers coupled with advanced uplink power control systems that offer automatic controls to prevent transponder saturation make uplink power control systems an effective, affordable and easy solution to rain fade in satellite signals.
Parallel fail-over links
In terrestrial point to point microwave systems ranging from 11 GHz to 80 GHz, a parallel backup link can be installed alongside a rain fade prone higher bandwidth connection. In this arrangement, a primary link such as an 80 GHz 1 Gbit/s full duplex microwave bridge may be calculated to have a 99.9% availability rate over the period of one year. The calculated 99.9% availability rate means that the link may be down for a cumulative total of ten or more hours per year as the peaks of rain storms pass over the area. A secondary lower bandwidth link such as a 5.8 GHz based 100 Mbit/s bridge may be installed parallel to the primary link, with routers on both ends controlling automatic failover to the 100 Mbit/s bridge when the primary 1 Gbit/s link is down due to rain fade. Using this arrangement, high frequency point to point links (23 GHz+) may be installed to service locations many kilometers farther than could be served with a single link requiring 99.99% uptime over the course of one year.
CCIR interpolation formula
It is possible to extrapolate the cumulative attenuation distribution at a given location by using the CCIR interpolation formula:

Ap = A0.01 · 0.12 · p^−(0.546 + 0.043 log10 p)
where Ap is the attenuation in dB exceeded for a percentage p of the time and A0.01 is the attenuation exceeded for 0.01% of the time.
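For example, taking an illustrative measured value of A0.01 = 10 dB, the attenuation exceeded for p = 0.1% of the time would be estimated as:

```latex
A_{0.1} = 10\ \mathrm{dB} \times 0.12 \times 0.1^{-(0.546 + 0.043\log_{10} 0.1)}
        = 1.2\ \mathrm{dB} \times 0.1^{-0.503} \approx 3.8\ \mathrm{dB}
```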
ITU-R frequency scaling formula
According to the ITU-R, rain attenuation statistics can be scaled in frequency in the range 7 to 55 GHz using a long-term frequency-scaling formula expressed in terms of the frequency f in GHz.
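A sketch of this scaling relation, in the form given in ITU-R Recommendation P.618, relates the attenuations A1 and A2 exceeded with equal probability at frequencies f1 and f2; the coefficients quoted here should be treated as indicative and checked against the current edition of the recommendation:

```latex
A_2 = A_1\left(\frac{\Phi_2}{\Phi_1}\right)^{1-H(\Phi_1,\Phi_2,A_1)}, \qquad
\Phi(f) = \frac{f^{2}}{1+10^{-4}f^{2}}, \qquad
H(\Phi_1,\Phi_2,A_1) = 1.12\times10^{-3}\left(\frac{\Phi_2}{\Phi_1}\right)^{0.5}\left(\Phi_1 A_1\right)^{0.55}
```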
See also
Fresnel zone
Diversity scheme
Drop size distribution (DSD)
IndoStar-1, first direct broadcasting satellite that used S-Band that can efficiently reduce rain fade
S-band
References
Radio frequency propagation fading
Satellite broadcasting | Rain fade | [
"Engineering"
] | 1,171 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
767,086 | https://en.wikipedia.org/wiki/Cotinine | Cotinine is an alkaloid found in tobacco and is also the predominant metabolite of nicotine, typically used as a biomarker for exposure to tobacco smoke. Cotinine is currently being studied as a treatment for depression, post-traumatic stress disorder (PTSD), schizophrenia, Alzheimer's disease and Parkinson's disease. Cotinine was developed as an antidepressant as a fumaric acid salt, cotinine fumarate, to be sold under the brand name Scotine, but it was never marketed.
Similarly to nicotine, cotinine binds to, activates, and desensitizes neuronal nicotinic acetylcholine receptors, though at much lower potency in comparison. It has demonstrated nootropic and antipsychotic-like effects in animal models. Cotinine treatment has also been shown to reduce depression, anxiety, and fear-related behavior as well as memory impairment in animal models of depression, post-traumatic stress disorder, and Alzheimer's disease. Nonetheless, treatment with cotinine in humans was reported to have no significant physiologic, subjective, or performance effects in one study, though others suggest that this may not be the case.
Because cotinine is the main metabolite of nicotine and has been shown to be pharmacologically active, it has been suggested that some of nicotine's effects in the nervous system may be mediated by cotinine and/or complex interactions with nicotine itself.
Pharmacology
A few studies indicate that the affinity for cotinine to the nicotinic acetylcholine receptors (nAChRs) is about 100 times lower than nicotine's. Some work suggests that cotinine may be a positive allosteric modulator of α7 nAChRs. If this is true, cotinine would facilitate endogenous neurotransmission without directly stimulating nAChRs.
Pharmacokinetics
Cotinine has an in vivo half-life of approximately 20 hours, and is typically detectable for several days (up to one week) after the use of tobacco. The level of cotinine in the blood, saliva, and urine is proportionate to the amount of exposure to tobacco smoke, so it is a valuable indicator of tobacco smoke exposure, including secondary (passive) smoke. People who smoke menthol cigarettes may retain cotinine in the blood for a longer period because menthol can compete with enzymatic metabolism of cotinine. African American smokers generally have higher plasma cotinine levels than Caucasian smokers. Males generally have higher plasma cotinine levels than females. These systematic differences in cotinine levels were attributed to variation in CYP2A6 activity. At steady state, plasma cotinine levels are determined by the amount of cotinine formation and the rate of cotinine removal, which are both mediated by the enzyme CYP2A6. Since CYP2A6 activity differs by sex (estrogen induces CYP2A6) and genetic variation, cotinine accumulates in individuals with slower CYP2A6 activity, resulting in substantial differences in cotinine levels for a given tobacco exposure.
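As a rough illustration, assuming simple first-order elimination with a 20-hour half-life and ignoring individual variation, the fraction of cotinine remaining after a time t is:

```latex
\frac{C(t)}{C_0} = \left(\tfrac{1}{2}\right)^{t/20\,\mathrm{h}}, \qquad
\frac{C(96\,\mathrm{h})}{C_0} = \left(\tfrac{1}{2}\right)^{4.8} \approx 0.04
```

so only a few percent of an initial level remains after four days, consistent with the detection window of several days noted above.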
Detection in body fluids
Drug tests can detect cotinine in the blood, urine, or saliva. Salivary cotinine concentrations are highly correlated to blood cotinine concentrations, and can detect cotinine in a low range, making it the preferable option for a less invasive method of tobacco exposure testing. Urine cotinine concentrations average four to six times higher than those in blood or saliva, making urine a more sensitive matrix to detect low-concentration exposure.
Cotinine levels <10 ng/mL are considered to be consistent with no active smoking. Values of 10 ng/mL to 100 ng/mL are associated with light smoking or moderate passive exposure, and levels above 300 ng/mL are seen in heavy smokers — more than 20 cigarettes a day. In urine, values between 11 ng/mL and 30 ng/mL may be associated with light smoking or passive exposure, and levels in active smokers typically reach 500 ng/mL or more. In saliva, values between 1 ng/mL and 30 ng/mL may be associated with light smoking or passive exposure, and levels in active smokers typically reach 100 ng/mL or more. Cotinine assays provide an objective quantitative measure that is more reliable than smoking histories or counting the number of cigarettes smoked per day. Cotinine also permits the measurement of exposure to second-hand smoke (passive smoking).
However, tobacco users attempting to quit with the help of nicotine replacement therapies (i.e., gum, lozenge, patch, inhaler, and nasal spray) will also test positive for cotinine, since all common NRT therapies contain nicotine that is metabolized in the same way. Therefore, the presence of cotinine is not a conclusive indication of tobacco use. Cotinine levels can be used in research to explore the question of the amount of nicotine delivered to the user of e-cigarettes, where laboratory smoking machines have many problems replicating real-life conditions.
Serum cotinine concentration has been used for decades in US population surveys of the Centers for Disease Control and Prevention to monitor tobacco use, to monitor levels and trends in exposure to environmental tobacco smoke, and to study the relationship between tobacco smoke and chronic health conditions. An estimated one in four nonsmokers (approximately 58 million persons) were exposed to secondhand smoke during 2013-2014. Nearly 40% of children aged 3–11 years were exposed as were 50% of non-Hispanic blacks.
References
Pyrrolidones
Alkaloids found in Nicotiana
Nicotinic agonists
Pyridine alkaloids
Recreational drug metabolites
Biomarkers
3-Pyridyl compounds | Cotinine | [
"Chemistry",
"Biology"
] | 1,223 | [
"Pyridine alkaloids",
"Alkaloids by chemical classification",
"Biomarkers"
] |
767,142 | https://en.wikipedia.org/wiki/Access%20Grid | Access Grid is a collection of resources and technologies that enables large format audio and video based collaboration between groups of people in different locations. The Access Grid is an ensemble of resources, including multimedia large-format displays, presentation and interactive environments, and interfaces with grid computing middleware and visualization environments. In simple terms, it is advanced videoconferencing using big displays and with multiple simultaneous camera feeds at each node (site). The technology was invented at Argonne National Laboratory, Chicago.
The "Alliance Chautauqua 99", a series of two-day conferences on computational science organised by the NCSA, was the first large-scale Access Grid event. The Access Grid was later demonstrated at Supercomputing'99 in Portland to an international audience.
There are well over 500 nodes around the world that allow for various forms of creative and academic collaboration. Access Grid users tend to use XMPP as their text-based back-end. Indeed, the new version of the Access Grid Toolkit integrates an XMPP client with the Access Grid software.
International Access Grid
Australia and New Zealand
The Access Grid has generated interest and activity in Australia, where factors such as widely disparate geographic locations and relatively low population-densities have previously presented great obstacles to "in-person" collaborations.
The International Centre of Excellence for Education in Mathematics (ICE-EM) have funded 10 Australian universities to construct nodes. The nodes allow the mathematics postgraduate community and professionals access to international experts who are visiting Australia. The nodes also provide a means of carrying out collaborative research with peers within Australia and internationally.
Australia's first Access Grid node was built at Sydney VisLab at the Australian Technology Park in August 2001.
By 2007 the Australian AG network has grown to more than 30 sites serviced by Asia Pacific Access Grid (APAG) venue servers at University of Sydney (AG2) and University of Queensland (AG3).
The University of Queensland began providing AG facilities in 2002, with increasing usage every year since then. In 2004, the UQ Vislab began providing the Access Grid installation packages for various Linux distributions, as well as FreeBSD, to the wider AG community, although intellectual property concerns have placed the future of the Linux-based technologies into doubt. It has also been active in developing various enhancements and add-ons, including a shared application for remote sensor monitoring (Remote Thermo) and a shared GIS application based on GRASS.
By December 2006 each New Zealand university has an operational AG node, and use of the grid is increasing.
Current development work includes federated data management using the Storage Resource Broker (SRB) and high-definition video communications.
United Kingdom
UK academic community support for Access Grid Toolkit, IOCOM and EVO technologies on JANET is provided by the JANET Videoconferencing Management Centre.
The first Access Grid (AG) node was built at the University of Manchester in 2001, with Jisc-funded support from the Access Grid Support Centre (AGSC) in Manchester from April 2004 to July 2011. There are now over three hundred AG nodes registered in the UK, ranging from full room nodes to small individual desktop nodes.
There are a number of academic projects using AG technologies such as the Taught Course Centre and MAGIC (postgraduate mathematics) mathematics projects.
References
External links
https://web.archive.org/web/20040627082707/http://www.accessgrid.org/
https://web.archive.org/web/20070227172802/http://www.vislab.uq.edu.au/research/accessgrid/
http://www.iocom.com
Multimedia
Grid computing | Access Grid | [
"Technology"
] | 760 | [
"Multimedia"
] |
767,350 | https://en.wikipedia.org/wiki/Nitrophosphate%20process | The nitrophosphate process (also known as the Odda process) is a method for the industrial production of nitrogen fertilizers invented by Erling Johnson in the municipality of Odda, Norway around 1927.
The process involves acidifying phosphate rock with dilute nitric acid to produce a mixture of phosphoric acid and calcium nitrate.
Ca5(PO4)3OH + 10 HNO3 -> 3 H3PO4 + 5 Ca(NO3)2 + H2O
The mixture is cooled to below 0 °C, where the calcium nitrate crystallizes and can be separated from the phosphoric acid.
2 H3PO4 + 3 Ca(NO3)2 + 12 H2O -> 2 H3PO4 + 3 Ca(NO3)2·4H2O
The resulting calcium nitrate produces nitrogen fertilizer. The filtrate is composed mainly of phosphoric acid with some nitric acid and traces of calcium nitrate, and this is neutralized with ammonia to produce a compound fertilizer.
Ca(NO3)2 + 4 H3PO4 + 8 NH3 -> CaHPO4 + 2 NH4NO3 + 3(NH4)2HPO4
If potassium chloride or potassium sulfate is added, the result will be NPK fertilizer. The process was an innovation for requiring neither the expensive sulfuric acid nor producing gypsum waste (known in the context of phosphate production as phosphogypsum).
The calcium nitrate mentioned above can be worked up as calcium nitrate fertilizer, but it is often converted into ammonium nitrate and calcium carbonate using carbon dioxide and ammonia.
Ca(NO3)2 + 2 NH3 + CO2 + H2O -> 2 NH4NO3 + CaCO3
Both products can be worked up together as straight nitrogen fertilizer.
Although Johnson created the process while working for the Odda Smelteverk, his company never employed it. Instead, it licensed the process to Norsk Hydro, BASF, Hoechst, and DSM. Each of these companies used the process, introduced variations, and licensed it to other companies. Today, only a few companies (e.g. Yara (Norsk Hydro), Acron, EuroChem, Borealis Agrolinz Melamine GmbH, Omnia, GNFC) still use the Odda process. Due to the alterations of the process by the various companies who employed it, the process is now generally referred to as the nitrophosphate process.
References
Chemical processes
Norwegian inventions | Nitrophosphate process | [
"Chemistry"
] | 540 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
767,937 | https://en.wikipedia.org/wiki/Consensus%20sequence | In molecular biology and bioinformatics, the consensus sequence (or canonical sequence) is the calculated sequence of most frequent residues, either nucleotide or amino acid, found at each position in a sequence alignment. It represents the results of multiple sequence alignments in which related sequences are compared to each other and similar sequence motifs are calculated. Such information is important when considering sequence-dependent enzymes such as RNA polymerase.
Biological significance
A protein binding site, represented by a consensus sequence, may be a short sequence of nucleotides which is found several times in the genome and is thought to play the same role in its different locations. For example, many transcription factors recognize particular patterns in the promoters of the genes they regulate. In the same way, restriction enzymes usually have palindromic consensus sequences, usually corresponding to the site where they cut the DNA. Transposons act in much the same manner in their identification of target sequences for transposition. Finally, splice sites (sequences immediately surrounding the exon-intron boundaries) can also be considered as consensus sequences.
Thus a consensus sequence is a model for a putative DNA binding site: it is obtained by aligning all known examples of a certain recognition site and is defined as the idealized sequence that represents the predominant base at each position. None of the actual examples should differ from the consensus by more than a few substitutions, but counting mismatches in this way can lead to inconsistencies.
Any mutation allowing a mutated nucleotide in the core promoter sequence to look more like the consensus sequence is known as an up mutation. This kind of mutation will generally make the promoter stronger, and thus the RNA polymerase forms a tighter bind to the DNA it wishes to transcribe and transcription is up-regulated. On the contrary, mutations that destroy conserved nucleotides in the consensus sequence are known as down mutations. These types of mutations down-regulate transcription since RNA polymerase can no longer bind as tightly to the core promoter sequence.
Sequence analysis
Developing software for pattern recognition is a major topic in genetics, molecular biology, and bioinformatics. Specific sequence motifs can function as regulatory sequences controlling biosynthesis, or as signal sequences that direct a molecule to a specific site within the cell or regulate its maturation. Since the regulatory function of these sequences is important, they are thought to be conserved across long periods of evolution. In some cases, evolutionary relatedness can be estimated by the amount of conservation of these sites.
Notation
The conserved sequence motifs are called consensus sequences and they show which residues are conserved and which residues are variable. Consider the following example DNA sequence:
A[CT]N{A}YR
In this notation, A means that an A is always found in that position; [CT] stands for either C or T; N stands for any base; and {A} means any base except A. Y represents any pyrimidine, and R indicates any purine.
In this example, the notation [CT] does not give any indication of the relative frequency of C or T occurring at that position, and it is not possible to convey that information in a single consensus sequence such as ACNCCA. An alternative method of representing a consensus sequence uses a sequence logo. This is a graphical representation of the consensus sequence, in which the size of a symbol is related to the frequency that a given nucleotide (or amino acid) occurs at a certain position. In sequence logos, the more conserved the residue, the larger the symbol for that residue is drawn; the less frequent, the smaller the symbol. Sequence logos can be generated using WebLogo, or using the Gestalt Workbench, a publicly available visualization tool written by Gustavo Glusman at the Institute for Systems Biology.
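A minimal sketch of how a consensus is computed from a multiple sequence alignment (the sequences below are invented for illustration): the most frequent residue is taken at each column, which, as noted above, discards the frequency information that a sequence logo retains.

```python
from collections import Counter

# Toy multiple sequence alignment (equal-length, invented sequences)
alignment = [
    "ACGTA",
    "ACGCA",
    "ATGTA",
    "ACGTT",
]

consensus = []
for column in zip(*alignment):             # iterate over alignment columns
    counts = Counter(column)
    residue, _ = counts.most_common(1)[0]  # most frequent residue in the column
    consensus.append(residue)

print("".join(consensus))  # ACGTA: the plain consensus keeps no frequency information
```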
Software
Bioinformatics tools are able to calculate and visualize consensus sequences. Examples of the tools are JalView and UGENE.
See also
Position-specific scoring matrix
Regular expression — denoting multiple sequences of symbols in formal language theory
Sequence motif
Sequence logo
References
Bioinformatics
DNA | Consensus sequence | [
"Engineering",
"Biology"
] | 831 | [
"Bioinformatics",
"Biological engineering"
] |
8,116,008 | https://en.wikipedia.org/wiki/Cohn%20process | The Cohn process, developed by Edwin J. Cohn, is a series of purification steps with the purpose of extracting albumin from blood plasma. The process is based on the differential solubility of albumin and other plasma proteins based on pH, ethanol concentration, temperature, ionic strength, and protein concentration. Albumin has the highest solubility and lowest isoelectric point of all the major plasma proteins. This makes it the final product to be precipitated, or separated from its solution in a solid form. Albumin was an excellent substitute for human plasma in World War Two. When administered to wounded soldiers or other patients with blood loss, it helped expand the volume of blood and led to speedier recovery. Cohn's method was gentle enough that isolated albumin protein retained its biological activity.
Process details
During the operations, the ethanol concentration changes from zero initially to 40%. The pH decreases from neutral at 7 to more acidic at 4.8 over the course of the fractionation. The temperature starts at room temperature and decreases to −5 degrees Celsius. Initially, the blood is frozen. There are five major fractions, and each fraction ends with a specific precipitate; these precipitates are the separate fractions.
Fractions I, II, and III are precipitated out at earlier stages. The conditions of these stages are 8% ethanol, pH 7.2, −3 °C, and 5.1% protein for Fraction I, and 25% ethanol, pH 6.9, −5 °C, and 3% protein for Fractions II and III. The albumin remains in the supernatant during the solid/liquid separation under these conditions. Fraction IV contains several unwanted proteins that need to be removed. To do this, the conditions are varied in order to precipitate these proteins out: the ethanol concentration is raised from 18 to 40% and the pH from 5.2 to 5.8.
However, albumin is lost at each process stage, with roughly 20% of the albumin lost through the precipitation stages before Fraction V. To purify the albumin, it is extracted with water and the solution is adjusted to 10% ethanol and pH 4.5 at −3 °C. Any precipitate that forms here is an impurity; it is collected by filtration and discarded. Reprecipitation, or repetition of the precipitation step to improve purity, is done by raising the ethanol concentration from the extraction stage back to 40%, at pH 5.2 and −5 °C. Several variations of the Cohn fractionation were created to lower cost and raise yield. Generally, if the yield is high, the purity is lowered, to roughly 85–90%.
Products other than albumin
Cohn was able to start the Plasma Fractionation Laboratory after he was given massive funding from the government agencies and the private pharmaceutical companies. This led to the fractionation of human plasma. Human plasma proved to have several useful components other than albumin. Human blood plasma fractionation yielded human serum albumin, serum gamma globulin, fibrinogen, thrombin, and blood group globulins. The fibrinogen and thrombin fractions were further combined during the War into additional products, including liquid fibrin sealant, solid fibrin foam and a fibrin film.
Gamma globulins are found in Fractions II and III and proved to be essential in treating measles for soldiers. Gamma globulin also was useful in treatment for polio, but did not have much effect in treating mumps or scarlet fever. Most importantly, the gamma globulins were useful in modifying and preventing infectious hepatitis during the Second World War. It eventually became a treatment for children exposed to this type of hepatitis.
Liquid fibrin sealant was used in treating burn victims, including some from the attack at Pearl Harbor, to attach skin grafts with an increased success rate. It was also found helpful at re-connecting or anastomosing severed nerves. Fibrin foam and thrombin were used to control blood vessel oozing especially in liver injuries and near tumors. It also minimized bleeding from large veins as well as dealing with blood vessel malformations within the brain. Fibrin film was used to stop bleeding in various surgical applications, including neurosurgery. However, it was not useful in controlling arterial bleeding. The first fibrinogen/fibrin based product capable of stopping arterial hemorrhage was the "Fibrin Sealant Bandage" or "Hemostatic Dressing (HD)" invented by Martin MacPhee at the American Red Cross in the early 1990s, and tested in collaboration with the U.S. Army.
Process variations
The Gerlough method, developed in 1955, improved process economics by reducing the consumption of ethanol. Instead of 40% in certain steps, Gerlough used 20% ethanol for precipitation, particularly for Fractions II and III. In addition, Gerlough combined these two fractions with Fraction IV into one step to reduce the number of fractionations required. While this method proved less expensive, it was not adopted by industry because the combination of Fractions II, III, and IV raised fears of mixing and high impurity levels.
The Hink method was developed in 1957. It gave higher yields through recovery of some of the plasma proteins discarded in Fraction IV. The improved yields, however, were balanced by the lower purities obtained, in the 85% range.
The Mulford method, akin to the Hink method, used the Fraction II and III supernatant as the last step before finishing and heat treatment. The method combined Fractions IV and V; in this case the albumin is not as pure, although the yields may be higher.
Another variation was developed by Kistler and Nitschmann to provide a purer form of albumin, even though offset by lower yields. Similar to Gerlough, Precipitate A, which is equivalent to Cohn's Fractions II and III, was formed at a lower ethanol concentration of 19%, but the pH in this case was also lowered, to 5.85. Also similar to Gerlough and Mulford, Fraction IV was combined and precipitated at 40% ethanol, pH 5.85, and −8 degrees C. The albumin, corresponding to Fraction V, is recovered in Precipitate C by adjusting the pH to 4.8. As in the Cohn Process, the albumin is purified by extraction into water followed by precipitation of the impurities at 10% ethanol, pH 4.6, and −3 degrees C; the precipitate formed here is filtered out and discarded. Precipitate C (Fraction V) is then reprecipitated at pH 5.2 and stored as a paste at −40 degrees C. This process has been more widely accepted because it separates the fractions and makes each stage independent of the others.
Another variation involved heat ethanol fractionation, originally developed to inactivate the hepatitis virus. In this process, recovery of high-yield, high-purity albumin is the main goal, while the other plasma proteins are neglected. To ensure the albumin does not denature in the heat, stabilizers such as sodium octanoate are added, which allow the albumin to tolerate higher temperatures for long periods. In the heat ethanol process, the plasma is heat treated at 68 degrees C with sodium octanoate and 9% ethanol at pH 6.5. This results in improved albumin recovery, with yields of 90% and purities of 100%. It is not nearly as expensive as cold ethanol procedures such as the Cohn Process. One drawback is the presence of new antigens due to possible heat denaturation of the albumin. In addition, the other plasma proteins have practical uses, so neglecting them is a significant loss. Finally, the expensive heat-treatment vessels offset the lower cost compared with cold ethanol processes, which do not need them. For these reasons, several companies have not adopted this method even though it has the most impressive results. However, one prominent organization that uses it is the German Red Cross.
The latest variation was developed by Hao in 1979. This method is significantly simplified compared to the Cohn Process; its goal is to obtain high albumin yields when albumin is the sole product. Through a two-stage process, impurities are precipitated directly from the Fraction II and III supernatant at 42% ethanol, pH 5.8, −5 degrees C, 1.2% protein, and 0.09 ionic strength, and Fraction V is precipitated at pH 4.8. Fractions I, II, III, and IV are coprecipitated at 40% ethanol, pH 5.4 to 7.0, and −3 to −7 degrees C; Fraction V is then precipitated at pH 4.8 and −10 degrees C. The high yields result from the simplified process, lower losses due to coprecipitation, and the use of filtration. Purities of 98% were also achieved because of the higher ethanol levels, though at the cost of lower yields.
More recent methods involve the use of chromatography.
Influences of Cohn process
The Cohn process was a major development in the field of blood fractionation. It has several practical uses in treating diseases such as hepatitis and polio. It was most useful during the Second World War, when soldiers recovered faster because of transfusions with albumin. The Cohn Process has been modified over the years, as seen above. In addition, it has influenced other processes within the blood fractionation industry, leading to new forms of fractionation such as chromatographic plasma fractionation in ion exchange and albumin finishing processes. In general, the Cohn Process and its variations have given a huge boost to, and serve as a foundation for, the fractionation industry to this day.
However, the process has not been studied well because it is archaic. Most importantly, it has never been modernized by manufacturing companies. The cold ethanol format may be too gentle to kill off certain viruses that require heat inactivation. Because the process has remained unchanged for so long, several built-in inefficiencies and inconsistencies affect its economics for pharmaceutical and manufacturing companies. One exception to this was the application in Scotland of continuous-flow processing instead of batch processing. This process was devised at the Protein Fractionation Centre (PFC), the plasma fractionation facility of the Scottish National Blood Transfusion Service (SNBTS). It involved in-line monitoring and control of pH and temperature, with flow control of plasma and ethanol streams using precision gear pumps, all under computerised feedback control. As a result, Cohn Fractions I+II+III, IV and V were produced in a few hours, rather than over many days. The continuous-flow preparation of cryoprecipitate was subsequently integrated into the process upstream of Cohn fractionation.
Nevertheless, this process still serves as a major foundation for the blood industry in general and its influence can be seen as it is referred to in the development of newer methods. Although it has its drawbacks depending on the variation, the Cohn Process’ main advantage is its practical uses and its utility within pharmacological and medical industries.
References
Biochemical separation processes
Blood
Blood products
Industrial processes
Medical technology
Transfusion medicine
Fractionation | Cohn process | [
"Chemistry",
"Biology"
] | 2,513 | [
"Biochemistry methods",
"Fractionation",
"Separation processes",
"Biochemical separation processes",
"Medical technology"
] |
8,117,002 | https://en.wikipedia.org/wiki/Newton%E2%80%93Wigner%20localization | Newton–Wigner localization (named after Theodore Duddell Newton and Eugene Wigner) is a scheme for obtaining a position operator for massive relativistic quantum particles. It is known to largely conflict with the Reeh–Schlieder theorem outside of a very limited scope.
The Newton–Wigner position operators X₁, X₂, X₃ are the premier notion of position in relativistic quantum mechanics of a single particle. They enjoy the same commutation relations with the three space momentum operators and transform under rotations in the same way as the position operators x, y, z in ordinary QM. Though formally they have the same properties with respect to the momenta P₁, P₂, P₃ as the position in ordinary QM, they have additional properties. One of these is that the Heisenberg equation of motion gives

dXₖ/dt = Pₖ c² / H   (k = 1, 2, 3),

which ensures that the free particle moves at the expected velocity for the given momentum/energy.
Apparently these notions were discovered when attempting to define a self-adjoint operator in the relativistic setting that resembled the position operator in basic quantum mechanics, in the sense that at low momenta it approximately agreed with that operator. The Newton–Wigner operators also exhibit several famously strange behaviors (see the Hegerfeldt theorem in particular), one of which is seen as motivation for having to introduce quantum field theory.
References
M.H.L. Pryce, Proc. Roy. Soc. 195A, 62 (1948)
V. Bargmann and E. P. Wigner, Proc Natl Acad Sci USA 34, 211-223 (1948).
Valter Moretti, On the relativistic spatial localization for massive real scalar Klein–Gordon quantum particles Lett Math Phys 113, 66 (2023).
Quantum field theory
Axiomatic quantum field theory | Newton–Wigner localization | [
"Physics"
] | 358 | [
"Quantum field theory",
"Quantum mechanics",
"Quantum physics stubs"
] |
8,120,812 | https://en.wikipedia.org/wiki/Dobson%20ozone%20spectrophotometer | The Dobson spectrophotometer, also known as Dobsonmeter, Dobson spectrometer, or just Dobson is one of the earliest instruments used to measure atmospheric ozone.
History
The Dobson spectrometer was invented in 1924 by British physicist and meteorologist Gordon Dobson. A history of the development of the instrument has been published, and an example of one of Dobson's own instruments remains on display in the University of Oxford Department of Physics.
Operation
Dobson spectrophotometers can be used to measure both total column ozone and profiles of ozone in the atmosphere. Ozone is tri-atomic oxygen, O3; ozone molecules absorb harmful UV light in the atmosphere before it reaches the surface of the earth. No UVC radiation penetrates to the ground, as it is absorbed in the ozone-oxygen cycle. However, some longer-wave and less harmful UVB and most of the UVA are not absorbed, as ozone is less opaque to these frequencies, so they penetrate to ground level in higher quantities. The light source used may vary: besides direct sunlight, light from the clear sky, the moon, or stars may be used.
The Dobson spectrometer measures the total ozone by measuring the relative intensity of the UVB radiation that reaches the Earth and comparing it to that of UVA radiation at ground level. If all of the ozone were removed from the atmosphere, the amount of UVB radiation would equal the amount of UVA radiation on the ground. As ozone does exist in the atmosphere, the Dobson Spectrometer can use the ratio between UVA and UVB radiation on the ground to determine how much ozone is present in the upper atmosphere to absorb the UVC radiation.
The ratio is determined by turning the R-dial, which can be rotated a full 300°, on the instrument. The spectrometer compares two different wavelength intensities, UVB (305 nm) and UVA (325 nm), in order to calculate the amount of ozone. When turned, the R-dial filters and blocks out the light of the UVA wavelength until the intensity of the two wavelengths of light are equal. The ratio of the two wavelengths at incidence can be calculated once the filtered intensities are the same. The results are measured in Dobson Units, equal to 10 μm thickness of ozone compressed to Standard conditions for temperature and pressure (STP) in the column. If all of the ozone in the atmospheric column one was measuring were compressed to STP, the thickness of the compressed atmosphere in mm would equal an answer in Dobson Units divided by 100.
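The Dobson-unit arithmetic in the previous paragraph can be checked with a couple of lines of code; the conversion (1 DU = 0.01 mm of compressed ozone) follows directly from the definition above, while the column amounts used are only illustrative example values.

```python
def dobson_units_to_mm(du):
    """Thickness in mm of the ozone column if compressed to STP.
    One Dobson Unit is defined as a 10 micrometre (0.01 mm) layer."""
    return du / 100.0

print(dobson_units_to_mm(300))  # 3.0 mm for a typical 300 DU column
print(dobson_units_to_mm(220))  # 2.2 mm; 220 DU is the conventional ozone-hole threshold
```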
The vertical distribution of ozone is derived using the Umkehr method. This method relies on the intensities of reflected, rather than direct, UV light. Ozone distribution is derived from the change in the ratio of the same UV-pair frequencies with time as the sun sets. An "Umkehr" measurement takes about three hours, and provides data up to an altitude of 48 km, with the most accurate information for altitudes above 30 km.
The Dobson method has its drawbacks. It is strongly affected by aerosols and pollutants in the atmosphere, because they also absorb some of the light at the same wavelength. Measurements are made over a small area in the direction of the sun. Today this method is often used to calibrate data obtained by other methods, including satellites.
Instruments and manufacturers
Some modernized versions of Dobson spectrophotometer exist and continue to provide data.
About 120 Dobsonmeters have been made, mostly by R&J Beck of London, of which about 50 remain in use today. The most famous ones are probably Nos. 31 and 51, with which Joe Farman of the British Antarctic Survey discovered the ozone hole above the South Pole in 1984. The "World Standard Dobson", No. 83, is owned and operated by the US Department of Commerce's NOAA, as is the secondary standard, No. 65.
The oldest instrument still in use is No. 8, located on the roof of the Norwegian Polar Institute at Ny-Ålesund, Svalbard. The last data reported from this instrument are for 1997.
The instrument D003, operated in Kunming, China reported data to August 2009. The history of the stations and instruments can be found at the World Ozone and UV Data Centre.
Environment Canada (Alan West Brewer) developed the double- and single-monochromator spectrophotometers known as the "Brewer" spectrophotometer, produced by Kipp & Zonen.
References
Further reading
New Scientist, 20 September 2008
Spectrometers
Meteorological instrumentation and equipment
Ozone | Dobson ozone spectrophotometer | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 933 | [
"Meteorological instrumentation and equipment",
"Spectrum (physical sciences)",
"Oxidizing agents",
"Measuring instruments",
"Ozone",
"Spectrometers",
"Spectroscopy"
] |
8,121,479 | https://en.wikipedia.org/wiki/Quasi-set%20theory | Quasi-set theory is a formal mathematical theory for dealing with collections of objects, some of which may be indistinguishable from one another. Quasi-set theory is mainly motivated by the assumption that certain objects treated in quantum physics are indistinguishable and don't have individuality.
Motivation
The American Mathematical Society sponsored a 1974 meeting to evaluate the resolution and consequences of the 23 problems Hilbert proposed in 1900. An outcome of that meeting was a new list of mathematical problems, the first of which, due to Manin (1976, p. 36), questioned whether classical set theory was an adequate paradigm for treating collections of indistinguishable elementary particles in quantum mechanics. He suggested that such collections cannot be sets in the usual sense, and that the study of such collections required a "new language".
The use of the term quasi-set follows a suggestion in da Costa's 1980 monograph Ensaio sobre os Fundamentos da Lógica (see da Costa and Krause 1994), in which he explored possible semantics for what he called "Schrödinger Logics". In these logics, the concept of identity is restricted to some objects of the domain, and has motivation in Schrödinger's claim that the concept of identity does not make sense for elementary particles (Schrödinger 1952). Thus in order to provide a semantics that fits the logic, da Costa submitted that "a theory of quasi-sets should be developed", encompassing "standard sets" as particular cases, yet da Costa did not develop this theory in any concrete way. To the same end and independently of da Costa, Dalla Chiara and di Francia (1993) proposed a theory of quasets to enable a semantic treatment of the language of microphysics. The first quasi-set theory was proposed by D. Krause in his PhD thesis, in 1990 (see Krause 1992). A related physics theory, based on the logic of adding fundamental indistinguishability to equality and inequality, was developed and elaborated independently in the book The Theory of Indistinguishables by A. F. Parker-Rhodes.
Summary of the theory
We now expound Krause's (1992) axiomatic theory Q, the first quasi-set theory; other formulations and improvements have since appeared. For an updated paper on the subject, see French and Krause (2010). Krause builds on the set theory ZFU, consisting of Zermelo–Fraenkel set theory with an ontology extended to include two kinds of urelements:
m-atoms, whose intended interpretation is elementary quantum particles;
M-atoms, macroscopic objects to which classical logic is assumed to apply.
Quasi-sets (q-sets) are collections resulting from applying axioms, very similar to those for ZFU, to a basic domain composed of m-atoms, M-atoms, and aggregates of these. The axioms of Q include equivalents of extensionality, but in a weaker form, termed the "weak extensionality axiom"; axioms asserting the existence of the empty set, unordered pair, union set, and power set; the axiom of separation; an axiom stating the image of a q-set under a q-function is also a q-set; and q-set equivalents of the axioms of infinity, regularity, and choice. Q-set theories based on other set-theoretical frameworks are, of course, possible.
Q has a primitive concept of quasi-cardinal, governed by eight additional axioms, intuitively standing for the quantity of objects in a collection. The quasi-cardinal of a quasi-set is not defined in the usual sense (by means of ordinals) because the m-atoms are assumed (absolutely) indistinguishable. Furthermore, it is possible to define a translation from the language of ZFU into the language of Q in such a way that there is a 'copy' of ZFU in Q. In this copy, all the usual mathematical concepts can be defined, and the 'sets' (in reality, the 'Q-sets') turn out to be those q-sets whose transitive closure contains no m-atoms.
In Q there may exist q-sets, called "pure" q-sets, whose elements are all m-atoms, and the axiomatics of Q provides the grounds for saying that nothing in Q distinguishes the elements of a pure q-set from one another, for certain pure q-sets. Within the theory, the idea that there is more than one entity in x is expressed by an axiom stating that the power quasi-set of x has quasi-cardinal 2^qc(x), where qc(x) is the quasi-cardinal of x (which is a cardinal obtained in the 'copy' of ZFU just mentioned).
What exactly does this mean? Consider the level 2p of a sodium atom, in which there are six indiscernible electrons. Even so, physicists reason as if there are in fact six entities in that level, and not only one. In this way, by saying that the quasi-cardinal of the power quasi-set of x is 2^qc(x) (suppose that qc(x) = 6 to follow the example), we are not excluding the hypothesis that there can exist six subquasi-sets of x that are 'singletons', although we cannot distinguish among them. Whether or not there are six elements in x is something that cannot be ascribed by the theory (although the notion is compatible with the theory). If the theory could answer this question, the elements of x would be individualized and hence counted, contradicting the basic assumption that they cannot be distinguished.
In other words, we may consistently (within the axiomatics of Q) reason as if there are six entities in x, but x must be regarded as a collection whose elements cannot be discerned as individuals. Using quasi-set theory, we can express some facts of quantum physics without introducing symmetry conditions (Krause et al. 1999, 2005). As is well known, in order to express indistinguishability, the particles are deemed to be individuals, say by attaching them to coordinates or to adequate functions/vectors like |ψ>. Thus, given two quantum systems labeled |ψ1⟩ and |ψ2⟩ at the outset, we need to consider a function like |ψ12⟩ = |ψ1⟩|ψ2⟩ ± |ψ2⟩|ψ1⟩ (up to certain constants), which keeps the quanta indistinguishable under permutations; the probability density of the joint system is independent of which is quantum #1 and which is quantum #2. (Note that precision requires that we talk of "two" quanta without distinguishing them, which is impossible in conventional set theories.) In Q, we can dispense with this "identification" of the quanta; for details, see Krause et al. (1999, 2005) and French and Krause (2006).
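The permutation argument sketched above can be checked numerically. The following sketch, using NumPy with arbitrarily chosen single-particle states, builds the symmetrized and antisymmetrized two-particle vectors and verifies that swapping the two labels leaves the probability density unchanged; it is an illustration, not part of quasi-set theory itself.

```python
import numpy as np

# Arbitrary normalized single-particle states (illustrative only).
psi1 = np.array([1.0, 0.0])
psi2 = np.array([1.0, 1.0]) / np.sqrt(2)

def joint(a, b, sign):
    """|ab> + sign*|ba>, normalized: the (anti)symmetrized two-particle state."""
    v = np.kron(a, b) + sign * np.kron(b, a)
    return v / np.linalg.norm(v)

for sign in (+1, -1):                      # bosonic (+) and fermionic (-) cases
    v12 = joint(psi1, psi2, sign)
    v21 = joint(psi2, psi1, sign)          # relabel "particle 1" and "particle 2"
    # The probability densities are identical, so the labels carry no physics.
    assert np.allclose(np.abs(v12) ** 2, np.abs(v21) ** 2)
print("probability density is invariant under relabelling the quanta")
```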
Quasi-set theory is a way to operationalize Heinz Post's (1963) claim that quanta should be deemed indistinguishable "right from the start."
See also
Multisets
Quantum physics
Quantum logic
References
Newton da Costa (1980) Ensaio sobre os Fundamentos da Lógica. São Paulo: Hucitec.
Manin, Yuri (1976) "Problems in Present Day Mathematics: Foundations," in Felix Browder, ed., Proceedings of Symposia in Pure Mathematics, Vol. XXVIII. Providence RI: American Mathematical Society.
Reprinted in
Set theory
Quantum mechanics | Quasi-set theory | [
"Physics",
"Mathematics"
] | 1,605 | [
"Mathematical logic",
"Theoretical physics",
"Quantum mechanics",
"Set theory"
] |
8,124,077 | https://en.wikipedia.org/wiki/Transition%20state%20analog | Transition state analogs (transition state analogues), are chemical compounds with a chemical structure that resembles the transition state of a substrate molecule in an enzyme-catalyzed chemical reaction. Enzymes interact with a substrate by means of strain or distortions, moving the substrate towards the transition state. Transition state analogs can be used as inhibitors in enzyme-catalyzed reactions by blocking the active site of the enzyme. Theory suggests that enzyme inhibitors which resembled the transition state structure would bind more tightly to the enzyme than the actual substrate. Examples of drugs that are transition state analog inhibitors include flu medications such as the neuraminidase inhibitor oseltamivir and the HIV protease inhibitors saquinavir in the treatment of AIDS.
Transition state analogue
The transition state of a reaction can best be described in terms of statistical mechanics: at the transition state, where bond energies are being broken and formed, the system has an equal probability of moving backwards to the reactants or forward to the products. In enzyme-catalyzed reactions, the overall activation energy of the reaction is lowered when an enzyme stabilizes a high-energy transition state intermediate. Transition state analogs mimic this high-energy intermediate but do not undergo the catalyzed chemical reaction, and can therefore bind much more strongly to an enzyme than simple substrate or product analogs.
Designing transition state analogue
To design a transition state analogue, the pivotal step is the determination of the transition state structure of the substrate on the specific enzyme of interest by experimental methods, for example the kinetic isotope effect. In addition, the transition state structure can be predicted with computational approaches as a complement to KIE. These two methods are explained briefly below.
Kinetic isotope effect
Kinetic isotope effect (KIE) is a measurement of the reaction rate of isotope-labeled reactants against that of the more common natural substrate. Kinetic isotope effect values are a ratio of the turnover numbers and include all steps of the reaction. Intrinsic kinetic isotope values stem from the difference between the bond vibrational environment of an atom in the reactants at the ground state and its environment at the transition state. Through the kinetic isotope effect, much insight can be gained into what the transition state of an enzyme-catalyzed reaction looks like, guiding the development of transition state analogs.
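As a purely numerical illustration of the ratio described above (the rate constants below are made-up values, not measurements for any real enzyme):

```python
def kinetic_isotope_effect(k_light, k_heavy):
    """KIE is the ratio of the rate (or turnover) constants measured with the
    light (natural) and heavy (isotope-labelled) substrates."""
    return k_light / k_heavy

# Hypothetical turnover numbers for a natural and a deuterated substrate.
k_H = 1.00   # s^-1, natural substrate
k_D = 0.15   # s^-1, isotope-labelled substrate
print(f"KIE = {kinetic_isotope_effect(k_H, k_D):.1f}")   # KIE = 6.7
```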
Computational simulation
Computational approaches have been regarded as a useful tool to elucidate the mechanism of action of enzymes. Molecular mechanics by itself cannot predict the electron transfer that is fundamental to organic reactions, but molecular dynamics simulations provide sufficient information about the flexibility of the protein during the catalytic reaction. A complementary method is the combined quantum mechanics/molecular mechanics (QM/MM) approach. With this approach, only the atoms responsible for the enzymatic reaction in the catalytic region are treated with quantum mechanics, and the rest of the atoms are treated with molecular mechanics.
Examples of transition state analogue design
After determining the transition state structures using either KIE or computational simulations, the inhibitor can be designed according to the determined transition state structures or intermediates. The following three examples illustrate how inhibitors mimic the transition state structure by incorporating functional groups that correspond to the geometry and electrostatic distribution of the transition state structures.
Methylthioadenosine nucleosidase inhibitor
Methylthioadenosine nucleosidase is an enzyme that catalyses the hydrolytic deadenylation of 5'-methylthioadenosine and S-adenosylhomocysteine. It is regarded as an important target for antibacterial drug discovery because it is important in the metabolic system of bacteria and is produced only by bacteria. Depending on the distance between the nitrogen atom of adenine and the ribose anomeric carbon, the transition state structure can be characterized as an early or late dissociation stage. Based on the finding of different transition state structures, Schramm and coworkers designed two transition state analogues mimicking the early and late dissociative transition states. The early and late transition state analogues showed binding affinities (Kd) of 360 and 140 pM, respectively.
Thermolysin inhibitor
Thermolysin is an enzyme produced by Bacillus thermoproteolyticus that catalyses the hydrolysis of peptides containing hydrophobic amino acids. It is therefore also a target for antibacterial agents. The enzymatic reaction mechanism begins when the small peptide molecule displaces the zinc-bound water molecule towards Glu143 of thermolysin. The water molecule is then activated by both the zinc ion and the Glu143 residue and attacks the carbonyl carbon to form a tetrahedral transition state. Holden and coworkers mimicked that tetrahedral transition state to design a series of phosphonamidate peptide analogues. Among the synthesized analogues, the one with R = L-Leu possesses the most potent inhibitory activity (Ki = 9.1 nM).
Arginase inhibitor
Arginase is a binuclear manganese metalloprotein that catalyses the hydrolysis of L-arginine to L-ornithine and urea. It is also regarded as a drug target for the treatment of asthma. The hydrolysis of L-arginine proceeds via nucleophilic attack on the guanidino group by water, forming a tetrahedral intermediate. Studies have shown that a boronic acid moiety adopts a tetrahedral configuration and serves as an inhibitor. In addition, the sulfonamide functional group can also mimic the transition state structure. Evidence for boronic acid mimics as transition state analogue inhibitors of human arginase I was provided by X-ray crystal structures.
See also
Enzyme
Structural analog, compounds with similar chemical structure
Enzyme inhibitor
Substrate analog
Suicide inhibitor
Substrate
References
Enzyme kinetics
Chemical nomenclature | Transition state analog | [
"Chemistry"
] | 1,210 | [
"Chemical kinetics",
"nan",
"Enzyme kinetics"
] |
3,500,268 | https://en.wikipedia.org/wiki/Metal%20roof | A metal roof is a roofing system featuring metal pieces or tiles exhibiting corrosion resistance, impermeability to water, and long life. It is a component of the building envelope. The metal pieces may be a covering on a structural, non-waterproof roof, or they could be self-supporting sheets.
History
Lead and copper have played a significant role in architecture for thousands of years (see: copper in architecture).
Lead was one of the first and easiest metals to smelt and with a low melting point, it could be easily formed to be watertight. As a by-product of silver smelting, in Roman times it was readily available and relatively cheap.
In the 3rd century BCE, copper roof shingles were installed atop the Lovamahapaya Temple in Sri Lanka. The Romans used copper as roof covering for the Pantheon in 27 BCE. Centuries later, copper and its alloys were integral in European medieval architecture. The copper roof of St. Mary's Cathedral, Hildesheim, installed in 1280 CE, survived until its destruction during bombings in World War II. The roof at Kronborg, one of Northern Europe's most important Renaissance castles (immortalized as Elsinore Castle in Shakespeare's Hamlet), was installed in 1585 CE. The copper on the tower was renovated in 2009.
When iron smelting became widespread in the early 19th century, although the smelting process was complicated, ore was so plentiful that iron became cheaper than lead, and much cheaper than copper. It was later determined that iron corrosion (rust) could be stopped or at least slowed by dipping the hot iron sheets into molten tin or zinc, forming a metallurgically bonded coating which protected it. Terne, an iron plate dipped into a solder of 80–90% lead with only the remainder tin, was cheaper than tinplate made in the same way, and the lead was more resistant in long-term outside use than tin or zinc alone. Terne became popular for roofs and weather-resistant farm items.
In 1829, Henry Palmer, engineer of the London Dock Company, patented "indented or corrugated metallic sheets" which added additional stiffness to bending in one direction in the manner of a beam. This allowed the sheet iron to be self-supporting when used as a roof; a contemporary account praised the material as "the lightest and strongest roof (for its weight) since the days of Adam".
After Palmer's patent expired in 1843, corrugated galvanized iron (CGI) became a world-wide favorite roofing material. In the later 19th century, steel mills replaced iron works, and the product using steel could be made thinner for the same span and stiffness performance, but the term CGI remains in the UK and Australia. In the early 20th century, after being used for military purposes in the trenches and for Nissen or Quonset huts, CGI roofs were widely used but had low status. After architects such as Walter Gropius and Buckminster Fuller used the material, and with shiny and streamlined "desert modernist" designs such as Pierre Koenig's Stahl House (with an interior exposed metal roof, but not actual CGI), it recovered status. Albert Frey's 1964 Palm Springs house used actual corrugated steel as roofing, as well as corrugated aluminum exterior siding.
Environmentally friendly
Metal roofs are 100% recyclable and can be made from other recycled products. Asphalt shingles are petroleum based and contain other chemicals that make their recycling more toxic; most shingles are not recycled but are sent to landfills every year, where they take hundreds of years to decompose. Metal roofs are also better at reflecting solar radiation, reflecting 10%–75% depending on the color choice, compared to asphalt roofs, which reflect 5%–25% depending on their color. Over its lifetime a metal roof retains 95% of its reflective capacity, while other roof types lose 20%–40% of theirs. The highly reflective coatings in the paint of metal roofing can lower utility bills by 40%.
Advantages
Metal roofs can last up to 100 years, with installers providing 50-year warranties. Because of their longevity, most metal roofs are less expensive than asphalt shingles in the long term.
Metal roofing can consist of a high percentage of recycled material and is 100% recyclable. It does not get as hot as asphalt, a common roofing material, and it reflects heat away from the building underneath in summertime. On a larger scale, its use reduces the heat island effect of cities when compared to asphalt. Coupled with its better insulating abilities, metal roofing can offer not only a 40% reduction in energy costs in the summer, but also up to a 15% reduction in energy costs in the winter, according to a 2008 study by Oak Ridge National Laboratory. This finding is based on the use of a strapping system between the plywood and "cool-color" metal on top, which provides an air gap between the plywood roof sheathing and the metal. Cool-color metals are light, reflective colors, like white. The study went on to say that resealing and insulating air ducts in the attic will save even more money.
Metal roofing is also lightweight, creates little stress on the load-bearing roof support structures and can be installed on top of an existing roof. A lightweight roof is very useful for large or old structures, as it helps to maintain the overall structural integrity of the building. Despite its light weight, metal roofing provides increased wind resistance when compared to other roofing materials. This is because metal roofing systems use interlocking panels. Metal roofing sheets are also resistant to any kind of attack by pests and insects.
Material types
Metal roofs are sometimes made of corrugated galvanized steel: a wrought iron–steel sheet was coated with zinc and then roll-formed into corrugated sheets. Another approach is to blend zinc, aluminum, and silicon-coated steel. These products are sold under various trade names like "Zincalume" or "Galvalume". The surface may display the raw zinc finish, or it may be used as a base metal under factory-coated colors.
Standing seam metal roof
Standing seam metal roofs come in sheets up to or sometimes more than long and widths of . The standing seam is typically . They are more expensive upfront in installation and material costs, but they last longer than asphalt shingles; over a lifespan of at least 50 years, they are less expensive than asphalt shingle roofs. They also require less maintenance than corrugated metal roofs, which have exposed fasteners.
Mechanically seamed
Mechanically seamed roofs are seamed together using a roof seamer and can be either single lock or double lock seamed, meaning they can be folded under once to be seamed together or folded under twice for extra weather protection. This is the most expensive of the three types but is the most weather resistant.
Snap locked with fastener strip
One side of the standing seam sheet is snap locked into the adjacent sheet that is already fastened to the roof, concealing those fasteners; the other side of the sheet is fastened to the roof with screws, which the next sheet will cover in turn. The fastener screws should not be screwed in too tightly, so that the sheet can expand and contract with changing temperatures; each fastener slot has some room to move past the screw to accommodate thermal expansion. The potential downside to this method is fastener heads breaking off, either from improper installation or from wear and tear in a fluctuating climate.
Snap locked with clips
Snap locked with metal clips fastened to the roof allows for more thermal expansion than fastener strip standing seam metal roofs. The fasteners and clips are both hidden under the metal roof sheets, and this option is marginally more expensive than the fastener strip snap locked standing seam roof.
Thin-film solar on metal roofs
With the increasing efficiencies of thin-film solar cells, installing them on metal roofs has become cost competitive with traditional monocrystalline and polycrystalline solar cells. The thin-film panels are flexible and run down the standing-seam metal roofs and stick to the metal roof with adhesive, so no holes are needed to install. The connection wires run under the ridge cap at the top of the roof. Efficiency ranges from 10–18% but costs only about $2.00–$3.00 per watt of installed capacity, compared to monocrystalline which is 17–22% efficient and costs $3.00–$3.50 per watt of installed capacity. Thin-film solar is light weight at . Thin-film solar panels last 10–20 years but have a quicker ROI than traditional solar panels. The metal roofs last 40–70 years before replacement compared to 12–20 years for an asphalt shingle roof.
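A back-of-the-envelope comparison of the figures quoted above (cost per watt and efficiency) can be made as follows; the 6 kW system size and the midpoint values are illustrative assumptions, not data from any particular product.

```python
# Midpoints of the cost and efficiency ranges quoted above.
thin_film  = {"usd_per_watt": 2.50, "efficiency": 0.14}
mono_cryst = {"usd_per_watt": 3.25, "efficiency": 0.195}

system_watts = 6_000          # hypothetical 6 kW rooftop system
irradiance   = 1_000          # W/m^2, standard test-condition sunlight

for name, tech in [("thin-film", thin_film), ("monocrystalline", mono_cryst)]:
    cost = system_watts * tech["usd_per_watt"]
    area = system_watts / (irradiance * tech["efficiency"])   # m^2 of panel needed
    print(f"{name}: ~${cost:,.0f}, ~{area:.0f} m^2 of panels")
# thin-film: ~$15,000 and ~43 m^2; monocrystalline: ~$19,500 and ~31 m^2
```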
Corrugated metal roof
Corrugated metal roofs are prefabricated sheets that are bent and wavy to make them more rigid.
Corrugated metal roofs are similar in price to asphalt shingle roof installation. The fasteners are screwed through the metal into the roof requiring more maintenance to make sure the screws stay secured. Corrugated metal roofs can last 30–45 years with proper maintenance.
Stone-coated metal roofing
Metal tile sheets can also be employed. These are usually painted or stone-coated steel. Stone coated steel roofing panels are made from zinc/aluminium-coated steel with an acrylic gel coating. The stones are usually a natural product with a colored ceramic coating. Stainless steel is another option. It is usually roll-formed into standing seam profiles for roofing; however, individual shingles are also available. Other metals used for roofing are lead, tin and aluminium and copper.
Copper roofs
Copper is used for roofing because it offers corrosion resistance, durability, long life, low maintenance, radio frequency shielding, lightning protection, and sustainability benefits. Copper roofs are often one of the most architecturally distinguishable features of prominent buildings, including churches, government buildings, and universities. Today, copper is used in not only in roofing systems, but also for flashings and copings, rain gutters and downspouts, domes, spires, vaults, and various other architectural design elements. At the Lyle Center for Regenerative Studies in Pomona, California, copper was chosen for the roofing on regenerative principles: if the building were to be dismantled the copper could be reused because of its high value in recycling and its variety of potential uses. A vented copper roof assembly at Oak Ridge National Laboratories (U.S.) substantially reduced heat gain compared with stone-coated steel shingle (SR246E90) or asphalt shingle (SR093E89), resulting in lower energy costs.
Coating
Several types of coatings are used on metal panels to prevent rust, provide waterproofing, or reflect heat. They are made of various materials such as epoxy and ceramic.
Ceramic coatings can be applied on metal roof materials to add heat-reflective properties. Most ceramic coatings are made from regular paint with ceramic beads mixed in as an additive.
Coatings are sometimes applied to copper. Clear coatings preserve the natural color, warmth, and metallic tone of copper alloys. Oils exclude moisture from copper roofs and flashings and simultaneously enhance their appearance by bringing out a rich luster and depth of color. The most popular oils are lemon oil (like USP), lemongrass oil (such as East Indian), paraffin oils, linseed oil, and castor oil. On copper roofing or flashing, reapplication once every three years can effectively retard patina formation. In arid climates, the maximum span between oilings may be extended up to five years. Opaque paint coatings are primarily applied over copper when substrate integrity and longevity are desired but a specific color other than the naturally occurring copper hues is required. Lead-sheet covered roofs are not considered metal roofs today, but since lead bonds metallurgically (see solder) thin lead coatings on copper are very long-lasting. Lead-coated copper can be used when the appearance of exposed lead is desired or where copper-contaminated water runoff from bare copper alloys would ordinarily stain lighter-colored building materials, such as marble, limestone, stucco, mortar, or concrete. Zinc-tin coatings are an alternative to lead coatings since they have approximately the same appearance and workability.
See also
References
Roofs
Building materials | Metal roof | [
"Physics",
"Technology",
"Engineering"
] | 2,573 | [
"Structural engineering",
"Building engineering",
"Architecture",
"Structural system",
"Construction",
"Materials",
"Roofs",
"Matter",
"Building materials"
] |
3,502,601 | https://en.wikipedia.org/wiki/EDA%20database | An EDA database is a database specialized for the purpose of electronic design automation. These application specific databases are required because general purpose databases have historically not provided enough performance for EDA applications.
In examining EDA design databases, it is useful to look at EDA tool architecture, to determine which parts are to be considered part of the design database, and which parts are the application levels. In addition to the database itself, many other components are needed for a useful EDA application. Associated with a database are one or more language systems (which, although not directly part of the database, are used by EDA applications such as parameterized cells and user scripts). On top of the database are built the algorithmic engines within the tool (such as timing, placement, routing, or simulation engines ), and the highest level represents the applications built from these component blocks, such as floorplanning. The scope of the design database includes the actual design, library information, technology information, and the set of translators to and from external formats such as Verilog and GDSII.
Mature design databases
Many instances of mature design databases exist in the EDA industry, both as a basis for commercial EDA tools as well as proprietary EDA tools developed by the CAD groups of major electronics companies.
IBM, Hewlett-Packard, SDA Systems and ECAD (now Cadence Design Systems), High Level Design Systems, and many other companies developed EDA specific databases over the last 20 years, and these continue to be the basis of IC-design systems today. Many of these systems took ideas from university research and successfully productized them. Most of the mature design databases have evolved to the point where they can represent netlist data, layout data, and the ties between the two. They are hierarchical to allow for reuse and smaller designs. They can support styles of layout from digital through pure analog and many styles of mixed-signal design.
Current design databases
The OpenAccess design database
Given the importance of a common design database in the EDA industry, the OpenAccess Coalition has been formed to develop, deploy, and support an open-sourced EDA design database with shared control. The data model presented in the OA DB provides a unified model that currently extends from structural RTL through GDSII-level mask data, and now into the reticle and wafer space. It provides a
rich enough capability to support digital, analog, and mixed-signal design data. It provides technology data that can express foundry process design rules through at least 20 nm, contains the definitions of the layers and purposes used in the design, definitions of VIAs and routing rules, definitions of operating points used for analysis, and so on. OA makes extensive use of IC-specific data compression techniques to reduce the memory footprint, to address the size, capacity, and performance problems of previous DBs. Despite what its name could imply, this file format has no publicly accessible implementation or specification. Those are exclusive to the members of the OpenAccess Coalition.
Synopsys Milkyway
The Milkyway database was originally developed by Avanti Corporation, which has since been acquired by Synopsys. It was first released in 1997. Milkyway is the database underlying most of Synopsys' physical design tools:
IC Compiler and Astro physical synthesis
Star-RCXT RC parasitic extractor
Hercules LVS/DRC physical verification
Milkyway stores topological, parasitic and timing data. Having been used to design thousands of chips, Milkyway is very stable and production worthy. Milkyway is known to be written in C. Its internal implementation is not available outside Synopsys, so no comments may be made about the implementation.
MDX C-API
At the request of large customers such as Texas Instruments, Avanti released the MDX C-API in 1998. This enables the customers' CAD developers to create plugins that add custom functionality to Milkyway tools (chiefly Astro).
MDX allows fairly complete access to topological data in Milkyway, but does not support timing or RC parasitic data.
MAP-in Program
In early 2003, Synopsys (which acquired Avanti) opened Milkyway through the Milkyway Access Program (MAP-In). Any EDA company may become a MAP-in member for free (Synopsys customers must use MDX). Members are provided the means to interface their software to Milkyway using C, Tcl, or Scheme. The Scheme interface is deprecated in favor of TCL. IC Compiler supports only TCL.
The MAP-in C-API enables a non-Synopsys application to read and write Milkyway databases. Unlike MDX, MAP-in does not permit the creation of a plugin that can be used from within Synopsys Milkyway tools. MAP-in does not support access to timing or RC parasitic data. MAP-in also lacks direct support of certain geometric objects.
MAP-in includes Milkyway Development Environment (MDE). MDE is a GUI application used to develop TCL and Scheme interfaces and diagnose problems. Its major features include:
Graphical editor for viewing and editing Milkyway databases
TCL command interpreter
Scheme command interpreter
Translators to read and write popular formats like Verilog, LEF, DEF and GDSII
Falcon from Mentor
Another significant design database is Falcon, from Mentor Graphics. This database was one of the first in the industry written in C++. Like Milkyway is for Synopsys, Falcon seems to be a stable and mature platform for Mentor’s IC products. Again, the implementation is not publicly available, so little can be said about its features or performance relative to other industry standards.
Magma's database
Magma Design Automation’s database is not just a disk format with an API, but is an entire system built around their DB as a central data structure. Again, since the details of the system are not publicly available, a direct comparison of features or performance is not possible. Looking at the capabilities of the Magma tools would indicate that this DB has a similar functionality to OpenAccess, and may be capable of representing behavioral (synthesis input) information.
Major features of an EDA specific database
An EDA-specific database is expected to provide many basic constructs and services. Here is a brief and incomplete list of what is needed; a toy sketch illustrating a few of these constructs follows the list:
Fundamental Features
The Design (or Cell) as the Basic Unit
Shapes and Physical Geometry
Hierarchy
Connectivity and Hierarchical Connectivity
General Constructs
API Forms
Utility Layer
Advanced Features
Parameterized Designs
Namespaces and Name Mapping
Place-and-Route Constructs
Timing and Parasitic Constructs
Occurrence Models and Logical/Physical Mapping
Interface to Configuration management
Extensibility
Technology Data
Layer Definitions
Design Rules
Generation and extraction rules for simple devices
Library Data and Structures: Design-Data Management
Library Organization: From Designs to Disk Files
Design-Data Management
Interoperability Models
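To make the first few items in the list above concrete, here is a toy sketch, purely illustrative and not the API of OpenAccess, Milkyway, Falcon, or any other real design database, of a cell-based hierarchy with instances, shapes, and connectivity.

```python
class Net:
    """A named electrical connection inside one cell."""
    def __init__(self, name):
        self.name = name
        self.pins = []            # (instance, port) pairs attached to this net

class Instance:
    """A placed occurrence of a master cell inside a parent cell (hierarchy)."""
    def __init__(self, name, master):
        self.name, self.master = name, master

class Cell:
    """The basic unit of design data: geometry, child instances, and nets."""
    def __init__(self, name):
        self.name = name
        self.shapes = []          # physical geometry: (layer, rectangle) tuples
        self.instances = {}       # child instances, keyed by instance name
        self.nets = {}            # connectivity, keyed by net name

    def add_instance(self, inst_name, master):
        self.instances[inst_name] = Instance(inst_name, master)

    def connect(self, net_name, inst_name, port):
        net = self.nets.setdefault(net_name, Net(net_name))
        net.pins.append((self.instances[inst_name], port))

# Build a two-level hierarchy: an inverter cell used twice inside a buffer cell.
inv = Cell("INV")
inv.shapes.append(("poly", (0, 0, 1, 4)))      # toy layer/rectangle geometry
buf = Cell("BUF")
buf.add_instance("u1", inv)
buf.add_instance("u2", inv)
buf.connect("mid", "u1", "out")                # u1.out -> u2.in via net "mid"
buf.connect("mid", "u2", "in")
print([(i.name, p) for i, p in buf.nets["mid"].pins])   # [('u1', 'out'), ('u2', 'in')]
```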
References
Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, A survey of the field. This article was derived (with permission) from Volume 2, Chapter 12, Design Databases, author Mark Bales.
Electronic design automation
Integrated circuits
Types of databases | EDA database | [
"Technology",
"Engineering"
] | 1,446 | [
"Computer engineering",
"Integrated circuits"
] |
3,502,906 | https://en.wikipedia.org/wiki/Mannheim%20process | The Mannheim process is an industrial process for the production of hydrogen chloride and sodium sulfate from sulfuric acid and sodium chloride. The Mannheim furnace is also used to produce potassium sulfate from potassium chloride. The Mannheim process is a stage in the Leblanc process for the production of sodium carbonate.
Process
The process is named after the Mannheim furnace, a large cast iron kiln in which it is conducted. The furnace was developed at the turn of the 20th century and superseded earlier furnace designs formerly used for the same purpose.
Sodium chloride and sulfuric acid are first fed onto a stationary reaction plate where an initial reaction takes place. The stationary plate is up to in diameter. Rotating rabble arms constantly turn over the mixture and move the intermediate product to a lower plate. The kiln portion of the furnace is constructed with bricks that have high resistance to direct flame, temperature, and acid. The other parts of the furnace are heat and acid resistant. Hot flue gas passes up over the plates carrying out liberated hydrogen chloride gas. The intermediate product reacts with more sodium chloride in the lower, hotter section of the kiln producing sodium sulfate. This exits the furnace and passes through cooling drums before being milled, screened and sent to product storage facilities.
The process involves intermediate formation of sodium bisulfate, an exothermic reaction that occurs at room temperature:
NaCl + H2SO4 → HCl + NaHSO4
The second step of the process is endothermic, requiring energy input:
NaCl + NaHSO4 → HCl + Na2SO4
Temperatures in the range 600-700 °C are required.
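The overall stoichiometry of the two steps (2 NaCl + H2SO4 → 2 HCl + Na2SO4) allows a quick yield estimate; the sketch below assumes an ideal 100% conversion and standard molar masses, so it is an illustration rather than a description of real furnace performance.

```python
# Approximate molar masses in g/mol.
M = {"NaCl": 58.44, "H2SO4": 98.08, "HCl": 36.46, "Na2SO4": 142.04}

def mannheim_yield(mass_nacl_kg):
    """Ideal product masses from a given NaCl feed, overall reaction
    2 NaCl + H2SO4 -> 2 HCl + Na2SO4 (100% conversion assumed)."""
    mol_nacl = mass_nacl_kg * 1000 / M["NaCl"]
    return {
        "H2SO4_required_kg": mol_nacl / 2 * M["H2SO4"] / 1000,
        "HCl_kg":            mol_nacl * M["HCl"] / 1000,
        "Na2SO4_kg":         mol_nacl / 2 * M["Na2SO4"] / 1000,
    }

# Per 100 kg NaCl: ~83.9 kg H2SO4 consumed, ~62.4 kg HCl and ~121.5 kg Na2SO4 produced.
print(mannheim_yield(100))
```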
References
Chemical processes | Mannheim process | [
"Chemistry"
] | 330 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
3,503,207 | https://en.wikipedia.org/wiki/Multi-core%20processor | A multi-core processor (MCP) is a microprocessor on a single integrated circuit (IC) with two or more separate central processing units (CPUs), called cores to emphasize their multiplicity (for example, dual-core or quad-core). Each core reads and executes program instructions, specifically ordinary CPU instructions (such as add, move data, and branch). However, the MCP can run instructions on separate cores at the same time, increasing overall speed for programs that support multithreading or other parallel computing techniques. Manufacturers typically integrate the cores onto a single IC die, known as a chip multiprocessor (CMP), or onto multiple dies in a single chip package. As of 2024, the microprocessors used in almost all new personal computers are multi-core.
A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies used to interconnect cores include bus, ring, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores; heterogeneous multi-core systems have cores that are not identical (e.g. big.LITTLE have heterogeneous cores that share the same instruction set, while AMD Accelerated Processing Units have cores that do not share the same instruction set). Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, superscalar, vector, or multithreading.
Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics (GPU) processing. Core counts reach dozens for general-purpose chips and over 10,000 for specialized chips, and in supercomputers (i.e. clusters of chips) the count can exceed 10 million (in one case reaching 20 million processing elements in total, in addition to host processors).
The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that can run in parallel simultaneously on multiple cores; this effect is described by Amdahl's law. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core's cache(s), avoiding use of much slower main-system memory. Most applications, however, are not accelerated as much unless programmers invest effort in refactoring.
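Amdahl's law mentioned above can be made concrete with a short calculation; the parallel fractions chosen below are arbitrary illustration values rather than measurements of any real program.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when a fraction p of the work can run in
    parallel on n cores and the rest stays serial: 1 / ((1 - p) + p / n)."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)

for p in (0.50, 0.90, 0.99):                 # illustrative parallel fractions
    print(p, [round(amdahl_speedup(p, n), 2) for n in (2, 4, 8, 64)])
# With p = 0.50 the speedup tops out near 2x no matter how many cores are added,
# while p = 0.99 reaches roughly 39x on 64 cores.
```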
The parallelization of software is a significant ongoing topic of research. Cointegration of multiprocessor applications provides flexibility in network architecture design. Adaptability within parallel models is an additional feature of systems utilizing these protocols.
In the consumer market, dual-core processors (that is, microprocessors with two units) started becoming commonplace on personal computers in the late 2000s. Quad-core processors were also being adopted in that era for higher-end systems before becoming standard. In the late 2010s, hexa-core (six cores) started entering the mainstream and since the early 2020s has overtaken quad-core in many spaces.
Terminology
The terms multi-core and dual-core most commonly refer to some sort of central processing unit (CPU), but are sometimes also applied to digital signal processors (DSP) and system on a chip (SoC). The terms are generally used only to refer to multi-core microprocessors that are manufactured on the same integrated circuit die; separate microprocessor dies in the same package are generally referred to by another name, such as multi-chip module. This article uses the terms "multi-core" and "dual-core" for CPUs manufactured on the same integrated circuit, unless otherwise noted.
In contrast to multi-core systems, the term multi-CPU refers to multiple physically separate processing-units (which often contain special circuitry to facilitate communication between each other).
The terms many-core and massively multi-core are sometimes used to describe multi-core architectures with an especially high number of cores (tens to thousands).
Some systems use many soft microprocessor cores placed on a single FPGA. Each "core" can be considered a "semiconductor intellectual property core" as well as a CPU core.
Development
While manufacturing technology improves, reducing the size of individual gates, physical limits of semiconductor-based microelectronics have become a major design concern. These physical limitations can cause significant heat dissipation and data synchronization problems. Various other methods are used to improve CPU performance. Some instruction-level parallelism (ILP) methods such as superscalar pipelining are suitable for many applications, but are inefficient for others that contain difficult-to-predict code. Many applications are better suited to thread-level parallelism (TLP) methods, and multiple independent CPUs are commonly used to increase a system's overall TLP. A combination of increased available space (due to refined manufacturing processes) and the demand for increased TLP led to the development of multi-core CPUs.
Commercial incentives
Several business motives drive the development of multi-core architectures. For decades, it was possible to improve performance of a CPU by shrinking the area of the integrated circuit (IC), which reduced the cost per device on the IC. Alternatively, for the same circuit area, more transistors could be used in the design, which increased functionality, especially for complex instruction set computing (CISC) architectures. Clock rates also increased by orders of magnitude in the decades of the late 20th century, from several megahertz in the 1980s to several gigahertz in the early 2000s.
As the rate of clock speed improvements slowed, increased use of parallel computing in the form of multi-core processors has been pursued to improve overall processing performance. Multiple cores were used on the same CPU chip, which could then lead to better sales of CPU chips with two or more cores. For example, Intel has produced a 48-core processor for research in cloud computing; each core has an x86 architecture.
Technical factors
Since computer manufacturers have long implemented symmetric multiprocessing (SMP) designs using discrete CPUs, the issues regarding implementing multi-core processor architecture and supporting it with software are well known.
Additionally:
Using a proven processing-core design without architectural changes reduces design risk significantly.
For general-purpose processors, much of the motivation for multi-core processors comes from greatly diminished gains in processor performance from increasing the operating frequency. This is due to three primary factors:
The memory wall; the increasing gap between processor and memory speeds. This, in effect, pushes for cache sizes to be larger in order to mask the latency of memory. This helps only to the extent that memory bandwidth is not the bottleneck in performance.
The ILP wall; the increasing difficulty of finding enough parallelism in a single instruction stream to keep a high-performance single-core processor busy.
The power wall; the trend of consuming exponentially increasing power (and thus also generating exponentially increasing heat) with each further increase of operating frequency (a rough sketch of the underlying dynamic-power relationship follows this list). This increase can be mitigated by "shrinking" the processor by using smaller traces for the same logic. The power wall poses manufacturing, system design and deployment problems that are hard to justify against the diminished gains in performance due to the memory wall and ILP wall.
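As a rough illustration of the power wall, CMOS dynamic power is commonly approximated as P ≈ a·C·V²·f. The sketch below uses purely illustrative (assumed) capacitance, voltage and frequency values to show why raising the clock, which usually also requires raising the supply voltage, drives power up faster than performance:

```python
def dynamic_power(capacitance_f: float, voltage_v: float,
                  frequency_hz: float, activity: float = 1.0) -> float:
    """Approximate CMOS dynamic power: P ≈ activity * C * V^2 * f."""
    return activity * capacitance_f * voltage_v ** 2 * frequency_hz

# Illustrative values only: doubling the clock while raising the supply
# voltage from 1.0 V to 1.3 V (often needed to meet timing) roughly
# triples the dynamic power for only 2x the raw frequency.
base = dynamic_power(1e-9, 1.0, 2e9)
fast = dynamic_power(1e-9, 1.3, 4e9)
print(round(fast / base, 2))  # ~3.38
```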
In order to continue delivering regular performance improvements for general-purpose processors, manufacturers such as Intel and AMD have turned to multi-core designs, trading lower manufacturing costs for higher performance in some applications and systems. Multi-core architectures are being developed, but so are the alternatives. An especially strong contender for established markets is the further integration of peripheral functions into the chip.
Advantages
The proximity of multiple CPU cores on the same die allows the cache coherency circuitry to operate at a much higher clock rate than is possible if the signals have to travel off-chip. Combining equivalent CPUs on a single die significantly improves the performance of cache snoop (bus snooping) operations. Put simply, signals between different CPUs travel shorter distances and therefore degrade less. These higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often.
Assuming that the die can physically fit into the package, multi-core CPU designs require much less printed circuit board (PCB) space than do multi-chip SMP designs. Also, a dual-core processor uses slightly less power than two coupled single-core processors, principally because of the decreased power required to drive signals external to the chip. Furthermore, the cores share some circuitry, like the L2 cache and the interface to the front-side bus (FSB). In terms of competing technologies for the available silicon die area, multi-core design can make use of proven CPU core library designs and produce a product with lower risk of design error than devising a new wider-core design. Also, adding more cache suffers from diminishing returns.
Multi-core chips also allow higher performance at lower energy. This can be a big factor in mobile devices that operate on batteries. Since each core in a multi-core CPU is generally more energy-efficient, the chip becomes more efficient than having a single large monolithic core. This allows higher performance with less energy. A challenge in this, however, is the additional overhead of writing parallel code.
Disadvantages
Maximizing the usage of the computing resources provided by multi-core processors requires adjustments both to the operating system (OS) support and to existing application software. Also, the ability of multi-core processors to increase application performance depends on the use of multiple threads within applications.
Integration of a multi-core chip can lower chip production yields, and multi-core chips are also more difficult to manage thermally than lower-density single-core designs. Intel partially countered the yield problem by building its first quad-core designs from two dual-core dies combined in a single package, so that any two working dual-core dies could be used, as opposed to producing four cores on a single die and requiring all four to work to produce a quad-core CPU. From an architectural point of view, ultimately, single CPU designs may make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture may carry the risk of obsolescence. Finally, raw processing power is not the only constraint on system performance: two processing cores sharing the same system bus and memory bandwidth limit the real-world performance advantage.
Hardware
Trends
The trend in processor development has been towards an ever-increasing number of cores, as processors with hundreds or even thousands of cores become theoretically possible. In addition, multi-core chips mixed with simultaneous multithreading, memory-on-chip, and special-purpose "heterogeneous" (or asymmetric) cores promise further performance and efficiency gains, especially in processing multimedia, recognition and networking applications. For example, a big.LITTLE configuration pairs a high-performance core (called 'big') with a low-power core (called 'LITTLE'). There is also a trend towards improving energy efficiency by focusing on performance-per-watt with advanced fine-grain or ultra-fine-grain power management and dynamic voltage and frequency scaling (e.g., in laptop computers and portable media players).
Chips designed from the outset for a large number of cores (rather than having evolved from single core designs) are sometimes referred to as manycore designs, emphasising qualitative differences.
Architecture
The composition and balance of the cores in multi-core architecture show great variety. Some architectures use one core design repeated consistently ("homogeneous"), while others use a mixture of different cores, each optimized for a different, "heterogeneous" role.
How multiple cores are implemented and integrated significantly affects both the developer's programming effort and the consumer's expectations of apps and interactivity on the device. A device advertised as octa-core will only have fully independent cores if advertised as True Octa-core, or similar styling, as opposed to being merely two sets of quad-cores, each set with fixed clock speeds.
The article "CPU designers debate multi-core future" by Rick Merritt, EE Times 2008, includes these comments:
Software effects
For example, an anti-virus application may create a new thread for the scan process while its GUI thread waits for commands from the user (e.g. to cancel the scan). In such cases, a multi-core architecture is of little benefit for the application itself, because the single worker thread does all the heavy lifting and the work cannot be balanced evenly across multiple cores. Programming truly multithreaded code often requires complex co-ordination of threads and can easily introduce subtle and difficult-to-find bugs due to the interweaving of processing on data shared between threads (see thread-safety). Consequently, such code is much more difficult to debug than single-threaded code when it breaks. There has been a perceived lack of motivation for writing consumer-level threaded applications because of the relative rarity of consumer-level demand for maximum use of computer hardware. Also, serial tasks like decoding the entropy encoding algorithms used in video codecs are impossible to parallelize, because each result is needed to help produce the next result of the entropy decoding algorithm.
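The pattern described above can be sketched in a few lines. This is a hypothetical illustration in Python (the function and file names are invented), showing why extra cores sit idle when one worker thread does all the heavy lifting:

```python
import threading
import time

def scan_files(paths):
    # All the heavy lifting happens in this single worker thread,
    # so additional cores remain idle regardless of how many exist.
    for path in paths:
        time.sleep(0.01)  # stand-in for scanning one file

paths = [f"file_{i}" for i in range(100)]
worker = threading.Thread(target=scan_files, args=(paths,))
worker.start()

# The "GUI" thread stays responsive, here simply polling for completion;
# a real GUI would process user events (e.g. a cancel button) instead.
while worker.is_alive():
    time.sleep(0.1)
print("scan finished")
```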
Given the increasing emphasis on multi-core chip design, stemming from the grave thermal and power consumption problems posed by any further significant increase in processor clock speeds, the extent to which software can be multithreaded to take advantage of these new chips is likely to be the single greatest constraint on computer performance in the future. If developers are unable to design software to fully exploit the resources provided by multiple cores, then they will ultimately reach an insurmountable performance ceiling.
The telecommunications market was one of the first to need a new design for parallel datapath packet processing, because these multi-core processors were adopted very quickly for both the datapath and the control plane. These MPUs are replacing the traditional network processors that were based on proprietary microcode or picocode.
Parallel programming techniques can benefit from multiple cores directly. Some existing parallel programming models such as Cilk Plus, OpenMP, OpenHMPP, FastFlow, Skandium, MPI, and Erlang can be used on multi-core platforms. Intel introduced a new abstraction for C++ parallelism called TBB. Other research efforts include the Codeplay Sieve System, Cray's Chapel, Sun's Fortress, and IBM's X10.
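The models listed above are language- and library-specific; as a rough analogue of the fork–join, data-parallel style most of them express (and not an example of any one of those libraries), the following hedged Python sketch splits a computation across worker processes so it can use several cores:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum_of_squares(chunk):
    # Each worker process handles one independent chunk of the data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    workers = 4
    chunks = [data[i::workers] for i in range(workers)]  # strided split
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(partial_sum_of_squares, chunks))
    print(total)
```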
Multi-core processing has also affected modern computational software development. Developers programming in newer languages might find that their languages do not support multi-core functionality directly. This then requires the use of numerical libraries to access code written in languages like C and Fortran, which perform math computations faster than newer languages like C#. Intel's MKL and AMD's ACML are written in these native languages and take advantage of multi-core processing. Balancing the application workload across processors can be problematic, especially if they have different performance characteristics. There are different conceptual models to deal with the problem, for example using a coordination language and program building blocks (programming libraries or higher-order functions). Each block can have a different native implementation for each processor type. Users simply program using these abstractions and an intelligent compiler chooses the best implementation based on the context.
Managing concurrency acquires a central role in developing parallel applications. The basic steps in designing parallel applications are listed below (a minimal sketch applying them to a simple reduction follows the list):
Partitioning The partitioning stage of a design is intended to expose opportunities for parallel execution. Hence, the focus is on defining a large number of small tasks in order to yield what is termed a fine-grained decomposition of a problem.
Communication The tasks generated by a partition are intended to execute concurrently but cannot, in general, execute independently. The computation to be performed in one task will typically require data associated with another task. Data must then be transferred between tasks so as to allow computation to proceed. This information flow is specified in the communication phase of a design.
Agglomeration In the third stage, development moves from the abstract toward the concrete. Developers revisit decisions made in the partitioning and communication phases with a view to obtaining an algorithm that will execute efficiently on some class of parallel computer. In particular, developers consider whether it is useful to combine, or agglomerate, tasks identified by the partitioning phase, so as to provide a smaller number of tasks, each of greater size. They also determine whether it is worthwhile to replicate data and computation.
Mapping In the fourth and final stage of the design of parallel algorithms, the developers specify where each task is to execute. This mapping problem does not arise on uniprocessors or on shared-memory computers that provide automatic task scheduling.
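As a minimal sketch (assumptions: a trivial array-sum problem, four worker processes, Python's multiprocessing as the runtime), the four stages above map onto a simple reduction as follows:

```python
from multiprocessing import Pool

def reduce_chunk(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(10_000))
    # Partitioning: conceptually, one fine-grained task per element.
    # Agglomeration: group elements into a few larger chunks, one per worker.
    workers = 4
    size = len(data) // workers
    chunks = [data[i * size:(i + 1) * size] for i in range(workers)]
    # Mapping: the pool assigns each chunk to a worker process.
    with Pool(workers) as pool:
        partials = pool.map(reduce_chunk, chunks)
    # Communication: partial results flow back and are combined.
    print(sum(partials))  # 49995000
```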
On the other hand, on the server side, multi-core processors are ideal because they allow many users to connect to a site simultaneously and have independent threads of execution. This allows for Web servers and application servers that have much better throughput.
Licensing
Vendors may license some software "per processor". This can give rise to ambiguity, because a "processor" may consist either of a single core or of a combination of cores.
For some of its enterprise software, Microsoft initially continued to use a per-socket licensing system. However, for some software such as BizTalk Server 2013, SQL Server 2014, and Windows Server 2016, Microsoft has shifted to per-core licensing.
Oracle Corporation counts an AMD X2 or an Intel dual-core CPU as a single processor but uses other metrics for other types, especially for processors with more than two cores.
Embedded applications
Embedded computing operates in an area of processor technology distinct from that of "mainstream" PCs. The same technological drives towards multi-core apply here too. Indeed, in many cases the application is a "natural" fit for multi-core technologies, if the task can easily be partitioned between the different processors.
In addition, embedded software is typically developed for a specific hardware release, making issues of software portability, legacy code or supporting independent developers less critical than is the case for PC or enterprise computing. As a result, it is easier for developers to adopt new technologies and as a result there is a greater variety of multi-core processing architectures and suppliers.
Network processors
Multi-core network processors have become mainstream, with companies such as Freescale Semiconductor, Cavium Networks, Wintegra and Broadcom all manufacturing products with eight processors. For the system developer, a key challenge is how to exploit all the cores in these devices to achieve maximum networking performance at the system level, despite the performance limitations inherent in a symmetric multiprocessing (SMP) operating system. Companies such as 6WIND provide portable packet processing software designed so that the networking data plane runs in a fast path environment outside the operating system of the network device.
Digital signal processing
In digital signal processing the same trend applies: Texas Instruments has the three-core TMS320C6488 and four-core TMS320C5441, Freescale the four-core MSC8144 and six-core MSC8156 (and both have stated they are working on eight-core successors). Newer entries include the Storm-1 family from Stream Processors, Inc with 40 and 80 general-purpose ALUs per chip, all programmable in C as a SIMD engine, and Picochip with 300 processors on a single die, focused on communication applications.
Heterogeneous systems
In heterogeneous computing, where a system uses more than one kind of processor or cores, multi-core solutions are becoming more common: Xilinx Zynq UltraScale+ MPSoC has a quad-core ARM Cortex-A53 and dual-core ARM Cortex-R5. Software solutions such as OpenAMP are being used to help with inter-processor communication.
Mobile devices may use the ARM big.LITTLE architecture.
Hardware examples
Commercial
Adapteva Epiphany, a many-core processor architecture which allows up to 4096 processors on-chip, although only a 16-core version has been commercially produced.
Aeroflex Gaisler LEON3, a multi-core SPARC that also exists in a fault-tolerant version.
Ageia PhysX, a multi-core physics processing unit.
Ambric Am2045, a 336-core massively parallel processor array (MPPA)
AMD
A-Series, dual-, triple-, and quad-core of Accelerated Processor Units (APU).
Athlon 64 FX and Athlon 64 X2 single- and dual-core desktop processors.
Athlon II, dual-, triple-, and quad-core desktop processors.
FX-Series, quad-, 6-, and 8-core desktop processors.
Opteron, single-, dual-, quad-, 6-, 8-, 12-, and 16-core server/workstation processors.
Phenom, dual-, triple-, and quad-core processors.
Phenom II, dual-, triple-, quad-, and 6-core desktop processors.
Sempron, single-, dual-, and quad-core entry level processors.
Turion, single- and dual-core laptop processors.
Ryzen, dual-, quad-, 6-, 8-, 12-, 16-, 24-, 32-, and 64-core desktop, mobile, and embedded platform processors.
Epyc, quad-, 8-, 12-, 16-, 24-, 32-, and 64-core server and embedded processors.
Radeon and FireStream GPU/GPGPU.
Analog Devices Blackfin BF561, a symmetrical dual-core processor
ARM MPCore is a fully synthesizable multi-core container for ARM11 MPCore and ARM Cortex-A9 MPCore processor cores, intended for high-performance embedded and entertainment applications.
ASOCS ModemX, up to 128 cores, wireless applications.
Azul Systems
Vega 1, a 24-core processor, released in 2005.
Vega 2, a 48-core processor, released in 2006.
Vega 3, a 54-core processor, released in 2008.
Broadcom
SiByte SB1250, SB1255, SB1455
BCM2836, BCM2837, BCM2710 and BCM2711 quad-core ARM SoC (designed for different Raspberry Pi models)
Cadence Design Systems Tensilica Xtensa LX6, available in a dual-core configuration in Espressif Systems's ESP32
ClearSpeed
CSX700, 192-core processor, released in 2008 (32/64-bit floating point; Integer ALU).
Cradle Technologies CT3400 and CT3600, both multi-core DSPs.
Cavium Networks Octeon, a 32-core MIPS MPU.
Coherent Logix hx3100 Processor, a 100-core DSP/GPP processor.
Freescale Semiconductor QorIQ series processors, up to 8 cores, Power ISA MPU.
Hewlett-Packard PA-8800 and PA-8900, dual core PA-RISC processors.
IBM
POWER4, a dual-core PowerPC processor, released in 2001.
POWER5, a dual-core PowerPC processor, released in 2004.
POWER6, a dual-core PowerPC processor, released in 2007.
POWER7, a 4, 6 and 8-core PowerPC processor, released in 2010.
POWER8, a 12-core PowerPC processor, released in 2013.
POWER9, a 12 or 24-core PowerPC processor, released in 2017.
Power10, a 15 or 30-core PowerPC processor, released in 2021.
PowerPC 970MP, a dual-core PowerPC processor, used in the Apple Power Mac G5.
Xenon, a triple-core, SMT-capable, PowerPC microprocessor used in the Microsoft Xbox 360 game console.
z10, a quad-core z/Architecture processor, released in 2008.
z196, a quad-core z/Architecture processor, released in 2010.
zEC12, a six-core z/Architecture processor, released in 2012.
z13, an eight-core z/Architecture processor, released in 2015.
z14, a ten-core z/Architecture processor, released in 2017.
z15, a twelve-core z/Architecture processor, released in 2019.
Telum, an eight-core z/Architecture processor, released in 2021.
Infineon
AURIX
Danube, a dual-core, MIPS-based, home gateway processor.
Intel
Atom, single, dual-core, quad-core, 8-, 12-, and 16-core processors for netbooks, nettops, embedded applications, and mobile internet devices (MIDs).
Atom SoC (system on a chip), single-core, dual-core, and quad-core processors for smartphones and tablets.
Celeron, the first dual-core (and, later, quad-core) processor for the budget/entry-level market.
Core Duo, a dual-core processor.
Core 2 Duo, a dual-core processor.
Core 2 Quad, 2 dual-core dies packaged in a multi-chip module.
Core i3, Core i5, Core i7 and Core i9, a family of dual-, quad-, 6-, 8-, 10-, 12-, 14-, 16-, and 18-core processors, and the successor of the Core 2 Duo and the Core 2 Quad.
Itanium, single, dual-core, quad-core, and 8-core processors.
Pentium, single, dual-core, and quad-core processors for the entry-level market.
Teraflops Research Chip (Polaris), a 3.16 GHz, 80-core processor prototype, which the company originally stated would be released by 2011.
Xeon dual-, quad-, 6-, 8-, 10-, 12-, 14-, 15-, 16-, 18-, 20-, 22-, 24-, 26-, 28-, 32-, 48-, and 56-core processors.
Xeon Phi 57-, 60-, 61-, 64-, 68-, and 72-core processors.
IntellaSys
SEAforth 40C18, a 40-core processor.
SEAforth24, a 24-core processor designed by Charles H. Moore.
Kalray
MPPA-256, 256-core processor, released 2012 (256 usable VLIW cores, Network-on-Chip (NoC), 32/64-bit IEEE 754 compliant FPU)
NetLogic Microsystems
XLP, a 32-core, quad-threaded MIPS64 processor.
XLR, an eight-core, quad-threaded MIPS64 processor.
XLS, an eight-core, quad-threaded MIPS64 processor.
Nvidia
RTX 3090 (10496 CUDA cores, GPGPU cores; plus other more specialized cores).
Parallax Propeller P8X32, an eight-core microcontroller.
picoChip PC200 series 200–300 cores per device for DSP & wireless.
Plurality HAL series tightly coupled 16-256 cores, L1 shared memory, hardware synchronized processor.
Rapport Kilocore KC256, a 257-core microcontroller with a PowerPC core and 256 8-bit "processing elements".
Raspberry Pi Ltd. RP2040, a dual ARM Cortex-M0+ microcontroller
SiCortex "SiCortex node" has six MIPS64 cores on a single chip.
SiFive
U74 includes 4 cores
Sony/IBM/Toshiba's Cell processor, a nine-core processor with one general purpose PowerPC core and eight specialized SPUs (Synergistic Processing Unit) optimized for vector operations used in the Sony PlayStation 3.
Sun Microsystems
MAJC 5200, two-core VLIW processor.
UltraSPARC IV and UltraSPARC IV+, dual-core processors.
UltraSPARC T1, an eight-core, 32-thread processor.
UltraSPARC T2, an eight-core, 64-concurrent-thread processor.
UltraSPARC T3, a sixteen-core, 128-concurrent-thread processor.
SPARC T4, an eight-core, 64-concurrent-thread processor.
SPARC T5, a sixteen-core, 128-concurrent-thread processor.
Sunway
Sunway SW26010, a 260-core processor used in the Sunway TaihuLight.
Texas Instruments
TMS320C80 MVP, a five-core multimedia video processor.
TMS320TMS320C66, 2-, 4-, 8-core DSP.
Tilera
TILE64, a 64-core 32-bit processor.
TILE-Gx, a 72-core 64-bit processor.
XMOS Software Defined Silicon quad-core XS1-G4.
Free
OpenSPARC
Academic
Stanford, 4-core Hydra processor
MIT, 16-core RAW processor
University of California, Davis, Asynchronous array of simple processors (AsAP)
36-core 610 MHz AsAP
167-core 1.2 GHz AsAP2
University of Washington, Wavescalar processor
University of Texas, Austin, TRIPS processor
Linköping University, Sweden, ePUMA processor
UC Davis, Kilocore, a 1000 core 1.78 GHz processor on a 32 nm IBM process
Benchmarks
The research and development of multicore processors often compares many options, and benchmarks are developed to help such evaluations. Existing benchmarks include SPLASH-2, PARSEC, and COSMIC for heterogeneous systems.
See also
CPU shielding
CUDA
GPGPU
Hyper-threading
Manycore
Multicore Association
Multitasking
OpenCL (Open Computing Language) – a framework for heterogeneous execution
Parallel random access machine
Partitioned global address space (PGAS)
Race condition
Thread
Notes
Digital signal processors (DSPs) have used multi-core architectures for much longer than high-end general-purpose processors. A typical example of a DSP-specific implementation would be a combination of a RISC CPU and a DSP MPU. This allows for the design of products that require a general-purpose processor for user interfaces and a DSP for real-time data processing; this type of design is common in mobile phones. In other applications, a growing number of companies have developed multi-core DSPs with very large numbers of processors.
Two types of operating systems are able to use a dual-CPU multiprocessor: partitioned multiprocessing and symmetric multiprocessing (SMP). In a partitioned architecture, each CPU boots into a separate segment of physical memory and operates independently; in an SMP OS, processors work in a shared space, executing threads within the OS independently.
References
Further reading
External links
"What Is a Processor Core?"—MakeUseOf
"Embedded moves to multicore"—Embedded Computing Design
"Multicore Is Bad News for Supercomputers"—IEEE Spectrum
Architecting solutions for the Manycore future, published on Feb 19, 2010 (more than one dead link in the slide)
Computer architecture
Digital signal processing
Flynn's taxonomy
Microprocessors
Parallel computing | Multi-core processor | [
"Technology",
"Engineering"
] | 6,575 | [
"Computers",
"Computer engineering",
"Computer architecture"
] |
3,503,227 | https://en.wikipedia.org/wiki/Solubility%20chart | A solubility chart is a chart describing whether the ionic compounds formed from different combinations of cations and anions dissolve in or precipitate from solution.
Chart
The following chart shows the solubility of various ionic compounds in water at 1 atm pressure and room temperature (approx. ). "Soluble" means the ionic compound doesn't precipitate, while "slightly soluble" and "insoluble" mean that a solid will precipitate; "slightly soluble" compounds like calcium sulfate may require heat to precipitate. For compounds with multiple hydrates, the solubility of the most soluble hydrate is shown.
Some compounds, such as nickel oxalate, will not precipitate immediately even though they are insoluble, requiring a few minutes to precipitate out.
See also
Solubility rules
Notes
References
Solutions | Solubility chart | [
"Chemistry"
] | 182 | [
"Homogeneous chemical mixtures",
"Solutions"
] |
3,503,493 | https://en.wikipedia.org/wiki/Di-tert-butyl%20dicarbonate | Di-tert-butyl dicarbonate is a reagent widely used in organic synthesis. Since this compound can be regarded formally as the acid anhydride derived from a tert-butoxycarbonyl (Boc) group, it is commonly referred to as Boc anhydride. This pyrocarbonate reacts with amines to give N-tert-butoxycarbonyl or so-called Boc derivatives. These carbamate derivatives do not behave as amines, which allows certain subsequent transformations to occur that would be incompatible with the amine functional group. The Boc group can later be removed from the amine using moderately strong acids (e.g., trifluoroacetic acid). Thus, Boc serves as a protective group, for instance in solid phase peptide synthesis. Boc-protected amines are unreactive to most bases and nucleophiles, allowing for the use of the fluorenylmethyloxycarbonyl group (Fmoc) as an orthogonal protecting group.
Preparation
Di-tert-butyl dicarbonate is inexpensive, so it is usually purchased. Classically, this compound is prepared from tert-butanol, carbon dioxide, and phosgene, using DABCO as a base.
This route is currently employed commercially by manufacturers in China and India. European and Japanese companies use the reaction of sodium tert-butoxide with carbon dioxide, catalysed by p-toluenesulfonic acid or methanesulfonic acid. This process involves a distillation of the crude material yielding a very pure grade.
Boc anhydride is also available as a 70% solution in toluene or THF. As Boc anhydride may melt at ambient temperatures, its storage and handling are sometimes simplified by using a solution.
Protection and deprotection of amines
The Boc group can be added to the amine under aqueous conditions using di-tert-butyl dicarbonate in the presence of a base such as sodium bicarbonate. Protection of the amine can also be accomplished in acetonitrile solution using 4-dimethylaminopyridine (DMAP) as the base.
Removal of the Boc in amino acids can be accomplished with strong acids such as trifluoroacetic acid neat or in dichloromethane or with HCl in methanol. A complication may be the tendency of the t-butyl cation intermediate to alkylate other nucleophiles; scavengers such as anisole or thioanisole may be used.
Selective cleavage of the N-Boc group in the presence of other protecting groups is possible when using AlCl3.
Reaction with trimethylsilyl iodide in acetonitrile followed by methanol is a mild and versatile method of deprotecting Boc-protected amines.
The use of triethylsilane as a carbocation scavenger in the presence of trifluoroacetic acid in dichloromethane has been shown to lead to increased yields, decreased reaction times, simple work-up and improved selectivity for the deprotection of t-butyl ester and t-butoxycarbonyl sites in protected amino-acids and peptides in the presence of other acid-sensitive protecting groups such as the benzyloxycarbonyl, 9-fluorenylmethoxycarbonyl, O- and S-benzyl and t-butylthio groups.
Other uses
The synthesis of 6-acetyl-1,2,3,4-tetrahydropyridine, an important bread aroma compound, starting from 2-piperidone was accomplished using t-boc anhydride.
(See Maillard reaction). The first step in this reaction sequence is the formation of the carbamate from the reaction of the amide nitrogen with boc anhydride in acetonitrile using DMAP as a catalyst.
Di-tert-butyl dicarbonate also finds applications as a polymer blowing agent due to its decomposition into gaseous products upon heating.
Hazards
Bottles of di-tert-butyl dicarbonate can build up internal pressure in sealed containers, caused by its slow decomposition to di-tert-butyl carbonate and ultimately tert-butanol and CO2 in the presence of moisture. For this reason, it is usually sold and stored in plastic bottles rather than glass ones.
The main hazard of the reagent is its inhalational toxicity. Its median lethal concentration of 100 mg/m3 over 4 hours in rats is comparable to that of phosgene (49 mg/m3 over 50 min in rats).
References
External links
Reagents for organic chemistry
Dicarbonates
Tert-Butyl esters | Di-tert-butyl dicarbonate | [
"Chemistry"
] | 1,028 | [
"Reagents for organic chemistry"
] |
23,717,143 | https://en.wikipedia.org/wiki/C5H4N4 | {{DISPLAYTITLE:C5H4N4}}
The molecular formula C5H4N4 may refer to:
Purine
Purine analogues
5-Aza-7-deazapurine
Pyridine analogues
Triazolopyridine
4-azidopyridine
Molecular formulas | C5H4N4 | [
"Physics",
"Chemistry"
] | 67 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,717,161 | https://en.wikipedia.org/wiki/C4H12N2 | {{DISPLAYTITLE:C4H12N2}}
The molecular formula C4H12N2 (molar mass: 88.15 g/mol) may refer to:
Dimethylethylenediamines
1,1-Dimethylethylenediamine
1,2-Dimethylethylenediamine
2,3-Butanediamine
Putrescine
Tetramethylhydrazine
Molecular formulas | C4H12N2 | [
"Physics",
"Chemistry"
] | 91 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,717,280 | https://en.wikipedia.org/wiki/C5H5NO | {{DISPLAYTITLE:C5H5NO}}
The molecular formula C5H5NO (molar mass: 95.10 g/mol, exact mass: 95.03711 u) may refer to:
Pyridone
2-Pyridone
3-Pyridone
4-Pyridone
Pyridine-N-oxide
Molecular formulas | C5H5NO | [
"Physics",
"Chemistry"
] | 82 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,717,469 | https://en.wikipedia.org/wiki/C10H10O2 | {{DISPLAYTITLE:C10H10O2}}
The molecular formula C10H10O2 (molar mass: 162.18 g/mol) may refer to:
Benzoylacetone
Isosafrole, 3,4-methylenedioxyphenyl-1-propene
4-Methoxycinnamaldehyde
Methyl cinnamate
Safrole, 3,4-methylenedioxyphenyl-2-propene
4,5-Dihydro-1-benzoxepin-3(2H)-one, a watermelon flavorant | C10H10O2 | [
"Chemistry"
] | 130 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
23,717,479 | https://en.wikipedia.org/wiki/C13H18O7 | {{DISPLAYTITLE:C13H18O7}}
The molecular formula C13H18O7 (molar mass: 286.28 g/mol, exact mass: 286.105253 u) may refer to:
Gastrodin, a natural polyphenol found in the orchid Gastrodia elata
Salicin, a natural polyphenol found in willow | C13H18O7 | [
"Chemistry"
] | 84 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
23,717,513 | https://en.wikipedia.org/wiki/C7H6O3 | {{DISPLAYTITLE:C7H6O3}}
The molecular formula C7H6O3 may refer to:
Dihydroxybenzaldehydes
2,4-Dihydroxybenzaldehyde
3,4-Dihydroxybenzaldehyde
Monohydroxybenzoic acids
2-Hydroxybenzoic acid (salicylic acid)
3-Hydroxybenzoic acid
4-Hydroxybenzoic acid
Peroxybenzoic acid
Sesamol | C7H6O3 | [
"Chemistry"
] | 111 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
23,727,664 | https://en.wikipedia.org/wiki/Shanghai%20Stem%20Cell%20Institute | The Shanghai Stem Cell Institute is an institute in Shanghai, People's Republic of China dedicated to stem cell research.
The institute
The institute, located within Shanghai Jiao Tong University's School of Medicine, is entirely funded by the government of the People's Republic of China.
In 2007, the first Shanghai International Symposium on Stem Cell Research took place at Shanghai Jiao Tong University.
IPS cell breakthrough
On July 24, 2009, Chinese researchers from the Shanghai Stem Cell Institute, led by Professor Fanyi Zeng, published the first report of a breakthrough in which adult cells were reprogrammed so that, like standard embryonic stem cells, they could differentiate into any body cell; the cells in question are known as "induced pluripotent stem cells" (iPS cells). The iPS cells were obtained by genetically reprogramming the skin cells of mice to act like embryonic stem cells, which were then able to differentiate into all forms of body tissue. The researchers managed to use the iPS cells to create every type of cell in a mouse, creating entire mouse pups using the technique. This was the first time the technique had been used to make an entire mouse.
This breakthrough, published in the journals Nature and Cell Stem Cell and developed independently by two teams in China, may reduce the need for stem cells obtained from human embryos. The oldest living mice created by the technique are nine months old and are reproducing, albeit showing signs of abnormalities. "This gives us hope for future therapeutic intervention using patients' own re-programmed cells in our far future," according to Professor Zeng Fanyi. A total of 27 mice were successfully born from the first generation of mice created from the iPS cells, and these were able to reproduce without any issues.
See also
Stem cell
References
External links
Mice made from induced stem cells - Nature News
2009 establishments in China
Biotechnology organizations
Cloning
Shanghai Jiao Tong University
Stem cell research
Medical schools in China
Organizations established in 2009 | Shanghai Stem Cell Institute | [
"Chemistry",
"Engineering",
"Biology"
] | 419 | [
"Stem cell research",
"Cloning",
"Genetic engineering",
"Translational medicine",
"Tissue engineering",
"Biotechnology organizations"
] |
20,727,645 | https://en.wikipedia.org/wiki/Somatic%20fusion | Somatic fusion, also called protoplast fusion, is a type of genetic modification in plants by which two distinct species of plants are fused together to form a new hybrid plant with the characteristics of both, a somatic hybrid. Hybrids have been produced either between different varieties of the same species (e.g. between non-flowering potato plants and flowering potato plants) or between two different species (e.g. between wheat Triticum and rye Secale to produce Triticale).
Uses of somatic fusion include developing plants resistant to disease, such as making potato plants resistant to potato leaf roll disease. Through somatic fusion, the crop potato plant Solanum tuberosum – the yield of which is severely reduced by a viral disease transmitted on by the aphid vector – is fused with the wild, non-tuber-bearing potato Solanum brevidens, which is resistant to the disease. The resulting hybrid has the chromosomes of both plants and is thus similar to polyploid plants.
Somatic hybridization was first introduced by Carlson et al. in Nicotiana glauca.
Process for plant cells
The somatic fusion process occurs in four steps:
The removal of the cell wall of one cell of each type of plant using cellulase enzyme to produce a somatic cell called a protoplast
The cells are then fused using electric shock (electrofusion) or chemical treatment to join the cells and fuse together the nuclei. The resulting fused nucleus is called heterokaryon.
The formation of the cell wall is then induced using hormones
The cells are then grown into calluses which then are further grown to plantlets and finally to a full plant, known as a somatic hybrid.
Unlike the procedure for seed plants described above, fusion of moss protoplasts can be initiated without electric shock, using polyethylene glycol (PEG) instead. Further, moss protoplasts do not need phytohormones for regeneration, and they do not form a callus. Instead, regenerating moss protoplasts behave like germinating moss spores. Sodium nitrate and calcium ions at high pH can also be used, although results are variable depending on the organism.
Applications of hybrid cells
Somatic cells of different types can be fused to obtain hybrid cells. Hybrid cells are useful in a variety of ways, e.g.,
(i) to study the control of cell division and gene expression,
(ii) to investigate malignant transformations,
(iii) to obtain viral replication,
(iv) for gene or chromosome mapping and for
(v) production of monoclonal antibodies by producing hybridoma (hybrid cells between an immortalised cell and an antibody producing lymphocyte), etc.
Chromosome mapping through somatic cell hybridization is essentially based on fusion of human and mouse somatic cells. Generally, human fibrocytes or leucocytes are fused with mouse continuous cell lines.
When human and mouse cells (or cells of any two mammalian species or of the same species) are mixed, spontaneous cell fusion occurs at a very low rate (about 10−6). Cell fusion is enhanced 100 to 1000 times by the addition of ultraviolet-inactivated Sendai (parainfluenza) virus or polyethylene glycol (PEG).
These agents adhere to the plasma membranes of cells and alter their properties in such a way that facilitates their fusion. Fusion of two cells produces a heterokaryon, i.e., a single hybrid cell with two nuclei, one from each of the cells entering fusion. Subsequently, the two nuclei also fuse to yield a hybrid cell with a single nucleus.
A generalized scheme for somatic cell hybridization may be described as follows. Appropriate human and mouse cells are selected and mixed together in the presence of inactivated Sendai virus or PEG to promote cell fusion. After a period of time, the cells (a mixture of man, mouse and 'hybrid' cells) are plated on a selective medium, e.g., HAT medium, which allows the multiplication of hybrid cells only.
Several clones (each derived from a single hybrid cell) of the hybrid cells are thus isolated and subjected to both cytogenetic and appropriate biochemical analyses for the detection of enzyme/ protein/trait under investigation. An attempt is now made to correlate the presence and absence of the trait with the presence and absence of a human chromosome in the hybrid clones.
If there is a perfect correlation between the presence and absence of a human chromosome and that of a trait in the hybrid clones, the gene governing the trait is taken to be located in the concerned chromosome.
The HAT medium is one of the several selective media used for the selection of hybrid cells. This medium is supplemented with hypoxanthine, aminopterin and thymidine, hence the name HAT medium. Antimetabolite aminopterin blocks the cellular biosynthesis of purines and pyrimidines from simple sugars and amino acids.
However, normal human and mouse cells can still multiply as they can utilize hypoxanthine and thymidine present in the medium through a salvage pathway, which ordinarily recycles the purines and pyrimidines produced from degradation of nucleic acids.
Hypoxanthine is converted into guanine by the enzyme hypoxanthine-guanine phosphoribosyltransferase (HGPRT), while thymidine is phosphorylated by thymidine kinase (TK); both HGPRT and TK are enzymes of the salvage pathway.
On a HAT medium, only those cells that have active HGPRT (HGPRT+) and TK (TK+) enzymes can proliferate, while those deficient in these enzymes (HGPRT− and/or TK−) cannot divide (since they cannot produce purines and pyrimidines due to the aminopterin present in the HAT medium).
For using HAT medium as a selective agent, human cells used for fusion must be deficient for either the enzyme HGPRT or TK, while mouse cells must be deficient for the other enzyme of this pair. Thus, one may fuse HGPRT-deficient human cells (designated TK+ HGPRT−) with TK-deficient mouse cells (designated TK− HGPRT+).
Their fusion products (hybrid cells) will be TK+ (due to the human gene) and HGPRT+ (due to the mouse gene) and will multiply on the HAT medium, while the man and mouse cells will fail to do so. Experiments with other selective media can be planned in a similar fashion.
Characteristics of somatic hybridization and cybridization
Somatic cell fusion appears to be the only means through which two different parental genomes can be recombined among plants that cannot reproduce sexually (asexual or sterile).
Protoplasts of sexually sterile (haploid, triploid, and aneuploid) plants can be fused to produce fertile diploids and polyploids.
Somatic cell fusion overcomes sexual incompatibility barriers. In some cases somatic hybrids between two incompatible plants have also found application in industry or agriculture.
Somatic cell fusion is useful in the study of cytoplasmic genes and their activities and this information can be applied in plant breeding experiments.
Inter-specific and inter-generic fusion achievements
Note: The table lists only a few examples; there are many more crosses. The possibilities of this technology are great; however, not all species can easily be put into protoplast culture.
References
Genetic engineering
Molecular biology
Biological engineering
Biotechnology | Somatic fusion | [
"Chemistry",
"Engineering",
"Biology"
] | 1,586 | [
"Biological engineering",
"Genetic engineering",
"Biotechnology",
"nan",
"Molecular biology",
"Biochemistry"
] |
20,736,184 | https://en.wikipedia.org/wiki/Solidago%20sempervirens | Solidago sempervirens, the seaside goldenrod or salt-marsh goldenrod, is a plant species in the genus Solidago of the family Asteraceae. It is native to eastern North America and parts of the Caribbean. It is an introduced species in the Great Lakes region. Similar plants found in the Azores (now Solidago azorica) are thought have evolved from a natural introduction of this species.
Description
Solidago sempervirens is a succulent, herbaceous perennial that reaches heights of 4–6 feet (120–180 cm). It is unusual in the genus in having toothless, hairless leaves, thicker than those of most other Solidago species. Flower heads are found in a large paniculiform inflorescence at the top of the plant, often with branches that bend backwards towards the base. This species blooms in late summer and well into the fall, later in the season than most of its relatives. Its fruits are wind-dispersed achenes. The flower heads are often yellow, with sprouts of buds at the ends of the short branches.
Distribution and habitat
In nature, S. sempervirens is primarily a plant of the seashore, and is accordingly found along coasts of the Atlantic Ocean, the Caribbean, and the Gulf of Mexico from Central America north as far as Newfoundland. It grows on sand dunes, salt marshes, and the banks of estuaries. It is naturally found inland along the St. Lawrence Seaway and the Great Lakes, and has expanded its range further inland along roadsides over the past 30 years. It is highly tolerant of both saline soils and salt spray, and is usually found growing on coastal dunes and in salt marshes.
Varieties
Solidago sempervirens var. mexicana (L.) Fernald - from Massachusetts south to Central America and the West Indies
Solidago sempervirens var. sempervirens - from Newfoundland south to Virginia; introduced in Great Lakes region
Ecology
Solidago sempervirens is a seashore plant with a high salinity tolerance. It is occasionally cultivated as an ornamental, preferring sunny locations with sandy soil, with little competition from other species.
Galls
This species is host to the following insect induced galls:
Eurosta cribrata (Wulp, 1867)
Gnorimoschema salinaris Busck, 1911
Calycomyza solidaginis Kaltenbach, 1869
References
External links
sempervirens
Salt marsh plants
Halophytes
Flora of Northern America
Plants described in 1753
Taxa named by Carl Linnaeus | Solidago sempervirens | [
"Chemistry"
] | 531 | [
"Halophytes",
"Salts"
] |
20,740,421 | https://en.wikipedia.org/wiki/Methyl%20cyanoacrylate | Methyl cyanoacrylate (MCA; also sometimes referred to as α-cyanoacrylate or alpha-cyanoacrylate) is an organic compound that contains several functional groups: a methyl ester, a nitrile, and an alkene. It is a colorless liquid with low viscosity. Its chief use is as the main component of cyanoacrylate glues. It can be encountered under many trade names. Methyl cyanoacrylate is less commonly encountered than ethyl cyanoacrylate.
It is soluble in acetone, methyl ethyl ketone, nitromethane, and dichloromethane. MCA polymerizes rapidly in presence of moisture.
Safety
Heating the polymer causes depolymerization of the cured MCA, producing gaseous products which are a strong irritant to the lungs and eyes.
With regard to occupational exposure to MCA, the National Institute for Occupational Safety and Health recommends that workers not be exposed to more than 2 ppm (8 mg/m3) averaged over an eight-hour workshift, or 4 ppm (16 mg/m3) as a short-term exposure limit.
References
Methyl esters
Cyanoacrylate esters
Monomers
Lachrymatory agents | Methyl cyanoacrylate | [
"Chemistry",
"Materials_science"
] | 259 | [
"Lachrymatory agents",
"Monomers",
"Polymer chemistry",
"Chemical weapons"
] |
15,607,775 | https://en.wikipedia.org/wiki/Reaction%20Engines%20LAPCAT%20A2 | The Reaction Engines Limited LAPCAT Configuration A2 (called the LAPCAT A2) is a design study for a hypersonic speed jet airliner intended to provide long range, high capacity commercial transportation.
The aircraft concept was designed, as part of the European Union-funded Long-Term Advanced Propulsion Concepts and Technologies (LAPCAT) programme, by the British aerospace engineering firm Reaction Engines Limited, who said it could be developed into a working aircraft within 25 years once there is market demand for it.
Development
The vehicle design was intended to have about range and good subsonic and supersonic speed fuel efficiency, thus avoiding the problems inherent in earlier supersonic aircraft. The top speed is projected to be Mach 5+. The design was to use liquid hydrogen as a fuel, which can achieve twice the specific impulse of kerosene, and the cryogenic fluid can also be used to cool the vehicle and the air entering the engines via a precooler.
Alan Bond, managing director of Reaction Engines, said "Our work shows that it is possible technically; now it's up to the world to decide if it wants it."
The developers said in 2009 that it would be able to fly from Europe to Australia in under five hours, compared to around a complete day of travel with normal aircraft. The cost of a ticket was intended to be roughly at business-class level.
Design
Capabilities
According to Alan Bond, the A2 design could fly subsonically from Brussels International Airport into the North Atlantic then reaching Mach 5 across the North Pole and over the Pacific to Australia. The route described isn't a great circle, in order to minimise the travel time while avoiding flying supersonically over land, as there are concerns the sonic boom generated by travelling at supersonic speed could cause significant discomfort for people on the ground.
The A2 design is much longer than conventional jets, but would be lighter than a Boeing 747. It could take off and land on 2000s-era airport runways.
The A2 design does not have windows. The heat generated by the hypersonic airflow over the body puts constraints on window design which would make them too heavy. One solution Reaction Engines proposed was to install flat panel displays, showing images of the scene outside.
Engines
The Scimitar engines use technology related to the company's earlier SABRE engine, which is intended for space launch, but here adapted for very long distance, very high speed travel.
Normally, as air enters a jet engine, it is compressed by the inlet and thus heats up. Compressing that heated air further in the engine's compressor section requires much more power, which reduces the compressor's efficiency dramatically. Furthermore, this means that high-speed engines need to be made of materials that can survive extremely high temperatures. In practice, this inevitably makes the engines heavier and also reduces the amount of fuel that can be burned, to avoid melting the gas turbine section of the engine. This in turn reduces thrust at high speed.
The key design feature for the Scimitar engines is the precooler, which is a heat exchanger that transfers the heat from the incoming air into the hydrogen fuel. This greatly cools the air, which allows the engines to burn more fuel even at very high speed, and allows the engines to be made of lighter, but more heat susceptible, materials such as light alloys. The engine inlet diffuser also has to slow the incoming air to subsonic speeds because if the air moved through the precooler and compressor at supersonic speeds, it would cause damage to them.
The rest of the engine is described as having high-bypass (4:1) turbofan engine features to give it good efficiency and subsonic (quiet) exhaust velocity at low speeds. Unlike SABRE, the A2's Scimitar engine would not have rocket engine features.
Specifications (LAPCAT A2)
See also
References
External links
Hypersonic aircraft
Hydrogen-powered aircraft
Ramjet-powered aircraft
Reaction Engines aircraft
Supersonic transports
Abandoned civil aircraft projects of the United Kingdom
Aircraft with retractable tricycle landing gear | Reaction Engines LAPCAT A2 | [
"Physics"
] | 819 | [
"Physical systems",
"Transport",
"Supersonic transports"
] |
15,611,465 | https://en.wikipedia.org/wiki/Magnetic%20Prandtl%20number | The Magnetic Prandtl number (Prm) is a dimensionless quantity occurring in magnetohydrodynamics which approximates the ratio of momentum diffusivity (viscosity) and magnetic diffusivity. It is defined as:
where:
Rem is the magnetic Reynolds number
Re is the Reynolds number
ν is the momentum diffusivity (kinematic viscosity)
η is the magnetic diffusivity
At the base of the Sun's convection zone the Magnetic Prandtl number is approximately 10−2, and in the interiors of planets and in liquid-metal laboratory dynamos is approximately 10−5.
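A minimal sketch of the definition above, using illustrative order-of-magnitude values (not measured data) for a liquid-metal laboratory dynamo:

```python
def magnetic_prandtl(kinematic_viscosity: float, magnetic_diffusivity: float) -> float:
    """Magnetic Prandtl number: Pr_m = nu / eta (equivalently Re_m / Re)."""
    return kinematic_viscosity / magnetic_diffusivity

# Assumed order-of-magnitude values: nu ~ 1e-7 m^2/s, eta ~ 1e-2 m^2/s,
# giving Pr_m ~ 1e-5, consistent with the liquid-metal figure quoted above.
print(magnetic_prandtl(1e-7, 1e-2))  # 1e-05
```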
See also
Prandtl number
References
Dimensionless numbers of fluid mechanics
Fluid dynamics
Magnetohydrodynamics | Magnetic Prandtl number | [
"Chemistry",
"Engineering"
] | 154 | [
"Piping",
"Magnetohydrodynamics",
"Chemical engineering",
"Fluid dynamics"
] |
15,612,827 | https://en.wikipedia.org/wiki/Bandwidth%20%28computing%29 | In computing, bandwidth is the maximum rate of data transfer across a given path. Bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth.
This definition of bandwidth is in contrast to the field of signal processing, wireless communications, modem data transmission, digital communications, and electronics, in which bandwidth is used to refer to analog signal bandwidth measured in hertz, meaning the frequency range between lowest and highest attainable frequency while meeting a well-defined impairment level in signal power. The actual bit rate that can be achieved depends not only on the signal bandwidth but also on the noise on the channel.
Network capacity
The term bandwidth sometimes defines the net bit rate (peak bit rate, information rate, or physical-layer useful bit rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth tests measure the maximum throughput of a computer network. The maximum rate that can be sustained on a link is limited by the Shannon–Hartley channel capacity for these communication systems, which is dependent on the bandwidth in hertz and the noise on the channel.
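As a rough illustration of the Shannon–Hartley limit mentioned above (C = B·log2(1 + S/N)), the following sketch uses assumed example values for the channel bandwidth and signal-to-noise ratio:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley channel capacity in bit/s (SNR given as a linear ratio)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative values: a 1 MHz channel at 30 dB SNR (a linear ratio of 1000)
# has a theoretical capacity of roughly 10 Mbit/s.
snr = 10 ** (30 / 10)
print(round(shannon_capacity(1e6, snr)))  # ~9967226 bit/s
```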
Network consumption
The consumed bandwidth in bit/s, corresponds to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The consumed bandwidth can be affected by technologies such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap, bandwidth allocation (for example bandwidth allocation protocol and dynamic bandwidth allocation), etc. A bit stream's bandwidth is proportional to the average consumed signal bandwidth in hertz (the average spectral bandwidth of the analog signal representing the bit stream) during a studied time interval.
Channel bandwidth may be confused with useful data throughput (or goodput). For example, a channel with x bit/s may not necessarily transmit data at x rate, since protocols, encryption, and other factors can add appreciable overhead. For instance, much internet traffic uses the transmission control protocol (TCP), which requires a three-way handshake for each transaction. Although in many modern implementations the protocol is efficient, it does add significant overhead compared to simpler protocols. Also, data packets may be lost, which further reduces the useful data throughput. In general, for any effective digital communication, a framing protocol is needed; overhead and effective throughput depends on implementation. Useful throughput is less than or equal to the actual channel capacity minus implementation overhead.
Maximum throughput
The asymptotic bandwidth (formally asymptotic throughput) for a network is the measure of maximum throughput for a greedy source, for example when the message size (the number of packets per second from a source) approaches the maximum.
Asymptotic bandwidths are usually estimated by sending a number of very large messages through the network, measuring the end-to-end throughput. As with other bandwidths, the asymptotic bandwidth is measured in multiples of bits per seconds. Since bandwidth spikes can skew the measurement, carriers often use the 95th percentile method. This method continuously measures bandwidth usage and then removes the top 5 percent.
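A minimal sketch of the 95th-percentile method described above, using one simplified indexing convention and invented sample values (real billing systems differ in sampling interval and rounding):

```python
def billable_95th_percentile(samples_mbps):
    """Discard the top 5% of usage samples and bill on the highest remaining one."""
    ordered = sorted(samples_mbps)
    cutoff_index = max(int(len(ordered) * 0.95) - 1, 0)
    return ordered[cutoff_index]

# Twenty illustrative 5-minute samples: the brief 900 Mbit/s spike falls in
# the discarded top 5%, so the billable rate is 150 Mbit/s rather than 900.
samples = [120, 150, 140, 135, 900, 130, 125, 145, 138, 128,
           132, 141, 136, 127, 133, 139, 131, 126, 137, 134]
print(billable_95th_percentile(samples))  # 150
```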
Multimedia
Digital bandwidth may also refer to the multimedia bit rate or average bitrate after multimedia data compression (source coding), defined as the total amount of data divided by the playback time.
Due to the impractically high bandwidth requirements of uncompressed digital media, the required multimedia bandwidth can be significantly reduced with data compression. The most widely used data compression technique for media bandwidth reduction is the discrete cosine transform (DCT), which was first proposed by Nasir Ahmed in the early 1970s. DCT compression significantly reduces the amount of memory and bandwidth required for digital signals, capable of achieving a data compression ratio of up to 100:1 compared to uncompressed media.
Web hosting
In Web hosting service, the term bandwidth is often incorrectly used to describe the amount of data transferred to or from the website or server within a prescribed period of time, for example bandwidth consumption accumulated over a month measured in gigabytes per month. The more accurate phrase used for this meaning of a maximum amount of data transfer each month or given period is monthly data transfer.
A similar situation can occur for end-user Internet service providers as well, especially where network capacity is limited (for example in areas with underdeveloped internet connectivity and on wireless networks).
Internet connections
Edholm's law
Edholm's law, proposed by and named after Phil Edholm in 2004, holds that the bandwidth of telecommunication networks double every 18 months, which has proven to be true since the 1970s. The trend is evident in the cases of Internet, cellular (mobile), wireless LAN and wireless personal area networks.
The MOSFET (metal–oxide–semiconductor field-effect transistor) is the most important factor enabling the rapid increase in bandwidth. The MOSFET (MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, and went on to become the basic building block of modern telecommunications technology. Continuous MOSFET scaling, along with various advances in MOS technology, has enabled both Moore's law (transistor counts in integrated circuit chips doubling every two years) and Edholm's law (communication bandwidth doubling every 18 months).
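A minimal sketch of the doubling behaviour Edholm's law describes; the starting bandwidth and time span are illustrative assumptions, not historical data points:

```python
def projected_bandwidth(initial_bps: float, years: float,
                        doubling_period_months: float = 18.0) -> float:
    """Bandwidth after `years` if it doubles every `doubling_period_months`."""
    doublings = years * 12.0 / doubling_period_months
    return initial_bps * 2.0 ** doublings

# Illustrative: a 56 kbit/s link projected forward 15 years doubles
# ten times (a factor of 1024), reaching roughly 57 Mbit/s.
print(round(projected_bandwidth(56_000, 15)))  # 57344000
```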
References
Network performance
Information theory
Temporal rates | Bandwidth (computing) | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 1,091 | [
"Temporal quantities",
"Telecommunications engineering",
"Physical quantities",
"Applied mathematics",
"Temporal rates",
"Computer science",
"Information theory"
] |
10,488,043 | https://en.wikipedia.org/wiki/Interferon%20alpha-n3 | Interferon alpha-n3 (Alferon-N) is a medication consisting of purified natural human interferon alpha proteins used for the treatment of genital warts.
References
Immunostimulants
Antiviral drugs | Interferon alpha-n3 | [
"Biology"
] | 50 | [
"Antiviral drugs",
"Biocides"
] |
10,489,186 | https://en.wikipedia.org/wiki/Melamine%20resin | Melamine resin or melamine formaldehyde (also shortened to melamine) is a resin with melamine rings terminated with multiple hydroxyl groups derived from formaldehyde. This thermosetting plastic material is made from melamine and formaldehyde. In its butylated form, it is dissolved in n-butanol and xylene. It is then used to cross-link with alkyd, epoxy, acrylic, and polyester resins, used in surface coatings. There are many types, varying from very slow to very fast curing.
Curing
Melamine-formaldehyde can be cured by heating, which induces dehydration and crosslinking. The crosslinking can be carried out to a limited degree to give resins. Either the melamine-formaldehyde resins or melamine-formaldehyde "monomer" can be cured by treatment with any of several polyols.
Applications
Construction material
The principal use of melamine resin is as the main constituent of high-pressure laminates, such as Formica and Arborite, and of laminate flooring. Melamine-resin tile wall panels can also be used as whiteboards. Melamine formaldehyde is used in plastic laminate and overlay materials. Formaldehyde is more tightly bound in melamine-formaldehyde than it is in urea-formaldehyde, reducing emissions.
Other
In the kitchen
Melamine resin is often used in kitchen utensils and plates (such as Melmac). Because of its high dielectric constant ranging from 7.2 to 8.4, melamine resin utensils and bowls are not microwave safe.
During the late 1950s and 1960s melamine tableware became fashionable. Aided by the stylish modern designs of A. H. Woodfull and the Product Design Unit of British Industrial Plastics, it was thought to threaten the dominant position of ceramics in the market. In the late 1960s the tendency of melamine cups and plates to become stained and scratched led to a decline in sales, and eventually the material became largely restricted to the camping and nursery markets, in which its light weight and resistance to breaking were valued.
Cabinet and furniture making
Melamine resin is often used to saturate decorative paper that is laminated under heat and pressure and then pasted onto particle board; the resulting panel, often called melamine, is commonly used in ready-to-assemble furniture and kitchen cabinets.
Melamine is available in diverse sizes and thicknesses, as well as a large number of colors and patterns. The sheets are heavy for their size, and the resin is prone to chipping when being cut with conventional table saws.
Carbon capture
Melamine, with the addition of formaldehyde, cyanuric acid, and DETA (diethylenetriamine) has been demonstrated to bind CO2 for purposes of carbon capture, according to researchers at Stanford, Berkeley, and Texas A&M.
Microencapsulation of active compounds
Melamine-based resin (e.g., melamine-formaldehyde or melamine-urea-formaldehyde resins) can also be used to microencapsulate active agents, such as healing agents or phase change materials, to prevent leakage above their melting temperature. The resulting surface is quite inert and can hardly be modified with traditional techniques such as silanization. Some research has shown that polydopamine can be effective as a surface modifier for this resin.
Production and structure
Melamine-formaldehyde resin forms via the condensation of formaldehyde with melamine to give, under idealized conditions, the hexa-hydroxymethyl derivative. Upon heating in the presence of acid, this or similar hydroxymethylated species undergoes further condensation and crosslinking. Linkages between the heterocycles include mono-, di-, and polyethers. The microstructure of the material can be analyzed by NMR spectroscopy. The crosslinking density of melamine resins can be controlled by co-condensation with bifunctional analogues of melamine, benzoguanamine and acetoguanamine.
See also
Melamine foam is a special form of melamine resin. It is used mainly as an insulating and soundproofing material and more recently as a cleaning abrasive.
Formica is a brand of composite materials manufactured by the Formica Corporation. In common use, the term refers to the company's classic product, a heat-resistant, wipe-clean, plastic laminate of paper or fabric with melamine resin.
References
Kitchenware
Building materials
Organic polymers
Thermosetting plastics | Melamine resin | [
"Physics",
"Chemistry",
"Engineering"
] | 967 | [
"Organic polymers",
"Building engineering",
"Architecture",
"Organic compounds",
"Construction",
"Materials",
"Matter",
"Building materials"
] |
10,491,549 | https://en.wikipedia.org/wiki/Mannitol%20salt%20agar | Mannitol salt agar or MSA is a commonly used selective and differential growth medium in microbiology. It encourages the growth of a group of certain bacteria while inhibiting the growth of others.
It contains a high concentration (about 7.5–10%) of salt (NaCl), which is inhibitory to most bacteria, making MSA selective against most Gram-negative bacteria and selective for some Gram-positive bacteria (Staphylococcus, Enterococcus and Micrococcaceae) that tolerate high salt concentrations. It is also a differential medium for mannitol-fermenting staphylococci: it contains the sugar alcohol mannitol and phenol red, a pH indicator for detecting the acid produced by mannitol-fermenting staphylococci. Staphylococcus aureus produces yellow colonies with yellow zones, whereas coagulase-negative staphylococci produce small pink or red colonies with no colour change to the medium. If an organism can ferment mannitol, an acidic byproduct is formed that causes the phenol red in the agar to turn yellow. It is used for the selective isolation of presumptive pathogenic (pp) Staphylococcus species.
Expected results
Gram + Staphylococcus: fermenting mannitol: medium turns yellow (e.g. S. aureus)
Gram + Staphylococcus: not fermenting mannitol, medium does not change color (e.g. S. epidermidis)
Gram + Streptococcus: inhibited growth
Gram -: inhibited growth
Typical composition
MSA typically contains:
5.0 g/L enzymatic digest of casein
5.0 g/L enzymatic digest of animal tissue
1.0 g/L beef extract
10.0 g/L D-mannitol
75.0 g/L sodium chloride
0.025 g/L phenol red
15.0 g/L agar
pH 7.4 ± 0.2 at 25 °C
References
Biochemistry detection reactions
Microbiological media | Mannitol salt agar | [
"Chemistry",
"Biology"
] | 440 | [
"Biochemistry detection reactions",
"Microbiology equipment",
"Biochemical reactions",
"Microbiology techniques",
"Microbiological media"
] |
10,494,269 | https://en.wikipedia.org/wiki/Bethe%E2%80%93Salpeter%20equation | The Bethe–Salpeter equation (BSE, named after Hans Bethe and Edwin Salpeter) is an integral equation, the solution of which describes the structure of a relativistic two-body (particles) bound state in a covariant formalism quantum field theory (QFT). The equation was first published in 1950 at the end of a paper by Yoichiro Nambu, but without derivation.
Due to its common application in several branches of theoretical physics, the Bethe–Salpeter equation appears in many forms. One form often used in high energy physics is
$$\Gamma(P,p) = \int \frac{d^4 k}{(2\pi)^4}\, K(P,p;k)\, S\!\left(k - \tfrac{P}{2}\right) \Gamma(P,k)\, S\!\left(k + \tfrac{P}{2}\right),$$
where $\Gamma$ is the Bethe–Salpeter amplitude (BSA), $K$ the Green's function representing the interaction, and $S$ the dressed propagators of the two constituent particles.
In quantum theory, bound states are composite physical systems with a lifetime significantly longer than the time scale of the interaction breaking their structure (otherwise the physical systems under consideration are called resonances), thus allowing ample time for the constituents to interact. By accounting for all possible interactions that can occur between the two constituents, the BSE is a tool to calculate properties of deeply bound states. The BSA, as its solution, encodes the structure of the bound state under consideration.
Because it can be derived by identifying bound states with poles in the S-matrix of the 4-point function involving the constituent particles, the equation is related to the quantum-field description of scattering processes in terms of Green's functions.
As a general-purpose tool the applications of the BSE can be found in most quantum field theories. Examples include positronium (bound state of an electron–positron pair), excitons (bound states of an electron–hole pairs), and mesons (as quark-antiquark bound states).
Even for simple systems such as positronium, the equation cannot be solved exactly under quantum electrodynamics (QED), despite its exact formulation. The equation can, however, be reduced without solving it exactly. In the case where particle-pair production can be ignored and one of the two fermion constituents is significantly more massive than the other, the system simplifies into the Dirac equation for the light particle in the external potential of the heavy one.
Derivation
The starting point for the derivation of the Bethe–Salpeter equation is the two-particle (or four-point) Dyson equation
$$G = S_1\,S_2 + S_1\,S_2\,K\,G$$
in momentum space, where $G$ is the two-particle Green function, $S_1$ and $S_2$ are the free propagators and $K$ is an interaction kernel, which contains all possible interactions between the two particles. The crucial step is now to assume that bound states appear as poles in the Green function. One assumes that the two particles come together and form a bound state with mass $M$, that this bound state propagates freely, and that it then splits into its two constituents again. Therefore, one introduces the Bethe–Salpeter wave function $\Psi$, which is a transition amplitude of the two constituents into the bound state, and then makes an Ansatz for the Green function in the vicinity of the pole as
$$G \approx \frac{\Psi\,\bar{\Psi}}{P^2 - M^2},$$
where $P$ is the total momentum of the system. One sees that if this momentum satisfies $P^2 = M^2$, which is exactly the Einstein energy-momentum relation (with the four-momentum $P_\mu$ and the bound-state mass $M$), the four-point Green function contains a pole. If one plugs that Ansatz into the Dyson equation above and sets the total momentum $P$ such that the energy-momentum relation holds, a pole appears on both sides of the equation.
Comparing the residues yields
$$\Psi = S_1\,S_2\,K\,\Psi.$$
This is already the Bethe–Salpeter equation, written in terms of the Bethe–Salpeter wave functions. To obtain the above form one introduces the Bethe–Salpeter amplitudes $\Gamma$ via
$$\Psi = S_1\,S_2\,\Gamma$$
and gets finally
$$\Gamma = K\,S_1\,S_2\,\Gamma,$$
which is the form written down above, with the explicit momentum dependence.
Rainbow-ladder approximation
In principle the interaction kernel K contains all possible two-particle-irreducible interactions that can occur between the two constituents. In order to carry out practical calculations one has to model it by choosing a subset of the interactions. Since in quantum field theories interaction is described via the exchange of particles (e.g. photons in QED, or gluons in quantum chromodynamics), the simplest interaction apart from a contact interaction is modeled by the exchange of only one of these force-carrying particles with a known propagator.
As the Bethe–Salpeter equation sums up the interaction infinitely many times from a perturbative view point, the resulting Feynman graph resembles the form of a ladder (or rainbow), hence the name of this approximation.
In QED the ladder approximation caused problems with crossing symmetry and gauge invariance, indicating that crossed-ladder terms need to be included. In quantum chromodynamics (QCD) this approximation is frequently used phenomenologically to calculate hadron masses and structure in terms of Bethe–Salpeter amplitudes and Faddeev amplitudes; a well-known Ansatz was proposed by Maris and Tandy. Such an Ansatz for the dressed quark-gluon vertex within the rainbow-ladder truncation respects chiral symmetry and its dynamical breaking, and is therefore an important model of the strong nuclear interaction. As an example, the structure of pions can be obtained by solving the Bethe–Salpeter equation in Euclidean space with the Maris–Tandy Ansatz.
Normalization
As for solutions of any homogeneous equation, that of the Bethe–Salpeter equation is determined up to a numerical factor. This factor has to be specified by a certain normalization condition. For the Bethe–Salpeter amplitudes this is usually done by demanding probability conservation (similar to the normalization of the quantum mechanical wave function), which corresponds to the equation
Normalizations to the charge and energy-momentum tensor of the bound state lead to the same equation. In the rainbow-ladder approximation this interaction kernel does not depend on the total momentum of the Bethe–Salpeter amplitude, in which case the second term of the normalization condition vanishes. An alternative normalization based on the eigenvalue of the corresponding linear operator was derived by Nakanishi.
Solution in the Minkowski space
The Bethe–Salpeter equation applies to all kinematic regions of the Bethe–Salpeter amplitude. Consequently it also determines the amplitude in regions where the function is not continuous. Such singularities are usually located where the constituent momentum is timelike, which is not directly accessible from Euclidean-space solutions of the equation. Instead, methods have been developed to solve this type of integral equation directly in the timelike region. In the case of scalar bound states formed through a scalar-particle exchange in the rainbow-ladder truncation, the Bethe–Salpeter equation in Minkowski space can be solved with the assistance of the Nakanishi integral representation.
See also
ABINIT
Araki–Sucher correction
Breit equation
Lippmann–Schwinger equation
Schwinger–Dyson equation
Two-body Dirac equations
YAMBO code
References
Bibliography
Many modern quantum field theory textbooks and a few articles provide pedagogical accounts for the Bethe–Salpeter equation's context and uses. See:
Still a good introduction is given by the review article of Nakanishi
For historical aspects, see
External links to codes where the Bethe-Salpeter equation is coded
Yambo - plane-wave pseudopotential
BerkeleyGW – plane-wave pseudopotential
ExC - plane-wave pseudopotential
Fiesta - Gaussian all-electron
Abinit - plane-wave pseudopotential
VASP - plane-wave pseudopotential
For a more comprehensive list of first principles codes see here: List of quantum chemistry and solid-state physics software
Eponymous equations of physics
Quantum field theory
Quantum mechanics | Bethe–Salpeter equation | [
"Physics"
] | 1,628 | [
"Quantum field theory",
"Equations of physics",
"Theoretical physics",
"Eponymous equations of physics",
"Quantum mechanics"
] |
10,495,039 | https://en.wikipedia.org/wiki/Ekanite | Ekanite is an uncommon silicate mineral with chemical formula or . It is a member of the steacyite group. It is among the few gemstones that are naturally radioactive. Most ekanite is mined in Sri Lanka, although deposits also occur in Russia and North America. Clear and well-colored stones are rare as the radioactivity tends to degrade the crystal matrix over time in a process known as metamictization.
The type locality is Eheliyagoda, Ratnapura District, Sabaragamuwa Province, Sri Lanka, where it was first described in 1955 by F. L. D. Ekanayake, a Sri Lankan scientist, and it is named after him.
In Sri Lanka the mineral specimens occur as detrital pebbles. In the Tombstone Mountains of Yukon, Canada, the mineral is found in a syenitic glacial erratic boulder. In the Alban Hills of Italy it is found in volcanic ejecta.
Ekanite can be uranium-lead dated with ekanite from Okkampitiya in the Monaragala District of Sri Lanka being dated to around 560 million years old.
References
Calcium minerals
Radioactive gemstones
Gemstones
Phyllosilicates
Radioactive minerals
Ratnapura District
Tetragonal minerals
Minerals in space group 97
Thorium minerals | Ekanite | [
"Physics"
] | 267 | [
"Materials",
"Gemstones",
"Matter"
] |
10,499,606 | https://en.wikipedia.org/wiki/Slater%27s%20rules | In quantum chemistry, Slater's rules provide numerical values for the effective nuclear charge in a many-electron atom. Each electron is said to experience less than the actual nuclear charge, because of shielding or screening by the other electrons. For each electron in an atom, Slater's rules provide a value for the screening constant, denoted by s, S, or σ, which relates the effective and actual nuclear charges as
The rules were devised semi-empirically by John C. Slater and published in 1930.
Revised values of screening constants based on computations of atomic structure by the Hartree–Fock method were obtained by Enrico Clementi et al. in the 1960s.
Rules
Firstly, the electrons are arranged into a sequence of groups in order of increasing principal quantum number n, and for equal n in order of increasing azimuthal quantum number l, except that s- and p- orbitals are kept together.
[1s] [2s,2p] [3s,3p] [3d] [4s,4p] [4d] [4f] [5s, 5p] [5d] etc.
Each group is given a different shielding constant which depends upon the number and types of electrons in those groups preceding it.
The shielding constant for each group is formed as the sum of the following contributions:
An amount of 0.35 from each other electron within the same group except for the [1s] group, where the other electron contributes only 0.30.
If the group is of the [ns, np] type, an amount of 0.85 from each electron with principal quantum number (n–1), and an amount of 1.00 for each electron with principal quantum number (n–2) or less.
If the group is of the [d] or [f], type, an amount of 1.00 for each electron "closer" to the nucleus than the group. This includes both i) electrons with a smaller principal quantum number than n and ii) electrons with principal quantum number n and a smaller azimuthal quantum number l.
In tabular form, the rules are summarized as:
Example
An example provided in Slater's original paper is for the iron atom, which has nuclear charge 26 and electronic configuration 1s22s22p63s23p63d64s2. The screening constant, and subsequently the shielded (or effective) nuclear charge, for each electron is deduced as:
4s: s = 0.35 × 1 + 0.85 × 14 + 1.00 × 10 = 22.25, so Zeff = 26 − 22.25 = 3.75
3d: s = 0.35 × 5 + 1.00 × 18 = 19.75, so Zeff = 6.25
3s, 3p: s = 0.35 × 7 + 0.85 × 8 + 1.00 × 2 = 11.25, so Zeff = 14.75
2s, 2p: s = 0.35 × 7 + 0.85 × 2 = 4.15, so Zeff = 21.85
1s: s = 0.30 × 1 = 0.30, so Zeff = 25.70
Note that the effective nuclear charge is calculated by subtracting the screening constant from the atomic number, 26.
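A short, hedged sketch of these rules in code follows; it reproduces the iron values above. The group representation and function names are illustrative choices, not anything from Slater's paper.

```python
# Minimal sketch of Slater's rules as listed above, reproducing the iron
# example (Z = 26). Each group: (label, n, kind, electron count), innermost first.

IRON = [("1s", 1, "sp", 2), ("2s,2p", 2, "sp", 8), ("3s,3p", 3, "sp", 8),
        ("3d", 3, "d", 6), ("4s", 4, "sp", 2)]
Z = 26

def screening(groups, i):
    """Screening constant s for one electron in groups[i]."""
    _, n, kind, count = groups[i]
    s = (count - 1) * (0.30 if n == 1 else 0.35)        # same-group electrons
    for _, nj, _, cj in groups[:i]:                     # groups closer to the nucleus
        if kind == "sp":
            s += cj * (0.85 if nj == n - 1 else 1.00)   # (n-1) shell counts 0.85
        else:                                           # [d] or [f] group
            s += cj * 1.00                              # every inner electron: 1.00
    return s

for i, (label, *_) in enumerate(IRON):
    s = screening(IRON, i)
    print(f"{label}: s = {s:.2f}, Z_eff = {Z - s:.2f}")
# Expected: 1s 25.70, 2s,2p 21.85, 3s,3p 14.75, 3d 6.25, 4s 3.75
```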
Motivation
The rules were developed by John C. Slater in an attempt to construct simple analytic expressions for the atomic orbital of any electron in an atom. Specifically, for each electron in an atom, Slater wished to determine shielding constants (s) and "effective" quantum numbers (n*) such that
$$\psi_{n^*s}(r) = r^{\,n^*-1}\exp\!\left(-\frac{(Z-s)\,r}{n^*}\right)$$
provides a reasonable approximation to a single-electron wave function. Slater defined n* by the rule that for n = 1, 2, 3, 4, 5, 6 respectively; n* = 1, 2, 3, 3.7, 4.0 and 4.2. This was an arbitrary adjustment to fit calculated atomic energies to experimental data.
Such a form was inspired by the known wave function spectrum of hydrogen-like atoms, which have the radial component
$$R_{nl}(r) = r^{\,l}\,f_{nl}(r)\,e^{-\frac{Z r}{n}},$$
where n is the (true) principal quantum number, l the azimuthal quantum number, and fnl(r) is an oscillatory polynomial with n - l - 1 nodes. Slater argued on the basis of previous calculations by Clarence Zener that the presence of radial nodes was not required to obtain a reasonable approximation. He also noted that in the asymptotic limit (far away from the nucleus), his approximate form coincides with the exact hydrogen-like wave function in the presence of a nuclear charge of Z-s and in the state with a principal quantum number n equal to his effective quantum number n*.
Slater then argued, again based on the work of Zener, that the total energy of an N-electron atom with a wavefunction constructed from orbitals of his form should be well approximated as
$$E = -\sum_{i=1}^{N}\left(\frac{Z - s_i}{n_i^*}\right)^2$$
in Rydberg units.
Using this expression for the total energy of an atom (or ion) as a function of the shielding constants and effective quantum numbers, Slater was able to compose rules such that spectral energies calculated agree reasonably well with experimental values for a wide range of atoms. Using the values in the iron example above, the total energy of a neutral iron atom using this method is −2497.2 Ry, while the energy of an excited Fe+ cation lacking a single 1s electron is −1964.6 Ry. The difference, 532.6 Ry, can be compared to the experimental (circa 1930) K absorption limit of 524.0 Ry.
References
Atomic physics
Chemical bonding
Quantum chemistry | Slater's rules | [
"Physics",
"Chemistry",
"Materials_science"
] | 989 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Condensed matter physics",
" molecular",
"Atomic physics",
"Atomic",
"nan",
"Chemical bonding",
" and optical physics"
] |
10,500,138 | https://en.wikipedia.org/wiki/CD4%2B%20T%20cells%20and%20antitumor%20immunity | Understanding of the antitumor immunity role of CD4+ T cells has grown substantially since the late 1990s. CD4+ T cells (mature T-helper cells) play an important role in modulating immune responses to pathogens and tumor cells, and are important in orchestrating overall immune responses.
Immunosurveillance and immunoediting
This discovery furthered the development of a previously hypothesized theory, the immunosurveillance theory. The immunosurveillance theory suggests that the immune system routinely patrols the cells of the body, and, upon recognition of a cell, or group of cells, that has become cancerous, it will attempt to destroy them, thus preventing the growth of some tumors. (Burnet, 1970) More recent evidence has suggested that immunosurveillance is only part of a larger role the immune system plays in fighting cancer. Remodeling of this theory has led to the progression of the immunoediting theory, in which there are 3 phases, Elimination, Equilibrium and Escape.
Elimination phase
As mentioned, the elimination phase is synonymous with the classic immunosurveillance theory.
In 2001, it was shown that mice deficient in RAG-2 (Recombinase Activator Gene 2) were far less capable of preventing MCA induced tumours than were wild type mice. (Shankaran et al., 2001, Bui and Schreiber, 2007) RAG proteins are necessary for the recombination events necessary to produce TCRs and Igs, and as such RAG-2 deficient mice are incapable of producing functional T, B or NK cells. RAG-2 deficient mice were chosen over other methods of inducting immunodeficiency (such as SCID mice) as an absence of these proteins does not affect DNA repair mechanisms, which becomes important when dealing with cancer, as DNA repair problems can lead to cancers themselves. This experiment provides clear evidence that the immune system does, in fact, play a role in eradication of tumor cells.
Further knock out experiments showed important roles of αβ T cells, γδ T cells and NK cells in tumour immunity (Girardi et al. 2001, Smyth et al., 2001)
Another experiment involving interferon gamma (IFNγ−/−) showed that these mice are more likely to develop certain types of cancers as well, and suggests a role of CD4+ T cells in tumor immunity, which produce large amounts of IFNγ (Street et al., 2002)
Perforin deficient mice were also shown to have a reduced ability to ward off MCA induced cancers, suggesting an important role of CD8+ T cells. (Street et al. 2001) Perforin is a protein produced by CD8+ T cells, which plays a central role in the cytotoxic killing mechanisms by providing entry of degradative granzymes into an infected cell. (Abbas and Lichtman, 2005)
Finally, the innate immune system has also been associated with immunosurveillance (Dunn et al., 2004).
Equilibrium phase
The equilibrium phase of the immunoediting theory is characterized by the continued existence of the tumour, but little growth. Due to the extremely high rate of mutation of cancer cells, it is probable that many will escape the elimination phase, and progress into the equilibrium phase. There is currently little evidence to support the existence of an equilibrium phase, aside from the observation that cancers have been shown to lie dormant, i.e. to go into remission, in a person's body for years before re-emerging again in the final escape phase. It has been noted that tumors that persist in the equilibrium phase show reduced immunogenicity when compared to tumors which have been grown in immunodeficient mice (Shankaran et al., 2001) Three possible outcomes for tumors managing to evade the immune system, and reach the equilibrium phase have been proposed: 1) eventual elimination by the immune system 2) a prolonged or indefinite period of dormancy, or 3) progression into the final escape phase.
Escape phase
As the name implies, the escape phase is characterized by a reduced immunogenicity of the cancer cells, their subsequent evasion of the immune system and their ability to be clinically detected. A number of theories have been proposed to explain this phase of the theory.
Cancer cells, through mutation, may actually have mutations in some of the proteins involved in antigen presentation, and as such, evade an immune response. (Dunn et al., 2004) Tumor cells may, through mutations, often begin producing large quantities of inhibitory cytokines IL-10, or transforming growth factor β (TGF-β) (Khong and Restifo, 2002) thereby suppressing the immune system, allowing for large-scale proliferation (Salazar-Onfray et al., 2007). Also, it has been observed that some cancer patients exhibit higher than normal levels of CD4+/CD25+ T cells, a subset of T cells often called regulatory T cells, for their known immunosuppressive actions. These T cells produce high levels of IL-10 and TGF-β, thereby suppressing the immune system and allowing for evasion by the tumor (Shimizu et al., 1999).
Tumour antigens
Tumour antigens are those expressed by tumor cells, and recognizable as being different from self cells. Most currently classified tumor antigens are endogenously synthesized, and as such are presented on MHC class I molecules to CD8+ T cells. Such antigens include products of oncogenes or tumor suppressor genes, mutants of other cellular genes, products of genes that are normally silenced, over-expressed gene products, products of oncogenic viruses, oncofetal antigens (proteins normally expressed only during development of the fetus), glycolipids and glycoproteins. Detailed explanations of these tumor antigens can be found in Abbas and Lichtman, 2005. MHC class II restricted antigens currently remain somewhat obscure. Development of new techniques has been successful in identifying some of these antigens; however, additional research is required. (Wang, 2003)
Antitumour immunity
Historically, much more attention and funding has been devoted to the role of CD8+ T cells in antitumor immunity, rather than to CD4+ T cells. This can be attributed to a number of things; CD4+ T cells respond only to presentation of antigens by MHC class II, however, most cells express only MHC class I; second, CD8+ T cells, upon being presented with antigen by MHC class I, can directly kill the cancerous cell, through mechanisms which will not be discussed in this article, but which have been well categorized; (See Abbas and Lichtman, 2005) finally, there is simply a more widespread understanding and knowledge of MHC class I tumor antigens, while MHC class II antigens remain somewhat obscure.(Pardol and Toplain, 1998).
It was believed that CD4+ T cells were not involved directly in antitumour immunity, but rather functioned simply in the priming of CD8+ T cells, through activation of antigen-presenting cells (APCs) and increased antigen presentation on MHC class I, as well as secretion of excitatory cytokines such as IL-2 (Pardol and Toplain, 1998, Kalams and Walker, 1998, Wang 2001).
Controversial role in antitumor immunity
The role of CD4+ T cells in antitumor immunity is controversial. It was suggested that CD4+ T cells can have a direct role in antitumor immunity through direct recognition of tumor antigens presented on the surface of tumor cells in association with MHC class II molecules. Of note, results from recent reports suggest that direct recognition of tumors from tumor-antigen specific CD4+ T cells might not be always beneficial. For example, it was recently shown that CD4+ T cells primarily produce TNF after recognition of tumor-antigens in melanoma. TNF may in turn increase local immunosuppression and impair the effector functions of CD8 T cells (Donia M. et al., 2015).
Th1 and Th2 CD4+T cells
The same series of experiments, examining the role of CD4+ cells, showed that high levels of IL-4 and IFNγ were present at the site of the tumor, following vaccination, and subsequent tumour challenge. (Hung, 1998) IL-4 is the predominant cytokine produced by Th2 cells, while IFNγ is the predominant Th1 cytokine. Earlier work has shown that these two cytokines inhibit the production of each other by inhibiting differentiation down the opposite Th pathway, in normal microbial infections (Abbas and Lichtman, 2005), yet here they were seen at nearly equal levels. Even more interesting was the fact that both these cytokines were required for maximal tumor immunity, and that mice deficient in either showed greatly reduced antitumor immunity. IFN-γ null mice showed virtually no immunity, while IL-4 null mice showed a 50% reduction when compared to immunised wild type mice.
The reduction of immunity in IL-4 deficient mice, has been attributed to a decrease in eosinophil production. In mice deficient in IL-5, the cytokine responsible for differentiation of myeloid progenitor cells into eosinophils, less eosinophils are seen at the site of tumour challenge, which is to be expected. (Hung, 1998) These mice also show reduced antitumor immunity, suggesting that IL-4 deficient mice, which would produce less IL-5, and subsequently have reduced eosinophil levels, elicit their effect through eosinophils.
Th-1 activity in tumor immunity
Th1 cells are one of the two main Th cell polarizations first identified. Th1 differentiation is IL-12 dependent, and IFN-γ is the signature cytokine of cells of a Th1 lineage.
Th1 cell anti-tumor activity is complex and includes many mechanisms. Th1 cells are indirectly responsible for activating tumor-suppressing CTLs by activating the antigen-presenting cells which then present antigen to and activate the CTL.
IFN-γ produced by Th1 cells activates macrophages, increasing phagocytosis of pathogen and tumor cells. Activated macrophages produce IL-12, and since IL-12 promotes Th1 cell differentiation, this forms a tumor-suppressing feedback loop.
Th1 and NK cells both contribute to killing of tumor cells via the TNF-related apoptosis-inducing ligand (TRAIL) pathway. NK cells produce IFN-γ and are also activated by IL-12, creating another tumor-suppressing feedback loop.
Th-2 activity in tumor immunity
Th2 cells are the other Th cell polarization initially defined. Th2 differentiation is dependent on the presence of IL-4 and the absence of IL-12, and signature cytokines of Th2 cells include IL-4, IL-5, and IL-13.
Th2 mediated anti-tumor activity primarily involves recruitment of eosinophils to the tumor environment via IL-4 and IL-13. Anti-tumor eosinophil activity includes attraction of tumor-specific CTLs, activation of macrophages, and vascularization of the tumor stroma.
However, Th2 polarization as quantified by IL-5 production has been associated with tumor proliferation, complicating the role of Th2 cells in tumor immunity.
Th-17 activity in tumor immunity
Th17 are a recently identified subset of Th cells that are primarily involved in promoting inflammatory responses. Th17 differentiation is induced by TGF-β and IL-6, and signature cytokines of Th17 cells include IL-17A and IL-17F.
The mechanisms of Th17 cell activity in the tumor microenvironment are not well understood. Th17 cells can orchestrate chronic inflammatory responses, which tend to promote tumor growth and survival. In addition, some tumors have been shown to express high levels of IL-6 & TGF-β, which would reinforce a Th17 polarization, creating a tumor-promoting feedback loop.
Th17 cells have also been found to have the capacity to differentiate into IFN-γ secreting cells, thus suppressing tumor growth via IFN-γ-related pathways.
Treg activity in tumor immunity
Regulatory Th cells (Tregs) are another recently defined subset of Th cells. Their main functions involve maintaining self-tolerance and immune homeostasis. Treg differentiation is induced by expression of FoxP3 transcription factor, and Tregs secrete a variety of immunosuppressive cytokines, such as TGF-β. Tregs are detrimental to anti-tumor immune responses, as the secretion of TGF-β and other suppressive cytokines dampens immunity from CTLs, Th cells and APCs.
IFN-γ
A number of mechanisms have been proposed to explain the role of IFN-γ in antitumor immunity. In conjunction with TNF (tumor necrosis factor), IFN-γ can have direct cytotoxic effects on tumor cells (Fransen et al., 1986). Increased MHC expression, as a direct result of increased IFN-γ secretion, may result in increased presentation to T cells. (Abbas and Lichtman, 2005) It has also been shown to be involved in the expression of iNOS as well as ROIs.
iNOS (inducible nitric oxide synthase) is an enzyme responsible for the production of NO, an important molecule used by macrophages to kill infected cells. (Abbas and Lichtman, 2005) A decrease in the levels of iNOS (as seen through immunohistochemical staining) has been observed in IFNγ−/− mice, although levels of macrophages at the site of tumor challenge are similar to those of wild type mice. iNOS−/− mice also show decreased immunity, indicating a direct role of CD4+-stimulated iNOS production in protection against tumours. (Hung et al., 1998) Similar results have been seen in knockout mice deficient in gp91phox, a protein involved in the production of ROIs (reactive oxygen intermediates), which are also an important weapon utilized by macrophages to elicit cell death.
In 2000, Qin and Blankenstein, showed that IFNγ production was necessary for CD4+ T cell-mediated antitumor immunity. A series of experiments showed that it was essential for nonhematopoietic cells at the site of challenge, to express functional IFNγ receptors. Further experiments showed that IFN-γ was responsible for inhibition of tumor induced angiogenesis and could prevent tumor growth through this method. (Qin and Blankenstein, 2000)
MHC class II and immunotherapy
Many of the aforementioned mechanisms by which CD4+ cells play a role in tumor immunity are dependent on phagocytosis of tumors by APCs and subsequent presentation on MHC class II. It is rare that tumor cells will express sufficient MHC class II to directly activate a CD4+ T cell. As such, at least two approaches have been investigated to enhance the activation of CD4+ T cells. The simplest approach involves upregulation of adhesion molecules, thus extending the presentation of antigens by APC. (Chamuleau et al., 2006) A second approach involves increasing the expression of MHC class II in tumor cells. This technique has not been used in vivo, but rather involves injection of tumor cells which have been transfected to express MHC class II molecules, in addition to suppression of the invariant chain (Ii, see below) through antisense technology. (Qiu, 1999) Mice vaccinated with irradiated strains of these cells show a greater immune response to subsequent challenge by the same tumor, without the upregulation of MHC class II, than do mice vaccinated with irradiated, but otherwise unaltered tumor cells. These findings signify a promising area of future research in the development of cancer vaccines.
MHC class I and class II pathways
The down regulation of the invariant chain (Ii) becomes important when considering the two pathways by which antigens are presented by cells. Most recognized tumor antigens are endogenously produced, altered gene products of mutated cells. These antigens, however, are normally only presented by MHC class I molecules, to CD8+ T cells, and not expressed on the cell surface bound to MHC class II molecules, which is required for presentation to CD4+ T cells. Research has shown that the two pathways by which antigens are presented cross over in the endoplasmic reticulum of the cell, in which MHC class I, MHC class II and endogenously synthesized antigenic proteins are all present. These antigen proteins are prevented from binding to MHC class II molecules by a protein known as the invariant chain or Ii, which, in a normal cell, remains bound to the MHC class II molecule until leaving the ER. Down regulation of this Ii, using antisense technology, has yielded promising results in allowing MHC class I tumor antigens to be expressed on MHC class II molecules at the cell surface (Qui, 1999).
Upregulation of MHC class II
Due to the extremely polymorphic nature of MHC class II molecules, simple transfection of these proteins does not provide a practical method for use as a cancer vaccine. (Chamuleau et al., 2006) Alternatively, two other methods have been examined to upregulate the expression of these proteins on MHC class II− cells. The first is treatment with IFNγ, which can lead to increased MHC class II expression. (Trinchieri and Perussia, 1985, Fransen L, 1986) A second, more effective approach involves targeting the genes responsible for the synthesis of these proteins, the CIITA or class II transcription activator. Selective gene targeting of CIITA has been used ex vivo to allow MHC class II− cells to become MHC class II+ (Xu et al., 2000). Upregulation of CIITA also causes an increased expression of Ii, and as such, must be used in conjunction with the antisense techniques referred to earlier (Qiu, 1999). In some forms of cancer, such as acute myeloid leukemia (AML), the cells may already be MHC class II+, but because of mutation, express low levels on their surface. It is believed that low levels are seen as a direct result of methylation of the CIITA promoter genes (Morimoto et al., 2004, Chamuleau et al., 2006) and that demethylation of these promoters may restore MHC class II expression (Chamuleau et al., 2006).
See also
List of distinct cell types in the adult human body
References
Abbas, A.K, and Lichtman, 2005. A.H.Cellular and Molecular Immunology. Elsevier Saunders, Philadelphia.
Bui, Jack D. and Schreiber, Robert R., 2001. Cancer immunosurveillance, immunoediting and inflammation: independent or inderdependent process? Current Opinion in Immunology 19, pp. 203–208
Burnet, F.M., 1970. The concept of immunological surveillance. Prog. Exp. Tumor Res. 13, pp. 1–27
Chamuleau, M., Ossenkopple, G., and Loosdrecht, A., 2006. MHC class II molecules in tumor immunology: prognostic marker and target for immune modulation. Immunobiology 211:6-8, pp. 616–225.
Donia, M. et al., 2015. Aberrant expression of MHC Class II in melanoma attracts inflammatory tumor specific CD4+ T cells which dampen CD8+ T cell antitumor reactivity. Cancer Res 75(18):3747-59, doi: 10.1158/0008-5472.CAN-14-2956
Dranoff, G., Jaffee, E., Lazenby, A., Golumbek, P., Levitsky, H., Brose, K., Jackson, V., Hamada, H., Pardoll, D. and Mulligan, R., 1993. Vaccination with irradiated tumor cells engineered to secrete murine granulocyte-macrophage colony-stimulating factor stimulates potent, specific, and long-lasting anti-tumor immunity. Proc. Natl. Acad. Sci. USA 90, pp. 3539–3543.
Dunn, Gavin P., Old, Lloyd J. and Schreiber, Robert D., 2004. The immunobiology of cancer immunosurveillance and immunoediting. Immunity 21:2, pp. 137–148
Fransen, L., Van der Heyden, J., Ruysschaert, R and Fiers, W., 1986 Recombinant tumor necrosis factor: its effect and its synergism with interferon-gamma on a variety of normal and transformed human cell lines. Eur. J. Cancer Clin. Oncol. 22, pp. 419–426.
Girardi, M., Oppenheim, D.E., Steele, C.R., Lewis, J.M., Glusac, E., Filler, R., Hobby, P., Sutton, B., Tigelaar, R.E. and Hayday, A.C., 2001. Regulation of cutaneous malignancy by γδ T cells. Science 294, pp. 605–609
Hung, K et al., 1998. The central role of CD4+ T cells in the antitumor immune response. J. Exp. Med. 188, pp. 2357–2368.
Kalams, Spyros A. and Walker, Bruce D., 1998. The critical need for CD4 help in maintaining effective cytotoxic T lymphocyte Responses. J. Exp. Med. 188:12, pp. 2199–2204.
Khong, H.T. and Restifo, N.P., 2002. Natural selection of tumor variants in the generation of “tumor escape” phenotypes. Nat. Immunol. 3, pp. 999–1005.
Morimoto et al., 2004 Y. Morimoto, M. Toyota, A. Satoh, M. Murai, H. Mita, H. Suzuki, Y. Takamura, H. Ikeda, T. Ishida, N. Sato, T. Tokino and K. Imai, Inactivation of class II transactivator by DNA methylation and histone deacetylation associated with absence of HLA-DR induction by interferon-gamma in haematopoietic tumour cells. Br. J. Cancer 90, pp. 844–852.
Old, L.J. and Boyse, E.A., 1964. Immunology of experimental tumors. Annu. Rev. Med. 15, pp. 167–186.
Pardoll, Drew M and Toplain, Suzanne L., 1998. The role of CD4+ T cell responses in antitumor immunity. Current Opinion in Immunology 10, pp. 588–594
Qin, Z and Blankenstein, T., 2000. CD4+ T cell-mediated tumor rejection involves inhibition of angiogenesis that is dependent on IFNγ receptor expression on nonhematopoietic cells. Immunity 12:6, pp. 677–686
Qiu et al., 1999 G. Qiu, J. Goodchild, R.E. Humphreys and M. Xu, Cancer immunotherapy by antisense suppression of Ii protein in MHC-class-II-positive tumor cells. Cancer Immunol. Immunother. 48, pp. 499–506
Salazar-Onfray, Flavio., López, Mercedes N. and Mendoza-Naranjo, Ariadna., 2007. Paradoxical effects of cytokines in tumor immune surveillance and tumor immune escape. Cytokine and Growth Factor Reviews 18, pp. 171–182
Shimizu, J., Yamazaki, S. and Sakaguchi, S., 1999. Induction of tumor immunity by removing CD25+CD4+ T cells: a common basis between tumor immunity and autoimmunity. J. Immunol. 163, pp. 5211–5218.
Shankaran, V., Ikeda, H., Bruce, A.T., White, J.M., Swanson, P.E., Old, L.J. and Schreiber, R.D., 2001. IFNγ and lymphocytes prevent primary tumor development and shape tumor immunogenicity. Nature 410, pp. 1107–1111.
Smyth, M.J., Crowe, N.Y. and Godfrey, D.I., 2001. NK cells and NKT cells collaborate in host protection from methylcholanthrene-induced fibrosarcoma. Int. Immunol. 13, pp. 459–463
Street, S.E., Cretney, E. and Smyth, M.J., 2001. Perforin and interferon-γ activities independently control tumor initiation, growth, and metastasis. Blood 97, pp. 192–197.
Street, S.E., Trapani, J.A., MacGregor, D. and Smyth, M.J., 2002. Suppression of lymphoma and epithelial malignancies effected by interferon γ. J. Exp. Med. 196, pp. 129–134.
Trinchieri, G. and Perussia, B., 1985. Immune interferon: a pleiotropic lymphokine with multiple effects. Immunology Today 6:4, pp. 131–136
Wang, Rong-Fu., 2001. The role of MHC class II-restricted tumor antigens and CD4+ T cells in antitumor immunity. Trends in Immunology 22:5, pp. 269–276
Wang, Rong-Fu., 2003. Identification of MHC class II-restricted tumor antigens recognized by CD4+ T cells. Methods 29:3, pp. 227–235
Xu, M., Qiu, G., Jiang, Z., Hofe, E. and Humphreys, R., 2000. Genetic modulation of tumor antigen presentation. Methods in Biotechnology 18:4, pp. 167–172
T cells
Immunology
Human cells
Tumor | CD4+ T cells and antitumor immunity | [
"Biology"
] | 5,680 | [
"Immunology"
] |
19,531,744 | https://en.wikipedia.org/wiki/Brinelling | Brinelling is the permanent indentation of a hard surface. It is named after the Brinell scale of hardness, in which a small ball is pushed against a hard surface at a preset level of force, and the depth and diameter of the mark indicates the Brinell hardness of the surface. Brinelling is permanent plastic deformation of a surface, and usually occurs while two surfaces in contact are stationary (such as rolling elements and the raceway of a bearing) and the material yield strength has been exceeded.
Brinelling is undesirable, as the parts often mate with other parts in very close proximity. The very small indentations can quickly lead to improper operation, such as chattering or excess vibration, which in turn can accelerate other forms of wear, such as spalling and ultimately, failure of the bearing.
Introduction
Brinelling is a material surface failure caused by Hertz contact stress that exceeds the material limit. It usually occurs in situations where a significant load force is distributed over a relatively small surface area. Brinelling typically results from a heavy or repeated impact load, either while stopped or during rotation, though it can also be caused by just one application of a force greater than the material limit.
Brinelling can be caused by a heavy load resting on a stationary bearing for an extended length of time. The result is a permanent dent or "brinell mark". The brinell marks will often appear in evenly spaced patterns along the bearing races, resembling the primary elements of the bearing, such as rows of indented lines for needle or roller bearings or rounded indentations in ball bearings. It is a common cause of roller bearing failures, and loss of preload in bolted joints when a hardened washer is not used. For example, brinelling occurs in casters when the ball bearings within the swivel head produce grooves in the hard cap, thus degrading performance by increasing the required swivel force.
Avoiding brinelling damage
Engineers can use the Brinell hardness of materials in their calculations to avoid this mode of failure. A rolling element bearing's static load rating is defined to avoid this failure type. Increasing the number of elements can provide better distribution of the load, so bearings intended for a large load may have many balls, or use needles instead. This decreases the chances of brinelling, but increases friction and other factors. However, although roller and ball bearings work well for radial and thrust loading, they are often prone to brinelling when very high impact loading, lateral loading, or vibration are experienced. Babbitt bearings or bronze bushings are often used instead of roller bearings in applications where such loads exist, such as in automotive crankshafts or pulley sheaves, to decrease the possibility of brinelling by distributing the force over a very large surface area.
A common cause of brinelling is the use of improper installation procedures. Brinelling often occurs when pressing bearings into holes or onto shafts. Care must usually be taken to ensure that pressure is applied to the proper bearing race to avoid transferring the pressure from one race to the other through the balls or rollers. If pressing force is applied to the wrong race, brinelling can occur to either or both of the races. The act of pressing or clamping can also leave brinell marks, especially if the vise or press has serrated jaws or roughened surfaces. Flat pressing plates are often used in the pressing of bearings, while soft copper, brass, or aluminum jaw covers are often used in vises to help avoid brinell marks from being forced into the workpiece.
False brinelling
A similar-looking kind of damage is called false brinelling and is caused by fretting wear. Fretting wear occurs when localized wear-marks develop in evenly spaced patterns, with raised or unworn portions in between, like frets on a guitar. False brinelling occurs in two types: stationary and by precession.
Stationary false-brinelling occurs without any rotational motion in the bearing. This occurs when contacting bodies vibrate against each other in the presence of very small loads, which pushes lubricant out of the contact surface area, all while the bearing assembly cannot move far enough (or rotate far enough) to redistribute the displaced lubricant. The result is a finely polished surface that resembles a brinell mark, but has not permanently deformed either contacting surface. This type of false brinelling usually occurs in bearings during transportation, between the time of manufacture and installation. The polished surfaces are often mistaken for brinelling, although no actual damage to the bearing exists. The false brinelling will disappear after a short break-in period of operation.
Fretting wear can also occur during operation, causing deep indentations. This occurs when small vibrations form in the rotating shaft and become harmonically in sync with the speed of rotation, causing circular oscillations in the shaft. The oscillation causes the shaft to move in precession, and the timing of the rotation speed causes the balls or rollers to contact the races only when they are in similar positions. This forms wear marks caused by contact with the bearings and the races in specific areas, but not in others, leaving an uneven wear-pattern that can become quite deep before failure occurs, resembling brinelling. However, the marks are usually too wide, due to the motion of the bearing, and do not exactly match the shape of the rolling elements, and therefore this type of wear can be differentiated from true brinelling.
References
Tribology
Metallurgy
Mechanical engineering | Brinelling | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,158 | [
"Tribology",
"Applied and interdisciplinary physics",
"Metallurgy",
"Materials science",
"Surface science",
"nan",
"Mechanical engineering"
] |
19,535,600 | https://en.wikipedia.org/wiki/HMMER | HMMER is a free and commonly used software package for sequence analysis written by Sean Eddy. Its general usage is to identify homologous protein or nucleotide sequences, and to perform sequence alignments. It detects homology by comparing a profile-HMM (a Hidden Markov model constructed explicitly for a particular search) to either a single sequence or a database of sequences. Sequences that score significantly better to the profile-HMM compared to a null model are considered to be homologous to the sequences that were used to construct the profile-HMM. Profile-HMMs are constructed from a multiple sequence alignment in the HMMER package using the hmmbuild program. The profile-HMM implementation used in the HMMER software was based on the work of Krogh and colleagues. HMMER is a console utility ported to every major operating system, including different versions of Linux, Windows, and macOS.
HMMER is the core utility that protein family databases such as Pfam and InterPro are based upon. Some other bioinformatics tools such as UGENE also use HMMER.
HMMER3 also makes extensive use of vector instructions to increase computational speed. This work is based upon an earlier publication showing a significant acceleration of the Smith-Waterman algorithm for aligning two sequences.
Profile HMMs
A profile HMM is a variant of an HMM relating specifically to biological sequences. Profile HMMs turn a multiple sequence alignment into a position-specific scoring system, which can be used to align sequences and search databases for remotely homologous sequences. They capitalise on the fact that certain positions in a sequence alignment tend to have biases in which residues are most likely to occur, and are likely to differ in their probability of containing an insertion or a deletion. Capturing this information gives them a better ability to detect true homologs than traditional BLAST-based approaches, which apply the same substitution, insertion and deletion penalties regardless of where in an alignment they occur.
Profile HMMs center around a linear set of match (M) states, with one state corresponding to each consensus column in a sequence alignment. Each M state emits a single residue (amino acid or nucleotide). The probability of emitting a particular residue is determined largely by the frequency at which that residue has been observed in that column of the alignment, but also incorporates prior information on patterns of residues that tend to co-occur in the same columns of sequence alignments. This string of match states emitting amino acids at particular frequencies is analogous to position specific score matrices or weight matrices.
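To make the idea of position-specific emission probabilities concrete, here is a toy, hedged sketch (not HMMER code): it converts the columns of a small, gap-free alignment into per-position residue frequencies with simple +1 pseudocounts. Real profile HMMs, as described below, additionally model insert and delete states and use more sophisticated priors.

```python
from collections import Counter

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"            # the 20 standard amino acids

def match_emissions(alignment):
    """alignment: list of equal-length, gap-free sequences (toy assumption)."""
    profile = []
    for column in zip(*alignment):
        counts = Counter(column)
        total = len(column) + len(ALPHABET)   # +1 pseudocount for every residue
        profile.append({aa: (counts[aa] + 1) / total for aa in ALPHABET})
    return profile

toy_alignment = ["ACDE", "ACDE", "ACEE", "SCDE"]   # made-up example sequences
profile = match_emissions(toy_alignment)
print(round(profile[0]["A"], 3), round(profile[0]["W"], 3))  # A favoured in column 1
```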
A profile HMM takes this modelling of sequence alignments further by modelling insertions and deletions, using I and D states, respectively. D states do not emit a residue, while I states do emit a residue. Multiple I states can occur consecutively, corresponding to multiple residues between consensus columns in an alignment. M, I and D states are connected by state transition probabilities, which also vary by position in the sequence alignment, to reflect the different frequencies of insertions and deletions across sequence alignments.
The HMMER2 and HMMER3 releases used an architecture for building profile HMMs called the Plan 7 architecture, named after the seven states captured by the model. In addition to the three major states (M, I and D), six additional states capture non-homologous flanking sequence in the alignment. These 6 states collectively are important for controlling how sequences are aligned to the model, e.g. whether a sequence can have multiple consecutive hits to the same model (in the case of sequences with multiple instances of the same domain).
Programs in the HMMER package
The HMMER package consists of a collection of programs for performing functions using profile hidden Markov models. The programs include:
Profile HMM building
hmmbuild – construct profile HMMs from multiple sequence alignments
Homology searching
hmmscan – search protein sequences against a profile HMM database
hmmsearch – search profile HMMs against a sequence database
jackhmmer – iteratively search sequences against a protein database
nhmmer – search DNA/RNA queries against a DNA/RNA sequence database
nhmmscan – search nucleotide sequences against a nucleotide profile
phmmer – search protein sequences against a protein database
Other functions
hmmalign – align sequences to a profile HMM
hmmemit – produce sample sequences from a profile HMM
hmmlogo – produce data for an HMM logo from an HMM file
The package contains numerous other specialised functions.
The HMMER web server
In addition to the software package, the HMMER search function is available in the form of a web server. The service facilitates searches across a range of databases, including sequence databases such as UniProt, SwissProt, and the Protein Data Bank, and HMM databases such as Pfam, TIGRFAMs and SUPERFAMILY. The four search types phmmer, hmmsearch, hmmscan and jackhmmer are supported (see Programs). The search function accepts single sequences as well as sequence alignments or profile HMMs.
The search results are accompanied by a report on the taxonomic breakdown, and the domain organisation of the hits. Search results can then be filtered according to either parameter.
The web service is currently run out of the European Bioinformatics Institute (EBI) in the United Kingdom, while development of the algorithm is still performed by Sean Eddy's team in the United States. Major reasons for relocating the web service were to leverage the computing infrastructure at the EBI, and to cross-link HMMER searches with relevant databases that are also maintained by the EBI.
The HMMER3 release
The latest stable release of HMMER is version 3.0. HMMER3 is a complete rewrite of the earlier HMMER2 package, with the aim of improving the speed of profile-HMM searches. Major changes are outlined below:
Improvements in speed
A major aim of the HMMER3 project, started in 2004, was to improve the speed of HMMER searches. While profile HMM-based homology searches were more accurate than BLAST-based approaches, their slower speed limited their applicability. The main performance gain is due to a heuristic filter that finds high-scoring un-gapped matches within database sequences to a query profile. This heuristic results in a computation time comparable to BLAST with little impact on accuracy. Further gains in performance are due to a log-likelihood model that requires no calibration for estimating E-values, and allows the more accurate forward scores to be used for computing the significance of a homologous sequence.
HMMER still lags behind BLAST in speed of DNA-based searches; however, DNA-based searches can be tuned such that an improvement in speed comes at the expense of accuracy.
Improvements in remote homology searching
The major advance in speed was made possible by the development of an approach for calculating the significance of results integrated over a range of possible alignments. In discovering remote homologs, alignments between query and hit proteins are often very uncertain. While most sequence alignment tools calculate match scores using only the best scoring alignment, HMMER3 calculates match scores by integrating across all possible alignments, to account for uncertainty in which alignment is best. HMMER sequence alignments are accompanied by posterior probability annotations, indicating which portions of the alignment have been assigned high confidence and which are more uncertain.
DNA sequence comparison
A major improvement in HMMER3 was the inclusion of DNA/DNA comparison tools. HMMER2 only had functionality to compare protein sequences.
Restriction to local alignments
While HMMER2 could perform local alignment (align a complete model to a subsequence of the target) and global alignment (align a complete model to a complete target sequence), HMMER3 only performs local alignment. This restriction is due to the difficulty in calculating the significance of hits when performing local/global alignments using the new algorithm.
See also
Hidden Markov model
Sequence alignment software
Pfam
UGENE
Several implementations of profile HMM methods and related position-specific scoring matrix methods are available. Some are listed below:
HH-suite
SAM
PSI-BLAST
MMseqs2
PFTOOLS
GENEWISE
PROBE
META-MEME
BLOCKS
GPU-HMMER
DeCypherHMM
References
External links
HMMER3 announcement
A blog posting on HMMER policy on trademark, copyright, patents, and licensing
Bioinformatics software
Free science software
Free software programmed in C
Computational science
Free bioinformatics software | HMMER | [
"Mathematics",
"Biology"
] | 1,722 | [
"Computational science",
"Applied mathematics",
"Bioinformatics",
"Bioinformatics software"
] |
19,539,938 | https://en.wikipedia.org/wiki/Fr%C3%A9chet%20distance | In mathematics, the Fréchet distance is a measure of similarity between curves that takes into account the location and ordering of the points along the curves. It is named after Maurice Fréchet.
Intuitive definition
Imagine a person traversing a finite curved path while walking their dog on a leash, with the dog traversing a separate finite curved path. Each can vary their speed to keep slack in the leash, but neither can move backwards. The Fréchet distance between the two curves is the length of the shortest leash sufficient for both to traverse their separate paths from start to finish. Note that the definition is symmetric with respect to the two curves—the Fréchet distance would be the same if the dog were walking its owner.
Formal definition
Let S be a metric space. A curve A in S is a continuous map from the unit interval into S, i.e., A : [0,1] → S. A reparameterization α of [0,1] is a continuous, non-decreasing surjection α : [0,1] → [0,1].
Let A and B be two given curves in S. Then, the Fréchet distance between A and B is defined as the infimum over all reparameterizations α and β of [0,1] of the maximum over all t ∈ [0,1] of the distance in S between A(α(t)) and B(β(t)). In mathematical notation, the Fréchet distance is
F(A, B) = inf over α, β of the maximum over t ∈ [0,1] of d(A(α(t)), B(β(t))),
where d is the distance function of S.
Informally, we can think of the parameter t as "time". Then, A(α(t)) is the position of the dog and B(β(t)) is the position of the dog's owner at time t (or vice versa). The length of the leash between them at time t is the distance between A(α(t)) and B(β(t)). Taking the infimum over all possible reparametrizations of [0,1] corresponds to choosing the walk along the given paths where the maximum leash length is minimized. The restriction that α and β be non-decreasing means that neither the dog nor its owner can backtrack.
The Fréchet metric takes into account the flow of the two curves because the pairs of points whose distance contributes to the Fréchet distance sweep continuously along their respective curves. This makes the Fréchet distance a better measure of similarity for curves than alternatives, such as the Hausdorff distance, for arbitrary point sets. It is possible for two curves to have small Hausdorff distance but large Fréchet distance.
The Fréchet distance and its variants find application in several problems, from morphing and handwriting recognition to protein structure alignment. Alt and Godau were the first to describe a polynomial-time algorithm to compute the Fréchet distance between two polygonal curves in Euclidean space, based on the principle of parametric search. The running time of their algorithm is O(mn log(mn)) for two polygonal curves with m and n segments.
The free-space diagram
An important tool for calculating the Fréchet distance of two curves is the free-space diagram, which was introduced by Alt and Godau.
The free-space diagram between two curves for a given distance threshold ε is a two-dimensional region in the parameter space that consists of all point pairs on the two curves at distance at most ε:
D_ε(A, B) := {(s, t) ∈ [0,1]² : d(A(s), B(t)) ≤ ε}.
The Fréchet distance is at most ε if and only if the free-space diagram contains a path from the lower left corner to the upper right corner, which is monotone both in the horizontal and in the vertical direction.
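For polygonal curves sampled at their vertices, a discretised version of this decision procedure is easy to sketch. The Python code below is an illustrative approximation only: it checks for a monotone path through a grid of sampled point pairs rather than through the true continuous free-space diagram, and the two point sequences are invented for the example.

import math

def within_eps(p, q, eps):
    return math.dist(p, q) <= eps

def monotone_path_exists(P, Q, eps):
    # free[i][j] is True when P[i] and Q[j] are within distance eps
    free = [[within_eps(p, q, eps) for q in Q] for p in P]
    # reach[i][j]: a monotone path from (0, 0) can reach cell (i, j)
    reach = [[False] * len(Q) for _ in P]
    for i in range(len(P)):
        for j in range(len(Q)):
            if not free[i][j]:
                continue
            if i == 0 and j == 0:
                reach[i][j] = True
            else:
                reach[i][j] = ((i > 0 and reach[i - 1][j]) or
                               (j > 0 and reach[i][j - 1]) or
                               (i > 0 and j > 0 and reach[i - 1][j - 1]))
    return reach[-1][-1]

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(monotone_path_exists(P, Q, eps=1.0))   # True: a leash of length 1 suffices
print(monotone_path_exists(P, Q, eps=0.5))   # False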
As a distance between probability distributions (the FID score)
In addition to measuring the distances between curves, the Fréchet distance can also be used to measure the difference between probability distributions.
For two multivariate Gaussian distributions with means μ_X and μ_Y and covariance matrices Σ_X and Σ_Y, the Fréchet distance between these distributions is given by
d(N(μ_X, Σ_X), N(μ_Y, Σ_Y))² = ‖μ_X − μ_Y‖₂² + tr(Σ_X + Σ_Y − 2(Σ_X Σ_Y)^(1/2)).
This distance is the basis for the Fréchet inception distance (FID) that is used in machine learning to compare images produced by an image generative model with the real images that were used for training.
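A minimal numerical sketch of this Gaussian-to-Gaussian distance is given below; the two example distributions are invented, and scipy's matrix square root is used for the cross-covariance term.

import numpy as np
from scipy.linalg import sqrtm

def frechet_gaussian_distance(mu1, cov1, mu2, cov2):
    # squared Fréchet (2-Wasserstein) distance between two Gaussians:
    # ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2*(cov1 cov2)^(1/2))
    diff = mu1 - mu2
    covmean = sqrtm(cov1 @ cov2)
    covmean = covmean.real          # discard tiny imaginary parts from sqrtm
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

mu1, cov1 = np.array([0.0, 0.0]), np.eye(2)
mu2, cov2 = np.array([1.0, 0.0]), 2.0 * np.eye(2)
print(frechet_gaussian_distance(mu1, cov1, mu2, cov2))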
Variants
The weak Fréchet distance is a variant of the classical Fréchet distance without the requirement that the endpoints move monotonically along their respective curves — the dog and its owner are allowed to backtrack to keep the leash between them short. Alt and Godau describe a simpler algorithm to compute the weak Fréchet distance between polygonal curves, based on computing minimax paths in an associated grid graph.
The discrete Fréchet distance, also called the coupling distance, is an approximation of the Fréchet metric for polygonal curves, defined by Eiter and Mannila. The discrete Fréchet distance considers only positions of the leash where its endpoints are located at vertices of the two polygonal curves and never in the interior of an edge. This approximation unconditionally yields larger values than the corresponding (continuous) Fréchet distance. However, the approximation error is bounded by the largest distance between two adjacent vertices of the polygonal curves. Contrary to common algorithms of the (continuous) Fréchet distance, this algorithm is agnostic of the distance measures induced by the metric space. Its formulation as a dynamic programming problem can be implemented efficiently with a quadratic runtime and a linear memory overhead using only few lines of code.
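A short dynamic-programming sketch of the discrete (coupling) distance is shown below; the recursion follows the description above, while the two example point sequences are invented for illustration.

import math
from functools import lru_cache

def discrete_frechet(P, Q):
    # Eiter–Mannila style recursion over vertex indices of the two curves.
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

P = ((0, 0), (1, 0), (2, 0), (3, 0))
Q = ((0, 1), (1, 2), (2, 1), (3, 1))
print(discrete_frechet(P, Q))   # smallest leash length over all couplings

An iterative variant that keeps only the previous row of the table achieves the linear memory overhead mentioned above.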
When the two curves are embedded in a metric space other than Euclidean space, such as a polyhedral terrain or some Euclidean space with obstacles, the distance between two points on the curves is most naturally defined as the length of the shortest path between them. The leash is required to be a geodesic joining its endpoints. The resulting metric between curves is called the geodesic Fréchet distance. Cook and Wenk describe a polynomial-time algorithm to compute the geodesic Fréchet distance between two polygonal curves in a simple polygon.
If we further require that the leash must move continuously in the ambient metric space, then we obtain the notion of the homotopic Fréchet distance between two curves. The leash cannot switch discontinuously from one position to another — in particular, the leash cannot jump over obstacles, and can sweep over a mountain on a terrain only if it is long enough. The motion of the leash describes a homotopy between the two curves. Chambers et al. describe a polynomial-time algorithm to compute the homotopic Fréchet distance between polygonal curves in the Euclidean plane with obstacles.
Examples
The Fréchet distance between two concentric circles of radius r₁ and r₂ respectively is |r₁ − r₂|.
The longest leash is required when the owner stands still and the dog travels to the opposite side of the circle (r₁ + r₂), and the shortest leash when both owner and dog walk at a constant angular velocity around the circle (|r₁ − r₂|).
Applications
Fréchet distance has been used to study visual hierarchy, a graphic design principle.
See also
Fréchet inception distance
Fréchet mean
References
Further reading
Metric geometry
Distance
Topology
Geometric algorithms | Fréchet distance | [
"Physics",
"Mathematics"
] | 1,348 | [
"Physical quantities",
"Distance",
"Quantity",
"Size",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
250,001 | https://en.wikipedia.org/wiki/Molecular%20clock | The molecular clock is a figurative term for a technique that uses the mutation rate of biomolecules to deduce the time in prehistory when two or more life forms diverged. The biomolecular data used for such calculations are usually nucleotide sequences for DNA, RNA, or amino acid sequences for proteins.
Early discovery and genetic equidistance
The notion of the existence of a so-called "molecular clock" was first attributed to Émile Zuckerkandl and Linus Pauling who, in 1962, noticed that the number of amino acid differences in hemoglobin between different lineages changes roughly linearly with time, as estimated from fossil evidence. They generalized this observation to assert that the rate of evolutionary change of any specified protein was approximately constant over time and over different lineages (known as the molecular clock hypothesis).
The genetic equidistance phenomenon was first noted in 1963 by Emanuel Margoliash, who wrote: "It appears that the number of residue differences between cytochrome c of any two species is mostly conditioned by the time elapsed since the lines of evolution leading to these two species originally diverged. If this is correct, the cytochrome c of all mammals should be equally different from the cytochrome c of all birds. Since fish diverges from the main stem of vertebrate evolution earlier than either birds or mammals, the cytochrome c of both mammals and birds should be equally different from the cytochrome c of fish. Similarly, all vertebrate cytochrome c should be equally different from the yeast protein." For example, the difference between the cytochrome c of a carp and a frog, turtle, chicken, rabbit, and horse is a very constant 13% to 14%. Similarly, the difference between the cytochrome c of a bacterium and yeast, wheat, moth, tuna, pigeon, and horse ranges from 64% to 69%. Together with the work of Emile Zuckerkandl and Linus Pauling, the genetic equidistance result led directly to the formal postulation of the molecular clock hypothesis in the early 1960s.
Similarly, Vincent Sarich and Allan Wilson in 1967 demonstrated that molecular differences among modern primates in albumin proteins showed that approximately constant rates of change had occurred in all the lineages they assessed. The basic logic of their analysis involved recognizing that if one species lineage had evolved more quickly than a sister species lineage since their common ancestor, then the molecular differences between an outgroup (more distantly related) species and the faster-evolving species should be larger (since more molecular changes would have accumulated on that lineage) than the molecular differences between the outgroup species and the slower-evolving species. This method is known as the relative rate test. Sarich and Wilson's paper reported, for example, that human (Homo sapiens) and chimpanzee (Pan troglodytes) albumin immunological cross-reactions suggested they were about equally different from Ceboidea (New World Monkey) species (within experimental error). This meant that they had both accumulated approximately equal changes in albumin since their shared common ancestor. This pattern was also found for all the primate comparisons they tested. When calibrated with the few well-documented fossil branch points (such as no Primate fossils of modern aspect found before the K-T boundary), this led Sarich and Wilson to argue that the human-chimp divergence probably occurred only ~4–6 million years ago.
Relationship with neutral theory
The observation of a clock-like rate of molecular change was originally purely phenomenological. Later, the work of Motoo Kimura developed the neutral theory of molecular evolution, which predicted a molecular clock. Let there be N individuals, and to keep this calculation simple, let the individuals be haploid (i.e. have one copy of each gene). Let the rate of neutral mutations (i.e. mutations with no effect on fitness) in a new individual be μ. The probability that this new mutation will become fixed in the population is then 1/N, since each copy of the gene is as good as any other. Every generation, each individual can have μ new mutations, so there are Nμ new neutral mutations in the population as a whole. That means that each generation, μ new neutral mutations will become fixed. If most changes seen during molecular evolution are neutral, then fixations in a population will accumulate at a clock-rate that is equal to the rate of neutral mutations in an individual.
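Written out, the cancellation of population size in this argument is (restating the reasoning above in LaTeX notation):
\[
\text{neutral substitutions per generation} \;=\; N\mu \times \frac{1}{N} \;=\; \mu .
\]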
Calibration
To use molecular clocks to estimate divergence times, molecular clocks need to be "calibrated". This is because molecular data alone does not contain any information on absolute times. For viral phylogenetics and ancient DNA studies—two areas of evolutionary biology where it is possible to sample sequences over an evolutionary timescale—the dates of the intermediate samples can be used to calibrate the molecular clock. However, most phylogenies require that the molecular clock be calibrated using independent evidence about dates, such as the fossil record. There are two general methods for calibrating the molecular clock using fossils: node calibration and tip calibration.
Node calibration
Sometimes referred to as node dating, node calibration is a method for time-scaling phylogenetic trees by specifying time constraints for one or more nodes in the tree. Early methods of clock calibration only used a single fossil constraint (e.g. non-parametric rate smoothing), but newer methods (BEAST and r8s) allow for the use of multiple fossils to calibrate molecular clocks. The oldest fossil of a clade is used to constrain the minimum possible age for the node representing the most recent common ancestor of the clade. However, due to incomplete fossil preservation and other factors, clades are typically older than their oldest fossils. In order to account for this, nodes are allowed to be older than the minimum constraint in node calibration analyses. However, determining how much older the node is allowed to be is challenging. There are a number of strategies for deriving the maximum bound for the age of a clade including those based on birth-death models, fossil stratigraphic distribution analyses, or taphonomic controls. Alternatively, instead of a maximum and a minimum, a probability density can be used to represent the uncertainty about the age of the clade. These calibration densities can take the shape of standard probability densities (e.g. normal, lognormal, exponential, gamma) that can be used to express the uncertainty associated with divergence time estimates. Determining the shape and parameters of the probability distribution is not trivial, but there are methods that use not only the oldest fossil but a larger sample of the fossil record of clades to estimate calibration densities empirically. Studies have shown that increasing the number of fossil constraints increases the accuracy of divergence time estimation.
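As an illustration of such a calibration density, the sketch below places a lognormal prior on a node age, offset so that the node cannot be younger than its oldest fossil. The fossil age and lognormal parameters are invented for the example and are not from any published calibration.

from scipy import stats

oldest_fossil_age = 66.0      # minimum node age in Myr (hypothetical)
offset_density = stats.lognorm(s=0.5, loc=oldest_fossil_age, scale=10.0)

# Probability that the clade is older than 90 Myr under this calibration
print(1.0 - offset_density.cdf(90.0))
# 95% of the prior mass lies between these node ages
print(offset_density.ppf([0.025, 0.975]))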
Tip calibration
Sometimes referred to as tip dating, tip calibration is a method of molecular clock calibration in which fossils are treated as taxa and placed on the tips of the tree. This is achieved by creating a matrix that includes a molecular dataset for the extant taxa along with a morphological dataset for both the extinct and the extant taxa. Unlike node calibration, this method reconstructs the tree topology and places the fossils simultaneously. Molecular and morphological models work together simultaneously, allowing morphology to inform the placement of fossils. Tip calibration makes use of all relevant fossil taxa during clock calibration, rather than relying on only the oldest fossil of each clade. This method does not rely on the interpretation of negative evidence to infer maximum clade ages.
Expansion calibration
Demographic changes in populations can be detected as fluctuations in historical coalescent effective population size from a sample of extant genetic variation in the population using coalescent theory. Ancient population expansions that are well documented and dated in the geological record can be used to calibrate a rate of molecular evolution in a manner similar to node calibration. However, instead of calibrating from the known age of a node, expansion calibration uses a two-epoch model of constant population size followed by population growth, with the time of transition between epochs being the parameter of interest for calibration. Expansion calibration works at shorter, intraspecific timescales in comparison to node calibration, because expansions can only be detected after the most recent common ancestor of the species in question. Expansion dating has been used to show that molecular clock rates can be inflated at short timescales (< 1 MY) due to incomplete fixation of alleles, as discussed below.
Total evidence dating
This approach to tip calibration goes a step further by simultaneously estimating fossil placement, topology, and the evolutionary timescale. In this method, the age of a fossil can inform its phylogenetic position in addition to morphology. By allowing all aspects of tree reconstruction to occur simultaneously, the risk of biased results is decreased. This approach has been improved upon by pairing it with different models. One current method of molecular clock calibration is total evidence dating paired with the fossilized birth-death (FBD) model and a model of morphological evolution. The FBD model is novel in that it allows for "sampled ancestors", which are fossil taxa that are the direct ancestor of a living taxon or lineage. This allows fossils to be placed on a branch above an extant organism, rather than being confined to the tips.
Methods
Bayesian methods can provide more appropriate estimates of divergence times, especially if large datasets—such as those yielded by phylogenomics—are employed.
Non-constant rate of molecular clock
Sometimes only a single divergence date can be estimated from fossils, with all other dates inferred from that. Other sets of species have abundant fossils available, allowing the hypothesis of constant divergence rates to be tested. DNA sequences experiencing low levels of negative selection showed divergence rates of 0.7–0.8% per Myr in bacteria, mammals, invertebrates, and plants. In the same study, genomic regions experiencing very high negative or purifying selection (encoding rRNA) were considerably slower (1% per 50 Myr).
In addition to such variation in rate with genomic position, since the early 1990s variation among taxa has proven fertile ground for research too, even over comparatively short periods of evolutionary time (for example mockingbirds). Tube-nosed seabirds have molecular clocks that on average run at half speed of many other birds, possibly due to long generation times, and many turtles have a molecular clock running at one-eighth the speed it does in small mammals, or even slower. Effects of small population size are also likely to confound molecular clock analyses. Researchers such as Francisco J. Ayala have more fundamentally challenged the molecular clock hypothesis. According to Ayala's 1999 study, five factors combine to limit the application of molecular clock models:
Changing generation times (If the rate of new mutations depends at least partly on the number of generations rather than the number of years)
Population size (Genetic drift is stronger in small populations, and so more mutations are effectively neutral)
Species-specific differences (due to differing metabolism, ecology, evolutionary history, ...)
Change in function of the protein studied (can be avoided in closely related species by utilizing non-coding DNA sequences or emphasizing silent mutations)
Changes in the intensity of natural selection.
Molecular clock users have developed workaround solutions using a number of statistical approaches including maximum likelihood techniques and later Bayesian modeling. In particular, models that take into account rate variation across lineages have been proposed in order to obtain better estimates of divergence times. These models are called relaxed molecular clocks because they represent an intermediate position between the 'strict' molecular clock hypothesis and Joseph Felsenstein's many-rates model and are made possible through MCMC techniques that explore a weighted range of tree topologies and simultaneously estimate parameters of the chosen substitution model. It must be remembered that divergence dates inferred using a molecular clock are based on statistical inference and not on direct evidence.
The molecular clock runs into particular challenges at very short and very long timescales. At long timescales, the problem is saturation. When enough time has passed, many sites have undergone more than one change, but it is impossible to detect more than one. This means that the observed number of changes is no longer linear with time, but instead flattens out. Even at intermediate genetic distances, with phylogenetic data still sufficient to estimate topology, signal for the overall scale of the tree can be weak under complex likelihood models, leading to highly uncertain molecular clock estimates.
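Saturation can be illustrated with the simplest distance correction. Under the Jukes–Cantor model (used here purely as the simplest illustration; the article does not single out this model), the expected proportion p of observed differences flattens out as the true number of substitutions per site d grows, and the correction d = −(3/4)·ln(1 − 4p/3) diverges as p approaches 3/4. The short Python sketch below (values chosen only for illustration) shows how observed differences increasingly understate true divergence at large distances.

import math

def observed_from_true(d):
    # expected observed proportion of differing sites under Jukes–Cantor
    return 0.75 * (1.0 - math.exp(-4.0 * d / 3.0))

def jc_corrected(p):
    # invert the relation to recover substitutions per site from observed p
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

for d in (0.05, 0.5, 1.0, 2.0):
    p = observed_from_true(d)
    print(f"true d = {d:4.2f}   observed p = {p:5.3f}   corrected = {jc_corrected(p):4.2f}")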
At very short time scales, many differences between samples do not represent fixation of different sequences in the different populations. Instead, they represent alternative alleles that were both present as part of a polymorphism in the common ancestor. The inclusion of differences that have not yet become fixed leads to a potentially dramatic inflation of the apparent rate of the molecular clock at very short timescales.
Uses
The molecular clock technique is an important tool in molecular systematics, macroevolution, and phylogenetic comparative methods. Estimation of the dates of phylogenetic events, including those not documented by fossils, such as the divergences between living taxa has allowed the study of macroevolutionary processes in organisms that had limited fossil records. Phylogenetic comparative methods rely heavily on calibrated phylogenies.
See also
Charles Darwin
Gene orders
Human mitochondrial molecular clock
Mitochondrial Eve and Y-chromosomal Adam
Models of DNA evolution
Molecular evolution
Neutral theory of molecular evolution
Glottochronology
References
Further reading
External links
Allan Wilson and the molecular clock
Molecular clock explanation of the molecular equidistance phenomenon
Date-a-Clade service for the molecular tree of life
Evolutionary biology concepts
Molecular evolution
Molecular genetics
Phylogenetics | Molecular clock | [
"Chemistry",
"Biology"
] | 2,834 | [
"Evolutionary processes",
"Molecular evolution",
"Taxonomy (biology)",
"Evolutionary biology concepts",
"Bioinformatics",
"Molecular genetics",
"Molecular biology",
"Phylogenetics"
] |
250,074 | https://en.wikipedia.org/wiki/Binomial%20options%20pricing%20model | In finance, the binomial options pricing model (BOPM) provides a generalizable numerical method for the valuation of options. Essentially, the model uses a "discrete-time" (lattice based) model of the varying price over time of the underlying financial instrument, addressing cases where the closed-form Black–Scholes formula is wanting.
The binomial model was first proposed by William Sharpe in the 1978 edition of Investments, and formalized by Cox, Ross and Rubinstein in 1979 and by Rendleman and Bartter in that same year.
For binomial trees as applied to fixed income and interest rate derivatives see Lattice model (finance).
Use of the model
The Binomial options pricing model approach has been widely used since it is able to handle a variety of conditions for which other models cannot easily be applied. This is largely because the BOPM is based on the description of an underlying instrument over a period of time rather than a single point. As a consequence, it is used to value American options that are exercisable at any time in a given interval as well as Bermudan options that are exercisable at specific instances of time. Being relatively simple, the model is readily implementable in computer software (including a spreadsheet).
Although computationally slower than the Black–Scholes formula, it is more accurate, particularly for longer-dated options on securities with dividend payments. For these reasons, various versions of the binomial model are widely used by practitioners in the options markets.
For options with several sources of uncertainty (e.g., real options) and for options with complicated features (e.g., Asian options), binomial methods are less practical due to several difficulties, and Monte Carlo option models are commonly used instead. When simulating a small number of time steps Monte Carlo simulation will be more computationally time-consuming than BOPM (cf. Monte Carlo methods in finance). However, the worst-case runtime of BOPM will be O(2^n), where n is the number of time steps in the simulation. Monte Carlo simulations will generally have a polynomial time complexity, and will be faster for large numbers of simulation steps. Monte Carlo simulations are also less susceptible to sampling errors, since binomial techniques use discrete time units. This becomes more true the smaller the discrete units become.
Method
The binomial pricing model traces the evolution of the option's key underlying variables in discrete-time. This is done by means of a binomial lattice (Tree), for a number of time steps between the valuation and expiration dates. Each node in the lattice represents a possible price of the underlying at a given point in time.
Valuation is performed iteratively, starting at each of the final nodes (those that may be reached at the time of expiration), and then working backwards through the tree towards the first node (valuation date). The value computed at each stage is the value of the option at that point in time.
Option valuation using this method is, as described, a three-step process:
Price tree generation,
Calculation of option value at each final node,
Sequential calculation of the option value at each preceding node.
Step 1: Create the binomial price tree
The tree of prices is produced by working forward from valuation date to expiration.
At each step, it is assumed that the underlying instrument will move up or down by a specific factor (u or d) per step of the tree (where, by definition, u ≥ 1 and 0 < d ≤ 1). So, if S is the current price, then in the next period the price will either be S_up = S·u or S_down = S·d.
The up and down factors are calculated using the underlying volatility, σ, and the time duration of a step, t, measured in years (using the day count convention of the underlying instrument). From the condition that the variance of the log of the price is σ²t, we have:
u = e^(σ√t)
d = e^(−σ√t) = 1/u
Above is the original Cox, Ross, & Rubinstein (CRR) method; there are various other techniques for generating the lattice, such as "the equal probabilities" tree.
The CRR method ensures that the tree is recombinant, i.e. if the underlying asset moves up and then down (u,d), the price will be the same as if it had moved down and then up (d,u)—here the two paths merge or recombine. This property reduces the number of tree nodes, and thus accelerates the computation of the option price.
This property also allows the value of the underlying asset at each node to be calculated directly via formula, and does not require that the tree be built first. The node-value will be:
S_n = S_0 × u^(N_u − N_d)
where N_u is the number of up ticks and N_d is the number of down ticks.
Step 2: Find option value at each final node
At each final node of the tree—i.e. at expiration of the option—the option value is simply its intrinsic, or exercise, value:
max(S_n − K, 0), for a call option
max(K − S_n, 0), for a put option,
where K is the strike price and S_n is the spot price of the underlying asset at the nth period.
Step 3: Find option value at earlier nodes
Once the above step is complete, the option value is then found for each node, starting at the penultimate time step, and working back to the first node of the tree (the valuation date) where the calculated result is the value of the option.
In overview: the "binomial value" is found at each node, using the risk neutrality assumption; see Risk neutral valuation. If exercise is permitted at the node, then the model takes the greater of binomial and exercise value at the node.
The steps are as follows:
In calculating the value at the next time step calculated—i.e. one step closer to valuation—the model must use the value selected here, for "Option up"/"Option down" as appropriate, in the formula at the node.
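The "binomial value" referred to in these steps is the discounted risk-neutral expectation of the two successor nodes. In the usual formulation (stated here in LaTeX notation for reference; indexing conventions vary between presentations, and q denotes a continuous dividend yield):
\[
\text{Binomial value} = e^{-r\,\Delta t}\left[\,p\,C_{\text{up}} + (1-p)\,C_{\text{down}}\,\right],
\qquad
p = \frac{e^{(r-q)\,\Delta t} - d}{u - d},
\]
where C_up and C_down are the already-computed option values at the two successor nodes and r is the risk-free rate.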
The algorithm below demonstrates the approach for computing the price of an American put option, although it is easily generalized for calls and for European and Bermudan options.
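The following Python sketch is illustrative only: the numerical inputs are arbitrary example values, and the routine is a minimal implementation of the three steps above rather than production code.

import math

def american_put_crr(S0, K, T, r, sigma, steps, q=0.0):
    """Price an American put on a CRR binomial tree (illustrative sketch)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp((r - q) * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Steps 1 and 2: underlying prices and option values at expiration
    values = [max(K - S0 * u ** (steps - i) * d ** i, 0.0) for i in range(steps + 1)]

    # Step 3: backward induction; check early exercise at every node
    for n in range(steps - 1, -1, -1):
        for i in range(n + 1):
            continuation = disc * (p * values[i] + (1.0 - p) * values[i + 1])
            exercise = K - S0 * u ** (n - i) * d ** i
            values[i] = max(continuation, exercise)
    return values[0]

# Example inputs (arbitrary): spot 100, strike 100, 1 year, 5% rate, 20% vol
print(american_put_crr(S0=100.0, K=100.0, T=1.0, r=0.05, sigma=0.20, steps=200))

With early exercise disabled the same backward induction yields the European value, which converges to the Black–Scholes price as the number of steps grows, as discussed below.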
Relationship with Black–Scholes
Similar assumptions underpin both the binomial model and the Black–Scholes model, and the binomial model thus provides a discrete time approximation to the continuous process underlying the Black–Scholes model. The binomial model assumes that movements in the price follow a binomial distribution; for many trials, this binomial distribution approaches the log-normal distribution assumed by Black–Scholes. In this case then, for European options without dividends, the binomial model value converges on the Black–Scholes formula value as the number of time steps increases.
In addition, when analyzed as a numerical procedure, the CRR binomial method can be viewed as a special case of the explicit finite difference method for the Black–Scholes PDE; see finite difference methods for option pricing.
See also
Trinomial tree, a similar model with three possible paths per node.
Tree (data structure)
Lattice model (finance), for more general discussion and application to other underlyings
Black–Scholes: binomial lattices are able to handle a variety of conditions for which Black–Scholes cannot be applied.
Monte Carlo option model, used in the valuation of options with complicated features that make them difficult to value through other methods.
Real options analysis, where the BOPM is widely used.
Quantum finance, quantum binomial pricing model.
Mathematical finance, which has a list of related articles.
Employee stock option valuation, where the BOPM is widely used.
Implied binomial tree
Edgeworth binomial tree
References
External links
The Binomial Model for Pricing Options, Prof. Thayer Watkins
Binomial Option Pricing (PDF), Prof. Robert M. Conroy
Binomial Option Pricing Model by Fiona Maclachlan, The Wolfram Demonstrations Project
On the Irrelevance of Expected Stock Returns in the Pricing of Options in the Binomial Model: A Pedagogical Note by Valeri Zakamouline
A Simple Derivation of Risk-Neutral Probability in the Binomial Option Pricing Model by Greg Orosi
Financial models
Options (finance)
Mathematical finance
Models of computation
Trees (data structures)
Articles with example code | Binomial options pricing model | [
"Mathematics"
] | 1,680 | [
"Applied mathematics",
"Mathematical finance"
] |
250,107 | https://en.wikipedia.org/wiki/Pitch%20drop%20experiment | A pitch drop experiment is a long-term experiment which measures the flow of a piece of pitch over many years. "Pitch" is the name for any of a number of highly viscous liquids which appear solid, most commonly bitumen, also known as asphalt. At room temperature, tar pitch flows at a very low rate, taking several years to form a single drop.
University of Queensland experiment
The best-known version of the experiment was started in 1927 by Professor Thomas Parnell of the University of Queensland in Brisbane, Australia, to demonstrate to students that some substances which appear solid are highly viscous fluids. Parnell poured a heated sample of the pitch into a sealed funnel and allowed it to settle for three years. In 1930, the seal at the neck of the funnel was cut, allowing the pitch to start flowing. A glass dome covers the funnel and it is placed on display outside a lecture theatre. Each droplet forms and falls over a period of about a decade.
The seventh drop fell at approximately 4:45 p.m. on 3 July 1988, while the experiment was on display at Brisbane's World Expo 88. However, apparently no one witnessed the drop fall itself; Professor Mainstone had stepped out to get a drink at the moment it occurred.
The eighth drop fell on 28 November 2000, allowing experimenters to calculate the pitch as having a viscosity of approximately 230 billion times that of water.
This experiment is recorded in Guinness World Records as the "world's longest continuously running laboratory experiment", and it is expected there is enough pitch in the funnel to allow it to continue for at least another hundred years. This experiment is predated by two other (still-active) scientific devices, the Oxford Electric Bell (1840) and the Beverly Clock (1864), but each of these has experienced brief interruptions since 1937.
The experiment was not originally carried out under any special controlled atmospheric conditions, meaning the viscosity could vary throughout the year with fluctuations in temperature. Sometime after the seventh drop fell (1988), air conditioning was added to the location where the experiment takes place. The lower average temperature has lengthened each drop's stretch before it separates from the rest of the pitch in the funnel, and correspondingly the typical interval between drops has increased from eight years to 12–13 years.
In October 2005, John Mainstone and the late Thomas Parnell were awarded the Ig Nobel Prize in physics, a parody of the Nobel Prize, for the pitch drop experiment. Mainstone subsequently commented:
The experiment is monitored by a webcam but technical problems prevented the November 2000 drop from being recorded. The pitch drop experiment is on public display on Level 2 of Parnell building in the School of Mathematics and Physics at the St Lucia campus of the University of Queensland. Hundreds of thousands of Internet users check the live stream each year.
Professor John Mainstone died on 13 August 2013, aged 78, following a stroke. Custodianship then passed to Professor Andrew White.
The ninth drop touched the eighth drop on 12 April 2014; however, it was still attached to the funnel. On 24 April, Professor White decided to replace the beaker holding the previous eight drops before the ninth drop fused to them (which would have permanently affected the ability of further drops to form). While the bell jar was being lifted, the wooden base wobbled and the ninth drop snapped away from the funnel.
Timeline
Timeline for the University of Queensland experiment:
Trinity College Dublin experiment
The pitch drop experiment at Trinity College Dublin in Ireland was started in October 1944 by an unknown colleague of the Nobel Prize winner Ernest Walton while he was in the physics department of Trinity College. This experiment, like the one at University of Queensland, was set up to demonstrate the high viscosity of pitch. This physics experiment sat on a shelf in a lecture hall at Trinity College unmonitored for decades as it dripped a number of times from the funnel to the receiving jar below, also gathering layers of dust.
In April 2013, about a decade after the previous pitch drop, physicists at Trinity College noticed that another drip was forming. They moved the experiment to a table to monitor and record the falling drip with a webcam, allowing all present to watch. The pitch dripped around 17:00 IST on 11 July 2013, marking the first time that a pitch drop was successfully recorded on camera.
Based on the results from this experiment, the Trinity College physicists estimated that the viscosity of the pitch is about two million times that of honey, or about 20 billion times the viscosity of water.
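The two figures quoted for the Trinity College pitch are mutually consistent, as a rough order-of-magnitude check shows; the reference viscosities in the Python lines below are approximate textbook values (roughly 1 mPa·s for water and 10 Pa·s for honey), not measurements from the experiment.

water = 1e-3        # Pa·s, approximate viscosity of water at room temperature
honey = 10.0        # Pa·s, order-of-magnitude viscosity of honey

via_water = 20e9 * water       # "about 20 billion times the viscosity of water"
via_honey = 2e6 * honey        # "about two million times that of honey"
print(f"{via_water:.1e} Pa·s vs {via_honey:.1e} Pa·s")   # both ~2e7 Pa·s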
University of St. Andrews experiment
A pitch drop experiment was begun at the University of St Andrews in 1927, the same year as the Queensland experiment. No evidence has emerged of any contact between Parnell and the instigator or instigators of the St. Andrews experiment. The pitch in the St. Andrews experiment flows in a largely steady, but extremely slow, stream. At some stage (likely in 1984) St. Andrews professor John Allen modified the St. Andrews experiment to bring its setup closer to that of the University of Queensland experiment.
Aberystwyth University experiment
In 2014, media reported that a pitch drop experiment had been recently rediscovered at Aberystwyth University in Wales. Dating from 1914, it predates the Queensland experiment by 13 years. But as the pitch is more viscous (or the average temperature lower) this experiment has not yet produced its first drop and is not expected to for over 1,000 years.
National Museum of Scotland experiment
Another pitch-in-funnel demonstration was begun in 1902 by the Royal Scottish Museum in Edinburgh and remains in the city at the museum's successor institution, the National Museum of Scotland. The known records of its behaviour are incomplete: it is known to have dripped once at some time between 4 and 6 June 2016 and on at least one occasion in the past, but the time and number of the previous drip or drips are unknown. Furthermore, the June 2016 drip happened shortly after the experiment was taken out of museum storage, and the physical movement may have caused it to drip at that time.
Demonstrations of Lord Kelvin
In the Hunterian Museum at the University of Glasgow are two pitch-based demonstrations by Lord Kelvin from the 19th century. Kelvin placed some bullets on top of a dish of pitch, and corks at the bottom: over time, the bullets sank and the corks floated.
Lord Kelvin also showed that the pitch flows like glaciers, with a mahogany ramp that allowed it to slide slowly downward and form shapes and patterns similar to glaciers in the Alps. This model was considered as an inspiration for the expected properties of luminiferous aether.
See also
Rheology
William James Beal, botanist who started a long-running seed germination experiment in 1879
Oxford Electric Bell, ringing nearly continuously from 1840
Centennial Light, light bulb burning since 1901
The E. coli long-term evolution experiment (LTEE), a study in experimental evolution running since 1988.
References
External links
Fluid dynamics
Physics experiments
University of Queensland | Pitch drop experiment | [
"Physics",
"Chemistry",
"Engineering"
] | 1,429 | [
"Physics experiments",
"Chemical engineering",
"Experimental physics",
"Piping",
"Fluid dynamics"
] |
250,178 | https://en.wikipedia.org/wiki/Skewes%27s%20number | In number theory, Skewes's number is the smallest natural number for which the prime-counting function exceeds the logarithmic integral function It is named for the South African mathematician Stanley Skewes who first computed an upper bound on its value.
The exact value of Skewes's number is still not known, but it is known that there is a crossing between π(x) and li(x) near e^727.95 (approximately 1.397 × 10^316). It is not known whether this is the smallest crossing.
The name is sometimes also applied to either of the large number bounds which Skewes found.
Skewes's bounds
Although nobody has ever found a value of x for which π(x) > li(x), Skewes's research supervisor J. E. Littlewood had proved in 1914 that there is such a number (and so, a first such number); and indeed found that the sign of the difference π(x) − li(x) changes infinitely many times. Littlewood's proof did not, however, exhibit a concrete such number x, nor did it even give any bounds on the value.
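That no example has been found is easy to confirm empirically for small arguments: for every x within reach of direct computation, li(x) comfortably exceeds π(x). The Python sketch below (sample points chosen arbitrarily) uses sympy to show the persistent positive gap.

from sympy import li, primepi

for x in (10**4, 10**6, 10**7):
    gap = float(li(x)) - int(primepi(x))
    print(f"x = {x:>10,}   li(x) - pi(x) ≈ {gap:8.1f}")   # positive at every sample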
Skewes's task was to make Littlewood's existence proof effective: exhibit some concrete upper bound for the first sign change. According to Georg Kreisel, this was not considered obvious even in principle at the time.
Skewes proved in 1933 that, assuming that the Riemann hypothesis is true, there exists a number x violating π(x) < li(x) below e^e^e^79 (approximately 10^10^10^34).
Without assuming the Riemann hypothesis, Skewes later proved in 1955 that there exists a value of x below e^e^e^e^7.705 (approximately 10^10^10^964).
More recent bounds
These upper bounds have since been reduced considerably by using large-scale computer calculations of zeros of the Riemann zeta function. The first estimate for the actual value of a crossover point was given by Lehman (1966), who showed that somewhere between 1.53×10^1165 and 1.65×10^1165 there are more than 10^500 consecutive integers x with π(x) > li(x).
Without assuming the Riemann hypothesis, proved an upper bound of . A better estimate was discovered by , who showed there are at least consecutive integers somewhere near this value where . Bays and Hudson found a few much smaller values of where gets close to ; the possibility that there are crossover points near these values does not seem to have been definitely ruled out yet, though computer calculations suggest they are unlikely to exist. gave a small improvement and correction to the result of Bays and Hudson. found a smaller interval for a crossing, which was slightly improved by . The same source shows that there exists a number violating below . This can be reduced to assuming the Riemann hypothesis. gave .
Rigorously, proved that there are no crossover points below , improved by to , by to , by to , and by to .
There is no explicit value x known for certain to have the property π(x) > li(x), though computer calculations suggest some explicit numbers that are quite likely to satisfy this.
Even though the natural density of the positive integers for which π(x) > li(x) does not exist, Wintner showed that the logarithmic density of these positive integers does exist and is positive. Rubinstein and Sarnak showed that this proportion is about 2.6 × 10^−7, which is surprisingly large given how far one has to go to find the first example.
Riemann's formula
Riemann gave an explicit formula for π(x), whose leading terms are (ignoring some subtle convergence questions)
π(x) ≈ li(x) − li(√x)/2 − Σ_ρ li(x^ρ),
where the sum is over all ρ in the set of non-trivial zeros of the Riemann zeta function.
The largest error term in the approximation π(x) ≈ li(x) (if the Riemann hypothesis is true) is the negative term −li(√x)/2, showing that li(x) is usually larger than π(x). The other terms above are somewhat smaller, and moreover tend to have different, seemingly random complex arguments, so mostly cancel out. Occasionally however, several of the larger ones might happen to have roughly the same complex argument, in which case they will reinforce each other instead of cancelling and will overwhelm the term −li(√x)/2.
The reason why the Skewes number is so large is that these smaller terms are quite a lot smaller than the leading error term, mainly because the first complex zero of the zeta function has quite a large imaginary part, so a large number (several hundred) of them need to have roughly the same argument in order to overwhelm the dominant term. The chance of random complex numbers having roughly the same argument is about 1 in .
This explains why π(x) is sometimes larger than li(x), and also why it is rare for this to happen.
It also shows why finding places where this happens depends on large scale calculations of millions of high precision zeros of the Riemann zeta function.
The argument above is not a proof, as it assumes the zeros of the Riemann zeta function are random, which is not true. Roughly speaking, Littlewood's proof uses Dirichlet's approximation theorem to show that sometimes many terms have about the same argument.
In the event that the Riemann hypothesis is false, the argument is much simpler, essentially because the terms li(x^ρ) for zeros violating the Riemann hypothesis (with real part greater than 1/2) are eventually larger than li(√x).
The reason for the term is that, roughly speaking, actually counts powers of primes, rather than the primes themselves, with weighted by . The term is roughly analogous to a second-order correction accounting for squares of primes.
Equivalent for prime k-tuples
An equivalent definition of Skewes's number exists for prime k-tuples (). Let denote a prime (k + 1)-tuple, the number of primes below such that are all prime, let and let denote its Hardy–Littlewood constant (see First Hardy–Littlewood conjecture). Then the first prime that violates the Hardy–Littlewood inequality for the (k + 1)-tuple , i.e., the first prime such that
(if such a prime exists) is the Skewes number for
The table below shows the currently known Skewes numbers for prime k-tuples:
The Skewes number (if it exists) for sexy primes is still unknown.
It is also unknown whether all admissible k-tuples have a corresponding Skewes number.
See also
References
External links
Large numbers
Number theory
Large integers | Skewes's number | [
"Mathematics"
] | 1,184 | [
"Discrete mathematics",
"Mathematical objects",
"Large numbers",
"Numbers",
"Number theory"
] |
250,237 | https://en.wikipedia.org/wiki/Effective%20results%20in%20number%20theory | For historical reasons and in order to have application to the solution of Diophantine equations, results in number theory have been scrutinised more than in other branches of mathematics to see if their content is effectively computable. Where it is asserted that some list of integers is finite, the question is whether in principle the list could be printed out after a machine computation.
Littlewood's result
An early example of an ineffective result was J. E. Littlewood's theorem of 1914, that in the prime number theorem the differences of both ψ(x) and π(x) with their asymptotic estimates change sign infinitely often. In 1933 Stanley Skewes obtained an effective upper bound for the first sign change, now known as Skewes' number.
In more detail, writing for a numerical sequence f (n), an effective result about its changing sign infinitely often would be a theorem including, for every value of N, a value M > N such that f (N) and f (M) have different signs, and such that M could be computed with specified resources. In practical terms, M would be computed by taking values of n from N onwards, and the question is 'how far must you go?' A special case is to find the first sign change. The interest of the question was that the numerical evidence known showed no change of sign: Littlewood's result guaranteed that this evidence was just a small number effect, but 'small' here included values of n up to a billion.
The requirement of computability is reflected in and contrasts with the approach used in the analytic number theory to prove the results. It for example brings into question any use of Landau notation and its implied constants: are assertions pure existence theorems for such constants, or can one recover a version in which 1000 (say) takes the place of the implied constant? In other words, if it were known that there was M > N with a change of sign and such that
M = O(G(N))
for some explicit function G, say built up from powers, logarithms and exponentials, that means only
M < A.G(N)
for some absolute constant A. The value of A, the so-called implied constant, may also need to be made explicit, for computational purposes. One reason Landau notation was a popular introduction is that it hides exactly what A is. In some indirect forms of proof it may not be at all obvious that the implied constant can be made explicit.
The 'Siegel period'
Many of the principal results of analytic number theory that were proved in the period 1900–1950 were in fact ineffective. The main examples were:
The Thue–Siegel–Roth theorem
Siegel's theorem on integral points, from 1929
The 1934 theorem of Hans Heilbronn and Edward Linfoot on the class number 1 problem
The 1935 result on the Siegel zero
The Siegel–Walfisz theorem based on the Siegel zero.
The concrete information that was left theoretically incomplete included lower bounds for class numbers (ideal class groups for some families of number fields grow); and bounds for the best rational approximations to algebraic numbers in terms of denominators. These latter could be read quite directly as results on Diophantine equations, after the work of Axel Thue. The result used for Liouville numbers in the proof is effective in the way it applies the mean value theorem: but improvements (to what is now the Thue–Siegel–Roth theorem) were not.
Later work
Later results, particularly of Alan Baker, changed the position. Qualitatively speaking, Baker's theorems look weaker, but they have explicit constants and can actually be applied, in conjunction with machine computation, to prove that lists of solutions (suspected to be complete) are actually the entire solution set.
Theoretical issues
The difficulties here were met by radically different proof techniques, taking much more care about proofs by contradiction. The logic involved is closer to proof theory than to that of computability theory and computable functions. It is rather loosely conjectured that the difficulties may lie in the realm of computational complexity theory. Ineffective results are still being proved in the shape A or B, where we have no way of telling which.
References
External links
Analytic number theory
Diophantine equations | Effective results in number theory | [
"Mathematics"
] | 894 | [
"Analytic number theory",
"Mathematical objects",
"Equations",
"Diophantine equations",
"Number theory"
] |
250,302 | https://en.wikipedia.org/wiki/Waiting%20room | A waiting room or waiting hall is a building, or more commonly a part of a building or a room, where people sit or stand until the event or appointment for which they are waiting begins.
There are two types of physical waiting room. One has individuals leave for appointments one at a time or in small groups, for instance at a doctor's office, a hospital triage area, or outside a school headmaster's office. The other has people leave en masse such as those at railway stations, bus stations, and airports. Both examples also highlight the difference between waiting rooms in which one is asked to wait (private waiting rooms) and waiting rooms in which one can enter at will (public waiting rooms).
There are also digital waiting rooms that operate within on-line video conferencing applications such as Zoom developed by Zoom Video Communications. This is a virtual waiting room where participants can be held until such time as the host allows them to enter the meeting.
Order in private rooms
People in private waiting rooms are queued up based on various methods in different types of waiting rooms. In hospital emergency department waiting areas, patients are triaged by a nurse, and they are seen by the doctor depending on the severity of their medical condition. In a doctor's or dentist's waiting room, patients are generally seen in the order in which their appointments are for, with the exception of emergency cases, which get seen immediately upon their arrival. In Canada, where there is publicly-provided health care, controversy has arisen when some important people or celebrities have jumped the line (which is supposed to be based on the appointment order or by severity of condition). In some government offices, such as motor vehicle registration offices or social assistance services, there is a "first-come, first-served" approach in which clients take a number when they arrive. The clients are then seen in the order of their number. In the 2010s, some government offices have a triage-based variant of the first-come, first-served approach, in which some clients are seen by the civil servants faster than others, depending on the nature of their service request and/or the availability of civil servants. This approach can lead to frustration for clients who are waiting, because one client who has been waiting for 30 minutes may see another client come in, take a number, and then be seen within five minutes.
In car repair businesses, clients typically wait until their vehicle is repaired; the service manager can only give an estimate of the approximate waiting time. Clients waiting in the entrance or waiting area of a restaurant for a table normally are seated based on whether they have reservations, or for those without reservations, on a first-come, first-served approach; however, important customers or celebrities may be put to the front of the line. In restaurants, customers may also be able to jump the line by giving a large gratuity or bribe to the maitre d'hotel or head waiter. Some restaurants which are co-located with or combined with a retail store or gift shop ask customers who are waiting for a table to browse in the merchandise section until their table's availability is announced on a PA system or via a pager; this strategy can lead to increased purchases in the retail part of the establishment. One combination restaurant/store is the US Cracker Barrel chain. Some restaurants ask customers who are waiting for a table to sit in the restaurant's bar or its licensed lounge area; this approach may lead to increased sales of alcoholic beverages.
Waiting rooms may be staffed or unstaffed. In waiting rooms that are staffed, a receptionist or administrative staffer sits behind a desk or counter to greet customers/clients, give them information about the expected waiting period, and answer any questions about their appointment time or the appointment process. In doctors' or dentists' waiting rooms, the patients may be able to make additional appointments, pay for appointments, or deal with other administrative tasks with the receptionist or administrator. In police stations, check cashing stores, and some government waiting rooms, the receptionist or administrator is behind a plexiglass barrier, with either small holes to permit communication, or, in higher-security settings, a microphone and speaker. In reception areas with a plexiglass barrier, there may be a heavy-duty drawer to enable the client to provide money or papers to the receptionist and for the receptionist to provide documents to the client. The plexiglass barrier and the drawer system help to protect the receptionists from aggressive or potentially violent clients.
Amenities
Most waiting rooms have seating. Some have adjacent toilets. It is not uncommon to find vending machines in public waiting rooms or newspapers and magazines in private waiting rooms. Also common in waiting rooms in the United States or in airports are public drinking fountains. Some waiting rooms have television access or music. The increasing prevalence of mobile devices has led to many waiting rooms providing electric outlets and free Wi-Fi Internet connections, though cybersecurity is a concern as unsecured connections may be vulnerable to attack, tampering, or even simply by piggybacking users who are within range but not waiting. Sometimes found in airports and railway stations are special waiting rooms, often called "lounges", for those who have paid more. These will generally be less crowded and will have superior seating and better facilities. Waiting rooms for high-end services may provide complimentary drinks and snacks.
In other media
In fiction
The films Brief Encounter and The Terminal use waiting rooms as sets for a large part of their duration. They are used elsewhere in the arts to symbolize waiting in the general sense, to symbolize transitions in life and for scenes depicting emptiness, insignificance or sadness. In the play No Exit, by French existentialist philosopher Jean-Paul Sartre, several strangers find themselves waiting in a mysterious room, where they each wonder why; finally, they each realize that they are in Hell, and that their punishment is being forced to be with each other ("L'enfer, c'est les autres", which translates as "Hell is other people").
In the 2010 Bollywood film The Waiting Room, directed by Maneej Premnath and produced by Sunil Doshi, four passengers waiting in a remote South Indian railway station are stranded there on a rainy night. A serial killer is on the prowl, targeting the passengers of the waiting room, creating intense fear among them.
In video games
The term "waiting room" also extends to the realm of video games as a similar virtual waiting area where players for an online multiplayer game are placed into while waiting for all remaining players for a game session to be present. A virtual waiting room may be a mere, static loading screen (such as the waiting screens in the mobile game Star Wars: Force Arena), or a playable environment in of itself where readied players can practice their skills to pass the time needed for all players to come onboard to begin the session, such as a dedicated "waiting room" arena in Super Smash Bros. Brawl and its subsequent sequels, where players can practice their fighting moves with their chosen character while waiting for other players to arrive.
See also
Airport lounge
Waiting in healthcare
References
Rooms
Time management | Waiting room | [
"Physics",
"Engineering"
] | 1,466 | [
"Physical quantities",
"Time",
"Rooms",
"Time management",
"Spacetime",
"Architecture"
] |
250,323 | https://en.wikipedia.org/wiki/Covering%20space | In topology, a covering or covering projection is a map between topological spaces that, intuitively, locally acts like a projection of multiple copies of a space onto itself. In particular, coverings are special types of local homeomorphisms. If is a covering, is said to be a covering space or cover of , and is said to be the base of the covering, or simply the base. By abuse of terminology, and may sometimes be called covering spaces as well. Since coverings are local homeomorphisms, a covering space is a special kind of étalé space.
Covering spaces first arose in the context of complex analysis (specifically, the technique of analytic continuation), where they were introduced by Riemann as domains on which naturally multivalued complex functions become single-valued. These spaces are now called Riemann surfaces.
Covering spaces are an important tool in several areas of mathematics. In modern geometry, covering spaces (or branched coverings, which have slightly weaker conditions) are used in the construction of manifolds, orbifolds, and the morphisms between them. In algebraic topology, covering spaces are closely related to the fundamental group: for one, since all coverings have the homotopy lifting property, covering spaces are an important tool in the calculation of homotopy groups. A standard example in this vein is the calculation of the fundamental group of the circle by means of the covering of by (see below). Under certain conditions, covering spaces also exhibit a Galois correspondence with the subgroups of the fundamental group.
Definition
Let be a topological space. A covering of is a continuous map
such that for every there exists an open neighborhood of and a discrete space such that and is a homeomorphism for every .
The open sets are called sheets, which are uniquely determined up to homeomorphism if is connected. For each the discrete set is called the fiber of . If is connected (and is non-empty), it can be shown that is surjective, and the cardinality of is the same for all ; this value is called the degree of the covering. If is path-connected, then the covering is called a path-connected covering. This definition is equivalent to the statement that is a locally trivial fiber bundle.
Some authors also require that be surjective in the case that is not connected.
Examples
For every topological space , the identity map is a covering. Likewise for any discrete space the projection taking is a covering. Coverings of this type are called trivial coverings; if has finitely many (say ) elements, the covering is called the trivial -sheeted covering of .
The map with is a covering of the unit circle . The base of the covering is and the covering space is . For any point such that , the set is an open neighborhood of . The preimage of under is
and the sheets of the covering are for The fiber of is
Another covering of the unit circle is the map with for some For an open neighborhood of an , one has:
.
A map which is a local homeomorphism but not a covering of the unit circle is with . There is a sheet of an open neighborhood of , which is not mapped homeomorphically onto .
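Written out in standard notation (a sketch using the usual parametrization of the circle and the usual identification of the circle with the unit complex numbers, rather than a quotation of the formulas above), the two coverings of the unit circle just described are:

```latex
% The real line winding around the circle (fiber is a copy of the integers, infinite degree):
p \colon \mathbb{R} \to S^{1}, \qquad p(t) = (\cos 2\pi t,\ \sin 2\pi t),
\qquad p^{-1}(x) \cong \mathbb{Z} \ \ \text{for every } x \in S^{1}.

% The circle wrapping around itself n times (degree n), viewing S^{1} \subset \mathbb{C}:
q \colon S^{1} \to S^{1}, \qquad q(z) = z^{n}, \qquad \deg q = n .
```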
Properties
Local homeomorphism
Since a covering maps each of the disjoint open sets of homeomorphically onto it is a local homeomorphism, i.e. is a continuous map and for every there exists an open neighborhood of , such that is a homeomorphism.
It follows that the covering space and the base space locally share the same properties.
If is a connected and non-orientable manifold, then there is a covering of degree , whereby is a connected and orientable manifold.
If is a connected Lie group, then there is a covering which is also a Lie group homomorphism and is a Lie group.
If is a graph, then it follows for a covering that is also a graph.
If is a connected manifold, then there is a covering , whereby is a connected and simply connected manifold.
If is a connected Riemann surface, then there is a covering which is also a holomorphic map and is a connected and simply connected Riemann surface.
Factorisation
Let and be path-connected, locally path-connected spaces, and and be continuous maps, such that the diagram
commutes.
If and are coverings, so is .
If and are coverings, so is .
Product of coverings
Let and be topological spaces and and be coverings, then with is a covering. However, coverings of are not all of this form in general.
Equivalence of coverings
Let be a topological space and and be coverings. Both coverings are called equivalent, if there exists a homeomorphism , such that the diagram
commutes. If such a homeomorphism exists, then one calls the covering spaces and isomorphic.
Lifting property
All coverings satisfy the lifting property, i.e.:
Let be the unit interval and be a covering. Let be a continuous map and be a lift of , i.e. a continuous map such that . Then there is a uniquely determined, continuous map for which and which is a lift of , i.e. .
If is a path-connected space, then for it follows that the map is a lift of a path in and for it is a lift of a homotopy of paths in .
As a consequence, one can show that the fundamental group of the unit circle is an infinite cyclic group, which is generated by the homotopy classes of the loop with .
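Stated explicitly, this classical computation gives:

```latex
\pi_{1}\bigl(S^{1}\bigr) \;\cong\; \mathbb{Z},
\qquad \text{generated by the class of the loop } \gamma(t) = (\cos 2\pi t,\ \sin 2\pi t), \quad t \in [0,1].
```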
Let be a path-connected space and be a connected covering. Let be any two points, which are connected by a path , i.e. and . Let be the unique lift of , then the map
with
is bijective.
If is a path-connected space and a connected covering, then the induced group homomorphism
with ,
is injective and the subgroup of consists of the homotopy classes of loops in , whose lifts are loops in .
Branched covering
Definitions
Holomorphic maps between Riemann surfaces
Let and be Riemann surfaces, i.e. one dimensional complex manifolds, and let be a continuous map. is holomorphic in a point , if for any charts of and of , with , the map is holomorphic.
If is holomorphic at all , we say is holomorphic.
The map is called the local expression of in .
If is a non-constant, holomorphic map between compact Riemann surfaces, then is surjective and an open map, i.e. for every open set the image is also open.
Ramification point and branch point
Let be a non-constant, holomorphic map between compact Riemann surfaces. For every there exist charts for and and there exists a uniquely determined , such that the local expression of in is of the form . The number is called the ramification index of in and the point is called a ramification point if . If for an , then is unramified. The image point of a ramification point is called a branch point.
Degree of a holomorphic map
Let be a non-constant, holomorphic map between compact Riemann surfaces. The degree of is the cardinality of the fiber of an unramified point , i.e. .
This number is well-defined, since for every the fiber is discrete and for any two unramified points , it is:
It can be calculated by:
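In standard notation, writing k_x for the ramification index of f at a point x (as defined in the previous section), the degree is the sum of the ramification indices over any fiber:

```latex
\deg(f) \;=\; \sum_{x \,\in\, f^{-1}(y)} k_{x}
\qquad \text{for every } y \in Y .
```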
Branched covering
Definition
A continuous map is called a branched covering, if there exists a closed set with dense complement , such that is a covering.
Examples
Let and , then with is a branched covering of degree , whereby is a branch point.
Every non-constant, holomorphic map between compact Riemann surfaces of degree is a branched covering of degree .
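The first example above, written out explicitly in the usual notation, is the complex power map, with the origin as the excluded closed set:

```latex
f \colon \mathbb{C} \to \mathbb{C}, \qquad f(z) = z^{k} \quad (k \ge 2).
% Restricted to \mathbb{C} \setminus \{0\}, f is an ordinary k-sheeted covering;
% z = 0 is the only ramification point (ramification index k), and its image 0 is the branch point.
```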
Universal covering
Definition
Let be a simply connected covering. If is another simply connected covering, then there exists a uniquely determined homeomorphism , such that the diagram
commutes.
This means that is, up to equivalence, uniquely determined and because of that universal property denoted as the universal covering of the space .
Existence
A universal covering does not always exist. The following theorem guarantees its existence for a certain class of base spaces.
Let be a connected, locally simply connected topological space. Then, there exists a universal covering
The set is defined as where is any chosen base point. The map is defined by
The topology on is constructed as follows: Let be a path with Let be a simply connected neighborhood of the endpoint Then, for every there is a path inside from to that is unique up to homotopy. Now consider the set The restriction with is a bijection and can be equipped with the final topology of
The fundamental group acts freely on by and the orbit space is homeomorphic to through the map
Examples
with is the universal covering of the unit circle .
with is the universal covering of the projective space for .
with is the universal covering of the unitary group .
Since , it follows that the quotient map is the universal covering of .
A topological space which has no universal covering is the Hawaiian earring: One can show that no neighborhood of the origin is simply connected.
G-coverings
Let G be a discrete group acting on the topological space X. This means that each element g of G is associated to a homeomorphism Hg of X onto itself, in such a way that Hgh is always equal to Hg ∘ Hh for any two elements g and h of G. (Or in other words, a group action of the group G on the space X is just a group homomorphism of the group G into the group Homeo(X) of self-homeomorphisms of X.) It is natural to ask under what conditions the projection from X to the orbit space X/G is a covering map. This is not always true since the action may have fixed points. An example for this is the cyclic group of order 2 acting on a product by the twist action where the non-identity element acts by . Thus the study of the relation between the fundamental groups of X and X/G is not so straightforward.
However the group G does act on the fundamental groupoid of X, and so the study is best handled by considering groups acting on groupoids, and the corresponding orbit groupoids. The theory for this is set down in Chapter 11 of the book Topology and groupoids referred to below. The main result is that for discontinuous actions of a group G on a Hausdorff space X which admits a universal cover, then the fundamental groupoid of the orbit space X/G is isomorphic to the orbit groupoid of the fundamental groupoid of X, i.e. the quotient of that groupoid by the action of the group G. This leads to explicit computations, for example of the fundamental group of the symmetric square of a space.
Smooth coverings
Let and be smooth manifolds with or without boundary. A covering is called a smooth covering if it is a smooth map and the sheets are mapped diffeomorphically onto the corresponding open subset of . (This is in contrast to the definition of a covering, which merely requires that the sheets are mapped homeomorphically onto the corresponding open subset.)
Deck transformation
Definition
Let be a covering. A deck transformation is a homeomorphism , such that the diagram of continuous maps
commutes. Together with the composition of maps, the set of deck transformations forms a group , which is the same as .
Now suppose is a covering map and (and therefore also ) is connected and locally path connected. The action of on each fiber is free. If this action is transitive on some fiber, then it is transitive on all fibers, and we call the cover regular (or normal or Galois). Every such regular cover is a principal , where is considered as a discrete topological group.
Every universal cover is regular, with deck transformation group being isomorphic to the fundamental group
Examples
Let be the covering for some , then the map for is a deck transformation and .
Let be the covering , then the map for is a deck transformation and .
As another important example, consider the complex plane and the complex plane minus the origin. Then the map with is a regular cover. The deck transformations are multiplications with -th roots of unity and the deck transformation group is therefore isomorphic to the cyclic group . Likewise, the map with is the universal cover.
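Written explicitly in the usual notation, the deck transformations of these two coverings of the punctured plane are:

```latex
% n-th power covering: the deck transformations are rotations by n-th roots of unity.
q \colon \mathbb{C}\setminus\{0\} \to \mathbb{C}\setminus\{0\}, \quad q(z) = z^{n};
\qquad d_{k}(z) = e^{2\pi i k/n}\, z \ \ (k = 0,\dots,n-1), \qquad \operatorname{Deck}(q) \cong \mathbb{Z}/n\mathbb{Z}.

% Exponential (universal) covering: the deck transformations are translations by the periods.
\exp \colon \mathbb{C} \to \mathbb{C}\setminus\{0\};
\qquad d_{k}(z) = z + 2\pi i k \ \ (k \in \mathbb{Z}), \qquad \operatorname{Deck}(\exp) \cong \mathbb{Z}.
```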
Properties
Let be a path-connected space and be a connected covering. Since a deck transformation is bijective, it permutes the elements of a fiber with and is uniquely determined by where it sends a single point. In particular, only the identity map fixes a point in the fiber. Because of this property every deck transformation defines a group action on , i.e. let be an open neighborhood of a and an open neighborhood of an , then is a group action.
Normal coverings
Definition
A covering is called normal, if . This means, that for every and any two there exists a deck transformation , such that .
Properties
Let be a path-connected space and be a connected covering. Let be a subgroup of , then is a normal covering iff is a normal subgroup of .
If is a normal covering and , then .
If is a path-connected covering and , then , whereby is the normaliser of .
Let be a topological space. A group acts discontinuously on , if every has an open neighborhood with , such that for every with one has .
If a group acts discontinuously on a topological space , then the quotient map with is a normal covering. Hereby is the quotient space and is the orbit of the group action.
Examples
The covering with is a normal covering for every .
Every simply connected covering is a normal covering.
Calculation
Let be a group, which acts discontinuously on a topological space and let be the normal covering.
If is path-connected, then .
If is simply connected, then .
Examples
Let . The antipodal map with generates, together with the composition of maps, a group and induces a group action , which acts discontinuously on . Because of it follows, that the quotient map is a normal covering and for a universal covering, hence for .
Let be the special orthogonal group, then the map is a normal covering and because of , it is the universal covering, hence .
With the group action of on , whereby is the semidirect product , one gets the universal covering of the Klein bottle , hence .
Let be the torus which is embedded in the . Then one gets a homeomorphism , which induces a discontinuous group action , whereby . It follows that the map is a normal covering of the Klein bottle, hence .
Let be embedded in the . Since the group action is discontinuous, whereby are coprime, the map is the universal covering of the lens space , hence .
Galois correspondence
Let be a connected and locally simply connected space, then for every subgroup there exists a path-connected covering with .
Let and be two path-connected coverings, then they are equivalent iff the subgroups and are conjugate to each other.
Let be a connected and locally simply connected space, then, up to equivalence between coverings, there is a bijection:
For a sequence of subgroups one gets a sequence of coverings . For a subgroup with index , the covering has degree .
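A standard illustration of this correspondence is the unit circle, whose fundamental group is infinite cyclic:

```latex
% Subgroups of \pi_1(S^1) \cong \mathbb{Z} versus connected coverings of S^1:
n\mathbb{Z} \ (\text{index } n) \;\longleftrightarrow\; q_{n} \colon S^{1} \to S^{1},\ q_{n}(z) = z^{n} \ (\text{degree } n);
\qquad
\{0\} \;\longleftrightarrow\; p \colon \mathbb{R} \to S^{1} \ (\text{the universal covering}).
```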
Classification
Definitions
Category of coverings
Let be a topological space. The objects of the category are the coverings of and the morphisms between two coverings and are continuous maps , such that the diagram
commutes.
G-Set
Let be a topological group. The category is the category of sets which are G-sets. The morphisms are G-maps between G-sets. They satisfy the condition for every .
Equivalence
Let be a connected and locally simply connected space, and be the fundamental group of . Since defines, by lifting of paths and evaluating at the endpoint of the lift, a group action on the fiber of a covering, the functor is an equivalence of categories.
Applications
An important practical application of covering spaces occurs in charts on SO(3), the rotation group. This group occurs widely in engineering, due to 3-dimensional rotations being heavily used in navigation, nautical engineering, and aerospace engineering, among many other uses. Topologically, SO(3) is the real projective space RP3, with fundamental group Z/2, and only (non-trivial) covering space the hypersphere S3, which is the group Spin(3), and represented by the unit quaternions. Thus quaternions are a preferred method for representing spatial rotations – see quaternions and spatial rotation.
However, it is often desirable to represent rotations by a set of three numbers, known as Euler angles (in numerous variants), both because this is conceptually simpler for someone familiar with planar rotation, and because one can build a combination of three gimbals to produce rotations in three dimensions. Topologically this corresponds to a map from the 3-torus T3 of three angles to the real projective space RP3 of rotations, and the resulting map has imperfections because it cannot be a covering map. Specifically, the failure of the map to be a local homeomorphism at certain points is referred to as gimbal lock, and is demonstrated in the animation at the right – at some points (when the axes are coplanar) the rank of the map is 2, rather than 3, meaning that only 2 dimensions of rotations can be realized from that point by changing the angles. This causes problems in applications, and is formalized by the notion of a covering space.
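The rank deficiency described here can be checked numerically. The sketch below is illustrative only: the z-y-x (yaw-pitch-roll) Euler convention, the specific angles, the finite-difference step and the rank tolerance are all assumed choices. It computes the 9×3 Jacobian of the map from Euler angles to the flattened rotation matrix and reports its rank: 3 at a generic configuration, 2 at a gimbal-lock configuration.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_to_matrix(angles):
    """z-y-x (yaw-pitch-roll) Euler angles -> rotation matrix, flattened to length 9."""
    yaw, pitch, roll = angles
    return (rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)).ravel()

def jacobian_rank(angles, h=1e-6, tol=1e-4):
    """Rank of the numerical 9x3 Jacobian of the Euler-angle map at `angles`."""
    J = np.empty((9, 3))
    for i in range(3):
        d = np.zeros(3)
        d[i] = h
        J[:, i] = (euler_to_matrix(angles + d) - euler_to_matrix(angles - d)) / (2 * h)
    return np.linalg.matrix_rank(J, tol=tol)

print(jacobian_rank(np.array([0.3, 0.5, 0.8])))        # generic angles: rank 3
print(jacobian_rank(np.array([0.3, np.pi / 2, 0.8])))  # pitch = 90 degrees: rank 2 (gimbal lock)
```

The rank-2 result at the second configuration is exactly the degeneracy described above: only two independent directions of rotation can be reached by varying the three angles there.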
See also
Bethe lattice is the universal cover of a Cayley graph
Covering graph, a covering space for an undirected graph, and its special case the bipartite double cover
Covering group
Galois connection
Quotient space (topology)
Literature
References
Algebraic topology
Homotopy theory
Fiber bundles
Topological graph theory | Covering space | [
"Mathematics"
] | 3,726 | [
"Graph theory",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Mathematical relations",
"Topological graph theory"
] |
250,424 | https://en.wikipedia.org/wiki/Ring%20theory | In algebra, ring theory is the study of rings, algebraic structures in which addition and multiplication are defined and have similar properties to those operations defined for the integers. Ring theory studies the structure of rings; their representations, or, in different language, modules; special classes of rings (group rings, division rings, universal enveloping algebras); related structures like rngs; as well as an array of properties that prove to be of interest both within the theory itself and for its applications, such as homological properties and polynomial identities.
Commutative rings are much better understood than noncommutative ones. Algebraic geometry and algebraic number theory, which provide many natural examples of commutative rings, have driven much of the development of commutative ring theory, which is now, under the name of commutative algebra, a major area of modern mathematics. Because these three fields (algebraic geometry, algebraic number theory and commutative algebra) are so intimately connected it is usually difficult and meaningless to decide which field a particular result belongs to. For example, Hilbert's Nullstellensatz is a theorem which is fundamental for algebraic geometry, and is stated and proved in terms of commutative algebra. Similarly, Fermat's Last Theorem is stated in terms of elementary arithmetic, which is a part of commutative algebra, but its proof involves deep results of both algebraic number theory and algebraic geometry.
Noncommutative rings are quite different in flavour, since more unusual behavior can arise. While the theory has developed in its own right, a fairly recent trend has sought to parallel the commutative development by building the theory of certain classes of noncommutative rings in a geometric fashion as if they were rings of functions on (non-existent) 'noncommutative spaces'. This trend started in the 1980s with the development of noncommutative geometry and with the discovery of quantum groups. It has led to a better understanding of noncommutative rings, especially noncommutative Noetherian rings.
For the definitions of a ring and basic concepts and their properties, see Ring (mathematics). The definitions of terms used throughout ring theory may be found in Glossary of ring theory.
Commutative rings
A ring is called commutative if its multiplication is commutative. Commutative rings resemble familiar number systems, and various definitions for commutative rings are designed to formalize properties of the integers. Commutative rings are also important in algebraic geometry. In commutative ring theory, numbers are often replaced by ideals, and the definition of the prime ideal tries to capture the essence of prime numbers. Integral domains, non-trivial commutative rings where no two non-zero elements multiply to give zero, generalize another property of the integers and serve as the proper realm to study divisibility. Principal ideal domains are integral domains in which every ideal can be generated by a single element, another property shared by the integers. Euclidean domains are integral domains in which the Euclidean algorithm can be carried out. Important examples of commutative rings can be constructed as rings of polynomials and their factor rings. Summary: Euclidean domain ⊂ principal ideal domain ⊂ unique factorization domain ⊂ integral domain ⊂ commutative ring.
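Each inclusion in this chain is strict; standard witnessing examples (well-known ones, chosen here for illustration rather than taken from the text) are:

```latex
\mathbb{Z} \ \text{(Euclidean domain)}, \qquad
\mathbb{Z}\!\left[\tfrac{1+\sqrt{-19}}{2}\right] \ \text{(principal ideal domain that is not Euclidean)},
\\
\mathbb{Z}[x] \ \text{(unique factorization domain that is not a PID)}, \qquad
\mathbb{Z}[\sqrt{-5}\,] \ \text{(integral domain that is not a UFD)},
\\
\mathbb{Z}/6\mathbb{Z} \ \text{(commutative ring that is not an integral domain)}.
```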
Algebraic geometry
Algebraic geometry is in many ways the mirror image of commutative algebra. This correspondence started with Hilbert's Nullstellensatz that establishes a one-to-one correspondence between the points of an algebraic variety, and the maximal ideals of its coordinate ring. This correspondence has been enlarged and systematized for translating (and proving) most geometrical properties of algebraic varieties into algebraic properties of associated commutative rings. Alexander Grothendieck completed this by introducing schemes, a generalization of algebraic varieties, which may be built from any commutative ring. More precisely,
the spectrum of a commutative ring is the space of its prime ideals equipped with Zariski topology, and augmented with a sheaf of rings. These objects are the "affine schemes" (generalization of affine varieties), and a general scheme is then obtained by "gluing together" (by purely algebraic methods) several such affine schemes, in analogy to the way of constructing a manifold by gluing together the charts of an atlas.
Noncommutative rings
Noncommutative rings resemble rings of matrices in many respects. Following the model of algebraic geometry, attempts have been made recently at defining noncommutative geometry based on noncommutative rings.
Noncommutative rings and associative algebras (rings that are also vector spaces) are often studied via their categories of modules. A module over a ring is an abelian group that the ring acts on as a ring of endomorphisms, very much akin to the way fields (integral domains in which every non-zero element is invertible) act on vector spaces. Examples of noncommutative rings are given by rings of square matrices or more generally by rings of endomorphisms of abelian groups or modules, and by monoid rings.
Representation theory
Representation theory is a branch of mathematics that draws heavily on non-commutative rings. It studies abstract algebraic structures by representing their elements as linear transformations of vector spaces, and studies
modules over these abstract algebraic structures. In essence, a representation makes an abstract algebraic object more concrete by describing its elements by matrices and the algebraic operations in terms of matrix addition and matrix multiplication, which is non-commutative. The algebraic objects amenable to such a description include groups, associative algebras and Lie algebras. The most prominent of these (and historically the first) is the representation theory of groups, in which elements of a group are represented by invertible matrices in such a way that the group operation is matrix multiplication.
Some relevant theorems
General
Isomorphism theorems for rings
Nakayama's lemma
Structure theorems
The Artin–Wedderburn theorem determines the structure of semisimple rings
The Jacobson density theorem determines the structure of primitive rings
Goldie's theorem determines the structure of semiprime Goldie rings
The Zariski–Samuel theorem determines the structure of a commutative principal ideal ring
The Hopkins–Levitzki theorem gives necessary and sufficient conditions for a Noetherian ring to be an Artinian ring
Morita theory consists of theorems determining when two rings have "equivalent" module categories
Cartan–Brauer–Hua theorem gives insight on the structure of division rings
Wedderburn's little theorem states that finite domains are fields
Other
The Skolem–Noether theorem characterizes the automorphisms of simple rings
Structures and invariants of rings
Dimension of a commutative ring
In this section, R denotes a commutative ring. The Krull dimension of R is the supremum of the lengths n of all the chains of prime ideals . It turns out that the polynomial ring over a field k has dimension n. The fundamental theorem of dimension theory states that the following numbers coincide for a noetherian local ring :
The Krull dimension of R.
The minimum number of the generators of the -primary ideals.
The dimension of the graded ring (equivalently, 1 plus the degree of its Hilbert polynomial).
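For instance, the statement above that the polynomial ring in n variables over a field k has Krull dimension n is witnessed by the chain of prime ideals

```latex
(0) \subsetneq (x_{1}) \subsetneq (x_{1}, x_{2}) \subsetneq \cdots \subsetneq (x_{1}, \dots, x_{n})
\qquad \text{in } k[x_{1}, \dots, x_{n}],
```

which has length n; the non-trivial part of the theorem is that no strictly longer chain of prime ideals exists.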
A commutative ring R is said to be catenary if for every pair of prime ideals , there exists a finite chain of prime ideals that is maximal in the sense that it is impossible to insert an additional prime ideal between two ideals in the chain, and all such maximal chains between and have the same length. Practically all noetherian rings that appear in applications are catenary. Ratliff proved that a noetherian local integral domain R is catenary if and only if for every prime ideal ,
where is the height of .
If R is an integral domain that is a finitely generated k-algebra, then its dimension is the transcendence degree of its field of fractions over k. If S is an integral extension of a commutative ring R, then S and R have the same dimension.
Closely related concepts are those of depth and global dimension. In general, if R is a noetherian local ring, then the depth of R is less than or equal to the dimension of R. When the equality holds, R is called a Cohen–Macaulay ring. A regular local ring is an example of a Cohen–Macaulay ring. It is a theorem of Serre that R is a regular local ring if and only if it has finite global dimension and in that case the global dimension is the Krull dimension of R. The significance of this is that a global dimension is a homological notion.
Morita equivalence
Two rings R, S are said to be Morita equivalent if the category of left modules over R is equivalent to the category of left modules over S. In fact, two commutative rings which are Morita equivalent must be isomorphic, so the notion does not add anything new to the category of commutative rings. However, commutative rings can be Morita equivalent to noncommutative rings, so Morita equivalence is coarser than isomorphism. Morita equivalence is especially important in algebraic topology and functional analysis.
Finitely generated projective module over a ring and Picard group
Let R be a commutative ring and the set of isomorphism classes of finitely generated projective modules over R; let also subsets consisting of those with constant rank n. (The rank of a module M is the continuous function .) is usually denoted by Pic(R). It is an abelian group called the Picard group of R. If R is an integral domain with the field of fractions F of R, then there is an exact sequence of groups:
where is the set of fractional ideals of R. If R is a regular domain (i.e., regular at any prime ideal), then Pic(R) is precisely the divisor class group of R.
For example, if R is a principal ideal domain, then Pic(R) vanishes. In algebraic number theory, R will be taken to be the ring of integers, which is Dedekind and thus regular. It follows that Pic(R) is a finite group (finiteness of class number) that measures the deviation of the ring of integers from being a PID.
One can also consider the group completion of ; this results in a commutative ring K0(R). Note that K0(R) = K0(S) if two commutative rings R, S are Morita equivalent.
Structure of noncommutative rings
The structure of a noncommutative ring is more complicated than that of a commutative ring. For example, there exist simple rings that contain no non-trivial proper (two-sided) ideals, yet contain non-trivial proper left or right ideals. Various invariants exist for commutative rings, whereas invariants of noncommutative rings are difficult to find. As an example, the nilradical of a ring, the set of all nilpotent elements, is not necessarily an ideal unless the ring is commutative. Specifically, the set of all nilpotent elements in the ring of all matrices over a division ring never forms an ideal, irrespective of the division ring chosen. There are, however, analogues of the nilradical defined for noncommutative rings, that coincide with the nilradical when commutativity is assumed.
The concept of the Jacobson radical of a ring; that is, the intersection of all right (left) annihilators of simple right (left) modules over a ring, is one example. The fact that the Jacobson radical can be viewed as the intersection of all maximal right (left) ideals in the ring, shows how the internal structure of the ring is reflected by its modules. It is also a fact that the intersection of all maximal right ideals in a ring is the same as the intersection of all maximal left ideals in the ring, in the context of all rings; irrespective of whether the ring is commutative.
Noncommutative rings are an active area of research due to their ubiquity in mathematics. For instance, the ring of n-by-n matrices over a field is noncommutative despite its natural occurrence in geometry, physics and many parts of mathematics. More generally, endomorphism rings of abelian groups are rarely commutative, the simplest example being the endomorphism ring of the Klein four-group.
One of the best-known strictly noncommutative rings is the ring of quaternions.
Applications
The ring of integers of a number field
The coordinate ring of an algebraic variety
If X is an affine algebraic variety, then the set of all regular functions on X forms a ring called the coordinate ring of X. For a projective variety, there is an analogous ring called the homogeneous coordinate ring. Those rings are essentially the same things as varieties: they correspond in essentially a unique way. This may be seen via either Hilbert's Nullstellensatz or scheme-theoretic constructions (i.e., Spec and Proj).
Ring of invariants
A basic (and perhaps the most fundamental) question in the classical invariant theory is to find and study polynomials in the polynomial ring that are invariant under the action of a finite group (or more generally reductive) G on V. The main example is the ring of symmetric polynomials: symmetric polynomials are polynomials that are invariant under permutation of the variables. The fundamental theorem of symmetric polynomials states that this ring is where are elementary symmetric polynomials.
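For example, in two variables the elementary symmetric polynomials are e1 = x + y and e2 = xy, and every symmetric polynomial can be rewritten in terms of them, e.g.:

```latex
x^{2} + y^{2} = e_{1}^{2} - 2e_{2}, \qquad x^{3} + y^{3} = e_{1}^{3} - 3e_{1}e_{2}.
```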
History
Commutative ring theory originated in algebraic number theory, algebraic geometry, and invariant theory. Central to the development of these subjects were the rings of integers in algebraic number fields and algebraic function fields, and the rings of polynomials in two or more variables. Noncommutative ring theory began with attempts to extend the complex numbers to various hypercomplex number systems. The genesis of the theories of commutative and noncommutative rings dates back to the early 19th century, while their maturity was achieved only in the third decade of the 20th century.
More precisely, William Rowan Hamilton put forth the quaternions and biquaternions; James Cockle presented tessarines and coquaternions; and William Kingdon Clifford was an enthusiast of split-biquaternions, which he called algebraic motors. These noncommutative algebras, and the non-associative Lie algebras, were studied within universal algebra before the subject was divided into particular mathematical structure types. One sign of re-organization was the use of direct sums to describe algebraic structure.
The various hypercomplex numbers were identified with matrix rings by Joseph Wedderburn (1908) and Emil Artin (1928). Wedderburn's structure theorems were formulated for finite-dimensional algebras over a field while Artin generalized them to Artinian rings.
In 1920, Emmy Noether, in collaboration with W. Schmeidler, published a paper about the theory of ideals in which they defined left and right ideals in a ring. The following year she published a landmark paper called Idealtheorie in Ringbereichen, analyzing ascending chain conditions with regard to (mathematical) ideals. Noted algebraist Irving Kaplansky called this work "revolutionary"; the publication gave rise to the term "Noetherian ring", and several other mathematical objects being called Noetherian.
Notes
References
| Ring theory | [
"Mathematics"
] | 3,225 | [
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
250,438 | https://en.wikipedia.org/wiki/Slag | The general term slag may be a by-product or co-product of smelting (pyrometallurgical) ores and recycled metals depending on the type of material being produced. Slag is mainly a mixture of metal oxides and silicon dioxide. Broadly, it can be classified as ferrous (co-products of processing iron and steel), ferroalloy (a by-product of ferroalloy production) or non-ferrous/base metals (by-products of recovering non-ferrous materials like copper, nickel, zinc and phosphorus). Within these general categories, slags can be further categorized by their precursor and processing conditions (e.g., blast furnace slags, air-cooled blast furnace slag, granulated blast furnace slag, basic oxygen furnace slag, and electric arc furnace slag). Slag generated from the EAF process can contain toxic metals, which can be hazardous to human and environmental health.
Due to the large demand for ferrous, ferroalloy, and non-ferrous materials, slag production has increased throughout the years despite recycling (most notably in the iron and steelmaking industries) and upcycling efforts. The World Steel Association (WSA) estimates that 600 kg of co-materials (co-products and by-products, of which about 90 wt% are slags) are generated per tonne of steel produced.
Composition
Slag is usually a mixture of metal oxides and silicon dioxide. However, slags can contain metal sulfides and elemental metals. It is important to note that the oxide form may or may not be present once the molten slag solidifies and forms amorphous and crystalline components.
The major components of these slags include the oxides of calcium, magnesium, silicon, iron, and aluminium, with lesser amounts of manganese, phosphorus, and others depending on the specifics of the raw materials used. Furthermore, slag can be classified based on the abundance of iron among other major components.
Ore smelting
In nature, iron, copper, lead, nickel, and other metals are found in impure states called ores, often oxidized and mixed in with silicates of other metals. During smelting, when the ore is exposed to high temperatures, these impurities are separated from the molten metal and can be removed. Slag is the collection of compounds that are removed. In many smelting processes, oxides are introduced to control the slag chemistry, assisting in the removal of impurities and protecting the furnace refractory lining from excessive wear. In this case, the slag is termed synthetic. A good example is steelmaking slag: quicklime (CaO) and magnesite (MgCO3) are introduced for refractory protection, neutralizing the alumina and silica separated from the metal, and assisting in the removal of sulfur and phosphorus from the steel.
As a co-product of steelmaking, slag is typically produced either through the blast furnace – oxygen converter route or the electric arc furnace – ladle furnace route. To flux the silica produced during steelmaking, limestone and/or dolomite are added, as well as other types of slag conditioners such as calcium aluminate or fluorspar.
Classifications
There are three types of slag: ferrous, ferroalloy, and non-ferrous slags, which are produced through different smelting processes.
Ferrous slag
Ferrous slags are produced in different stages of the iron and steelmaking processes, resulting in varying physiochemical properties. Additionally, the rate of cooling of the slag material affects its degree of crystallinity, further diversifying its range of properties. For example, slow-cooled blast furnace slags (or air-cooled slags) tend to have more crystalline phases than quenched blast furnace slags (ground granulated blast furnace slags), making them denser and better suited as an aggregate. They may also have higher free calcium oxide and magnesium oxide content, which are often converted to their hydrated forms if excessive volume expansions are not desired. On the other hand, water-quenched blast furnace slags have greater amorphous phases, giving them latent hydraulic properties (as discovered by Emil Langen in 1862) similar to Portland cement.
During the process of smelting iron, ferrous slag is created, but it is dominated by calcium and silicon compositions. Through this process, ferrous slag can be broken down into blast furnace slag (produced from the iron oxides of molten iron) and then steel slag (formed when steel scrap and molten iron are combined). The major phases of ferrous slag contain calcium-rich olivine-group silicates and melilite-group silicates.
Slag from steel mills in ferrous smelting is designed to minimize iron loss, but it still gives out a significant amount of iron, followed by oxides of calcium, silicon, magnesium, and aluminium. As the slag is cooled down by water, several chemical reactions from a temperature of around (such as oxidization) take place within the slag.
Based on a case study at the Hopewell National Historical Site in Berks and Chester counties, Pennsylvania, US, ferrous slag usually contains lower concentrations of various types of trace elements than non-ferrous slag. However, some of them, such as arsenic (As), iron, and manganese, can accumulate in groundwater and surface water to levels that can exceed environmental guidelines.
Non-ferrous slag
Non-ferrous slag is produced from non-ferrous metals of natural ores. Non-ferrous slag can be categorized into copper, lead, and zinc slags according to the ores' compositions, and these have more potential to impact the environment negatively than ferrous slag. The smelting of copper, lead and bauxite in non-ferrous smelting, for instance, is designed to remove the iron and silica that often occurs with those ores, and separates them as iron-silicate-based slags.
Copper slag, the waste product of smelting copper ores, was studied in an abandoned Penn Mine in California, US. For six to eight months per year, this region is flooded and becomes a reservoir for drinking water and irrigation. Samples collected from the reservoir showed the higher concentration of cadmium (Cd) and lead (Pb) that exceeded regulatory guidelines.
Applications
Slags can serve other purposes, such as assisting in the temperature control of the smelting, and minimizing any re-oxidation of the final liquid metal product before the molten metal is removed from the furnace and used to make solid metal. In some smelting processes, such as ilmenite smelting to produce titanium dioxide, the slag can be the valuable product.
Ancient uses
During the Bronze Age of the Mediterranean area there were a vast number of differential metallurgical processes in use. A slag by-product of such workings was a colorful, glassy material found on the surfaces of slag from ancient copper foundries. It was primarily blue or green and was formerly chipped away and melted down to make glassware products and jewelry. It was also ground into powder to add to glazes for use in ceramics. Some of the earliest such uses for the by-products of slag have been found in ancient Egypt.
Historically, the re-smelting of iron ore slag was common practice, as improved smelting techniques permitted greater iron yields—in some cases exceeding that which was originally achieved. During the early 20th century, iron ore slag was also ground to a powder and used to make agate glass, also known as slag glass.
Modern uses
Construction
Use of slags in the construction industry dates back to the 1800s, when blast furnace slags were used to build roads and railroad ballast. During this time, slag was also used as an aggregate and began to be integrated into the cement industry as a geopolymer.
Today, ground granulated blast furnace slags are used in combination with Portland cement to create "slag cement". Granulated blast furnace slags react with portlandite (), which is formed during cement hydration, via the pozzolanic reaction to produce cementitious properties that primarily contribute to the later strength gain of concrete. This leads to concrete with reduced permeability and better durability. Careful consideration of the slag type used is required, as the high calcium oxide and magnesium oxide content can lead to excessive volume expansion and cracking in concrete.
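Schematically, and in simplified unbalanced form (the actual stoichiometry of the hydrate varies), the pozzolanic reaction of the glassy silica in the slag with portlandite is:

```latex
\mathrm{Ca(OH)_{2}} \;+\; \mathrm{SiO_{2}} \;+\; \mathrm{H_{2}O} \;\longrightarrow\; \text{C--S--H (calcium silicate hydrate)}
```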
These hydraulic properties have also been used for soil stabilization in roads and railroad constructions.
Granulated blast furnace slag is used in the manufacture of high-performance concretes, especially those used in the construction of bridges and coastal features, where its low permeability and greater resistance to chlorides and sulfates can help to reduce corrosive action and deterioration of the structure.
Slag can also be used to create fibers used as an insulation material called slag wool.
Slag is also used as aggregate in asphalt concrete for paving roads. A 2022 study in Finland found that road surfaces containing ferrochrome slag release a highly abrasive dust that has caused car parts to wear at significantly greater than normal rates.
Wastewater treatment and agriculture
Dissolution of slags generates alkalinity that can be used to precipitate out metals, sulfates, and excess nutrients (nitrogen and phosphorus) in wastewater treatment. Similarly, ferrous slags have been used as soil conditioners to re-balance soil pH and as fertilizers that supply calcium and magnesium.
Because of the slowly released phosphate content in phosphorus-containing slag, and because of its liming effect, it is valued as fertilizer in gardens and farms in steel making areas. However, the most important application is construction.
Emerging applications
Slags have one of the highest carbonation potentials among industrial alkaline wastes due to their high calcium oxide and magnesium oxide content, inspiring further studies to test their feasibility in carbon capture and storage (CCS) methods (e.g., direct aqueous sequestration and dry gas-solid carbonation, among others). Across these CCS methods, slags can be transformed into precipitated calcium carbonates to be used in the plastics and concrete industries and leached for metals to be used in the electronics industries.
However, high physical and chemical variability across different types of slags results in performance and yield inconsistencies. Moreover, stoichiometric-based calculation of the carbonation potential can lead to overestimation that can further obfuscate the material's true potential. To this end, some have proposed performing a series of experiments testing the reactivity of a specific slag material (i.e., dissolution) or using the topological constraint theory (TCT) to account for its complex chemical network.
Health and environmental effect
Slags are transported along with slag tailings to "slag dumps", where they are exposed to weathering, with the possibility of leaching of toxic elements and hyperalkaline runoffs into the soil and water, endangering the local ecological communities. Leaching concerns are typically around non-ferrous or base metal slags, which tend to have higher concentrations of toxic elements. However, ferrous and ferroalloy slags may also have them, which raises concerns about highly weathered slag dumps and upcycled materials.
Dissolution of slags can produce highly alkaline groundwater with pH values above 12. The calcium silicates (CaSiO4) in slags react with water to produce calcium and hydroxide ions, which leads to a higher concentration of hydroxide (OH-) in ground water. This alkalinity promotes the mineralization of dissolved CO2 (from the atmosphere) to produce calcite (CaCO3), which can accumulate to a thickness of as much as 20 cm. This can also lead to the dissolution of other metals in slag, such as iron (Fe), manganese (Mn), nickel (Ni), and molybdenum (Mo), which become insoluble in water and mobile as particulate matter. The most effective method to detoxify alkaline ground water discharge is air sparging.
Fine slags and slag dusts generated from milling slags to be recycled into the smelting process or upcycled in a different industry (e.g. construction) can be carried by the wind, affecting a larger ecosystem. It can be ingested and inhaled, posing a direct health risk to the communities near the plants, mines, disposal sites, etc.
See also
Calcium cycle
Circular economy
Clinker (waste)
Dross
Fly ash
Ground granulated blast furnace slag
Heavy metals
Mill scale
Pozzolan
Slag (welding)
Spoil tip
Tailings
References
Further reading
External links
Types of Slag
Electric Arc Furnace (EAF) Slag, US EPA
Amorphous solids
Materials with minor glass phase
Steelmaking
Smelting
By-products
Articles containing video clips | Slag | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,720 | [
"Smelting",
"Metallurgical processes",
"Metallurgy",
"Steelmaking",
"Unsolved problems in physics",
"Metallurgical by-products",
"Amorphous solids"
] |
250,540 | https://en.wikipedia.org/wiki/Rhodopsin | Rhodopsin, also known as visual purple, is a protein encoded by the RHO gene and a G-protein-coupled receptor (GPCR). It is a light-sensitive receptor protein that triggers visual phototransduction in rods. Rhodopsin mediates dim light vision and thus is extremely sensitive to light. When rhodopsin is exposed to light, it immediately photobleaches. In humans, it is regenerated fully in about 30 minutes, after which the rods are more sensitive. Defects in the rhodopsin gene cause eye diseases such as retinitis pigmentosa and congenital stationary night blindness.
Names
Rhodopsin was discovered by Franz Christian Boll in 1876. The name rhodopsin derives from Ancient Greek () for "rose", due to its pinkish color, and () for "sight". It was coined in 1878 by the German physiologist Wilhelm Friedrich Kühne (1837–1900).
When George Wald discovered that rhodopsin is a holoprotein, consisting of retinal and an apoprotein, he called it opsin, which today would be described more narrowly as apo-rhodopsin. Today, the term opsin refers more broadly to the class of G-protein-coupled receptors that bind retinal and as a result become a light sensitive photoreceptor, including all closely related proteins. When Wald and colleagues later isolated iodopsin from chicken retinas, thereby discovering the first known cone opsin, they called apo-iodopsin photopsin (for its relation to photopic vision) and apo-rhodopsin scotopsin (for its use in scotopic vision).
General
Rhodopsin is a protein found in the outer segment discs of rod cells. It mediates scotopic vision, which is monochromatic vision in dim light. Rhodopsin most strongly absorbs green-blue light (~500 nm) and appears therefore reddish-purple, hence the archaic term "visual purple".
Several closely related opsins differ only in a few amino acids and in the wavelengths of light that they absorb most strongly. Humans have nine opsins, including rhodopsin, as well as cryptochrome (which is light-sensitive, but not an opsin).
Structure
Rhodopsin, like other opsins, is a G-protein-coupled receptor (GPCR). GPCRs are chemoreceptors that embed in the lipid bilayer of the cell membranes and have seven transmembrane domains forming a binding pocket for a ligand. The ligand for rhodopsin is the vitamin A-based chromophore 11-cis-retinal, which lies horizontally to the cell membrane and is covalently bound to a lysine residue (Lys296) in the seventh transmembrane domain through a Schiff base. However, 11-cis-retinal only blocks the binding pocket and does not activate rhodopsin. It is only activated when 11-cis-retinal absorbs a photon of light and isomerizes to all-trans-retinal, the receptor-activating form, causing conformational changes in rhodopsin (bleaching), which activate a phototransduction cascade. Thus, a chemoreceptor is converted to a light or photo(n)receptor.
The retinal binding lysine is conserved in almost all opsins, only a few opsins having lost it during evolution. Opsins without the lysine are not light sensitive; this also holds for rhodopsin when its retinal-binding lysine is mutated, and some of those mutations make rhodopsin constitutively (continuously) active even without light. Wild-type rhodopsin is also constitutively active if no 11-cis-retinal is bound, but much less so. Therefore 11-cis-retinal is an inverse agonist. Such mutations are one cause of autosomal dominant retinitis pigmentosa. Artificially, the retinal binding lysine can be shifted to other positions, even into other transmembrane domains, without changing the activity.
The rhodopsin of cattle has 348 amino acids, the retinal binding lysine being Lys296. It was the first opsin whose amino acid sequence and 3D-structure were determined. Its structure has been studied in detail by x-ray crystallography on rhodopsin crystals. Several models (e.g., the bicycle-pedal mechanism, hula-twist mechanism) attempt to explain how the retinal group can change its conformation without clashing with the enveloping rhodopsin protein pocket. Recent data support that rhodopsin is a functional monomer, instead of a dimer, which was the paradigm of G-protein-coupled receptors for many years.
Within its native membrane, rhodopsin is found at a high density, facilitating its ability to capture photons. Due to its dense packing within the membrane, there is a higher chance of rhodopsin capturing photons. However, the high density also provides a disadvantage when it comes to G protein signaling, because diffusion becomes more difficult in a crowded membrane that is packed with the receptor, rhodopsin.
Phototransduction
Rhodopsin is an essential G-protein coupled receptor in phototransduction.
Activation
In rhodopsin, the aldehyde group of retinal is covalently linked to the amino group of a lysine residue on the protein in a protonated Schiff base (-NH+=CH-). When rhodopsin absorbs light, its retinal cofactor isomerizes from the 11-cis to the all-trans configuration, and the protein subsequently undergoes a series of relaxations to accommodate the altered shape of the isomerized cofactor. The intermediates formed during this process were first investigated in the laboratory of George Wald, who received the Nobel prize for this research in 1967. The photoisomerization dynamics has been subsequently investigated with time-resolved IR spectroscopy and UV/Vis spectroscopy. A first photoproduct called photorhodopsin forms within 200 femtoseconds after irradiation, followed within picoseconds by a second one called bathorhodopsin with distorted all-trans bonds. This intermediate can be trapped and studied at cryogenic temperatures, and was initially referred to as prelumirhodopsin. In subsequent intermediates lumirhodopsin and metarhodopsin I, the Schiff's base linkage to all-trans retinal remains protonated, and the protein retains its reddish color. The critical change that initiates the neuronal excitation involves the conversion of metarhodopsin I to metarhodopsin II, which is associated with deprotonation of the Schiff's base and change in color from red to yellow.
Phototransduction cascade
The product of light activation, Metarhodopsin II, initiates the visual phototransduction second messenger pathway by stimulating the G-protein transducin (Gt), resulting in the liberation of its α subunit. This GTP-bound subunit in turn activates a cGMP phosphodiesterase. The cGMP phosphodiesterase hydrolyzes (breaks down) cGMP, lowering its local concentration so it can no longer activate cGMP-dependent cation channels. This leads to the hyperpolarization of photoreceptor cells, changing the rate at which they release transmitters.
Deactivation
Meta II (metarhodopsin II) is deactivated rapidly after activating transducin by rhodopsin kinase and arrestin. Rhodopsin pigment must be regenerated for further phototransduction to occur. This means replacing all-trans-retinal with 11-cis-retinal and the decay of Meta II is crucial in this process. During the decay of Meta II, the Schiff base link that normally holds all-trans-retinal and the apoprotein opsin (aporhodopsin) is hydrolyzed and becomes Meta III. In the rod outer segment, Meta III decays into separate all-trans-retinal and opsin. A second product of Meta II decay is an all-trans-retinal opsin complex in which the all-trans-retinal has been translocated to second binding sites. Whether the Meta II decay runs into Meta III or the all-trans-retinal opsin complex seems to depend on the pH of the reaction. Higher pH tends to drive the decay reaction towards Meta III.
Diseases of the retina
Mutations in the rhodopsin gene contribute significantly to various diseases of the retina such as retinitis pigmentosa. In general, the defective rhodopsin aggregates with ubiquitin in inclusion bodies, disrupts the intermediate filament network, and impairs the ability of the cell to degrade non-functioning proteins, which leads to photoreceptor apoptosis. Other mutations in rhodopsin lead to X-linked congenital stationary night blindness, mainly due to constitutive activation, when the mutations occur around the chromophore binding pocket of rhodopsin. Several other pathological states relating to rhodopsin have been discovered, including poor post-Golgi trafficking, dysregulative activation, rod outer segment instability and arrestin binding.
See also
Bacteriorhodopsin, used in some halobacteria as a light-driven proton pump.
Explanatory notes
References
Further reading
External links
The Rhodopsin Protein
Photoisomerization of rhodopsin, animation.
Rhodopsin and the eye, summary with pictures.
Biological pigments
Eye
G protein-coupled receptors
Genes on human chromosome 3
Rhodopsins
Sensory receptors | Rhodopsin | [
"Chemistry",
"Biology"
] | 2,139 | [
"G protein-coupled receptors",
"Biological pigments",
"Pigmentation",
"Signal transduction"
] |
250,596 | https://en.wikipedia.org/wiki/Qiblih |
In the Baháʼí Faith, the Qiblih (, "direction") is the location to which Baháʼís face when saying their daily obligatory prayers. The Qiblih is fixed at the Shrine of Baháʼu'lláh, near Acre, in present-day Israel; approximately at .
In Bábism the Qiblih was originally identified by the Báb with "the One Whom God will make manifest", a messianic figure predicted by the Báb. Baháʼu'lláh, the Prophet-founder of the Baháʼí Faith, claimed to be the figure predicted by the Báb. In the Kitáb-i-Aqdas, Baháʼu'lláh confirms the Báb's ordinance and further ordains his final resting-place as the Qiblih for his followers. ʻAbdu'l-Bahá describes that spot as the "luminous Shrine", "the place around which circumambulate the Concourse on High". The concept exists in other religions. Jews face Jerusalem, more specifically the site of the former Temple of Jerusalem. Muslims face the Kaaba in Mecca, which they also call the Qibla (another transliteration of Qiblih).
Baháʼís do not worship the Shrine of Baháʼu'lláh or its contents, the Qiblih is simply a focal point for the obligatory prayers. When praying obligatory prayers the members of the Baháʼí Faith face in the direction of the Qiblih. It is a fixed requirement for the recitation of an obligatory prayer, but for other prayers and devotions one may follow what is written in the Qurʼan: "Whichever way ye turn, there is the face of God."
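Because the Qiblih is a single fixed point on the Earth's surface, the direction to face from any given location is the initial great-circle bearing toward that point. A minimal sketch of that calculation is given below; the function name and the coordinates used in the example call are illustrative placeholders, not authoritative values.

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing, in degrees clockwise from north,
    from point 1 (observer) toward point 2 (target). Inputs are in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

# Placeholder coordinates for an observer and for the target point
# (illustrative values only -- not the actual coordinates of the shrine).
print(round(initial_bearing(40.7, -74.0, 32.9, 35.1), 1))
```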
Burial of the dead
"The dead should be buried with their face turned towards the Qiblih. This also is in accordance with what is practiced in Islam. There is also a congregational prayer to be recited. Besides this there is no other ceremony to be performed" (From a letter written on behalf of Shoghi Effendi to an individual believer, July 6, 1935).
See also
Direction of prayer
Mizrah, the direction of prayer in the Jewish faith, facing the Temple Mount in Jerusalem
Ad orientem, comparable concept in traditional Christianity; informs orientation of many church buildings
Qibla, the Muslim direction of prayer
Orientation of churches
Spatial deixis, spatial orientation relevant to an utterance
Citations
References
External links
Excerpts from the Kitáb-i-Aqdas regarding the Qiblih
Direction to Bahjí
Find the direction to Bahji with Google Maps
Bahá'í prayer
Orientation (geometry) | Qiblih | [
"Physics",
"Mathematics"
] | 542 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
250,708 | https://en.wikipedia.org/wiki/Induction%20coil | An induction coil or "spark coil" (archaically known as an inductorium or Ruhmkorff coil after Heinrich Rühmkorff) is a type of transformer used to produce high-voltage pulses from a low-voltage direct current (DC) supply. To create the flux changes necessary to induce voltage in the secondary coil, the direct current in the primary coil is repeatedly interrupted by a vibrating mechanical contact called an interrupter. Invented in 1836 by the Irish-Catholic priest Nicholas Callan, also independently by American inventor Charles Grafton Page, the induction coil was the first type of transformer. It was widely used in x-ray machines, spark-gap radio transmitters, arc lighting and quack medical electrotherapy devices from the 1880s to the 1920s. Today its only common use is as the ignition coils in internal combustion engines and in physics education to demonstrate induction.
Construction and function
An induction coil consists of two coils of insulated wire wound around a common iron core (M). One coil, called the primary winding (P), is made from relatively few (tens or hundreds) turns of coarse wire. The other coil, the secondary winding, (S) typically consists of up to a million turns of fine wire (up to 40 gauge).
An electric current is passed through the primary, creating a magnetic field. Because of the common core, most of the primary's magnetic field couples with the secondary winding. The primary behaves as an inductor, storing energy in the associated magnetic field. When the primary current is suddenly interrupted, the magnetic field rapidly collapses. This causes a high voltage pulse to be developed across the secondary terminals through electromagnetic induction. Because of the large number of turns in the secondary coil, the secondary voltage pulse is typically many thousands of volts. This voltage is often sufficient to cause an electric spark to jump across an air gap (G) separating the secondary's output terminals. For this reason, induction coils were called spark coils.
An induction coil is traditionally characterised by the length of spark it can produce; a '4 inch' (10 cm) induction coil could produce a 4 inch spark. Until the development of the cathode ray oscilloscope, this was the most reliable measurement of the peak voltage of such asymmetric waveforms. The relationship between spark length and voltage is approximately linear over a wide range, with representative peak voltages of about 110 kV, 150 kV, 190 kV and 230 kV for successively longer sparks.
Curves supplied by a 1984 reference agree closely with those values.
Interrupter
To operate the coil continually, the DC supply current must be repeatedly connected and disconnected to create the magnetic field changes needed for induction. To do that, induction coils use a magnetically activated vibrating arm called an interrupter or break (A) to rapidly connect and break the current flowing into the primary coil. The interrupter is mounted on the end of the coil next to the iron core. When the power is turned on, the increasing current in the primary coil produces an increasing magnetic field, which attracts the interrupter's iron armature (A). After a time, the magnetic attraction overcomes the armature's spring force, and the armature begins to move. When the armature has moved far enough, the pair of contacts (K) in the primary circuit open and disconnect the primary current. Disconnecting the current causes the magnetic field to collapse and create the spark. Also, the collapsed field no longer attracts the armature, so the spring force accelerates the armature toward its initial position. A short time later the contacts reconnect, and the current starts building the magnetic field again. The whole process starts over and repeats many times per second. The secondary voltage is roughly proportional to the rate of change of the primary current.
Opposite potentials are induced in the secondary when the interrupter 'breaks' the circuit and 'closes' the circuit. However, the current change in the primary is much more abrupt when the interrupter 'breaks'. When the contacts close, the current builds up slowly in the primary because the supply voltage has a limited ability to force current through the coil's inductance. In contrast, when the interrupter contacts open, the current falls to zero suddenly. So the pulse of voltage induced in the secondary at 'break' is much larger than the pulse induced at 'close'; it is the 'break' that generates the coil's high voltage output.
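The asymmetry can be made concrete with a rough order-of-magnitude sketch of the secondary voltage, which scales with the mutual inductance times the rate of current change. All component values below are illustrative assumptions rather than figures from this article.

```python
# Rough illustration of why the 'break' pulse dwarfs the 'close' pulse
# (secondary voltage ~ M * dI/dt). All values are illustrative assumptions.
V_supply = 12.0     # supply voltage, V
L_primary = 0.01    # primary self-inductance, H
M = 0.5             # primary-secondary mutual inductance, H
I_peak = 5.0        # primary current just before the contacts open, A
t_break = 50e-6     # time over which the interrupter cuts off the current, s

didt_close = V_supply / L_primary   # initial rise rate when the contacts close, A/s
didt_break = I_peak / t_break       # average fall rate when the contacts open, A/s

print(f"close: dI/dt ~ {didt_close:.0f} A/s, V_sec ~ {M * didt_close:.0f} V")
print(f"break: dI/dt ~ {didt_break:.0f} A/s, V_sec ~ {M * didt_break / 1000:.0f} kV")
```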
Capacitor
An arc forms at the interrupter contacts on break which has undesirable effects: the arc consumes energy stored in the magnetic field, reduces the output voltage, and damages the contacts. To prevent this, a quenching capacitor (C) of 0.5 to 15 μF is connected across the primary coil to slow the rise in the voltage after a break. The capacitor and primary winding together form a tuned circuit, so on break, a damped sinusoidal wave of current flows in the primary and likewise induces a damped wave in the secondary. As a result, the high voltage output consists of a series of damped waves.
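As a quick check of the tuned-circuit behaviour, a minimal sketch (with assumed, illustrative component values) gives the ringing frequency and damping of the primary/quench-capacitor circuit:

```python
import math

# Minimal sketch of the primary/quenching-capacitor tuned circuit.
# Component values are illustrative assumptions, not figures from the article.
L_primary = 0.01     # primary inductance, H
C_quench = 1.0e-6    # quenching capacitor, F (article range: 0.5-15 uF)
R_primary = 1.0      # primary winding resistance, ohm

f_ring = 1.0 / (2 * math.pi * math.sqrt(L_primary * C_quench))  # ringing frequency, Hz
alpha = R_primary / (2 * L_primary)                             # decay rate of the envelope, 1/s

print(f"ringing frequency ~ {f_ring:.0f} Hz, envelope decays as exp(-{alpha:.0f} t)")
```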
Construction details
To prevent the high voltages generated in the coil from breaking down the thin insulation and arcing between the secondary wires, the secondary coil uses special construction so as to avoid having wires carrying large voltage differences lying next to each other. In one widely used technique, the secondary coil is wound in many thin flat pancake-shaped sections (called "pies"), connected in series.
The primary coil is first wound on the iron core and insulated from the secondary with a thick paper or rubber coating. Then each secondary subcoil is connected to the coil next to it and slid onto the iron core, insulated from adjoining coils with waxed cardboard disks. The voltage developed in each subcoil isn't large enough to jump between the wires in the subcoil. Large voltages are only developed across many subcoils in series, which are too widely separated to arc over. To give the entire coil a final insulating coating, it is immersed in melted paraffin wax or rosin; the air is evacuated to ensure there are no air bubbles left inside, and the paraffin is allowed to solidify, so the entire coil is encased in wax.
To prevent eddy currents, which cause energy losses, the iron core is made of a bundle of parallel iron wires, individually coated with shellac to insulate them electrically. The eddy currents, which flow in loops in the core perpendicular to the magnetic axis, are blocked by the layers of insulation. The ends of the insulated primary coil often protruded several inches from either end of the secondary coil, to prevent arcs from the secondary to the primary or the core.
Mercury and electrolytic interrupters
Although modern induction coils used for educational purposes all use the vibrating arm 'hammer' type interrupter described above, these were inadequate for powering the large induction coils used in spark-gap radio transmitters and x-ray machines around the turn of the 20th century. In powerful coils the high primary current created arcs at the interrupter contacts which quickly destroyed the contacts. Also, since each "break" produces a pulse of voltage from the coil, the more breaks per second the greater the power output. Hammer interrupters were not capable of interruption rates over 200 breaks per second, and those used on powerful coils were limited to 20–40 breaks per second.
Therefore, much research went into improving interrupters, and improved designs were used in high-power coils; hammer interrupters were only used on small coils producing sparks of less than 8 inches. Léon Foucault and others developed interrupters consisting of an oscillating needle dipping into and out of a container of mercury. The mercury was covered with a layer of spirits which extinguished the arc quickly, causing faster switching. These were often driven by a separate electromagnet or motor, which allowed the interruption rate and "dwell" time to be adjusted separately from the primary current.
The largest coils used either electrolytic or mercury turbine interrupters. The electrolytic or Wehnelt interrupter, invented by Arthur Wehnelt in 1899, consisted of a short platinum needle anode immersed in an electrolyte of dilute sulfuric acid, with the other side of the circuit connected to a lead plate cathode. When the primary current passed through it, hydrogen gas bubbles formed on the needle which repeatedly broke the circuit. This resulted in a primary current broken randomly at rates up to 2000 breaks per second. They were preferred for powering X-ray tubes. They produced a great deal of heat, which could cause the hydrogen to explode. Mercury turbine interrupters had a centrifugal pump which sprayed a stream of liquid mercury onto rotating metal contacts. They could achieve interruption rates up to 10,000 breaks per second and were the most widely used type of interrupter in commercial wireless stations.
History
The induction coil was the first type of electrical transformer. During its development between 1836 and the 1860s, mostly by trial and error, researchers discovered many of the principles that governed all transformers, such as the proportionality between turns and output voltage and the use of a "divided" iron core to reduce eddy current losses.
Michael Faraday discovered the principle of induction, Faraday's induction law, in 1831 and did the first experiments with induction between coils of wire. The induction coil was invented by the American physician Charles Grafton Page in 1836 and independently by Irish scientist and Catholic priest Nicholas Callan in the same year at the St. Patrick's College, Maynooth and improved by William Sturgeon. George Henry Bachhoffner and Sturgeon (1837) independently discovered that a "divided" iron core of iron wires reduced power losses. The early coils had hand cranked interrupters, invented by Callan and Antoine Philibert Masson (1837). The automatic 'hammer' interrupter was invented by Rev. Prof. James William MacGauley (1838) of Dublin, Ireland, Johann Philipp Wagner (1839), and Christian Ernst Neeff (1847). Hippolyte Fizeau (1853) introduced the use of the quenching capacitor. Heinrich Ruhmkorff generated higher voltages by greatly increasing the length of the secondary, in some coils using 5 or 6 miles (10 km) of wire and produced sparks up to 16 inches. In the early 1850s, American inventor Edward Samuel Ritchie introduced the divided secondary construction to improve insulation. Jonathan Nash Hearder worked on induction coils. Callan's induction coil was named an IEEE Milestone in 2006.
Induction coils were used to provide high voltage for early gas discharge and Crookes tubes and other high voltage research. They were also used to provide entertainment (lighting Geissler tubes, for example) and to drive small "shocking coils", Tesla coils and violet ray devices used in quack medicine. They were used by Hertz to demonstrate the existence of electromagnetic waves, as predicted by James Clerk Maxwell and by Lodge and Marconi in the first research into radio waves. Their largest industrial use was probably in early wireless telegraphy spark-gap radio transmitters and to power early cold cathode x-ray tubes from the 1890s to the 1920s, after which they were supplanted in both these applications by AC transformers and vacuum tubes. However their largest use was as the ignition coil or spark coil in the ignition system of internal combustion engines, where they are still used, although the interrupter contacts are now replaced by solid state switches. A smaller version is used to trigger the flash tubes used in cameras and strobe lights.
See also
Ignition coil
Trembler coil
Spark gap transmitter
Transformer
Tesla coil
Faraday's law of induction
Ignition system
Inductor
Magnetic field
Nicholas Callan
Footnotes
Further reading
Norrie, H. S., "Induction Coils: How to Make, Use, and Repair Them". Norman H. Schneider, 1907, New York. 4th edition.
Has detailed history of invention of induction coil
External links
Battery powered Driver circuit for Induction Coils
The Cathode Ray Tube site
Relay Technical Information See section "Contact Protection – Counter EMF".
Capacitive Discharge Ignition vs Magnetic Discharge Ignition: Ignition System Options for the TR4A See figure 9 for actual discharge.
Electric transformers
Electrical breakdown | Induction coil | [
"Physics"
] | 2,539 | [
"Physical phenomena",
"Electrical phenomena",
"Electrical breakdown"
] |
250,815 | https://en.wikipedia.org/wiki/Selectron%20tube | The Selectron was an early form of digital computer memory developed by Jan A. Rajchman and his group at the Radio Corporation of America (RCA) under the direction of Vladimir K. Zworykin. It was a vacuum tube that stored digital data as electrostatic charges using technology similar to the Williams tube storage device. The team was never able to produce a commercially viable form of Selectron before magnetic-core memory became almost universal.
Development
Development of Selectron started in 1946 at the behest of John von Neumann of the Institute for Advanced Study, who was in the midst of designing the IAS machine and was looking for a new form of high-speed memory.
RCA's original design concept had a capacity of 4096 bits, with a planned production of 200 by the end of 1946. They found the device to be much more difficult to build than expected, and the tubes were still not available by the middle of 1948. As development dragged on, the IAS machine was forced to switch to Williams tubes for storage, and the primary customer for Selectron disappeared. RCA lost interest in the design and assigned its engineers to improve televisions.
A contract from the US Air Force led to a re-examination of the device in a 256-bit form. Rand Corporation took advantage of this project to switch their own IAS machine, the JOHNNIAC, to this new version of the Selectron, using 80 of them to provide 512 40-bit words of main memory. They signed a development contract with RCA to produce enough tubes for their machine at a projected cost of $500 per tube.
Around this time IBM expressed an interest in the Selectron as well, but this did not lead to additional production. As a result, RCA assigned their engineers to color television development, and put the Selectron in the hands of "the mothers-in-law of two deserving employees (the Chairman of the Board and the President)."
Both the Selectron and the Williams tube were superseded in the market by the compact and cost-effective magnetic-core memory, in the early 1950s. The JOHNNIAC developers had decided to switch to core even before the first Selectron-based version had been completed.
Principle of operation
Electrostatic storage
The Williams tube was an example of a general class of cathode-ray tube (CRT) devices known as storage tubes.
The primary function of a conventional CRT is to display an image by lighting phosphor using a beam of electrons fired at it from an electron gun at the back of the tube. The target point of the beam is steered around the front of the tube through the use of deflection magnets or electrostatic plates.
Storage tubes were based on CRTs, sometimes unmodified. They relied on two normally undesirable principles of phosphor used in the tubes. One was that when electrons from the CRT's electron gun struck the phosphor to light it, some of the electrons "stuck" to the tube and caused a localized static electric charge to build up. This charge opposed any future electrons flowing into that area from the gun, and caused differences in brightness. The second was that the phosphor, like many materials, also released new electrons when struck by an electron beam, a process known as secondary emission.
Secondary emission had the useful feature that the rate of electron release was significantly non-linear. When a voltage was applied that crossed a certain threshold, the rate of emission increased dramatically. This caused the lit spot to rapidly decay, which also caused any stuck electrons to be released as well. Visual systems used this process to erase the display, causing any stored pattern to rapidly fade. For computer uses it was the rapid release of the stuck charge that allowed it to be used for storage.
In the Williams tube, the electron gun at the back of an otherwise typical CRT is used to deposit a series of small patterns representing a 1 or 0 on the phosphor in a grid representing memory addresses. To read the display, the beam scanned the tube again, this time set to a voltage very close to that of the secondary emission threshold. The patterns were selected to bias the tube very slightly positive or negative. When the stored static electricity was added to the voltage of the beam, the total voltage either crossed the secondary emission threshold or didn't. If it crossed the threshold, a burst of electrons was released as the dot decayed. This burst was read capacitively on a metal plate placed just in front of the display side of the tube.
There were four general classes of storage tubes; the "surface redistribution type" represented by the Williams tube, the "barrier grid" system, which was unsuccessfully commercialized by RCA as the Radechon tube, the "sticking potential" type which was not used commercially, and the "holding beam" concept, of which the Selectron is a specific example.
Holding beam concept
In the most basic implementation, the holding beam tube uses three electron guns; one for writing, one for reading, and a third "holding gun" that maintains the pattern. The general operation is very similar to the Williams tube in concept. The main difference was the holding gun, which fired continually and unfocussed so it covered the entire storage area on the phosphor. This caused the phosphor to be continually charged to a selected voltage, somewhat below that of the secondary emission threshold.
Writing was accomplished by firing the writing gun at low voltage in a fashion similar to the Williams tube, adding a further voltage to the phosphor. Thus the storage pattern was the slight difference between two voltages stored on the tube, typically only a few tens of volts different. In comparison, the Williams tube used much higher voltages, producing a pattern that could only be stored for a short period before it decayed below readability.
Reading was accomplished by scanning the reading gun across the storage area. This gun was set to a voltage that would cross the secondary emission threshold for the entire display. If the scanned area held the holding gun potential a certain number of electrons would be released, if it held the writing gun potential the number would be higher. The electrons were read on a grid of fine wires placed behind the display, making the system entirely self-contained. In contrast, the Williams tube's read plate was in front of the tube, and required continual mechanical adjustment to work properly. The grid also had the advantage of breaking the display into individual spots without requiring the tight focus of the Williams system.
General operation was the same as the Williams system, but the holding concept had two major advantages. One was that it operated at much lower voltage differences and was thus able to safely store data for a longer period of time. The other was that the same deflection magnet drivers could be sent to several electron guns to produce a single larger device with no increase in complexity of the electronics.
Design
The Selectron further modified the basic holding gun concept through the use of individual metal eyelets that were used to store additional charge in a more predictable and long-lasting fashion.
Unlike a CRT where the electron gun is a single point source consisting of a filament and single charged accelerator, in the Selectron the "gun" is a plate and the accelerator is a grid of wires (thus borrowing some design notes from the barrier-grid tube). Switching circuits allow voltages to be applied to the wires to turn them on or off. When the gun fires through the eyelets, it is slightly defocussed. Some of the electrons strike the eyelet and deposit a charge on it.
The original 4096-bit Selectron was a vacuum tube configured as 1024 by 4 bits. It had an indirectly heated cathode running up the middle, surrounded by two separate sets of wires (one radial, one axial) forming a cylindrical grid array, and finally a dielectric storage material coating on the inside of four segments of an enclosing metal cylinder, called the signal plates. The bits were stored as discrete regions of charge on the smooth surfaces of the signal plates.
The two sets of orthogonal grid wires were normally "biased" slightly positive, so that the electrons from the cathode were accelerated through the grid to reach the dielectric. The continuous flow of electrons allowed the stored charge to be continuously regenerated by the secondary emission of electrons. To select a bit to be read from or written to, all but two adjacent wires on each of the two grids were biased negative, allowing current to flow to the dielectric at one location only.
In this respect, the Selectron works in the opposite sense of the Williams tube. In the Williams tube, the beam is continually scanning in a read/write cycle which is also used to regenerate data. In contrast, the Selectron is almost always regenerating the entire tube, only breaking this periodically to do actual reads and writes. This not only made operation faster due to the lack of required pauses but also meant the data was much more reliable as it was constantly refreshed.
Writing was accomplished by selecting a bit, as above, and then sending a pulse of potential, either positive or negative, to the signal plate. With a bit selected, electrons would be pulled onto (with a positive potential) or pushed from (negative potential) the dielectric. When the bias on the grid was dropped, the electrons were trapped on the dielectric as a spot of static electricity.
To read from the device, a bit location was selected and a pulse sent from the cathode. If the dielectric for that bit contained a charge, the electrons would be pushed off the dielectric and read as a brief pulse of current in the signal plate. No such pulse meant that the dielectric must not have held a charge.
The smaller capacity 256-bit (128 by 2 bits) "production" device was in a similar vacuum-tube envelope. It was built with two storage arrays of discrete "eyelets" on a rectangular plate, separated by a row of eight cathodes. The pin count was reduced from 44 for the 4096-bit device down to 31 pins and two coaxial signal output connectors. This version included visible green phosphors in each eyelet so that the bit status could also be read by eye.
Patents
Cylindrical 4096-bit Selectron
Planar 256-bit Selectron
References
Citations
Bibliography
Republished in IEEE Annals of the History of Computing, Volume 20 Number 4 (October 1988), pp. 11–28
External links
The Selectron
Early Devices display: Memories — has a picture of a 256-bit Selectron about halfway down the page
More pictures
History of the RCA Selectron
Computer memory
RCA brands
Vacuum tubes | Selectron tube | [
"Physics"
] | 2,200 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
251,061 | https://en.wikipedia.org/wiki/Glass-ceramic | Glass-ceramics are polycrystalline materials produced through controlled crystallization of base glass, producing a fine uniform dispersion of crystals throughout the bulk material. Crystallization is accomplished by subjecting suitable glasses to a carefully regulated heat treatment schedule, resulting in the nucleation and growth of crystal phases. In many cases, the crystallization process proceeds to near completion, although a small residual glass phase often remains.
Glass-ceramic materials share many properties with both glasses and ceramics. Glass-ceramics have an amorphous phase and one or more crystalline phases and are produced by a so-called "controlled crystallization" in contrast to a spontaneous crystallization, which is usually not wanted in glass manufacturing. Glass-ceramics have the fabrication advantage of glass, as well as special properties of ceramics. When used for sealing, some glass-ceramics do not require brazing but can withstand brazing temperatures up to 700 °C.
Glass-ceramics usually have between 30% [m/m] and 90% [m/m] crystallinity and yield an array of materials with interesting properties like zero porosity, high strength, toughness, translucency or opacity, pigmentation, opalescence, low or even negative thermal expansion, high temperature stability, fluorescence, machinability, ferromagnetism, resorbability or high chemical durability, biocompatibility, bioactivity, ion conductivity, superconductivity, isolation capabilities, low dielectric constant and loss, corrosion resistance, high resistivity and break-down voltage. These properties can be tailored by controlling the base-glass composition and by controlled heat treatment/crystallization of base glass. In manufacturing, glass-ceramics are valued for having the strength of ceramic but the hermetic sealing properties of glass.
Glass-ceramics are mostly produced in two steps: First, a glass is formed by a glass-manufacturing process, after which the glass is cooled down. Second, the glass is put through a controlled heat treatment schedule. In this heat treatment the glass partly crystallizes. In most cases nucleation agents are added to the base composition of the glass-ceramic. These nucleation agents aid and control the crystallization process. Because there is usually no pressing and sintering, glass-ceramics have no pores, unlike sintered ceramics.
A wide variety of glass-ceramic systems exist, e.g., the Li2O × Al2O3 × nSiO2 system (LAS system), the MgO × Al2O3 × nSiO2 system (MAS system), and the ZnO × Al2O3 × nSiO2 system (ZAS system).
History
Réaumur, a French chemist, made early attempts to produce polycrystalline materials from glass, demonstrating that if glass bottles were packed into a mixture of sand and gypsum, and subjected to red heat for several days, the glass bottles turned opaque and porcelain-like. Although Réaumur was successful in the conversion of glass to a polycrystalline material, he was unsuccessful in achieving the control of the crystallization process, which is a key step in producing true practical glass ceramics with the improved properties mentioned above.
The discovery of glass-ceramics is credited to Donald Stookey, a renowned glass scientist who worked at Corning Inc. for 47 years. The first iteration stemmed from a glass material, Fotoform, which was also discovered by Stookey while he was searching for a photo-etchable material to be used in television screens. Soon after the introduction of Fotoform, the first glass-ceramic material was discovered when Stookey overheated a Fotoform plate in a furnace at 900 degrees Celsius and found an opaque, milky-white plate inside the furnace rather than the molten mess that was expected. While examining the new material, which Stookey aptly named Fotoceram, he noted that it was much stronger than the Fotoform it was created from, as it survived a short fall onto concrete.
In the late 1950s two more glass-ceramic materials were developed by Stookey: one found use as the radome in the nose cone of missiles, while the other led to the line of consumer kitchenware known as Corningware. Corning executives announced Stookey's discovery of the latter "new basic material", called Pyroceram, which was touted as light, durable, capable of being an electrical insulator and yet thermally shock resistant. At the time, only a few materials offered the specific combination of characteristics that Pyroceram did, and the material was rolled out as the Corningware kitchen line on August 7, 1958.
The success of Pyroceram inspired Corning to put effort into strengthening glass, an initiative led by Corning's technical directors and titled Project Muscle. A lesser-known "ultrastrong" glass-ceramic material developed in 1962, called Chemcor (now known as Gorilla Glass), was produced by Corning's glass team as part of the Project Muscle effort. Chemcor was even used to improve the Pyroceram line of products: in 1961 Corning launched Centura Ware, a new line of Pyroceram that was lined with a glass laminate (invented by John MacDowell) and treated with the Chemcor process. Stookey continued to explore the properties of glass-ceramics, discovering how to make the material transparent in 1966, though Corning, fearing it would cannibalize Pyrex sales, did not release a product based on this innovation until the late 1970s, under the name Visions.
Nucleation and crystal growth
The key to engineering a glass-ceramic material is controlling the nucleation and growth of crystals in the base glass. The amount of crystallinity will vary depending on the amount of nuclei present and the time and temperature at which the material is heated. It is important to understand the types of nucleation occurring in the material, whether it is homogeneous or heterogeneous.
Homogeneous nucleation is a process resulting from the inherent thermodynamic instability of a glassy material. When enough thermal energy is applied to the system, the metastable glassy phase begins to return to the lower-energy, crystalline state. The term "homogeneous" is used here because the formation of nuclei comes from the base glass without any second phases or surfaces promoting their formation.
The rate of homogeneous nucleation in a condensed system can be described by the following equation, proposed by Becker in 1938:
$I = A \exp\left(-\frac{W^{*} + Q}{kT}\right)$
where $Q$ is the activation energy for diffusion across the phase boundary, $A$ is a constant, and $W^{*}$ is the maximum activation energy for the formation of a stable nucleus, as given by the equation below:
$W^{*} = \frac{16\pi\sigma^{3}}{3(\Delta G_{v})^{2}}$
where $\Delta G_{v}$ is the change of free energy per unit volume resulting from the transformation from one phase to the other, and $\sigma$ can be equated with interfacial tension.
Heterogeneous nucleation is a term used when a nucleating agent is introduced into the system to aid and control the crystallization process. The presence of this nucleating agent, in the form of an additional phase or surface, can act as a catalyst for nucleation and is particularly effective if there is epitaxy between the nucleus and the substrate. There are a number of metals that can act as nucleating agents in glass because they can exist in the glass in the form of particle dispersion of colloidal dimensions. Examples include copper, metallic silver, and platinum. It was suggested by Stookey in 1959 that the effectiveness of metallic nucleation catalysts relates to the similarities between the crystal structures of the metals and the phase being nucleated.
The most important feature of heterogeneous nucleation is that the interfacial tension between the heterogeneity and the nucleated phase is minimized. This means that the influence that the catalyzing surface has on the rate of nucleation is determined by the contact angle at the interface. Based on this, Turnbull and Vonnegut (1952) modified the equation for the homogeneous nucleation rate to give an expression for the heterogeneous nucleation rate:
$I_{het} = A \exp\left(-\frac{W^{*} f(\theta)}{kT}\right)$
If the activation energy for diffusion is included, as suggested by Stookey (1959a), the equation then becomes:
$I_{het} = A \exp\left(-\frac{W^{*} f(\theta) + Q}{kT}\right)$
From these equations, heterogeneous nucleation can be described in terms of the same parameters as homogeneous nucleation together with a shape factor $f(\theta)$, which is a function of the contact angle θ. The shape factor is given by
$f(\theta) = \frac{(2 + \cos\theta)(1 - \cos\theta)^{2}}{4}$
if the nucleus has the form of a spherical cap.
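A small numeric check of the spherical-cap shape factor is sketched below; the contact angles chosen are arbitrary illustrations, not values from the article.

```python
import math

# Shape factor f(theta) for a spherical-cap nucleus, as defined above.
# The smaller f(theta), the more the catalysing surface lowers the barrier W*.
def shape_factor(theta_deg: float) -> float:
    t = math.radians(theta_deg)
    return (2 + math.cos(t)) * (1 - math.cos(t)) ** 2 / 4

for theta in (30, 60, 90, 180):
    print(f"theta = {theta:3d} deg  ->  f(theta) = {shape_factor(theta):.3f}")
# theta = 180 deg recovers f = 1 (no benefit over homogeneous nucleation);
# small contact angles give f << 1, i.e. a strongly reduced barrier.
```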
In addition to nucleation, crystal growth is also required for the formation of glass ceramics. The crystal growth process is of considerable importance in determining the morphology of the produced glass ceramic composite material. Crystal growth is primarily dependent on two factors. First, it is dependent upon the rate at which the disordered structure can be re-arranged into a periodic lattice with longer-range order. Second, it is dependent upon the rate at which energy is released in the phase transformation (essentially the rate of cooling at the interface).
Glass ceramics in medical applications
Glass-ceramics are used in medical applications due to their unique interaction, or lack thereof, with human body tissue. Bioceramics are typically placed into the following groups based on their biocompatibility: biopassive (bioinert), bioactive, or resorbable ceramics.
Biopassive (bioinert) ceramics are, as the name suggests, characterized by the limited interaction the material has with the surrounding biological tissue. Historically, these were the "first generation" biomaterials used as replacements for missing or damaged tissues. One problem resulting from using inert biomaterials was the body's reaction to the foreign object; it was found that a phenomenon known as "fibrous encapsulation" would occur, where tissues would grow around the implant in an attempt to isolate the object from the rest of the body. This occasionally caused a variety of problems such as necrosis or sequestration of the implant. Two commonly used bioinert materials are alumina (Al2O3) and zirconia (ZrO2).
Bioactive materials have the ability to form bonds and interfaces with natural tissues. In the case of bone implants, two properties known as osteoconduction and osteoinduction play an important role in the success and longevity of the implant. Osteoconduction refers to a material's ability to permit bone growth on the surface and into the pores and channels of the material. Osteoinduction is a term used when a material stimulates existing cells to proliferate, causing new bone to grow independently of the implant. In general, the bioactivity of a material is a result of a chemical reaction, typically dissolution of the implanted material. Calcium phosphate ceramics and bioactive glasses are commonly used as bioactive materials as they exhibit this dissolution behavior when introduced to living body tissue. One engineering goal relating to these materials is that the dissolution rate of the implant be closely matched to the growth rate of new tissue, leading to a state of dynamic equilibrium.
Resorbable ceramics are similar to bioactive ceramics in their interaction with the body, but the main difference lies in the extent to which the dissolution occurs. Resorbable ceramics are intended to gradually dissolve entirely, all the while new tissue grows in its stead. The architecture of these materials has become quite complex, with foam-like scaffolds being introduced to maximize the interfacial area between the implant and body tissue. One issue that arises from using highly porous materials for bioactive/resorbable implants is the low mechanical strength, especially in load-bearing areas such as the bones in the legs. An example of a resorbable material that has seen some success is tricalcium phosphate (TCP), however, it too falls short in terms of mechanical strength when used in high-stress areas.
LAS system
The commercially most important system is the Li2O × Al2O3 × nSiO2 system (LAS system). The LAS system mainly refers to a mix of lithium, silicon, and aluminum oxides with additional components, e.g., glass-phase-forming agents such as Na2O, K2O and CaO and refining agents. As nucleation agents, zirconium(IV) oxide in combination with titanium(IV) oxide is most commonly used. This important system was first studied intensively by Hummel and Smoke.
After crystallization the dominant crystal phase in this type of glass-ceramic is a high-quartz solid solution (HQ s.s.). If the glass-ceramic is subjected to a more intense heat treatment, this HQ s.s. transforms into a keatite solid solution (K s.s., sometimes wrongly named beta-spodumene). This transition is non-reversible and reconstructive, which means that bonds in the crystal lattice are broken and newly arranged. However, these two crystal phases have a very similar structure, as Li was able to show.
An interesting property of these glass-ceramics is their thermomechanical durability. Glass-ceramic from the LAS system is a mechanically strong material and can sustain repeated and quick temperature changes up to 800–1000 °C. The dominant crystalline phase of the LAS glass-ceramics, HQ s.s., has a strongly negative coefficient of thermal expansion (CTE); the keatite solid solution still has a negative CTE, but one much higher (closer to zero) than that of HQ s.s. These negative CTEs of the crystalline phases contrast with the positive CTE of the residual glass. Adjusting the proportion of these phases offers a wide range of possible CTEs in the finished composite. For most of today's applications a low or even zero CTE is desired. A negative overall CTE is also possible, which means that, in contrast to most materials, such a glass-ceramic contracts when heated. At a certain point, generally between 60% [m/m] and 80% [m/m] crystallinity, the two coefficients balance such that the glass-ceramic as a whole has a thermal expansion coefficient that is very close to zero. Also, when an interface between materials will be subject to thermal fatigue, glass-ceramics can be adjusted to match the coefficient of the material they will be bonded to.
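This balancing act can be sketched with a simple volume-weighted (rule-of-mixtures) estimate; the phase CTE values below are illustrative assumptions, not measured data for any particular composition.

```python
# Rule-of-mixtures sketch of how crystallinity tunes the overall CTE.
# Phase CTE values are illustrative assumptions only.
alpha_crystal = -1.5e-6   # crystalline phase (e.g. HQ s.s.), 1/K, negative
alpha_glass   = +3.0e-6   # residual glass phase, 1/K, positive

def composite_cte(crystal_fraction: float) -> float:
    """Volume-weighted estimate of the composite CTE."""
    return crystal_fraction * alpha_crystal + (1.0 - crystal_fraction) * alpha_glass

# Crystallinity at which the estimated composite CTE crosses zero:
x_zero = alpha_glass / (alpha_glass - alpha_crystal)
print(f"zero-CTE crystallinity ~ {100 * x_zero:.0f} %")      # ~67 % with these values
print(f"CTE at 50 % crystallinity: {composite_cte(0.5):.2e} 1/K")
```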
Originally developed for use in the mirrors and mirror mounts of astronomical telescopes, LAS glass-ceramics have become known and entered the domestic market through its use in glass-ceramic cooktops, as well as cookware and bakeware or as high-performance reflectors for digital projectors.
Ceramic matrix composites
One particularly notable use of glass-ceramics is in the processing of ceramic matrix composites. For many ceramic matrix composites typical sintering temperatures and times cannot be used, as the degradation and corrosion of the constituent fibres becomes more of an issue as temperature and sintering time increase.
One example of this is SiC fibres, which can start to degrade via pyrolysis at temperatures above 1470K. One solution to this is to use the glassy form of the ceramic as the sintering feedstock rather than the ceramic, as unlike the ceramic the glass pellets have a softening point and will generally flow at much lower pressures and temperatures. This allows the use of less extreme processing parameters, making the production of many new technologically important fibre-matrix combinations by sintering possible.
Glass ceramics in cooktops
Glass-ceramic from the LAS-System is a mechanically strong material and can sustain repeated and quick temperature changes, and its smooth glass-like surface is easy to clean, therefore it is often used as a cooktop surface.
The material has a very low heat conduction coefficient, which means that it stays cool outside the cooking area. It can be made nearly transparent (15–20% loss in a typical cooktop) for radiation in the infrared wavelengths. In the visible range glass-ceramics can be transparent, translucent or opaque and even colored by coloring agents.
However, glass-ceramic is not totally unbreakable. Because it is still a brittle material, as glass and ceramics are, it can be broken; in particular, it is less robust than traditional cooktops made of steel or cast iron. There have been instances where users reported damage to their cooktops when the surface was struck with a hard or blunt object (such as a can falling from above or other heavy items).
There are two major types of electrical stoves with cooktops made of glass-ceramic:
A radiant heating stove uses coils or infrared halogen lamps as the heating elements. The surface of the glass-ceramic cooktop above the burner heats up, but the adjacent surface remains cool because of the low heat conduction coefficient of the material.
An induction stove heats a metal pot's bottom directly through electromagnetic induction.
This technology is not entirely new, as glass-ceramic ranges were first introduced in the 1970s using Corningware tops instead of the more durable material used today. These first generation smoothtops were problematic and could only be used with flat-bottomed cookware as the heating was primarily conductive rather than radiative.
Compared to conventional kitchen stoves, glass-ceramic cooktops are relatively simple to clean, due to their flat surface. However, glass-ceramic cooktops can be scratched very easily, so care must be taken not to slide the cooking pans over the surface. If food with a high sugar content (such as jam) spills, it should never be allowed to dry on the surface, otherwise damage will occur.
For best results and maximum heat transfer, all cookware should be flat-bottomed and matched to the same size as the burner zone.
Industry and material variations
Some well-known brands of glass-ceramics are Pyroceram, Ceran, Eurokera, Zerodur, and Macor. Nippon Electric Glass is a predominant worldwide manufacturer of glass-ceramics, whose related products in this area include FireLite and NeoCeram, ceramic glass materials for architectural and high-temperature applications respectively. Keralite, manufactured by Vetrotech Saint-Gobain, is a specialty glass-ceramic fire- and impact-safety-rated material for use in fire-rated applications. Glass-ceramics manufactured in the Soviet Union/Russia are known under the name Sitall. Macor is a white, odorless, porcelain-like glass-ceramic material developed by Corning Inc., originally to minimize heat transfer during crewed spaceflight. StellaShine, launched in 2016 by Nippon Electric Glass Co., is a heat-resistant glass-ceramic material with a thermal shock resistance of up to 800 degrees Celsius. This was developed as an addition to Nippon's line of heat-resistant cooking range plates along with materials like Neoceram. KangerTech is an e-cigarette manufacturer, founded in Shenzhen, China, which produces glass-ceramic materials and other special hardened-glass applications like vaporizer modification tanks.
The same class of material is also used in Visions and CorningWare glass-ceramic cookware, allowing it to be taken from the freezer directly to the stovetop or oven with no risk of thermal shock while maintaining the transparent look of glassware.
Sources
Literature
American inventions
Ceramic materials
Glass engineering and science
Glass chemistry | Glass-ceramic | [
"Chemistry",
"Materials_science",
"Engineering"
] | 4,003 | [
"Glass engineering and science",
"Glass chemistry",
"Materials science",
"Ceramic materials",
"Ceramic engineering"
] |
251,075 | https://en.wikipedia.org/wiki/Induction%20motor | An induction motor or asynchronous motor is an AC electric motor in which the electric current in the rotor that produces torque is obtained by electromagnetic induction from the magnetic field of the stator winding. An induction motor therefore needs no electrical connections to the rotor. An induction motor's rotor can be either wound type or squirrel-cage type.
Three-phase squirrel-cage induction motors are widely used as industrial drives because they are self-starting, reliable, and economical. Single-phase induction motors are used extensively for smaller loads, such as garbage disposals and stationary power tools. Although traditionally used for constant-speed service, single- and three-phase induction motors are increasingly being installed in variable-speed applications using variable-frequency drives (VFD). VFD offers energy savings opportunities for induction motors in applications like fans, pumps, and compressors that have a variable load.
History
In 1824, the French physicist François Arago formulated the existence of rotating magnetic fields, termed Arago's rotations. By manually turning switches on and off, Walter Baily demonstrated this in 1879, producing in effect the first primitive induction motor.
The first commutator-free single-phase AC induction motor was invented by Hungarian engineer Ottó Bláthy; he used the single-phase motor to propel his invention, the electricity meter.
The first AC commutator-free polyphase induction motors were independently invented by Galileo Ferraris and Nikola Tesla, a working motor model having been demonstrated by the former in 1885 and by the latter in 1887. Tesla applied for US patents in October and November 1887 and was granted some of these patents in May 1888. In April 1888, the Royal Academy of Science of Turin published Ferraris's research on his AC polyphase motor detailing the foundations of motor operation. In May 1888 Tesla presented the technical paper A New System for Alternating Current Motors and Transformers to the American Institute of Electrical Engineers (AIEE)
describing three four-stator-pole motor types: one having a four-pole rotor forming a non-self-starting reluctance motor, another with a wound rotor forming a self-starting induction motor, and the third a true synchronous motor with a separately excited DC supply to the rotor winding.
George Westinghouse, who was developing an alternating current power system at that time, licensed Tesla's patents in 1888 and purchased a US patent option on Ferraris' induction motor concept. Tesla was also employed for one year as a consultant. Westinghouse employee C. F. Scott was assigned to assist Tesla and later took over development of the induction motor at Westinghouse. Steadfast in his promotion of three-phase development, Mikhail Dolivo-Dobrovolsky invented the cage-rotor induction motor in 1889 and the three-limb transformer in 1890. Furthermore, he claimed that Tesla's motor was not practical because of two-phase pulsations, which prompted him to persist in his three-phase work. Although Westinghouse achieved its first practical induction motor in 1892 and developed a line of polyphase 60 hertz induction motors in 1893, these early Westinghouse motors were two-phase motors with wound rotors until B. G. Lamme developed a rotating bar winding rotor.
The General Electric Company (GE) began developing three-phase induction motors in 1891. By 1896, General Electric and Westinghouse signed a cross-licensing agreement for the bar-winding-rotor design, later called the squirrel-cage rotor. Arthur E. Kennelly was the first to bring out the full significance of complex numbers (using j to represent the square root of minus one) to designate the 90° rotation operator in analysis of AC problems. GE's Charles Proteus Steinmetz improved the application of AC complex quantities and developed an analytical model called the induction motor Steinmetz equivalent circuit.
Induction motor improvements flowing from these inventions and innovations were such that a modern 100-horsepower induction motor has the same mounting dimensions as a 7.5-horsepower motor in 1897.
Principle
3-phase motor
In both induction and synchronous motors, the AC power supplied to the motor's stator creates a magnetic field that rotates in synchronism with the AC oscillations. Whereas a synchronous motor's rotor turns at the same rate as the stator field, an induction motor's rotor rotates at a somewhat slower speed than the stator field. The induction motor stator's magnetic field is therefore changing or rotating relative to the rotor. This induces an opposing current in the rotor, in effect the motor's secondary winding. The rotating magnetic flux induces currents in the rotor windings, in a manner similar to currents induced in a transformer's secondary winding(s).
The induced currents in the rotor windings in turn create magnetic fields in the rotor that react against the stator field. The direction of the rotor magnetic field opposes the change in current through the rotor windings, following Lenz's Law. The cause of induced current in the rotor windings is the rotating stator magnetic field, so to oppose the change in rotor-winding currents the rotor turns in the direction of the stator magnetic field. The rotor accelerates until the magnitude of induced rotor current and torque balances the load on the rotor. Since rotation at synchronous speed does not induce rotor current, an induction motor always operates slightly slower than synchronous speed. The difference, or "slip," between actual and synchronous speed varies from about 0.5% to 5.0% for standard Design B torque curve induction motors. The induction motor's essential character is that torque is created solely by induction instead of the rotor being separately excited as in synchronous or DC machines or being self-magnetized as in permanent magnet motors.
For rotor currents to be induced, the speed of the physical rotor must be lower than that of the stator's rotating magnetic field (); otherwise the magnetic field would not be moving relative to the rotor conductors and no currents would be induced. As the speed of the rotor drops below synchronous speed, the rotation rate of the magnetic field in the rotor increases, inducing more current in the windings and creating more torque. The ratio between the rotation rate of the magnetic field induced in the rotor and the rotation rate of the stator's rotating field is called "slip". Under load, the speed drops and the slip increases enough to create sufficient torque to turn the load. For this reason, induction motors are sometimes referred to as "asynchronous motors".
An induction motor can be used as an induction generator, or it can be unrolled to form a linear induction motor which can directly generate linear motion. The generating mode for induction motors is complicated by the need to excite the rotor, which begins with only residual magnetization. In some cases, that residual magnetization is enough to self-excite the motor under load. Therefore, it is necessary to either snap the motor and connect it momentarily to a live grid or to add capacitors charged initially by residual magnetism and providing the required reactive power during operation. Similar is the operation of the induction motor in parallel with a synchronous motor serving as a power factor compensator. A feature in the generator mode in parallel to the grid is that the rotor speed is higher than in the driving mode. Then active energy is being given to the grid. Another disadvantage of the induction motor generator is that it consumes a significant magnetizing current I0 = (20–35)%.
Synchronous speed
An AC motor's synchronous speed, $n_s$, is the rotation rate of the stator's magnetic field,
$n_s = \frac{2f}{p}$,
where $f$ is the frequency of the power supply, $p$ is the number of magnetic poles, and $n_s$ is the synchronous speed of the machine. For $f$ in hertz and synchronous speed $n_s$ in RPM, the formula becomes:
$n_s = \frac{120f}{p}$.
For example, for a four-pole, three-phase motor, $p$ = 4 and $n_s$ = 1,500 RPM (for $f$ = 50 Hz) and 1,800 RPM (for $f$ = 60 Hz) synchronous speed.
The number of magnetic poles, , is the number of north and south poles per phase. For example; a single-phase motor with 3 north and 3 south poles, having 6 poles per phase, is a 6-pole motor. A three-phase motor with 18 north and 18 south poles, having 6 poles per phase, is also a 6-pole motor. This industry standard method of counting poles results in the same synchronous speed for a given frequency regardless of polarity.
Slip
Slip, $s$, is defined as the difference between synchronous speed and operating speed, at the same frequency, expressed in rpm, or in percentage or ratio of synchronous speed. Thus
$s = \frac{n_s - n_r}{n_s}$
where $n_s$ is stator electrical speed and $n_r$ is rotor mechanical speed. Slip, which varies from zero at synchronous speed to 1 when the rotor is stalled, determines the motor's torque. Since the short-circuited rotor windings have small resistance, even a small slip induces a large current in the rotor and produces significant torque. At full rated load, slip varies from more than 5% for small or special purpose motors to less than 1% for large motors. These speed variations can cause load-sharing problems when differently sized motors are mechanically connected. Various methods are available to reduce slip, VFDs often offering the best solution.
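A minimal sketch tying the synchronous-speed and slip formulas together; the nameplate values used are illustrative assumptions, not data from the article.

```python
def synchronous_speed_rpm(f_hz: float, poles: int) -> float:
    """Synchronous speed n_s = 120 f / p, in RPM."""
    return 120.0 * f_hz / poles

def slip(n_sync_rpm: float, n_rotor_rpm: float) -> float:
    """Per-unit slip s = (n_s - n_r) / n_s."""
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm

# Illustrative nameplate values for a four-pole, 50 Hz motor:
n_s = synchronous_speed_rpm(50.0, poles=4)   # 1500 RPM
n_r = 1440.0                                 # assumed full-load rotor speed, RPM
print(f"synchronous speed = {n_s:.0f} RPM, slip = {slip(n_s, n_r):.1%}")  # 4.0%
```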
Torque
Standard torque
The typical speed-torque relationship of a standard NEMA Design B polyphase induction motor is as shown in the curve at right. Suitable for most low performance loads such as centrifugal pumps and fans, Design B motors are constrained by the following typical torque ranges:
Breakdown torque (peak torque), 175–300% of rated torque
Locked-rotor torque (torque at 100% slip), 75–275% of rated torque
Pull-up torque, 65–190% of rated torque.
Over a motor's normal load range, the torque's slope is approximately linear or proportional to slip because the value of rotor resistance divided by slip, $R_r'/s$, dominates torque in a linear manner. As load increases above rated load, stator and rotor leakage reactance factors gradually become more significant in relation to $R_r'/s$ such that torque gradually curves towards breakdown torque. As the load torque increases beyond breakdown torque the motor stalls.
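A minimal sketch of this low-slip behaviour, using the simplified equivalent-circuit approximation T ≈ 3V²s/(ω_s R_r') that neglects stator impedance; all parameter values are illustrative assumptions, not data from the article.

```python
import math

# Low-slip torque approximation: with R_r'/s dominating the rotor branch,
# torque rises roughly linearly with slip.  All values are illustrative.
V_phase = 230.0   # stator phase voltage, V
R_rotor = 0.4     # rotor resistance referred to the stator (R_r'), ohm
f = 50.0          # supply frequency, Hz
poles = 4
omega_s = 4 * math.pi * f / poles   # synchronous mechanical speed, rad/s

def torque_low_slip(s: float) -> float:
    return 3.0 * V_phase ** 2 * s / (omega_s * R_rotor)

for s in (0.01, 0.02, 0.04):
    print(f"slip {s:.2f}: torque ~ {torque_low_slip(s):.0f} N*m")
```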
Starting
There are three basic types of small induction motors: split-phase single-phase, shaded-pole single-phase, and polyphase.
In two-pole single-phase motors, the torque goes to zero at 100% slip (zero speed), so these require alterations to the stator such as shaded-poles to provide starting torque. A single phase induction motor requires separate starting circuitry to provide a rotating field to the motor. The normal running windings within such a single-phase motor can cause the rotor to turn in either direction, so the starting circuit determines the operating direction.
In certain smaller single-phase motors, starting is done by means of a copper wire turn around part of a pole; such a pole is referred to as a shaded pole. The current induced in this turn lags behind the supply current, creating a delayed magnetic field around the shaded part of the pole face. This imparts sufficient rotational field energy to start the motor. These motors are typically used in applications such as desk fans and record players, as the required starting torque is low, and the low efficiency is tolerable relative to the reduced cost of the motor and starting method compared to other AC motor designs.
Larger single phase motors are split-phase motors and have a second stator winding fed with out-of-phase current; such currents may be created by feeding the winding through a capacitor or having it receive different values of inductance and resistance from the main winding. In capacitor-start designs, the second winding is disconnected once the motor is up to speed, usually either by a centrifugal switch acting on weights on the motor shaft or a thermistor which heats up and increases its resistance, reducing the current through the second winding to an insignificant level. The capacitor-run designs keep the second winding on when running, improving torque. A resistance start design uses a starter inserted in series with the startup winding, creating reactance.
Self-starting polyphase induction motors produce torque even at standstill. Available squirrel-cage induction motor starting methods include direct-on-line starting, reduced-voltage reactor or auto-transformer starting, star-delta starting or, increasingly, new solid-state soft assemblies and, of course, variable frequency drives (VFDs).
Polyphase motors have rotor bars shaped to give different speed-torque characteristics. The current distribution within the rotor bars varies depending on the frequency of the induced current. At standstill, the rotor current is the same frequency as the stator current, and tends to travel at the outermost parts of the cage rotor bars (by skin effect). The different bar shapes can give usefully different speed-torque characteristics as well as some control over the inrush current at startup.
Although polyphase motors are inherently self-starting, their starting and pull-up torque design limits must be high enough to overcome actual load conditions.
In wound rotor motors, rotor circuit connection through slip rings to external resistances allows change of speed-torque characteristics for acceleration control and speed control purposes.
Speed control
Resistance
Before the development of semiconductor power electronics, it was difficult to vary the frequency, and cage induction motors were mainly used in fixed speed applications. Applications such as electric overhead cranes used DC drives or wound rotor motors (WRIM) with slip rings for rotor circuit connection to variable external resistance allowing considerable range of speed control. However, resistor losses associated with low speed operation of WRIMs is a major cost disadvantage, especially for constant loads. Large slip ring motor drives, termed slip energy recovery systems, some still in use, recover energy from the rotor circuit, rectify it, and return it to the power system using a VFD.
Cascade
The speed of a pair of slip-ring motors can be controlled by a cascade connection, or concatenation. The rotor of one motor is connected to the stator of the other. If the two motors are also mechanically connected, they will run at half speed. This system was once widely used in three-phase AC railway locomotives, such as FS Class E.333. By the turn of the 21st century, however, such cascade-based electromechanical systems had been superseded by more efficient and economical solutions based on power semiconductor elements.
Variable-frequency drive
In many industrial variable-speed applications, DC and WRIM drives are being displaced by VFD-fed cage induction motors. The most common efficient way to control asynchronous motor speed of many loads is with VFDs. Barriers to adoption of VFDs due to cost and reliability considerations have been reduced considerably over the past three decades such that it is estimated that drive technology is adopted in as many as 30–40% of all newly installed motors.
Variable frequency drives implement the scalar or vector control of an induction motor.
With scalar control, only the magnitude and frequency of the supply voltage are controlled, without phase control (that is, without feedback from rotor position). Scalar control is suitable for applications where the load is constant.
Vector control allows independent control of the speed and torque of the motor, making it possible to maintain a constant rotation speed at varying load torque. However, vector control is more expensive because of the cost of the sensor (not always required) and the need for a more powerful controller.
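A rough illustration of the scalar approach is sketched below: the drive keeps the stator voltage roughly proportional to the commanded frequency so that the air-gap flux stays approximately constant. The 400 V/50 Hz rating and the low-frequency boost value are assumed example figures, not parameters of any particular drive.

```python
# Illustrative V/f (scalar) control: hold the voltage-to-frequency ratio
# roughly constant so the air-gap flux stays near its rated value.

RATED_VOLTAGE = 400.0   # volts (assumed example rating)
RATED_FREQUENCY = 50.0  # hertz (assumed example rating)
BOOST_VOLTAGE = 15.0    # low-frequency boost to offset stator IR drop (assumed)

def vf_command(frequency_hz: float) -> float:
    """Return the stator voltage command for a given frequency command."""
    if frequency_hz <= 0:
        return 0.0
    voltage = BOOST_VOLTAGE + (RATED_VOLTAGE - BOOST_VOLTAGE) * frequency_hz / RATED_FREQUENCY
    return min(voltage, RATED_VOLTAGE)  # never exceed the rated voltage

for f in (5, 25, 50, 60):
    print(f"{f:>3} Hz -> {vf_command(f):6.1f} V")
```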
Construction
The stator of an induction motor consists of poles carrying supply current to induce a magnetic field that penetrates the rotor. To optimize the distribution of the magnetic field, windings are distributed in slots around the stator, with the magnetic field having the same number of north and south poles. Induction motors are most commonly run on single-phase or three-phase power, but two-phase motors exist; in theory, induction motors can have any number of phases. Many single-phase motors having two windings can be viewed as two-phase motors, since a capacitor is used to generate a second power phase 90° from the single-phase supply and feeds it to the second motor winding. Single-phase motors require some mechanism to produce a rotating field on startup. Induction motors using a squirrel-cage rotor winding may have the rotor bars skewed slightly to smooth out torque in each revolution.
Standardized NEMA and IEC motor frame sizes throughout the industry result in interchangeable dimensions for the shaft, foot mounting and general aspects, as well as certain motor flange aspects. Since an open, drip-proof (ODP) motor design allows a free air exchange from outside to the inner stator windings, this style of motor tends to be slightly more efficient because the windings run cooler. At a given power rating, lower speed requires a larger frame.
Rotation reversal
The method of changing the direction of rotation of an induction motor depends on whether it is a three-phase or single-phase machine. A three-phase motor can be reversed by swapping any two of its phase connections. Motors required to change direction regularly (such as hoists) will have extra switching contacts in their controller to reverse rotation as needed. A variable frequency drive nearly always permits reversal by electronically changing the phase sequence of voltage applied to the motor.
In a single-phase split-phase motor, reversal is achieved by reversing the connections of the starting winding. Some motors bring out the start winding connections to allow selection of rotation direction at installation. If the start winding is permanently connected within the motor, it is impractical to reverse the sense of rotation. Single-phase shaded-pole motors have a fixed rotation unless a second set of shading windings is provided.
Power factor
The power factor of induction motors varies with load, typically from about 0.85 or 0.90 at full load to as low as about 0.20 at no-load, due to stator and rotor leakage and magnetizing reactances. Power factor can be improved by connecting capacitors either on an individual motor basis or, by preference, on a common bus covering several motors. For economic and other considerations, power systems are rarely power factor corrected to unity power factor.
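For individual-motor correction, the reactive power a capacitor bank must supply to raise the power factor follows from Qc = P(tan φ1 − tan φ2). The sketch below works through this relation; the 50 kW load and the 0.85-to-0.95 correction are assumed example figures, not recommendations.

```python
import math

def correction_kvar(real_power_kw: float, pf_initial: float, pf_target: float) -> float:
    """Capacitor reactive power (kvar) needed to raise a load's power factor."""
    phi1 = math.acos(pf_initial)   # phase angle at the initial power factor
    phi2 = math.acos(pf_target)    # phase angle at the target power factor
    return real_power_kw * (math.tan(phi1) - math.tan(phi2))

# Example: an assumed 50 kW motor load corrected from 0.85 to 0.95 power factor.
print(f"{correction_kvar(50.0, 0.85, 0.95):.1f} kvar")  # about 14.6 kvar
```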
Power capacitor application with harmonic currents requires power system analysis to avoid harmonic resonance between capacitors and transformer and circuit reactances. Common bus power factor correction is recommended to minimize resonant risk and to simplify power system analysis.
Efficiency
Full-load motor efficiency ranges from 85–97%, with losses as follows:
Friction and windage, 5–15%
Iron or core losses, 15–25%
Stator losses, 25–40%
Rotor losses, 15–25%
Stray load losses, 10–20%.
For an electric motor, the efficiency, represented by the Greek letter η (eta), is defined as the quotient of the mechanical output power and the electric input power, calculated using this formula:
η = Pout / Pin
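A short worked example of this definition follows; the input power and loss figures are assumed values chosen only to fall within the ranges listed above.

```python
# Efficiency = mechanical output power / electrical input power.
electrical_input_w = 10_000.0          # assumed electrical input power (W)
losses_w = {                           # assumed loss breakdown, illustrative only
    "friction and windage": 150.0,
    "iron (core)": 250.0,
    "stator copper": 400.0,
    "rotor copper": 250.0,
    "stray load": 150.0,
}
mechanical_output_w = electrical_input_w - sum(losses_w.values())
efficiency = mechanical_output_w / electrical_input_w
print(f"output {mechanical_output_w:.0f} W, efficiency {efficiency:.1%}")  # 8800 W, 88.0%
```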
Regulatory authorities in many countries have implemented legislation to encourage the manufacture and use of higher efficiency electric motors. Some legislation mandates the future use of premium-efficiency induction motors in certain equipment. For more information, see: Premium efficiency.
Steinmetz equivalent circuit
Many useful motor relationships between time, current, voltage, speed, power factor, and torque can be obtained from analysis of the Steinmetz equivalent circuit (also termed T-equivalent circuit or IEEE recommended equivalent circuit), a mathematical model used to describe how an induction motor's electrical input is transformed into useful mechanical energy output. The equivalent circuit is a single-phase representation of a multiphase induction motor that is valid in steady-state balanced-load conditions.
The Steinmetz equivalent circuit is expressed simply in terms of the following components:
Stator resistance and leakage reactance (R1, X1).
Rotor resistance, leakage reactance, and slip (R2, X2, and s; or, referred to the stator side, R2′, X2′, and s).
Magnetizing reactance (Xm).
Paraphrasing from Alger in Knowlton, an induction motor is simply an electrical transformer the magnetic circuit of which is separated by an air gap between the stator winding and the moving rotor winding. The equivalent circuit can accordingly be shown either with equivalent circuit components of respective windings separated by an ideal transformer or with rotor components referred to the stator side as shown in the following circuit and associated equation and parameter definition tables.
The following rule-of-thumb approximations apply to the circuit:
Maximum current happens under locked rotor current (LRC) conditions and is somewhat less than V/X, where X = X1 + X2′ is the total leakage reactance, with LRC typically ranging between 6 and 7 times rated current for standard Design B motors.
Breakdown torque happens when s ≈ R2′/X such that Tmax ≈ KV²/(2X), where K is a machine constant, and thus, with constant voltage input, a low-slip induction motor's percent-rated maximum torque is about half its percent-rated LRC.
The relative stator to rotor leakage reactance of standard Design B cage induction motors is
X1 / X2′ ≈ 0.4 / 0.6 ≈ 0.67.
Neglecting stator resistance, an induction motor's torque curve reduces to the Kloss equation
T ≈ 2Tmax / (s/sm + sm/s), where sm is the slip at the maximum torque Tmax.
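The relations above can be illustrated with a minimal numerical sketch of the referred-to-stator equivalent circuit evaluated at a few slip values. Every parameter below (phase voltage, resistances, reactances, pole count) is an assumed example value rather than data for any real machine, and the magnetizing branch is neglected.

```python
import math

# Assumed per-phase Steinmetz parameters, rotor quantities referred to the stator.
V1, F, POLES = 230.0, 50.0, 4      # phase voltage (V), supply frequency (Hz), poles
R1, X1 = 0.50, 1.20                # stator resistance and leakage reactance (ohm)
R2, X2 = 0.40, 1.50                # referred rotor resistance and leakage reactance (ohm)
W_SYNC = 4 * math.pi * F / POLES   # synchronous mechanical speed (rad/s)

def torque(slip: float) -> float:
    """Electromagnetic torque (N·m) at a given slip, magnetizing branch neglected."""
    z = complex(R1 + R2 / slip, X1 + X2)   # series impedance seen by the supply
    i2 = V1 / abs(z)                       # referred rotor current magnitude
    return 3 * i2 ** 2 * R2 / (slip * W_SYNC)

s_max = R2 / math.hypot(R1, X1 + X2)       # slip at breakdown torque (dT/ds = 0)
for s in (0.02, 0.05, s_max, 1.0):         # 1.0 corresponds to locked rotor
    print(f"slip {s:5.3f} -> torque {torque(s):6.1f} N·m")
```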
Linear induction motor
Linear induction motors, which work on the same general principles as rotary induction motors (frequently three-phase), are designed to produce straight line motion. Uses include magnetic levitation, linear propulsion, linear actuators, and liquid metal pumping.
See also
AC motor
Circle diagram
Induction generator
Premium efficiency
Variable refrigerant flow
Notes
References
Classical sources
External links
Who Invented the Polyphase Electric Motor?
Silvanus Phillips Thompson: Polyphase electric currents and alternate current motors
Induction motor topics from Hyperphysics website hosted by C.R. Nave, GSU Physics and Astronomy Dept.
Cowern Papers
Electric motors
AC motors
Inventions by Nikola Tesla
19th-century inventions | Induction motor | [
"Technology",
"Engineering"
] | 4,470 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
251,399 | https://en.wikipedia.org/wiki/Observable%20universe | The observable universe is a spherical region of the universe consisting of all matter that can be observed from Earth; the electromagnetic radiation from these objects has had time to reach the Solar System and Earth since the beginning of the cosmological expansion. Assuming the universe is isotropic, the distance to the edge of the observable universe is roughly the same in every direction. That is, the observable universe is a spherical region centered on the observer. Every location in the universe has its own observable universe, which may or may not overlap with the one centered on Earth.
The word observable in this sense does not refer to the capability of modern technology to detect light or other information from an object, or whether there is anything to be detected. It refers to the physical limit created by the speed of light itself. No signal can travel faster than light, hence there is a maximum distance, called the particle horizon, beyond which nothing can be detected, as the signals could not have reached us yet. Sometimes astrophysicists distinguish between the observable universe and the visible universe. The former includes signals since the end of the inflationary epoch, while the latter includes only signals emitted since recombination.
According to calculations, the current comoving distance to particles from which the cosmic microwave background radiation (CMBR) was emitted, which represents the radius of the visible universe, is about 14.0 billion parsecs (about 45.7 billion light-years). The comoving distance to the edge of the observable universe is about 14.3 billion parsecs (about 46.6 billion light-years), about 2% larger. The radius of the observable universe is therefore estimated to be about 46.5 billion light-years. Using the critical density and the diameter of the observable universe, the total mass of ordinary matter in the universe can be calculated to be about 1.5 × 10⁵³ kg. In November 2018, astronomers reported that extragalactic background light (EBL) amounted to photons.
As the universe's expansion is accelerating, all currently observable objects, outside the local supercluster, will eventually appear to freeze in time, while emitting progressively redder and fainter light. For instance, objects with the current redshift z from 5 to 10 will only be observable up to an age of 4–6 billion years. In addition, light emitted by objects currently situated beyond a certain comoving distance (currently about ) will never reach Earth.
Overview
The universe's size is unknown, and it may be infinite in extent. Some parts of the universe are too far away for the light emitted since the Big Bang to have had enough time to reach Earth or space-based instruments, and therefore lie outside the observable universe. In the future, light from distant galaxies will have had more time to travel, so one might expect that additional regions will become observable. Regions distant from observers (such as us) are expanding away faster than the speed of light, at rates estimated by Hubble's law. The expansion rate appears to be accelerating, which dark energy was proposed to explain.
Assuming dark energy remains constant (an unchanging cosmological constant) so that the expansion rate of the universe continues to accelerate, there is a "future visibility limit" beyond which objects will never enter the observable universe at any time in the future because light emitted by objects outside that limit could never reach the Earth. Note that, because the Hubble parameter is decreasing with time, there can be cases where a galaxy that is receding from Earth only slightly faster than light emits a signal that eventually reaches Earth. This future visibility limit is calculated at a comoving distance of 19 billion parsecs (62 billion light-years), assuming the universe will keep expanding forever, which implies the number of galaxies that can ever be theoretically observed in the infinite future is only larger than the number currently observable by a factor of 2.36 (ignoring redshift effects).
In principle, more galaxies will become observable in the future; in practice, an increasing number of galaxies will become extremely redshifted due to ongoing expansion, so much so that they will seem to disappear from view and become invisible. A galaxy at a given comoving distance is defined to lie within the "observable universe" if we can receive signals emitted by the galaxy at any age in its history, say, a signal sent from the galaxy only 500 million years after the Big Bang. Because of the universe's expansion, there may be some later age at which a signal sent from the same galaxy can never reach the Earth at any point in the infinite future, so, for example, we might never see what the galaxy looked like 10 billion years after the Big Bang, even though it remains at the same comoving distance less than that of the observable universe.
This can be used to define a type of cosmic event horizon whose distance from the Earth changes over time. For example, the current distance to this horizon is about 16 billion light-years, meaning that a signal from an event happening at present can eventually reach the Earth if the event is less than 16 billion light-years away, but the signal will never reach the Earth if the event is further away.
The space before this cosmic event horizon can be called "reachable universe", that is all galaxies closer than that could be reached if we left for them today, at the speed of light; all galaxies beyond that are unreachable. Simple observation will show the future visibility limit (62 billion light-years) is exactly equal to the reachable limit (16 billion light-years) added to the current visibility limit (46 billion light-years).
"The universe" versus "the observable universe"
Both popular and professional research articles in cosmology often use the term "universe" to mean "observable universe". This can be justified on the grounds that we can never know anything by direct observation about any part of the universe that is causally disconnected from the Earth, although many credible theories require a total universe much larger than the observable universe. No evidence exists to suggest that the boundary of the observable universe constitutes a boundary on the universe as a whole, nor do any of the mainstream cosmological models propose that the universe has any physical boundary in the first place. However, some models propose it could be finite but unbounded, like a higher-dimensional analogue of the 2D surface of a sphere that is finite in area but has no edge.
It is plausible that the galaxies within the observable universe represent only a minuscule fraction of the galaxies in the universe. According to the theory of cosmic inflation initially introduced by Alan Guth and D. Kazanas, if it is assumed that inflation began about 10−37 seconds after the Big Bang and that the pre-inflation size of the universe was approximately equal to the speed of light times its age, that would suggest that at present the entire universe's size is at least light-years—at least times the radius of the observable universe.
If the universe is finite but unbounded, it is also possible that the universe is smaller than the observable universe. In this case, what we take to be very distant galaxies may actually be duplicate images of nearby galaxies, formed by light that has circumnavigated the universe. It is difficult to test this hypothesis experimentally because different images of a galaxy would show different eras in its history, and consequently might appear quite different. Bielewicz et al. claim to establish a lower bound of 27.9 gigaparsecs (91 billion light-years) on the diameter of the last scattering surface. This value is based on matching-circle analysis of the WMAP 7-year data. This approach has been disputed.
Size
The comoving distance from Earth to the edge of the observable universe is about 14.26 gigaparsecs (46.5 billion light-years or ) in any direction. The observable universe is thus a sphere with a diameter of about 28.5 gigaparsecs (93 billion light-years or ). Assuming that space is roughly flat (in the sense of being a Euclidean space), this size corresponds to a comoving volume of about ( or ).
These are distances now (in cosmological time), not distances at the time the light was emitted. For example, the cosmic microwave background radiation that we see right now was emitted at the time of photon decoupling, estimated to have occurred about 380,000 years after the Big Bang, which occurred around 13.8 billion years ago. This radiation was emitted by matter that has, in the intervening time, mostly condensed into galaxies, and those galaxies are now calculated to be about 46 billion light-years from Earth. To estimate the distance to that matter at the time the light was emitted, we may first note that according to the Friedmann–Lemaître–Robertson–Walker metric, which is used to model the expanding universe, if we receive light with a redshift of z, then the scale factor at the time the light was originally emitted is given by
a = 1 / (1 + z).
WMAP nine-year results combined with other measurements give the redshift of photon decoupling as z = , which implies that the scale factor at the time of photon decoupling would be . So if the matter that originally emitted the oldest CMBR photons has a present distance of 46 billion light-years, then the distance would have been only about 42 million light-years at the time of decoupling.
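A small numerical illustration of this scale-factor relation follows; the redshift used is an assumed round value near the decoupling redshift, and the present-day distance is the figure quoted above.

```python
# Scale the present (comoving) distance back to the emission epoch with a = 1/(1 + z).
z_decoupling = 1090.0                      # assumed round value for the CMB redshift
present_distance_gly = 46.0                # billion light-years, as quoted in the text
scale_factor = 1.0 / (1.0 + z_decoupling)
emission_distance_mly = present_distance_gly * 1e3 * scale_factor  # million light-years
print(f"a = {scale_factor:.6f}, emission-epoch distance ≈ {emission_distance_mly:.0f} Mly")
```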
The light-travel distance to the edge of the observable universe is the age of the universe times the speed of light, 13.8 billion light years. This is the distance that a photon emitted shortly after the Big Bang, such as one from the cosmic microwave background, has traveled to reach observers on Earth. Because spacetime is curved, corresponding to the expansion of space, this distance does not correspond to the true distance at any moment in time.
Matter and mass
Number of galaxies and stars
The observable universe contains as many as an estimated 2 trillion galaxies and, overall, as many as an estimated 10²⁴ stars – more stars (and, potentially, Earth-like planets) than all the grains of beach sand on planet Earth. Other estimates are in the hundreds of billions rather than trillions. The estimated total number of stars in an inflationary universe (observed and unobserved) is 10¹⁰⁰.
Matter content—number of atoms
Assuming the mass of ordinary matter is about 1.5 × 10⁵³ kg as discussed above, and assuming all atoms are hydrogen atoms (which are about 74% of all atoms in the Milky Way by mass), the estimated total number of atoms in the observable universe is obtained by dividing the mass of ordinary matter by the mass of a hydrogen atom. The result is approximately 10⁸⁰ hydrogen atoms, also known as the Eddington number.
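The division described here can be spelled out numerically; the mass figure used is an assumed order-of-magnitude value consistent with the estimate discussed in the following section.

```python
ordinary_matter_mass_kg = 1.5e53      # assumed order-of-magnitude mass of ordinary matter
hydrogen_atom_mass_kg = 1.67e-27      # mass of one hydrogen atom
atoms = ordinary_matter_mass_kg / hydrogen_atom_mass_kg
print(f"≈ {atoms:.1e} atoms")         # on the order of 10**80
```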
Mass of ordinary matter
The mass of the observable universe is often quoted as 10⁵³ kg. In this context, mass refers to ordinary (baryonic) matter and includes the interstellar medium (ISM) and the intergalactic medium (IGM). However, it excludes dark matter and dark energy. This quoted value for the mass of ordinary matter in the universe can be estimated based on critical density. The calculations are for the observable universe only as the volume of the whole is unknown and may be infinite.
Estimates based on critical density
Critical density is the energy density for which the universe is flat. If there is no dark energy, it is also the density for which the expansion of the universe is poised between continued expansion and collapse. From the Friedmann equations, the value for critical density, ρc, is:
ρc = 3H0² / (8πG),
where G is the gravitational constant and H0 is the present value of the Hubble constant. The value for H0, as given by the European Space Agency's Planck Telescope, is H0 = 67.15 kilometres per second per megaparsec. This gives a critical density of about 8.5 × 10⁻²⁷ kg/m³, or about 5 hydrogen atoms per cubic metre. This density includes four significant types of energy/mass: ordinary matter (4.8%), neutrinos (0.1%), cold dark matter (26.8%), and dark energy (68.3%).
Although neutrinos are Standard Model particles, they are listed separately because they are ultra-relativistic and hence behave like radiation rather than like matter. The density of ordinary matter, as measured by Planck, is 4.8% of the total critical density, or about 4.1 × 10⁻²⁸ kg/m³. To convert this density to mass we must multiply by volume, a value based on the radius of the "observable universe". Since the universe has been expanding for 13.8 billion years, the comoving distance (radius) is now about 46.6 billion light-years. Thus, the volume (4/3 πr³) equals about 3.6 × 10⁸⁰ m³, and the mass of ordinary matter equals density (about 4.1 × 10⁻²⁸ kg/m³) times volume (3.6 × 10⁸⁰ m³), or about 1.5 × 10⁵³ kg.
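The whole estimate chain sketched above — critical density from the Hubble constant, the 4.8% baryon fraction, and multiplication by the comoving volume — can be reproduced in a few lines; the physical constants are standard values and the radius is the one quoted in the text.

```python
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.15 * 1000 / 3.086e22  # Hubble constant converted from km/s/Mpc to 1/s
LY = 9.461e15                 # one light-year in metres

critical_density = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3
baryon_density = 0.048 * critical_density          # 4.8% ordinary matter
radius_m = 46.6e9 * LY                             # comoving radius from the text
volume_m3 = 4 / 3 * math.pi * radius_m**3
ordinary_matter_mass = baryon_density * volume_m3

print(f"critical density ≈ {critical_density:.2e} kg/m^3")   # ≈ 8.5e-27 kg/m^3
print(f"ordinary-matter mass ≈ {ordinary_matter_mass:.1e} kg")  # ≈ 1.5e53 kg
```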
Large-scale structure
Sky surveys and mappings of the various wavelength bands of electromagnetic radiation (in particular 21-cm emission) have yielded much information on the content and character of the universe's structure. The organization of structure appears to follow a hierarchical model with organization up to the scale of superclusters and filaments. Larger than this (at scales between 30 and 200 megaparsecs), there seems to be no continued structure, a phenomenon that has been referred to as the End of Greatness.
Walls, filaments, nodes, and voids
The organization of structure arguably begins at the stellar level, though most cosmologists rarely address astrophysics on that scale. Stars are organized into galaxies, which in turn form galaxy groups, galaxy clusters, superclusters, sheets, walls and filaments, which are separated by immense voids, creating a vast foam-like structure sometimes called the "cosmic web". Prior to 1989, it was commonly assumed that virialized galaxy clusters were the largest structures in existence, and that they were distributed more or less uniformly throughout the universe in every direction. However, since the early 1980s, more and more structures have been discovered. In 1983, Adrian Webster identified the Webster LQG, a large quasar group consisting of 5 quasars. The discovery was the first identification of a large-scale structure, and has expanded the information about the known grouping of matter in the universe.
In 1987, Robert Brent Tully identified the Pisces–Cetus Supercluster Complex, the galaxy filament in which the Milky Way resides. It is about 1 billion light-years across. That same year, an unusually large region with a much lower than average distribution of galaxies was discovered, the Giant Void, which measures 1.3 billion light-years across. Based on redshift survey data, in 1989 Margaret Geller and John Huchra discovered the "Great Wall", a sheet of galaxies more than 500 million light-years long and 200 million light-years wide, but only 15 million light-years thick. The existence of this structure escaped notice for so long because it requires locating the position of galaxies in three dimensions, which involves combining location information about the galaxies with distance information from redshifts.
Two years later, astronomers Roger G. Clowes and Luis E. Campusano discovered the Clowes–Campusano LQG, a large quasar group measuring two billion light-years at its widest point, which was the largest known structure in the universe at the time of its announcement. In April 2003, another large-scale structure was discovered, the Sloan Great Wall. In August 2007, a possible supervoid was detected in the constellation Eridanus. It coincides with the 'CMB cold spot', a cold region in the microwave sky that is highly improbable under the currently favored cosmological model. This supervoid could cause the cold spot, but to do so it would have to be improbably big, possibly a billion light-years across, almost as big as the Giant Void mentioned above.
Another large-scale structure is the SSA22 Protocluster, a collection of galaxies and enormous gas bubbles that measures about 200 million light-years across.
In 2011, a large quasar group was discovered, U1.11, measuring about 2.5 billion light-years across. On January 11, 2013, another large quasar group, the Huge-LQG, was discovered, which was measured to be four billion light-years across, the largest known structure in the universe at that time. In November 2013, astronomers discovered the Hercules–Corona Borealis Great Wall, an even bigger structure twice as large as the former. It was defined by the mapping of gamma-ray bursts.
In 2021, the American Astronomical Society announced the detection of the Giant Arc, a crescent-shaped string of galaxies that spans 3.3 billion light-years in length, located 9.2 billion light-years from Earth in the constellation Boötes, based on observations captured by the Sloan Digital Sky Survey.
End of Greatness
The End of Greatness is an observational scale discovered at roughly 100 Mpc (roughly 300 million light-years) where the lumpiness seen in the large-scale structure of the universe is homogenized and isotropized in accordance with the cosmological principle. At this scale, no pseudo-random fractalness is apparent.
The superclusters and filaments seen in smaller surveys are randomized to the extent that the smooth distribution of the universe is visually apparent. It was not until the redshift surveys of the 1990s were completed that this scale could accurately be observed.
Observations
Another indicator of large-scale structure is the 'Lyman-alpha forest'. This is a collection of absorption lines that appear in the spectra of light from quasars, which are interpreted as indicating the existence of huge thin sheets of intergalactic (mostly hydrogen) gas. These sheets appear to collapse into filaments, which can feed galaxies as they grow where filaments either cross or are dense. Early direct evidence for this cosmic web of gas was the 2019 detection, by astronomers from the RIKEN Cluster for Pioneering Research in Japan and Durham University in the U.K., of light from the brightest part of this web, surrounding and illuminated by a cluster of forming galaxies, acting as cosmic flashlights for intercluster medium hydrogen fluorescence via Lyman-alpha emissions.
In 2021, an international team, headed by Roland Bacon from the Centre de Recherche Astrophysique de Lyon (France), reported the first observation of diffuse extended Lyman-alpha emission from redshift 3.1 to 4.5 that traced several cosmic web filaments on scales of 2.5−4 cMpc (comoving mega-parsecs), in filamentary environments outside massive structures typical of web nodes.
Some caution is required in describing structures on a cosmic scale because they are often different from how they appear. Gravitational lensing can make an image appear to originate in a different direction from its real source, when foreground objects curve surrounding spacetime (as predicted by general relativity) and deflect passing light rays. Rather usefully, strong gravitational lensing can sometimes magnify distant galaxies, making them easier to detect. Weak lensing by the intervening universe in general also subtly changes the observed large-scale structure.
The large-scale structure of the universe also looks different if only redshift is used to measure distances to galaxies. For example, galaxies behind a galaxy cluster are attracted to it and fall towards it, and so are blueshifted (compared to how they would be if there were no cluster). On the near side, objects are redshifted. Thus, the environment of the cluster looks somewhat pinched if using redshifts to measure distance. The opposite effect is observed on galaxies already within a cluster: the galaxies have some random motion around the cluster center, and when these random motions are converted to redshifts, the cluster appears elongated. This creates a "finger of God"—the illusion of a long chain of galaxies pointed at Earth.
Cosmography of Earth's cosmic neighborhood
At the centre of the Hydra–Centaurus Supercluster, a gravitational anomaly called the Great Attractor affects the motion of galaxies over a region hundreds of millions of light-years across. These galaxies are all redshifted, in accordance with Hubble's law. This indicates that they are receding from us and from each other, but the variations in their redshift are sufficient to reveal the existence of a concentration of mass equivalent to tens of thousands of galaxies.
The Great Attractor, discovered in 1986, lies at a distance of between 150 million and 250 million light-years in the direction of the Hydra and Centaurus constellations. In its vicinity there is a preponderance of large old galaxies, many of which are colliding with their neighbours, or radiating large amounts of radio waves.
In 1987, astronomer R. Brent Tully of the University of Hawaii's Institute of Astronomy identified what he called the Pisces–Cetus Supercluster Complex, a structure one billion light-years long and 150 million light-years across in which, he claimed, the Local Supercluster is embedded.
Most distant objects
The most distant astronomical object identified (as of August 2024) is a galaxy classified as JADES-GS-z14-0. In 2009, a gamma ray burst, GRB 090423, was found to have a redshift of 8.2, which indicates that the collapsing star that caused it exploded when the universe was only 630 million years old. The burst happened approximately 13 billion years ago, so a distance of about 13 billion light-years was widely quoted in the media, or sometimes a more precise figure of 13.035 billion light-years.
This would be the "light travel distance" (see Distance measures (cosmology)) rather than the "proper distance" used in both Hubble's law and in defining the size of the observable universe. Cosmologist Ned Wright argues against using this measure. The proper distance for a redshift of 8.2 would be about 9.2 Gpc, or about 30 billion light-years.
Horizons
The limit of observability in the universe is set by cosmological horizons which limit—based on various physical constraints—the extent to which information can be obtained about various events in the universe. The most famous horizon is the particle horizon which sets a limit on the precise distance that can be seen due to the finite age of the universe. Additional horizons are associated with the possible future extent of observations, larger than the particle horizon owing to the expansion of space, an "optical horizon" at the surface of last scattering, and associated horizons with the surface of last scattering for neutrinos and gravitational waves.
Gallery
See also
Notes
References
Further reading
External links
"Millennium Simulation" of structure forming – Max Planck Institute of Astrophysics, Garching, Germany
Cosmology FAQ
Forming Galaxies Captured In The Young Universe By Hubble, VLT & Spitzer
Animation of the cosmic light horizon
Inflation and the Cosmic Microwave Background by Charles Lineweaver
Logarithmic Maps of the Universe
List of publications of the 2dF Galaxy Redshift Survey
The Universe Within 14 Billion Light Years – NASA Atlas of the Universe – Note, this map only gives a rough cosmographical estimate of the expected distribution of superclusters within the observable universe; very little actual mapping has been done beyond a distance of one billion light-years.
Video: The Known Universe, from the American Museum of Natural History
NASA/IPAC Extragalactic Database
Cosmography of the Local Universe at irfu.cea.fr (17:35) (arXiv)
There are about 1082 atoms in the observable universe – LiveScience, July 2021
Limits to knowledge about Universe – Forbes, May 2019
Concepts in astronomy
Physical cosmological concepts | Observable universe | [
"Physics",
"Astronomy"
] | 4,963 | [
"Concepts in astronomy",
"Concepts in astrophysics",
"Physical cosmological concepts"
] |
251,712 | https://en.wikipedia.org/wiki/Darwin%20Mounds | Darwin Mounds is a large field of undersea sand mounds situated off the north west coast of Scotland that were first discovered in May 1998. They provide a unique habitat for ancient deep water coral reefs and were found using remote sensing techniques during surveys funded by the oil industry and steered by the joint industry and United Kingdom government group the Atlantic Frontier Environment Network (AFEN) (Masson and Jacobs 1998). The mounds were named after the research vessel, itself named for the eminent naturalist and evolutionary theorist Charles Darwin.
The mounds are about below the surface of the North Atlantic ocean, approximately north-west of Cape Wrath, the north-west tip of mainland Scotland. There are hundreds of mounds in the field, which in total cover approximately . Individual mounds are typically circular, up to high and wide. Most of the mounds are also distinguished by the presence of an additional feature referred to as a 'tail'. The tails are of a variable extent and may merge with others, but are generally a teardrop shape and are orientated south-west of the mound. The mound-tail feature of the Darwin Mounds is apparently unique globally.
Composition
The mounds are mostly sand, currently interpreted as "sand volcanoes". These features are caused when fluidised sand "de-waters" and the fluid bubbles up through the sand, pushing the sediment up into a cone shape. Sand volcanoes are common in the Devonian fossil record in UK, and in seismically active areas of the planet. In this case, tectonic activity is unlikely; some form of slumping on the south-west side of the undersea (Wyville-Thomson) Ridge being a more likely cause. The tops of the mounds have living stands of Lophelia and blocky rubble (interpreted as coral debris). The mounds provide one of the largest known northerly cold-water habitats for coral species. The mounds are also unusual in that Lophelia pertusa, a cold water coral, appears to be growing on sand rather than a hard substratum. Prior to research on the mounds in 2000, it was thought that Lophelia required a hard substratum for attachment.
The deep-water coral systems on the mounds are especially fragile. Unlike shallow-water coral reefs, they are not adapted to cope with minor disturbances such as wave action. The mounds also support significant populations of the xenophyophore Syringammina fragilissima. This is a giant single-celled organism (a protozoan) that is widespread in deep waters, but occurs in particularly high densities on the mounds and the tails. Individual xenophyophores can grow to be larger than and are often very fragile. The corals themselves provide a habitat for a wide diversity of other marine life including sponges, worms, crustaceans and molluscs. Among these are starfish, sea urchins and crabs. Various fish have been observed, including blue ling, roundnose grenadier, and the orange roughy.
Conservation efforts
On 23 October 2001, UK Minister Margaret Beckett made a commitment at WWF's Oceans Recovery Summit in Edinburgh to protect the Darwin Mounds. The summit launched the Edinburgh Declaration, targeting politicians and marine stakeholders alike to sign up to action to safeguard the seas. Deep water bottom trawling had been occurring in the area, with nets as heavy as one tonne dragged across the sea floor. Researcher Jason Hall-Spencer of the University of Glasgow had found pieces of coral at least 4,500 years old in the nets of trawlers operating off the coast of Ireland and Scotland. Pieces of coral up to were found in the nets of French trawling vessels that had been scraping the seabed down. It is known that much coral was destroyed by these nets and the mounds themselves in some areas were found to be scraped and flattened. The mounds are ancient structures, and this damage is permanent.
After the discovery of the mounds, three well-documented surveys of the area were undertaken, one in June 1998 (Bett 1999), August 1999 (Bett & Jacobs 2000), and twice during summer 2000 (B. Bett, pers. comm.). Instruments deployed during the studies included side-scan sonar, stills and video cameras and piston corers. However, the entirety of what was lost to heavy-netted fishing trawlers remains unknown. On 22 March 2004 EU Fisheries Ministers in Brussels agreed to give permanent protection to the United Kingdom's unique cold-water coral reefs, recognising the Darwin Mounds as an important habitat. In 2004 deep-water bottom trawling in the area was made illegal.
See also
List of reefs
References
External links
"Biogenic reefs – cold water corals", Joint Nature Conservation Committee, U.K. government, retrieved 8 December 2007
"Trawler ban to protect reefs, BBC News, 20 August 2003
1998 in science
1998 in Scotland
Coral reefs
Landforms of Highland (council area)
Physical oceanography
Reefs of the Atlantic Ocean
Reefs of Scotland
Scottish coast | Darwin Mounds | [
"Physics",
"Biology"
] | 1,020 | [
"Biogeomorphology",
"Applied and interdisciplinary physics",
"Physical oceanography",
"Coral reefs"
] |
251,720 | https://en.wikipedia.org/wiki/Praseodymium | Praseodymium is a chemical element; it has symbol Pr and the atomic number 59. It is the third member of the lanthanide series and is considered one of the rare-earth metals. It is a soft, silvery, malleable and ductile metal, valued for its magnetic, electrical, chemical, and optical properties. It is too reactive to be found in native form, and pure praseodymium metal slowly develops a green oxide coating when exposed to air.
Praseodymium always occurs naturally together with the other rare-earth metals. It is the sixth-most abundant rare-earth element and fourth-most abundant lanthanide, making up 9.1 parts per million of the Earth's crust, an abundance similar to that of boron. In 1841, Swedish chemist Carl Gustav Mosander extracted a rare-earth oxide residue he called didymium from a residue he called "lanthana", in turn separated from cerium salts. In 1885, the Austrian chemist Carl Auer von Welsbach separated didymium into two elements that gave salts of different colours, which he named praseodymium and neodymium. The name praseodymium comes from the Ancient Greek πράσιος (prásios), meaning 'leek-green', and δίδυμος (dídymos), 'twin'.
Like most rare-earth elements, praseodymium most readily forms the +3 oxidation state, which is the only stable state in aqueous solution, although the +4 oxidation state is known in some solid compounds and, uniquely among the lanthanides, the +5 oxidation state is attainable in matrix-isolation conditions. The 0, +1, and +2 oxidation states are rarely found. Aqueous praseodymium ions are yellowish-green, and similarly, praseodymium results in various shades of yellow-green when incorporated into glasses. Many of praseodymium's industrial uses involve its ability to filter yellow light from light sources.
Physical properties
Praseodymium is the third member of the lanthanide series, and a member of the rare-earth metals. In the periodic table, it appears between the lanthanides cerium to its left and neodymium to its right, and above the actinide protactinium. It is a ductile metal with a hardness comparable to that of silver. Praseodymium is calculated to have a very large atomic radius of 247 pm; only barium, rubidium and caesium have larger calculated radii. However, the observed value is usually about 185 pm.
Neutral praseodymium's 59 electrons are arranged in the configuration [Xe]4f36s2.
Like most other lanthanides, praseodymium usually uses only three electrons as valence electrons, as the remaining 4f electrons are too strongly bound to engage in bonding: this is because the 4f orbitals penetrate the most through the inert xenon core of electrons to the nucleus, followed by 5d and 6s, and this penetration increases with higher ionic charge. Even so, praseodymium can in some compounds lose a fourth valence electron because it is early in the lanthanide series, where the nuclear charge is still low enough and the 4f subshell energy high enough to allow the removal of further valence electrons.
Similarly to the other early lanthanides, praseodymium has a double hexagonal close-packed crystal structure at room temperature, called the alpha phase (α-Pr). At it transforms to a different allotrope that has a body-centered cubic structure (β-Pr), and it melts at .
Praseodymium, like all of the lanthanides, is paramagnetic at room temperature. Unlike some other rare-earth metals, which show antiferromagnetic or ferromagnetic ordering at low temperatures, praseodymium is paramagnetic at all temperatures above 1 K.
Chemical properties
Praseodymium metal tarnishes slowly in air, forming a spalling green oxide layer like iron rust; a centimetre-sized sample of praseodymium metal corrodes completely in about a year. It burns readily at 150 °C to form praseodymium(III,IV) oxide, a nonstoichiometric compound approximating to Pr6O11:
12 Pr + 11 O2 → 2 Pr6O11
This may be reduced to praseodymium(III) oxide (Pr2O3) with hydrogen gas. Praseodymium(IV) oxide, PrO2, is the most oxidised product of the combustion of praseodymium and can be obtained by either reaction of praseodymium metal with pure oxygen at 400 °C and 282 bar or by disproportionation of Pr6O11 in boiling acetic acid. The reactivity of praseodymium conforms to periodic trends, as it is one of the first and thus one of the largest lanthanides. At 1000 °C, many praseodymium oxides with composition PrO2−x exist as disordered, nonstoichiometric phases with 0 < x < 0.25, but at 400–700 °C the oxide defects are instead ordered, creating phases of the general formula PrnO2n−2 with n = 4, 7, 9, 10, 11, 12, and ∞. These phases PrOy are sometimes labelled α and β′ (nonstoichiometric), β (y = 1.833), δ (1.818), ε (1.8), ζ (1.778), ι (1.714), θ, and σ.
Praseodymium is an electropositive element and reacts slowly with cold water and quite quickly with hot water to form praseodymium(III) hydroxide:
2 Pr (s) + 6 H2O (l) → 2 Pr(OH)3 (aq) + 3 H2 (g)
Praseodymium metal reacts with all the stable halogens to form trihalides:
2 Pr (s) + 3 F2 (g) → 2 PrF3 (s) [green]
2 Pr (s) + 3 Cl2 (g) → 2 PrCl3 (s) [green]
2 Pr (s) + 3 Br2 (g) → 2 PrBr3 (s) [green]
2 Pr (s) + 3 I2 (g) → 2 PrI3 (s)
The tetrafluoride, PrF4, is also known, and is produced by reacting a mixture of sodium fluoride and praseodymium(III) fluoride with fluorine gas, producing Na2PrF6, following which sodium fluoride is removed from the reaction mixture with liquid hydrogen fluoride. Additionally, praseodymium forms a bronze diiodide; like the diiodides of lanthanum, cerium, and gadolinium, it is a praseodymium(III) electride compound.
Praseodymium dissolves readily in dilute sulfuric acid to form solutions containing the chartreuse Pr3+ ions, which exist as [Pr(H2O)9]3+ complexes:
2 Pr (s) + 3 H2SO4 (aq) → 2 Pr3+ (aq) + 3 SO42− (aq) + 3 H2 (g)
Dissolving praseodymium(IV) compounds in water does not result in solutions containing the yellow Pr4+ ions; because of the high positive standard reduction potential of the Pr4+/Pr3+ couple at +3.2 V, these ions are unstable in aqueous solution, oxidising water and being reduced to Pr3+. The value for the Pr3+/Pr couple is −2.35 V. However, in highly basic aqueous media, Pr4+ ions can be generated by oxidation with ozone.
Although praseodymium(V) in the bulk state is unknown, the existence of praseodymium in its +5 oxidation state (with the stable electron configuration of the preceding noble gas xenon) under noble-gas matrix isolation conditions was reported in 2016. The species assigned to the +5 state were identified as [PrO2]+, its O2 and Ar adducts, and PrO2(η2-O2).
Organopraseodymium compounds
Organopraseodymium compounds are very similar to those of the other lanthanides, as they all share an inability to undergo π backbonding. They are thus mostly restricted to the mostly ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric. The coordination chemistry of praseodymium is largely that of the large, electropositive Pr3+ ion, and is thus largely similar to those of the other early lanthanides La3+, Ce3+, and Nd3+. For instance, like lanthanum, cerium, and neodymium, praseodymium nitrates form both 4:3 and 1:1 complexes with 18-crown-6, whereas the middle lanthanides from promethium to gadolinium can only form the 4:3 complex and the later lanthanides from terbium to lutetium cannot successfully coordinate to all the ligands. Such praseodymium complexes have high but uncertain coordination numbers and poorly defined stereochemistry, with exceptions resulting from exceptionally bulky ligands such as the tricoordinate [Pr{N(SiMe3)2}3]. There are also a few mixed oxides and fluorides involving praseodymium(IV), but it does not have an appreciable coordination chemistry in this oxidation state like its neighbour cerium. However, the first example of a molecular complex of praseodymium(IV) has recently been reported.
Isotopes
Praseodymium has only one stable and naturally occurring isotope, 141Pr. It is thus a mononuclidic and monoisotopic element, and its standard atomic weight can be determined with high precision as it is a constant of nature. This isotope has 82 neutrons, which is a magic number that confers additional stability. This isotope is produced in stars through the s- and r-processes (slow and rapid neutron capture, respectively). Thirty-eight other radioisotopes have been synthesized. All of these isotopes have half-lives under a day (and most under a minute), with the single exception of 143Pr with a half-life of 13.6 days. Both 143Pr and 141Pr occur as fission products of uranium. The primary decay mode of isotopes lighter than 141Pr is positron emission or electron capture to isotopes of cerium, while that of heavier isotopes is beta decay to isotopes of neodymium.
History
In 1751, the Swedish mineralogist Axel Fredrik Cronstedt discovered a heavy mineral from the mine at Bastnäs, later named cerite. Thirty years later, the fifteen-year-old Wilhelm Hisinger, from the family owning the mine, sent a sample of it to Carl Scheele, who did not find any new elements within. In 1803, after Hisinger had become an ironmaster, he returned to the mineral with Jöns Jacob Berzelius and isolated a new oxide, which they named ceria after the dwarf planet Ceres, which had been discovered two years earlier. Ceria was simultaneously and independently isolated in Germany by Martin Heinrich Klaproth. Between 1839 and 1843, ceria was shown to be a mixture of oxides by the Swedish surgeon and chemist Carl Gustaf Mosander, who lived in the same house as Berzelius; he separated out two other oxides, which he named lanthana and didymia. He partially decomposed a sample of cerium nitrate by roasting it in air and then treating the resulting oxide with dilute nitric acid. The metals that formed these oxides were thus named lanthanum and didymium.
While lanthanum turned out to be a pure element, didymium was not and turned out to be only a mixture of all the stable early lanthanides from praseodymium to europium, as had been suspected by Marc Delafontaine after spectroscopic analysis, though he lacked the time to pursue its separation into its constituents. The heavy pair of samarium and europium were only removed in 1879 by Paul-Émile Lecoq de Boisbaudran and it was not until 1885 that Carl Auer von Welsbach separated didymium into praseodymium and neodymium. Von Welsbach confirmed the separation by spectroscopic analysis, but the products were of relatively low purity. Since neodymium was a larger constituent of didymium than praseodymium, it kept the old name with disambiguation, while praseodymium was distinguished by the leek-green colour of its salts (Greek πρασιος, "leek green"). The composite nature of didymium had previously been suggested in 1882 by Bohuslav Brauner, who did not experimentally pursue its separation.
Occurrence and production
Praseodymium is not particularly rare, despite being classed among the rare-earth metals: it makes up 9.2 mg/kg of the Earth's crust. Praseodymium's classification as a rare-earth metal comes from its rarity relative to "common earths" such as lime and magnesia, from the few known minerals containing it for which extraction is commercially viable, and from the length and complexity of extraction. Although not particularly rare, praseodymium is never found as a dominant rare earth in praseodymium-bearing minerals. It is always preceded by cerium and lanthanum and usually also by neodymium.
The Pr3+ ion is similar in size to the early lanthanides of the cerium group (those from lanthanum up to samarium and europium) that immediately follow in the periodic table, and hence it tends to occur along with them in phosphate, silicate and carbonate minerals, such as monazite (MIIIPO4) and bastnäsite (MIIICO3F), where M refers to all the rare-earth metals except scandium and the radioactive promethium (mostly Ce, La, and Y, with somewhat less Nd and Pr). Bastnäsite is usually lacking in thorium and the heavy lanthanides, and the purification of the light lanthanides from it is less involved. The ore, after being crushed and ground, is first treated with hot concentrated sulfuric acid, evolving carbon dioxide, hydrogen fluoride, and silicon tetrafluoride. The product is then dried and leached with water, leaving the early lanthanide ions, including lanthanum, in solution.
The procedure for monazite, which usually contains all the rare earths, as well as thorium, is more involved. Monazite, because of its magnetic properties, can be separated by repeated electromagnetic separation. After separation, it is treated with hot concentrated sulfuric acid to produce water-soluble sulfates of the rare earths. The acidic filtrates are partially neutralized with sodium hydroxide to pH 3–4, during which thorium precipitates as hydroxide and is removed. The solution is treated with ammonium oxalate to convert the rare earths to their insoluble oxalates, the oxalates are converted to oxides by annealing, and the oxides are dissolved in nitric acid. This last step excludes one of the main components, cerium, whose oxide is insoluble in HNO3. Care must be taken when handling some of the residues as they contain 228Ra, the daughter of 232Th, which is a strong gamma emitter.
Praseodymium may then be separated from the other lanthanides via ion-exchange chromatography, or by using a solvent such as tributyl phosphate where the solubility of Ln3+ increases as the atomic number increases. If ion-exchange chromatography is used, the mixture of lanthanides is loaded into one column of cation-exchange resin and Cu2+ or Zn2+ or Fe3+ is loaded into the other. An aqueous solution of a complexing agent, known as the eluant (usually triammonium edtate), is passed through the columns, and Ln3+ is displaced from the first column and redeposited in a compact band at the top of the column before being re-displaced by . The Gibbs free energy of formation for Ln(edta·H) complexes increases along with the lanthanides by about one quarter from Ce3+ to Lu3+, so that the Ln3+ cations descend the development column in a band and are fractionated repeatedly, eluting from heaviest to lightest. They are then precipitated as their insoluble oxalates, burned to form the oxides, and then reduced to metals.
Applications
Leo Moser (not to be confused with the mathematician of the same name), son of Ludwig Moser, founder of the Moser Glassworks in what is now Karlovy Vary in the Czech Republic, investigated the use of praseodymium in glass coloration in the late 1920s, yielding a yellow-green glass given the name "Prasemit". However, at that time far cheaper colorants could give a similar color, so Prasemit was not popular, few pieces were made, and examples are now extremely rare. Moser also blended praseodymium with neodymium to produce "Heliolite" glass ("Heliolit" in German), which was more widely accepted. The first enduring commercial use of purified praseodymium, which continues today, is in the form of a yellow-orange "Praseodymium Yellow" stain for ceramics, which is a solid solution in the zircon lattice. This stain has no hint of green in it; by contrast, at sufficiently high loadings, praseodymium glass is distinctly green rather than pure yellow.
Like many other lanthanides, praseodymium's shielded f-orbitals allow for long excited state lifetimes and high luminescence yields. Pr3+ as a dopant ion therefore sees many applications in optics and photonics. These include DPSS-lasers, single-mode fiber optical amplifiers, fiber lasers, upconverting nanoparticles as well as activators in red, green, blue, and ultraviolet phosphors. Silicate crystals doped with praseodymium ions have also been used to slow a light pulse down to a few hundred meters per second.
As the lanthanides are so similar, praseodymium can substitute for most other lanthanides without significant loss of function, and indeed many applications such as mischmetal and ferrocerium alloys involve variable mixes of several lanthanides, including small quantities of praseodymium. The following more modern applications involve praseodymium specifically or at least praseodymium in a small subset of the lanthanides:
In combination with neodymium, another rare-earth element, praseodymium is used to create high-power magnets notable for their strength and durability. In general, most alloys of the cerium-group rare earths (lanthanum through samarium) with 3d transition metals give extremely stable magnets that are often used in small equipment, such as motors, printers, watches, headphones, loudspeakers, and magnetic storage.
Praseodymium–nickel intermetallic (PrNi5) has such a strong magnetocaloric effect that it has allowed scientists to approach within one thousandth of a degree of absolute zero.
As an alloying agent with magnesium to create high-strength metals that are used in aircraft engines; yttrium and neodymium are suitable substitutes.
Praseodymium is present in the rare-earth mixture whose fluoride forms the core of carbon arc lights, which are used in the motion picture industry for studio lighting and projector lights.
Praseodymium compounds give glasses, enamels and ceramics a yellow color.
Praseodymium is a component of didymium glass, which is used to make certain types of welder's and glass blower's goggles.
Praseodymium oxide in solid solution with ceria or ceria-zirconia has been used as an oxidation catalyst.
Due to its role in permanent magnets used for wind turbines, it has been argued that praseodymium will be one of the main objects of geopolitical competition in a world running on renewable energy. However, this perspective has been criticized for failing to recognize that most wind turbines do not use permanent magnets and for underestimating the power of economic incentives for expanded production.
Biological role and precautions
The early lanthanides have been found to be essential to some methanotrophic bacteria living in volcanic mudpots, such as Methylacidiphilum fumariolicum: lanthanum, cerium, praseodymium, and neodymium are about equally effective. Praseodymium is otherwise not known to have a biological role in any other organisms, but it is not very toxic either. Intravenous injection of rare earths into animals has been known to impair liver function, but the main side effects from inhalation of rare-earth oxides in humans come from radioactive thorium and uranium impurities.
Notes
References
Bibliography
Further reading
R. J. Callow, The Industrial Chemistry of the Lanthanons, Yttrium, Thorium, and Uranium, Pergamon Press, 1967.
Bouhani, H (2020). "Engineering the magnetocaloric properties of PrVO3 epitaxial oxide thin films by strain effects". Applied Physics Letters. 117 (7). arXiv:2008.09193. doi:10.1063/5.0021031.
External links
WebElements.com—Praseodymium
It's Elemental—The Element Praseodymium
Chemical elements
Chemical elements with double hexagonal close-packed structure
Lanthanides
Reducing agents | Praseodymium | [
"Physics",
"Chemistry"
] | 4,752 | [
"Chemical elements",
"Redox",
"Reducing agents",
"Atoms",
"Matter"
] |
252,250 | https://en.wikipedia.org/wiki/Orbit%20%28dynamics%29 | In mathematics, specifically in the study of dynamical systems, an orbit is a collection of points related by the evolution function of the dynamical system. It can be understood as the subset of phase space covered by the trajectory of the dynamical system under a particular set of initial conditions, as the system evolves. As a phase space trajectory is uniquely determined for any given set of phase space coordinates, it is not possible for different orbits to intersect in phase space, therefore the set of all orbits of a dynamical system is a partition of the phase space. Understanding the properties of orbits by using topological methods is one of the objectives of the modern theory of dynamical systems.
For discrete-time dynamical systems, the orbits are sequences; for real dynamical systems, the orbits are curves; and for holomorphic dynamical systems, the orbits are Riemann surfaces.
Definition
Given a dynamical system (T, M, Φ) with T a group, M a set and Φ the evolution function
Φ : U → M, where U ⊆ T × M,
we define
I(x) := {t ∈ T : (t, x) ∈ U},
then the set
γx := {Φ(t, x) : t ∈ I(x)} ⊆ M
is called the orbit through x. An orbit which consists of a single point is called a constant orbit. A non-constant orbit is called closed or periodic if there exists a t ≠ 0 in I(x) such that
Φ(t, x) = x.
Real dynamical system
Given a real dynamical system (R, M, Φ), I(x) is an open interval in the real numbers, that is I(x) = (t⁻ₓ, t⁺ₓ). For any x in M,
γ⁺(x) := {Φ(t, x) : t ∈ I(x), t > 0}
is called the positive semi-orbit through x, and
γ⁻(x) := {Φ(t, x) : t ∈ I(x), t < 0}
is called the negative semi-orbit through x.
Discrete time dynamical system
For a discrete time dynamical system with a time-invariant evolution function f:
The forward orbit of x is the set:
γ_x⁺ := { f^t(x) : t ≥ 0 }
If the function is invertible, the backward orbit of x is the set:
γ_x⁻ := { f^(−t)(x) : t ≥ 0 }
and the orbit of x is the set:
γ_x := { f^t(x) : t ∈ Z }
where:
f is the evolution function,
the set M is the dynamical space,
t is the number of the iteration, which is a natural number, and
x is the initial state of the system.
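As a concrete illustration of a forward orbit, a minimal sketch in Python (not taken from the article; the logistic map and the parameter value are our own choices):
```python
# Compute a finite piece of the forward orbit {x, f(x), f(f(x)), ...}
# of a discrete-time dynamical system.
def forward_orbit(f, x0, n):
    orbit = [x0]
    for _ in range(n):
        orbit.append(f(orbit[-1]))
    return orbit

r = 3.2  # arbitrary illustrative parameter of the logistic map
logistic = lambda x: r * x * (1 - x)

print(forward_orbit(logistic, 0.4, 10))
# For r = 3.2 this orbit is asymptotically periodic: it approaches a period-2 cycle.
```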
General dynamical system
For a general dynamical system, especially in homogeneous dynamics, when one has a "nice" group G acting on a probability space X in a measure-preserving way, an orbit G·x will be called periodic (or equivalently, closed) if the stabilizer Stab_G(x) is a lattice inside G.
In addition, a related term is a bounded orbit, when the set G·x is pre-compact inside X.
The classification of orbits can lead to interesting questions with relations to other mathematical areas; for example, the Oppenheim conjecture (proved by Margulis) and the Littlewood conjecture (partially proved by Lindenstrauss) deal with the question of whether every bounded orbit of some natural action on the homogeneous space is indeed a periodic one. This observation is due to Raghunathan and, in different language, to Cassels and Swinnerton-Dyer. Such questions are intimately related to deep measure-classification theorems.
Notes
It is often the case that the evolution function can be understood to compose the elements of a group, in which case the group-theoretic orbits of the group action are the same thing as the dynamical orbits.
Examples
The orbit of an equilibrium point is a constant orbit.
Stability of orbits
A basic classification of orbits is
constant orbits or fixed points
periodic orbits
non-constant and non-periodic orbits
An orbit can fail to be closed in two ways.
It could be an asymptotically periodic orbit if it converges to a periodic orbit. Such orbits are not closed because they never truly repeat, but they become arbitrarily close to a repeating orbit.
An orbit can also be chaotic. These orbits come arbitrarily close to the initial point, but fail to ever converge to a periodic orbit. They exhibit sensitive dependence on initial conditions, meaning that small differences in the initial value will cause large differences in future points of the orbit.
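A minimal numerical sketch of sensitive dependence on initial conditions (not from the article; the logistic map with r = 4 is a standard chaotic example chosen here purely for illustration):
```python
# Iterate two nearby initial states under the chaotic logistic map x -> 4x(1-x)
# and observe how quickly their orbits separate.
def iterate(f, x0, n):
    x = x0
    for _ in range(n):
        x = f(x)
    return x

f = lambda x: 4.0 * x * (1.0 - x)
a, b = 0.300000, 0.300001          # initial states differing by only 1e-6
print(iterate(f, a, 30), iterate(f, b, 30))
# After about 30 iterations the two orbits are no longer close.
```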
There are other properties of orbits that allow for different classifications. An orbit can be hyperbolic if nearby points approach or diverge from the orbit exponentially fast.
See also
Wandering set
Phase space method
Phase space
Cobweb plot or Verhulst diagram
Periodic points of complex quadratic mappings and multiplier of orbit
Orbit portrait
References
Dynamical systems
Group actions (mathematics) | Orbit (dynamics) | [
"Physics",
"Mathematics"
] | 838 | [
"Mechanics",
"Group actions",
"Symmetry",
"Dynamical systems"
] |
252,329 | https://en.wikipedia.org/wiki/Sequent%20calculus | In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology (called a sequent by Gerhard Gentzen) instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems of a first-order theory rather than conditional tautologies.
Sequent calculus is one of several extant styles of proof calculus for expressing line-by-line logical arguments.
Hilbert style. Every line is an unconditional tautology (or theorem).
Gentzen style. Every line is a conditional tautology (or theorem) with zero or more conditions on the left.
Natural deduction. Every (conditional) line has exactly one asserted proposition on the right.
Sequent calculus. Every (conditional) line has zero or more asserted propositions on the right.
In other words, natural deduction and sequent calculus systems are particular distinct kinds of Gentzen-style systems. Hilbert-style systems typically have a very small number of inference rules, relying more on sets of axioms. Gentzen-style systems typically have very few axioms, if any, relying more on sets of rules.
Gentzen-style systems have significant practical and theoretical advantages compared to Hilbert-style systems. For example, both natural deduction and sequent calculus systems facilitate the elimination and introduction of universal and existential quantifiers so that unquantified logical expressions can be manipulated according to the much simpler rules of propositional calculus. In a typical argument, quantifiers are eliminated, then propositional calculus is applied to unquantified expressions (which typically contain free variables), and then the quantifiers are reintroduced. This very much parallels the way in which mathematical proofs are carried out in practice by mathematicians. Predicate calculus proofs are generally much easier to discover with this approach, and are often shorter. Natural deduction systems are more suited to practical theorem-proving. Sequent calculus systems are more suited to theoretical analysis.
Overview
In proof theory and mathematical logic, sequent calculus is a family of formal systems sharing a certain style of inference and certain formal properties. The first sequent calculi systems, LK and LJ, were introduced in 1934/1935 by Gerhard Gentzen as a tool for studying natural deduction in first-order logic (in classical and intuitionistic versions, respectively). Gentzen's so-called "Main Theorem" (Hauptsatz) about LK and LJ was the cut-elimination theorem, a result with far-reaching meta-theoretic consequences, including consistency. Gentzen further demonstrated the power and flexibility of this technique a few years later, applying a cut-elimination argument to give a (transfinite) proof of the consistency of Peano arithmetic, in surprising response to Gödel's incompleteness theorems. Since this early work, sequent calculi, also called Gentzen systems, and the general concepts relating to them, have been widely applied in the fields of proof theory, mathematical logic, and automated deduction.
Hilbert-style deduction systems
One way to classify different styles of deduction systems is to look at the form of judgments in the system, i.e., which things may appear as the conclusion of a (sub)proof. The simplest judgment form is used in Hilbert-style deduction systems, where a judgment has the form
A
where A is any formula of first-order logic (or whatever logic the deduction system applies to, e.g., propositional calculus or a higher-order logic or a modal logic). The theorems are those formulas that appear as the concluding judgment in a valid proof. A Hilbert-style system needs no distinction between formulas and judgments; we make one here solely for comparison with the cases that follow.
The price paid for the simple syntax of a Hilbert-style system is that complete formal proofs tend to get extremely long. Concrete arguments about proofs in such a system almost always appeal to the deduction theorem. This leads to the idea of including the deduction theorem as a formal rule in the system, which happens in natural deduction.
Natural deduction systems
In natural deduction, judgments have the shape
A1, A2, ..., An ⊢ B
where the Ai's and B are again formulas and n ≥ 0. In other words, a judgment consists of a list (possibly empty) of formulas on the left-hand side of a turnstile symbol "⊢", with a single formula on the right-hand side (though permutations of the Ai's are often immaterial). The theorems are those formulae B such that ⊢ B (with an empty left-hand side) is the conclusion of a valid proof.
(In some presentations of natural deduction, the s and the turnstile are not written down explicitly; instead a two-dimensional notation from which they can be inferred is used.)
The standard semantics of a judgment in natural deduction is that it asserts that whenever , , etc., are all true, will also be true. The judgments
and
are equivalent in the strong sense that a proof of either one may be extended to a proof of the other.
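For example (a standard pair given here for illustration, as our own reconstruction rather than the article's original display), the judgments
A1, ..., An ⊢ B    and    ⊢ (A1 ∧ ... ∧ An) → B
are equivalent in this sense: a proof of the first yields a proof of the second by repeatedly introducing the implication (the deduction theorem), and the first is recovered from the second by eliminating the implication against the assumptions A1, ..., An.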
Sequent calculus systems
Finally, sequent calculus generalizes the form of a natural deduction judgment to
A1, ..., Am ⊢ B1, ..., Bn,
a syntactic object called a sequent. The formulas on left-hand side of the turnstile are called the antecedent, and the formulas on right-hand side are called the succedent or consequent; together they are called cedents or sequents. Again, the Ai and Bj are formulas, and m and n are nonnegative integers, that is, the left-hand-side or the right-hand-side (or neither or both) may be empty. As in natural deduction, theorems are those B where ⊢ B is the conclusion of a valid proof.
The standard semantics of a sequent is an assertion that whenever every Ai is true, at least one Bj will also be true. Thus the empty sequent, having both cedents empty, is false. One way to express this is that a comma to the left of the turnstile should be thought of as an "and", and a comma to the right of the turnstile should be thought of as an (inclusive) "or". The sequents
and
are equivalent in the strong sense that a proof of either sequent may be extended to a proof of the other sequent.
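For example (an illustrative pair in the notation above, given as our own reconstruction rather than the article's original display), for m, n ≥ 1 the sequents
A1, ..., Am ⊢ B1, ..., Bn    and    ⊢ (A1 ∧ ... ∧ Am) → (B1 ∨ ... ∨ Bn)
are interderivable: a proof of either can be extended to a proof of the other using the logical and structural rules introduced below (together with cut in one direction).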
At first sight, this extension of the judgment form may appear to be a strange complication—it is not motivated by an obvious shortcoming of natural deduction, and it is initially confusing that the comma seems to mean entirely different things on the two sides of the turnstile. However, in a classical context the semantics of the sequent can also (by propositional tautology) be expressed either as
¬A1 ∨ ¬A2 ∨ ... ∨ ¬Am ∨ B1 ∨ B2 ∨ ... ∨ Bn
(at least one of the As is false, or one of the Bs is true)
or as
¬(A1 ∧ A2 ∧ ... ∧ Am ∧ ¬B1 ∧ ¬B2 ∧ ... ∧ ¬Bn)
(it cannot be the case that all of the As are true and all of the Bs are false).
In these formulations, the only difference between formulas on either side of the turnstile is that one side is negated. Thus, swapping left for right in a sequent corresponds to negating all of the constituent formulas. This means that a symmetry such as De Morgan's laws, which manifests itself as logical negation on the semantic level, translates directly into a left–right symmetry of sequents—and indeed, the inference rules in sequent calculus for dealing with conjunction (∧) are mirror images of those dealing with disjunction (∨).
Many logicians feel that this symmetric presentation offers a deeper insight in the structure of the logic than other styles of proof system, where the classical duality of negation is not as apparent in the rules.
Distinction between natural deduction and sequent calculus
Gentzen asserted a sharp distinction between his single-output natural deduction systems (NK and NJ) and his multiple-output sequent calculus systems (LK and LJ). He wrote that the intuitionistic natural deduction system NJ was somewhat ugly. He said that the special role of the excluded middle in the classical natural deduction system NK is removed in the classical sequent calculus system LK. He said that the sequent calculus LJ gave more symmetry than natural deduction NJ in the case of intuitionistic logic, as also in the case of classical logic (LK versus NK). Then he said that in addition to these reasons, the sequent calculus with multiple succedent formulas is intended particularly for his principal theorem ("Hauptsatz").
Origin of word "sequent"
The word "sequent" is taken from the word "Sequenz" in Gentzen's 1934 paper. Kleene makes the following comment on the translation into English: "Gentzen says 'Sequenz', which we translate as 'sequent', because we have already used 'sequence' for any succession of objects, where the German is 'Folge'."
Proving logical formulas
Reduction trees
Sequent calculus can be seen as a tool for proving formulas in propositional logic, similar to the method of analytic tableaux. It gives a series of steps that allows one to reduce the problem of proving a logical formula to simpler and simpler formulas until one arrives at trivial ones.
Consider the following formula:
This is written in the following form, where the proposition that needs to be proven is to the right of the turnstile symbol :
Now, instead of proving this from the axioms, it is enough to assume the premise of the implication and then try to prove its conclusion. Hence one moves to the following sequent:
Again the right hand side includes an implication, whose premise can further be assumed so that only its conclusion needs to be proven:
Since the arguments in the left-hand side are assumed to be related by conjunction, this can be replaced by the following:
This is equivalent to proving the conclusion in both cases of the disjunction on the first argument on the left. Thus we may split the sequent to two, where we now have to prove each separately:
In the case of the first judgment, we rewrite as and split the sequent again to get:
The second sequent is done; the first sequent can be further simplified into:
This process can always be continued until there are only atomic formulas in each side.
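As an additional, very small illustration of the same procedure (our own example, not the one treated in the article), consider proving (p ∧ q) → p:
⊢ (p ∧ q) → p        (the goal)
p ∧ q ⊢ p            (assume the premise of the implication)
p, q ⊢ p             (replace the conjunction on the left by its conjuncts)
The last sequent contains only atomic formulas and the atom p occurs on both sides of the turnstile, so this leaf is accepted axiomatically and the original formula is proved.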
The process can be graphically described by a rooted tree, as depicted on the right. The root of the tree is the formula we wish to prove; the leaves consist of atomic formulas only. The tree is known as a reduction tree.
The items to the left of the turnstile are understood to be connected by conjunction, and those to the right by disjunction. Therefore, when both consist only of atomic symbols, the sequent is accepted axiomatically (and always true) if and only if at least one of the symbols on the right also appears on the left.
Following are the rules by which one proceeds along the tree. Whenever one sequent is split into two, the tree vertex has two child vertices, and the tree is branched. Additionally, one may freely change the order of the arguments in each side; Γ and Δ stand for possible additional arguments.
The usual term for the horizontal line used in Gentzen-style layouts for natural deduction is inference line.
Starting with any formula in propositional logic, by a series of steps, the right side of the turnstile can be processed until it includes only atomic symbols. Then, the same is done for the left side. Since every logical operator appears in one of the rules above, and is removed by the rule, the process terminates when no logical operators remain: The formula has been decomposed.
Thus, the sequents in the leaves of the trees include only atomic symbols, which are either provable by the axiom or not, according to whether one of the symbols on the right also appears on the left.
It is easy to see that the steps in the tree preserve the semantic truth value of the formulas implied by them, with conjunction understood between the tree's different branches whenever there is a split. It is also obvious that an axiom is provable if and only if it is true for every assignment of truth values to the atomic symbols. Thus this system is sound and complete for classical propositional logic.
Relation to standard axiomatizations
Sequent calculus is related to other axiomatizations of classical propositional calculus, such as Frege's propositional calculus or Jan Łukasiewicz's axiomatization (itself a part of the standard Hilbert system): Every formula that can be proven in these has a reduction tree. This can be shown as follows: Every proof in propositional calculus uses only axioms and the inference rules. Each use of an axiom scheme yields a true logical formula, and can thus be proven in sequent calculus; examples for these are shown below. The only inference rule in the systems mentioned above is modus ponens, which is implemented by the cut rule.
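As a sketch of how modus ponens is simulated by the cut rule (our illustrative derivation in the notation of the LK rules given below, not a quotation from the article): given ⊢ A → B and ⊢ A, the sequent ⊢ B can be derived as follows.
1. A ⊢ A and B ⊢ B (instances of the axiom (I))
2. A → B, A ⊢ B (by (→L) from 1)
3. A ⊢ B (by (Cut) on A → B, from the premise ⊢ A → B and 2)
4. ⊢ B (by (Cut) on A, from the premise ⊢ A and 3)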
The system LK
This section introduces the rules of the sequent calculus LK (standing for Logistische Kalkül) as introduced by Gentzen in 1934.
A (formal) proof in this calculus is a finite sequence of sequents, where each of the sequents is derivable from sequents appearing earlier in the sequence by using one of the rules below.
Inference rules
The following notation will be used:
⊢, known as the turnstile, separates the assumptions on the left from the propositions on the right
A and B denote formulas of first-order predicate logic (one may also restrict this to propositional logic),
Γ, Δ, Σ, and Π are finite (possibly empty) sequences of formulas (in fact, the order of formulas does not matter; see the structural rules below), called contexts,
when on the left of the ⊢, the sequence of formulas is considered conjunctively (all assumed to hold at the same time),
while on the right of the ⊢, the sequence of formulas is considered disjunctively (at least one of the formulas must hold for any assignment of variables),
t denotes an arbitrary term,
x and y denote variables.
a variable is said to occur free within a formula if it is not bound by the quantifiers ∀ or ∃.
A[t/x] denotes the formula that is obtained by substituting the term t for every free occurrence of the variable x in formula A, with the restriction that the term t must be free for the variable x in A (i.e., no occurrence of any variable in t becomes bound in A[t/x]).
(WL), (WR), (CL), (CR), (PL), (PR): These six stand for the two versions of each of three structural rules; one for use on the left ('L') of a ⊢, and the other on its right ('R'). The rules are abbreviated 'W' for Weakening (Left/Right), 'C' for Contraction, and 'P' for Permutation.
Note that, contrary to the rules for proceeding along the reduction tree presented above, the following rules are for moving in the opposite directions, from axioms to theorems. Thus they are exact mirror-images of the rules above, except that here symmetry is not implicitly assumed, and rules regarding quantification are added.
Restrictions: In the rules (∀R) and (∃L), the variable y must not occur free anywhere in the respective lower sequents.
An intuitive explanation
The above rules can be divided into two major groups: logical and structural ones. Each of the logical rules introduces a new logical formula either on the left or on the right of the turnstile . In contrast, the structural rules operate on the structure of the sequents, ignoring the exact shape of the formulas. The two exceptions to this general scheme are the axiom of identity (I) and the rule of (Cut).
Although stated in a formal way, the above rules allow for a very intuitive reading in terms of classical logic. Consider, for example, the rule (∧L1). It says that, whenever one can prove that Δ can be concluded from some sequence of formulas that contain A, then one can also conclude Δ from the (stronger) assumption that A ∧ B holds. Likewise, the rule (¬R) states that, if Γ and A suffice to conclude Δ, then from Γ alone one can either still conclude Δ or A must be false, i.e. ¬A holds. All the rules can be interpreted in this way.
For an intuition about the quantifier rules, consider the rule (∀R). Of course concluding that ∀x A holds just from the fact that A[y/x] is true is not in general possible. If, however, the variable y is not mentioned elsewhere (i.e. it can still be chosen freely, without influencing the other formulas), then one may assume that A[y/x] holds for any value of y. The other rules should then be pretty straightforward.
Instead of viewing the rules as descriptions for legal derivations in predicate logic, one may also consider them as instructions for the construction of a proof for a given statement. In this case the rules can be read bottom-up; for example, (∧R) says that, to prove that A ∧ B follows from the assumptions Γ and Σ, it suffices to prove that A can be concluded from Γ and B can be concluded from Σ, respectively. Note that, given some antecedent, it is not clear how this is to be split into Γ and Σ. However, there are only finitely many possibilities to be checked since the antecedent by assumption is finite. This also illustrates how proof theory can be viewed as operating on proofs in a combinatorial fashion: given proofs for both Γ ⊢ A and Σ ⊢ B, one can construct a proof for Γ, Σ ⊢ A ∧ B.
When looking for some proof, most of the rules offer more or less direct recipes of how to do this. The rule of cut is different: it states that, when a formula A can be concluded and this formula may also serve as a premise for concluding other statements, then the formula A can be "cut out" and the respective derivations are joined. When constructing a proof bottom-up, this creates the problem of guessing A (since it does not appear at all below). The cut-elimination theorem is thus crucial to the applications of sequent calculus in automated deduction: it states that all uses of the cut rule can be eliminated from a proof, implying that any provable sequent can be given a cut-free proof.
The second rule that is somewhat special is the axiom of identity (I). The intuitive reading of this is obvious: every formula proves itself. Like the cut rule, the axiom of identity is somewhat redundant: the completeness of atomic initial sequents states that the rule can be restricted to atomic formulas without any loss of provability.
Observe that all rules have mirror companions, except the ones for implication. This reflects the fact that the usual language of first-order logic does not include the "is not implied by" connective that would be the De Morgan dual of implication. Adding such a connective with its natural rules would make the calculus completely left–right symmetric.
Example derivations
Here is the derivation of "⊢ A ∨ ¬A", known as the Law of excluded middle (tertium non datur in Latin).
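A standard cut-free LK derivation, written here as a list of sequents with the rule applied at each step (permutation steps omitted; this reconstruction is ours and may differ in layout from the article's original figure):
A ⊢ A                (I)
⊢ A, ¬A              (¬R)
⊢ A ∨ ¬A, ¬A         (∨R)
⊢ A ∨ ¬A, A ∨ ¬A     (∨R)
⊢ A ∨ ¬A             (CR)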
Next is the proof of a simple fact involving quantifiers. Note that the converse is not true, and its falsity can be seen when attempting to derive it bottom-up, because an existing free variable cannot be used in substitution in the rules (∀R) and (∃L).
For something more interesting we shall prove . It is straightforward to find the derivation, which exemplifies the usefulness of LK in automated proving.
These derivations also emphasize the strictly formal structure of the sequent calculus. For example, the logical rules as defined above always act on a formula immediately adjacent to the turnstile, such that the permutation rules are necessary. Note, however, that this is in part an artifact of the presentation, in the original style of Gentzen. A common simplification involves the use of multisets of formulas in the interpretation of the sequent, rather than sequences, eliminating the need for an explicit permutation rule. This corresponds to shifting commutativity of assumptions and derivations outside the sequent calculus, whereas LK embeds it within the system itself.
Relation to analytic tableaux
For certain formulations (i.e. variants) of the sequent calculus, a proof in such a calculus is isomorphic to an upside-down, closed analytic tableau.
Structural rules
The structural rules deserve some additional discussion.
Weakening (W) allows the addition of arbitrary elements to a sequence. Intuitively, this is allowed in the antecedent because we can always restrict the scope of our proof (if all cars have wheels, then it's safe to say that all black cars have wheels); and in the succedent because we can always allow for alternative conclusions (if all cars have wheels, then it's safe to say that all cars have either wheels or wings).
Contraction (C) and Permutation (P) assure that neither the order (P) nor the multiplicity of occurrences (C) of elements of the sequences matters. Thus, one could instead of sequences also consider sets.
The extra effort of using sequences, however, is justified since part or all of the structural rules may be omitted. Doing so, one obtains the so-called substructural logics.
Properties of the system LK
This system of rules can be shown to be both sound and complete with respect to first-order logic, i.e. a statement follows semantically from a set of premises if and only if the sequent can be derived by the above rules.
In the sequent calculus, the rule of cut is admissible. This result is also referred to as Gentzen's Hauptsatz ("Main Theorem").
Variants
The above rules can be modified in various ways:
Minor structural alternatives
There is some freedom of choice regarding the technical details of how sequents and structural rules are formalized without changing what sequents the system derives.
First of all, as mentioned above, the sequents can be viewed to consist of sets or multisets. In this case, the rules for permuting and (when using sets) contracting formulas are unnecessary.
The rule of weakening becomes admissible if the axiom (I) is changed to derive any sequent of the form Γ, A ⊢ A, Δ. Any weakening that appears in a derivation can then be moved to the beginning of the proof. This may be a convenient change when constructing proofs bottom-up.
One may also change whether rules with more than one premise share the same context for each of those premises or split their contexts between them: For example, may be instead formulated as
Contraction and weakening make this version of the rule interderivable with the version above, although in their absence, as in linear logic, these rules define different connectives.
Absurdity
One can introduce ⊥, the absurdity constant representing false, with the axiom:
Γ, ⊥ ⊢ Δ
Or if, as described above, weakening is to be an admissible rule, then with the axiom:
⊥ ⊢
With ⊥, negation can be subsumed as a special case of implication, via the definition ¬A := A → ⊥.
Substructural logics
Alternatively, one may restrict or forbid the use of some of the structural rules. This yields a variety of substructural logic systems. They are generally weaker than LK (i.e., they have fewer theorems), and thus not complete with respect to the standard semantics of first-order logic. However, they have other interesting properties that have led to applications in theoretical computer science and artificial intelligence.
Intuitionistic sequent calculus: System LJ
Surprisingly, some small changes in the rules of LK suffice to turn it into a proof system for intuitionistic logic. To this end, one has to restrict to sequents with at most one formula on the right-hand side, and modify the rules to maintain this invariant. For example, is reformulated as follows (where C is an arbitrary formula):
The resulting system is called LJ. It is sound and complete with respect to intuitionistic logic and admits a similar cut-elimination proof. This can be used in proving disjunction and existence properties.
In fact, the only rules in LK that need to be restricted to single-formula consequents are , (which can be seen as a special case of , as described above) and . When multi-formula consequents are interpreted as disjunctions, all of the other inference rules of LK are derivable in LJ, while the rules and become
and (when does not occur free in the bottom sequent)
These rules are not intuitionistically valid.
See also
Cirquent calculus
Nested sequent calculus
Resolution (logic)
Proof theory
Notes
References
External links
Proof Theory (Sequent Calculi) in the Stanford Encyclopedia of Philosophy
A Brief Diversion: Sequent Calculus
Interactive tutorial of the Sequent Calculus
Proof theory
Logical calculi
Automated theorem proving | Sequent calculus | [
"Mathematics"
] | 5,205 | [
"Automated theorem proving",
"Proof theory",
"Mathematical logic",
"Logical calculi",
"Computational mathematics"
] |
23,588,655 | https://en.wikipedia.org/wiki/Acetylenediol | Acetylenediol, or ethynediol, is a chemical substance with formula HO−C≡C−OH (an ynol). It is the diol of acetylene. Acetylenediol is unstable in the condensed phase, although its tautomer glyoxal (CHO)2 is well known.
Detection
Acetylenediol was first observed in the gas-phase by mass spectrometry. The compound was later obtained by photolysis of squaric acid in a solid argon matrix at . Recently, this molecule was synthesized in interstellar ice analogs composed of carbon monoxide (CO) and water (H2O) upon exposure to energetic electrons and detected upon sublimation by isomer-selective photoionization reflectron time-of-flight mass spectrometry.
Derivatives
Alkoxide derivatives
Like the diol, most simple ether derivatives are labile. Di-tert-butoxyacetylene is however a distillable liquid.
Acetylenediolate salts
Salts of the acetylenediolate (ethynediolate) dianion −O−C≡C−O− are known. They are not, however, prepared from ethynediol, but by the reduction of carbon monoxide. Potassium acetylenediolate (K2C2O2) was first obtained by Liebig in 1834, from the reaction of carbon monoxide with metallic potassium; but for a long time the product was assumed to be "potassium carbonyl" (KCO). Over the next 130 years, the "carbonyls" of sodium (Johannis, 1893), barium (Gunz and Mentrel, 1903), strontium (Roederer, 1906), and lithium, rubidium, and caesium (Pearson, 1933) were described. The reaction was eventually shown to yield a mixture of potassium acetylenediolate and potassium benzenehexolate (K6C6O6).
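Balanced equations consistent with this description (our reconstruction of the overall stoichiometry, not formulas quoted from the article):
2 K + 2 CO → K2C2O2   (potassium acetylenediolate)
6 K + 6 CO → K6C6O6   (potassium benzenehexolate)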
The structure of these salts was clarified only in 1963 by Büchner and Weiss.
Acetylenediolates can also be prepared by the rapid reaction of CO and a solution of the corresponding metal in liquid ammonia at low temperature. Potassium acetylenediolate is a pale yellow solid that reacts explosively with air, halogens, halogenated hydrocarbons, alcohols, water, and any substance which possesses an acidic hydrogen.
Coordination complexes
Acetylenediol can form coordination compounds, such as [TaH(HOC≡COH)(dmpe)2Cl]+Cl− where dmpe is bis(dimethylphosphino)ethane.
Acetylenediolate and related anions such as deltate and squarate have been obtained from carbon monoxide under mild conditions by reductive coupling of CO ligands in organouranium complexes.
See also
Acetylenedicarboxylic acid
Ethynol
Ynol
References
Alkynols
Diols
Organic compounds with 2 carbon atoms | Acetylenediol | [
"Chemistry"
] | 619 | [
"Organic compounds",
"Organic compounds with 2 carbon atoms"
] |
23,589,578 | https://en.wikipedia.org/wiki/Assimilative%20capacity | Assimilative capacity is the ability of an environment to absorb pollutants without detrimental effects to the environment or to those who use it. Natural absorption into an environment is achieved through dilution, dispersion and removal through chemical or biological processes. The term assimilative capacity has been used interchangeably with environmental capacity, receiving capacity and absorptive capacity. It is used as a measurement parameter in hydrology, meteorology and pedology for a variety of environments; examples include lakes, rivers, oceans, cities and soils. Assimilative capacity is a subjective measurement that governments and institutions such as the Environmental Protection Agency (EPA) quantify into guidelines for environments. Using assimilative capacity as a guideline can help the allocation of resources while reducing the impact on organisms in an environment. The concept is paired with carrying capacity in order to facilitate sustainable development of city regions. Assimilative capacity has been critiqued as to its effectiveness, due to ambiguity in its definition that can confuse readers and to the false assumption that a small amount of pollutants has no harmful effect on an environment.
Hydrosphere
Assimilative capacity in hydrology is defined as the maximum amount of contaminating pollutants that a body of water can naturally absorb without exceeding the water quality guidelines and criteria. This determines the concentration of pollutants that can cause detrimental effects on aquatic life and the humans that use it. Self-purification and dilution are the main factors affecting the total assimilative capacity a body of water has. Estimations of breaches of assimilative capacity focus on the health of aquatic organisms in order to predict an excess of pollutants in a body of water. Dilution is the main way that bodies of water reduce the concentration of contaminants to levels under their assimilative capacity. This means that bodies of water that move rapidly or have a large volume of water will have larger assimilative capacities than a slow-moving stream.
Coastal and Marine
Coastal and marine environments have a much greater assimilative capacity because their large volumes of water create a much greater dilution factor. Contaminants would have to be added in much greater volumes in order to exceed the assimilative capacity and create harmful effects on aquatic life. However, oceans are often the end point for many pollutants, resulting in large accumulations. It is estimated that “270 tonnes of nitrogen enter the sea annually” in Western Australia.
Rivers
Rivers are a major focus of monitoring because they are the primary receivers of runoff from agricultural industries, which can result in large changes to their original conditions. Agricultural runoff is high in contaminants including phosphorus and nitrogen. When phosphorus is added to a river, eutrophication can occur: a rapid production of algae whose growth was previously limited by the amount of phosphorus in the water. These algae have a high biochemical oxygen demand and reduce the available oxygen for other aquatic organisms. Close monitoring of the assimilative capacity of rivers is needed in order to prevent eutrophication, which can result in the loss of many aquatic organisms.
Atmosphere
Assimilative capacity of the atmosphere is defined as the maximum load of pollutants that can be added without compromising its resources. Meteorologists calculate the assimilative capacity of the atmosphere using the ventilation coefficient or the pollution potential. The ventilation coefficient is calculated by multiplying the mixing height (the height at which vigorous mixing of gases occurs) by the average wind speed. Atmospheric concentrations change rapidly as gases move due to winds, convection currents and dispersion. The pollution potential is determined by calculating the concentration of pollutants and comparing it to the acceptable limits. This way of calculating takes into consideration the current level of pollutants and assesses how much more can be added before the assimilative capacity is reached. Sulphur dioxide (SO2), nitrogen monoxide (NO), nitrogen dioxide (NO2) and suspended particulate matter (SPM) are important pollutants to measure. High concentrations of sulphur dioxide can cause acid rain, which damages structures and increases the acidity of soils and bodies of water. High concentrations of nitrogen monoxide and nitrogen dioxide can cause photochemical smog, which has adverse effects on those with compromised lungs. High concentrations of suspended particulate matter can be absorbed through the lungs into the bloodstream and can cause pneumonia.
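A minimal sketch of the ventilation-coefficient calculation described above (the numerical inputs are arbitrary illustrative values, not data from the article):
```python
# Ventilation coefficient = mixing height x average wind speed (units: m^2/s).
# Higher values indicate a greater capacity of the atmosphere to disperse pollutants.
def ventilation_coefficient(mixing_height_m, mean_wind_speed_m_s):
    return mixing_height_m * mean_wind_speed_m_s

print(ventilation_coefficient(1500.0, 4.0))  # 6000.0 m^2/s for these example values
```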
Uses to determine management of environments
Assimilative capacity is used as a monitoring guideline for sustainable growth of city regions. Assimilative capacity allows governments to understand how much pressure a region is under. Working within the assimilative capacity means that regions will be constructed with future stability in mind. “An assimilative capacity study develops specific scientific modelling to support and assist municipalities and other legislative authorities in predicting the impacts of land use”.
United States
In the United States legislation on assimilative capacity as a guideline for the maximum amount of pollutants to be added to bodies of water comes from each individual state and from the environmental protection agency. Assimilative capacity is a quantitatively useful concept codified in the Clean Water Act and other laws and regulations that is unrelated to the perception of an environmental crisis. Assimilative capacity specifically refers to the capacity for a body of water to absorb constituents without exceeding a specific concentration, such as a water quality objective. Water quality objectives are set and periodically revised by regulatory agencies, such as the Environmental Protection Agency (EPA), to define the limits of water quality for different uses, which include human health, but also other ecologically important functions, wildlife habitat, irrigated agriculture, etc. For example, if the irrigation water quality objective for salt is 450 mg/L of total dissolved solids, the assimilative capacity of a body of water would be the amount of salt that could be added to the water such that its concentration would not exceed 450 mg/L.
India
India uses assimilative capacity in the management of land, water and air, though each has a widely varying assimilative capacity due to variations in the types of pollutants and differences in dilution, dispersion, and chemical and biological breakdown in differing environments.
Comparison to accommodative capacity
Assimilative capacity has been critiqued as to the value it adds as a tool for creating guidelines in hydrology. There is a large amount of ambiguity in the definition, as it is subjective. It has been questioned what exactly statements such as "harmful to aquatic organisms" mean: "death of individual organisms, elimination of food chains, or a change in energy flow patterns". Inconsistency in assimilative capacity has led to the term being restricted by the National Oceanic and Atmospheric Administration (NOAA) and the Environmental Protection Agency (EPA). Accommodative capacity is used to mean "the rate at which waste material can be added to a body of water in such a way that the ambient concentration of contaminants is maintained below levels that produce unacceptable biological impact". Accommodative capacity has been suggested as a way to remove ambiguity, as its uses have been defined more precisely in quantitative terms.
See also
Water pollution
Sewage
References
Environmental science
Ecology
Hydrology
Atmosphere
Environmental management schemes | Assimilative capacity | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 1,463 | [
"Hydrology",
"Ecology",
"nan",
"Environmental engineering"
] |
6,165,182 | https://en.wikipedia.org/wiki/Lyman-alpha%20blob | In astronomy, a Lyman-alpha blob (LAB) is a huge concentration of a gas emitting the Lyman-alpha emission line. LABs are some of the largest known individual objects in the Universe. Some of these gaseous structures are more than 400,000 light years across. So far they have only been found in the high-redshift universe because of the ultraviolet nature of the Lyman-alpha emission line. Since Earth's atmosphere is very effective at filtering out UV photons, the Lyman-alpha photons must be redshifted in order to be transmitted through the atmosphere.
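As a rough numerical illustration (ours, not from the article): the rest-frame Lyman-alpha wavelength of about 121.6 nm is observed at
λ_obs = (1 + z) × 121.6 nm,
so for a blob at redshift z ≈ 3 the line falls near 486 nm, in the optical window accessible from the ground, whereas at low redshift it remains in the ultraviolet and is blocked by the atmosphere.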
The most famous Lyman-alpha blobs were discovered in 2000 by Steidel et al. Matsuda et al., using the Subaru Telescope of the National Astronomical Observatory of Japan extended the search for LABs and found over 30 new LABs in the original field of Steidel et al., although they were all smaller than the originals. These LABs form a structure which is more than 200 million light-years in extent. It is currently unknown whether LABs trace overdensities of galaxies in the high-redshift universe (as high redshift radio galaxies—which also have extended Lyman-alpha halos—do, for example), nor which mechanism produces the Lyman-alpha emission line, or how the LABs are connected to the surrounding galaxies. Lyman-alpha blobs may hold valuable clues to determine how galaxies are formed.
The most massive Lyman-alpha blobs have been discovered by Tristan Friedrich et al. (2021), Steidel et al. (2000), Francis et al. (2001), Matsuda et al. (2004), Dey et al. (2005), Nilsson et al. (2006), and Smith & Jarvis et al. (2007).
Examples
Himiko
LAB-1
EQ J221734.0+001701, the SSA22 Protocluster
Ton 618, hyperluminous quasar powering a Lyman-alpha blob; also possesses one of the most massive black holes known.
See also
Damped Lyman-alpha system
Galaxy filament
Green bean galaxy
Lyman-alpha forest
Lyman-alpha emitter
Lyman break galaxy
Newfound Blob (disambiguation)
Notes
Astronomical spectroscopy
Intergalactic media
Large-scale structure of the cosmos
Articles containing video clips | Lyman-alpha blob | [
"Physics",
"Chemistry",
"Astronomy"
] | 486 | [
"Spectrum (physical sciences)",
"Outer space",
"Intergalactic media",
"Astrophysics",
"Astronomical spectroscopy",
"Spectroscopy"
] |
6,165,800 | https://en.wikipedia.org/wiki/Mixer-settler | Mixer settlers are a class of mineral process equipment used in the solvent extraction process. A mixer settler consists of a first stage that mixes the phases together followed by a quiescent settling stage that allows the phases to separate by gravity.
Mixer
A mixing chamber where a mechanical agitator brings the feed solution and the solvent into intimate contact to carry out the transfer of solute(s). The mechanical agitator is equipped with a motor which drives a mixing and pumping turbine. This turbine draws the two phases from the settlers of the adjacent stages, mixes them, and transfers the emulsion to the associated settler.
The mixer may consist of one or multiple stages of mixing tanks. Common laboratory mixers consist of a single mixing stage, whereas industrial scale copper mixers may consist of up to three mixer stages where each stage performs a combined pumping and mixing action. Use of multiple stages allows a longer reaction time and also minimizes the short circuiting of unreacted material through the mixers.
Settler
A settling chamber where the two phases separate by static decantation. Coalescence plates facilitate the separation of the emulsion into two phases (heavy and light). The two phases then pass to continuous stages by overflowing the light phase and heavy phase weirs. The height of the heavy phase weir can be adjusted in order to position the heavy/light interphase in the settling chamber based on the density of each one of the phases.
The settler is a calm pool downstream of the mixer where the liquids are allowed to separate by gravity. The liquids are then removed separately from the end of the mixer.
Use
Industrial mixer settlers are commonly used in the copper, nickel, uranium, lanthanide, and cobalt hydrometallurgy industries, when solvent extraction processes are applied.
They are also used in the Nuclear reprocessing field to separate and purify primarily Uranium and Plutonium, removing the fission product impurities.
In the multiple countercurrent process, multiple mixer settlers are installed with mixing and settling chambers located at alternating ends for each stage (since the outlet of the settling sections feed the inlets of the adjacent stage's mixing sections). Mixer-settlers are used when a process requires longer residence times and when the solutions are easily separated by gravity. They require a large facility footprint, but do not require much headspace, and need limited remote maintenance capability for occasional replacement of mixing motors. (Colven, 1956; Davidson, 1957)
The equipment units can be arrayed as:
extraction (moving an ion of interest from an aqueous phase to an organic phase),
washing (rinsing entrained aqueous contaminant out of an organic phase containing the ion of interest), and
stripping (moving an ion of interest from an organic phase into an aqueous phase).
Copper Example
In the case of oxide copper ore, a heap leaching pad produces a dilute copper sulfate solution in a weak sulfuric acid solution. This pregnant leach solution (PLS) is pumped to an extraction mixer settler where it is mixed with the organic phase (a kerosene-hosted extractant). The copper transfers to the organic phase, and the aqueous phase (now called raffinate) is pumped back to the heap to recover more copper.
In a high-chloride environment typical of Chilean copper mines, a wash stage will rinse any residual pregnant solution entrained in the organic with clean water.
The copper is then stripped from organic phase in the strip stage into a strong sulfuric acid solution suitable for electrowinning. This strong acid solution is called barren electrolyte when it enters the cell, and strong electrolyte when it is copper bearing after reacting in the cell.
See also
Solvent extraction
Hydrometallurgy
Mineral processing
References
University of Illinois in Chicago (Fall 1999) by Zachary Fijal, Constantinos Loukeris, Zhaleh Naghibzadeh, John Walsdorf, URL: https://web.archive.org/web/20060901162817/http://vienna.bioengr.uic.edu/teaching/che396/sepProj/Snrtem~1.pdf as found on 21 November 2006
Separation processes
Chemical equipment | Mixer-settler | [
"Chemistry",
"Engineering"
] | 871 | [
"Chemical equipment",
"nan",
"Separation processes"
] |
6,166,592 | https://en.wikipedia.org/wiki/Hartog%20Plate | Hartog Plate or Dirk Hartog's Plate is either of two pewter plates, although primarily the first, which were left on Dirk Hartog Island on the western coast of Australia before European settlement there. The first plate, left in 1616 by Dutch explorer Dirk Hartog, is the oldest-known artefact of European exploration in Australia still in existence. A replacement, which includes the text of the original and some new text, was left in 1697; the original dish was returned to the Netherlands, where it is now on display in the Rijksmuseum. Further additions at the site, in 1801 and 1818, led to the location being named Cape Inscription.
Dirk Hartog, 1616
Dirk Hartog was the first confirmed European to see Western Australia, reaching it in his ship the Eendracht. On 25 October 1616, he landed at Cape Inscription on the very northernmost tip of Dirk Hartog Island, in Shark Bay. Before departing, Hartog left behind a pewter dinner plate, nailed to a post and placed upright in a fissure on the cliff top.
The plate bears the inscription:
1616, DEN 25 OCTOBER IS HIER AENGECOMEN HET SCHIP D EENDRACHT
VAN AMSTERDAM, DEN OPPERKOPMAN GILLIS MIBAIS VAN LVICK SCHIPPER DIRCK HATICHS VAN AMSTERDAM
DE 27 DITO TE SEIL GEGHM (sic) NA BANTAM DEN
ONDERCOOPMAN JAN STINS OPPERSTVIERMAN PIETER DOEKES VAN BIL Ao 1616.
Translated into English:
1616, on the 25th October, arrived here the ship Eendracht of
Amsterdam; the upper merchant, Gilles Mibais of Liège; Captain Dirk
Hartog of Amsterdam; the 27th ditto set sail for Bantam; undermerchant
Jan Stein, upper steersman, Pieter Doekes from Bil, A[nn]o 1616.
Willem de Vlamingh, 1697
Eighty-one years later, in 1697, the Dutch sea captain Willem de Vlamingh also reached the island and discovered Hartog's pewter dish with the post almost rotted away. He removed it and replaced it with another plate which was attached to a new post. The new post was made of a cypress pine trunk taken from Rottnest Island. The original dish was returned to the Netherlands, where it is still kept in the Rijksmuseum Amsterdam. De Vlamingh's replacement dish contains all of the text of Hartog's original plate as well as listing the senior crew of his own voyage. It concludes with:
1697. Den 4den Februaij is hier aengecomen het schip de GEELVINK voor
Amsterdam, den Comander ent schipper, Willem de Vlamingh van Vlielandt,
Adsistent Joannes van Bremen, van Coppenhagen; Opperstvierman Michil Bloem vant
Sticgt, van Bremen De Hoecker de NYPTANGH, schipper Gerrit Colaart van
Amsterdam; Adsistent Theodorus Heirmans van dito Opperstierman Gerrit
Gerritsen van Bremen, 't Galjoot t' WESELTJE, Gezaghebber Cornelis de
Vlamingh van Vlielandt; Stvierman Coert Gerritsen van Bremen, en van hier
gezeilt met onse vlot den voorts net Zvydtland verder te ondersoecken en
gedestineert voor Batavia.
Translated into English:
On the 4th of February, 1697, arrived here the ship
GEELVINCK, of Amsterdam; Commandant Wilhelm de Vlamingh, of Vlielandt;
assistant, Jan van Bremen, of Copenhagen; first pilot, Michiel Bloem van
Estight, of Bremen. The hooker, the NYPTANGH, Captain Gerrit Collaert, of
Amsterdam, Assistant Theodorus Heermans, of the same place; first pilot,
Gerrit Gerritz, of Bremen; then the galliot WESELTJE, Commander Cornelis
de Vlaming, of Vlielandt; Pilot Coert Gerritz, from Bremen. Sailed from
here with our fleet on the 12th, to explore the South Land, and
afterwards bound for Batavia.
Emmanuel Hamelin, 1801
In 1801, the French captain of the Naturaliste, Jacques Félix Emmanuel Hamelin, second-in-command of an expedition led by Nicolas Baudin in the Geographe entered Shark Bay and sent a party ashore. The party found Vlamingh's plate, even though it was half buried in the sand, as the post had rotted away with the ravages of the weather. When they took the plate to the ship, Hamelin ordered it to be returned, believing its removal would be tantamount to sacrilege. He also had a plate, or similar, of his own prepared and inscribed with details of his voyage (dating to 16 July 1801) and he had both erected at the Vlamingh site, even adding a small Dutch flag to the plaque. It was then named Cape Inscription.
Louis de Freycinet, 1818
In 1818, in the Uranie, French explorer Louis de Freycinet, who had been an officer in Hamelin's 1801 crew, sent a boat ashore to recover Vlamingh's plate and substituted a lead plate, which has never been found. His wife Rose de Freycinet, who was on board, having stowed away with her husband's assistance, recorded the event in what was in effect a diary of her circumnavigation. After the Uranie was wrecked in the Falkland Islands, the plate and other materials from the voyage were later transferred to another ship and taken to France, where the plate was presented to the Académie Française in Paris.
After being lost for more than a century, the Vlamingh plate was rediscovered in 1940 on the bottom shelf of a small room, mixed up with old copper engraving plates. In recognition of Australian losses in the defence of France during the two world wars, the plate was eventually returned to Australia in 1947 and is currently housed in the Western Australian Maritime Museum in Fremantle, Western Australia.
Cape Inscription Lighthouse plaques
Marking the location in 1938, the Commonwealth government commemorated Dirk Hartog's landing with a brass plaque.
Just short of 60 years later, on 12 February 1997, the then-premier of Western Australia Richard Court unveiled a bronze plaque to mark the tricentennial of Vlamingh's visit.
The lighthouse and plaques are located at .
See also
History of Western Australia
References
Exploration of Western Australia
Shark Bay
Archaeological artifacts
European exploration of Australia
Maritime history of the Dutch East India Company
Former properties of the Dutch East India Company
17th-century inscriptions
Metallic objects
1616 works
1616 in Oceania
1697 works
1697 in Oceania
Collection of the Rijksmuseum | Hartog Plate | [
"Physics"
] | 1,437 | [
"Metallic objects",
"Physical objects",
"Matter"
] |
6,168,349 | https://en.wikipedia.org/wiki/Electric%20energy%20consumption | Electric energy consumption is energy consumption in the form of electrical energy. About a fifth of global energy is consumed as electricity: for residential, industrial, commercial, transportation and other purposes.
The global electricity consumption in 2022 was 24,398 terawatt-hour (TWh), almost exactly three times the amount of consumption in 1981 (8,132 TWh). China, the United States, and India accounted for more than half of the global share of electricity consumption. Japan and Russia followed with nearly twice the consumption of the remaining industrialized countries.
Overview
Electric energy is most often measured either in joules (J), or in watt hours (W·h).
1 W·s = 1 J
1 W·h = 3,600 W·s = 3,600 J
1 kWh = 3,600 kWs = 1,000 Wh = 3.6 million W·s = 3.6 million J
Electric and electronic devices consume electric energy to generate desired output (light, heat, motion, etc.). During operation, some part of the energy is lost depending on the electrical efficiency.
Electricity has been generated in power stations since 1882. The invention of the steam turbine in 1884 to drive the electric generator led to an increase in worldwide electricity consumption.
In 2022, the total worldwide electricity production was nearly 29,000 TWh. Total primary energy is converted into numerous forms, including, but not limited to, electricity, heat and motion. Some primary energy is lost during the conversion to electricity, as seen in the United States, where a little more than 60% was lost in 2022.
Electricity accounted for more than 20% of worldwide final energy consumption in 2022, with oil being less than 40%, coal being less than 9%, natural gas being less than 15%, biofuels and waste less than 10%, and other sources (such as heat, solar electricity, wind electricity and geothermal) being more than 5%. The total final electricity consumption in 2022 was split unevenly between the following sectors: industry (42.2%), residential (26.8%), commercial and public services (21.1%), transport (1.8%), and other (8.1%; i.e., agriculture and fishing). In 1981, the final electricity consumption continued to decrease in the industrial sector and increase in the residential, commercial and public services sectors.
A sensitivity analysis on an adaptive neuro-fuzzy network model for electric demand estimation shows that employment is the most critical factor influencing electrical consumption. The study used six parameters as input data, employment, GDP, dwelling, population, heating degree day and cooling degree day, with electricity demand as output variable.
World electricity consumption
The table lists 45 electricity-consuming countries, which used about 22,000 TWh. These countries comprise about 90% of the final consumption of 190+ countries. The final consumption to generate this electricity is provided for every country. The data is from 2022.
In 2022, OECD's final electricity consumption was over 10,000 TWh. In that year, the industrial sector consumed about 42.2% of the electricity, with the residential sector consuming nearly 26.8%, the commercial and public services sectors consuming about 21.1%, the transport sector consuming nearly 1.8%, and the other sectors (such as agriculture and fishing) consuming nearly 8.1%. In recent decades, the consumption in the residential and commercial and public services sectors has grown, while the industry consumption has declined. More recently, the transport sector has witnessed an increase in consumption with the growth in the electric vehicle market.
Consumption per capita
The final consumption divided by the number of inhabitants provides a country's consumption per capita. In Western Europe, this is between 4 and 8 MWh/year. (1 MWh = 1,000 kWh) In Scandinavia, the United States, Canada, Taiwan, South Korea, Australia, Japan and the United Kingdom, the per capita consumption is higher; however, in developing countries, it is much lower. The world's average was about 3 MWh/year in 2022. Very low consumption levels, such as those in the Philippines (not included in the table), indicate that many inhabitants are not connected to the electricity grid, and that is the reason why some of the world's most populous countries, including Nigeria and Bangladesh, do not appear in the table.
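A quick cross-check of the per-capita figure using the 2022 total quoted earlier in this article (a minimal sketch; the world-population value of roughly 8 billion is our own assumption, not a figure from the article):
```python
# Convert the global 2022 consumption from TWh to MWh and divide by population.
world_consumption_twh = 24_398      # TWh in 2022, as stated in the article
world_population = 8.0e9            # assumed ~8 billion people in 2022
per_capita_mwh = world_consumption_twh * 1_000_000 / world_population
print(f"{per_capita_mwh:.2f} MWh per person")  # about 3 MWh, matching the article's world average
```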
Electricity generation and GDP
The table lists 30 countries, which represent about 76% of the world population, 84% of the world GDP, and 85% of the world electricity generation. Productivity per electricity generation (concept similar to energy intensity) can be measured by dividing GDP over the electricity generated. The data is from 2019.
Electricity consumption by sector
The table below lists the 15 countries with the highest final electricity consumption, which comprised more than 70% of the global consumption in 2022.
Electricity outlook
Looking forward, increasing energy efficiency will result in less electricity needed for a given demand in power, but demand will increase strongly on account of:
Economic growth in developing countries, and
Electrification of transport and heating: combustion engines are replaced by electric drives, and for heating less gas and oil but more electricity is used, where possible with heat pumps.
The International Energy Agency expects revisions of subsidies for fossil fuels which amounted to $550 billion in 2013, more than four times renewable energy subsidies. In this scenario, almost half of the increase in 2040 of electricity consumption is covered by more than 80% growth of renewable energy. Many new nuclear plants will be constructed, mainly to replace old ones. The nuclear part of electricity generation will increase from 11 to 12%. The renewable part goes up much more, from 21 to 33%. The IEA warns that in order to restrict global warming to 2 °C, carbon dioxide emissions must not exceed 1000 gigaton (Gt) from 2014. This limit is reached in 2040 and emissions will not drop to zero ever.
The World Energy Council sees world electricity consumption increasing to more than 40,000 TWh/a in 2040. The fossil part of generation depends on energy policy. It can stay around 70% in the so-called "Jazz" scenario where countries rather independently "improvise" but it can also decrease to around 40% in the "Symphony" scenario if countries work "orchestrated" for more climate friendly policy. Carbon dioxide emissions, 32 Gt/a in 2012, will increase to 46 Gt/a in Jazz but decrease to 26 Gt/a in Symphony. Accordingly, until 2040 the renewable part of generation will stay at about 20% in Jazz but increase to about 45% in Symphony.
An EU survey on climate and energy consumption conducted in 2022 found that 63% of people in the European Union want energy costs to depend on use, with the greatest consumers paying more. This compares with 83% in China, 63% in the UK and 57% in the US. 24% of Americans surveyed believed that people and businesses should do more to cut their own usage (compared to 20% in the UK, 19% in the EU, and 17% in China).
Nearly half of those polled in the European Union (47%) and the United Kingdom (45%) want their government to focus on the development of renewable energies. This is compared to 37% in both the United States and China when asked to list their priorities on energy.
See also
Electricity generation
Electricity retailing
List of countries by energy intensity
List of countries by carbon dioxide emissions
List of countries by electricity consumption
List of countries by electricity production
List of countries by energy consumption per capita
List of countries by greenhouse gas emissions
List of countries by renewable electricity production
List of countries by energy consumption and production
World energy supply and consumption
References
External links
World Electricity production 2012
World Map and Chart of Energy Consumption by country by Lebanese-economy-forum, World Bank data
Electricity Information 2019 - IEA
Electric power
Consumption
Energy consumption
Energy development
Energy policy | Electric energy consumption | [
"Physics",
"Engineering",
"Environmental_science"
] | 1,630 | [
"Physical quantities",
"Energy policy",
"Power (physics)",
"Electric power",
"Electrical engineering",
"Environmental social science"
] |
6,168,723 | https://en.wikipedia.org/wiki/Heterodiamond | Heterodiamond is a superhard material containing boron, carbon, and nitrogen (BCN). It is formed at high temperatures and high pressures, e.g., by application of an explosive shock wave to a mixture of diamond and cubic boron nitride (c-BN). Heterodiamond is a polycrystalline material made up of coagulated nano-crystallites, and the fine powder has a deep bluish-black tinge. Heterodiamond has both the high hardness of diamond and the excellent heat resistance of cubic BN. These characteristic properties are due to the diamond structure combined with the sp3 σ-bonds among carbon and the heteroatoms.
Cubic BC2N can be synthesized from graphite-like BC2N at pressures above 18 GPa and temperatures higher than . The bulk modulus of c-BC2N is 282 GPa which is one of the highest bulk moduli known for any solid, and is exceeded only by the bulk moduli of diamond and c-BN. The hardness of c-BC2N is higher than that of c-BN single crystals.
References
See also
Boron nitride
Superhard materials
Boron compounds
Carbides
Nitrides | Heterodiamond | [
"Physics"
] | 254 | [
"Materials stubs",
"Materials",
"Superhard materials",
"Matter"
] |
22,194,510 | https://en.wikipedia.org/wiki/Signorini%20problem | The Signorini problem is an elastostatics problem in linear elasticity: it consists in finding the elastic equilibrium configuration of an anisotropic non-homogeneous elastic body, resting on a rigid frictionless surface and subject only to its mass forces. The name was coined by Gaetano Fichera to honour his teacher, Antonio Signorini: the original name coined by him is problem with ambiguous boundary conditions.
History
The problem was posed by Antonio Signorini during a course taught at the Istituto Nazionale di Alta Matematica in 1959, later published as the article , expanding a previous short exposition he gave in a note published in 1933. himself called it problem with ambiguous boundary conditions, since there are two alternative sets of boundary conditions the solution must satisfy on any given contact point. The statement of the problem involves not only equalities but also inequalities, and it is not a priori known what of the two sets of boundary conditions is satisfied at each point. Signorini asked to determine if the problem is well-posed or not in a physical sense, i.e. if its solution exists and is unique or not: he explicitly invited young analysts to study the problem.
Gaetano Fichera and Mauro Picone attended the course, and Fichera started to investigate the problem: since he found no references to similar problems in the theory of boundary value problems, he decided to approach it by starting from first principles, specifically from the virtual work principle.
During Fichera's researches on the problem, Signorini began to suffer serious health problems: nevertheless, he desired to know the answer to his question before his death. Picone, being tied by a strong friendship with Signorini, began to chase Fichera to find a solution: Fichera himself, being tied as well to Signorini by similar feelings, perceived the last months of 1962 as worrying days. Finally, on the first days of January 1963, Fichera was able to give a complete proof of the existence of a unique solution for the problem with ambiguous boundary condition, which he called the "Signorini problem" to honour his teacher. A preliminary research announcement, later published as , was written up and submitted to Signorini exactly a week before his death. Signorini expressed great satisfaction to see a solution to his question.
A few days later, Signorini had with his family Doctor, Damiano Aprile, the conversation quoted above.
The solution of the Signorini problem coincides with the birth of the field of variational inequalities.
Formal statement of the problem
The content of this section and the following subsections follows closely the treatment of Gaetano Fichera in , and also : his derivation of the problem is different from Signorini's one in that he does not consider only incompressible bodies and a plane rest surface, as Signorini does. The problem consists in finding the displacement vector from the natural configuration of an anisotropic non-homogeneous elastic body that lies in a subset of the three-dimensional euclidean space whose boundary is and whose interior normal is the vector , resting on a rigid frictionless surface whose contact surface (or more generally contact set) is and subject only to its body forces , and surface forces applied on the free (i.e. not in contact with the rest surface) surface : the set and the contact surface characterize the natural configuration of the body and are known a priori. Therefore, the body has to satisfy the general equilibrium equations
written using the Einstein notation (as in all the following development), the ordinary boundary conditions on and the following two sets of boundary conditions on , where is the Cauchy stress tensor. Obviously, the body forces and surface forces cannot be given in an arbitrary way: they must satisfy a condition in order for the body to reach an equilibrium configuration, and this condition will be deduced and analyzed in the following development.
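A sketch of these relations in common notation (the symbols used below, the body $A$, the contact set $\Sigma$, the displacement $u_i$, the Cauchy stress $\sigma_{ij}$, the interior normal $n_i$, the body forces $f_i$ and the surface forces $g_i$, are assumptions of this sketch, not necessarily the notation of the original treatment):

```latex
% Equilibrium equations in the interior of the body A (summation over repeated indices):
\frac{\partial \sigma_{ij}}{\partial x_j} + f_i = 0 \quad \text{in } A,
% Ordinary (traction) boundary condition on the free part of the boundary:
\sigma_{ij}\, n_j = g_i \quad \text{on } \partial A \setminus \Sigma .
```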
The ambiguous boundary conditions
If is any tangent vector to the contact set , then the ambiguous boundary conditions at each point of this set are expressed by one of the following two alternative systems of inequalities.
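A sketch of the two systems in the same assumed notation (with $\tau_i$ any tangent vector to $\Sigma$), consistent with the point-by-point analysis below:

```latex
% Support (contact) case: no normal displacement, non-negative normal traction,
% and no tangential traction (the rest surface is frictionless):
u_i n_i = 0, \qquad \sigma_{ij} n_j n_i \ge 0, \qquad \sigma_{ij} n_j \tau_i = 0,
% or separation case: the point leaves the rest surface, so the traction vanishes:
u_i n_i > 0, \qquad \sigma_{ij} n_j n_i = 0, \qquad \sigma_{ij} n_j \tau_i = 0.
```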
Let's analyze their meaning:
Each set of conditions consists of three relations, equalities or inequalities, and all the second members are the zero function.
The quantities at first member of each first relation are proportional to the norm of the component of the displacement vector directed along the normal vector .
The quantities at first member of each second relation are proportional to the norm of the component of the tension vector directed along the normal vector ,
The quantities at the first member of each third relation are proportional to the norm of the component of the tension vector along any vector tangent in the given point to the contact set .
The quantities at the first member of each of the three relations are positive if they have the same sense of the vector they are proportional to, while they are negative if not, therefore the constants of proportionality are respectively and .
Knowing these facts, the set of conditions applies to points of the boundary of the body which do not leave the contact set in the equilibrium configuration, since, according to the first relation, the displacement vector has no components directed as the normal vector , while, according to the second relation, the tension vector may have a component directed as the normal vector and having the same sense. In an analogous way, the set of conditions applies to points of the boundary of the body which leave that set in the equilibrium configuration, since displacement vector has a component directed as the normal vector , while the tension vector has no components directed as the normal vector . For both sets of conditions, the tension vector has no tangent component to the contact set, according to the hypothesis that the body rests on a rigid frictionless surface.
Each system expresses a unilateral constraint, in the sense that they express the physical impossibility of the elastic body to penetrate into the surface where it rests: the ambiguity is not only in the unknown values non-zero quantities must satisfy on the contact set but also in the fact that it is not a priori known if a point belonging to that set satisfies the system of boundary conditions or . The set of points where is satisfied is called the area of support of the elastic body on , while its complement respect to is called the area of separation.
The above formulation is general since the Cauchy stress tensor i.e. the constitutive equation of the elastic body has not been made explicit: it is equally valid assuming the hypothesis of linear elasticity or the ones of nonlinear elasticity. However, as it would be clear from the following developments, the problem is inherently nonlinear, therefore assuming a linear stress tensor does not simplify the problem.
The form of the stress tensor in the formulation of Signorini and Fichera
The form assumed by Signorini and Fichera for the elastic potential energy is the following one (as in the previous developments, the Einstein notation is adopted)
where
is the elasticity tensor
is the infinitesimal strain tensor
The Cauchy stress tensor has therefore the following form
and it is linear with respect to the components of the infinitesimal strain tensor; however, it is neither homogeneous nor isotropic.
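In a common normalization (a sketch only; the exact constants and sign conventions used by Signorini and Fichera may differ), the quadratic elastic potential and the resulting linear stress–strain law can be written as:

```latex
% Quadratic elastic potential energy density, with elasticity tensor a_{ijkh}(x)
% and infinitesimal strain e_{ij}(u) = (du_i/dx_j + du_j/dx_i)/2:
W(x,\varepsilon) = \tfrac{1}{2}\, a_{ijkh}(x)\, \varepsilon_{ij}\, \varepsilon_{kh},
\qquad
\sigma_{ij} = \frac{\partial W}{\partial \varepsilon_{ij}} = a_{ijkh}(x)\, \varepsilon_{kh}.
```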
Solution of the problem
As for the section on the formal statement of the Signorini problem, the contents of this section and the included subsections follow closely the treatment of Gaetano Fichera in , , and also : obviously, the exposition focuses on the basic steps of the proof of the existence and uniqueness for the solution of problem , , , and , rather than the technical details.
The potential energy
The first step of the analysis of Fichera as well as the first step of the analysis of Antonio Signorini in is the analysis of the potential energy, i.e. the following functional
where belongs to the set of admissible displacements i.e. the set of displacement vectors satisfying the system of boundary conditions or . The meaning of each of the three terms is the following
the first one is the total elastic potential energy of the elastic body
the second one is the total potential energy due to the body forces, for example the gravitational force
the third one is the potential energy due to surface forces, for example the forces exerted by the atmospheric pressure
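Putting the three terms above together, a sketch of the potential energy functional in the assumed notation (again, the symbols are illustrative) is:

```latex
% Total potential energy of an admissible displacement u:
I(u) = \int_A W\bigl(x, \varepsilon(u)\bigr)\, \mathrm{d}x
     - \int_A u_i f_i\, \mathrm{d}x
     - \int_{\partial A \setminus \Sigma} u_i g_i\, \mathrm{d}\sigma .
```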
was able to prove that the admissible displacement which minimizes the integral is a solution of the problem with ambiguous boundary conditions , , , and , provided it is a function supported on the closure of the set : however, Gaetano Fichera gave a class of counterexamples in showing that in general, admissible displacements are not smooth functions of this class. Therefore, Fichera tries to minimize the functional in a wider function space: in doing so, he first calculates the first variation (or functional derivative) of the given functional in the neighbourhood of the sought minimizing admissible displacement , and then requires it to be greater than or equal to zero
Defining the following functionals
and
the preceding inequality can be written as
This inequality is the variational inequality for the Signorini problem.
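In a common modern notation (a sketch; the set of admissible displacements is denoted $\mathcal{U}_\Sigma$ here, and the bilinear and linear forms $B$ and $F$ stand for the two functionals described above), the variational inequality can be written as:

```latex
% Bilinear form of the elastic energy and linear form of the work of the given forces:
B(u,v) = \int_A a_{ijkh}\, \varepsilon_{ij}(u)\, \varepsilon_{kh}(v)\, \mathrm{d}x,
\qquad
F(v) = \int_A v_i f_i\, \mathrm{d}x + \int_{\partial A \setminus \Sigma} v_i g_i\, \mathrm{d}\sigma,
% Variational inequality: find an admissible u such that
B(u,\, v - u) - F(v - u) \ge 0 \qquad \text{for all } v \in \mathcal{U}_\Sigma .
```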
See also
Linear elasticity
Variational inequality
Notes
References
Historical references
.
. A brief research survey describing the field of variational inequalities.
. The encyclopedia entry about problems with unilateral constraints (the class of boundary value problems the Signorini problem belongs to) he wrote for the Handbuch der Physik on invitation by Clifford Truesdell.
. The birth of the theory of variational inequalities remembered thirty years later (English translation of the contribution title) is an historical paper describing the beginning of the theory of variational inequalities from the point of view of its founder.
. A volume collecting almost all works of Gaetano Fichera in the fields of history of mathematics and scientific divulgation.
, (vol. 1), (vol. 2), (vol. 3). Three volumes collecting Gaetano Fichera's most important mathematical papers, with a biographical sketch of Olga A. Oleinik.
. A volume collecting Antonio Signorini's most important works with an introduction and a commentary of Giuseppe Grioli.
Research works
.
. A short research note announcing and describing (without proofs) the solution of the Signorini problem.
. The first paper where an existence and uniqueness theorem for the Signorini problem is proved.
. An English translation of the previous paper.
.
.
External links
Alessio Figalli, On global homogeneous solutions to the Signorini problem,
Calculus of variations
Continuum mechanics
Elasticity (physics)
Partial differential equations | Signorini problem | [
"Physics",
"Materials_science"
] | 2,159 | [
"Physical phenomena",
"Continuum mechanics",
"Elasticity (physics)",
"Deformation (mechanics)",
"Classical mechanics",
"Physical properties"
] |
22,196,872 | https://en.wikipedia.org/wiki/Little%20string%20theory | In theoretical physics, little string theory is a non-gravitational non-local theory in six spacetime dimensions that can be obtained as an effective theory of NS5-branes in the limit in which gravity decouples. Little string theories exhibit T-duality, much like the full string theory.
References
String theory | Little string theory | [
"Astronomy"
] | 66 | [
"String theory",
"Astronomical hypotheses"
] |
22,200,477 | https://en.wikipedia.org/wiki/Ringing%20artifacts | In signal processing, particularly digital image processing, ringing artifacts are artifacts that appear as spurious signals near sharp transitions in a signal. Visually, they appear as bands or "ghosts" near edges; audibly, they appear as "echos" near transients, particularly sounds from percussion instruments; most noticeable are the pre-echos. The term "ringing" is because the output signal oscillates at a fading rate around a sharp transition in the input, similar to a bell after being struck. As with other artifacts, their minimization is a criterion in filter design.
Introduction
The main cause of ringing artifacts is due to a signal being bandlimited (specifically, not having high frequencies) or passed through a low-pass filter; this is the frequency domain description.
In terms of the time domain, the cause of this type of ringing is the ripples in the sinc function, which is the impulse response (time domain representation) of a perfect low-pass filter. Mathematically, this is called the Gibbs phenomenon.
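As a minimal numerical sketch of this effect (not from the original article; the signal length and cutoff below are arbitrary assumptions), the following NumPy snippet band-limits a step by zeroing its high-frequency bins and prints the resulting overshoot and undershoot:

```python
import numpy as np

n = 1024
x = np.zeros(n)
x[n // 2:] = 1.0                 # unit step in the middle of the signal

X = np.fft.rfft(x)
cutoff = 40                      # brick-wall low-pass: keep only the lowest 40 bins
X[cutoff:] = 0.0
y = np.fft.irfft(X, n)           # band-limited version of the step

print("max of filtered step:", y.max())   # greater than 1: overshoot
print("min of filtered step:", y.min())   # less than 0: undershoot
# The output oscillates (rings) on both sides of the transition,
# with the ripple amplitude fading away from it - the Gibbs phenomenon.
```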
One may distinguish overshoot (and undershoot), which occurs when transitions are accentuated – the output is higher than the input – from ringing, where after an overshoot, the signal overcorrects and is now below the target value; these phenomena often occur together, and are thus often conflated and jointly referred to as "ringing".
The term "ringing" is most often used for ripples in the time domain, though it is also sometimes used for frequency domain effects:
windowing a filter in the time domain by a rectangular function causes ripples in the frequency domain for the same reason as a brick-wall low pass filter (rectangular function in the frequency domain) causes ripples in the time domain, in each case the Fourier transform of the rectangular function being the sinc function.
There are related artifacts caused by other frequency domain effects, and similar artifacts due to unrelated causes.
Causes
Description
By definition, ringing occurs when a non-oscillating input yields an oscillating output: formally, when an input signal which is monotonic on an interval has output response which is not monotonic. This occurs most severely when the impulse response or step response of a filter has oscillations – less formally, if for a spike input, respectively a step input (a sharp transition), the output has bumps. Ringing most commonly refers to step ringing, and that will be the focus.
Ringing is closely related to overshoot and undershoot, which is when the output takes on values higher than the maximum (respectively, lower than the minimum) input value: one can have one without the other, but in important cases, such as a low-pass filter, one first has overshoot, then the response bounces back below the steady-state level, causing the first ring, and then oscillates back and forth above and below the steady-state level. Thus overshoot is the first step of the phenomenon, while ringing is the second and subsequent steps. Due to this close connection, the terms are often conflated, with "ringing" referring to both the initial overshoot and the subsequent rings.
If one has a linear time invariant (LTI) filter, then one can understand the filter and ringing in terms of the impulse response (the time domain view), or in terms of its Fourier transform, the frequency response (the frequency domain view). Ringing is a time domain artifact, and in filter design is traded off with desired frequency domain characteristics: the desired frequency response may cause ringing, while reducing or eliminating ringing may worsen the frequency response.
sinc filter
The central example, and often what is meant by "ringing artifacts", is the ideal (brick-wall) low-pass filter, the sinc filter. This has an oscillatory impulse response function, as illustrated above, and the step response – its integral, the sine integral – thus also features oscillations, as illustrated at right.
These ringing artifacts are not results of imperfect implementation or windowing: the ideal low-pass filter, while possessing the desired frequency response, necessarily causes ringing artifacts in the time domain.
Time domain
In terms of impulse response, the correspondence between these artifacts and the behavior of the function is as follows:
impulse undershoot is equivalent to the impulse response having negative values,
impulse ringing (ringing near a point) is precisely equivalent to the impulse response having oscillations, which is equivalent to the derivative of the impulse response alternating between negative and positive values,
and there is no notion of impulse overshoot, as the unit impulse is assumed to have infinite height (and integral 1 – a Dirac delta function), and thus cannot be overshot.
Turning to step response,
the step response is the integral of the impulse response; formally, the value of the step response at time a is the integral of the impulse response. Thus values of the step response can be understood in terms of tail integrals of the impulse response.
Assume that the overall integral of the impulse response is 1, so it sends constant input to the same constant as output – otherwise the filter has gain, and scaling by gain gives an integral of 1.
Step undershoot is equivalent to a tail integral being negative, in which case the magnitude of the undershoot is the value of the tail integral.
Step overshoot is equivalent to a tail integral being greater than 1, in which case the magnitude of the overshoot is the amount by which the tail integral exceeds 1 – or equivalently the value of the tail in the other direction, since these add up to 1.
Step ringing is equivalent to tail integrals alternating between increasing and decreasing – taking derivatives, this is equivalent to the impulse response alternating between positive and negative values. Regions where an impulse response are below or above the x-axis (formally, regions between zeros) are called lobes, and the magnitude of an oscillation (from peak to trough) equals the integral of the corresponding lobe.
The impulse response may have many negative lobes, and thus many oscillations, each yielding a ring, though these decay for practical filters, and thus one generally only sees a few rings, with the first generally being most pronounced.
Note that if the impulse response has small negative lobes and larger positive lobes, then it will exhibit ringing but not undershoot or overshoot: the tail integral will always be between 0 and 1, but will oscillate down at each negative lobe. However, in the sinc filter, the lobes monotonically decrease in magnitude and alternate in sign, as in the alternating harmonic series, and thus tail integrals alternate in sign as well, so it exhibits overshoot as well as ringing.
Conversely, if the impulse response is always nonnegative, so it has no negative lobes – the function is a probability distribution – then the step response will exhibit neither ringing nor overshoot or undershoot – it will be a monotonic function growing from 0 to 1, like a cumulative distribution function. Thus the basic solution from the time domain perspective is to use filters with nonnegative impulse response.
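The relations above can be summarized compactly (a sketch; here $h$ denotes the impulse response, normalized to unit integral, and $s$ the step response):

```latex
% Step response as the running (tail) integral of the impulse response:
s(t) = \int_{-\infty}^{t} h(\tau)\, \mathrm{d}\tau,
\qquad
\int_{-\infty}^{\infty} h(\tau)\, \mathrm{d}\tau = 1 .
% Undershoot at time t means s(t) < 0; overshoot means s(t) > 1; ringing means s
% alternates between increasing and decreasing, i.e. h alternates in sign.
% If h is nonnegative everywhere, s rises monotonically from 0 to 1.
```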
Frequency domain
The frequency domain perspective is that ringing is caused by the sharp cut-off in the rectangular passband in the frequency domain, and thus is reduced by smoother roll-off, as discussed below.
Solutions
Solutions depend on the parameters of the problem: if the cause is a low-pass filter, one may choose a different filter design, which reduces artifacts at the expense of worse frequency domain performance. On the other hand, if the cause is a band-limited signal, as in JPEG, one cannot simply replace a filter, and ringing artifacts may prove hard to fix – they are present in JPEG 2000 and many audio compression codecs (in the form of pre-echo), as discussed in the examples.
Low-pass filter
If the cause is the use of a brick-wall low-pass filter, one may replace the filter with one that reduces the time domain artifacts, at the cost of frequency domain performance. This can be analyzed from the time domain or frequency domain perspective.
In the time domain, the cause is an impulse response that oscillates, assuming negative values. This can be resolved by using a filter whose impulse response is non-negative and does not oscillate, but shares desired traits. For example, for a low-pass filter, the Gaussian filter is non-negative and non-oscillatory, hence causes no ringing. However, it is not as good as a low-pass filter: it rolls off in the passband, and leaks in the stopband: in image terms, a Gaussian filter "blurs" the signal, which reflects the attenuation of desired higher frequency signals in the passband.
A general solution is to use a window function on the sinc filter, which cuts off or reduces the negative lobes: these respectively eliminate and reduce overshoot and ringing. Note that truncating some but not all of the lobes eliminates the ringing beyond that point, but does not reduce the amplitude of the ringing that is not truncated (because this is determined by the size of the lobe), and increases the magnitude of the overshoot if the last non-cut lobe is negative, since the magnitude of the overshoot is the integral of the tail, which is no longer canceled by positive lobes.
Further, in practical implementations one at least truncates sinc, otherwise one must use infinitely many data points (or rather, all points of the signal) to compute every point of the output – truncation corresponds to a rectangular window, and makes the filter practically implementable, but the frequency response is no longer perfect.
In fact, if one takes a brick-wall low-pass filter (sinc in the time domain, rectangular in the frequency domain) and truncates it (multiplies it by a rectangular function in the time domain), this convolves the frequency domain with sinc (the Fourier transform of the rectangular function) and causes ringing in the frequency domain, which is referred to as ripple. In symbols, $\mathcal{F}\{\operatorname{sinc}\cdot\operatorname{rect}\} = \operatorname{rect} * \operatorname{sinc}$ (up to scaling). The frequency ringing in the stopband is also referred to as side lobes. Flat response in the passband is desirable, so one windows with functions whose Fourier transforms have fewer oscillations, so that the frequency domain behavior is better.
Multiplication in the time domain corresponds to convolution in the frequency domain, so multiplying a filter by a window function corresponds to convolving the Fourier transform of the original filter by the Fourier transform of the window, which has a smoothing effect – thus windowing in the time domain corresponds to smoothing in the frequency domain, and reduces or eliminates overshoot and ringing.
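The effect of windowing can be sketched numerically (an illustrative example, not from the article; the kernel length, cutoff and choice of Hann window are assumptions): compare the step response of a truncated sinc kernel with that of the same kernel multiplied by a Hann window.

```python
import numpy as np

m = 129                                        # odd kernel length
t = np.arange(m) - m // 2
fc = 0.1                                       # normalized cutoff frequency
sinc_kernel = 2 * fc * np.sinc(2 * fc * t)     # truncated (rectangular-windowed) sinc
hann_kernel = sinc_kernel * np.hanning(m)      # Hann-windowed sinc

def step_response(h):
    h = h / h.sum()                            # normalize to unit DC gain
    return np.cumsum(h)                        # response to a unit step input

for name, h in [("truncated sinc", sinc_kernel), ("Hann-windowed sinc", hann_kernel)]:
    s = step_response(h)
    print(f"{name:20s} overshoot = {s.max() - 1:+.4f}, minimum = {s.min():+.4f}")
# The windowed kernel shows far smaller overshoot and ringing, at the cost of a
# slower transition (i.e. worse frequency selectivity), as described above.
```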
In the frequency domain, the cause can be interpreted as being due to the sharp (brick-wall) cut-off, and ringing is reduced by using a filter with smoother roll-off. This is the case for the Gaussian filter, whose magnitude Bode plot is a downward-opening parabola (quadratic roll-off), as its Fourier transform is again a Gaussian, hence (up to scale) $e^{-\pi f^2}$ – taking logarithms yields $-\pi f^2$, a downward-opening parabola.
In electronic filters, the trade-off between frequency domain response and time domain ringing artifacts is well illustrated by the Butterworth filter: the frequency response of a Butterworth filter slopes down linearly on the log scale, with a first-order filter having a slope of −6 dB per octave, a second-order filter −12 dB per octave, and an nth order filter a slope of −6n dB per octave – in the limit, this approaches a brick-wall filter. Thus, among these, the first-order filter rolls off slowest, and hence exhibits the fewest time domain artifacts, but leaks the most in the stopband, while as order increases, the leakage decreases, but artifacts increase.
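A sketch of the standard Butterworth magnitude response behind this trade-off (with cutoff frequency $\omega_c$ and order $n$; this formula is added here as background, it is not quoted from the article):

```latex
% n-th order Butterworth magnitude response:
|H(j\omega)| = \frac{1}{\sqrt{1 + (\omega/\omega_c)^{2n}}},
% so well above the cutoff the gain falls roughly as (\omega_c/\omega)^n,
% i.e. about 6n dB per octave (20n dB per decade); as n grows, the response
% approaches the ideal brick-wall filter, and the time domain ringing worsens.
20 \log_{10} |H(j\omega)| \approx -20\, n\, \log_{10}(\omega/\omega_c)\ \text{dB}
\quad \text{for } \omega \gg \omega_c .
```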
Benefits
While ringing artifacts are generally considered undesirable, the initial overshoot (haloing) at transitions increases acutance (apparent sharpness) by increasing the derivative across the transition, and thus can be considered as an enhancement.
Related phenomena
Overshoot
Another artifact is overshoot (and undershoot), which manifests itself not as rings, but as an increased jump at the transition. It is related to ringing, and often occurs in combination with it.
Overshoot and undershoot are caused by a negative tail: in the sinc, the integral from the first zero to infinity, including the first negative lobe. Ringing, by contrast, is caused by a following positive tail: in the sinc, the integral from the second zero to infinity, including the first non-central positive lobe.
Thus overshoot is necessary for ringing, but can occur separately: for example, the 2-lobed Lanczos filter has only a single negative lobe on each side, with no following positive lobe, and thus exhibits overshoot but no ringing, while the 3-lobed Lanczos filter exhibits both overshoot and ringing, though the windowing reduces this compared to the sinc filter or the truncated sinc filter.
Similarly, the convolution kernel used in bicubic interpolation is similar to a 2-lobe windowed sinc, taking on negative values, and thus produces overshoot artifacts, which appear as halos at transitions.
Clipping
Following from overshoot and undershoot is clipping.
If the signal is bounded, for instance an 8-bit or 16-bit integer, this overshoot and undershoot can exceed the range of permissible values, thus causing clipping.
Strictly speaking, the clipping is caused by the combination of overshoot and limited numerical accuracy, but it is closely associated with ringing, and often occurs in combination with it.
Clipping can also occur for unrelated reasons, from a signal simply exceeding the range of a channel.
On the other hand, clipping can be exploited to conceal ringing in images. Some modern JPEG codecs, such as mozjpeg and ISO libjpeg, use such a trick to reduce ringing by deliberately causing overshoots in the IDCT results. This idea originated in a mozjpeg patch.
Ringing and ripple
In signal processing and related fields, the general phenomenon of time domain oscillation is called ringing, while frequency domain oscillations are generally called ripple, though generally not "rippling".
A key source of ripple in digital signal processing is the use of window functions: if one takes an infinite impulse response (IIR) filter, such as the sinc filter, and windows it to make it have a finite impulse response, as in the window design method, then the frequency response of the resulting filter is the convolution of the frequency response of the IIR filter with the frequency response of the window function. Notably, the frequency response of the rectangular filter is the sinc function (the rectangular function and the sinc function are Fourier duals of each other), and thus truncation of a filter in the time domain corresponds to multiplication by the rectangular filter, thus convolution by the sinc filter in the frequency domain, causing ripple. In symbols, the frequency response of $f \cdot w$ is $\hat f * \hat w$. In particular, truncating the sinc function itself yields $\operatorname{sinc} \cdot \operatorname{rect}$ in the time domain, and $\operatorname{rect} * \operatorname{sinc}$ in the frequency domain, so just as low-pass filtering (truncating in the frequency domain) causes ringing in the time domain, truncating in the time domain (windowing by a rectangular filter) causes ripple in the frequency domain.
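The same point can be sketched numerically in the frequency domain (illustrative NumPy code, not from the article; the tap count, cutoff and stopband edge are assumptions): plain truncation of an ideal low-pass kernel leaves high side lobes (ripple), while a smoother window suppresses them.

```python
import numpy as np

m = 129
t = np.arange(m) - m // 2
fc = 0.1
ideal = 2 * fc * np.sinc(2 * fc * t)       # ideal low-pass kernel, truncated to m taps

def stopband_peak_db(h, n_fft=4096):
    h = h / h.sum()                        # unit DC gain
    H = np.abs(np.fft.rfft(h, n_fft))      # magnitude of the frequency response
    freqs = np.fft.rfftfreq(n_fft)         # normalized frequencies, 0 to 0.5
    stop = H[freqs > 1.5 * fc]             # region beyond the transition band
    return 20 * np.log10(stop.max())

print("rectangular window:", stopband_peak_db(ideal), "dB")
print("Hann window       :", stopband_peak_db(ideal * np.hanning(m)), "dB")
# The plainly truncated (rectangular-windowed) design leaks far more into the
# stopband: its side lobes, i.e. frequency-domain ripple, are much higher than
# those of the Hann-windowed design.
```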
Examples
JPEG
JPEG compression can introduce ringing artifacts at sharp transitions, which are particularly visible in text.
This is due to the loss of high frequency components, as in step response ringing.
JPEG uses 8×8 blocks, on which the discrete cosine transform (DCT) is performed. The DCT is a Fourier-related transform, and ringing occurs because of loss of high frequency components or loss of precision in high frequency components.
They can also occur at the edge of an image: since JPEG splits images into 8×8 blocks, if an image is not an integer number of blocks, the edge cannot easily be encoded, and solutions such as filling with a black border create a sharp transition in the source, hence ringing artifacts in the encoded image.
Ringing also occurs in the wavelet-based JPEG 2000.
JPEG and JPEG 2000 have other artifacts, as illustrated above, such as blocking ("jaggies") and edge busyness ("mosquito noise"), though these are due to specifics of the formats, and are not ringing as discussed here.
Some illustrations:
Baseline JPEG and JPEG2000 Artifacts Illustrated
Pre-echo
In audio signal processing, ringing can cause echoes to occur before and after transients, such as the impulsive sound from percussion instruments like cymbals (this is impulse ringing). The (causal) echo after the transient is not heard, because it is masked by the transient, an effect called temporal masking. Thus only the (anti-causal) echo before the transient is heard, and the phenomenon is called pre-echo.
This phenomenon occurs as a compression artifact in audio compression algorithms that use Fourier-related transforms, such as MP3, AAC, and Vorbis.
Similar phenomena
Other phenomena have similar symptoms to ringing, but are otherwise distinct in their causes. In cases where these cause circular artifacts around point sources, these may be referred to as "rings" due to the round shape (formally, an annulus), which is unrelated to the "ringing" (oscillatory decay) frequency phenomenon discussed on this page.
Edge enhancement
Edge enhancement, which aims to increase edges, may cause ringing phenomena, particularly under repeated application, such as by a DVD player followed by a television. This may be done by high-pass filtering, rather than low-pass filtering.
Special functions
Many special functions exhibit oscillatory decay, and thus convolving with such a function yields ringing in the output; one may consider these ringing, or restrict the term to unintended artifacts in frequency domain signal processing.
Fraunhofer diffraction yields the Airy disk as point spread function, which has a ringing pattern.
The Bessel function of the first kind, which is related to the Airy function, exhibits such decay.
In cameras, a combination of defocus and spherical aberration can yield circular artifacts ("ring" patterns). However, the pattern of these artifacts need not be similar to ringing (as discussed on this page) – they may exhibit oscillatory decay (circles of decreasing intensity), or other intensity patterns, such as a single bright band.
Interference
Ghosting is a form of television interference where an image is repeated. Though this is not ringing, it can be interpreted as convolution with a function, which is 1 at the origin and ε (the intensity of the ghost) at some distance, which is formally similar to the above functions (a single discrete peak, rather than continuous oscillation).
Lens flare
In photography, lens flare is a defect where various circles can appear around highlights, and with ghosts throughout a photo, due to undesired light, such as reflection and scattering off elements in the lens.
Visual illusions
Visual illusions can occur at transitions, as in Mach bands, which perceptually exhibit a similar undershoot/overshoot to the Gibbs phenomenon.
See also
Artifact (error)
Digital artifact
sinc filter
Brick-wall filter
Chromatic aberration
Ghosting (television)
Gibbs phenomenon
Low-pass filter
Pre-echo
Purple fringing
References
Signal processing
Computer graphic artifacts | Ringing artifacts | [
"Technology",
"Engineering"
] | 4,011 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
22,201,177 | https://en.wikipedia.org/wiki/Chlorinated%20polycyclic%20aromatic%20hydrocarbon | Chlorinated polycyclic aromatic hydrocarbons (Cl-PAHs) are a group of compounds comprising polycyclic aromatic hydrocarbons with two or more aromatic rings and one or more chlorine atoms attached to the ring system. Cl-PAHs can be divided into two groups: chloro-substituted PAHs, which have one or more hydrogen atoms substituted by a chlorine atom, and chloro-added Cl-PAHs, which have two or more chlorine atoms added to the molecule. They are products of incomplete combustion of organic materials. They have many congeners, and the occurrences and toxicities of the congeners differ. Cl-PAHs are hydrophobic compounds and their persistence within ecosystems is due to their low water solubility. They are structurally similar to other halogenated hydrocarbons such as polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs), and polychlorinated biphenyls (PCBs). Cl-PAHs in the environment are strongly susceptible to the effects of gas/particle partitioning, seasonal sources, and climatic conditions.
Sources
Chlorinated polycyclic aromatic hydrocarbons are generated by combustion of organic compounds. Cl-PAHs enter the environment from a multiplicity of sources and tend to persist in soil and in particulate matter in air. Environmental data and emission sources analysis for Cl-PAHs reveal that the dominant process of generation is by reaction of PAHs with chlorine in pyrosynthesis. Cl-PAHs have commonly been detected in tap water, fly ash from an incineration plant for radioactive waste, emissions from coal combustion and municipal waste incineration, automobile exhaust, snow, and urban air. They have also been detected in electronic wastes, workshop-floor dust, vegetation, and surface soil collected from the vicinity of an electronic waste (e-waste) recycling facility and in surface soil from a chemical industrial complex (comprising a coke-oven plant, a coal-fired power plant, and a chlor-alkali plant), and agricultural areas in central and eastern China. In addition, the combustion of polyvinylchloride and plastic wrap made from polyvinylidene chloride result in the production of Cl-PAHs, suggesting that combustion of organic materials including chlorine is a possible source of environmental pollution.
A specific class of Cl-PAHs, polychlorinated naphthalenes (PCNs), are persistent, bioaccumulative, and toxic contaminants that have been reported to occur in a wide variety of environmental and biological matrixes. Cl-PAHs with three to five rings have been reported to occur in air from road tunnels, sediment, snow, and kraft pulp mills.
Recently, the occurrence of particulate Cl-PAHs has been investigated. Results have shown that most particulate Cl-PAH concentration detected in urban air tended to be high in colder seasons and low in warmer seasons. This study also determined through compositional analysis that relatively low molecular weight Cl-PAHs dominated in warmer seasons and high molecular weight Cl-PAHs dominated in colder seasons.
Toxicity
Because some Cl-PAHs are structurally similar to dioxins, they are suspected of having similar toxicities. These types of compounds are known to be carcinogenic, mutagenic, and teratogenic. Toxicological studies have shown that some Cl-PAHs possess greater mutagenicity, aryl hydrocarbon receptor activity, and dioxin-like toxicity than the corresponding parent PAHs.
The relative potency of three-ring Cl-PAHs was found to increase with increasing degree of chlorination. However, the relative potencies of the most toxic Cl-PAHs assessed up to now have been found to be 100,000-fold lower than the relative potency of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). Even though Cl-PAHs are not as toxic as TCDD, it has been determined using recombinant bacterial cells that the toxicities of exposure to Cl-PAHs based on AhR activity were approximately 30-50 times higher than that of dioxins. Cl-PAHs demonstrate a high enough toxicity to be a potential health risk to human populations that come into contact with them.
DNA interaction
One of the well-established mechanisms by which chlorinated polycyclic aromatic hydrocarbons can exert their toxic effects is via the aryl hydrocarbon receptor (AhR). The AhR-mediated activities of Cl-PAHs have been determined by using yeast assay systems. The aryl hydrocarbon receptor is a cytosolic, ligand-activated transcription factor. Cl-PAHs have the ability to bind to and activate the AhR. The biological pathway involves translocation of the activated AhR to the nucleus, where the AhR binds with the AhR nuclear translocator protein to form a heterodimer. This process leads to transcriptional modulation of genes, causing adverse changes in cellular processes and function.
Several Cl-PAHs have been determined to be AhR-active. One such Cl-PAH, 6-chlorochrysene, has been shown to have a high affinity for the Ah receptor and to be a potent AHH inducer. Therefore, Cl-PAHs may be toxic to humans, and it is important to better understand their behavior in the environment.
Several Cl-PAHs have also been found to exhibit mutagenic activity toward Salmonella typhimurium in the Ames assay.
References
Chloroarenes
Incineration | Chlorinated polycyclic aromatic hydrocarbon | [
"Chemistry",
"Engineering"
] | 1,202 | [
"Combustion engineering",
"Incineration"
] |
22,202,480 | https://en.wikipedia.org/wiki/Orbital%20propellant%20depot | An orbital propellant depot is a cache of propellant that is placed in orbit around Earth or another body to allow spacecraft or the transfer stage of the spacecraft to be fueled in space. It is one of the types of space resource depots that have been proposed for enabling infrastructure-based space exploration. Many depot concepts exist depending on the type of fuel to be supplied, location, or type of depot which may also include a propellant tanker that delivers a single load to a spacecraft at a specified orbital location and then departs. In-space fuel depots are not necessarily located near or at a space station.
Potential users of in-orbit refueling and storage facilities include space agencies, defense ministries and communications satellite or other commercial companies.
Satellite servicing depots would extend the lifetime of satellites that have nearly consumed their orbital maneuvering fuel and are likely placed in a geosynchronous orbit. The spacecraft would conduct a space rendezvous with the depot, or vice versa, and then transfer propellant to be used for subsequent orbital maneuvers. In 2011, Intelsat showed interest in an initial demonstration mission to refuel several satellites in geosynchronous orbit, but all plans have been since scrapped.
A low Earth orbit (LEO) depot's primary function would be to provide propellant to a transfer stage headed to the Moon, Mars, or possibly a geosynchronous orbit. Since all or a fraction of the transfer stage propellant can be off-loaded, the separately launched spacecraft with payload and/or crew could have a larger mass or use a smaller launch vehicle. With a LEO depot or tanker fill, the size of the launch vehicle can be reduced and the flight rate increased—or, with a newer mission architecture where the beyond-Earth-orbit spacecraft also serves as the second stage, can facilitate much larger payloads—which may reduce the total launch costs since the fixed costs are spread over more flights and fixed costs are usually lower with smaller launch vehicles. A depot could also be placed at Earth-Moon Lagrange point 1 (EML-1) or behind the Moon at EML-2 to reduce costs to travel to the Moon or Mars. Placing a depot in Mars orbit has also been suggested.
In 2024, on Starship’s third integrated flight, intravehicular propellant transfer in orbit was demonstrated. An intervehicular propellant transfer demonstration mission is planned for 2025, as this capability is critical for landing a crew on the Moon with the Starship HLS vehicle.
LEO depot fuels
For rockets and space vehicles, propellants usually take up 2/3 or more of their total mass.
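Why propellant dominates the mass budget can be sketched with the Tsiolkovsky rocket equation (a standard relation added here for illustration; the numerical values below are round assumptions, not figures from this article):

```latex
% Ideal rocket equation, relating delta-v, effective exhaust velocity v_e
% and the initial/final mass ratio:
\Delta v = v_e \ln\frac{m_0}{m_f}
\quad\Longrightarrow\quad
\frac{m_{\text{propellant}}}{m_0} = 1 - e^{-\Delta v / v_e}.
% Example with assumed round numbers: v_e of about 4.4 km/s (hydrogen/oxygen) and
% a delta-v of about 9.5 km/s to reach low Earth orbit give
% 1 - e^{-9.5/4.4} \approx 0.88, i.e. propellant is most of the lift-off mass.
```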
Large upper-stage rocket engines generally use a cryogenic fuel such as liquid hydrogen, with liquid oxygen (LOX) as the oxidizer, because of the large specific impulse possible, but they must carefully consider a problem called "boil off", the evaporation of the cryogenic propellant. The boil off from only a few days of delay may not allow sufficient fuel for higher orbit injection, potentially resulting in a mission abort. Lunar or Mars missions will require weeks to months to accumulate tens of thousands to hundreds of thousands of kilograms of propellant, so additional equipment may be required on the transfer stage or the depot to mitigate boiloff.
Non-cryogenic, earth-storable liquid rocket propellants including RP-1 (kerosene), hydrazine and nitrogen tetroxide (NTO), and mildly cryogenic, space-storable propellants like liquid methane and liquid oxygen, can be kept in liquid form with less boiloff than the cryogenic fuels, but also have lower specific impulse. Additionally, gaseous or supercritical propellants such as those used by ion thrusters include xenon, argon, and bismuth.
Propellant launch costs
Ex-NASA administrator Mike Griffin commented at the 52nd AAS Annual Meeting in Houston, Texas, November 2005, that "at a conservatively low government price of $10,000 per kg in LEO, 250 MT of fuel for two missions per year is worth $2.5 billion, at government rates."
Cryogenic depot architectures and types
In the depot-centric architecture, the depot is filled by tankers, and then the propellant is transferred to an upper stage prior to orbit insertion, similar to a gas station filled by tankers for automobiles. By using a depot, the launch vehicle size can be reduced and the flight rate increased. Since the accumulation of propellant may take many weeks to months, careful consideration must be given to boiloff mitigation.
In simple terms, a passive cryogenic depot is a transfer stage with stretched propellant tanks, additional insulation, and a sun shield. In one concept, hydrogen boiloff is also redirected to reduce or eliminate liquid oxygen boiloff and then used for attitude control, power, or reboost. An active cryogenic depot is a passive depot with additional power and refrigeration equipment/cryocoolers to reduce or eliminate propellant boiloff. Other active cryogenic depot concepts include electrically powered attitude control equipment to conserve fuel for the end payload.
Heavy lift versus depot-centric architectures
In the heavy lift architecture, propellant, which can be two-thirds or more of the total mission mass, is accumulated in fewer launches and possibly shorter time frame than the depot centric architecture. Typically the transfer stage is filled directly and no depot is included in the architecture. For cryogenic vehicles and cryogenic depots, additional boiloff mitigation equipment is typically included on the transfer stage, reducing payload fraction and requiring more propellant for the same payload unless the mitigation hardware is expended.
Heavy Lift is compared with using Commercial Launch and Propellant Depots in this power point by Dr. Alan Wilhite given at FISO Telecon.
Feasibility of propellant depots
Both theoretical studies and funded development projects that are currently underway aim to provide insight into the feasibility of propellant depots. Studies have shown that a depot-centric architecture with smaller launch vehicles could be less expensive than a heavy-lift architecture over a 20-year time frame. The cost of large launch vehicles is so high that a depot able to hold the propellant lifted by two or more medium-sized launch vehicles may be cost effective and support more payload mass on beyond-Earth orbit trajectories.
In a 2010 NASA study, an additional flight of an Ares V heavy launch vehicle was required to stage a US government Mars reference mission due to 70 tons of boiloff, assuming 0.1% boiloff/day for hydrolox propellant. The study identified the need to decrease the design boiloff rate by an order of magnitude or more.
Approaches to the design of low Earth orbit (LEO) propellant depots were also discussed in the 2009 Augustine report to NASA, which "examined the [then] current concepts for in-space refueling." The report determined there are essentially two approaches to refueling a spacecraft in LEO:
Propellant tanker delivery. In this approach, a single tanker performs a rendezvous and docking with an on-orbit spacecraft. The tanker then transfers propellant and departs. This approach is "much like an airborne tanker refuels an aircraft."
In-space depot. An alternative approach is for many tankers to rendezvous and transfer propellant to an orbital depot. Then, at a later time, a spacecraft may dock with the depot and receive a propellant load before departing Earth orbit.
Both approaches were considered feasible with 2009 spaceflight technology, but anticipated that significant further engineering development and in-space demonstration would be required before missions could depend on the technology. Both approaches were seen to offer the potential of long-term life-cycle savings.
In 2010 United Launch Alliance (ULA) proposed their Advanced Cryogenic Evolved Stage (ACES) tanker, a concept that dates to work by Boeing in 2006, sized to transport up to of propellant—in early design, a first flight was proposed for no earlier than 2023, with initial usage as a propellant tanker potentially beginning in the mid-2020s. ACES was not funded, but some of the ideas were used in the Centaur stage of the Vulcan Centaur rocket.
Beyond theoretical studies, since at least 2017, SpaceX has undertaken funded development of an interplanetary set of technologies. While the interplanetary mission architecture consists of a combination of several elements that are considered by SpaceX to be key to making long-duration beyond Earth orbit (BEO) spaceflights possible by reducing the cost per ton delivered to Mars by multiple orders of magnitude over what NASA approaches have achieved, refilling of propellants in orbit is one of the four key elements. In a novel mission architecture, the SpaceX design intends to enable the long-journey spacecraft to expend almost all of its propellant load during the launch to low Earth orbit while it serves as the second stage of the SpaceX Starship, and then after refilling on orbit by multiple Starship tankers, provide the large amount of energy required to put the spacecraft onto an interplanetary trajectory. The Starship tanker is designed to transport approximately of propellant to low Earth orbit. In April 2021, NASA selected the SpaceX Lunar Starship with in-orbit refueling for their initial lunar human landing system.
Advantages
Because a large portion of a rocket is propellant at time of launch, proponents point out several advantages of using a propellant depot architecture. Spacecraft could be launched unfueled and thus require less structural mass, or the depot tanker itself could serve as the second-stage on launch when it is reusable. An on-orbit market for refueling may be created where competition to deliver propellant for the lowest price takes place, and it may also enable an economy of scale by permitting existing rockets to fly more often to refuel the depot. If used in conjunction with a mining facility on the moon, water or propellant could be exported back to the depot, further reducing the cost of propellant. An exploration program based on a depot architecture could be less expensive and more capable, not needing a specific rocket or a heavy lift such as the SLS to support multiple destinations such as the Moon, Lagrange points, asteroids, and Mars.
NASA studies in 2011 showed lower cost and faster alternatives than the Heavy Lift Launch System and listed the following advantages:
Tens of billions of dollars of cost savings to fit the budget profile
Allows first NEA/Lunar mission by 2024 using conservative budgets
Launch every few months rather than once every 12–18 months
Allows multiple competitors for propellant delivery
Reduced critical path mission complexity (AR&Ds, events, number of unique elements)
History and plans
USA
Propellant depots were proposed as part of the Space Transportation System (along with nuclear "tugs" to take payloads from LEO to other destinations) in the mid-1960s.
In October 2009, the U.S. Air Force and United Launch Alliance (ULA) performed an experimental on-orbit demonstration on a modified Centaur upper stage on the DMSP-18 launch to improve "understanding of propellant settling and slosh, pressure control, RL10 chilldown and RL10 two-phase shutdown operations." "The light weight of DMSP-18 allowed of remaining liquid O2 and liquid H2 propellant, 28% of Centaur's capacity," for the on-orbit demonstrations. The post-spacecraft mission extension ran 2.4 hours before executing the deorbit burn.
NASA's Launch Services Program is working on ongoing slosh fluid dynamics experiments with partners, called CRYOTE. ULA is also planning additional in-space laboratory experiments to further develop cryogenic fluid management technologies using the Centaur upper stage after primary payload separation. Named CRYOTE, or CRYogenic Orbital TEstbed, it will be a testbed for demonstrating a number of technologies needed for cryogenic propellant depots, with several small-scale demonstrations planned for 2012–2014. ULA said this mission could launch as soon as 2012 if funded. The ULA CRYOTE small-scale demonstrations are intended to lead to a ULA large-scale cryo-sat flagship technology demonstration in 2015.
The Future In-Space Operations (FISO) Working Group, a consortium of participants from NASA, industry and academia, discussed propellant depot concepts and plans on several occasions in 2010, with presentations of optimal depot locations for human space exploration beyond low Earth orbit, a proposed simpler (single vehicle) first-generation propellant depot and six important propellant-depot-related technologies for reusable cislunar transportation.
NASA also has plans to mature techniques for enabling and enhancing space flights that use propellant depots in the "CRYOGENIC Propellant STorage And Transfer (CRYOSTAT) Mission". The CRYOSTAT vehicle was expected to be launched to LEO in 2015.
The CRYOSTAT architecture comprises technologies in the following categories:
Storage of Cryogenic Propellants
Cryogenic Fluid Transfer
Instrumentation
Automated Rendezvous and Docking (AR&D)
Cryogenic Based Propulsion
The "Simple Depot" mission was proposed by NASA in 2011 as a potential first PTSD mission, with launch no earlier than 2015, on an Atlas V 551. Simple Depot would use the "used" (nearly-emptied) Centaur upper stage LH2 tank for long-term storage of LO2 while LH2 would be stored in the Simple Depot LH2 module, which would be launched with only ambient-temperature gaseous Helium in it. The SD LH2 tank was to be diameter and long, in volume, and store 5 mT of LH2. "At a useful mixture ratio (MR) of 6:1 this quantity of LH2 can be paired with 25.7 mT of LO2, allowing for 0.7 mT of LH2 to be used for vapor cooling, for a total useful propellant mass of 30 mT. ... the described depot would have a boil-off rate approaching 0.1 percent per day, consisting entirely of hydrogen."
In September 2010, ULA released a Depot-Based Space Transportation Architecture concept to propose propellant depots that could be used as way-stations for other spacecraft to stop and refuel—either in low Earth orbit (LEO) for beyond-LEO missions, or at Lagrangian point for interplanetary missions—at the AIAA Space 2010 conference. The concept proposes that waste gaseous hydrogen—an inevitable byproduct of long-term liquid hydrogen storage in the radiative heat environment of space—would be usable as a monopropellant in a solar-thermal propulsion system. The waste hydrogen would be productively used for both orbital stationkeeping and attitude control, as well as providing limited propellant and thrust to use for orbital maneuvers to better rendezvous with other spacecraft that would be inbound to receive fuel from the depot. As part of the Depot-Based Space Transportation Architecture, ULA has proposed the Advanced Common Evolved Stage (ACES) upper stage rocket. ACES hardware is designed from the start as an in-space propellant depot that could be used as way-stations for other rockets to stop and refuel on the way to beyond-LEO or interplanetary missions, and to provide the high-energy technical capacity for the cleanup of space debris.
In August 2011, NASA made a significant contractual commitment to the development of propellant depot technology by funding four aerospace companies to "define demonstration missions that would validate the concept of storing cryogenic propellants in space to reduce the need for large launch vehicles for deep-space exploration." These study contracts for storing/transferring cryogenic propellants and cryogenic depots were signed with Analytical Mechanics Associates, Boeing, Lockheed Martin and Ball Aerospace. Each company was to receive under the contract.
In April 2021, NASA selected the SpaceX Lunar Starship with in-orbit refuelling for their initial lunar human landing system. In 2022, a larger propellant-depot Starship was being planned for Lunar Starship HLS.
Rest of world
The Chinese Space Agency (CNSA) performed its first satellite-to-satellite on-orbit refueling test in June 2016.
Engineering design issues
There are a number of design issues with propellant depots, as well as several tasks that have not, to date, been tested in space for on-orbit servicing missions. The design issues include propellant settling and transfer, propellant usage for attitude control and reboost, the maturity of the refrigeration equipment/cryocoolers, and the power and mass required for reduced or zero boiloff depots with refrigeration.
Propellant settling
Transfer of liquid propellants in microgravity is complicated by the uncertain distribution of liquid and gasses within a tank. Propellant settling at an in-space depot is thus more challenging than in even a slight gravity field. ULA plans to use the DMSP-18 mission to flight-test centrifugal propellant settling as a cryogenic fuel management technique that might be used in future propellant depots. The proposed Simple Depot PTSD mission would use several techniques to achieve adequate settling for propellant transfer.
Propellant transfer
In the absence of gravity, propellant transfer is somewhat more difficult, since liquids can float away from the inlet.
As part of the Orbital Express mission in 2007, hydrazine propellant was successfully transferred between two single-purpose designed technology demonstration spacecraft. The Boeing servicing spacecraft ASTRO transferred propellant to the Ball Aerospace serviceable client spacecraft NEXTSat. Since no crew were present on either spacecraft, this was reported as the first autonomous spacecraft-to-spacecraft fluid transfer.
Refilling
After propellant has been transferred to a customer, the depot's tanks will need refilling. Organizing the construction and launch of the tanker rockets bearing the new fuel is the responsibility of the propellant depot's operator. Since space agencies like NASA hope to be purchasers rather than owners, possible operators include the aerospace company that constructed the depot, manufacturers of the rockets, a specialist space depot company, or an oil/chemical company that refines the propellant. By using several tanker rockets the tankers can be smaller than the depot and larger than the spacecraft they are intended to resupply. Short range chemical propulsion tugs belonging to the depot may be used to simplify docking tanker rockets and large vehicles like Mars Transfer Vehicles.
Transfers of propellant between a LEO depot, reachable by rockets from Earth, and the possible deep space ones such as at the Lagrange Points and Phobos depots could be performed using Solar electric propulsion (SEP) tugs.
Two missions are currently under development or proposed to support propellant depot refilling.
In addition to refueling and servicing geostationary communications satellites with the fuel that is initially launched with the MDA Space Infrastructure Servicing vehicle, the SIS vehicle is being designed to have the ability to orbitally maneuver to rendezvous with a replacement fuel canister after transferring the of fuel in the launch load, enabling further refueling of additional satellites after the initial multi-satellite servicing mission is complete.
The proposed Simple Depot cryogenic PTSD (Propellant Transfer and Storage Demonstration) mission would use "remote berthing arm and docking and fluid transfer ports" both for propellant transfer to other vehicles and for refilling the depot up to its full 30 tonne propellant capacity. It was proposed in 2010, for launch in 2015.
In 1962, S.T. Demetriades proposed a method for refilling by collecting atmospheric gases. Moving in low Earth orbit, at an altitude of around 120 km, Demetriades' proposed depot extracts air from the fringes of the atmosphere, compresses and cools it, and extracts liquid oxygen. The remaining nitrogen is used as propellant for a nuclear-powered magnetohydrodynamic engine, which maintains the orbit, compensating for atmospheric drag. This system was called "PROFAC" (PROpulsive Fluid ACcumulator). There are, however, safety concerns with placing a nuclear reactor in low Earth orbit.
Demetriades' proposal was further refined by Christopher Jones and others. In this proposal, multiple collection vehicles accumulate propellant gases at around 120 km altitude, later transferring them to a higher orbit. However, Jones' proposal does require a network of orbital power-beaming satellites to avoid placing nuclear reactors in orbit.
Asteroids can also be processed to provide liquid oxygen.
Orbital planes and launch windows
Propellant depots in LEO are of little use for transfers between two low Earth orbits when the depot is in a different orbital plane than the target orbit. The delta-v to make the necessary plane change is typically extremely high. On the other hand, depots are typically proposed for exploration missions, where the change over time of the depot's orbit can be chosen to align with the departure vector. This allows one well-aligned departure opportunity that minimizes fuel use but requires a very precisely timed departure. Less efficient departure times from the same depot to the same destination exist before and after the well-aligned opportunity, but more research is required to show whether the efficiency falls off quickly or slowly. By contrast, a single direct launch from the ground, without orbital refueling or docking with another craft already on orbit, offers daily launch opportunities, though it requires larger and more expensive launchers.
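To give a sense of scale for the plane-change penalty mentioned above, the sketch below evaluates the standard impulsive plane-change formula, delta-v = 2·v·sin(delta-i/2); the orbital speed and inclination-change values are illustrative assumptions, not figures from any particular depot study.

import math

def plane_change_delta_v(orbital_speed_m_s: float, delta_inclination_deg: float) -> float:
    """Impulsive delta-v for a pure plane change of delta_inclination degrees,
    applied at a point where the vehicle moves at orbital_speed_m_s."""
    return 2.0 * orbital_speed_m_s * math.sin(math.radians(delta_inclination_deg) / 2.0)

v_leo = 7700.0  # m/s, assumed circular low-Earth-orbit speed
for di in (5, 15, 30):
    print(f"{di:>2} deg plane change: {plane_change_delta_v(v_leo, di):7.0f} m/s")

Even a 30-degree plane change at LEO speed costs roughly 4 km/s under these assumptions, comparable to a trans-lunar injection, which is why a depot in the wrong plane is of little help.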
The restrictions on departure windows arise because low Earth orbits are susceptible to significant perturbations; even over short periods they are subject to nodal regression and, less importantly, precession of perigee. Equatorial depots are more stable but also more difficult to reach.
New approaches have been discovered for LEO to interplanetary orbital transfers where a three-burn orbital transfer is used, which includes a plane change at apogee in a highly-elliptical phasing orbit, in which the incremental delta-v is small—typically less than five percent of the total delta-v—"enabling departures to deep-space destinations [taking] advantage of a depot in LEO" and providing frequent departure opportunities. More specifically, the 3-burn departure strategy has been shown to enable a single LEO depot in an ISS-inclination orbit (51 degrees) to dispatch nine spacecraft to "nine different interplanetary targets [where the depot need not] perform any phasing maneuvers to align with any of the departure asymptotes ... [including enabling] extending the economic benefits of dedicated smallsat launch to interplanetary missions."
Specific issues of cryogenic depots
Boil-off mitigation
Boil-off of cryogenic propellants in space may be mitigated by both technological solutions and system-level planning and design. From a technical perspective, for a propellant depot with a passive insulation system to effectively store cryogenic fluids, boil-off caused by heating from solar and other sources must be mitigated, eliminated, or used for economic purposes. For non-cryogenic propellants, boil-off is not a significant design problem.
Boil-off rate is governed by heat leakage and by the quantity of propellant in the tanks. With partially filled tanks, the percentage loss is higher. Heat leakage depends on surface area, while the original mass of propellant in the tanks depends on volume. So by the cube-square law, the smaller the tank, the faster the liquids will boil off. Some propellant tank designs have achieved a liquid hydrogen boil off rate as low as approximately 0.13% per day (3.8% per month) while the much higher temperature cryogenic fluid of liquid oxygen would boil off much less, about 0.016% per day (0.49% per month).
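As a rough illustration of how these daily boil-off percentages compound over a mission, the following sketch applies a constant fractional loss per day; the loss rates are the ones quoted above, while the 60-day duration and initial propellant loads are arbitrary assumptions.

def propellant_remaining(initial_kg: float, daily_loss_fraction: float, days: int) -> float:
    """Mass left after `days` of constant fractional boil-off per day."""
    return initial_kg * (1.0 - daily_loss_fraction) ** days

initial_lh2_kg = 25000.0   # assumed initial liquid hydrogen load
initial_lox_kg = 125000.0  # assumed initial liquid oxygen load
days = 60                  # assumed loiter time in orbit

lh2_left = propellant_remaining(initial_lh2_kg, 0.0013, days)   # 0.13 %/day from the text
lox_left = propellant_remaining(initial_lox_kg, 0.00016, days)  # 0.016 %/day from the text
print(f"LH2 remaining after {days} days: {lh2_left:,.0f} kg ({100 * lh2_left / initial_lh2_kg:.1f} %)")
print(f"LOX remaining after {days} days: {lox_left:,.0f} kg ({100 * lox_left / initial_lox_kg:.1f} %)")

With these example numbers, hydrogen loses several percent of its mass over two months while oxygen loses well under one percent, which is why depot designs concentrate on hydrogen storage.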
It is possible to achieve zero boil-off (ZBO) with cryogenic propellant storage using an active thermal control system. Tests conducted at the NASA Lewis Research Center's Supplemental Multilayer Insulation Research Facility (SMIRF) over the summer of 1998 demonstrated that a hybrid thermal control system could eliminate boiloff of cryogenic propellants. The hardware consisted of a pressurized tank insulated with 34 layers of insulation, a condenser, and a Gifford-McMahon (GM) cryocooler that has a cooling capacity of 15 to 17.5 watts (W). Liquid hydrogen was the test fluid. The test tank was installed into a vacuum chamber, simulating space vacuum.
In 2001, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop ZBO concepts for in-space cryogenic storage. The main program element was a large-scale ZBO demonstration using the MSFC multipurpose hydrogen test bed (MHTB), an 18.10 m3 liquid hydrogen tank (about 1300 kg of liquid hydrogen). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.
Another NASA study in June 2003 for a conceptual Mars mission showed that zero boil-off gives mass savings over traditional, passive-only cryogenic storage when mission durations in LEO exceed 5 days for oxygen, 8.5 days for methane and 64 days for hydrogen. Longer missions equate to greater mass savings. Cryogenic xenon saves mass over passive storage almost immediately. When power to run the ZBO system is already available, the break-even mission durations are even shorter, e.g. about a month for hydrogen. The larger the tank, the fewer the days in LEO before ZBO reduces mass.
In addition to technical solutions to the challenge of excessive boil-off of cryogenic rocket propellants, system-level solutions have been proposed. From a systems perspective, reducing the standby time of liquid hydrogen in cryogenic storage, so as to achieve, in effect, just-in-time delivery to each customer, matched with refinery technology that splits the long-term storable feedstock (water) into the stoichiometric LOX/LH2 required, is theoretically capable of providing a system-level solution to boil-off. Such proposals have been suggested as supplements to good technological techniques for reducing boil-off, but they would not replace the need for efficient technological storage solutions.
Sun shields
United Launch Alliance (ULA) has proposed a cryogenic depot that would use a conical sun shield to protect the cold propellants from solar and Earth radiation. The open end of the cone allows residual heat to radiate to the cold of deep space, while the closed cone layers attenuate the radiative heat from the Sun and Earth.
Other issues
Other issues include hydrogen embrittlement, a process by which some metals (including iron and titanium) become brittle and fracture following exposure to hydrogen. The resulting leaks make storing cryogenic propellants in zero gravity conditions difficult.
In-space refueling demonstration projects
In the early 2010s, several in-space refueling projects got underway. Two private initiatives and a government sponsored test mission were in some level of development or testing .
Robotic Refueling Mission
The NASA Robotic Refueling Mission (RRM) was launched in 2011 and successfully completed a series of robotically actuated propellant transfer experiments on the exposed facility platform of the International Space Station in January 2013.
The set of experiments included a number of propellant valves, nozzles and seals similar to those used on many satellites and a series of four prototype tools that could be attached to the distal end of a Space Station robotic arm. Each tool was a prototype of "devices that could be used by future satellite servicing missions to refuel spacecraft in orbit. RRM is the first in-space refueling demonstration using a platform and fuel valve representative of most existing satellites, which were never designed for refueling. Other satellite servicing demos, such as the U.S. military's Orbital Express mission in 2007, transferred propellant between satellites with specially-built pumps and connections."
MDA in-space refueling demonstration project
, a small-scale refueling demonstration project for reaction control system (RCS) fluids was under development. Canada-based MDA Corporation announced in early 2010 that they were designing a single spacecraft that would refuel other spacecraft in orbit as a satellite-servicing demonstration. "The business model, which is still evolving, could ask customers to pay per kilogram of fuel successfully added to their satellite, with the per-kilogram price being a function of the additional revenue the operator can expect to generate from the spacecraft's extended operational life."
The plan is that the fuel-depot vehicle would maneuver to an operational communications satellite, dock at the target satellite's apogee-kick motor, remove a small part of the target spacecraft's thermal protection blanket, connect to a fuel-pressure line and deliver the propellant. "MDA officials estimate the docking maneuver would take the communications satellite out of service for about 20 minutes."
, MDA had secured a major customer for the initial demonstration project. Intelsat agreed to purchase one-half of the of propellant payload that the MDA spacecraft would carry into geostationary orbit. Such a purchase would add somewhere between two and four years of additional service life for up to five Intelsat satellites, assuming 200 kg of fuel is delivered to each one. , the spacecraft could be ready to begin refueling communication satellites by 2015. , no customers had signed up for an MDA refueling mission.
In 2017, MDA announced that it was restarting its satellite servicing business, with Luxembourg-based satellite owner/operator SES S.A. as its first customer.
Space tug alternatives to direct refueling
Competitive design alternatives to in-space RCS fuel transfer exist. It is possible to bring additional propellant to a space asset, and use the propellant for attitude control or orbital velocity change, without ever transferring the propellant to the target space asset.
The ViviSat Mission Extension Vehicle, also under development since the early 2010s, illustrates one alternative approach that would connect to the target satellite similarly to MDA SIS, via the kick motor, but would not transfer fuel. Rather, the Mission Extension Vehicle would use "its own thrusters to supply attitude control for the target." ViviSat believes its approach is simpler and can operate at lower cost than the MDA propellant transfer approach, while having the technical ability to dock with and service a greater number (90 percent) of the approximately 450 geostationary satellites in orbit. , no customers had signed up for a ViviSat-enabled mission extension.
In 2015, Lockheed Martin proposed the Jupiter space tug. If built, Jupiter would operate in low Earth orbit shuttling cargo carriers to and from the International Space Station, remaining on orbit indefinitely, and refueling itself from subsequent transport ships carrying later cargo carrier modules.
New Space Involvement
In December 2018, Orbit Fab, a Silicon Valley startup company founded in early 2018, flew the first of a series of experiments to the ISS to test and demonstrate technologies for commercial in-space refueling. These first rounds of testing used water as a propellant simulant. In June 2021, Orbit Fab flew the first propellant depot, Tanker-001 Tenzing, carrying hydrogen peroxide in a Sun-synchronous orbit.
Gallery
See also
Progress (spacecraft)
Automated Transfer Vehicle
Liquid rocket propellants
Asteroid mining
Propulsive Fluid Accumulator, satellite that gathers oxygen and other gasses to supply the depot
Flexible path option of the Review of United States Human Space Flight Plans Committee
In-situ resource utilization
Shackleton Energy Company
Aquarius Launch Vehicle
Quicklaunch
References
External links
Text
A Backgrounder for On-Orbit Satellite Servicing, March 2011
Presentation of Boeing's proposed LEO Propellant Depot, 2007
Evolved Human Space Exploration Architecture Using Commercial Launch/Propellant Depots, Wilhite/Arney/Jones/Chai, October 2012
Distributed Launch – Enabling Beyond LEO Missions , United Launch Alliance, September 2015
Video
Animation of a Boeing depot launch and refuel operation, November 2011 (1 min)
NASA Cryogenic Propellant Depot – Mission Animation, May 2013 (1 min)
Advantages of a depot architecture, Jeff Greason of XCOR Aerospace, Augustine Commission meeting, July 2009 (25 min)
A Settlement Strategy for NASA, Jeff Greason of XCOR Aerospace, ISDC 2011 (42 min)
Cislunar Space, The Next Frontier, Dr. Paul Spudis of the Lunar and Planetary Institute, ISDC 2011 (25 min)
Plan to mine water on the moon using depots, Bill Stone of the Shackleton Energy Company, TED 2011 (7 min)
Spaceflight concepts
Rocket propellants
Private spaceflight
Space applications
Fuels infrastructure
Oxygen
Industrial gases
Industry in space | Orbital propellant depot | [
"Chemistry",
"Astronomy"
] | 6,671 | [
"Industry in space",
"Outer space",
"Space applications",
"Industrial gases",
"Chemical process engineering"
] |
22,203,669 | https://en.wikipedia.org/wiki/Schur%27s%20lemma%20%28Riemannian%20geometry%29 | In Riemannian geometry, Schur's lemma is a result that says, heuristically, whenever certain curvatures are pointwise constant then they are forced to be globally constant. The proof is essentially a one-step calculation, which has only one input: the second Bianchi identity.
The Schur lemma for the Ricci tensor
Suppose is a smooth Riemannian manifold with dimension Recall that this defines for each element of :
the sectional curvature, which assigns to every 2-dimensional linear subspace of a real number
the Riemann curvature tensor, which is a multilinear map
the Ricci curvature, which is a symmetric bilinear map
the scalar curvature, which is a real number
The Schur lemma states the following:
The Schur lemma is a simple consequence of the "twice-contracted second Bianchi identity," which states that
understood as an equality of smooth 1-forms on Substituting in the given condition one finds that
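For reference, in one common notation (the symbols below are chosen for illustration and are not necessarily those used in the statement above), the identity and the conclusion of the lemma can be written as follows.

% twice-contracted second Bianchi identity on a Riemannian manifold (M^n, g):
\operatorname{div}\operatorname{Ric} \;=\; \tfrac{1}{2}\, dR .
% If the Ricci curvature is pointwise proportional to the metric,
% \operatorname{Ric} = f\,g for some smooth function f (so that f = R/n),
% then taking the divergence of both sides gives
\tfrac{1}{2}\, dR \;=\; \operatorname{div}(f\,g) \;=\; df \;=\; \tfrac{1}{n}\, dR ,
% hence dR = 0 whenever n \ge 3: the scalar curvature R (and with it f)
% is constant on each connected component of M.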
Alternative formulations of the assumptions
Let be a symmetric bilinear form on an -dimensional inner product space Then
Additionally, note that if for some number then one automatically has With these observations in mind, one can restate the Schur lemma in the following form:
Note that the dimensional restriction is important, since every two-dimensional Riemannian manifold which does not have constant curvature would be a counterexample.
The Schur lemma for the Riemann tensor
The following is an immediate corollary of the Schur lemma for the Ricci tensor.
The Schur lemma for Codazzi tensors
Let be a smooth Riemannian or pseudo-Riemannian manifold of dimension Let be a smooth symmetric (0,2)-tensor field whose covariant derivative, with respect to the Levi-Civita connection, is completely symmetric. The symmetry condition is an analogue of the Bianchi identity; continuing the analogy, one takes a trace to find that
If there is a function on such that for all in then upon substitution one finds
Hence implies that is constant on each connected component of As above, one can then state the Schur lemma in this context:
Applications
The Schur lemmas are frequently employed to prove roundness of geometric objects. A noteworthy example is to characterize the limits of convergent geometric flows.
For example, a key part of Richard Hamilton's 1982 breakthrough on the Ricci flow was his "pinching estimate" which, informally stated, says that for a Riemannian metric which appears in a 3-manifold Ricci flow with positive Ricci curvature, the eigenvalues of the Ricci tensor are close to one another relative to the size of their sum. If one normalizes the sum, then, the eigenvalues are close to one another in an absolute sense. In this sense, each of the metrics appearing in a 3-manifold Ricci flow of positive Ricci curvature "approximately" satisfies the conditions of the Schur lemma. The Schur lemma itself is not explicitly applied, but its proof is effectively carried out through Hamilton's calculations.
In the same way, the Schur lemma for the Riemann tensor is employed to study convergence of Ricci flow in higher dimensions. This goes back to Gerhard Huisken's extension of Hamilton's work to higher dimensions, where the main part of the work is that the Weyl tensor and the semi-traceless Riemann tensor become zero in the long-time limit. This extends to the more general Ricci flow convergence theorems, some expositions of which directly use the Schur lemma. This includes the proof of the differentiable sphere theorem.
The Schur lemma for Codazzi tensors is employed directly in Huisken's foundational paper on convergence of mean curvature flow, which was modeled on Hamilton's work. In the final two sentences of Huisken's paper, it is concluded that one has a smooth embedding with
where is the second fundamental form and is the mean curvature. The Schur lemma implies that the mean curvature is constant, and the image of this embedding then must be a standard round sphere.
Another application relates full isotropy and curvature. Suppose that is a connected thrice-differentiable Riemannian manifold, and that for each the group of isometries acts transitively on This means that for all and all there is an isometry such that and This implies that also acts transitively on that is, for every there is an isometry such that and Since isometries preserve sectional curvature, this implies that is constant for each The Schur lemma implies that has constant curvature. A particularly notable application of this is that any spacetime which models the cosmological principle must be the warped product of an interval and a constant-curvature Riemannian manifold. See O'Neill (1983, page 341).
Stability
Recent research has investigated the case that the conditions of the Schur lemma are only approximately satisfied.
Consider the Schur lemma in the form "If the traceless Ricci tensor is zero then the scalar curvature is constant." Camillo De Lellis and Peter Topping have shown that if the traceless Ricci tensor is approximately zero then the scalar curvature is approximately constant. Precisely:
Suppose is a closed Riemannian manifold with nonnegative Ricci curvature and dimension Then, where denotes the average value of the scalar curvature, one has
Next, consider the Schur lemma in the special form "If is a connected embedded surface in whose traceless second fundamental form is zero, then its mean curvature is constant." Camillo De Lellis and Stefan Müller have shown that if the traceless second fundamental form of a compact surface is approximately zero then the mean curvature is approximately constant. Precisely
there is a number such that, for any smooth compact connected embedded surface one has where is the second fundamental form, is the induced metric, and is the mean curvature
As an application, one can conclude that itself is 'close' to a round sphere.
References
Shoshichi Kobayashi and Katsumi Nomizu. Foundations of differential geometry. Vol. I. Interscience Publishers, a division of John Wiley & Sons, New York-London 1963 xi+329 pp.
Barrett O'Neill. Semi-Riemannian geometry. With applications to relativity. Pure and Applied Mathematics, 103. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York, 1983. xiii+468 pp.
Riemannian geometry
Riemannian manifolds
Theorems in Riemannian geometry
Lemmas | Schur's lemma (Riemannian geometry) | [
"Mathematics"
] | 1,358 | [
"Mathematical theorems",
"Space (mathematics)",
"Riemannian manifolds",
"Metric spaces",
"Mathematical problems",
"Lemmas"
] |
4,711,412 | https://en.wikipedia.org/wiki/Hemocyte%20%28invertebrate%20immune%20system%20cell%29 | A hemocyte is a cell that plays a role in the immune system of invertebrates. It is found within the hemolymph.
Hemocytes are phagocytes of invertebrates.
Hemocytes in Drosophila melanogaster can be divided into two categories: embryonic and larval. Embryonic hemocytes are derived from head mesoderm and enter the hemolymph as circulating cells. Larval hemocytes, on the other hand, are responsible for tissue remodeling during development. Specifically, they are released during the pupa stage in order to prepare the fly for the transition into an adult and the massive associated tissue reorganization that must occur.
There are four basic types of hemocytes found in fruit flies: secretory, plasmatocytes, crystal cells, and lamellocytes. Secretory cells are never released into the hemolymph and instead send out signalling molecules responsible for cell differentiation. Plasmatocytes are the hemocytes responsible for cell ingestion (phagocytosis) and represent about 95% of circulating hemocytes. Crystal cells are only found in the larval stage of Drosophila, and they are involved in melanization, a process by which microbes/pathogens are engulfed in a hardened gel and destroyed via anti-microbial peptides and other proteins involved in the humoral response. They constitute about 5% of circulating hemocytes. Lamellocytes are flat cells that are never found in adults and are only present in larvae, where they encapsulate invading pathogens. They specifically act on parasitic wasp eggs that bind to the surfaces of cells and cannot be phagocytosed by host cells.
In mosquitoes, hemocytes are functionally divided into three populations: granulocytes, oenocytoids and prohemocytes. Granulocytes are the most abundant cell type. They rapidly attach to foreign surfaces and readily engage in phagocytosis. Oenocytoids do not readily spread on foreign surfaces and are the major producers of phenoloxidase, which is the major enzyme of the melanization immune pathway. Prohemocytes are small cells of unknown function, which may result from the asymmetric mitosis of granulocytes.
References
External links
Immunology
Blood cells | Hemocyte (invertebrate immune system cell) | [
"Biology"
] | 474 | [
"Immunology"
] |
4,715,518 | https://en.wikipedia.org/wiki/Isothermal%20titration%20calorimetry | In chemical thermodynamics, isothermal titration calorimetry (ITC) is a physical technique used to determine the thermodynamic parameters of interactions in solution. It is most often used to study the binding of small molecules (such as medicinal compounds) to larger macromolecules (proteins, DNA etc.) in a label-free environment. It consists of two cells which are enclosed in an adiabatic jacket. The compounds to be studied are placed in the sample cell, while the other cell, the reference cell, is used as a control and contains the buffer in which the sample is dissolved.
The technique was developed by H. D. Johnston in 1968 as a part of his Ph.D. dissertation at Brigham Young University, and was considered niche until introduced commercially by MicroCal Inc. in 1988. Compared to other calorimeters, ITC has the advantage of not requiring any corrections, since there is no heat exchange between the system and the environment.
Thermodynamic measurements
ITC is a quantitative technique that can determine the binding affinity (), reaction enthalpy (), and binding stoichiometry () of the interaction between two or more molecules in solution. This is achieved by measuring the enthalpies of a series of binding reactions caused by injections of a solution of one molecule into a reaction cell containing a solution of another molecule. The enthalpy values are plotted against the molar ratios resulting from the injections. From the plot, the molar reaction enthalpy , the affinity constant () and the stoichiometry are determined by curve fitting. The reaction's Gibbs free energy change () and entropy change () can be determined using the relationship:
(where is the gas constant and is the absolute temperature).
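As a worked illustration of this relationship, the short sketch below converts an association constant and a measured enthalpy into the Gibbs energy and entropy changes; the numerical inputs are invented example values, not data from any real titration.

import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # absolute temperature, K
K_a = 1.0e6  # example association constant, 1/M (relative to a 1 M standard state)
dH = -50.0e3 # example measured binding enthalpy, J/mol

dG = -R * T * math.log(K_a)  # Gibbs free energy change, J/mol
dS = (dH - dG) / T           # entropy change from dG = dH - T*dS, J/(mol*K)

print(f"dG = {dG / 1000:6.1f} kJ/mol")
print(f"dS = {dS:6.1f} J/(mol*K)")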
For accurate measurements of binding affinity, the curve of the thermogram must be sigmoidal. The profile of the curve is determined by the c-value, which is calculated using the equation:
where is the stoichiometry of the binding, is the association constant and is the concentration of the molecule in the cell. The c-value must fall between 1 and 1000, ideally between 10 and 100. In terms of binding affinity, it would be approximately from ~ within the limit range.
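The c-value criterion can be checked, or used to pick a workable cell concentration, with a few lines of arithmetic; the stoichiometry, association constant and concentrations below are illustrative assumptions.

def c_value(n: float, K_a_per_M: float, cell_conc_M: float) -> float:
    """c parameter: c = n * K_a * [M]_cell."""
    return n * K_a_per_M * cell_conc_M

n = 1.0            # assumed binding stoichiometry
K_a = 1.0e6        # assumed association constant, 1/M
cell_conc = 20e-6  # assumed macromolecule concentration in the cell, 20 uM

c = c_value(n, K_a, cell_conc)
print(f"c = {c:.0f} -> usable (1-1000)? {1 <= c <= 1000}, ideal (10-100)? {10 <= c <= 100}")

# Conversely, the cell concentration needed to hit a target c-value:
target_c = 50.0
print(f"[M]_cell for c = {target_c:.0f}: {target_c / (n * K_a) * 1e6:.1f} uM")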
Instrumental measurements
An isothermal titration calorimeter is composed of two identical cells made of a highly efficient thermally conducting and chemically inert material such as Hastelloy alloy or gold, surrounded by an adiabatic jacket. Sensitive thermopile/thermocouple circuits are used to detect temperature differences between the reference cell (filled with buffer or water) and the sample cell containing the macromolecule. Prior to addition of ligand, a constant power (<1 mW) is applied to the reference cell. This directs a feedback circuit, activating a heater located on the sample cell. During the experiment, ligand is titrated into the sample cell in precisely known aliquots, causing heat to be either taken up or evolved (depending on the nature of the reaction). Measurements consist of the time-dependent input of power required to maintain equal temperatures between the sample and reference cells.
In an exothermic reaction, the temperature in the sample cell increases upon addition of ligand. This causes the feedback power to the sample cell to be decreased (remember: a reference power is applied to the reference cell) in order to maintain an equal temperature between the two cells. In an endothermic reaction, the opposite occurs; the feedback circuit increases the power in order to maintain a constant temperature (isothermal operation).
Observations are plotted as the power needed to maintain the reference and the sample cell at an identical temperature against time. As a result, the experimental raw data consists of a series of spikes of heat flow (power), with every spike corresponding to one ligand injection. These heat flow spikes/pulses are integrated with respect to time, giving the total heat exchanged per injection. The pattern of these heat effects as a function of the molar ratio [ligand]/[macromolecule] can then be analyzed to give the thermodynamic parameters of the interaction under study.
To obtain an optimum result, each injection should be given enough time for a reaction equilibrium to reach. Degassing samples is often necessary in order to obtain good measurements as the presence of gas bubbles within the sample cell will lead to abnormal data plots in the recorded results. The entire experiment takes place under computer control.
Direct titration, in which the two components of the reaction are bound directly to each other, is the most common ITC experiment for obtaining thermodynamic data. However, many chemical reactions and binding interactions have binding affinities above what is desirable for the c-window. To work around the limits of the c-window and the conditions required for certain binding interactions, different titration methods can be performed. In some cases, simply performing a reverse titration, swapping the samples between the injection syringe and the sample cell, can solve the issue, depending on the binding mechanism. Most very high or low affinity bindings require chelation or competitive titration. This method is done by loading a pre-bound complex solution into the sample cell and chelating one of the components out with a reagent of higher observed binding affinity that falls within the desirable c-window.
Analysis and interpretation
Post-hoc analysis and proton inventory
The collected experimental data reflects not only the binding thermodynamics of the interaction of interest, but also any contributing competing equilibria associated with it. A post-hoc analysis can be performed to determine the buffer- or solvent-independent enthalpy from the experimental thermodynamics, simply by going through the process of Hess's law. The example below shows a simple interaction between a metal ion (M) and a ligand (L). B represents the buffer used for this interaction and H+ represents protons.
M - B <=> M + B   (-ΔH_MB)
L - H <=> L + H+
H+ + B <=> H - B
M + L <=> M - L
Therefore, summing these steps according to Hess's law gives the observed enthalpy, which can be further processed to calculate the enthalpy of the metal-ligand interaction. Although this example is between a metal and a ligand, it is applicable to any ITC experiment involving binding interactions.
As a part of the analysis, the number of protons exchanged is required to calculate the solvent-independent thermodynamics. This can be determined by constructing a linear plot, as described below.
The linear equation of this plot is the rearranged version of the equation above from the post-hoc analysis in a form of y = mx + b:
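One way to picture this analysis: if the observed enthalpy is measured in several buffers of known ionization enthalpy, a straight-line fit of the observed enthalpy against the buffer ionization enthalpy gives the number of protons exchanged as the slope and a buffer-independent enthalpy as the intercept. The sketch below is a hypothetical illustration of such a fit; the buffer labels, ionization enthalpies and observed values are invented.

import numpy as np

# Hypothetical data: observed binding enthalpy (kJ/mol) measured in three buffers
# with different (known) ionization enthalpies (kJ/mol).
dH_ionization = np.array([47.5, 33.6, 11.3])  # e.g. Tris-like, imidazole-like, phosphate-like
dH_observed   = np.array([-12.0, -23.1, -40.9])

slope, intercept = np.polyfit(dH_ionization, dH_observed, 1)

print(f"protons released/taken up (slope): {slope:+.2f}")
print(f"buffer-independent enthalpy (intercept): {intercept:+.1f} kJ/mol")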
Equilibrium constant
The equilibrium constant of the reaction is also not independent of the other competing equilibria. Competition would include buffer interactions and other pH-dependent reactions, depending on the experimental conditions. The competition from species other than the species of interest is included in the competition factor Q in the following equation: where represents species such as a buffer or protons, represents their equilibrium constant, when,
Applications
For the past 30 years, isothermal titration calorimetry has been used in a wide array of fields. Initially, the technique was used to determine fundamental thermodynamic values for basic small-molecule interactions. In recent years, ITC has been used in more industrially applicable areas, such as drug discovery and the testing of synthetic materials. Although it is still heavily used in fundamental chemistry, the trend has shifted toward biology, where label-free and buffer-independent values are relatively harder to achieve.
Enzyme kinetics
Using the thermodynamic data from ITC, it is possible to deduce enzyme kinetics, including proton or electron transfer, allostery and cooperativity, and enzyme inhibition. ITC collects data over time, which is useful for any kinetic experiment, and it is especially suited to proteins because of the constant aliquots of the injections. In terms of calculation, the equilibrium constant and the slopes of the binding curve can be used directly to determine allostery and charge transfer by comparing experimental data obtained under different conditions (pH, use of mutated peptide chains and binding sites, etc.).
Membrane and self-assembling peptide studies
Membrane proteins and the self-assembly properties of certain proteins can be studied with this technique because it is label-free. Membrane proteins are known to be difficult to work with when selecting proper solubilization and purification protocols. As ITC is a non-destructive calorimetric tool, it can be used as a detector to locate the protein fraction with the desired binding sites, by binding a known ligand to the protein. This feature also applies to studies of self-assembling proteins, especially for measuring the thermodynamics of their structural transformations.
Drug development
Binding affinity is of great importance in medicinal chemistry, as drugs need to bind their target proteins effectively and within a desired range. However, determining enthalpy changes and optimizing thermodynamic parameters are very difficult when designing drugs. ITC addresses this issue by determining the binding affinity, the enthalpic and entropic contributions, and the binding stoichiometry.
Chiral chemistry
Applying the ideas above, the chirality of organometallic compounds can also be deduced with this technique. Each chiral compound has its own properties and binding mechanisms, which leads to differences in thermodynamic properties that can be compared. Titrating chiral solutions against a binding site can thus identify the chirality present and, depending on the purpose, which chiral compound is more suitable for binding.
Metal binding interactions
Binding metal ions to proteins and other components of biological material is one of the most popular uses of ITC, dating back to the ovotransferrin-ferric iron binding study published by Lin et al. of MicroCal Inc. This is partly because some of the metal ions used in biological systems have a d10 electron configuration, which cannot be studied with other common techniques such as UV-vis spectrophotometry or electron paramagnetic resonance. The topic is also closely related to biochemical and medicinal studies because of the large number of metal-binding enzymes in biological systems.
Carbon nanotubes and related materials
The technique has been used to study carbon nanotubes, for example to determine thermodynamic binding interactions with biological molecules and interactions in graphene composites. Another notable use of ITC with carbon nanotubes is optimizing the preparation of composites of carbon nanotubes or graphene with polyvinyl alcohol (PVA). The PVA assembly process can be followed thermodynamically, as mixing the two ingredients is an exothermic reaction, and its binding trend can be readily observed by ITC.
See also
Differential scanning calorimetry
Dual polarisation interferometry
Sorption calorimetry
Pressure perturbation calorimetry
Surface plasmon resonance
References
Scientific techniques
Biochemistry methods
Biophysics
Chemical thermodynamics
Calorimetry | Isothermal titration calorimetry | [
"Physics",
"Chemistry",
"Biology"
] | 2,273 | [
"Biochemistry methods",
"Biochemistry",
"Applied and interdisciplinary physics",
"Biophysics",
"Chemical thermodynamics"
] |
4,716,700 | https://en.wikipedia.org/wiki/3C%2075 | 3C 75 (also called 3C75) is a binary black hole system in the dumbbell-shaped galaxy NGC 1128 in the galaxy cluster Abell 400. It has four relativistic jets, two coming from each accreting supermassive black hole. It is travelling at 1200 kilometers per second, causing the jets to be swept back. 3C 75 may be X-ray source 2A 0252+060 (1H 0253+058, XRS 02522+060).
References
External links
What is known about 3C 75
Binary Black Hole in 3C 75. Astronomy Picture of the Day. 2010 March 14
NRAO press release
Visible image of 3C75 binary
Simbad MCG+B01-08-027
Supermassive black holes
075
Abell 400
Radio galaxies
Cetus | 3C 75 | [
"Physics",
"Astronomy"
] | 173 | [
"Black holes",
"Galaxy stubs",
"Unsolved problems in physics",
"Supermassive black holes",
"Astronomy stubs",
"Constellations",
"Cetus"
] |
4,718,833 | https://en.wikipedia.org/wiki/Eukaryotic%20chromosome%20structure | Eukaryotic chromosome structure refers to the levels of packaging from raw DNA molecules to the chromosomal structures seen during metaphase in mitosis or meiosis. Chromosomes contain long strands of DNA containing genetic information. Compared to prokaryotic chromosomes, eukaryotic chromosomes are much larger in size and are linear chromosomes. Eukaryotic chromosomes are also stored in the cell nucleus, while chromosomes of prokaryotic cells are not stored in a nucleus. Eukaryotic chromosomes require a higher level of packaging to condense the DNA molecules into the cell nucleus because of the larger amount of DNA. This level of packaging includes the wrapping of DNA around proteins called histones in order to form condensed nucleosomes.
History
The double helix was discovered in 1953 by James Watson and Francis Crick. Other researchers had made very important, but unconnected, findings about the composition of DNA; ultimately it was Watson and Crick who put all of these findings together to come up with a model for DNA. The chemist Alexander Todd determined that the backbone of a DNA molecule contained repeating phosphate and deoxyribose sugar groups. The biochemist Erwin Chargaff found that adenine always paired with thymine while cytosine always paired with guanine. High resolution X-ray images of DNA obtained by Maurice Wilkins and Rosalind Franklin suggested a helical, or corkscrew-like, shape. Some of the first scientists to recognize the structures now known as chromosomes were Schleiden, Virchow, and Bütschli. The term chromosome was coined by Heinrich Wilhelm Gottfried von Waldeyer-Hartz, referring to the term chromatin, which was introduced by Walther Flemming. Scientists also discovered that plant and animal cells have a central compartment called the nucleus. They soon realized chromosomes were found inside the nucleus and contained different information for many different traits.
Structure
In eukaryotes, such as humans, roughly 3.2 billion nucleotides are spread out over 23 different chromosomes (males have both an X chromosome and a Y chromosome instead of the pair of X chromosomes seen in females). Each chromosome consists of an enormously long linear DNA molecule associated with proteins that fold and pack the fine thread of DNA into a more compact structure.
Many people think of a chromosome as having an "X" shape, but this shape is only present when the cell divides. Researchers have now been able to model the structure of chromosomes when they are active. This is important because the way that DNA folds up in chromosome structures is linked to the way DNA is used. Scientists have been able to determine the 3D structures of chromosomes in a single cell, using hundreds of measurements of where different parts of the DNA come close to one another to build the model. This research was done by scientists at the Department of Biochemistry at Cambridge, working with others from the Babraham Institute and the Weizmann Institute.
Nucleosomes
The nucleosome is the basic unit of DNA condensation and consists of a DNA double helix bound to an octamer of core histones (two dimers of H2A and H2B, and an H3/H4 tetramer). About 147 base pairs of DNA coil around one octamer, a further ~20 base pairs are sequestered by the addition of the linker histone (H1), and various lengths of "linker" DNA (~0-100 bp) separate the nucleosomes. The spacing of nucleosomes along DNA results in a "beads on a string" appearance. Histone modification controls the accessibility of DNA. Histone acetyltransferases, or HATs, acetylate residues on the histone tail, leading to increased accessibility of the DNA.
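As a back-of-the-envelope consequence of these numbers, one can estimate how many nucleosomes are needed to package a genome; the sketch below uses the ~3.2 billion base pairs and ~147 bp core figures quoted in this article, with the linker length picked arbitrarily from the 0-100 bp range.

genome_bp = 3.2e9  # base pairs across the 23 chromosomes (haploid set)
core_bp = 147      # base pairs wrapped around one histone octamer
linker_bp = 50     # assumed average linker length (anywhere in the ~0-100 bp range)

repeat_bp = core_bp + linker_bp
nucleosomes = genome_bp / repeat_bp
wrapped_fraction = core_bp / repeat_bp

print(f"approximate nucleosomes per haploid genome: {nucleosomes:.2e}")
print(f"fraction of DNA wrapped in core particles:  {wrapped_fraction:.0%}")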
Packaging
Packaging of DNA is facilitated by the electrostatic charge distribution: phosphate groups give DNA a negative charge, whilst the histones are positively charged. Most eukaryotic cells contain histones (with a few exceptions), as do members of the kingdom Archaea. Histones H3 and H4 in particular are nearly identical in structure among all eukaryotes, suggesting strict evolutionary conservation of both structure and function. Histones are positively charged molecules, as they contain lysine and arginine in large quantities, while DNA is negatively charged. This allows histones to form a strong ionic bond with DNA, creating a nucleosome. The most basic level of DNA condensation is the wrapping of DNA around the histone core proteins. Higher-order packaging is accomplished by specialized proteins that bind and fold the DNA. This generates a series of loops and coils that provide increasingly higher levels of organization and prevent the DNA from becoming tangled and unmanageable. This complex of DNA and proteins is called chromatin. In addition to proteins involved with packaging, chromosomes are associated with proteins involved in DNA replication, DNA repair, and gene expression.
References
Molecular genetics
DNA | Eukaryotic chromosome structure | [
"Chemistry",
"Biology"
] | 1,009 | [
"Molecular genetics",
"Molecular biology"
] |
4,719,126 | https://en.wikipedia.org/wiki/Micropatterning | Micropatterning is the art of miniaturisation of patterns. Especially used for electronics, it has recently become a standard in biomaterials engineering and for fundamental research on cellular biology by mean of soft lithography. It generally uses photolithography methods but many techniques have been developed.
In cellular biology, micropatterns can be used to control the geometry of adhesion and substrate rigidity. This tool helped scientists to discover how the environment influences processes such as the orientation of the cell division axis, organelle positioning, cytoskeleton rearrangement cell differentiation and directionality of cell migration.
Micropatterns can be made on a wide range of substrates, from glass to polyacrylamide and polydimethylsiloxane (PDMS). Polyacrylamide and PDMS are particularly useful because they let scientists precisely regulate the stiffness of the substrate, and they allow researchers to measure cellular forces (traction force microscopy). Advanced custom micropatterning allows precise and relatively rapid experiments controlling cell adhesion, cell migration, guidance, 3D confinement and the microfabrication of microstructured chips. Using advanced tools, protein patterns can be produced in virtually unlimited numbers (2D/3D shapes and volumes).
Nanopatterning of proteins has been achieved through using top-down lithography techniques.
Aerosol micropatterning uses the microscopic characteristics of sprays to obtain semi-random patterns particularly well adapted for biomaterials.
References
External links
Team of Matthieu Piel working a lot with micropatterns and inventing new techniques
Website of Manuel Théry with numerous papers on micropatterning
Linked companies
Alvéole Lab
4Dcell
Cytoo
Innopsys
Forcyte Biotechnologies
Lithography (microfabrication)
Microtechnology | Micropatterning | [
"Materials_science",
"Engineering"
] | 381 | [
"Nanotechnology",
"Materials science",
"Microtechnology",
"Lithography (microfabrication)"
] |
2,554,508 | https://en.wikipedia.org/wiki/Tinplate | Tinplate consists of sheets of steel coated with a thin layer of tin to impede rusting. Before the advent of cheap mild steel, the backing metal (known as "") was wrought iron. While once more widely used, the primary use of tinplate now is the manufacture of tin cans.
In the tinning process, tinplate is made by rolling the steel (or formerly iron) in a rolling mill, removing any mill scale by pickling it in acid and then coating it with a thin layer of tin. Plates were once produced individually (or in small groups) in what became known as a pack mill. In the late 1920s pack mills began to be replaced by strip mills which produced larger quantities more economically.
Formerly, tinplate was used for tin ceiling, and holloware (cheap pots and pans), also known as tinware. The people who made tinware (metal spinning) were tinplate workers.
For many purposes, tinplate has been replaced by galvanised metal, the base being treated with a zinc coating. It is suitable in many applications where tinplate was formerly used, although not for cooking vessels or in other high temperature situations: when heated, fumes from zinc oxide are given off, and exposure to such gases can produce toxicity syndromes such as metal fume fever. The zinc layer prevents the iron from rusting through sacrificial protection, with the zinc oxidizing instead of the iron, whereas tin will only protect the iron if the tin surface remains unbroken.
History of production processes and markets
The practice of tin mining likely began circa 3000 B.C. in Western Asia, British Isles and Europe. Tin was an essential ingredient of bronze production during the Bronze Age.
The practice of tinning ironware to protect it against rust is an ancient one. This may have been the work of the whitesmith. This was done after the article was fabricated, whereas tinplate was tinned before fabrication. Tinplate was apparently produced in the 1620s at a mill of (or under the patronage of) the Earl of Southampton, but it is not clear how long this continued.
The first production of tinplate was probably in Bohemia, from where the trade spread to Saxony, and was well-established there by the 1660s. Andrew Yarranton and Ambrose Crowley (a Stourbridge blacksmith and father of the more famous Sir Ambrose) visited Dresden in 1667 and learned how it was made. In doing so, they were sponsored by various local ironmasters and people connected with the project to make the river Stour navigable. In Saxony, the plates were forged, but when they conducted experiments on their return to England, they tried rolling the iron. This led to the ironmasters Philip Foley and Joshua Newborough (two of the sponsors) in 1670 erecting a new mill, Wolverley Lower Mill (or forge) in Worcestershire. This contained three shops, one being a slitting mill (which would serve as a rolling mill), and the others were forges. In 1678 one of these was making frying pans and the other drawing out blooms made in finery forges elsewhere. It is likely that the intention was to roll the plates and then finish them under a hammer, but the plan was frustrated by William Chamberlaine renewing a patent granted to him and Dud Dudley in 1662.
The slitter at Wolverley was Thomas Cooke. Another Thomas Cooke, perhaps his son, moved to Pontypool and worked there for John Hanbury. He had a slitting mill there and was also producing iron plates called 'Pontpoole plates'. Edward Lhuyd reported the existence of this mill in 1697. This has been claimed as a tinplate works, but it was almost certainly only producing (untinned) .
Tinplate first begins to appear in the Gloucester Port Books (which record trade passing through Gloucester), mostly from ports in the Bristol Channel in 1725. The tinplate was shipped from Newport, Monmouthshire. This immediately follows the first appearance (in French) of Reamur's Principes de l'art de fer-blanc, and prior to a report of it being published in England.
Further mills followed a few years later, initially in many iron-making regions in England and Wales, but later mainly in south Wales, most notably the Melingriffith Tin Plate Works, Whitchurch, Cardiff, which was founded some time before 1750. In 1805, 80,000 boxes were made and 50,000 exported. The industry continued to grow until 1891. One of the greatest markets was the United States, but that market was cut off in 1891 when the McKinley tariff was enacted. This caused a great retrenchment in the British industry and the emigration to America of many of those who were no longer employed in the surviving tinplate works.
Despite this blow, the industry continued, but on a smaller scale. There were 518 mills in operation in 1937, including 224 belonging to Richard Thomas & Co. The traditional 'pack mill' had been overtaken by the improved 'strip mill', of which the first in Great Britain was built by Richard Thomas & Co. in the late 1930s. Strip mills rendered the old pack mills obsolete and the last of them closed circa the 1960s.
Pack mill process
The raw material was bar iron, or (from the introduction of mild steel in the late 19th century), a bar of steel. This was drawn into a flat bar (known as a tin bar) at the ironworks or steel works where it was made. The cross-section of the bar needed to be accurate in size as this would be the cross-section of the pack of plates made from it. The bar was cut to the correct length (being the width of the plates) and heated. It was then passed four or five times through the rolls of the rolling mill, to produce a thick plate about 30 inches long. Between each pass the plate is passed over (or round) the rolls, and the gap between the rolls is narrowed by means of a screw.
This was then rolled until it had doubled in length. The plate was then folded in half ('doubled') using a doubling shear, which was like a table where one half of the surface folds over on top of the other. It is then put into a furnace to be heated until it is well 'soaked'. This is repeated until there is a pack of 8 or 16 plates. The pack is then allowed to cool. When cool, the pack was sheared (using powered shears) and the plates separated by 'openers' (usually women). Defective plates were discarded, and the rest passed to the pickling department.
In the pickling department, the plates were immersed in baths of acid (to remove scale, i.e., oxide), then in water (washing them). After inspection they were placed in an annealing furnace, where they were heated for 10–14 hours. This was known as 'black pickling' and 'black annealing'. After being removed they were allowed to cool for up to 48 hours. The plates were then rolled cold through highly polished rolls to remove any unevenness and give them a polished surface. They were then annealed again at a lower temperature and pickled again, this being known as 'white annealing' and 'white pickling'. They were then washed and stored in slightly acid water (where they would not rust) awaiting tinning.
The tinning set consisted of two pots with molten tin (with flux on top) and a grease pot. The flux dries the plate and prepares it for the tin to adhere. The second tin pot (called the wash pot) had tin at a lower temperature. This is followed by the grease pot (containing an oil), removing the excess tin. Then follow cleaning and polishing processes. Finally, the tinplates were packed in boxes of 112 sheets ready for sale. Single plates were ; doubles twice that. A box weighed approximately a hundredweight (cwt; ).
Strip mill process
The strip mill was a major innovation, with the first being erected at Ashland, Kentucky in 1923. This provided a continuous process, eliminating the need to pass the plates over the rolls and to double them. At the end the strip was cut with a guillotine shear or rolled into a coil. Early hot rolling strip mills did not produce strip suitable for tinplate, but in 1929 cold rolling began to be used to reduce the gauge further. The first strip mill in Great Britain was opened at Ebbw Vale in 1938 with an annual output of 200,000 imperial tons.
The strip mill had several advantages over pack mills:
It was cheaper due to having all parts of the process, starting with blast furnaces, on the same site.
Softer steel could be used.
Larger sheets could be produced at lower cost; this reduced cost and enabled tinplate and steel sheet to be used for more purposes.
It was capital-intensive, rather than labour-intensive.
See also
Plating for other processes for plating metals.
Tinsmith
Tinware
Terne plate, a cheaper version, but not food-safe, using a mixture of lead and tin.
Notes
Citations
Sources
:
.
.
.
. Gloucestershire Heritage catalogue record
Further reading
Articles in the series "The Rise of the Tinplate Industry", parts I–IV, by F. W. Gibbs, from the Annals of Science (; Informa UK Limited):
Irwin, D. A. (1998). "Did late nineteenth century U.S. tariffs promote infant industries? Evidence from the tinplate industry". NBER working paper 6835.
Industrial processes
Thin film deposition
Metallurgy
Steelmaking
Metal plating
Coatings
Tin
Packaging | Tinplate | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,986 | [
"Thin film deposition",
"Metallurgical processes",
"Metallurgy",
"Coatings",
"Steelmaking",
"Thin films",
"Materials science",
"nan",
"Planes (geometry)",
"Solid state engineering",
"Metal plating"
] |
2,556,705 | https://en.wikipedia.org/wiki/Schur%27s%20inequality | In mathematics, Schur's inequality, named after Issai Schur,
establishes that for all non-negative real numbers
x, y, z, and t>0,
with equality if and only if x = y = z or two of them are equal and the other is zero. When t is an even positive integer, the inequality holds for all real numbers x, y and z.
When , the following well-known special case can be derived:
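For reference, the inequality is commonly stated as follows (the notation here is chosen for illustration), with the second line being the expanded form usually quoted for the t = 1 case.

% Schur's inequality for non-negative reals x, y, z and t > 0:
x^{t}(x-y)(x-z) + y^{t}(y-z)(y-x) + z^{t}(z-x)(z-y) \;\ge\; 0 .
% Expanding the case t = 1 gives the well-known special case
x^{3} + y^{3} + z^{3} + 3xyz \;\ge\; xy(x+y) + yz(y+z) + zx(z+x) .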
Proof
Since the inequality is symmetric in we may assume without loss of generality that . Then the inequality
clearly holds, since every term on the left-hand side of the inequality is non-negative. This rearranges to Schur's inequality.
Extensions
A generalization of Schur's inequality is the following:
Suppose a,b,c are positive real numbers. If the triples (a,b,c) and (x,y,z) are similarly sorted, then the following inequality holds:
In 2007, Romanian mathematician Valentin Vornicu showed that a yet further generalized form of Schur's inequality holds:
Consider , where , and either or . Let , and let be either convex or monotonic. Then,
The standard form of Schur's is the case of this inequality where x = a, y = b, z = c, k = 1, ƒ(m) = mr.
Another possible extension states that if the non-negative real numbers with and the positive real number t are such that x + v ≥ y + z then
Notes
Inequalities
Articles containing proofs
Issai Schur | Schur's inequality | [
"Mathematics"
] | 327 | [
"Mathematical theorems",
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Articles containing proofs",
"Mathematical problems"
] |
2,557,627 | https://en.wikipedia.org/wiki/Curved%20space | Curved space often refers to a spatial geometry which is not "flat", where a flat space has zero curvature, as described by Euclidean geometry. Curved spaces can generally be described by Riemannian geometry, though some simple cases can be described in other ways. Curved spaces play an essential role in general relativity, where gravity is often visualized as curved spacetime. The Friedmann–Lemaître–Robertson–Walker metric is a curved metric which forms the current foundation for the description of the expansion of the universe and the shape of the universe. The fact that photons have no mass yet are distorted by gravity, means that the explanation would have to be something besides photonic mass. Hence, the belief that large bodies curve space and so light, traveling on the curved space will, appear as being subject to gravity. It is not, but it is subject to the curvature of space.
Simple two-dimensional example
A very familiar example of a curved space is the surface of a sphere. While to our familiar outlook the sphere looks three-dimensional, if an object is constrained to lie on the surface, it only has two dimensions that it can move in. The surface of a sphere can be completely described by two dimensions, since no matter how rough the surface may appear to be, it is still only a surface, which is the two-dimensional outside border of a volume. Even the surface of the Earth, which is fractal in complexity, is still only a two-dimensional boundary along the outside of a volume.
Embedding
One of the defining characteristics of a curved space is its departure from the Pythagorean theorem. In a curved space
.
The Pythagorean relationship can often be restored by describing the space with an extra dimension. Suppose we have a three-dimensional non-Euclidean space with coordinates $(x_1, x_2, x_3)$. Because it is not flat
$$ds^2 \ne dx_1^2 + dx_2^2 + dx_3^2.$$
But if we now describe the three-dimensional space with four dimensions ($x_1, x_2, x_3, x_4$) we can choose coordinates such that
$$ds^2 = dx_1^2 + dx_2^2 + dx_3^2 + dx_4^2.$$
Note that the coordinate $x_1$ of the four-dimensional description is not the same as the coordinate $x_1$ of the original three-dimensional space.
For the choice of the 4D coordinates to be valid descriptors of the original 3D space they must have the same number of degrees of freedom. Since four coordinates have four degrees of freedom, a constraint must be placed on them. We can choose a constraint such that the Pythagorean theorem holds in the new 4D space. That is
$$x_1^2 + x_2^2 + x_3^2 + x_4^2 = \text{constant}.$$
The constant can be positive or negative. For convenience we can choose the constant to be
$$k R^2,$$
where now $R$ is positive and $k = \pm 1$.
We can now use this constraint to eliminate the artificial fourth coordinate $x_4$. The differential of the constraining equation is
$$x_1\,dx_1 + x_2\,dx_2 + x_3\,dx_3 + x_4\,dx_4 = 0,$$
leading to $dx_4 = -\dfrac{x_1\,dx_1 + x_2\,dx_2 + x_3\,dx_3}{x_4}$.
Plugging into the original equation gives
$$ds^2 = dx_1^2 + dx_2^2 + dx_3^2 + \frac{\left(x_1\,dx_1 + x_2\,dx_2 + x_3\,dx_3\right)^2}{k R^2 - x_1^2 - x_2^2 - x_3^2}.$$
This form is usually not particularly appealing and so a coordinate transform is often applied: $x_1 = r\sin\theta\cos\phi$, $x_2 = r\sin\theta\sin\phi$, $x_3 = r\cos\theta$. With this coordinate transformation
$$ds^2 = \frac{dr^2}{1 - k\frac{r^2}{R^2}} + r^2\,d\theta^2 + r^2\sin^2\theta\,d\phi^2.$$
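The reduction above can be checked symbolically. The following is a minimal sketch in Python using SymPy (not part of the original article); it fixes $k = +1$, since the constraint only closes for $k = \pm 1$, and the symbol names simply mirror the derivation above.

```python
import sympy as sp

# Intrinsic coordinates of the curved 3D space and the embedding radius
r, th, ph, R = sp.symbols('r theta phi R', positive=True)
k = sp.Integer(1)   # k = +1 (spherical case); k = -1 gives the hyperbolic case

# Embedding in a flat 4D space; x4 comes from the constraint
# x1^2 + x2^2 + x3^2 + x4^2 = k*R^2
x1 = r*sp.sin(th)*sp.cos(ph)
x2 = r*sp.sin(th)*sp.sin(ph)
x3 = r*sp.cos(th)
x4 = sp.sqrt(k*R**2 - r**2)

dr, dth, dph = sp.symbols('dr dtheta dphi')
coords, diffs = (r, th, ph), (dr, dth, dph)

# Pull the flat line element ds^2 = dx1^2 + ... + dx4^2 back to (r, theta, phi)
ds2 = sum((sum(sp.diff(x, q)*d for q, d in zip(coords, diffs)))**2
          for x in (x1, x2, x3, x4))

expected = dr**2/(1 - k*r**2/R**2) + r**2*dth**2 + r**2*sp.sin(th)**2*dph**2
print(sp.simplify(sp.trigsimp(sp.expand(ds2 - expected))))   # prints 0 if they agree
```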
Without embedding
The geometry of an n-dimensional space can also be described with Riemannian geometry. An isotropic and homogeneous space can be described by the metric:
$$ds^2 = f(r)\,dr^2 + r^2\,d\theta^2 + r^2\sin^2\theta\,d\phi^2.$$
This reduces to Euclidean space when $f(r) = 1$. But a space can be said to be "flat" when the Weyl tensor has all zero components. In three dimensions this condition is met when the Ricci tensor ($R_{ij}$) is equal to the metric times the Ricci scalar ($R$, not to be confused with the R of the previous section). That is $R_{ij} = \tfrac{R}{3}\,g_{ij}$. Calculation of these components from the metric gives that
$$f(r) = \frac{1}{1 - K r^2},$$
where $K = \tfrac{R}{6}$.
This gives the metric:
$$ds^2 = \frac{dr^2}{1 - K r^2} + r^2\,d\theta^2 + r^2\sin^2\theta\,d\phi^2,$$
where $K$ can be zero, positive, or negative and is not limited to ±1.
Open, flat, closed
An isotropic and homogeneous space can be described by the metric:
$$ds^2 = \frac{dr^2}{1 - k\frac{r^2}{R^2}} + r^2\,d\theta^2 + r^2\sin^2\theta\,d\phi^2.$$
In the limit that the constant of curvature ($R$) becomes infinitely large, a flat, Euclidean space is returned. It is essentially the same as setting $k$ to zero. If $k$ is not zero the space is not Euclidean. When $k$ is positive the space is said to be closed or elliptic. When $k$ is negative the space is said to be open or hyperbolic.
Triangles which lie on the surface of an open space will have a sum of angles which is less than 180°. Triangles which lie on the surface of a closed space will have a sum of angles which is greater than 180°. The volume enclosed by a sphere of radius $r$, however, is not $\tfrac{4}{3}\pi r^3$.
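As a small numerical illustration of the triangle statement (added here; the sphere is just the simplest closed example), the angle sum of a geodesic triangle on a sphere of radius R exceeds 180° by its area divided by R², so a triangle covering one octant of the sphere has three right angles:

```python
import math

# Girard's theorem: angle sum = pi + (area / R^2) for a geodesic triangle on a sphere.
R = 1.0
octant_area = 4.0 * math.pi * R**2 / 8.0        # one octant of the sphere's surface
spherical_excess = octant_area / R**2            # excess over pi, in radians
print(math.degrees(math.pi + spherical_excess))  # 270.0 degrees: three right angles
```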
See also
CAT(k) space
Non-positive curvature
References
Further reading
The Feynman Lectures on Physics Vol. II Ch. 42: Curved Space
External links
Curved Spaces, simulator for multi-connected universes developed by Jeffrey Weeks
Riemannian geometry
Physical cosmology
Differential geometry
General relativity | Curved space | [
"Physics",
"Astronomy"
] | 903 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"General relativity",
"Theory of relativity",
"Physical cosmology"
] |
2,557,692 | https://en.wikipedia.org/wiki/Hobbs%20meter | Hobbs meter is a generic trademark for devices used in aviation to measure the time that an aircraft is in use. The meters typically display hours and tenths of an hour, but there are several ways in which the meter may be activated:
It can measure the time that the electrical system is on. This maximizes the recorded time.
It can be activated by oil pressure running into a pressure switch, and therefore runs while the engine is running. Many rental aircraft use this method to remove the incentive to fly with the master electrical switch off.
It can be activated by another switch, either an airspeed sensing vane under a wing (as in the Cessna Caravan) or a pressure switch attached to the landing gear (as in many twin engine planes). In these cases, the meter only measures the time the aircraft is actually flying. Metrics such as Time In Service and Turbine Actual Runtime are kept to monitor overhaul cycles, and are usually used by commercial operators under Federal Aviation Regulations Parts 135, 121, or 125.
It can be activated when the engine alternators are online (as in the Cirrus SR series).
General aviation use
For general aviation, "Hobbs time" is usually recorded in the pilot's log book, and many fixed-base operators that rent airplanes charge an hourly rate based on Hobbs time. Tachometer time or "tach time" is recorded in the engine's log books and is used, for example, to determine when the oil should be changed and the time between overhauls. Tach time differs from Hobbs time in that it is linked to engine revolutions per minute (RPM). Tach time records the time at a specific RPM. It is most accurate at cruise RPM, and least accurate while taxiing or stationary with the engine running. At these times, the clock runs slower. Depending on the type of flight, tach time can be 10–20% less than Hobbs time. Many organizations, such as flying clubs, charge by tach time so as to differentiate themselves from fixed-base operators as 10–20% less time recorded makes it 10–20% cheaper to fly (if the hourly rate is the same). In the case where flying clubs use tach time, many will charge a "dry rate", requiring the renter to pay for fuel on top of the hourly tach time rate.
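A rough numerical sketch of the billing difference described above (the rate, the hours, and the 15% figure are illustrative assumptions, not data from any operator):

```python
# Compare Hobbs-time and tach-time billing at the same hourly rate,
# assuming tach time runs about 15% lower than Hobbs time (within the
# 10-20% range mentioned above). All numbers are illustrative only.
hourly_rate = 150.0                 # dollars per hour (assumed)
hobbs_hours = 2.0                   # time recorded by the Hobbs meter (assumed)
tach_hours = hobbs_hours * 0.85     # assumed 15% lower tach time

print(f"Hobbs billing: ${hourly_rate * hobbs_hours:.2f}")   # $300.00
print(f"Tach billing:  ${hourly_rate * tach_hours:.2f}")    # $255.00
```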
History
The Hobbs meter is named after John Weston Hobbs (1889–1968), who in 1938 founded the company named after him in Springfield, Illinois, which manufactured the first electrically wound clocks for vehicle use. World War II created the demand for aviation hour meters which led to the development of the original Hobbs meter. The company was eventually renamed Honeywell Hobbs after being acquired by Honeywell International, who in 2009 announced plans to move manufacturing to Mexico.
In 2022, Honeywell discontinued all of its hour meters, including the Hobbs meter line.
References
Timers
Aircraft instruments
Avionics | Hobbs meter | [
"Technology",
"Engineering"
] | 596 | [
"Avionics",
"Aircraft instruments",
"Measuring instruments"
] |
2,559,736 | https://en.wikipedia.org/wiki/Animal%20studies | Animal studies is a recently recognised field in which animals are studied in a variety of cross-disciplinary ways. Scholars who engage in animal studies may be formally trained in a number of diverse fields, including art history, anthropology, biology, film studies, geography, history, psychology, literary studies, museology, philosophy, communication, and sociology. They engage with questions about notions of "animality," "animalization," or "becoming animal," to understand human-made representations of and cultural ideas about "the animal" and what it is to be human by employing various theoretical perspectives. Using these perspectives, those who engage in animal studies seek to understand both human-animal relations now and in the past as defined by our knowledge of them. Because the field is still developing, scholars and others have some freedom to define their own criteria about what issues may structure the field.
History
Animal studies became popular in the 1970s as an interdisciplinary subject; it exists at the intersection of a number of different fields of study and has developed its own scholarly apparatus, such as journals and book series. Different fields began to turn to animals as an important topic at different times and for various reasons, and these separate disciplinary histories shape how scholars approach animal studies. Historically, the field of environmental history has encouraged attention to animals.
Ethics
Throughout Western history, humankind has put itself above the "nonhuman species." In part, animal studies developed out of the animal liberation movement and was grounded in ethical questions about co-existence with other species: whether it is moral to eat animals, to do scientific research on animals for human benefit, and so on. Take rats, for example, with a history of being used as “an experimental subject, feeder, and “pest.” However, fewer than 10% of research studies on animals result in new medical findings for human patients. This has led researchers to find new Non-animal Approach Methodologies (NAMs) that provide more accurate human reactions. Animal studies scholars who explore the field from an ethical perspective frequently cite Australian philosopher Peter Singer's 1975 work, Animal Liberation, as a founding document in animal studies. Singer's work followed Jeremy Bentham's by trying to expand utilitarian questions about pleasure and pain beyond humans to other sentient creatures. Overall, progress happens slowly, but the marginal voices help introduce new concepts and ethics that can eventually transform society's relationship with other species.
Some still believe that the primary purpose of animal interaction is solely for food. However, animal domestication created a new intimate bond between human and non-human, and changed the way that humans live their lives. Theorists interested in the role of animals in literature, culture, and Continental philosophy also consider the late work of Jacques Derrida a driving force behind the rise of interest in animal studies in the humanities. Derrida's final lecture series, The Animal That Therefore I Am, examined how interactions with animal life affect human attempts to define humanity and the self through language. Taking up Derrida's deconstruction and extending it to other cultural territory, Cary Wolfe published Animal Rites in 2003 and critiqued earlier animal rights philosophers such as Peter Singer and Thomas Regan. Wolfe's study points out an insidious humanism at play in their philosophies and others. Recently also the Italian philosopher Giorgio Agamben published a book on the question of the animal: The Open. Man and Animal.
Art
Animals also played an essential role in the art community. One of the earliest forms of art was on the walls of caves from the early man, where they usually drew what they hunted. The country of Namibia has a large collection of ancient rock art from the Stone Age. The skillfully engraved depiction of animal tracks provides important information about the animals of that time. Then, in the Middle Ages, animals would appear for more religious reasons. Later in the 15th century, artists began coinciding with animals as a serious subject when discoveries in foreign lands were brought back to England. During the Renaissance era, the influential artist Leonardo da Vinci took interest in animal studies. Leonardo da Vinci studied animal anatomy to create anatomically accurate drawings of various species. Years later, animal representation took the form of woodworking, lithography, and photographs. In the late 1800s, photographers became interested in capturing animal locomotion.
Research topics and methodologies
Researchers in animal studies examine the questions and issues that arise when traditional modes of humanistic and scientific inquiry begin to take animals seriously as subjects of thought and activity. Students of animal studies may examine how humanity is defined in relation to animals, or how representations of animals create understandings (and misunderstandings) of other species. In fact, animals often elicit fear in humans. A well-known animal phobia is ophidiophobia, the fear of snakes. People with animal phobias tend to negatively generalize animals, even species that are harmless.
In most movies, predatory animals such as sharks and wolves are usually the antagonists, but this only causes significant damage to their reputation and makes people fear what they think their true nature is. In order to do so, animal studies pays close attention to the ways that humans anthropomorphize animals, and asks how humans might avoid bias in observing other creatures. Anthropomorphized animals are frequently found in children's books and films. Researchers are analyzing the positive and negative effects of anthropomorphized animals on a child's view of the non-human species. In addition, Donna Haraway's book, Primate Visions, examines how dioramas created for the American Museum of Natural History showed family groupings that conformed to the traditional human nuclear family, which misrepresented the animals' observed behavior in the wild. Critical approaches in animal studies have also considered representations of non-human animals in popular culture, including species diversity in animated films. By highlighting these issues, animal studies strives to re-examine traditional ethical, political, and epistemological categories in the context of a renewed attention to and respect for animal life. The assumption that focusing on animals might clarify human knowledge is neatly expressed in Claude Lévi-Strauss's famous dictum that animals are "good to think."
See also
Intersectionality
Anthrozoology (human–animal studies)
Animality studies
Critical animal studies
Ecocriticism
Ecosophy
References
Bibliography
Bjorkdahl, Kristian, and Alex Parrish (2017) Rhetorical Animals: Boundaries of the Human in the Study of Persuasion. Lantham: Lexington Press. ISBN 9781498558457.
Boehrer, Bruce, editor, A Cultural History of Animals in the Renaissance, Berg, 2009, .
De Ornellas, Kevin (2014). The Horse in Early Modern English Culture, Fairleigh Dickinson University Press, .
Kalof, Linda (2017). The Oxford Handbook of Animal Studies. Oxford: Oxford University Press. .
External links
Animal Studies Journal
Animal Rights History
Animal Studies and Film: An interview with Matthew Brower, professor of graduate Art History at York University
Animal Studies Online Bibliography
Animals and the Law
Australian Animal Studies Group
Italian Animal Studies Review
Animal Studies at Michigan State University
Animal rights
Animal testing
Art criticism
Art history
Behavioural sciences
Social sciences | Animal studies | [
"Chemistry",
"Biology"
] | 1,474 | [
"Behavioural sciences",
"Animal testing",
"Behavior"
] |
1,843,447 | https://en.wikipedia.org/wiki/Crank%E2%80%93Nicolson%20method | In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and it is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the 1940s.
For diffusion equations (and many other equations), it can be shown that the Crank–Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of time step times the thermal diffusivity to the square of space step, $\Delta t\,\alpha/\Delta x^2$, is large (typically, larger than 1/2 per Von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution is necessary, the less accurate backward Euler method is often used, which is both stable and immune to oscillations.
Principle
The Crank–Nicolson method is based on the trapezoidal rule, giving second-order convergence in time. For linear equations, the trapezoidal rule is equivalent to the implicit midpoint method—the simplest example of a Gauss–Legendre implicit Runge–Kutta method—which also has the property of being a geometric integrator. For example, in one dimension, suppose the partial differential equation is
$$\frac{\partial u}{\partial t} = F\!\left(u,\, x,\, t,\, \frac{\partial u}{\partial x},\, \frac{\partial^2 u}{\partial x^2}\right).$$
Letting $u_i^n = u(i\,\Delta x,\, n\,\Delta t)$ and $F_i^n = F$ evaluated for $i$, $n$ and $u_i^n$, the equation for the Crank–Nicolson method is a combination of the forward Euler method at $n$ and the backward Euler method at $n+1$ (note, however, that the method itself is not simply the average of those two methods, as the backward Euler equation has an implicit dependence on the solution):
$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{1}{2}\left[F_i^{n+1}\!\left(u,\, x,\, t,\, \frac{\partial u}{\partial x},\, \frac{\partial^2 u}{\partial x^2}\right) + F_i^{n}\!\left(u,\, x,\, t,\, \frac{\partial u}{\partial x},\, \frac{\partial^2 u}{\partial x^2}\right)\right].$$
Note that this is an implicit method: to get the "next" value of $u$ in time, a system of algebraic equations must be solved. If the partial differential equation is nonlinear, the discretization will also be nonlinear, so that advancing in time will involve the solution of a system of nonlinear algebraic equations, though linearizations are possible. In many problems, especially linear diffusion, the algebraic problem is tridiagonal and may be efficiently solved with the tridiagonal matrix algorithm, which gives a fast $O(N)$ direct solution, as opposed to the usual $O(N^3)$ for a full matrix, in which $N$ indicates the matrix size.
Example: 1D diffusion
The Crank–Nicolson method is often applied to diffusion problems. As an example, for linear diffusion,
$$\frac{\partial u}{\partial t} = a\,\frac{\partial^2 u}{\partial x^2},$$
applying a finite difference spatial discretization for the right-hand side, the Crank–Nicolson discretization is then
$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{a}{2\,(\Delta x)^2}\left[\left(u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1}\right) + \left(u_{i+1}^{n} - 2u_i^{n} + u_{i-1}^{n}\right)\right]$$
or, letting $r = \dfrac{a\,\Delta t}{2\,(\Delta x)^2}$,
$$-r\,u_{i+1}^{n+1} + (1 + 2r)\,u_i^{n+1} - r\,u_{i-1}^{n+1} = r\,u_{i+1}^{n} + (1 - 2r)\,u_i^{n} + r\,u_{i-1}^{n}.$$
Given that the terms on the right-hand side of the equation are known, this is a tridiagonal problem, so that $u_i^{n+1}$ may be efficiently solved for by using the tridiagonal matrix algorithm, in favor of the much more costly matrix inversion.
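A minimal, self-contained sketch of this 1D scheme in Python (using NumPy and SciPy's banded tridiagonal solver; the zero Dirichlet boundaries, grid, and initial condition are illustrative assumptions, not part of the original text):

```python
import numpy as np
from scipy.linalg import solve_banded

def crank_nicolson_1d(u0, a, dx, dt, steps):
    """Advance u_t = a * u_xx with the Crank-Nicolson scheme,
    holding the boundary values u0[0] and u0[-1] fixed (Dirichlet)."""
    n = len(u0)
    r = a * dt / (2.0 * dx**2)

    # Left-hand tridiagonal matrix in the banded storage expected by solve_banded:
    # (1 + 2r) on the diagonal and -r on both off-diagonals for interior rows.
    ab = np.zeros((3, n))
    ab[0, 2:]  = -r               # super-diagonal
    ab[1, :]   = 1.0 + 2.0 * r    # diagonal
    ab[2, :-2] = -r               # sub-diagonal
    ab[1, 0] = ab[1, -1] = 1.0    # identity rows for the fixed boundaries

    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        rhs = u.copy()            # boundary rows keep their old values
        rhs[1:-1] = r*u[2:] + (1.0 - 2.0*r)*u[1:-1] + r*u[:-2]
        u = solve_banded((1, 1), ab, rhs)
    return u

# Example: a Gaussian spike diffusing on the unit interval
x = np.linspace(0.0, 1.0, 101)
u_final = crank_nicolson_1d(np.exp(-200.0*(x - 0.5)**2), a=1.0,
                            dx=x[1] - x[0], dt=1e-4, steps=100)
print(u_final.max())              # the peak spreads out and decays
```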
A quasilinear equation, such as (this is a minimalistic example and not general)
$$\frac{\partial u}{\partial t} = a(u)\,\frac{\partial^2 u}{\partial x^2},$$
would lead to a nonlinear system of algebraic equations, which could not be easily solved as above; however, it is possible in some cases to linearize the problem by using the old value for $a$, that is, $a(u_i^{n})$ instead of $a(u_i^{n+1})$. Other times, it may be possible to estimate $a(u_i^{n+1})$ using an explicit method and maintain stability.
Example: 1D diffusion with advection for steady flow, with multiple channel connections
This is a solution usually employed for many purposes when there is a contamination problem in streams or rivers under steady flow conditions, but information is given in one dimension only. Often the problem can be simplified into a 1-dimensional problem and still yield useful information.
Here we model the concentration of a solute contaminant in water. This problem is composed of three parts: the known diffusion equation ( chosen as constant), an advective component (which means that the system is evolving in space due to a velocity field), which we choose to be a constant , and a lateral interaction between longitudinal channels ():
where is the concentration of the contaminant, and subscripts and correspond to previous and next channel.
The Crank–Nicolson method (where represents position, and time) transforms each component of the PDE into the following:
Now we create the following constants to simplify the algebra:
and substitute (), (), (), (), (), (), , and into (). We then put the new time terms on the left () and the present time terms on the right () to get
To model the first channel, we realize that it can only be in contact with the following channel (), so the expression is simplified to
In the same way, to model the last channel, we realize that it can only be in contact with the previous channel (), so the expression is simplified to
To solve this linear system of equations, we must now see that boundary conditions must be given first to the beginning of the channels:
: initial condition for the channel at present time step,
: initial condition for the channel at next time step,
: initial condition for the previous channel to the one analyzed at present time step,
: initial condition for the next channel to the one analyzed at present time step.
For the last cell of the channels (), the most convenient condition becomes an adiabatic one, so
This condition is satisfied if and only if (regardless of a null value)
Let us solve this problem (in a matrix form) for the case of 3 channels and 5 nodes (including the initial boundary condition). We express this as a linear system problem:
where
Now we must realize that AA and BB should be arrays made of four different subarrays (remember that only three channels are considered for this example, but it covers the main part discussed above):
where the elements mentioned above correspond to the next arrays, and an additional 4×4 full of zeros. Please note that the sizes of AA and BB are 12×12:
The d vector here is used to hold the boundary conditions. In this example it is a 12×1 vector:
To find the concentration at any time, one must iterate the following equation:
Example: 2D diffusion
When extending into two dimensions on a uniform Cartesian grid, the derivation is similar and the results may lead to a system of band-diagonal equations rather than tridiagonal ones. The two-dimensional heat equation
$$\frac{\partial u}{\partial t} = a\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)$$
can be solved with the Crank–Nicolson discretization of
$$\frac{u_{i,j}^{n+1} - u_{i,j}^{n}}{\Delta t} = \frac{a}{2\,(\Delta x)^2}\Big[\big(u_{i+1,j}^{n+1} + u_{i-1,j}^{n+1} + u_{i,j+1}^{n+1} + u_{i,j-1}^{n+1} - 4u_{i,j}^{n+1}\big) + \big(u_{i+1,j}^{n} + u_{i-1,j}^{n} + u_{i,j+1}^{n} + u_{i,j-1}^{n} - 4u_{i,j}^{n}\big)\Big],$$
assuming that a square grid is used, so that $\Delta x = \Delta y$. This equation can be simplified somewhat by rearranging terms and using the CFL number
$$\mu = \frac{a\,\Delta t}{(\Delta x)^2}.$$
For the Crank–Nicolson numerical scheme, a low CFL number is not required for stability, however, it is required for numerical accuracy. We can now write the scheme as
$$(1 + 2\mu)\,u_{i,j}^{n+1} - \frac{\mu}{2}\big(u_{i+1,j}^{n+1} + u_{i-1,j}^{n+1} + u_{i,j+1}^{n+1} + u_{i,j-1}^{n+1}\big) = (1 - 2\mu)\,u_{i,j}^{n} + \frac{\mu}{2}\big(u_{i+1,j}^{n} + u_{i-1,j}^{n} + u_{i,j+1}^{n} + u_{i,j-1}^{n}\big).$$
Solving such a linear system is costly. Hence an alternating-direction implicit method can be implemented to solve the numerical PDE, whereby one dimension is treated implicitly, and the other dimension explicitly, for half of the assigned time step, and conversely for the remaining half of the time step. The benefit of this strategy is that the implicit solver only requires a tridiagonal matrix algorithm to be solved. The difference between the true Crank–Nicolson solution and the ADI-approximated solution is of higher order in the time step and hence can be ignored with a sufficiently small time step.
Crank–Nicolson for nonlinear problems
Because the Crank–Nicolson method is implicit, it is generally impossible to solve exactly. Instead, an iterative technique should be used to converge to the solution. One option is to use Newton's method to converge on the prediction, but this requires the computation of the Jacobian. For a high-dimensional system like those in computational fluid dynamics or numerical relativity, it may be infeasible to compute this Jacobian.
A Jacobian-free alternative is fixed-point iteration. If $f$ is the velocity of the system, then the Crank–Nicolson prediction will be a fixed point of the map
$$\Phi(x) = x_0 + \frac{h}{2}\bigl[f(x_0) + f(x)\bigr].$$
If the map iteration $x^{(i+1)} = \Phi\bigl(x^{(i)}\bigr)$ does not converge, the parameterized map $\Theta(x, \alpha) = \alpha x + (1 - \alpha)\,\Phi(x)$, with $\alpha \in (0, 1)$, may be better behaved. In expanded form, the update formula is
$$x^{(i+1)} = \alpha\, x^{(i)} + (1 - \alpha)\left[x_0 + \frac{h}{2}\bigl(f(x_0) + f(x^{(i)})\bigr)\right],$$
where $x^{(i)}$ is the current guess and $x_0$ is the value at the previous time-step.
Even for high-dimensional systems, iteration of this map can converge surprisingly quickly.
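A minimal sketch of the relaxed fixed-point update above in Python (the example velocity field, the value of the relaxation parameter, and the stopping tolerance are illustrative assumptions):

```python
import numpy as np

def crank_nicolson_step_fixed_point(f, x_old, h, alpha=0.5, tol=1e-12, max_iter=200):
    """One Crank-Nicolson step solved by the relaxed fixed-point map:
    x_{i+1} = alpha*x_i + (1 - alpha)*(x_old + (h/2)*(f(x_old) + f(x_i)))."""
    f_old = f(x_old)
    x = x_old + h * f_old                        # explicit Euler starting guess
    for _ in range(max_iter):
        x_next = alpha*x + (1.0 - alpha)*(x_old + 0.5*h*(f_old + f(x)))
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x                                     # last iterate if not converged

# Example: the nonlinear system x' = -x**3, applied componentwise
f = lambda x: -x**3
print(crank_nicolson_step_fixed_point(f, np.array([1.0, 2.0]), h=0.01))
```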
Application in financial mathematics
Because a number of other phenomena can be modeled with the heat equation (often called the diffusion equation in financial mathematics), the Crank–Nicolson method has been applied to those areas as well. Particularly, the Black–Scholes option pricing model's differential equation can be transformed into the heat equation, and thus numerical solutions for option pricing can be obtained with the Crank–Nicolson method.
The importance of this for finance is that option pricing problems, when extended beyond the standard assumptions (e.g. incorporating changing dividends), cannot be solved in closed form, but can be solved using this method. Note however, that for non-smooth final conditions (which happen for most financial instruments), the Crank–Nicolson method is not satisfactory as numerical oscillations are not damped. For vanilla options, this results in oscillation in the gamma value around the strike price. Therefore, special damping initialization steps are necessary (e.g., fully implicit finite difference method).
See also
Financial mathematics
Trapezoidal rule
References
External links
Numerical PDE Techniques for Scientists and Engineers, open access Lectures and Codes for Numerical PDEs
An example of how to apply and implement the Crank–Nicolson method for the Advection equation
Mathematical finance
Numerical differential equations
Finite differences | Crank–Nicolson method | [
"Mathematics"
] | 1,928 | [
"Applied mathematics",
"Mathematical analysis",
"Mathematical finance",
"Finite differences"
] |
1,843,913 | https://en.wikipedia.org/wiki/Desert%20ecology | Desert ecology is the study of interactions between both biotic and abiotic components of desert environments. A desert ecosystem is defined by interactions between organisms, the climate in which they live, and any other non-living influences on the habitat. Deserts are arid regions that are generally associated with warm temperatures; however, cold deserts also exist. Deserts can be found in every continent, with the largest deserts located in Antarctica, the Arctic, Northern Africa, and the Middle East.
Climate
Deserts experience a wide range of temperatures and weather conditions, and can be classified into four types: hot, semiarid, coastal, and cold. Hot deserts experience warm temperatures year round, and low annual precipitation. Low levels of humidity in hot deserts contribute to high daytime temperatures, and extensive night time heat loss. The average annual temperature in hot deserts is approximately 20 to 25 °C, however, extreme weather conditions can lead to temperatures ranging from -18 to 49 °C.
Rainfall generally occurs, followed by long periods of dryness. Semiarid deserts experience similar conditions to hot deserts, however, the maximum and minimum temperatures tend to be less extreme, and generally range from 10 to 38 °C. Coastal deserts are cooler than hot and semiarid deserts, with average summer temperatures ranging between 13 and 24 °C. They also feature higher total rainfall values. Cold deserts are similar in temperature to coastal deserts, however, they receive more annual precipitation in the form of snowfall. Deserts are most notable for their dry climates; usually a result from their surrounding geography. For example, rain-blocking mountain ranges, and distance from oceans are two geographic features that contribute to desert aridity. Rain-blocking mountain ranges create Rain Shadows. As air rises and cools, its relative humidity increases and some or most moisture rains out, leaving little to no water vapor to form precipitation on the other side of the mountain range.
Deserts occupy one-fifth of the Earth's land surface and occur in two belts: between 15° and 35° latitude in both the southern and northern hemispheres. These bands are associated with the high solar intensities that all areas in the tropics receive, and with the dry air brought down by the descending arms of both the Hadley and Ferrel atmospheric circulation cells. Dry winds hold little moisture for these areas, and also tend to evaporate any water present.
Many desert ecosystems are limited by available water levels, rather than rates of radiation or temperature. Water flow in these ecosystems can be thought of as similar to energy flow; in fact, it is often useful to look at water and energy flow together when studying desert ecosystems and ecology.
Water availability in deserts may also be hindered by loose sediments. Dust clouds commonly form in windy, arid climates. Scientists have previously theorised that desert dust clouds would enhance rainfall, however, some more recent studies have shown that precipitation is actually inhibited by this phenomenon by absorbing moisture from the atmosphere. This absorption of atmospheric moisture can result in a positive feedback loop, which leads to further desertification.
Landscape
Desert landscapes can contain a wide variety of geological features, such as oases, rock outcrops, dunes, and mountains. Dunes are structures formed by wind moving sediments into mounds. Desert dunes are generally classified based on their orientation relative to the wind direction. Possibly the most recognizable dune types are transverse dunes, characterized by crests transverse to the wind direction. Many dunes are considered to be active, meaning that they can travel and change over time due to the influence of the wind. However, some dunes can be anchored in place by vegetation or topography, preventing their movement. Some dunes may also be referred to as sticky. These types of dunes occur when individual grains of sand become cemented together. Sticky dunes tend to be more stable, and resistant to wind reworking, than loose dunes. Barchan and Seif dunes are among the most common of desert dunes. Barchan dunes are formed as winds continuously blow in the same direction, and are characterized by a crescent-shape atop the dune. Seif dunes are long and narrow, featuring a sharp crest, and are more common in the Sahara Desert.
Analysis of geological features in desert environments can reveal a lot about the geologic history of the area. Through observation and identification of rock deposits, geologists are able to interpret the order of events that occurred during desert formation. For example, research conducted on the surface geology of the Namib Desert allowed geologists to interpret ancient movements of the Kuiseb River based on rock ages and features identified in the area.
Organism adaptation
Animals
Deserts support diverse communities of plant and animals that have evolved resistance, and circumventing methods of extreme temperatures and arid conditions. For example, desert grasslands are more humid and slightly cooler than its surrounding ecosystems. Many animals obtain energy by eating the surrounding vegetation, however, desert plants are much more difficult for organisms to consume. To avoid intense temperatures, the majority of small desert mammals are nocturnal, living in burrows to avoid the intense desert sun during the daytime. These burrows prevent overheating and dehydration as they maintain an optimal temperature for the mammal. Desert ecology is characterized by dry, alkaline soils, low net production and opportunistic feeding patterns by herbivores and carnivores. Other organisms' survival tactics are physiologically based. Such tactics include the completion of life cycles ahead of anticipated drought seasons, and storing water with the help of specialized organs.
Desert climates are particularly demanding on endothermic organisms. However, endothermic organisms have adapted mechanisms to aid in water retention in habitats such as desert ecosystems which are commonly affected by drought. In environments where the external temperature is less than their body temperature, most endotherms are able to balance heat production and heat loss to maintain a comfortable temperature. However, in deserts where air and ground temperatures exceed body temperature, endotherms must be able to dissipate the large amounts of heat being absorbed in these environments. In order to cope with extreme conditions, desert endotherms have adapted through the means of avoidance, relaxation of homeostasis, and specializations. Nocturnal desert rodents, like the kangaroo rat, will spend the daytime in cool burrows deep underground, and emerge at night to seek food. Birds are much more mobile than ground-dwelling endotherms, and can therefore avoid heat-induced dehydration by flying between water sources. To prevent overheating, the body temperatures of many desert mammals have adapted to be much higher than non-desert mammals. Camels, for example, can maintain body temperatures that are about equal to typical desert air temperatures. This adaptations allows camels to retain large amounts of water for extended periods of time. Other examples of higher body temperature in desert mammals include the diurnal antelope ground squirrel, and the oryx. Certain desert endotherms have evolved very specific and unique characteristics to combat dehydration. Male sandgrouse have specialized belly feathers that are able to trap and carry water. This allows the sandgrouse to provide a source of hydration for their chicks, who do not yet have the ability to fly to water sources themselves.
Plants
Although deserts have severe climates, some plants still manage to grow. Plants that can survive in arid deserts are called xerophytes, meaning they are able to survive long dry periods. Such plants may close their stomata during the daytime and open them again at night. During the night, temperatures are much cooler, and plants will experience less water loss, and intake larger amounts of carbon dioxide for photosynthesis.
Adaptations in xerophytes include resistance to heat and water loss, increased water storage capabilities, and reduced surface area of leaves. One of the most common families of desert plants are the cacti, which are covered in sharp spines or bristles for defence against herbivory. The bristles on certain cacti also have the ability to reflect sunlight, such as those of the old man cactus. Certain xerophytes, like oleander, feature stomata that are recessed as a form of protection against hot, dry desert winds, which allows the leaves to retain water more effectively. Another unique adaptation can be found in xerophytes like ocotillo, which are "leafless during most of the year, thereby avoiding excessive water loss".
There are also plants called phreatophytes which have adapted to the harsh desert conditions by developing extremely long root systems, some of which are 80 ft. long; to reach the water table which ensures a water supply to the plant.
Exploration and research
The harsh climate of most desert regions is a major obstacle in conducting research into these ecosystems. In the environments requiring special adaptations to survive, it is often difficult or even impossible for researchers to spend extended periods of time investigating the ecology of such regions. To overcome the limitations imposed by desert climates, some scientists have used technological advancements in the area of remote sensing and robotics. One such experiment, conducted in 1997, had a specialized robot named Nomad travel through a portion of the Atacama Desert. During this expedition, Nomad travelled over 200 kilometers and provided the researchers with many photographs of sites visited along its path. In another experiment in 2004, named the United Arab Emirates Unified Aerosol Experiment, researchers used satellites and computer models to study emissions and their effect on the climate in the Arabian Desert.
See also
Aridisols
References
Deserts
Ecology
Ecology by biome
Habitats | Desert ecology | [
"Biology"
] | 1,913 | [
"Deserts",
"Ecosystems"
] |
1,844,527 | https://en.wikipedia.org/wiki/Exponential%20dichotomy | In the mathematical theory of dynamical systems, an exponential dichotomy is a property of an equilibrium point that extends the idea of hyperbolicity to non-autonomous systems.
Definition
If
$$\dot{x} = A(t)\,x$$
is a linear non-autonomous dynamical system in $\mathbb{R}^n$ with fundamental solution matrix Φ(t), Φ(0) = I, then the equilibrium point 0 is said to have an exponential dichotomy if there exists a (constant) matrix P such that $P^2 = P$ and positive constants K, L, α, and β such that
$$\left\|\Phi(t)\,P\,\Phi^{-1}(s)\right\| \le K e^{-\alpha(t - s)}, \qquad t \ge s,$$
and
$$\left\|\Phi(t)\,(I - P)\,\Phi^{-1}(s)\right\| \le L e^{\beta(t - s)}, \qquad t \le s.$$
If furthermore, L = 1/K and β = α, then 0 is said to have a uniform exponential dichotomy.
The constants α and β allow us to define the spectral window of the equilibrium point, (−α, β).
Explanation
The matrix P is a projection onto the stable subspace and I − P is a projection onto the unstable subspace. What the exponential dichotomy says is that the norm of the projection onto the stable subspace of any orbit in the system decays exponentially as t → ∞ and the norm of the projection onto the unstable subspace of any orbit decays exponentially as t → −∞, and furthermore that the stable and unstable subspaces are conjugate (because the projections P and I − P are complementary).
An equilibrium point with an exponential dichotomy has many of the properties of a hyperbolic equilibrium point in autonomous systems. In fact, it can be shown that a hyperbolic point has an exponential dichotomy.
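A simple concrete case (added here for illustration): for the constant-coefficient system with
$$A = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix},$$
the fundamental matrix is $\Phi(t) = \operatorname{diag}(e^{-t}, e^{t})$, and taking $P = \operatorname{diag}(1, 0)$ gives
$$\left\|\Phi(t)P\Phi^{-1}(s)\right\| = e^{-(t-s)} \ \ (t \ge s), \qquad \left\|\Phi(t)(I-P)\Phi^{-1}(s)\right\| = e^{t-s} \ \ (t \le s),$$
so the origin has a uniform exponential dichotomy with $K = L = 1$ and $\alpha = \beta = 1$.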
References
Coppel, W. A. Dichotomies in stability theory, Springer-Verlag (1978),
Dynamical systems
Dichotomies | Exponential dichotomy | [
"Physics",
"Mathematics"
] | 340 | [
"Mechanics",
"Dynamical systems"
] |
1,845,123 | https://en.wikipedia.org/wiki/Chain%20drive | Chain drive is a way of transmitting mechanical power from one place to another. It is often used to convey power to the wheels of a vehicle, particularly bicycles and motorcycles. It is also used in a wide variety of machines besides vehicles.
Most often, the power is conveyed by a roller chain, known as the drive chain or transmission chain, passing over a sprocket, with the teeth of the gear meshing with the holes in the links of the chain. The gear is turned, and this pulls the chain putting mechanical force into the system. Another type of drive chain is the Morse chain, invented by the Morse Chain Company of Ithaca, New York, United States. This has inverted teeth.
Sometimes the power is output by simply rotating the chain, which can be used to lift or drag objects. In other situations, a second gear is placed and the power is recovered by attaching shafts or hubs to this gear. Though drive chains are often simple oval loops, they can also go around corners by placing more than two gears along the chain; gears that do not put power into the system or transmit it out are generally known as idler-wheels. By varying the diameter of the input and output gears with respect to each other, the gear ratio can be altered. For example, when the bicycle pedals' gear rotates once, it causes the gear that drives the wheels to rotate more than one revolution. Duplex chains are another type of chain which are essentially two chains joined side by side which allow for more power and torque to be transmitted.
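A small numerical sketch of the gear-ratio statement above (the tooth counts and cadence are typical illustrative values, not taken from the text):

```python
# One turn of a 48-tooth chainring driving a 16-tooth rear sprocket
# turns the sprocket 48/16 = 3 times, so pedalling at 60 rpm spins
# the rear wheel at 180 rpm. All numbers are illustrative.
chainring_teeth = 48
sprocket_teeth = 16
gear_ratio = chainring_teeth / sprocket_teeth   # 3.0

cadence_rpm = 60
wheel_rpm = cadence_rpm * gear_ratio
print(gear_ratio, wheel_rpm)                     # 3.0 180.0
```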
History
The oldest known application of a chain drive appears in the Polybolos, described by the Greek engineer Philon of Byzantium (3rd century BC). Two flat-linked chains were connected to a windlass, which by winding back and forth would automatically fire the machine's arrows until its magazine was empty. Although the device did not transmit power continuously since the chains "did not transmit power from shaft to shaft, and hence they were not in the direct line of ancestry of the chain-drive proper", the Greek design marks the beginning of the history of the chain drive since "no earlier instance of such a cam is known, and none as complex is known until the 16th century. It is here that the flat-link chain, often attributed to Leonardo da Vinci, actually made its first appearance."
The first continuous as well as the first endless chain drive was originally depicted in the written horological treatise of the Song dynasty by the medieval Chinese polymath mathematician and astronomer Su Song (1020–1101 AD), who used it to operate the armillary sphere of his astronomical clock tower, which is the first astronomical clock, as well as the clock jack figurines presenting the time of day by mechanically banging gongs and drums. The chain drive itself converted rotary to rectilinear motion and was given power via the hydraulic works of Su's water clock tank and waterwheel, the latter of which acted as a large gear.
Alternatives
Belt drive
Most chain drive systems use teeth to transfer motion between the chain and the rollers. This results in lower frictional losses than belt drive systems, which often rely on friction to transfer motion.
Although chains can be made stronger than belts, their greater mass increases drive train inertia.
Drive chains are most often made of metal, while belts are often rubber, plastic, urethane, or other substances. If the drive chain is heavier than an equivalent drive belt, the system will have a higher inertia. Theoretically, this can lead to a greater flywheel effect, however in practice the belt or chain inertia often makes up a small proportion of the overall drivetrain inertia.
One problem with roller chains is the variation in speed, or surging, caused by the acceleration and deceleration of the chain as it goes around the sprocket link by link. It starts as soon as the pitch line of the chain contacts the first tooth of the sprocket. This contact occurs at a point below the pitch circle of the sprocket. As the sprocket rotates, the chain is raised up to the pitch circle and is then dropped down again as sprocket rotation continues. Because of the fixed pitch length, the pitch line of the link cuts across the chord between two pitch points on the sprocket, remaining in this position relative to the sprocket until the link exits the sprocket. This rising and falling of the pitch line is what causes chordal effect or speed variation.
In other words, conventional roller chain drives suffer the potential for vibration, as the effective radius of action in a chain and sprocket combination constantly changes during revolution ("Chordal action"). If the chain moves at constant speed, then the shafts must accelerate and decelerate constantly. If one sprocket rotates at a constant speed, then the chain (and probably all other sprockets that it drives) must accelerate and decelerate constantly. This is usually not an issue with many drive systems; however, most motorcycles are fitted with a rubber bushed rear wheel hub to virtually eliminate this vibration issue. Toothed belt drives are designed to limit this issue by operating at a constant pitch radius.
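A short sketch of the size of this effect, assuming the usual geometric model in which the effective radius swings between R·cos(π/N) and R for an N-tooth sprocket (the tooth counts below are illustrative):

```python
import math

def chordal_speed_variation(teeth):
    """Fractional chain-speed variation for an N-tooth sprocket, assuming the
    effective radius oscillates between R*cos(pi/N) and R each tooth engagement."""
    return 1.0 - math.cos(math.pi / teeth)

for n in (11, 17, 25, 50):
    print(n, f"{chordal_speed_variation(n):.2%}")
# More teeth -> smaller speed variation, which is why small sprockets
# make chordal action most noticeable.
```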
Chains are often narrower than belts, and this can make it easier to shift them to larger or smaller gears in order to vary the gear ratio. Multi-speed bicycles with derailleurs make use of this. Also, the more positive meshing of a chain can make it easier to build gears that can increase or shrink in diameter, again altering the gear ratio. However, some newer synchronous belts claim to have "equivalent capacity to roller chain drives in the same width".
Both can be used to move objects by attaching pockets, buckets, or frames to them; chains are often used to move things vertically by holding them in frames, as in industrial toasters, while belts are good at moving things horizontally in the form of conveyor belts. It is not unusual for the systems to be used in combination; for example the rollers that drive conveyor belts are themselves often driven by drive chains.
Drive shafts
Drive shafts are another common method used to move mechanical power around that is sometimes evaluated in comparison to chain drive; in particular belt drive vs chain drive vs shaft drive is a key design decision for most motorcycles. Drive shafts tend to be tougher and more reliable than chain drive, but the bevel gears have far more friction than a chain. For this reason virtually all high-performance motorcycles use chain drive, with shaft-driven arrangements generally used for non-sporting machines. Toothed-belt drives are used for some (non-sporting) models.
Use in vehicles
Bicycles
Chain drive was the main feature which differentiated the safety bicycle introduced in 1885, with its two equal-sized wheels, from the direct-drive penny-farthing or "high wheeler" type of bicycle. The popularity of the chain-driven safety bicycle brought about the demise of the penny-farthing, and is still a basic feature of bicycle design today.
Automobiles
Many early cars used a chain drive system, which was a popular alternative to the Système Panhard. A common design was using a differential located near the centre of the car, which then transferred drive to the rear axle via roller chains. This system allowed for a relatively simple design which could accommodate the vertical axle movement associated with the rear suspension system.
Frazer Nash were strong proponents of this system using one chain per gear selected by dog clutches. Their chain drive system, (designed for the GN Cyclecar Company) was very effective, allowing for fast gear selections. This system was used in many racing cars of the 1920s and 1930s. The last popular chain drive automobile was the Honda S600 of the 1960s.
Motorcycles
Chain drive versus belt drive or use of a driveshaft is a fundamental design decision in motorcycle design; nearly all motorcycles use one of these three designs.
See also
Bicycle chain
Chain pump
Chainsaw
Gear
Rolling mills
References
Bibliography
Needham, Joseph (1986). Science and Civilization in China: Volume 4, Chemistry and Chemical Technology, Part 2, Mechanical Engineering. Taipei: Caves Books Ltd.
Sclater, Neil. (2011). "Chain and belt devices and mechanisms." Mechanisms and Mechanical Devices Sourcebook. 5th ed. New York: McGraw Hill. pp. 262–277. . Drawings and designs of various drives.
External links
The Complete Guide to Chain
Motorcycle primary and drive chains explained
Mechanics
Automotive transmission technologies
Chinese inventions
Mechanical power control
Mechanical power transmission | Chain drive | [
"Physics",
"Engineering"
] | 1,741 | [
"Mechanical power transmission",
"Mechanics",
"Mechanical power control",
"Mechanical engineering"
] |
8,126,986 | https://en.wikipedia.org/wiki/Polyglutamylation | Polyglutamylation is a form of reversible posttranslational modification of glutamate residues seen for example in alpha and beta tubulins, nucleosome assembly proteins NAP1 and NAP2. The γ-carboxy group of glutamate may form a peptide-like bond with the amino group of a free glutamate, whose α-carboxy group can then be extended into a polyglutamate chain. Glutamylation is carried out by the enzyme glutamylase and removed by deglutamylase.
Polyglutamylation with chain lengths of up to six glutamates occurs at certain glutamate residues near the C terminus of most major forms of tubulins. These residues, though themselves not involved in direct binding, cause conformational shifts that regulate the binding of microtubule-associated proteins (MAP and Tau) and motors.
External links
The role of tubulin polymodifications in microtubule functions
References
Post-translational modification
Protein structure | Polyglutamylation | [
"Chemistry"
] | 213 | [
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Structural biology",
"Protein structure"
] |
8,128,733 | https://en.wikipedia.org/wiki/Regulon | In molecular genetics, a regulon is a group of genes that are regulated as a unit, generally controlled by the same regulatory gene that expresses a protein acting as a repressor or activator. This terminology is generally, although not exclusively, used in reference to prokaryotes, whose genomes are often organized into operons; the genes contained within a regulon are usually organized into more than one operon at disparate locations on the chromosome. Applied to eukaryotes, the term refers to any group of non-contiguous genes controlled by the same regulatory gene.
A modulon is a set of regulons or operons that are collectively regulated in response to changes in overall conditions or stresses, but may be under the control of different or overlapping regulatory molecules. The term stimulon is sometimes used to refer to the set of genes whose expression responds to specific environmental stimuli.
Examples
Commonly studied regulons in bacteria are those involved in response to stress such as heat shock. The heat shock response in E. coli is regulated by the sigma factor
σ^32 (RpoH), whose regulon has been characterized as containing at least 89 open reading frames.
Regulons involving virulence factors in pathogenic bacteria are of particular research interest; an often-studied example is the phosphate regulon in E. coli, which couples phosphate homeostasis to pathogenicity through a two-component system. Regulons can sometimes be pathogenicity islands.
The Ada regulon in E. coli is a well-characterized example of a group of genes involved in the adaptive response form of DNA repair.
Quorum sensing behavior in bacteria is a commonly cited example of a modulon or stimulon, though some sources describe this type of intercellular auto-induction as a separate form of regulation.
Evolution
Changes in the regulation of gene networks are a common mechanism for prokaryotic evolution. An example of the effects of different regulatory environments for homologous proteins is the DNA-binding protein OmpR, which is involved in response to osmotic stress in E. coli but is involved in response to acidic environments in the close relative Salmonella Typhimurium.
References
External links
Gene expression | Regulon | [
"Chemistry",
"Biology"
] | 461 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
8,129,403 | https://en.wikipedia.org/wiki/Kinetoplast | A kinetoplast is a network of circular DNA (called kDNA) inside a mitochondrion that contains many copies of the mitochondrial genome. The most common kinetoplast structure is a disk, but they have been observed in other arrangements. Kinetoplasts are only found in Excavata of the class Kinetoplastida. The variation in the structures of kinetoplasts may reflect phylogenic relationships between kinetoplastids. A kinetoplast is usually adjacent to the organism's flagellar basal body, suggesting that it is bound to some components of the cytoskeleton. In Trypanosoma brucei this cytoskeletal connection is called the tripartite attachment complex and includes the protein p166.
Trypanosoma
In trypanosomes, a group of flagellated protozoans, the kinetoplast exists as a dense granule of DNA within the mitochondrion. Trypanosoma brucei, the parasite which causes African trypanosomiasis (African sleeping sickness), is an example of a trypanosome with a kinetoplast. Its kinetoplast is easily visible in samples stained with DAPI, a fluorescent DNA stain, or by the use of fluorescent in situ hybridization (FISH) with BrdU, a thymidine analogue.
Structure
The kinetoplast contains circular DNA in two forms, maxicircles and minicircles. Maxicircles are between 20 and 40kb in size and there are a few dozen per kinetoplast. There are several thousand minicircles per kinetoplast and they are between 0.5 and 1kb in size. Maxicircles encode the typical protein products needed by the mitochondrion, but in an encrypted form. Herein lies the only known function of the minicircles: producing guide RNA (gRNA) to decode this encrypted maxicircle information, typically through the insertion or deletion of uridine residues. The network of maxicircles and minicircles is catenated to form a planar network that resembles chain mail. Reproduction of this network then requires that these rings be disconnected from the parental kinetoplast and subsequently reconnected in the daughter kinetoplast. This unique mode of DNA replication may suggest potential drug targets.
The best studied kDNA structure is that of Crithidia fasciculata, a catenated disk of circular kDNA maxicircles and minicircles, most of which are not supercoiled. Exterior to the kDNA disk but directly adjacent are two complexes of proteins situated 180˚ from each other and are involved in minicircle replication.
Variations
Variations of kinetoplast networks have also been observed and are described by the arrangement and location of their kDNA.
A pro-kDNA kinetoplast is a bundle-like structure found in the mitochondrial matrix proximal to the flagellar basal body. In contrast to the conventional kDNA network, a pro-kDNA kinetoplast contains very little catenation and its maxicircles and minicircles are relaxed instead of supercoiled. Pro-kDNA has been observed in Bodo saltans, Bodo designis, Procryptobia sorokini syn. Bodo sorokini, Rhynchomonas nasuta, and Cephalothamnium cyclopi.
A poly-kDNA kinetoplast is similar in kDNA structure to a pro-kDNA kinetoplast. It contains little catenation and no supercoiling. The distinctive feature of poly-kDNA is that instead of being composed of a single globular bundle as in pro-kDNA, the poly-kDNA is distributed among various discrete foci throughout the mitochondrial lumen. Poly-kDNA has been observed in Dimastigella trypaniformis (a commensal in the intestine of a termite), Dismastigella mimosa (a free-living kinetoplastid), and Cruzella marina (a parasite of the intestine of a sea squirt).
A pan-kDNA kinetoplast, like poly-kDNA and pro-kDNA, contains a lesser degree of catenation but it does contain minicircles that are supercoiled. Pan-kDNA kinetoplasts fill most of the mitochondrial matrix and are not limited to discrete foci like poly-kDNA. Pan-kDNA has been observed in Cryptobia helicis (a parasite of the receptaculum seminis of snails), Bodo caudatus, and Cryptobia branchialis (a parasite of fish).
A mega-kDNA kinetoplast is distributed fairly uniformly throughout the mitochondrial matrix, but does not contain minicircles. Instead, sequences of kDNA similar in sequence to other kinetoplast minicircles are connected in tandem into larger molecules approximately 200kb in length. Mega-kDNA (or structures similar to mega-kDNA) has been observed in Trypanoplasma borreli (a fish parasite) and Jarrellia sp. (a whale parasite).
The presence of this variety of kDNA structures reinforces the evolutionary relationship between the species of kinetoplastids. As pan-kDNA most closely resembles a DNA plasmid, it may be the ancestral form of kDNA.
Replication
The replication of the kinetoplast occurs simultaneously to the duplication of the adjacent flagellum and just prior to the nuclear DNA replication. In a traditional Crithidia fasciculata kDNA network, initiation of replication is promoted by the unlinking of kDNA minicircles via topoisomerase II. The free minicircles are released into a region between the kinetoplast and the mitochondrial membrane called the kinetoflagellar zone (KFZ). After replication the minicircles migrate by unknown mechanisms to the antipodal protein complexes that contain several replication proteins including an endonuclease, helicase, DNA polymerase, DNA primase, and DNA ligase, which initiate repair of remaining discontinuities in the newly replicated minicircles.
This process occurs one minicircle at a time, and only a small number of minicircles are unlinked at any given moment. To keep track of which minicircles have been replicated, upon rejoining to the kDNA network a small gap remains in the nascent minicircles, which identifies them as having already been replicated. Minicircles that have not yet been replicated are still covalently closed. Immediately after replication, each progeny is attached to the kDNA network proximal to the antipodal protein complexes and the gaps are partially repaired.
As minicircle replication progresses, to prevent the build-up of new minicircles, the entire kDNA network will rotate around the central axis of the disk. The rotation is believed to be directly connected to the replication of the adjacent flagellum, as the daughter basal body will also rotate around the mother basal body in a timing and manner similar to the rotation of the kinetoplast. By rotating, the minicircles of the daughter kinetoplast are assembled in a spiral fashion and begin moving inward toward the center of the disk as new minicircles are unlinked and moved into the KFZ for replication.
While the exact mechanisms for maxicircle kDNA have yet to be determined in the same detail as minicircle kDNA, a structure called a nabelschnur (German for "umbilical cord") is observed that tethers the daughter kDNA networks but eventually breaks during separation. Using FISH probes to target the nabelschnur, it has been found to contain maxicircle kDNA.
Kinetoplast replication is described as occurring in five stages, each in relation to the replication of the adjacent flagellum.
Stage I: The kinetoplast has not yet initiated replication, contains no antipodal protein complexes, and is positioned relative to a single flagellar basal body.
Stage II: The kinetoplast begins to show antipodal protein complexes. The flagellar basal body begins replication, as does the kinetoplast. The association of the replicating kinetoplast to the two basal bodies causes it to develop a domed appearance.
Stage III: The new flagellum begin to separate and the kinetoplast takes on a bilobed shape.
Stage IV: The kinetoplasts appear as separate disks but remain connected by the nabelschnur.
Stage V: The daughter kinetoplasts are completely separated as the nabelschnur is broken. Their structure is identical to that seen in Stage I.
DNA repair
Trypanosoma cruzi is able to repair nucleotides in its genomic or kinetoplast DNA that have been damaged by reactive oxygen species produced by the parasite's host during infection. DNA polymerase beta expressed in T. cruzi is employed in the removal of oxidative DNA damages by the process of base excision repair. It appears that DNA polymerase beta acts during kinetoplast DNA replication to repair oxidative DNA damages induced by genotoxic stress in this organelle.
References
Kinetoplastids
Mitochondria
Organelles
Mitochondrial genetics | Kinetoplast | [
"Chemistry"
] | 1,993 | [
"Mitochondria",
"Metabolism"
] |
8,134,415 | https://en.wikipedia.org/wiki/Vasculum | A vasculum or a botanical box is a stiff container used by botanists to keep field samples viable for transportation. The main purpose of the vasculum is to transport plants without crushing them and by maintaining a cool, humid environment.
Construction
Vascula are cylinders typically made from tinned and sometimes lacquered iron, though wooden examples are known. The box was carried horizontally on a strap so that plant specimens lie flat and lined with moistened cloth. Traditionally, British and American vascula were somewhat flat and valise-like with a single room, while continental examples were more cylindrical and often longer, sometimes with two separate compartments. Access to the interior is through one (sometimes two) large lids in the side, allowing plants to be put in and taken out without bending or distorting them unnecessarily. This is particularly important with wildflowers, which are often fragile.
Some early 20th century specimens are made from sheet aluminium rather than tin, but otherwise follow the 19th century pattern. The exterior is usually left rough, or lacquered green.
History
The roots of the vasculum are lost in time, but it may have evolved from the 17th century tin candle-box of similar construction. Linnaeus called it a vasculum dillenianum, from Latin vasculum – small container – and dillenianum, referring to J.J. Dillenius, Linnaeus' friend and colleague at Oxford Botanic Garden. With the rise of botany as a scientific field in the mid 18th century, the vasculum became an indispensable part of the botanist's equipment.
Together with the screw-down plant press, the vasculum was popularized in Britain by the naturalist William Withering around 1770. The shortened term "vasculum" appears to have become the common name applied to them around 1830. Being a hallmark of field botany, vascula were in common use until World War II. With the post-war emphasis on systematics rather than alpha taxonomy, and with new species often collected in far-away places, field botany and the use of vascula went into decline.
Aluminium vascula are still made and in use, though zipper bags and clear plastic folders are cheaper and more commonly used today.
The Vasculum
The Vasculum was "An Illustrated Quarterly dealing primarily with the Natural History of Northumberland and Durham and the tracts immediately adjacent," from 1915 to 2015.
The newsletter of the Society of Herbarium Curators has been named "The Vasculum" since 2006.
References
External links
Darwin's vasculum at the Linnean Society of London
Botany
Containers | Vasculum | [
"Biology"
] | 530 | [
"Plants",
"Botany"
] |
23,729,510 | https://en.wikipedia.org/wiki/Einstein%E2%80%93de%20Sitter%20universe | The Einstein–de Sitter universe is a model of the universe proposed by Albert Einstein and Willem de Sitter in 1932. On first learning of Edwin Hubble's discovery of a linear relation between the redshift of the galaxies and their distance, Einstein set the cosmological constant to zero in the Friedmann equations, resulting in a model of the expanding universe known as the Friedmann–Einstein universe. In 1932, Einstein and De Sitter proposed an even simpler cosmic model by assuming a vanishing spatial curvature as well as a vanishing cosmological constant. In modern parlance, the Einstein–de Sitter universe can be described as a cosmological model for a flat matter-only Friedmann–Lemaître–Robertson–Walker metric (FLRW) universe.
In the model, Einstein and de Sitter derived a simple relation between the average density of matter in the universe and its expansion according to H0² = κρ/3, where H0 is the Hubble constant, ρ is the average density of matter and κ is the Einstein gravitational constant. The size of the Einstein–de Sitter universe evolves with time as a(t) ∝ t^(2/3), making its current age 2/3 times the Hubble time. The Einstein–de Sitter universe became a standard model of the universe for many years because of its simplicity and because of a lack of empirical evidence for either spatial curvature or a cosmological constant. It also represented an important theoretical case of a universe of critical matter density, poised just at the limit of eventual recollapse. However, Einstein's later reviews of cosmology make it clear that he saw the model as only one of several possibilities for the expanding universe.
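As a rough numerical illustration (the round value H0 ≈ 70 km/s/Mpc ≈ 2.27×10⁻¹⁸ s⁻¹ used here is an assumption for the example, not a figure from this article, and κ is taken as 8πG so that the relation becomes the familiar critical-density formula): ρ = 3H0²/(8πG) ≈ 3 × (2.27×10⁻¹⁸ s⁻¹)² / (8π × 6.674×10⁻¹¹ m³ kg⁻¹ s⁻²) ≈ 9×10⁻²⁷ kg/m³, and the age of such a universe is t0 = (2/3)(1/H0) ≈ (2/3) × 14.0 Gyr ≈ 9.3 Gyr, noticeably younger than the roughly 13.8 Gyr obtained in the Lambda-CDM model.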
The Einstein–de Sitter universe was particularly popular in the 1980s, after the theory of cosmic inflation predicted that the curvature of the universe should be very close to zero. This case with zero cosmological constant implies the Einstein–de Sitter model, and the theory of cold dark matter was developed, initially with a cosmic matter budget around 95% cold dark matter and 5% baryons. However, in the 1990s various observations including galaxy clustering and measurements of the Hubble constant led to increasingly serious problems for this model. Following the discovery of the accelerating universe in 1998, and observations of the cosmic microwave background and galaxy redshift surveys in 2000–2003, it is now generally accepted that dark energy makes up around 70 percent of the present energy density while cold dark matter contributes around 25 percent, as in the modern Lambda-CDM model.
The Einstein–de Sitter model remains a good approximation to our universe in the past at redshifts between around 300 and 2, i.e. well after the radiation-dominated era but before dark energy became important.
See also
Shape of the universe
de Sitter universe
Ultimate fate of the universe
Notes and references
General relativity
Albert Einstein | Einstein–de Sitter universe | [
"Physics"
] | 583 | [
"General relativity",
"Theory of relativity"
] |
23,729,622 | https://en.wikipedia.org/wiki/C30H50 | The molecular formula C30H50 (molar mass: 410.72 g/mol, exact mass: 410.3913 u) may refer to:
Squalene
Hopene
Diploptene (Hop-22(29)-ene)
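As a quick arithmetic check of the quoted molar mass (the standard atomic weights C ≈ 12.011 and H ≈ 1.008 are assumed here rather than given above): 30 × 12.011 + 50 × 1.008 = 360.33 + 50.40 ≈ 410.73 g/mol, consistent with the 410.72 g/mol figure up to rounding of the atomic weights.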
Molecular formulas | C30H50 | [
"Physics",
"Chemistry"
] | 69 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,732,455 | https://en.wikipedia.org/wiki/Upstream%20open%20reading%20frame | An upstream open reading frame (uORF) is an open reading frame (ORF) within the 5' untranslated region (5'UTR) of an mRNA. uORFs can regulate eukaryotic gene expression. Translation of the uORF typically inhibits downstream expression of the primary ORF. However, in some genes such as yeast GCN4, translation of specific uORFs may increase translation of the main ORF.
In humans
Approximately 50% of human genes contain uORFs in their 5'UTR, and when present, these cause reductions in protein expression. Human peptides derived from translated uORFs can be detected from cellular material with a mass spectrometer.
uORFs have been found in two thirds of proto-oncogenes and genes encoding related proteins.
In bacteria
In bacteria, uORFs are called leader peptides and were originally discovered on the basis of their impact on the regulation of genes involved in the synthesis or transport of amino acids.
See also
Eukaryotic translation
Short open reading frame
Micropeptides
Leaky scanning
References
Gene expression
Molecular biology | Upstream open reading frame | [
"Chemistry",
"Biology"
] | 227 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
23,735,926 | https://en.wikipedia.org/wiki/Trojan%20wave%20packet | In physics, a trojan wave packet is a wave packet that is nonstationary and nonspreading. It is part of an artificially created system that consists of a nucleus and one or more electron wave packets, and that is highly excited under a continuous electromagnetic field. Its discovery was among the significant contributions to quantum mechanics for which Iwo Bialynicki-Birula was awarded the 2022 Wigner Medal.
The strong, polarized electromagnetic field holds or "traps" each electron wave packet in an intentionally selected orbit (energy shell). They derive their names from the trojan asteroids in the Sun–Jupiter system. Trojan asteroids orbit around the Sun in Jupiter's orbit at its Lagrange points L4 and L5, where they are phase-locked and protected from collision with each other, and this phenomenon is analogous to the way the wave packet is held together.
Concepts and research
The concept of the trojan wave packet derives from the manipulation of atoms and ions at the atomic level using ion traps. Ion traps allow the manipulation of atoms and are used to create new states of matter including ionic liquids, Wigner crystals and Bose–Einstein condensates.
This ability to manipulate the quantum properties directly is key to the development of applicable nanodevices such as quantum dots and microchip traps. In 2004 it was shown that it is possible to create a trap which is actually a single atom. Within the atom, the behavior of an electron can be manipulated.
During experiments in 2004 using lithium atoms in an excited state, researchers were able to localize an electron in a classical orbit for 15,000 orbits (900 ns). It was neither spreading nor dispersing. This "classical atom" was synthesized by "tethering" the electron using a microwave field to which its motion is phase locked. The phase lock of the electrons in this unique atomic system is, as mentioned above, analogous to the phase locked asteroids of Jupiter's orbit.
The techniques explored in this experiment are a solution to a problem that dates back to 1926. Physicists at that time realized that any initially localized wave packet will inevitably spread around the orbit of the electrons. Physicists noticed that "the wave equation is dispersive for the atomic Coulomb potential." In the 1980s several groups of researchers proved this to be true: the wave packets spread all the way around the orbits and coherently interfered with themselves. The real-world innovation realized in experiments such as the trojan wave packets is the localization of the wave packets, i.e., with no dispersion. Applying a circularly polarized electromagnetic field at microwave frequencies, synchronized with the electron wave packet, intentionally keeps the electron wave packet in a Lagrange-type orbit.
The trojan wave packet experiments built on previous work with lithium atoms in an excited state. These are atoms that respond sensitively to electric and magnetic fields, have relatively long decay periods, and whose electrons, for all intents and purposes, operate in classical orbits. The sensitivity to electric and magnetic fields is important because it allows control of, and response to, the polarized microwave field.
Beyond single electron wave packets
The next logical step is to attempt to move from single-electron wave packets to more than one electron wave packet. This had already been accomplished in barium atoms, with two electron wave packets. These two were localized; however, they eventually dispersed after colliding near the nucleus. Another technique employed a nondispersive pair of electrons, but one of these had to have a localized orbit close to the nucleus. The demonstration of nondispersive two-electron trojan wave packets changes all that. These are the next-step analogue of the one-electron trojan wave packets, designed for excited helium atoms.
As of July 2005, atoms with coherent, stable two-electron, nondispersing wave packets had been created. These are excited helium-like atoms, or quantum dot helium (in solid-state applications), and are atomic (quantum) analogues to the three-body problem of Newton's classical physics, which includes today's astrophysics. In tandem, circularly polarized electromagnetic and magnetic fields stabilize the two-electron configuration in the helium atom or the quantum dot helium (with impurity center). The stability is maintained over a broad spectrum, and because of this, the configuration of two electron wave packets is considered to be truly nondispersive. For example, with the quantum dot helium, configured for confining electrons in two spatial dimensions, there now exists a variety of trojan wave packet configurations with two electrons, and as of 2005, only one in three dimensions. In 2012 an essential experimental step was taken: trojan wave packets were not only generated but also locked to an adiabatically changing frequency while the atoms were expanded, as once predicted by Kalinski and Eberly. This should allow two-electron Langmuir trojan wave packets to be created in helium by sequential excitation in an adiabatic Stark field, producing the circular one-electron aureola first and then putting the second electron into a similar state.
See also
Atomic orbital
Rydberg state
Soliton wave
Quantum scar
Gausson
References
Further reading
Books
Journal articles
External links
Aharonov-Bohm Oscillations In "Trojan Electrons"
Experimental creation of "Trojan Wave Packets" - Barry Dunning's talk on YouTube
Multi-electron extensions of "Trojan Wave Packets" - Matt Kalinski's talk (1) on YouTube
Multi-electron extensions of "Trojan Wave Packets" - Matt Kalinski's talk (2) on YouTube
Opposite phenomenon - Cycloatoms (PPT presentation by Robert Wagner) - accelerated counterintuitive relativistic spreading of the sharp Gaussian wave packet originally resembling the ground state into the ring in the hydrogen atom in ultra-strong magnetic and laser fields (animation)
Materials science
Microelectronic and microelectromechanical systems
Microtechnology
Nanoelectronics
Nanotechnology
Quantum states | Trojan wave packet | [
"Physics",
"Materials_science",
"Engineering"
] | 1,219 | [
"Applied and interdisciplinary physics",
"Microtechnology",
"Quantum mechanics",
"Materials science",
"Nanoelectronics",
"nan",
"Nanotechnology",
"Microelectronic and microelectromechanical systems",
"Quantum states"
] |
23,736,226 | https://en.wikipedia.org/wiki/Dulong%E2%80%93Petit%20law | The Dulong–Petit law, a thermodynamic law proposed by French physicists Pierre Louis Dulong and Alexis Thérèse Petit, states that the classical expression for the molar specific heat capacity of certain chemical elements is constant for temperatures far from absolute zero.
In modern terms, Dulong and Petit found that the heat capacity of a mole of many solid elements is about 3R, where R is the universal gas constant. The modern theory of the heat capacity of solids states that it is due to lattice vibrations in the solid.
History
Experimentally Pierre Louis Dulong and Alexis Thérèse Petit had found in 1819 that the heat capacity per weight (the mass-specific heat capacity) for 13 measured elements was close to a constant value, after it had been multiplied by a number representing the presumed relative atomic weight of the element. These atomic weights had shortly before been suggested by John Dalton and modified by Jacob Berzelius.
Dulong and Petit were unaware of the relationship with R, since this constant had not yet been defined from the later kinetic theory of gases. The value of 3R is about 25 joules per kelvin, and Dulong and Petit essentially found that this was the heat capacity of certain solid elements per mole of atoms they contained.
Kopp's law, developed in 1865 by Hermann Franz Moritz Kopp, extended the Dulong–Petit law to chemical compounds on the basis of further experimental data.
Amedeo Avogadro remarked in 1833 that the law did not fit the experimental data for carbon samples. In 1876, Heinrich Friedrich Weber noticed that the specific heat of diamond was sensitive to temperature.
In 1877, Ludwig Boltzmann showed that the constant value of Dulong–Petit law could be explained in terms of independent classical harmonic oscillators. With the advent of quantum mechanics, this assumption was refined by Weber's student, Albert Einstein in 1907, employing quantum harmonic oscillators to explain the experimentally observed decrease of the heat capacity at low temperatures in diamond.
Peter Debye followed in 1912 with a new model based on Max Planck's photon gas, in which the vibrations are treated not as those of individual oscillators but as vibrational modes of the ionic lattice. Debye's model made it possible to predict the behavior of the ionic heat capacity at temperatures close to 0 kelvin, and, like the Einstein solid, it recovers the Dulong–Petit law at high temperature.
The 1900 Drude–Lorentz model overestimated the electronic heat capacity, predicting it to be half of the value given by the Dulong–Petit law. With the development of the quantum mechanical free electron model in 1927 by Arnold Sommerfeld, the electronic contribution was found to be orders of magnitude smaller. This model explained why conductors and insulators have roughly the same heat capacity at high temperatures, since the heat capacity depends mostly on the lattice and not on the electronic properties.
Equivalent forms of statement of the law
An equivalent statement of the Dulong–Petit law in modern terms is that, regardless of the nature of the substance, the specific heat capacity c of a solid element (measured in joule per kelvin per kilogram) is equal to 3R/M, where R is the gas constant (measured in joule per kelvin per mole) and M is the molar mass (measured in kilogram per mole). Thus, the heat capacity per mole of many elements is 3R.
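For example (taking the molar mass of copper to be about 0.0635 kg/mol, an assumed textbook value rather than one given in this article), the law predicts c ≈ 3 × 8.314 J/(K·mol) / 0.0635 kg/mol ≈ 393 J/(K·kg), close to copper's measured room-temperature specific heat of roughly 385 J/(K·kg).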
The initial form of the Dulong–Petit law was:
c M = K,
where c is the mass-specific heat capacity, M is the atomic (molar) weight, and K is a constant which we know today is about 3R.
In modern terms the mass m of the sample divided by molar mass M gives the number of moles n:
n = m/M.
Therefore, using uppercase C for the full heat capacity (in joule per kelvin), we have:
C = 3nR = 3(m/M)R,
or
C/n = 3R.
Therefore, the heat capacity of most solid crystalline substances is 3R per mole of substance.
Dulong and Petit did not state their law in terms of the gas constant R (which was not then known). Instead, they measured the values of heat capacities (per weight) of substances and found them smaller for substances of greater atomic weight as inferred by Dalton and other early atomists. Dulong and Petit then found that when multiplied by these atomic weights, the value for the heat capacity per mole was nearly constant, and equal to a value which was later recognized to be 3R.
In other modern terminology, the dimensionless heat capacity C/(nR) is equal to 3.
The law can also be written as a function of the total number of atoms N in the sample:
C = 3 N kB,
where kB is the Boltzmann constant.
Application limits
Despite its simplicity, the Dulong–Petit law offers a fairly good prediction for the heat capacity of many elementary solids with relatively simple crystal structure at high temperatures. This agreement is because in the classical statistical theory of Ludwig Boltzmann, the heat capacity of solids approaches a maximum of 3R per mole of atoms because full vibrational-mode degrees of freedom amount to 3 degrees of freedom per atom, each corresponding to a quadratic kinetic energy term and a quadratic potential energy term. By the equipartition theorem, the average of each quadratic term is (1/2)kBT, or (1/2)RT per mole (see derivation below). Multiplied by 3 degrees of freedom and the two terms per degree of freedom, this amounts to 3R per mole heat capacity.
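Spelled out as a single line (this merely restates the counting above, with standard numerical values that are not quoted in the article): average energy per atom = 3 degrees of freedom × 2 quadratic terms × (1/2)kBT = 3kBT, so per mole of atoms C = d(3NAkBT)/dT = 3NAkB = 3R ≈ 24.9 J/(K·mol).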
The Dulong–Petit law fails at room temperatures for light atoms bonded strongly to each other, such as in metallic beryllium and in carbon as diamond. Here, it predicts higher heat capacities than are actually found, with the difference due to higher-energy vibrational modes not being populated at room temperatures in these substances.
In the very low (cryogenic) temperature region, where the quantum mechanical nature of energy storage in all solids manifests itself with larger and larger effect, the law fails for all substances. For crystals under such conditions, the Debye model, an extension of the Einstein theory that accounts for statistical distributions in atomic vibration when there are lower amounts of energy to distribute, works well.
Derivation for an Einstein solid
A system of vibrations in a crystalline solid lattice can be modeled as an Einstein solid, i.e. by considering N quantum harmonic oscillator potentials along each degree of freedom. Then, the free energy of the system can be written as
F = Σα [ ħωα/2 + kB T ln(1 − e^(−ħωα/(kB T))) ],
where the index α sums over all the degrees of freedom. In the 1907 Einstein model (as opposed to the later Debye model) we consider only the high-energy limit:
kB T ≫ ħωα.
Then
ln(1 − e^(−ħωα/(kB T))) ≈ ln(ħωα/(kB T)),
and we have
F ≈ Σα ħωα/2 + kB T Σα ln(ħωα/(kB T)).
Define geometric mean frequency ω̄ by
g ln ω̄ = Σα ln ωα,
where g measures the total number of spatial degrees of freedom of the system.
Thus we have
F ≈ Σα ħωα/2 + g kB T ln(ħω̄/(kB T)).
Using energy
E = F − T (∂F/∂T),
we have
E = Σα ħωα/2 + g kB T.
This gives heat capacity at constant volume
CV = (∂E/∂T)V = g kB = 3 N kB,
which is independent of the temperature.
For another more precise derivation, see Debye model.
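For readers who want a quick numerical check, the following minimal Python sketch (not part of the original article) evaluates the standard single-frequency Einstein-model heat capacity, CV = 3R x² e^x / (e^x − 1)² with x = ħω/(kB T), and shows that it approaches the Dulong–Petit value 3R as the temperature rises (x → 0):

import math

R = 8.314  # molar gas constant, J/(mol*K)

def einstein_cv(x):
    """Molar heat capacity of an Einstein solid, with x = hbar*omega/(kB*T)."""
    if x == 0.0:
        return 3.0 * R  # Dulong–Petit limit
    return 3.0 * R * x**2 * math.exp(x) / (math.exp(x) - 1.0) ** 2

for x in (5.0, 1.0, 0.1, 0.01):  # decreasing x corresponds to increasing temperature
    print(f"x = {x:5.2f}   C_V = {einstein_cv(x):6.2f} J/(mol*K)")
# The printed values tend to 3R ≈ 24.94 J/(mol*K) as x -> 0, i.e. at high temperature.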
See also
Heat capacity
Kopp–Neumann law
References
External links
(Annales de Chimie et de Physique article is translated)
Condensed matter physics
Laws of thermodynamics
Statistical mechanics
Analytical chemistry | Dulong–Petit law | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,407 | [
"Matter",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Thermodynamics",
"nan",
"Statistical mechanics",
"Laws of thermodynamics"
] |
23,737,949 | https://en.wikipedia.org/wiki/Two-way%20satellite%20time%20and%20frequency%20transfer | Two-way satellite time and frequency transfer (TWSTFT) is a high-precision long distance time and frequency transfer mechanism between time bureaux to determine and distribute time and frequency standards.
TWSTFT is being evaluated as an alternative to be used by the Bureau International des Poids et Mesures in the determination of International Atomic Time (TAI), as a complement to the current standard method of simultaneous observations of GPS transmissions.
External links
TWSTFT page at the National Physical Laboratory
TWSTFT page at the Physikalisch-Technische Bundesanstalt
NIST TWSTFT page
TWSTFT page at the US Naval Observatory
Time
Telecommunications techniques
Synchronization | Two-way satellite time and frequency transfer | [
"Physics",
"Mathematics",
"Engineering"
] | 148 | [
"Telecommunications engineering",
"Physical quantities",
"Time",
"Time stubs",
"Quantity",
"Spacetime",
"Wikipedia categories named after physical quantities",
"Synchronization"
] |