id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
963,881 | https://en.wikipedia.org/wiki/Alarm%20management | Alarm management is the application of human factors and ergonomics along with instrumentation engineering and systems thinking to manage the design of an alarm system to increase its usability. Most often the major usability problem is that too many alarms are annunciated in a plant upset, commonly referred to as an alarm flood (similar to an interrupt storm), by analogy with a flood caused by excessive rainfall falling on a drainage system with an essentially fixed output capacity. However, there can also be other problems with an alarm system, such as poorly designed alarms, improperly set alarm points, ineffective annunciation, and unclear alarm messages. Poor alarm management is one of the leading causes of unplanned downtime, contributing to over $20B in lost production every year, and of major industrial incidents. Developing good alarm management practices is not a discrete activity but a continuous process (i.e., it is more of a journey than a destination).
Alarm problem history
From their conception, large chemical, refining, power generation, and other processing plants required the use of a control system to keep the process operating successfully and producing products. Due to the fragility of the components as compared to the process, these control systems often required a control room to protect them from the elements and process conditions. In the early days of control rooms, they used what were referred to as "panel boards" which were loaded with control instruments and indicators. These were tied to sensors located in the process streams and on the outside of process equipment. The sensors relayed their information to the control instruments via analogue signals, such as a 4-20 mA current loop in the form of twisted pair wiring. At first these systems merely yielded information, and a well-trained operator was required to make adjustments either by changing flow rates, or altering energy inputs to keep the process within its designed limits.
Alarms were added to alert the operator to a condition that was about to exceed a design limit, or had already exceeded a design limit. Additionally, shutdown systems were employed to halt a process that was in danger of exceeding either safety, environmental or monetarily acceptable process limits. Alarms were indicated to the operator by annunciator horns and lights of different colours. (For instance, green lights meant OK, yellow meant not OK, and red meant BAD.) Panel boards were usually laid out in a manner that replicated the process flow in the plant. So instrumentation indicating operating units within the plant was grouped together for recognition's sake and ease of problem solving. It was a simple matter to look at the entire panel board and discern whether any section of the plant was running poorly. This was due to both the design of the instruments and the implementation of the alarms associated with the instruments. Instrumentation companies put a lot of effort into the design and individual layout of the instruments they manufactured. To do this they employed behavioural psychology practices which revealed how much information a human being could collect in a quick glance. More complex plants had more complex panel boards, and therefore often more human operators or controllers.
Thus, in the early days of panel board systems, alarms were regulated by both size and cost. In essence, they were limited by the amount of available board space, and the cost of running wiring, and hooking up an annunciator (horn), indicator (light) and switches to flip to acknowledge, and clear a resolved alarm. It was often the case that if a new alarm was needed, an old one had to be given up.
As technology developed, control systems and control methods were tasked with advancing plant automation to a higher degree with each passing year. Highly complex material processing called for highly complex control methodologies. Also, global competition pushed manufacturing operations to increase production while using less energy and producing less waste. In the days of the panel boards, a special kind of engineer was required to understand a combination of the electronic equipment associated with process measurement and control, the control algorithms necessary to control the process (PID basics), and the actual process that was being used to make the products. Around the mid-1980s came the digital revolution. Distributed control systems (DCS) were a boon to the industry. The engineer could now control the process without having to understand the equipment necessary to perform the control functions. Panel boards were no longer required, because all of the information that once came across analogue instruments could be digitised, fed into a computer and manipulated to achieve the same control actions once performed with amplifiers and potentiometers.
As a side effect, that also meant that alarms were easy and cheap to configure and deploy: one simply typed in a tag, entered a value to alarm on, and set the alarm to active. The unintended result was that soon people alarmed everything. Initial installers set an alarm at 80% and 20% of the operating range of any variable just as a habit. The integration of programmable logic controllers, safety instrumented systems, and packaged equipment controllers has been accompanied by an overwhelming increase in associated alarms. One other unfortunate part of the digital revolution was that what once covered several square yards of panel space now had to fit into a 17-inch computer monitor. Multiple pages of information were thus employed to replicate the information on the replaced panel board. Alarms were used to tell an operator to go look at a page he was not viewing. Alarms were used to tell an operator that a tank was filling. Every mistake made in operations usually resulted in a new alarm. With the implementation of the OSHA 1910 regulations, HAZOP studies usually requested several new alarms. Alarms were everywhere. Incidents began to accrue as too much data collided with too little useful information.
Alarm management history
Recognizing that alarms were becoming a problem, industrial control system users banded together and formed the Alarm Management Task Force, which was a customer advisory board led by Honeywell in 1990. The AMTF included participants from chemical, petrochemical, and refining operations. They gathered and wrote a document on the issues associated with alarm management. This group quickly realised that alarm problems were simply a subset of a larger problem, and formed the Abnormal Situation Management Consortium (ASM is a registered trademark of Honeywell). The ASM Consortium developed a research proposal and was granted funding from the National Institute of Standards and Technology (NIST) in 1994. The focus of this work was addressing the complex human-system interaction and factors that influence successful performance for process operators. Automation solutions have often been developed without consideration of the human that needs to interact with the solution. In particular, alarms are intended to improve situation awareness for the control room operator, but a poorly configured alarm system does not achieve this goal.
The ASM Consortium has produced documents on best practices in alarm management, as well as operator situation awareness, operator effectiveness, and other operator-oriented issues. These documents were originally for ASM Consortium members only, but the ASMC has recently offered these documents publicly.
The ASM Consortium also participated in the development of an alarm management guideline published by the Engineering Equipment & Materials Users' Association (EEMUA) in the UK. The ASM Consortium provided data from its member companies and contributed to the editing of the guideline. The result is EEMUA 191, "Alarm Systems - A Guide to Design, Management and Procurement".
Several institutions and societies are producing standards on alarm management to assist their members in the best practices use of alarms in industrial manufacturing systems. Among them are the ISA (ISA 18.2), API (API 1167) and NAMUR (Namur NA 102). Several companies also offer software packages to assist users in dealing with alarm management issues. Among them are DCS manufacturing companies, and third-party vendors who offer add-on systems.
Concepts
The fundamental purpose of alarm annunciation is to alert the operator to deviations from normal operating conditions, i.e. abnormal operating situations. The ultimate objective is to prevent, or at least minimise, physical and economic loss through operator intervention in response to the condition that was alarmed. For most digital control system users, losses can result from situations that threaten environmental safety, personnel safety, equipment integrity, economy of operation, and product quality control as well as plant throughput. A key factor in operator response effectiveness is the speed and accuracy with which the operator can identify the alarms that require immediate action.
By default, the assignment of alarm trip points and alarm priorities constitutes basic alarm management. Each individual alarm is designed to provide an alert when that process indication deviates from normal. The main problem with basic alarm management is that these features are static. The resultant alarm annunciation does not respond to changes in the mode of operation or the operating conditions.
When a major piece of process equipment like a charge pump, compressor, or fired heater shuts down, many alarms become unnecessary. These alarms are no longer independent exceptions from normal operation. They indicate, in that situation, secondary, non-critical effects and no longer provide the operator with important information. Similarly, during start-up or shutdown of a process unit, many alarms are not meaningful. This is often the case because the static alarm conditions conflict with the required operating criteria for start-up and shutdown.
In all cases of major equipment failure, start-ups, and shutdowns, the operator must search alarm annunciation displays and analyse which alarms are significant. This wastes valuable time when the operator needs to make important operating decisions and take swift action. If the resultant flood of alarms becomes too great for the operator to comprehend, then the basic alarm management system has failed as a system that allows the operator to respond quickly and accurately to the alarms that require immediate action. In such cases, the operator has virtually no chance to minimise, let alone prevent, a significant loss.
In short, one needs to extend the objectives of alarm management beyond the basic level. It is not sufficient to utilise multiple priority levels because priority itself is often dynamic. Likewise, alarm disabling based on unit association or suppressing audible annunciation based on priority do not provide dynamic, selective alarm annunciation. The solution must be an alarm management system that can dynamically filter the process alarms based on the current plant operation and conditions so that only the currently significant alarms are annunciated.
The fundamental purpose of dynamic alarm annunciation is to alert the operator to relevant abnormal operating situations. They include situations that have a necessary or possible operator response to ensure:
Personnel and Environmental Safety,
Equipment Integrity,
Product Quality Control.
The ultimate objectives are no different from the previous basic alarm annunciation management objectives. Dynamic alarm annunciation management focuses the operator's attention by eliminating extraneous alarms, providing better recognition of critical problems, and ensuring swifter, more accurate operator response.
The need for alarm management
Alarm management is usually necessary in a process manufacturing environment that is controlled by an operator using a supervisory control system, such as a DCS, a SCADA or a programmable logic controller (PLC). Such a system may have hundreds of individual alarms that up until very recently have probably been designed with only limited consideration of other alarms in the system. Since humans can only do one thing at a time and can pay attention to a limited number of things at a time, there needs to be a way to ensure that alarms are presented at a rate that can be assimilated by a human operator, particularly when the plant is upset or in an unusual condition. Alarms also need to be capable of directing the operator's attention to the most important problem that he or she needs to act upon, using a priority to indicate degree of importance or rank, for instance. To ensure continuous production, seamless service and consistent quality at any time of day or night, an organisation is needed in which several teams of people handle, one after the other, the events that occur.
This is more commonly called on-call management. On-call management relies on a team of one or more people (site manager, maintenance staff) or on an external organisation (guards, a telesurveillance centre). To avoid requiring a full-time person to monitor a single process or level, the transmission of information and/or events is mandatory. This transmission enables the on-call staff to be more mobile and more efficient, and allows them to perform other tasks at the same time.
Some improvement methods
The techniques for achieving rate reduction range from the extremely simple ones of reducing nuisance and low value alarms to redesigning the alarm system in a holistic way that considers the relationships among individual alarms.
Design guide
This step involves documenting the methodology or philosophy of how to design alarms. It can include things such as what to alarm, standards for alarm annunciation and text messages, and how the operator will interact with the alarms.
Rationalization and Documentation
This phase is a detailed review of all alarms to document their design purpose, and to ensure that they are selected and set properly and meet the design criteria. Ideally this stage will result in a reduction of alarms, but it does not always do so.
Advanced methods
The above steps will often still fail to prevent an alarm flood in an operational upset, so advanced methods such as alarm suppression under certain circumstances are then necessary. As an example, shutting down a pump will always cause a low flow alarm on the pump outlet flow, so the low flow alarm may be suppressed while the pump is shut down, since it adds no value for the operator, who already knows it was caused by the pump being shut down. This technique can of course get very complicated and requires considerable care in design. In the above case, for instance, it can be argued that the low flow alarm does add value as it confirms to the operator that the pump has indeed stopped. Process boundaries (Boundary Management) must also be taken into account.
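As a rough sketch of the suppression logic just described (the flow limit, tag values and pump-running flag are invented for the example and are not taken from any particular DCS):

```python
# Illustrative sketch of state-based alarm suppression: the low-flow alarm on a
# pump's outlet is annunciated only while the pump is actually running.
# All tag names and limits are hypothetical.

LOW_FLOW_LIMIT = 5.0   # m3/h, assumed alarm trip point

def annunciate_low_flow(outlet_flow_m3h: float, pump_running: bool) -> bool:
    """Return True if the low-flow alarm should be presented to the operator."""
    in_alarm = outlet_flow_m3h < LOW_FLOW_LIMIT
    # When the pump is stopped, low flow is an expected consequence, so the
    # alarm is suppressed rather than shown as an independent exception.
    return in_alarm and pump_running

print(annunciate_low_flow(0.3, pump_running=False))  # False: suppressed
print(annunciate_low_flow(0.3, pump_running=True))   # True: a real problem
```

A real implementation would typically also record suppressed alarms for later review, in line with the design care called for above.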
Alarm management becomes more and more necessary as the complexity and size of manufacturing systems increases. A lot of the need for alarm management also arises because alarms can be configured on a DCS at nearly zero incremental cost, whereas in the past on physical control panel systems that consisted of individual pneumatic or electronic analogue instruments, each alarm required expenditure and control panel area, so more thought usually went into the need for an alarm. Numerous disasters such as Three Mile Island, Chernobyl accident and the Deepwater Horizon have established a clear need for alarm management.
The seven steps to alarm management
Step 1: Create and adopt an alarm philosophy
A comprehensive design and guideline document is produced which defines a plant standard employing a best-practice alarm management methodology.
Step 2: Alarm performance benchmarking
Analyze the alarm system to determine its strengths and deficiencies, and effectively map out a practical solution to improve it.
Step 3: “Bad actor” alarm resolution
From experience, it is known that around half of the entire alarm load usually comes from a relatively small number of alarms. The methods for making them work properly are documented, and can be applied with minimum effort and maximum performance improvement.
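A minimal sketch of the kind of analysis this step typically involves, counting occurrences per alarm tag in a historical log and reporting each tag's share of the total load (the log format and tag names are invented for the example):

```python
# Illustrative "bad actor" analysis over a historical alarm log.
# Each record is assumed to be (timestamp, tag); only the tag is needed here.
from collections import Counter

alarm_log = [
    ("2024-05-01 08:00:12", "FI-101.LO"),
    ("2024-05-01 08:00:15", "FI-101.LO"),
    ("2024-05-01 08:02:40", "TI-205.HI"),
    ("2024-05-01 08:03:05", "FI-101.LO"),
    ("2024-05-01 08:07:52", "LI-330.HIHI"),
    # ... typically weeks of data with thousands of records
]

counts = Counter(tag for _, tag in alarm_log)
total = sum(counts.values())

print("Top alarms by occurrence:")
for tag, n in counts.most_common(10):
    print(f"  {tag:15s} {n:6d}  ({100 * n / total:.1f}% of load)")
```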
Step 4: Alarm documentation and rationalisation (D&R)
A full overhaul of the alarm system to ensure that each alarm complies with the alarm philosophy and the principles of good alarm management.
Step 5: Alarm system audit and enforcement
DCS alarm systems are notoriously easy to change and generally lack proper security. Methods are needed to ensure that the alarm system does not drift from its rationalised state.
Step 6: Real-time alarm management
More advanced alarm management techniques are often needed to ensure that the alarm system properly supports, rather than hinders, the operator in all operating scenarios. These include Alarm Shelving, State-Based Alarming, and Alarm Flood Suppression technologies.
Step 7: Control and maintain alarm system performance
Proper management of change and longer term analysis and KPI monitoring are needed, to ensure that the gains that have been achieved from performing the steps above do not dwindle away over time. Otherwise they will; the principle of “entropy” definitely applies to an alarm system.
See also
List of human-computer interaction topics, since most control systems are computer-based
Design, especially interaction design
Detection theory
Physical security
Annunciator panel
Alarm fatigue
Fault management
Notes
References
EPRI (2005) Advanced Control Room Alarm System: Requirements and Implementation Guidance. Palo Alto, CA. EPRI report 1010076.
EEMUA 191 Alarm Systems - A Guide to Design, Management and Procurement - Edition 3 (2013)
PAS - The Alarm Management Handbook - Second Edition (2010)
ASM Consortium (2009) - Effective Alarm Management Practices
ANSI/ISA–18.2–2009 - Management of Alarm Systems for the Process Industries
IEC 62682 Management of alarms systems for the process industries
Ako-Tec AG - Description of a modern Alarm Management System
Alarm Management and ISA-18 A Journey Not a Destination
RFC8632 A YANG Data Model for Alarm Management
External links
"Principles for alarm system design" YA-711 Norwegian Petroleum Directorate
Alarms
Safety
Security
Process safety
Production and manufacturing | Alarm management | [
"Chemistry",
"Technology",
"Engineering"
] | 3,459 | [
"Warning systems",
"Safety engineering",
"Alarms",
"Process safety",
"Chemical process engineering"
] |
964,161 | https://en.wikipedia.org/wiki/Modulus%20of%20continuity | In mathematical analysis, a modulus of continuity is a function ω : [0, ∞] → [0, ∞] used to measure quantitatively the uniform continuity of functions. So, a function f : I → R admits ω as a modulus of continuity if
|f(x) − f(y)| ≤ ω(|x − y|) for all x and y in the domain of f. Since moduli of continuity are required to be infinitesimal at 0, a function turns out to be uniformly continuous if and only if it admits a modulus of continuity. Moreover, relevance to the notion is given by the fact that sets of functions sharing the same modulus of continuity are exactly equicontinuous families. For instance, the modulus ω(t) := kt describes the k-Lipschitz functions, the moduli ω(t) := ktα describe the Hölder continuity, the modulus ω(t) := kt(|log t|+1) describes the almost Lipschitz class, and so on. In general, the role of ω is to fix some explicit functional dependence of ε on δ in the (ε, δ) definition of uniform continuity. The same notions generalize naturally to functions between metric spaces. Moreover, a suitable local version of these notions allows one to describe quantitatively the continuity at a point in terms of moduli of continuity.
A special role is played by concave moduli of continuity, especially in connection with extension properties, and with approximation of uniformly continuous functions. For a function between metric spaces, it is equivalent to admit a modulus of continuity that is either concave, or subadditive, or uniformly continuous, or sublinear (in the sense of growth). Actually, the existence of such special moduli of continuity for a uniformly continuous function is always ensured whenever the domain is either a compact, or a convex subset of a normed space. However, a uniformly continuous function on a general metric space admits a concave modulus of continuity if and only if the ratios
dY(f(x), f(x′))/dX(x, x′) are uniformly bounded for all pairs (x, x′) bounded away from the diagonal of X × X. The functions with the latter property constitute a special subclass of the uniformly continuous functions, that in the following we refer to as the special uniformly continuous functions. Real-valued special uniformly continuous functions on the metric space X can also be characterized as the set of all functions that are restrictions to X of uniformly continuous functions over any normed space isometrically containing X. Also, they can be characterized as the uniform closure of the Lipschitz functions on X.
Formal definition
Formally, a modulus of continuity is any increasing extended-real-valued function ω : [0, ∞] → [0, ∞], vanishing at 0 and continuous at 0, that is, ω(0) = 0 and ω(t) → 0 as t → 0+.
Moduli of continuity are mainly used to give a quantitative account both of the continuity at a point, and of the uniform continuity, for functions between metric spaces, according to the following definitions.
A function f : (X, dX) → (Y, dY) admits ω as a (local) modulus of continuity at the point x in X if and only if dY(f(x), f(y)) ≤ ω(dX(x, y)) for all y in X.
Also, f admits ω as a (global) modulus of continuity if and only if dY(f(x), f(y)) ≤ ω(dX(x, y)) for all x and y in X.
One equivalently says that ω is a modulus of continuity (resp., at x) for f, or shortly, f is ω-continuous (resp., at x). Here, we mainly treat the global notion.
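As a concrete illustration (an added example, not part of the original text), the square-root function on [0, ∞) admits ω(t) := √t as a modulus of continuity, i.e. it is Hölder with exponent α = 1/2 and constant k = 1:

```latex
% Added worked example: f(x) = sqrt(x) on [0, \infty) admits \omega(t) = \sqrt{t}.
% For 0 \le y \le x one has \sqrt{xy} \ge y, hence
\[
  \bigl(\sqrt{x}-\sqrt{y}\bigr)^{2} = x + y - 2\sqrt{xy} \;\le\; x + y - 2y = x - y,
\]
% so |\sqrt{x}-\sqrt{y}| \le \sqrt{|x-y|} = \omega(|x-y|) for all x, y \ge 0.
```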
Elementary facts
If f has ω as modulus of continuity and ω1 ≥ ω, then f admits ω1 too as modulus of continuity.
If f : X → Y and g : Y → Z are functions between metric spaces with moduli respectively ω1 and ω2, then the composition map g ∘ f has modulus of continuity ω2 ∘ ω1.
If f and g are functions from the metric space X to the Banach space Y, with moduli respectively ω1 and ω2, then any linear combination af+bg has modulus of continuity |a|ω1+|b|ω2. In particular, the set of all functions from X to Y that have ω as a modulus of continuity is a convex subset of the vector space C(X, Y), closed under pointwise convergence.
If f and g are bounded real-valued functions on the metric space X, with moduli respectively ω1 and ω2, then the pointwise product fg has modulus of continuity (sup|f|)ω2 + (sup|g|)ω1.
If (fλ)λ∈Λ is a family of real-valued functions on the metric space X with common modulus of continuity ω, then the inferior envelope infλ fλ, respectively, the superior envelope supλ fλ, is a real-valued function with modulus of continuity ω, provided it is finite valued at every point. If ω is real-valued, it is sufficient that the envelope be finite at one point of X at least.
Remarks
Some authors do not require monotonicity, and some require additional properties such as ω being continuous. However, if f admits a modulus of continuity in the weaker definition, it also admits a modulus of continuity which is increasing and infinitely differentiable in (0, ∞). For instance, ω1(t) := sup{ω(s) : 0 ≤ s ≤ t} is increasing, and ω1 ≥ ω; ω2(t) := (1/t)∫[t, 2t] ω1(s) ds is also continuous, and ω2 ≥ ω1, and a suitable variant of the preceding definition also makes ω2 infinitely differentiable in [0, ∞].
Any uniformly continuous function admits a minimal modulus of continuity ωf, which is sometimes referred to as the (optimal) modulus of continuity of f: ωf(t) := sup{dY(f(x), f(y)) : x, y in X, dX(x, y) ≤ t}. Similarly, any function continuous at the point x admits a minimal modulus of continuity at x, ωf(t; x) (the (optimal) modulus of continuity of f at x): ωf(t; x) := sup{dY(f(x), f(y)) : y in X, dX(x, y) ≤ t}. However, these restricted notions are not as relevant, for in most cases the optimal modulus of f cannot be computed explicitly, but only bounded from above (by any modulus of continuity of f). Moreover, the main properties of moduli of continuity concern directly the unrestricted definition.
In general, the modulus of continuity of a uniformly continuous function on a metric space needs to take the value +∞. For instance, the function f : N → R such that f(n) := n2 is uniformly continuous with respect to the discrete metric on N, and its minimal modulus of continuity is ωf(t) = +∞ for any t≥1, and ωf(t) = 0 otherwise. However, the situation is different for uniformly continuous functions defined on compact or convex subsets of normed spaces.
Special moduli of continuity
Special moduli of continuity also reflect certain global properties of functions such as extendibility and uniform approximation. In this section we mainly deal with moduli of continuity that are concave, or subadditive, or uniformly continuous, or sublinear. These properties are essentially equivalent in that, for a modulus ω (more precisely, its restriction on [0, ∞)) each of the following implies the next:
ω is concave;
ω is subadditive;
ω is uniformly continuous;
ω is sublinear, that is, there are constants a and b such that ω(t) ≤ at+b for all t;
ω is dominated by a concave modulus, that is, there exists a concave modulus of continuity ωc such that ω(t) ≤ ωc(t) for all t.
Thus, for a function f between metric spaces it is equivalent to admit a modulus of continuity which is either concave, or subadditive, or uniformly continuous, or sublinear. In this case, the function f is sometimes called a special uniformly continuous map. This is always true in the case of either compact or convex domains. Indeed, a uniformly continuous map f : C → Y defined on a convex set C of a normed space E always admits a subadditive modulus of continuity; in particular, one that is real-valued as a function ω : [0, ∞) → [0, ∞). Indeed, it is immediate to check that the optimal modulus of continuity ωf defined above is subadditive if the domain of f is convex: we have, for all s and t, ωf(s + t) ≤ ωf(s) + ωf(t).
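The convexity argument behind that inequality, written out as an added sketch: for x and x′ in C with ‖x − x′‖ ≤ s + t, the point z := x + (s/(s+t))(x′ − x) lies in C with ‖x − z‖ ≤ s and ‖z − x′‖ ≤ t, so

```latex
% Subadditivity of the optimal modulus on a convex domain (added sketch).
\[
  d_Y\bigl(f(x), f(x')\bigr)
  \;\le\; d_Y\bigl(f(x), f(z)\bigr) + d_Y\bigl(f(z), f(x')\bigr)
  \;\le\; \omega_f(s) + \omega_f(t),
\]
% and taking the supremum over all pairs with \|x - x'\| \le s + t gives
% \omega_f(s+t) \le \omega_f(s) + \omega_f(t).
```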
Note that as an immediate consequence, any uniformly continuous function on a convex subset of a normed space has sublinear growth: there are constants a and b such that |f(x)| ≤ a|x|+b for all x. However, a uniformly continuous function on a general metric space admits a concave modulus of continuity if and only if the ratios dY(f(x), f(x′))/dX(x, x′) are uniformly bounded for all pairs (x, x′) with distance bounded away from zero; this condition is certainly satisfied by any bounded uniformly continuous function; hence in particular, by any continuous function on a compact metric space.
Sublinear moduli, and bounded perturbations from Lipschitz
A sublinear modulus of continuity can easily be found for any uniformly continuous function which is a bounded perturbation of a Lipschitz function: if f is a uniformly continuous function with modulus of continuity ω, and g is a k-Lipschitz function with uniform distance r from f, then f admits the sublinear modulus of continuity min{ω(t), 2r+kt}. Conversely, at least for real-valued functions, any special uniformly continuous function is a bounded, uniformly continuous perturbation of some Lipschitz function; indeed more is true as shown below (Lipschitz approximation).
Subadditive moduli, and extendibility
The above property for uniformly continuous functions on convex domains admits a sort of converse, at least in the case of real-valued functions: every special uniformly continuous real-valued function f : X → R defined on a metric space X, which is a metric subspace of a normed space E, admits extensions over E that preserve any subadditive modulus ω of f. The least and the greatest of such extensions are respectively f*(x) := sup{f(y) − ω(d(x, y)) : y ∈ X} and f∗(x) := inf{f(y) + ω(d(x, y)) : y ∈ X}.
As remarked, any subadditive modulus of continuity is uniformly continuous: in fact, it admits itself as a modulus of continuity. Therefore, f∗ and f* are respectively inferior and superior envelopes of ω-continuous families; hence still ω-continuous. Incidentally, by the Kuratowski embedding any metric space is isometric to a subset of a normed space. Hence, special uniformly continuous real-valued functions are essentially the restrictions of uniformly continuous functions on normed spaces. In particular, this construction provides a quick proof of the Tietze extension theorem on compact metric spaces. However, for mappings with values in more general Banach spaces than R, the situation is considerably more complicated; the first non-trivial result in this direction is the Kirszbraun theorem.
Concave moduli and Lipschitz approximation
Every special uniformly continuous real-valued function f : X → R defined on the metric space X is uniformly approximable by means of Lipschitz functions. Moreover, the speed of convergence in terms of the Lipschitz constants of the approximations is strictly related to the modulus of continuity of f. Precisely, let ω be the minimal concave modulus of continuity of f, that is, the least concave majorant of the optimal modulus of continuity ωf of f.
Let δ(s) be the uniform distance between the function f and the set Lips of all Lipschitz real-valued functions on X having Lipschitz constant s: δ(s) := inf{ sup{|f(x) − g(x)| : x ∈ X} : g ∈ Lips }.
Then the functions ω(t) and δ(s) can be related with each other via a Legendre transformation: more precisely, the functions 2δ(s) and −ω(−t) (suitably extended to +∞ outside their domains of finiteness) are a pair of conjugated convex functions, for 2δ(s) = sup{ω(t) − st : t ≥ 0} for every s ≥ 0.
Since ω(t) = o(1) for t → 0+, it follows that δ(s) = o(1) for s → +∞, which means exactly that f is uniformly approximable by Lipschitz functions. Correspondingly, an optimal approximation is given by the functions fs(x) := inf{f(y) + s·d(x, y) : y ∈ X} + δ(s);
each function fs has Lipschitz constant s and sup{|f(x) − fs(x)| : x ∈ X} = δ(s);
in fact, it is the greatest s-Lipschitz function that realizes the distance δ(s). For example, the α-Hölder real-valued functions on a metric space are characterized as those functions that can be uniformly approximated by s-Lipschitz functions with speed of convergence δ(s) = O(s^(−α/(1−α))), while the almost Lipschitz functions are characterized by an exponential speed of convergence δ(s) = O(e^(−as)) for some a > 0.
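As a check on the stated Hölder rate (an added computation, assuming the relation 2δ(s) = sup{ω(t) − st : t ≥ 0} from the previous section and the modulus ω(t) = kt^α with 0 < α < 1):

```latex
% Rate of Lipschitz approximation for a Hölder modulus (added sketch).
\[
  2\delta(s) = \sup_{t\ge 0}\bigl(k t^{\alpha} - s t\bigr).
\]
% The supremum is attained where k\alpha t^{\alpha-1} = s, i.e. at t_* = (k\alpha/s)^{1/(1-\alpha)}, so
\[
  2\delta(s) = k t_*^{\alpha} - s t_*
  = \frac{1-\alpha}{\alpha}\, s\, t_*
  = (1-\alpha)\,\alpha^{\frac{\alpha}{1-\alpha}}\, k^{\frac{1}{1-\alpha}}\, s^{-\frac{\alpha}{1-\alpha}},
\]
% hence \delta(s) = O(s^{-\alpha/(1-\alpha)}) as s \to \infty, as stated.
```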
Examples of use
Let f : [a, b] → R be a continuous function. In the proof that f is Riemann integrable, one usually bounds the distance between the upper and lower Riemann sums with respect to the Riemann partition P := {t0, ..., tn} in terms of the modulus of continuity of f and the mesh of the partition P (which is the number |P| := max{ti − ti−1 : 1 ≤ i ≤ n}).
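Written out explicitly (an added sketch; Mi and mi denote the supremum and infimum of f on [ti−1, ti], and U, L the upper and lower sums), the bound reads:

```latex
% Gap between upper and lower Riemann sums, bounded by the modulus of continuity
% (added sketch). Here M_i = sup f and m_i = inf f on [t_{i-1}, t_i].
\[
  U(f,P) - L(f,P)
  = \sum_{i=1}^{n} (M_i - m_i)\,(t_i - t_{i-1})
  \le \sum_{i=1}^{n} \omega_f\!\bigl(|P|\bigr)\,(t_i - t_{i-1})
  = \omega_f\!\bigl(|P|\bigr)\,(b-a),
\]
% which tends to 0 as the mesh |P| tends to 0, since f is uniformly continuous on [a,b].
```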
For an example of use in the Fourier series, see Dini test.
History
Steffens (2006, p. 160) attributes the first usage of omega for the modulus of continuity to Lebesgue (1909, p. 309/p. 75) where omega refers to the oscillation of a Fourier transform. De la Vallée Poussin (1919, pp. 7-8) mentions both names (1) "modulus of continuity" and (2) "modulus of oscillation" and then concludes "but we choose (1) to draw attention to the usage we will make of it".
The translation group of Lp functions, and moduli of continuity Lp.
Let 1 ≤ p ≤ ∞; let f : Rn → R be a function of class Lp, and let h ∈ Rn. The h-translation of f, the function defined by (τhf)(x) := f(x−h), belongs to the Lp class; moreover, if 1 ≤ p < ∞, then as ‖h‖ → 0 we have ‖τhf − f‖p → 0.
Therefore, since translations are in fact linear isometries, also
‖τv+hf − τvf‖p = ‖τhf − f‖p → 0 as ‖h‖ → 0, uniformly in v ∈ Rn.
In other words, the map h → τh defines a strongly continuous group of linear isometries of Lp. In the case p = ∞ the above property does not hold in general: actually, it exactly reduces to uniform continuity, and defines the uniformly continuous functions. This leads to the following definition, which generalizes the notion of a modulus of continuity of the uniformly continuous functions: a modulus of continuity Lp for a measurable function f : X → R is a modulus of continuity ω : [0, ∞] → [0, ∞] such that ‖τhf − f‖p ≤ ω(‖h‖) for all h.
This way, moduli of continuity also give a quantitative account of the continuity property shared by all Lp functions.
Modulus of continuity of higher orders
It can be seen that the formal definition of the modulus uses the notion of finite difference of first order: ω(f, δ) := sup{ |f(x + h) − f(x)| : x, x + h in the domain of f, |h| ≤ δ }.
If we replace that difference with a difference of order n, we get a modulus of continuity of order n: ωn(f, δ) := sup{ |Δh^n f(x)| : x, x + nh in the domain of f, |h| ≤ δ }, where Δh^n f(x) := Σ from k = 0 to n of (−1)^(n−k) (n choose k) f(x + kh) is the finite difference of order n.
See also
Constructive analysis
Modulus of convergence
References
Lipschitz maps
Approximation theory
Constructivism (mathematics)
Fourier analysis | Modulus of continuity | [
"Mathematics"
] | 3,069 | [
"Approximation theory",
"Mathematical logic",
"Mathematical relations",
"Constructivism (mathematics)",
"Approximations"
] |
964,229 | https://en.wikipedia.org/wiki/Protoplast | Protoplast is a biological term coined by Hanstein in 1880 to refer to the entire cell, excluding the cell wall. Protoplasts can be generated by stripping the cell wall from plant, bacterial, or fungal cells by mechanical, chemical or enzymatic means.
Protoplasts differ from spheroplasts in that their cell wall has been completely removed. Spheroplasts retain part of their cell wall. In the case of Gram-negative bacterial spheroplasts, for example, the peptidoglycan component of the cell wall has been removed but the outer membrane component has not.
Enzymes for the preparation of protoplasts
Cell walls are made of a variety of polysaccharides. Protoplasts can be made by degrading cell walls with a mixture of the appropriate polysaccharide-degrading enzymes, such as cellulase and pectinase for plant cells, lysozyme for bacterial cells, and chitinase for fungal cells.
During and subsequent to digestion of the cell wall, the protoplast becomes very sensitive to osmotic stress. This means cell wall digestion and protoplast storage must be done in an isotonic solution to prevent rupture of the plasma membrane.
Uses for protoplasts
Protoplasts can be used to study membrane biology, including the uptake of macromolecules and viruses. They are also used in somaclonal variation.
Protoplasts are widely used for DNA transformation (for making genetically modified organisms), since the cell wall would otherwise block the passage of DNA into the cell. In the case of plant cells, protoplasts may be regenerated into whole plants first by growing into a group of plant cells that develops into a callus and then by regeneration of shoots (caulogenesis) from the callus using plant tissue culture methods. Growth of protoplasts into callus and regeneration of shoots requires the proper balance of plant growth regulators in the tissue culture medium that must be customized for each species of plant. Unlike protoplasts from vascular plants, protoplasts from mosses, such as Physcomitrella patens, do not need phytohormones for regeneration, nor do they form a callus during regeneration. Instead, they regenerate directly into the filamentous protonema, mimicking a germinating moss spore.
Protoplasts may also be used for plant breeding, using a technique called protoplast fusion. Protoplasts from different species are induced to fuse by using an electric field or a solution of polyethylene glycol. This technique may be used to generate somatic hybrids in tissue culture.
Additionally, protoplasts of plants expressing fluorescent proteins in certain cells may be used for Fluorescence Activated Cell Sorting (FACS), where only cells fluorescing a selected wavelength are retained. Among other things, this technique is used to isolate specific cell types (e.g., guard cells from leaves, pericycle cells from roots) for further investigations, such as transcriptomics.
See also
Bacterial morphological plasticity
L-form bacteria
Spheroplasts
References
Cell biology
Membrane biology
Molecular biology
Plant physiology
Plant reproduction | Protoplast | [
"Chemistry",
"Biology"
] | 649 | [
"Plant physiology",
"Behavior",
"Cell biology",
"Plant reproduction",
"Plants",
"Reproduction",
"Membrane biology",
"Molecular biology",
"Biochemistry"
] |
964,378 | https://en.wikipedia.org/wiki/Ramachandran%20plot | In biochemistry, a Ramachandran plot (also known as a Rama plot, a Ramachandran diagram or a [φ,ψ] plot), originally developed in 1963 by G. N. Ramachandran, C. Ramakrishnan, and V. Sasisekharan, is a way to visualize energetically allowed regions for the backbone dihedral angles (also called torsion angles) ψ against φ of amino acid residues in protein structure. The figure on the left illustrates the definition of the φ and ψ backbone dihedral angles (called φ and φ' by Ramachandran). The ω angle at the peptide bond is normally 180°, since the partial-double-bond character keeps the peptide bond planar. The figure in the top right shows the allowed φ,ψ backbone conformational regions from the Ramachandran et al. 1963 and 1968 hard-sphere calculations: full radius in solid outline, reduced radius in dashed, and relaxed tau (N-Cα-C) angle in dotted lines. Because dihedral angle values are circular and 0° is the same as 360°, the edges of the Ramachandran plot "wrap" right-to-left and bottom-to-top. For instance, the small strip of allowed values along the lower-left edge of the plot is a continuation of the large, extended-chain region at upper left.
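The following sketch (an added example, not from the article) shows how such a backbone dihedral angle can be computed from four atom positions with NumPy; for φ the atoms are C(i−1), N(i), Cα(i), C(i), and for ψ they are N(i), Cα(i), C(i), N(i+1). The coordinates below are made up, and the sign convention should be validated against a standard tool before being relied upon.

```python
# Added sketch: dihedral angle from four atom positions.
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Return the dihedral angle in degrees defined by four points."""
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # Components of b0 and b2 perpendicular to the central bond b1.
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# phi(i) uses C(i-1), N(i), CA(i), C(i); the coordinates here are arbitrary.
C_prev = np.array([1.50, 0.00, 0.00])
N      = np.array([0.00, 0.00, 0.00])
CA     = np.array([-0.60, 1.35, 0.00])
C      = np.array([-2.05, 1.40, 0.45])
print(f"phi = {dihedral(C_prev, N, CA, C):.1f} degrees")
```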
Uses
A Ramachandran plot can be used in two somewhat different ways. One is to show in theory which values, or conformations, of the ψ and φ angles are possible for an amino-acid residue in a protein (as at top right). A second is to show the empirical distribution of datapoints observed in a single structure (as at right, here), in usage for structure validation, or else in a database of many structures (as in the lower 3 plots at left). It is also used to study drug-ligand interactions and is helpful in the pharmaceutical industry. Either case is usually shown against outlines for the theoretically favored regions.
Amino-acid preferences
One might expect that larger side chains would result in more restrictions and consequently a smaller allowable region in the Ramachandran plot, but the effect of side chains is small. In practice, the major effect seen is that of the presence or absence of the methylene group at Cβ. Glycine has only a hydrogen atom for its side chain, with a much smaller van der Waals radius than the CH3, CH2, or CH group that starts the side chain of all other amino acids. Hence it is least restricted, and this is apparent in the Ramachandran plot for glycine (see Gly plot in gallery) for which the allowable area is considerably larger. In contrast, the Ramachandran plot for proline, with its 5-membered-ring side chain connecting Cα to backbone N, shows a limited number of possible combinations of ψ and φ (see Pro plot in gallery). The residue preceding proline ("pre-proline") also has limited combinations compared to the general case.
More recent updates
The first Ramachandran plot was calculated just after the first protein structure at atomic resolution was determined (myoglobin, in 1960), although the conclusions were based on small-molecule crystallography of short peptides. Now, many decades later, there are tens of thousands of high-resolution protein structures determined by X-ray crystallography and deposited in the Protein Data Bank (PDB). Many studies have taken advantage of this data to produce more detailed and accurate φ,ψ plots (e.g., Morris et al. 1992; Kleywegt & Jones 1996; Hooft et al. 1997; Hovmöller et al. 2002; Lovell et al. 2003; Anderson et al. 2005; Ting et al. 2010).
The four figures below show the datapoints from a large set of high-resolution structures and contours for favored and for allowed conformational regions for the general case (all amino acids except Gly, Pro, and pre-Pro), for Gly, and for Pro. The most common regions are labeled: α for α helix, Lα for left-handed helix, β for β-sheet, and ppII for polyproline II. Such a clustering is alternatively described in the ABEGO system, where each letter stands for α (and 310) helix, right-handed β sheets (and extended structures), left-handed helixes, left-handed sheets, and finally unplottable cis peptide bonds sometimes seen with proline; it has been used in the classification of motifs and more recently for designing proteins.
While the Ramachandran plot has been a textbook resource for explaining the structural behavior of the peptide bond, an exhaustive exploration of how a peptide behaves in every region of the Ramachandran plot was only recently published (Mannige 2017).
The Molecular Biophysics Unit at Indian Institute of Science celebrated 50 years of Ramachandran Map by organizing International Conference on Biomolecular Forms and Functions from 8–11 January 2013.
Related conventions
One can also plot the dihedral angles in polysaccharides (e.g. with CARP).
Gallery
Software
Web-based Structural Analysis tool for any uploaded PDB file, producing Ramachandran plots, computing dihedral angles and extracting sequence from PDB
Web-based tool showing Ramachandran plot of any PDB entry
MolProbity web service that produces Ramachandran plots and other validation of any PDB-format file
SAVES (Structure Analysis and Verification) — uses WHATCHECK, PROCHECK, and does its own internal Ramachandran Plot
STING
Pymol with the DynoPlot extension
VMD, distributed with dynamic Ramachandran plot plugin
WHAT CHECK, the stand-alone validation routines from the WHAT IF software
UCSF Chimera, found under the Model Panel.
Sirius
Swiss PDB Viewer
TALOS
Zeus molecular viewer — found under "Tools" menu, high quality plots with regional contours
Procheck
Neighbor-Dependent and Neighbor-Independent Ramachandran Probability Distributions
See also PDB for a list of similar software.
References
Further reading
External links
DynoPlot in PyMOL wiki
Link to Ramachandran Plot Map of alpha-helix and beta-sheet locations
Link to Ramachandran plot calculated from protein structures determined by X-ray crystallography compared to the original Ramachandran plot
Proteopedia Ramachandran Plot
Biochemistry methods
Plots (graphics) | Ramachandran plot | [
"Chemistry",
"Biology"
] | 1,353 | [
"Biochemistry methods",
"Biochemistry"
] |
964,428 | https://en.wikipedia.org/wiki/Weighing%20scale | A scale or balance is a device used to measure weight or mass. These are also known as mass scales, weight scales, mass balances, massometers, and weight balances.
The traditional scale consists of two plates or bowls suspended at equal distances from a fulcrum. One plate holds an object of unknown mass (or weight), while objects of known mass or weight, called weights, are added to the other plate until mechanical equilibrium is achieved and the plates level off, which happens when the masses on the two plates are equal. The perfect scale rests at neutral. A spring scale will make use of a spring of known stiffness to determine mass (or weight). Suspending a certain mass will extend the spring by a certain amount depending on the spring's stiffness (or spring constant). The heavier the object, the more the spring stretches, as described in Hooke's law. Other types of scales making use of different physical principles also exist.
Some scales can be calibrated to read in units of force (weight) such as newtons instead of units of mass such as kilograms. Scales and balances are widely used in commerce, as many products are sold and packaged by mass.
Pan balance
History
The balance scale is such a simple device that its usage likely far predates the evidence. What has allowed archaeologists to link artifacts to weighing scales are the stones for determining absolute mass. The balance scale itself was probably used to determine relative mass long before absolute mass.
The oldest attested evidence for the existence of weighing scales dates to the Fourth Dynasty of Egypt, with deben balance weights from the reign of Sneferu (c. 2600 BC) excavated, though earlier usage has been proposed. Carved stones bearing marks denoting mass and the Egyptian hieroglyphic symbol for gold have been discovered, which suggests that Egyptian merchants had been using an established system of mass measurement to catalog gold shipments or gold mine yields. Although no actual scales from this era have survived, many sets of weighing stones as well as murals depicting the use of balance scales suggest widespread usage.
Examples, dating , have also been found in the Indus River valley. Uniform, polished stone cubes discovered in early settlements were probably used as mass-setting stones in balance scales. Although the cubes bear no markings, their masses are multiples of a common denominator. The cubes are made of many different kinds of stones with varying densities. Clearly their mass, not their size or other characteristics, was a factor in sculpting these cubes.
In China, the earliest weighing balance excavated was from a tomb of the State of Chu of the Chinese Warring States Period dating back to the 3rd to 4th century BC in Mount Zuojiagong near Changsha, Hunan. The balance was made of wood and used bronze masses.
Variations on the balance scale, including devices like the cheap and inaccurate bismar (unequal-armed scales), began to see common usage by c. 400 BC by many small merchants and their customers. A plethora of scale varieties each boasting advantages and improvements over one another appear throughout recorded history, with such great inventors as Leonardo da Vinci lending a personal hand in their development.
Even with all the advances in weighing scale design and development, all scales until the seventeenth century AD were variations on the balance scale. The standardization of the weights used – and ensuring traders used the correct weights – was a considerable preoccupation of governments throughout this time.
The original form of a balance consisted of a beam with a fulcrum at its center. For highest accuracy, the fulcrum would consist of a sharp V-shaped pivot seated in a shallower V-shaped bearing. To determine the mass of the object, a combination of reference masses was hung on one end of the beam while the object of unknown mass was hung on the other end (see balance and steelyard balance). For high precision work, such as empirical chemistry, the center beam balance is still one of the most accurate technologies available, and is commonly used for calibrating test masses.
Bronze fragments discovered in central Germany and Italy had been used during the Bronze Age as an early form of currency. In the same time period, merchants had used standard weights of equivalent value between 8 and 10.5 grams from Great Britain to Mesopotamia.
Mechanical balances
The balance (also balance scale, beam balance and laboratory balance) was the first mass measuring instrument invented. In its traditional form, it consists of a pivoted horizontal lever with arms of equal length (the beam or tron) and a weighing pan suspended from each arm (hence the plural name "scales" for a weighing instrument). The unknown mass is placed in one pan and standard masses are added to the other pan until the beam is as close to equilibrium as possible. In precision balances, a more accurate determination of the mass is given by the position of a sliding mass moved along a graduated scale. A decimal balance uses a lever in which the arm for weights is 10 times longer than the arm for weighted objects, so that much lighter weights may be used to weigh heavy objects. Similarly a centesimal balance uses arms in ratio 1:100.
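As a quick check of the decimal arrangement by balancing torques (an added illustration; the 10:1 arm ratio is the one stated above, the numbers are arbitrary):

```latex
% Decimal balance (added sketch): the weights arm is 10 times the load arm, so at
% equilibrium the torques about the fulcrum satisfy
\[
  m_{\mathrm{load}}\, g\, L \;=\; m_{\mathrm{weights}}\, g\,(10L)
  \quad\Longrightarrow\quad
  m_{\mathrm{load}} \;=\; 10\, m_{\mathrm{weights}},
\]
% so, for example, a 5 kg set of reference weights balances a 50 kg load.
```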
Unlike spring-based scales, balances are used for the precision measurement of mass as their accuracy is not affected by variations in the local gravitational field. (On Earth, for example, these can amount to ±0.5% between locations.) A change in the strength of the gravitational field caused by moving the balance does not change the measured mass, because the moments of force on either side of the center balanced beam are affected equally. A center beam balance will render an accurate measurement of mass at any location experiencing a constant gravity or acceleration.
Very precise measurements are achieved by ensuring that the balance's fulcrum is essentially friction-free (a knife edge is the traditional solution), by attaching a pointer to the beam which amplifies any deviation from a balance position; and finally by using the lever principle, which allows fractional masses to be applied by movement of a small mass along the measuring arm of the beam, as described above. For greatest accuracy, there needs to be an allowance for the buoyancy in air, whose effect depends on the densities of the masses involved.
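One common form of that allowance, given here as an added sketch for a two-pan comparison (the symbols mx, mr, ρair, ρx, ρr are introduced for the illustration): at equilibrium the net downward forces, weight minus air buoyancy, on the two pans are equal, so

```latex
% Air-buoyancy allowance (added sketch). Equating the net forces on the two pans:
%   m_x g (1 - rho_air / rho_x) = m_r g (1 - rho_air / rho_r)
\[
  m_x \;=\; m_r\;\frac{1 - \rho_{\mathrm{air}}/\rho_{r}}{1 - \rho_{\mathrm{air}}/\rho_{x}},
\]
% where m_r and rho_r are the mass and density of the reference weights and
% rho_x is the density of the object being weighed. The correction is largest
% when the two densities differ strongly (e.g. water weighed against steel weights).
```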
To reduce the need for large reference masses, an off-center beam can be used. A balance with an off-center beam can be almost as accurate as a scale with a center beam, but the off-center beam requires special reference masses and cannot be intrinsically checked for accuracy by simply swapping the contents of the pans as a center-beam balance can. To reduce the need for small graduated reference masses, a sliding weight called a poise can be installed so that it can be positioned along a calibrated scale. A poise adds further intricacies to the calibration procedure, since the exact mass of the poise must be adjusted to the exact lever ratio of the beam.
For greater convenience in placing large and awkward loads, a platform can be floated on a cantilever beam system which brings the proportional force to a nose-iron bearing; this pulls on a steelyard rod to transmit the reduced force to a conveniently sized beam.
One still sees this design in portable beam balances of 500 kg capacity which are commonly used in harsh environments without electricity, as well as in the lighter duty mechanical bathroom scale (which actually uses a spring scale, internally). The additional pivots and bearings all reduce the accuracy and complicate calibration; the float system must be corrected for corner errors before the span is corrected by adjusting the balance beam and poise.
Roberval balance
In 1669 the Frenchman Gilles Personne de Roberval presented a new kind of balance scale to the French Academy of Sciences. This scale consisted of a pair of vertical columns separated by a pair of equal-length arms and pivoting in the center of each arm from a central vertical column, creating a parallelogram. From the side of each vertical column a peg extended. To the amazement of observers, no matter where Roberval hung two equal weights along the pegs, the scale still balanced. In this sense, the scale was revolutionary: it evolved into the more commonly encountered form consisting of two pans placed on vertical columns located above the fulcrum and the parallelogram below them. The advantage of the Roberval design is that no matter where equal weights are placed in the pans, the scale will still balance.
Further developments have included a "gear balance" in which the parallelogram is replaced by any odd number of interlocking gears greater than one, with alternating gears of the same size and with the central gear fixed to a stand and the outside gears fixed to pans, as well as the "sprocket gear balance" consisting of a bicycle-type chain looped around an odd number of sprockets with the central one fixed and the outermost two free to pivot and attached to a pan.
Because it has more moving joints which add friction, the Roberval balance is consistently less accurate than the traditional beam balance, but for many purposes this is compensated for by its usability.
Torsion balance
The torsion balance is one of the most mechanically accurate of analog balances. Pharmacy schools still teach how to use torsion balances in the U.S. It utilizes pans like a traditional balance that lie on top of a mechanical chamber which bases measurements on the amount of twisting of a wire or fiber inside the chamber. The scale must still use a calibration weight to compare against, and can weigh objects greater than 120 mg within a margin of error of ±7 mg. Many microbalances and ultra-microbalances that weigh fractional gram values are torsion balances. A common fiber type is quartz crystal.
Electronic devices
Microbalance
A microbalance (also called an ultramicrobalance, or nanobalance) is an instrument capable of making precise measurements of the mass of objects of relatively small mass: on the order of one millionth of a gram (a microgram) and below.
Analytical balance
An analytical balance is a class of balance designed to measure small mass in the sub-milligram range. The measuring pan of an analytical balance (0.1 mg or better) is inside a transparent enclosure with doors so that dust does not collect and so any air currents in the room do not affect the balance's operation. This enclosure is often called a draft shield. The use of a mechanically vented balance safety enclosure, which has uniquely designed acrylic airfoils, allows a smooth turbulence-free airflow that prevents balance fluctuation and the measure of mass down to 1 μg without fluctuations or loss of product. Also, the sample must be at room temperature to prevent natural convection from forming air currents inside the enclosure from causing an error in reading. Single-pan mechanical substitution balances maintain consistent response throughout the useful capacity, which is achieved by maintaining a constant load on the balance beam and thus the fulcrum by subtracting mass on the same side of the beam to which the sample is added.
Electronic analytical scales measure the force needed to counter the mass being measured rather than using actual masses. As such they must have calibration adjustments made to compensate for gravitational differences. They use an electromagnet to generate a force to counter the sample being measured and output the result by measuring the force needed to achieve balance. Such a measurement device is called an electromagnetic force restoration sensor.
Pendulum balance scales
Pendulum type scales do not use springs. These designs use pendulums and operate as a balance that is unaffected by differences in gravity. An example of the application of this design is the scales made by the Toledo Scale Company.
Programmable scales
A programmable scale has a programmable logic controller in it, allowing it to be programmed for various applications such as batching, labeling, filling (with check weight function), truck scales, and more.
Another important function is counting, e.g. to count small parts in larger quantities during the annual stock taking. Counting scales (which can also do just weighing) can range from mg to tonnes.
Symbolism
The scales (specifically, a two-pan, beam balance) are one of the traditional symbols of justice, as wielded by statues of Lady Justice. This corresponds to the use in a metaphor of matters being "held in the balance". It has its origins in ancient Egypt.
Scales also are widely used as a symbol of finance, commerce, or trade, in which they have played a traditional, vital role since ancient times. For instance, balance scales are depicted in the seal of the U.S. Department of the Treasury and the Federal Trade Commission.
Scales are also the symbol for the astrological sign Libra.
Scales (specifically, a two-pan, beam balance in a state of equal balance) are the traditional symbol of Pyrrhonism indicating the equal balance of arguments used in inducing epoche.
Force-measuring (weight) scales
History
Although records dating to the 1700s refer to spring scales for measuring mass, the earliest design for such a device dates to 1770 and credits Richard Salter, an early scale-maker. Spring scales came into wide usage in the United Kingdom after 1840 when R. W. Winfield developed the candlestick scale for weighing letters and packages, required after the introduction of the Uniform Penny Post. Postal workers could work more quickly with spring scales than balance scales because they could be read instantaneously and did not have to be carefully balanced with each measurement.
By the 1940s, various electronic devices were being attached to these designs to make readings more accurate. Load cells – transducers that convert force to an electrical signal – have their beginnings as early as the late nineteenth century, but it was not until the late twentieth century that their widespread usage became economically and technologically viable.
Mechanical scales
A mechanical scale or balance is a weighing device used to measure the mass, force exertion, tension, and resistance of an object without the need for a power supply. Types of mechanical scales include decimal balances, spring scales, hanging scales, triple beam balances, and force gauges.
Spring scales
A spring scale measures mass by reporting the distance that a spring deflects under a load. This contrasts to a balance, which compares the torque on the arm due to a sample weight to the torque on the arm due to a standard reference mass using a horizontal lever. Spring scales measure force, which is the tension force of constraint acting on an object, opposing the local force of gravity. They are usually calibrated so that measured force translates to mass at earth's gravity. The object to be weighed can be simply hung from the spring or set on a pivot and bearing platform.
In a spring scale, the spring either stretches (as in a hanging scale in the produce department of a grocery store) or compresses (as in a simple bathroom scale). By Hooke's law, every spring has a proportionality constant that relates how hard it is pulled to how far it stretches. Weighing scales use a spring with a known spring constant (see Hooke's law) and measure the displacement of the spring by any variety of mechanisms to produce an estimate of the gravitational force applied by the object. Rack and pinion mechanisms are often used to convert the linear spring motion to a dial reading.
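To illustrate the relationship just described, the short Python sketch below converts a measured spring deflection into an estimated mass via Hooke's law. The spring constant and local gravity used here are hypothetical example values, not parameters of any particular scale.

# Estimate mass on a spring scale from spring deflection (Hooke's law).
# k (spring constant) and g (local gravity) are hypothetical example values.
def mass_from_deflection(deflection_m, k_newton_per_m=2500.0, g=9.81):
    force = k_newton_per_m * deflection_m   # restoring force balances the weight (F = k * x)
    return force / g                        # weight = m * g, so m = F / g

print(mass_from_deflection(0.004))  # 4 mm deflection -> about 1.02 kg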
Spring scales have two sources of error that balances do not: the measured mass varies with the strength of the local gravitational force (by as much as 0.5% at different locations on Earth), and the elasticity of the measurement spring can vary slightly with temperature. With proper manufacturing and setup, however, spring scales can be rated as legal for commerce. To remove the temperature error, a commerce-legal spring scale must either have temperature-compensated springs or be used at a fairly constant temperature. To eliminate the effect of gravity variations, a commerce-legal spring scale must be calibrated where it is used.
Hydraulic or pneumatic scale
It is also common in high-capacity applications such as crane scales to use hydraulic force to sense mass. The test force is applied to a piston or diaphragm and transmitted through hydraulic lines to a dial indicator based on a Bourdon tube or electronic sensor.
Domestic weighing scale
Electronic digital scales display weight as a number, usually on a liquid crystal display (LCD). They are versatile because they may perform calculations on the measurement and transmit it to other digital devices. On a digital scale, the force of the weight causes a spring to deform, and the amount of deformation is measured by one or more transducers called strain gauges. A strain gauge is a conductor whose electrical resistance changes when its length changes. Strain gauges have limited capacity and larger digital scales may use a hydraulic transducer called a load cell instead. A voltage is applied to the device, and the weight causes the current through it to change. The current is converted to a digital number by an analog-to-digital converter, translated by digital logic to the correct units, and displayed on the display. Usually, the device is run by a microprocessor chip.
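As a rough, hypothetical sketch of the signal chain described above, the following Python snippet converts raw analog-to-digital converter counts into a weight using a simple two-point (tare and span) calibration. The count values and reference mass are invented for the example; real scales store calibration constants determined at setup.

# Convert raw ADC counts from a strain-gauge bridge into a weight reading
# using a simple two-point (tare / span) calibration. All numbers are
# hypothetical example values.
TARE_COUNTS = 84210          # ADC reading with the platform empty
SPAN_COUNTS = 151873         # ADC reading with a known reference mass on the platform
SPAN_MASS_KG = 5.000         # the known reference mass

def counts_to_kg(counts):
    scale = SPAN_MASS_KG / (SPAN_COUNTS - TARE_COUNTS)  # kg per ADC count
    return (counts - TARE_COUNTS) * scale

print(round(counts_to_kg(117500), 3))  # -> about 2.46 kg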
Digital bathroom scale
A digital bathroom scale is a scale on the floor which a person stands on. The weight is shown on an LED or LCD display. The digital electronics may do more than just display weight, it may calculate body fat, BMI, lean mass, muscle mass, and water ratio. Some modern bathroom scales are wirelessly or cellularly connected and have features like smartphone integration, cloud storage, and fitness tracking. They are usually powered by a button cell, or battery of AA or AAA size.
Digital kitchen scale
Digital kitchen scales are used for weighing food in a kitchen during cooking. These are usually lightweight and compact.
Strain gauge scale
In electronic versions of spring scales, the deflection of a beam supporting the unknown mass is measured using a strain gauge, which is a length-sensitive electrical resistance. The capacity of such devices is only limited by the resistance of the beam to deflection. The results from several supporting locations may be added electronically, so this technique is suitable for determining the mass of very heavy objects, such as trucks and rail cars, and is used in a modern weighbridge.
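A minimal sketch of the electronic summing mentioned above, assuming one calibrated reading per supporting load cell; the readings are hypothetical.

# Combine the readings from several supporting load cells, as in a weighbridge.
# Each cell reports the portion of the load it carries; the total is the sum.
# The readings below are hypothetical.
corner_readings_kg = [11840.0, 12210.0, 11975.0, 12055.0]  # one value per load cell

total_kg = sum(corner_readings_kg)
print(f"gross vehicle mass: {total_kg:.0f} kg")  # -> 48080 kg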
Supermarket and other retail scale
These scales are used in the modern bakery, grocery, delicatessen, seafood, meat, produce and other perishable goods departments. Supermarket scales can print labels and receipts, mark mass and count, unit price, total price and in some cases tare. Some modern supermarket scales print an RFID tag that can be used to track the item for tampering or returns. In most cases, these types of scales have a sealed calibration so that the reading on the display is correct and cannot be tampered with. In the US, the scales are certified by the National Type Evaluation Program (NTEP), in South Africa by the South African Bureau of Standards, in Australia, they are certified by the National Measurement Institute (NMI) and in the UK by the International Organization of Legal Metrology.
Industrial weighing scale
An industrial weighing scale is a device that measures the weight or mass of objects in various industries. It can range from small bench scales to large weighbridges, and it can have different features and capacities. Industrial weighing scales are used for quality control, inventory management, and trade purposes.
There are many kinds of industrial weighing scales that are used for different purposes and applications. Some of the common types are:
Weighbridges: A large scale that can weigh trucks, lorries, containers, and other heavy-duty vehicles. They are used in industries like manufacturing, shipping, mining, and agriculture.
Container stacker scale: A container stacker scale is a specialized weighing system designed for accurately measuring the weight of shipping containers. It is typically integrated into the equipment used for loading and unloading containers, such as container handlers or stacker cranes. Container stacker scales provide real-time weight measurements, allowing logistics professionals to ensure that each container is loaded within the specified weight limits. They are used in industries like ports, shipping, and logistics.
Forklift scale: A forklift scale is a weighing system that is built into a forklift truck. It allows loads to be weighed while they are being lifted and transported by the forklift. This eliminates the need for separate weighing operations and reduces the time and labor required for material handling. Forklift scales are used in various industries, such as manufacturing, logistics, and shipping.
Material handler scale: A material handler scale is a weighing system that is integrated into a material handler machine, such as a grapple or a magnet. It allows for the accurate and efficient weighing of materials while they are being moved, unloaded, or loaded. Material handler scales are used in various industries, such as scrap, recycling, waste, and port and harbor operations. They can also transfer the weighing information to a cloud service or an ERP system for real-time monitoring and management of material flow.
Pallet jack scale: A pallet jack scale is a device that combines a pallet jack and a weighing scale. It allows pallets to be weighed and moved at the same time, saving time and labor. Pallet jack scales are used in various industries, such as manufacturing, logistics, and shipping.
Crane scale: A crane scale is a device that measures the weight or mass of objects that are suspended from a crane. It has a hook at the bottom and a large display that allows distant viewing. Crane scales are used for various industrial applications, such as manufacturing, shipping, mining, and recycling.
Wheel loader scale: A wheel loader scale is a system that measures the weight of the materials lifted by a wheel loader, a type of heavy machinery used for moving large amounts of earth, sand, gravel, or other materials. A wheel loader scale can help improve the efficiency and accuracy of loading operations, as well as the inventory management and safety of the industries that use them. It typically consists of a hydraulic sensor, a display unit, and a data management system. The hydraulic sensor is installed in the wheel loader and detects the pressure changes caused by the load. The display unit shows the weight information to the operator and allows them to set target loads, select products and customers, and export data. The data management system can store, analyze, and transmit the weight data to other devices or platforms.
Testing and certification
Most countries regulate the design and servicing of scales used for commerce. For example, in the European Union weighing instruments are subject to the 2014/31/EU and 2014/32/EU directives. A conformity assessment procedure is carried out before placing the instrument on the market, and the instruments are verified after a given period of time in member states of the European Union. This has tended to cause scale technology to lag behind other technologies because expensive regulatory hurdles are involved in introducing new designs. Nevertheless, there has been a trend to "digital load cells" which are actually strain-gauge cells with dedicated analog converters and networking built into the cell itself. Such designs have reduced the service problems inherent with combining and transmitting a number of 20 millivolt signals in hostile environments.
Government regulation generally requires periodic inspections by licensed technicians, using masses whose calibration is traceable to an approved laboratory. Scales intended for non-trade use, such as those used in bathrooms, doctor's offices, kitchens (portion control), and price estimation (but not official price determination), may be produced, but must by law be labelled "Not Legal for Trade" to ensure that they are not re-purposed in a way that jeopardizes commercial interest. In the United States, the document describing how scales must be designed, installed, and used for commercial purposes is NIST Handbook 44. Legal For Trade (LFT) certification usually approves the readability by testing repeatability of measurements to ensure a maximum margin of error of 10%.
Because gravity varies by over 0.5% over the surface of the earth, the distinction between force due to gravity and mass is relevant for accurate calibration of scales for commercial purposes. Usually, the goal is to measure the mass of the sample rather than its force due to gravity at that particular location.
Traditional mechanical balance-beam scales intrinsically measured mass. But ordinary electronic scales intrinsically measure the gravitational force between the sample and the earth, i.e. the weight of the sample, which varies with location. So such a scale has to be re-calibrated after installation, for that specific location, in order to obtain an accurate indication of mass.
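A rough sketch of such a location-dependent correction is given below, using the standard latitude-only normal gravity formula; altitude and local gravity anomalies are ignored, and the latitudes are arbitrary examples rather than real calibration sites.

import math

# Approximate local gravity from latitude (normal gravity formula, sea level),
# and the correction factor for a force-measuring scale calibrated elsewhere.
# Altitude and local gravity anomalies are ignored in this rough sketch.
def normal_gravity(lat_deg):
    s = math.sin(math.radians(lat_deg)) ** 2
    s2 = math.sin(math.radians(2 * lat_deg)) ** 2
    return 9.780327 * (1 + 0.0053024 * s - 0.0000058 * s2)  # m/s^2

g_calibration_site = normal_gravity(60.0)   # e.g. scale calibrated at 60 degrees latitude
g_site_of_use = normal_gravity(0.0)         # ...then used near the equator

# A reading of 10.000 kg made at the new site understates the true mass:
indicated_kg = 10.000
true_kg = indicated_kg * g_calibration_site / g_site_of_use
print(round(true_kg, 3))  # roughly 10.04 kg without recalibration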
Sources of error
Some of the sources of error in weighing are:
Buoyancy – Objects in air develop a buoyancy force that is directly proportional to the volume of air displaced. The difference in density of air due to barometric pressure and temperature creates errors (a simple correction for this effect is sketched after this list).
Error in the mass of reference weight
Air gusts, even small ones, which push the scale up or down
Friction in the moving components, which causes the scale to reach equilibrium at a different configuration than it would at a frictionless equilibrium
Settling airborne dust contributing to the weight
Mis-calibration over time, due to drift in the circuit's accuracy, or temperature change
Mis-aligned mechanical components due to thermal expansion or contraction of components
Magnetic fields acting on ferrous components
Forces from electrostatic fields, for example, from feet shuffled on carpets on a dry day
Chemical reactivity between air and the substance being weighed (or the balance itself, in the form of corrosion)
Condensation of atmospheric water on cold items
Evaporation of water from wet items
Convection of air from hot or cold items
Gravitational differences for a scale which measures force, but not for a balance.
Vibration and seismic disturbances
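As referenced in the buoyancy item above, a minimal sketch of the conventional air-buoyancy correction follows; the densities are typical round-number assumptions, not measured values.

# Rough air-buoyancy correction for a reading made against steel reference
# weights. Densities are typical example values in kg/m^3.
RHO_AIR = 1.2        # air
RHO_REF = 8000.0     # conventional density of reference weights
RHO_SAMPLE = 1000.0  # e.g. a water-like sample

def buoyancy_corrected(reading_kg):
    return reading_kg * (1 - RHO_AIR / RHO_REF) / (1 - RHO_AIR / RHO_SAMPLE)

print(buoyancy_corrected(100.0))  # about 100.1 kg, i.e. roughly a 0.1% effect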
Hybrid spring and balance scales
Elastic arm scale
In 2014 a concept of hybrid scale was introduced, the elastically deformable arm scale, which is a combination between a spring scale and a beam balance, exploiting simultaneously both principles of equilibrium and deformation. In this scale, the rigid arms of a classical beam balance (for example a steelyard) are replaced with a flexible elastic rod in an inclined frictionless sliding sleeve. The rod can reach a unique sliding equilibrium when two vertical dead loads (or masses) are applied at its edges. Equilibrium, which would be impossible with rigid arms, is guaranteed because configurational forces develop at the two edges of the sleeve as a consequence of both the free sliding condition and the nonlinear kinematics of the elastic rod. This mass measuring device can also work without a counterweight.
See also
Ampere balance
Apparent weight
Auncel
Combination weigher
Digital spoon scale
Digital Weight Indicator
Evans balance
Faraday balance
Gouy balance
Kibble balance, also known as a Watt balance
Mass versus weight
Multihead weigher
Nutrition scale
On-board scale, an on-vehicle truck scale
Themis
Weigh house - historic public building for the weighing of goods
Weigh lock - for weighing canal barges
Weigh station, a checkpoint to inspect vehicular weights, usually equipped with a truck scale (weigh bridge)
References
External links
This is a comprehensive review of the history and contemporaneous state of weighing machines.
National Conference on Weights and Measures, NIST Handbook 44, Specifications, Tolerances, And Other Technical Requirements for Weighing and Measuring Devices, 2003
Analytical Balance article at ChemLab
"The Precious Necklace Regarding Weigh Scales" is an 18th-century manuscript by Abd al-Rahman al-Jabarti about the "design and operation" of scales
Ancient Egyptian technology
Professional symbols
Weighing instruments | Weighing scale | [
"Physics",
"Technology",
"Engineering"
] | 5,638 | [
"Weighing instruments",
"Mass",
"Matter",
"Measuring instruments"
] |
965,387 | https://en.wikipedia.org/wiki/Bathymetry | Bathymetry is the study of underwater depth of ocean floors (seabed topography), lake floors, or river floors. In other words, bathymetry is the underwater equivalent to hypsometry or topography. The first recorded evidence of water depth measurements is from Ancient Egypt over 3000 years ago. Bathymetry has various uses including the production of bathymetric charts to guide vessels and identify underwater hazards, the study of marine life near the floor of water bodies, coastline analysis and ocean dynamics, including predicting currents and tides.
Bathymetric charts (not to be confused with hydrographic charts), are typically produced to support safety of surface or sub-surface navigation, and usually show seafloor relief or terrain as contour lines (called depth contours or isobaths) and selected depths (soundings), and typically also provide surface navigational information. Bathymetric maps (a more general term where navigational safety is not a concern) may also use a digital terrain model and artificial illumination techniques to illustrate the depths being portrayed. The global bathymetry is sometimes combined with topography data to yield a global relief model. Paleobathymetry is the study of past underwater depths.
Synonyms include seafloor mapping, seabed mapping, seafloor imaging and seabed imaging. Bathymetric measurements are conducted with various methods, from depth sounding, sonar and lidar techniques, to buoys and satellite altimetry. Various methods have advantages and disadvantages and the specific method used depends upon the scale of the area under study, financial means, desired measurement accuracy, and additional variables. Despite modern computer-based research, the ocean seabed in many locations is less measured than the topography of Mars.
Seabed topography
Measurement
Originally, bathymetry involved the measurement of ocean depth through depth sounding. Early techniques used pre-measured heavy rope or cable lowered over a ship's side. This technique measures the depth at only a single point at a time, and is therefore inefficient. It is also subject to movements of the ship and currents moving the line out of true, and therefore is not accurate.
The data used to make bathymetric maps today typically comes from an echosounder (sonar) mounted beneath or over the side of a boat, "pinging" a beam of sound downward at the seafloor or from remote sensing LIDAR or LADAR systems. The amount of time it takes for the sound or light to travel through the water, bounce off the seafloor, and return to the sounder informs the equipment of the distance to the seafloor. LIDAR/LADAR surveys are usually conducted by airborne systems.
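A minimal sketch of the travel-time calculation described above, assuming a single nominal sound speed; operational systems instead apply a measured sound-speed profile, as discussed below.

# Depth from the two-way travel time of a sonar ping, assuming a single
# nominal sound speed; real systems apply a measured sound-speed profile.
SOUND_SPEED_M_S = 1500.0  # typical nominal speed of sound in seawater

def depth_from_echo(two_way_time_s):
    return SOUND_SPEED_M_S * two_way_time_s / 2.0  # sound travels down and back

print(depth_from_echo(0.40))  # 0.40 s round trip -> 300 m of water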
Starting in the early 1930s, single-beam sounders were used to make bathymetry maps. Today, multibeam echosounders (MBES) are typically used, which use hundreds of very narrow adjacent beams (typically 256) arranged in a fan-like swath of typically 90 to 170 degrees across. The tightly packed array of narrow individual beams provides very high angular resolution and accuracy. In general, a wide swath, which is depth dependent, allows a boat to map more seafloor in less time than a single-beam echosounder by making fewer passes. The beams update many times per second (typically 0.1–50 Hz depending on water depth), allowing faster boat speed while maintaining 100% coverage of the seafloor. Attitude sensors allow for the correction of the boat's roll and pitch on the ocean surface, and a gyrocompass provides accurate heading information to correct for vessel yaw. (Most modern MBES systems use an integrated motion-sensor and position system that measures yaw as well as the other dynamics and position.) A satellite-based global navigation system positions the soundings with respect to the surface of the earth. Sound speed profiles (speed of sound in water as a function of depth) of the water column correct for refraction or "ray-bending" of the sound waves owing to non-uniform water column characteristics such as temperature, conductivity, and pressure. A computer system processes all the data, correcting for all of the above factors as well as for the angle of each individual beam. The resulting sounding measurements are then processed either manually, semi-automatically or automatically (in limited circumstances) to produce a map of the area. A number of different outputs are generated, including a sub-set of the original measurements that satisfy some conditions (e.g., most representative likely soundings, shallowest in a region, etc.) or integrated digital terrain models (DTM) (e.g., a regular or irregular grid of points connected into a surface). Historically, selection of measurements was more common in hydrographic applications while DTM construction was used for engineering surveys, geology, flow modeling, etc. Since approximately 2005, DTMs have become more accepted in hydrographic practice.
Satellites are also used to measure bathymetry. Satellite radar maps deep-sea topography by detecting the subtle variations in sea level caused by the gravitational pull of undersea mountains, ridges, and other masses. On average, sea level is higher over mountains and ridges than over abyssal plains and trenches.
In the United States the United States Army Corps of Engineers performs or commissions most surveys of navigable inland waterways, while the National Oceanic and Atmospheric Administration (NOAA) performs the same role for ocean waterways. Coastal bathymetry data is available from NOAA's National Geophysical Data Center (NGDC), which is now merged into National Centers for Environmental Information. Bathymetric data is usually referenced to tidal vertical datums. For deep-water bathymetry, this is typically Mean Sea Level (MSL), but most data used for nautical charting is referenced to Mean Lower Low Water (MLLW) in American surveys, and Lowest Astronomical Tide (LAT) in other countries. Many other datums are used in practice, depending on the locality and tidal regime.
Occupations or careers related to bathymetry include the study of oceans and rocks and minerals on the ocean floor, and the study of underwater earthquakes or volcanoes. The taking and analysis of bathymetric measurements is one of the core areas of modern hydrography, and a fundamental component in ensuring the safe transport of goods worldwide.
Satellite imagery
Another form of mapping the seafloor is through the use of satellites. The satellites are equipped with hyper-spectral and multi-spectral sensors which are used to provide constant streams of images of coastal areas providing a more feasible method of visualising the bottom of the seabed.
Hyper-spectral sensors
The data-sets produced by hyper-spectral (HS) sensors tend to range between 100 and 200 spectral bands of approximately 5–10 nm bandwidths. Hyper-spectral sensing, or imaging spectroscopy, is a combination of continuous remote imaging and spectroscopy producing a single set of data. Two examples of this kind of sensing are AVIRIS (airborne visible/infrared imaging spectrometer) and HYPERION.
The application of HS sensors in regards to the imaging of the seafloor is the detection and monitoring of chlorophyll, phytoplankton, salinity, water quality, dissolved organic materials, and suspended sediments. However, this does not provide a great visual interpretation of coastal environments.
Multi-spectral sensors
The other method of satellite imaging, multi-spectral (MS) imaging, tends to divide the EM spectrum into a small number of bands, unlike its partner hyper-spectral sensors which can capture a much larger number of spectral bands.
MS sensing is used more in the mapping of the seabed due to its fewer spectral bands with relatively larger bandwidths. The larger bandwidths allow for a larger spectral coverage, which is crucial in the visual detection of marine features and general spectral resolution of the images acquired.
Airborne laser bathymetry
High-density airborne laser bathymetry (ALB) is a modern, highly technical approach to mapping the seafloor. First developed in the 1960s and 1970s, ALB is a "light detection and ranging (LiDAR) technique that uses visible, ultraviolet, and near infrared light to optically remote sense a contour target through both an active and passive system." What this means is that airborne laser bathymetry also uses light outside the visible spectrum to detect the curves in the underwater landscape.
LiDAR (light detection and ranging) is, according to the National Oceanic and Atmospheric Administration, "a remote sensing method that uses light in the form of a pulsed laser to measure distances". These light pulses, along with other data, generate a three-dimensional representation of whatever the light pulses reflect off, giving an accurate representation of the surface characteristics. A LiDAR system usually consists of a laser, scanner, and GPS receiver. Airplanes and helicopters are the most commonly used platforms for acquiring LIDAR data over broad areas. One application of LiDAR is bathymetric LiDAR, which uses water-penetrating green light to also measure seafloor and riverbed elevations.
ALB generally operates in the form of a pulse of non-visible light being emitted from a low-flying aircraft and a receiver recording two reflections from the water. The first of which originates from the surface of the water, and the second from the seabed. This method has been used in a number of studies to map segments of the seafloor of various coastal areas.
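A rough sketch of the depth calculation implied by the two returns described above, assuming a vertical beam and a fixed refractive index for water; both simplifications would be corrected for in a real ALB system.

# Approximate water depth from the two airborne-lidar returns described above:
# the time gap between the surface echo and the seabed echo, converted using
# the speed of light in water. The slant angle of the beam is ignored here.
C_VACUUM = 299_792_458.0       # speed of light in vacuum, m/s
N_WATER = 1.33                 # approximate refractive index of water
c_water = C_VACUUM / N_WATER

def depth_from_lidar_returns(surface_to_bottom_gap_s):
    return c_water * surface_to_bottom_gap_s / 2.0  # down and back through the water

print(round(depth_from_lidar_returns(100e-9), 1))  # 100 ns gap -> about 11.3 m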
Examples of commercial LIDAR bathymetry systems
There are various LIDAR bathymetry systems that are commercially accessible. Two of these systems are the Scanning Hydrographic Operational Airborne Lidar Survey (SHOALS) and the Laser Airborne Depth Sounder (LADS). SHOALS was first developed in the 1990s by a company called Optech to help the United States Army Corps of Engineers (USACE) with bathymetric surveying. SHOALS operates by transmitting a laser, of wavelength between 530 and 532 nm, from a height of approximately 200 m at an average speed of 60 m/s.
High resolution orthoimagery
High resolution orthoimagery (HRO) is the process of creating an image that combines the geometric qualities with the characteristics of photographs. The result of this process is an orthoimage, a scale image which includes corrections made for feature displacement such as building tilt. These corrections are made through the use of a mathematical equation, information on sensor calibration, and the application of digital elevation models.
An orthoimage can be created through the combination of a number of photos of the same target. The target is photographed from a number of different angles to allow for the perception of the true elevation and tilting of the object. This gives the viewer an accurate perception of the target area.
High resolution orthoimagery is currently being used in the 'terrestrial mapping program', the aim of which is to 'produce high resolution topography data from Oregon to Mexico'. The orthoimagery will be used to provide the photographic data for these regions.
History
The earliest known depth measurements were made about 1800 BCE by Egyptians by probing with a pole. Later a weighted line was used, with depths marked off at intervals. This process was known as sounding. Both these methods were limited by being spot depths, taken at a point, and could easily miss significant variations in the immediate vicinity. Accuracy was also affected by water movement: the current could swing the weight away from the vertical, affecting both depth and position. This was a laborious and time-consuming process and was strongly affected by weather and sea conditions.
There were significant improvements with the voyage of HMS Challenger in the 1870s, when similar systems using wires and a winch were used for measuring much greater depths than previously possible, but this remained a one depth at a time procedure which required very low speed for accuracy. Greater depths could be measured using weighted wires deployed and recovered by powered winches. The wires had less drag and were less affected by current, did not stretch as much, and were strong enough to support their own weight to considerable depths. The winches allowed faster deployment and recovery, necessary when the depths measured were of several kilometers. Wire drag surveys continued to be used until the 1990s due to reliability and accuracy. This procedure involved towing a cable between two boats, supported by floats and weighted to keep a constant depth. The wire would snag on obstacles shallower than the cable depth. This was very useful for finding navigational hazards which could be missed by soundings, but was limited to relatively shallow depths.
Single-beam echo sounders were used from the 1920s-1930s to measure the distance of the seafloor directly below a vessel at relatively close intervals along the line of travel. By running roughly parallel lines, data points could be collected at better resolution, but this method still left gaps between the data points, particularly between the lines. The mapping of the sea floor started by using sound waves, contoured into isobaths and early bathymetric charts of shelf topography. These provided the first insight into seafloor morphology, though mistakes were made due to horizontal positional accuracy and imprecise depths. Sidescan sonar was developed in the 1950s to 1970s and could be used to create an image of the bottom, but the technology lacked the capacity for direct depth measurement across the width of the scan. In 1957, Marie Tharp, working with Bruce Charles Heezen, created the first three-dimensional physiographic map of the world's ocean basins. Tharp's discovery was made at the perfect time. It was one of many discoveries that took place near the same time as the invention of the computer. Computers, with their ability to compute large quantities of data, have made research much easier, including the research of the world's oceans. The development of multibeam systems made it possible to obtain depth information across the width of the sonar swath, to higher resolutions, and with precise position and attitude data for the transducers, made it possible to get multiple high resolution soundings from a single pass.
The US Naval Oceanographic Office developed a classified version of multibeam technology in the 1960s. NOAA obtained an unclassified commercial version in the late 1970s and established protocols and standards. Data acquired with multibeam sonar have vastly increased understanding of the seafloor.
The U.S. Landsat satellites of the 1970s and later the European Sentinel satellites, have provided new ways to find bathymetric information, which can be derived from satellite images. These methods include making use of the different depths to which different frequencies of light penetrate the water. When water is clear and the seafloor is sufficiently reflective, depth can be estimated by measuring the amount of reflectance observed by a satellite and then modeling how far the light should penetrate in the known conditions. The Advanced Topographic Laser Altimeter System (ATLAS) on NASA's Ice, Cloud, and land Elevation Satellite 2 (ICESat-2) is a photon-counting lidar that uses the return time of laser light pulses from the Earth's surface to calculate altitude of the surface. ICESat-2 measurements can be combined with ship-based sonar data to fill in gaps and improve precision of maps of shallow water.
Mapping of continental shelf seafloor topography using remotely sensed data has applied a variety of methods to visualise the bottom topography. Early methods included hachure maps, and were generally based on the cartographer's personal interpretation of limited available data. Acoustic mapping methods developed from military sonar images produced a more vivid picture of the seafloor. Further development of sonar based technology have allowed more detail and greater resolution, and ground penetrating techniques provide information on what lies below the bottom surface. Airborne and satellite data acquisition have made further advances possible in visualisation of underwater surfaces: high-resolution aerial photography and orthoimagery is a powerful tool for mapping shallow clear waters on continental shelves, and airborne laser bathymetry, using reflected light pulses, is also very effective in those conditions, and hyperspectral and multispectral satellite sensors can provide a nearly constant stream of benthic environmental information. Remote sensing techniques have been used to develop new ways of visualizing dynamic benthic environments from general geomorphological features to biological coverage.
Charts
See also
Seabed 2030 Project
References
External links
Bathymetric Data Viewer from NOAA's NCEI
Overview for underwater terrain, data formats, etc. (vterrain.org)
High resolution bathymetry for the Great Barrier Reef and Coral Sea
A.PO.MA.B.-Academy of Positioning Marine and Bathymetry
WebMapping Application for searching free and open source Bathymetry datasets
Interactive Web Map, Set Negative Elevation for Bathymetry
NOAA Ocean Explorer
Schmidt Ocean Institute: Seafloor Mapping
Seafloormapping.co.uk
Coastal Bathymetry Map for US, Canada, Europe & Australia
Seabed 2030
Cartography
Geomorphology
Oceanography
Topography techniques | Bathymetry | [
"Physics",
"Environmental_science"
] | 3,449 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
965,419 | https://en.wikipedia.org/wiki/Stochastic%20resonance | Stochastic resonance (SR) is a phenomenon in which a signal that is normally too weak to be detected by a sensor can be boosted by adding white noise to the signal, which contains a wide spectrum of frequencies. The frequencies in the white noise corresponding to the original signal's frequencies will resonate with each other, amplifying the original signal while not amplifying the rest of the white noise – thereby increasing the signal-to-noise ratio, which makes the original signal more prominent. Further, the added white noise can be enough to be detectable by the sensor, which can then filter it out to effectively detect the original, previously undetectable signal.
This phenomenon of boosting undetectable signals by resonating with added white noise extends to many other systems – whether electromagnetic, physical or biological – and is an active area of research.
Stochastic resonance was first proposed by the Italian physicists Roberto Benzi, Alfonso Sutera and Angelo Vulpiani in 1981, and the first application they proposed (together with Giorgio Parisi) was in the context of climate dynamics.
Technical description
Stochastic resonance (SR) is observed when noise added to a system changes the system's behaviour in some fashion. More technically, SR occurs if the signal-to-noise ratio of a nonlinear system or device increases for moderate values of noise intensity. It often occurs in bistable systems or in systems with a sensory threshold and when the input signal to the system is "sub-threshold." For lower noise intensities, the signal does not cause the device to cross threshold, so little signal is passed through it. For large noise intensities, the output is dominated by the noise, also leading to a low signal-to-noise ratio. For moderate intensities, the noise allows the signal to reach threshold, but the noise intensity is not so large as to swamp it. Thus, a plot of signal-to-noise ratio as a function of noise intensity contains a peak.
Strictly speaking, stochastic resonance occurs in bistable systems, when a small periodic (sinusoidal) force is applied together with a large wide band stochastic force (noise). The system response is driven by the combination of the two forces that compete/cooperate to make the system switch between the two stable states. The degree of order is related to the amount of periodicity shown in the system response. When the periodic force is too small on its own to make the system response switch, the presence of a non-negligible noise is required for switching to happen. When the noise is small, very few switches occur, mainly at random with no significant periodicity in the system response. When the noise is very strong, a large number of switches occur for each period of the sinusoid, and the system response does not show remarkable periodicity. Between these two conditions, there exists an optimal value of the noise that cooperatively concurs with the periodic forcing in order to make almost exactly one switch per period (a maximum in the signal-to-noise ratio).
Such a favorable condition is quantitatively determined by the matching of two timescales: the period of the sinusoid (the deterministic time scale) and the Kramers rate (i.e., the average switch rate induced by the sole noise: the inverse of the stochastic time scale).
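A minimal numerical illustration of this behaviour is sketched below: a sub-threshold sinusoid plus Gaussian noise drives a simple threshold detector, and the strength of the drive frequency in the detector output is largest at an intermediate noise level. All parameters are arbitrary example values, and the sketch assumes NumPy is available.

import numpy as np

# A sub-threshold sinusoid plus Gaussian noise drives a threshold detector.
# The component of the output at the drive frequency peaks for moderate noise.
rng = np.random.default_rng(0)
n = 20000
t = np.linspace(0.0, 100.0, n, endpoint=False)   # 100 periods of a 1 Hz drive
drive = 0.8 * np.sin(2.0 * np.pi * t)            # amplitude below the threshold of 1.0

for sigma in (0.05, 0.45, 3.0):                  # weak, moderate, strong noise
    output = (drive + rng.normal(0.0, sigma, n)) > 1.0
    spectrum = np.abs(np.fft.rfft(output.astype(float))) / n
    strength = spectrum[100]                     # bin 100 corresponds to the 1 Hz drive
    print(f"noise sigma={sigma}: drive-frequency component in output = {strength:.4f}")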
Stochastic resonance was discovered and proposed for the first time in 1981 to explain the periodic recurrence of ice ages. Since then, the same principle has been applied in a wide variety of systems. Nowadays stochastic resonance is commonly invoked when noise and nonlinearity concur to determine an increase of order in the system response.
Suprathreshold
Suprathreshold stochastic resonance is a particular form of stochastic resonance, in which random fluctuations, or noise, provide a signal processing benefit in a nonlinear system. Unlike most of the nonlinear systems in which stochastic resonance occurs, suprathreshold stochastic resonance occurs when the strength of the fluctuations is small relative to that of an input signal, or even for small levels of random noise. It is not restricted to a subthreshold signal, hence the qualifier.
Neuroscience, psychology and biology
Stochastic resonance has been observed in the neural tissue of the sensory systems of several organisms. Computationally, neurons exhibit SR because of non-linearities in their processing. SR has yet to be fully explained in biological systems, but neural synchrony in the brain (specifically in the gamma wave frequency) has been suggested as a possible neural mechanism for SR by researchers who have investigated the perception of "subconscious" visual sensation. Single neurons in vitro including cerebellar Purkinje cells and squid giant axon could also demonstrate the inverse stochastic resonance, when spiking is inhibited by synaptic noise of a particular variance.
Medicine
SR-based techniques have been used to create a novel class of medical devices for enhancing sensory and motor functions such as vibrating insoles especially for the elderly, or patients with diabetic neuropathy or stroke.
See the Review of Modern Physics article for a comprehensive overview of stochastic resonance.
Stochastic Resonance has found noteworthy application in the field of image processing.
Signal analysis
A related phenomenon is dithering applied to analog signals before analog-to-digital conversion. Stochastic resonance can be used to measure transmittance amplitudes below an instrument's detection limit. If Gaussian noise is added to a subthreshold (i.e., immeasurable) signal, then it can be brought into a detectable region. After detection, the noise is removed. A fourfold improvement in the detection limit can be obtained.
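A small sketch of this idea, assuming NumPy is available: a constant level just below a detector threshold produces no output on its own, but once Gaussian noise is added the fraction of threshold crossings tracks the hidden level, which can then be recovered by averaging. The threshold, noise level, and hidden levels below are arbitrary example values.

import numpy as np

# A constant level below the detector threshold is invisible without noise,
# but the crossing fraction with added Gaussian noise encodes the level.
rng = np.random.default_rng(1)
threshold, sigma, samples = 1.0, 0.2, 200_000

for hidden_level in (0.85, 0.90, 0.95):
    bare_detection = hidden_level > threshold                          # always False here
    with_noise = (hidden_level + rng.normal(0, sigma, samples)) > threshold
    print(hidden_level, "bare detector:", bare_detection,
          " crossing fraction with noise:", round(with_noise.mean(), 3))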
See also
Mutual coherence (linear algebra)
Signal detection theory
Stochastic resonance (sensory neurobiology)
References
Bibliography
Hannes Risken The Fokker-Planck Equation, 2nd edition, Springer, 1989
Bibliography for suprathreshold stochastic resonance
N. G. Stocks, "Suprathreshold stochastic resonance in multilevel threshold systems," Physical Review Letters, 84, pp. 2310–2313, 2000.
M. D. McDonnell, D. Abbott, and C. E. M. Pearce, "An analysis of noise enhanced information transmission in an array of comparators," Microelectronics Journal 33, pp. 1079–1089, 2002.
M. D. McDonnell and N. G. Stocks, "Suprathreshold stochastic resonance," Scholarpedia 4, Article No. 6508, 2009.
M. D. McDonnell, N. G. Stocks, C. E. M. Pearce, D. Abbott, Stochastic Resonance: From Suprathreshold Stochastic Resonance to Stochastic Signal Quantization, Cambridge University Press, 2008.
External links
Scholar Google profile on stochastic resonance
Newsweek: Being messy, both at home and in foreign policy, may have its own advantages. Retrieved 3 Jan 2011
Stochastic Resonance Conference 1998–2008 ten years of continuous growth. 17-21 Aug. 2008, Perugia (Italy)
Stochastic Resonance - From Suprathreshold Stochastic Resonance to Stochastic Signal Quantization (book)
Review of Suprathreshold Stochastic Resonance
A.S. Samardak, A. Nogaret, N. B. Janson, A. G. Balanov, I. Farrer and D. A. Ritchie. "Noise-Controlled Signal Transmission in a Multithread Semiconductor Neuron" // Phys. Rev. Lett. 102 (2009) 226802,
Biophysics
Stochastic processes
Statistical mechanics
Oscillation
Signal processing
Sensory systems | Stochastic resonance | [
"Physics",
"Technology",
"Engineering",
"Biology"
] | 1,601 | [
"Telecommunications engineering",
"Applied and interdisciplinary physics",
"Computer engineering",
"Signal processing",
"Mechanics",
"Biophysics",
"Oscillation",
"Statistical mechanics"
] |
965,569 | https://en.wikipedia.org/wiki/Jason-1 | Jason-1 was a satellite altimeter oceanography mission. It sought to monitor global ocean circulation, study the ties between the ocean and the atmosphere, improve global climate forecasts and predictions, and monitor events such as El Niño and ocean eddies. Jason-1 was launched in 2001 and was followed by OSTM/Jason-2 in 2008 and Jason-3 in 2016, forming the Jason satellite series. Jason-1 was launched alongside the TIMED spacecraft.
Naming
The lineage of the name begins with the JASO1 meeting (JASO=Journées Altimétriques Satellitaires pour l'Océanographie) in Toulouse, France to study the problems of assimilating altimeter data in models. Jason as an acronym also stands for "Joint Altimetry Satellite Oceanography Network". Additionally, it is used to reference the mythical quest for knowledge of Jason and the Argonauts.
History
Jason-1 is the successor to the TOPEX/Poseidon mission, which measured ocean surface topography from 1992 through 2005. Like its predecessor, Jason-1 is a joint project between the NASA (United States) and CNES (France) space agencies. Jason-1's successor, the Ocean Surface Topography Mission on the Jason-2 satellite, was launched in June 2008. These satellites provide a unique global view of the oceans that is impossible to acquire using traditional ship-based sampling.
Jason-1 was built by Thales Alenia Space using a Proteus platform, under a contract from CNES, as well as the main Jason-1 instrument, the Poseidon-2 altimeter (successor to the Poseidon altimeter on-board TOPEX/Poseidon).
Jason-1 was designed to measure climate change through very precise millimeter-per-year measurements of global sea level changes. As did TOPEX/Poseidon, Jason-1 uses an altimeter to measure the hills and valleys of the ocean's surface. These measurements of sea surface topography allow scientists to calculate the speed and direction of ocean currents and monitor global ocean circulation. The global ocean is Earth's primary storehouse of solar energy. Jason-1's measurements of sea surface height reveal where this heat is stored, how it moves around Earth by ocean currents, and how these processes affect weather and climate.
Jason-1 was launched on 7 December 2001 from Vandenberg Air Force Base, in California, aboard a Delta II Launch vehicle. During the first months Jason-1 shared an almost identical orbit to TOPEX/Poseidon, which allowed for cross calibration. At the end of this period, the older satellite was moved to a new orbit midway between each Jason ground track. Jason had a repeat cycle of 10 days.
On 16 March 2002, Jason-1 experienced a sudden attitude upset, accompanied by temporary fluctuations in the onboard electrical systems. Soon after this incident, two new small pieces of space debris were observed in orbits slightly lower than Jason-1's, and spectroscopic analysis eventually proved them to have originated from Jason-1. In 2011, it was determined that the pieces of debris had most likely been ejected from Jason-1 by an unidentified, small "high-speed particle" hitting one of the spacecraft's solar panels.
Orbit maneuvers in 2009 put the Jason-1 satellite on the opposite side of Earth from the OSTM/Jason-2 satellite, which is operated by the United States and French weather agencies. At that time, Jason-1 flew over the same region of the ocean that OSTM/Jason-2 flew over five days earlier. Its ground tracks fell midway between those of OSTM/Jason-2 at the equator.
This interleaved tandem mission provided twice the number of measurements of the ocean's surface, bringing smaller features such as ocean eddies into view. The tandem mission also helped pave the way for a future ocean altimeter mission that would collect much more detailed data with its single instrument than the two Jason satellites now do together.
In early 2012, having helped cross-calibrate the OSTM/Jason-2 replacement mission, Jason-1 was maneuvered into its graveyard orbit and all remaining fuel was vented. The mission was still able to return science data, measuring Earth's gravity field over the ocean. On 21 June 2013, contact with Jason-1 was lost; multiple attempts to re-establish communication failed. It was determined that the last remaining transmitter on board the spacecraft had failed. Operators sent commands to the satellite to turn off remaining functioning components on 1 July 2013, rendering it decommissioned. It is estimated that the spacecraft will remain on orbit for at least 1,000 years.
The program is named after the Greek mythological hero Jason.
Satellite instruments
Jason-1 has five instruments:
Poseidon 2 – Nadir-pointing radar altimeter using C-band and Ku-band frequencies to measure height above the sea surface (a simplified combination of the measurements in this list is sketched below).
Jason Microwave Radiometer (JMR) – measures water vapor along altimeter path to correct for pulse delay
DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) for orbit determination to within 10 cm or less and ionospheric correction data for Poseidon 2.
BlackJack Global Positioning System receiver provides precise orbit ephemeris data
Laser retroreflector array works with ground stations to track the satellite and calibrate and verify altimeter measurements.
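As noted in the altimeter entry above, the sketch below shows in highly simplified form how these measurements combine into a sea-surface height estimate: the radar range is subtracted from the precisely known orbit height, and the result is compared with a long-term mean surface. Every number in the sketch is hypothetical, and the real processing involves many more corrections.

# Simplified sea-surface height calculation from altimetry. All values are
# hypothetical illustrations, not actual Jason-1 measurements.
C = 299_792_458.0                      # speed of light, m/s

two_way_time_s = 0.00891266            # altimeter pulse round-trip time (example)
range_m = C * two_way_time_s / 2.0     # distance from satellite to sea surface
path_corrections_m = 2.4               # ionosphere / wet troposphere delays (example)
orbit_height_m = 1_336_004.5           # orbit height above reference ellipsoid (example)
mean_sea_surface_m = 32.5              # long-term mean surface at this point (example)

ssh_m = orbit_height_m - (range_m - path_corrections_m)   # sea-surface height
anomaly_m = ssh_m - mean_sea_surface_m                     # departure from the mean
print(round(ssh_m, 2), round(anomaly_m, 2))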
The Jason-1 satellite, its altimeter instrument and a position-tracking antenna were built in France. The radiometer, Global Positioning System receiver and laser retroreflector array were built in the United States.
Use of information
TOPEX/Poseidon and Jason-1 have led to major advances in the science of physical oceanography and in climate studies. Their 15-year data record of ocean surface topography has provided the first opportunity to observe and understand the global change of ocean circulation and sea level. The results have improved the understanding of the role of the ocean in climate change and improved weather and climate predictions. Data from these missions are used to improve ocean models, forecast hurricane intensity, and identify and track large ocean/atmosphere phenomena such as El Niño and La Niña. The data are also used every day in applications as diverse as routing ships, improving the safety and efficiency of offshore industry operations, managing fisheries, and tracking marine mammals.
TOPEX/Poseidon and Jason-1 have made major contributions to the understanding of:
Ocean variability
The missions revealed the surprising variability of the ocean, how much it changes from season to season, year to year, decade to decade and on even longer time scales. They ended the traditional notion of a quasi-steady, large-scale pattern of global ocean circulation by proving that the ocean is changing rapidly on all scales, from huge features such as El Niño and La Niña, which can cover the entire equatorial Pacific, to tiny eddies swirling off the large Gulf Stream in the Atlantic.
Sea level change
Measurements by Jason-1 indicate that mean sea level has been rising at an average rate of 2.28 mm (0.09 inch) per year since 2001. This is somewhat less than the rate measured by the earlier TOPEX/Poseidon mission, but over four times the rate measured by the later Envisat mission. Mean sea level measurements from Jason-1 are continuously graphed at the Centre National d'Études Spatiales web site, on the Aviso page. A composite sea level graph, using data from several satellites, is also available on that site.
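Rates like the one quoted above are typically obtained by fitting a straight line to the altimetry time series. The sketch below does this by least squares on synthetic data (a built-in 2.3 mm per year trend plus noise), not on actual Jason-1 measurements, and assumes NumPy is available.

import numpy as np

# Fit a linear trend to a sea-level time series by least squares.
# The data are synthetic: a 2.3 mm/yr trend plus random noise.
rng = np.random.default_rng(2)
years = np.arange(2002.0, 2013.0, 1.0 / 12.0)             # monthly samples
sea_level_mm = 2.3 * (years - years[0]) + rng.normal(0.0, 4.0, years.size)

trend_mm_per_yr, intercept = np.polyfit(years, sea_level_mm, 1)
print(f"fitted trend: {trend_mm_per_yr:.2f} mm/yr")        # close to the built-in 2.3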
The data record from these altimetry missions has given scientists important insights into how global sea level is affected by natural climate variability, as well as by human activities.
Planetary Waves
TOPEX/Poseidon and Jason-1 made clear the importance of planetary-scale waves, such as Rossby and Kelvin waves. No one had realized how widespread these waves are. Thousands of kilometers wide, these waves are driven by wind under the influence of Earth's rotation and are important mechanisms for transmitting climate signals across the large ocean basins. At high latitudes, they travel twice as fast as scientists believed previously, showing the ocean responds much more quickly to climate changes than was known before these missions.
Ocean tides
The precise measurements of TOPEX/Poseidon and Jason-1 have brought knowledge of ocean tides to an unprecedented level. The change of water level due to tidal motion in the deep ocean is known everywhere on the globe to within 2.5 centimeters (1 inch). This new knowledge has revised notions about how tides dissipate. Instead of losing all their energy over shallow seas near the coasts, as previously believed, about one third of tidal energy is actually lost to the deep ocean. There, the energy is consumed by mixing water of different properties, a fundamental mechanism in the physics governing the general circulation of the ocean.
Ocean models
TOPEX/Poseidon and Jason-1 observations provided the first global data for improving the performance of the numerical ocean models that are a key component of climate prediction models. TOPEX/Poseidon and Jason-1 data are available at the University of Colorado Center for Astrodynamics Research, NASA's Physical Oceanography Distributed Active Archive Center, and the French data archive center AVISO.
Benefits to society
Altimetry data have a wide variety of uses from basic scientific research on climate to ship routing. Applications include:
Climate Research: altimetry data are incorporated into computer models to understand and predict changes in the distribution of heat in the ocean, a key element of climate.
El Niño and La Niña Forecasting: understanding the pattern and effects of climate cycles such as El Niño helps predict and mitigate the disastrous effects of floods and drought.
Hurricane Forecasting: altimeter data and satellite ocean wind data are incorporated into atmospheric models for hurricane season forecasting and individual storm severity.
Ship Routing: maps of ocean currents, eddies, and vector winds are used in commercial shipping and recreational yachting to optimize routes.
Offshore Industries: cable-laying vessels and offshore oil operations require accurate knowledge of ocean circulation patterns to minimize impacts from strong currents.
Marine Mammal Research: sperm whales, fur seals, and other marine mammals can be tracked, and therefore studied, around ocean eddies where nutrients and plankton are abundant.
Fisheries Management: satellite data identify ocean eddies which bring an increase in organisms that comprise the marine food web, attracting fish and fishermen.
Coral Reef Research: remotely sensed data are used to monitor and assess coral reef ecosystems, which are sensitive to changes in ocean temperature.
Marine Debris Tracking: the amount of floating and partially submerged material, including nets, timber and ship debris, is increasing with human population. Altimetry can help locate these hazardous materials.
See also
Argo - a project to measure the temperature and salinity of the upper 2 km of the water column
Seasat - an early radar altimeter satellite
TOPEX/Poseidon - the immediate predecessor to Jason-1
Ocean Surface Topography Mission/Jason-2 – the immediate successor to Jason-1
2004 Indian Ocean earthquake - Energy of the earthquake
French space program
References
External links
Jason 1 and 2 site at CNES (in French)
Jason 1 and 2 site at CNES (in English)
TOPEX/Jason site at NASA
DEOS: the Radar Altimeter Database System (RADS)
NASA Jason-1 mission page
Earth observation satellites of the United States
Earth observation satellites of France
2001 in France
Spacecraft launched in 2001
Spacecraft launched by Delta II rockets
Physical oceanography
Earth satellite radar altimeters
NASA satellites orbiting Earth
Jason satellite series
CNES | Jason-1 | [
"Physics"
] | 2,443 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
966,106 | https://en.wikipedia.org/wiki/Fluidics | Fluidics, or fluidic logic, is the use of a fluid to perform analog or digital operations similar to those performed with electronics.
The physical basis of fluidics is pneumatics and hydraulics, based on the theoretical foundation of fluid dynamics. The term fluidics is normally used when devices have no moving parts, so ordinary hydraulic components such as hydraulic cylinders and spool valves are not considered or referred to as fluidic devices.
A jet of fluid can be deflected by a weaker jet striking it at the side. This provides nonlinear amplification, similar to the transistor used in electronic digital logic. It is used mostly in environments where electronic digital logic would be unreliable, as in systems exposed to high levels of electromagnetic interference or ionizing radiation.
Nanotechnology considers fluidics as one of its instruments. In this domain, effects such as fluid–solid and fluid–fluid interface forces are often highly significant. Fluidics have also been used for military applications.
History
In 1920, Nikola Tesla patented a valvular conduit or Tesla valve that works as a fluidic diode. It was a leaky diode, i.e. the reverse flow is non-zero for any applied pressure difference. Tesla's valve also had non-linear response, as its diodicity had frequency dependence. It could be used in fluid circuits, such as a full-wave rectifier, to convert AC to DC.
In 1957, Billy M. Horton of the Harry Diamond Laboratories (which later became a part of the Army Research Laboratory) first came up with the idea for the fluidic amplifier when he realized that he could redirect the direction of flue gases using a small bellows. He proposed a theory on stream interaction, stating that one can achieve amplification by deflecting a stream of fluid with a different stream of fluid. In 1959, Horton and his associates, Dr. R. E. Bowles and Ray Warren, constructed a family of working vortex amplifiers out of soap, linoleum, and wood. Their published result caught the attention of several major industries and created a surge of interest in applying fluidics (then called fluid amplification) to sophisticated control systems, which lasted throughout the 1960s. Horton is credited for developing the first fluid amplifier control device and launching the field of fluidics. In 1961, Horton, Warren, and Bowles were among the 27 recipients to receive the first Army Research and Development Achievement Award for developing the fluid amplifier control device.
Logic elements
Logic gates can be built that use water instead of electricity to power the gating function. These are reliant on being positioned in one orientation to perform correctly. An OR gate is simply two pipes being merged, and a NOT gate (inverter) consists of "A" deflecting a supply stream to produce Ā. The AND and XOR gates are sketched in the diagram. An inverter could also be implemented with the XOR gate, as A XOR 1 = Ā.
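The gate behaviour described above can be summarised with a small Boolean model; the sketch below reproduces only the logic (including the inverter built from XOR), not the fluid dynamics of the devices.

from itertools import product

# Boolean model of the fluidic gate behaviour described above. This captures
# only the logic of the elements, not the underlying fluid dynamics.
OR = lambda a, b: a | b          # two pipes merged
AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b
NOT = lambda a: XOR(a, 1)        # inverter built from XOR, since A XOR 1 = not A

for a, b in product((0, 1), repeat=2):
    print(a, b, "OR:", OR(a, b), "AND:", AND(a, b), "XOR:", XOR(a, b), "NOT a:", NOT(a))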
Another kind of fluidic logic is bubble logic. Bubble logic gates conserve the number of bits entering and exiting the device, because bubbles are neither produced nor destroyed in the logic operation, analogous to billiard-ball computer gates.
Components
Amplifiers
In a fluidic amplifier, a fluid supply, which may be air, water, or hydraulic fluid, enters at the bottom. Pressure applied to the control ports C1 or C2 deflects the stream, so that it exits via either port O1 or O2. The stream entering the control ports may be much weaker than the stream being deflected, so the device has gain.
This basic device can be used to construct other fluidic logic elements, as well as fluidic oscillators that can be used in a way analogous to flip-flops. Simple systems of digital logic can thus be built.
Fluidic amplifiers typically have bandwidths in the low kilohertz range, so systems built from them are quite slow compared to electronic devices.
Triodes
The fluidic triode, an amplification device that uses a fluid to convey the signal, has been invented, as have fluid diodes, a fluid oscillator and a variety of hydraulic "circuits," including one that has no electronic counterpart.
Uses
The MONIAC Computer built in 1949 was a fluid-based analogue computer used for teaching economic principles as it could recreate complex simulations that digital computers could not at the time. Twelve to fourteen were built and acquired by businesses and teaching establishments.
The FLODAC Computer was built in 1964 as a proof of concept fluid-based digital computer.
Fluidic components appear in some hydraulic and pneumatic systems, including some automotive automatic transmissions. As electronic digital logic has become more accepted in industrial control, the role of fluidics in industrial control has declined.
In the consumer market, fluidically controlled products are increasing in both popularity and presence, installed in items ranging from toy spray guns through shower heads and hot tub jets; all provide oscillating or pulsating streams of air or water. Logic-enabled textiles for applications in wearable technology has also been researched.
Fluid logic can be used to create a valve with no moving parts such as in some anaesthetic machines.
Fluidic oscillators were used in the design of pressure-triggered, 3D printable, emergency ventilators for the COVID-19 pandemic.
Fluidic amplifiers are used to generate ultrasound for non-destructive testing by quickly switching pressurized air from one outlet to another.
A fluidic sound amplification system has been demonstrated in a synagogue, where regular electronic sound amplification cannot be used for religious reasons.
Fluidic injection is being researched for use in aircraft to control direction, in two ways: circulation control and thrust vectoring. In both, larger, more complex mechanical parts are replaced by fluidic systems, in which larger flows of fluid are diverted intermittently by smaller jets to change the direction of the vehicle. In circulation control, near the trailing edges of wings, aircraft flight control surfaces such as ailerons, elevators, elevons, flaps, and flaperons are replaced by openings, usually rows of holes or elongated slots, which emit fluid flows. In thrust vectoring, in jet engine nozzles, swiveling parts are replaced by openings which inject fluid flows into the jets. Such systems divert thrust via fluid effects. Tests show that air forced into a jet engine exhaust stream can deflect thrust by up to 15 degrees. In such uses, fluidics is desirable for its lower mass, cost (up to 50% less), drag (up to 15% less during use), inertia (for faster, stronger control response), complexity (mechanically simpler, fewer or no moving parts or surfaces, less maintenance), and radar cross section for stealth. This will likely be used in many unmanned aerial vehicles (UAVs), 6th-generation fighter aircraft, and ships.
At least two countries are known to be researching fluidic control. In Britain, BAE Systems has tested two fluidically controlled unmanned aircraft, one starting in 2010 named Demon, and another starting in 2017 named MAGMA, in collaboration with the University of Manchester. In the United States, the Defense Advanced Research Projects Agency (DARPA) program named Control of Revolutionary Aircraft with Novel Effectors (CRANE) seeks "... to design, build, and flight test a novel X-plane that incorporates active flow control (AFC) as a primary design consideration. ... In 2023, the aircraft received its official designation as X-65." In winter 2024, construction began at Boeing subsidiary Aurora Flight Sciences, and flight testing is planned to start in summer 2025.
Octobot, a 2016 proof of concept soft-bodied autonomous robot containing a microfluidic logic circuit, has been developed by researchers at Harvard University's Wyss Institute for Biologically Inspired Engineering.
See also
Water integrator
Microfluidics
Bio-MEMS
Lab-on-a-chip
MONIAC
Unconventional computing
References
Further reading
FLODAC – A Pure Fluid Digital Computer:
Stanley W. Angrist: Fluid control devices. In: Scientific American, December 1964, pp. 80–88.
Pneumatic logic elements from 1969
External links
Fluidics: How They've Taught A Stream of Air to Think pp. 118–121,196.197, illustrating several switch designs and discussing applications. Scanned article available online from Google Books: Popular Science June 1967
Visualization of the flow field of a fluidic oscillator
Fluid dynamics
Logic | Fluidics | [
"Chemistry",
"Engineering"
] | 1,753 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
15,876,400 | https://en.wikipedia.org/wiki/Calcium%20chloride%20transformation | Calcium chloride (CaCl2) transformation is a laboratory technique in prokaryotic (bacterial) cell biology. The addition of calcium chloride to a cell suspension promotes the binding of plasmid DNA to lipopolysaccharides (LPS). Positively charged calcium ions attract both the negatively charged DNA backbone and the negatively charged groups in the LPS inner core. The plasmid DNA can then pass into the cell upon heat shock, where chilled cells (+4 degrees Celsius) are heated to a higher temperature (+42 degrees Celsius) for a short time.
History of bacterial transformation
Frederick Griffith published the first report of bacteria's potential for transformation in 1928. Griffith observed that mice did not succumb to the "rough" type of pneumococcus (Streptococcus pneumoniae), referred to as nonvirulent, but did succumb to the "smooth" strain, which is referred to as virulent. The smooth strain's virulence could be suppressed with heat-killing. However, when the nonvirulent rough strain was combined with the heat-killed smooth strain, the rough strain managed to pick up the smooth phenotype and thus become virulent. Griffith's research indicated that the change was brought on by a nonliving, heat-stable substance generated from the smooth strain. Later on, Oswald Avery, Colin MacLeod, and Maclyn McCarty identified this transformational substance as DNA in 1944.
Principle of calcium chloride transformation
Since DNA is a very hydrophilic molecule, it often cannot penetrate the bacterial cell membrane. Therefore, it is necessary to make bacteria competent in order to internalize DNA. This may be accomplished by suspending bacteria in a solution with a high calcium concentration, which creates tiny holes in the bacterial cells. Suspension in calcium, incubation of the DNA with the competent cells on ice, and a brief heat shock together drive the extra-chromosomal DNA into the cell.
According to previous research, the LPS receptor molecules on the competent cell surface bind to a bare DNA molecule. This binding occurs because the negatively charged DNA molecules and LPS form coordination complexes with the divalent cations. Due to its size, DNA cannot pass through the cell membrane on its own to reach the cytoplasm. The cell membrane of CaCl2-treated cells is severely depolarized during the heat shock stage, and the resulting drop in membrane potential makes the cell's internal potential less negative, allowing negatively charged DNA to flow into the interior of the cell. Afterwards, the membrane potential can be raised back to its initial value by a subsequent cold shock.
Competent cells
Competent cells are bacterial cells whose cell walls have been altered so that foreign DNA can pass through more easily. Without particular chemical or electrical treatments to make them competent, the majority of cell types cannot successfully take up DNA; for that reason, treatment with calcium ions is the typical procedure for making bacteria permeable to DNA. In bacteria, competence is closely regulated, and different bacterial species have different competence-related characteristics. Although they share some similarity, the competence proteins generated by Gram-positive and Gram-negative bacteria are different.
Natural Competence
Natural competence relates to the three routes by which bacteria can acquire DNA from their surroundings: conjugation, transformation, and transduction. As DNA is inserted into the cell during transformation, the recipient cells must be in a certain physiological condition, known as the competent state, in order to take up transforming DNA. Once the DNA has entered the cell's cytoplasm, enzymes such as nucleases can break it down. In cases where the DNA is extremely similar to the cell's own genetic material, DNA-repair enzymes recombine it with the chromosome instead.
Artificial Competence
A cell's genes do not include any information on artificial competence. This type of competence requires a laboratory process that creates conditions that do not often exist in nature so that cells can become permeable to DNA. Although the efficiency of transformation is often poor, the process is relatively simple and quick to apply in bacterial genetic engineering. Mandel and Higa, who created an easy procedure based on soaking the cells in cold CaCl2, provided the basis for obtaining artificially competent cells. Chemical transformation, such as calcium chloride transformation, and electroporation are the most commonly used methods to transform bacterial cells, such as E. coli, with plasmid DNA.
Method for calcium chloride transformation
Calcium chloride treatment is generally used for the transformation of E. coli and other bacteria. It enhances plasmid DNA incorporation by the bacterial cell, promoting genetic transformation. Plasmid DNA can attach to LPS by being added to the cell solution together with CaCl2. Thus, when heat shock is applied, the negatively charged DNA backbone and LPS combine, allowing plasmid DNA to enter the bacterial cell.
The process is summarized in the following steps according to The Undergraduate Journal of Experimental Microbiology and Immunology (UJEMI) protocol:
Prepare a bacterial culture in LB broth
Before starting the main procedure, use the required volume of the previously made culture to inoculate the required volume of fresh LB broth
Pellet the cells by centrifuging at 4°C at 4000 rpm for 10 minutes
Pour off the supernatant and resuspend cells in 20 mL ice-cold 0.1 M CaCl2, then leave immediately on ice for 20 minutes
Centrifuge as in step 3, a more diffused pellet will be obtained as an indication of competent cells
Resuspend in cold CaCl2 as in step 4
Pour off supernatant and resuspend cells in 5 mL ice-cold 0.1 M CaCl2 along with 15% glycerol to combine pellets
Transfer the suspensions to sterile thin glass tubes for effective heat shocks
Add the required amount of DNA to the suspension tubes, and immediately leave on ice
Place the tubes in a 42°C water bath for 30 seconds and return immediately to ice for 2 minutes
Add 1 mL of LB or SOC medium
Transfer each tube to the required mL LB broth amount in a new flask
Incubate with shaking at 37°C and 200 rpm for 60 min; leaving it for 90 minutes is advised in order to allow the bacteria to recover
Plate 1:10 and 1:100 dilutions of the incubated cultures onto selective/screening LB plates (e.g. with ampicillin and/or X-gal) to which the antibiotics to be used for selection have been added (see the calculation sketch after this list)
Incubate overnight at 37°C
Finally, observe isolated colonies on the plates
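Transformation efficiency is not part of the protocol above, but it is the figure of merit usually calculated from the plated dilutions in the final steps. The following sketch shows one way such a post-plating calculation might look; all numbers (colony counts, plated volume, amount of DNA) are invented for illustration.

```python
# Hypothetical post-plating calculation; all values are invented.
colonies_counted = 180      # colonies counted on the plate of the 1:100 dilution
dilution_factor = 100       # the 1:100 dilution was plated
volume_plated_ml = 0.1      # volume spread on the plate, mL
recovery_volume_ml = 1.0    # LB/SOC recovery volume after heat shock, mL
dna_added_ug = 0.01         # plasmid DNA added per tube, in micrograms (10 ng)

# Colony-forming units per mL of the undiluted recovery culture
cfu_per_ml = colonies_counted * dilution_factor / volume_plated_ml

# Total transformants recovered, and efficiency per microgram of DNA
total_transformants = cfu_per_ml * recovery_volume_ml
efficiency = total_transformants / dna_added_ug

print(f"{cfu_per_ml:.2e} CFU/mL")
print(f"transformation efficiency = {efficiency:.2e} transformants per microgram DNA")
```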
References
External links
Animation of Calcium chloride (CaCl2) transformation
https://www.youtube.com/watch?v=7Ul9RVYG5CM&ab_channel=NewEnglandBiolabs
Cell biology
Molecular biology techniques | Calcium chloride transformation | [
"Chemistry",
"Biology"
] | 1,444 | [
"Molecular biology techniques",
"Cell biology",
"Molecular biology"
] |
15,878,352 | https://en.wikipedia.org/wiki/Sample%20preparation%20in%20mass%20spectrometry | Sample preparation for mass spectrometry is used for the optimization of a sample for analysis in a mass spectrometer (MS). Each ionization method has certain factors that must be considered for that method to be successful, such as volume, concentration, sample phase, and composition of the analyte solution.
Quite possibly the most important consideration in sample preparation is knowing what phase the sample must be in for analysis to be successful. In some cases the analyte itself must be purified before entering the ion source. In other situations, the matrix, or everything in the solution surrounding the analyte, is the most important factor to consider and adjust. Often, sample preparation itself for mass spectrometry can be avoided by coupling mass spectrometry to a chromatography method, or some other form of separation before entering the mass spectrometer.
In some cases, the analyte itself must be adjusted so that analysis is possible, such as in protein mass spectrometry, where usually the protein of interest is cleaved into peptides before analysis, either by in-gel digestion or by proteolysis in solution.
Sample phase
The first and most important step in sample preparation for mass spectrometry is determining what phase the sample needs to be in. Different ionization methods require different sample phases. Solid phase samples can be ionized through methods such as field desorption, plasma-desorption, fast atom bombardment, and secondary-ion ionization.
Liquids with the analyte dissolved in them, or solutions, can be ionized through methods such as matrix-assisted laser desorption, electrospray ionization, and atmospheric-pressure chemical ionization. Both solid and liquid samples may be ionized with ambient ionization techniques.
Gas samples, or volatile samples, can be ionized using methods such as electron ionization, photoionization, and chemical ionization.
These lists are the most commonly used state of matter for each ionization method, but the ionization methods are not necessarily limited to these states of matter. For example, fast atom bombardment ionization is typically used to ionize solid samples, but this method is typically used on solids dissolved into solutions, and can also be used to analyze components that have entered the gas phase.
Chromatography as a sample preparation method
In many mass spectrometry ionization methods, the sample must be in the liquid or gas phase for the ionization method to work. Sample preparation to ensure proper ionization can be difficult, but can be made easier by coupling the mass spectrometer to chromatographic equipment. Gas chromatography (GC) or liquid chromatography (LC) can be used as a sample preparation method.
Gas chromatography
GC is a method involving the separation of different analytes within a sample of mixed gases. The separated gases can be detected multiple ways, but one of the most powerful detection methods for gas chromatography is mass spectrometry. After the gases separate, they enter the mass spectrometer and are analyzed. This combination not only separates the analytes, but gives structural information about each one. The GC sample must be volatile, or able to enter the gas phase, while also being thermally stable so that it does not break down as it is heated to enter the gas phase. Mass spectrometry ionization techniques requiring the sample to be in the gas phase have similar concerns.
Electron ionization (EI) in mass spectrometry requires samples that are small molecules, volatile, and thermally stable, similar to that of gas chromatography. This ensures that as long as GC is performed on the sample before entering the mass spectrometer, the sample will be prepared for ionization by EI.
Chemical ionization (CI) is another method that requires samples to be in the gas phase. This is so that the sample can react with a reagent gas to form an ion that can be analyzed by the mass spectrometer. CI has many of the same requirements in sample preparation as EI, such as volatility and thermal stability of the sample. GC is useful for sample preparation for this technique as well. One advantage of CI is that larger molecules separated by GC can be analyzed by this ionization method. CI has a larger mass range than that of EI and can analyze molecules that EI may not be able to. CI also has the advantage of being less damaging to the sample molecule, so that less fragmentation occurs and more information about the original analyte can be determined.
Photoionization (PI) was first applied as a method for detecting gases separated by GC. Years later, it was also applied as a detector for LC, though the samples must be vaporized first to be detected by the photoionization detector. Eventually PI was applied to mass spectrometry, particularly as an ionization method for gas chromatography-mass spectrometry. Sample preparation for PI includes first ensuring the sample is in the gas phase. PI ionizes molecules by exciting the sample molecules with photons of light. This method only works if the sample and other components in the gas phase are excited by different wavelengths of light. It is therefore important, when preparing the sample or choosing the photon source, that the ionizing wavelengths are adjusted to excite the sample analyte and nothing else.
Liquid chromatography
Liquid chromatography (LC) is a method that in some ways is more powerful than GC, but can be coupled to mass spectrometry just as easily. In LC, the concerns involving sample preparation can be minimal. In LC, both the stationary and mobile phase can affect the separation, whereas in GC only the stationary phase should be influential. This allows for the sample preparation to be minimal if one is willing to adjust the stationary phase or mobile phase before running the sample. The primary concern is the concentration of analyte. If the concentration is too high then separation can be unsuccessful, but mass spectrometry as a detection method does not need complete separation, showing another benefit of coupling LC to a mass spectrometer.
LC can be coupled to mass spectrometry through the vaporization of the liquid samples as they enter the mass spectrometer. This method can allow for ionization methods that require gaseous samples to be used, such as CI or PI, particularly atmospheric-pressure chemical ionization or atmospheric pressure photoionization, which allows for more interactions and more ionization.
Other ionization methods may not require the liquid sample to be vaporized, and can analyze the liquid sample itself. One example is fast-atom bombardment ionization which can allow for liquid samples separated by the LC to flow into the ionization chamber and be ionized easily. The most common ionization method coupled to LC is some form of spray ionization, which includes thermospray ionization and more commonly, electrospray (ESI) ionization.
Thermospray was first developed as a way to effectively remove solvent and vaporize samples more easily. This method involves the liquid sample from the LC flowing through an electrically heated vaporizer that simply heats the sample, removing any solvent and therefore putting the sample in the gas phase. Electrospray ionization (ESI) is similar to thermospray in the principle of removing the liquid solvent from the sample as much as possible, creating charged sample molecules either in small droplets or in gas form. Studies have shown that ESI can be as much as ten times more sensitive than other ionization methods coupled to LC. The spray methods are particularly useful because non-volatile samples can be analyzed easily: the sample itself is not vaporized by heating; the liquid is simply removed, carrying the sample into a mist or gas phase.
One sample preparation issue with liquid chromatography-mass spectrometry is possible matrix effects due to the presence of background molecules. These matrix effects have been shown to decrease the signal in methods such as PI and ESI by as much as 60%, depending on the sample being analyzed. The matrix effect can also cause an increase in signal, producing false positive results. This can be corrected by purifying the sample as much as possible before LC is performed, but in the case of environmental samples, where everything in the sample is of concern, sample preparation may not be the ideal solution to fix the problem. Another method that can be applied to correct the issue is the standard addition method.
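The standard addition method mentioned above corrects for matrix effects by spiking the sample with known amounts of analyte and extrapolating the response back to zero signal. The sketch below illustrates the arithmetic with made-up response values rather than data from any particular instrument: a line is fitted to signal versus added concentration, and the magnitude of the x-intercept estimates the analyte concentration already present in the matrix-matched sample.

```python
# Standard addition: spike the sample with known analyte amounts and
# extrapolate the linear response to zero signal. All values are invented.
import numpy as np

added_conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # spiked concentration, e.g. ug/mL
signal     = np.array([2.1, 3.0, 4.1, 5.0, 5.9])   # instrument response, arbitrary units

slope, intercept = np.polyfit(added_conc, signal, 1)

# The x-intercept (where the fitted signal would reach zero) lies at
# -intercept/slope; its magnitude is the analyte already present in the sample.
original_conc = intercept / slope

print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
print(f"estimated analyte concentration = {original_conc:.2f} ug/mL")
```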
Fast atom bombardment
Fast atom bombardment (FAB) is a method involving using a beam of high energy atoms to strike a surface and generate ions. These solid analyte particles must be dissolved into some form of matrix, or non-volatile liquid to protect and assist in the ionization of the solid analyte. It has been shown that as the matrix is depleted, the ion formation diminishes, so choosing the right matrix compound is vital.
The overall goal of the matrix compound is to present the sample to the atom beam at a high mobile surface concentration. For maximum sensitivity, the sample should form a perfect monolayer at the surface of a substrate having low volatility. This monolayer effect can be seen in that once a certain concentration of analyte in matrix is reached, any concentration above that exhibits no additional effect, because once the monolayer is formed, any additional analyte is beneath the monolayer, and thus not affected by the atom beam. The concentration needed to cause this effect changes as the amount of non-volatile matrix changes. The concentration of solid analyte therefore needs to be considered in the preparation of the solution for analysis so that signal from "hidden" analyte is not missed.
To choose the matrix for each solid analyte, three criteria must be considered. First, it should dissolve the solid compound to be analyzed (with or without the aid of a cosolvent or additive), thus allowing molecules of that compound to diffuse to the surface layers, replenishing the sample molecules that have been ionized or destroyed by interaction with the fast atom beam. Another proposed mechanism for ion formation in FAB involves sputtering from the bulk rather than the surface, but in that case the solubility is still largely important to ensure homogeneity of the solid analyte in the bulk solution. Secondly, the matrix should have a low volatility under the conditions of the mass spectrometer. As mentioned above, as the matrix is depleted, the ionization decreases as well, so maintaining the matrix is vital. Thirdly, the matrix should not react with the solid analyte in question, or if it does react, it should do so in an understood and reproducible way. This ensures reproducibility of analysis and identification of the actual analyte rather than a derivative of the analyte.
The most commonly used compounds as a matrix are variations of glycerol, such as glycerol, deuteroglycerol, thioglycerol, and aminoglycerol. If the sample cannot dissolve in the chosen matrix, such as glycerol, a cosolvent or additive can be mixed with the matrix to facilitate the dissolving of the solid analyte. For example, chlorophyll A is completely insoluble in glycerol, but by mixing in a small amount of Triton X-100, a derivative of polyethylene glycol, the chlorophyll becomes highly soluble within the matrix. It is important to note that though a good signal may be achieved through glycerol or glycerol with an additive, there could be other matrix compounds that can offer an even better signal. Optimization of matrix compounds and concentration of solid analyte are vital for FAB measurements.
Secondary ion mass spectrometry
Secondary ion mass spectrometry (SIMS) is a method very similar to FAB in that a beam of particles is fired against the surface of a sample in order to cause sputtering, in which the molecules of the sample ionize and leave the surface, thus allowing for the ions or the sample to be analyzed. The primary difference is that in SIMS, an ion beam is fired against the surface, but in FAB, an atom beam is fired against the surface. The other primary difference, of more interest to this page, is that, unlike FAB, SIMS is typically performed on a solid sample with little sample preparation required.
The main consideration with SIMS is ensuring that the sample is stable under ultra-high vacuum, or pressures less than 10−8 torr. The nature of the ultra-high vacuum is that it ensures the sample remains constant during analysis as well as ensuring the high energy ion beam strikes the sample. Ultra-high vacuum solves many of the problems that need to be considered during sample preparation. When preparing the sample for analysis, another thing that should be considered is the thickness of the film. Typically, if a thin monolayer can be deposited onto the surface of a noble metal, analysis should be successful. If the film thickness is too large, which is common in real world analysis, the problem can be solved by methods such as depositing a perforated silver foil over a nickel grid onto the film surface. This yields similar results to thin films deposited directly onto a noble metal.
Matrix-assisted laser desorption/ionization
For matrix-assisted laser desorption/ionization (MALDI) mass spectrometry a solid or liquid sample is mixed with a matrix solution, to help the sample avoid processes such as aggregation or precipitation, while helping the sample remain stable during the ionization process. The matrix crystallizes with the sample and is then deposited on a sample plate, which can be made of a range of materials, from inert metals to inert polymers. The matrix containing the sample molecules is then transferred to the gas phase by pulsed laser irradiation. The makeup of the matrix, interactions between the sample and the matrix, and how the sample is deposited are all extremely important during sample preparation to ensure the best possible results.
The selection of a matrix is the first step when preparing samples for MALDI analysis. The primary goals of the matrix are to absorb the energy from a laser, thus transferring it to the analyte molecules, and to separate the analyte molecules from each other. A consideration that should be taken into account when choosing a matrix is what type of analyte ion is expected or desired. Knowing the acidity or basicity of the analyte molecule compared with the acidity or basicity of the matrix, for example, is valuable knowledge when choosing a matrix. The matrix should not compete with the analyte molecule, so the matrix should not want to form the same type of ion as the analyte. For example, if the desired analyte has a high amount of acidity, it would be logical to choose a matrix with a high amount of basicity to avoid competition and facilitate the formation of an ion. The pH of the matrix can also be used to select what sample you want to obtain spectra for. For example, in the case of proteins, a very acidic pH can show very little of the peptide components, but can show very good signal for those components that are larger. If the pH is increased towards a more basic pH, then smaller components become easier to see.
The concentration of salt in the sample is a factor that needs to be considered when preparing a MALDI sample as well. Salts can aid a MALDI spectra by preventing aggregation or precipitation while stabilizing the sample. However, interfering signals can be observed due to side reactions of the matrix with the sample, such as in the case of the matrix interacting with alkali metal ions which can impair the analysis of the spectra. Typically the amount of salt in the matrix only becomes a problem in very high concentrations, such as 1 molar. The problem of having too high a concentration of salt in the sample can be solved by first running the solution through liquid chromatography to help purify the sample, but this method is time-consuming and results in the loss of some of the sample to be analyzed. Another method is focused on purification once the sample solution is deposited onto the sample probe. Many sample probes can be designed to have a membrane on the surface that can selectively bind the sample in question to the probe surface. The surface can then be rinsed off to remove all unnecessary salts or background molecules. The matrix of appropriate salt concentration can then be deposited directly onto the sample on the probe surface and crystallized there. Despite these negative effects of salt concentration, a separate desalting step is usually not necessary in the case of proteins, because the selection of appropriate buffer salts prevents the occurrence of this problem.
How the sample and matrix are deposited on the surface of the sample probe needs to be a consideration in sample preparation as well. The dried drop method is the simplest of deposition methods. The matrix and sample solution are mixed together and then a small drop of the mixture is placed on the sample probe surface and allowed to dry, thus crystallizing. The sandwich method involves depositing a layer of matrix onto the surface of the probe and allowing it to dry. A drop of the sample followed by a drop of additional matrix is then applied to the layer of dried matrix and allowed to dry as well. Variations on the sandwich technique involve depositing the matrix on the surface and then depositing the sample directly on top of the matrix. A particularly useful method involves depositing the matrix solution on the surface of the sample probe in a solvent that will evaporate very rapidly, thus forming a very thin fine layer of matrix. The sample solution is then placed on top of the matrix layer and allowed to evaporate slowly, thus integrating the sample into the top layer of matrix as the sample solution evaporates. An additional concern when depositing the sample on the surface of the probe is the solubility of the sample in the matrix. If the sample is insoluble in the matrix, additional methods must be employed. A method used in this case involves mechanical grinding and mixing of solid sample and solid matrix crystals. Once blended well, this powder can be deposited on the surface of the sample probe in free powder form or as a pill. Another possible method is placing the sample on the surface of the probe and applying vaporized matrix to the sample probe to allow the matrix to condense around the sample.
Electrospray ionization
Electrospray ionization (ESI) is a technique that involves using high voltages to create an electrospray, or a fine aerosol created by the high voltages. ESI sample preparation can be very important and the quality of results can be heavily determined by the characteristics of the sample. ESI experiments can be run on-line or off-line. In on-line measurements the mass spectrometer is connected to a liquid chromatograph and as the samples are separated they are ionized into the mass spectrometer by the ESI system; sample preparation is actually performed before the LC separation. In off-line measurements, the analyte solution is applied directly to the mass spectrometer by a spray capillary . Off-line sample preparation has many considerations, such as the fact that the capillary used allows for the application of volumes in the nanoliter range, which can contain a concentration too small for analysis of many compounds, such as proteins. An additional problem can be loss of ESI signal due to interference between the analyte sample and background components. Unfortunately, it has been shown that sample preparation itself can only slightly alleviate this problem which is due more to the nature of the analyte itself than the preparation. In ESI the principal problem comes not from reactions in the gas phase but rather from problems involving the solution phase of the droplets themselves. Issues can be due to non-volatile substances remaining in the drops, which can change the efficiency of droplet formation or droplet evaporation, which in turn affects the amount of charged ions in the gas phase that ultimately reach the mass spectrometer. These problems can be fixed in multiple ways, including increasing the amount of concentration of analyte compared to matrix in the sample solution or by running the sample through a more extensive chromatographic technique before analysis. An example of a chromatographic technique that can aid in signal in ESI involves using 2-D liquid chromatography, or running the sample through two separate chromatography columns, giving better separation of the analyte from the matrix.
ESI variations
There are some ESI methods that require little to no sample preparation. One such method is termed extractive electrospray ionization (EESI). This method involves having an electrospray of solvent directed at an angle against a different spray of the sample solution, produced by a separate nebulizer. This method requires no sample preparation in that the electrospray of solvent extracts the sample from the complex mixture, effectively removing any background contaminants. Another particularly powerful variation on ESI is desorption electrospray ionization (DESI), which involves directing an electrospray at a surface with the sample deposited on top of it. The sample is ionized in the electrospray as it splashes off the surface, before traveling to the mass spectrometer. This method is important because no sample preparation is needed. A sample simply needs to be deposited on a surface, such as paper. Atmospheric pressure chemical ionization (APCI) is similar to ESI in that the sample is nebulized in droplets that are then evaporated, leaving behind a charged ion to be analyzed. APCI experiences few of the negative matrix effects experienced by ESI because ionization occurs in the gas phase in this method rather than within the liquid droplets as in ESI, and because in APCI there is an overabundance of reaction gas, thus minimizing the effect of the matrix on the ionization process.
Protein ESI
A major application for ESI is the field of protein mass spectrometry. Here, the MS is used for the identification and sizing of proteins. The identification of a protein sample can be done in an ESI-MS by de novo peptide sequencing (using tandem mass spectrometry) or peptide mass fingerprinting. Both methods require the previous digestion of proteins to peptides, mostly accomplished enzymatically using proteases. Buffered solutions are needed both for digestion in solution and for in-gel digestion, but their salt content is too high and their analyte content too low for a successful ESI-MS measurement. Therefore, a combined desalting and concentration step is performed. Usually a reversed phase liquid chromatography is used, in which the peptides stay bound to the chromatography matrix whereas the salts are removed by washing. The peptides can be eluted from the matrix by the use of a small volume of a solution containing a large portion of organic solvent, which results in the reduction of the final volume of the analyte. In LC-MS the desalting/concentration is realized with a pre-column; in off-line measurements reversed phase micro columns are used, which can be used directly with microliter pipettes. Here, the peptides are eluted with the spray solution containing an appropriate portion of organic solvent. The resulting solution (usually a few microliters) is enriched with the analyte and, after transfer to the spray capillary, can be directly used in the MS.
See also
In-gel digestion
References
Mass spectrometry
Proteomics | Sample preparation in mass spectrometry | [
"Physics",
"Chemistry"
] | 4,876 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
15,878,680 | https://en.wikipedia.org/wiki/Butler%E2%80%93Volmer%20equation | In electrochemistry, the Butler–Volmer equation (named after John Alfred Valentine Butler and Max Volmer), also known as the Erdey-Grúz–Volmer equation, is one of the most fundamental relationships in electrochemical kinetics. It describes how the electrical current through an electrode depends on the voltage difference between the electrode and the bulk electrolyte for a simple, unimolecular redox reaction, considering that both a cathodic and an anodic reaction occur on the same electrode:
O + ne− ⇌ R
The Butler–Volmer equation
The Butler–Volmer equation is:
j = jo { exp[αa n F (E − Eeq) / (R T)] − exp[−αc n F (E − Eeq) / (R T)] }
or in a more compact form:
j = jo { exp(αa n F η / (R T)) − exp(−αc n F η / (R T)) }
where:
j : electrode current density, A/m2 (defined as j = I/S)
jo : exchange current density, A/m2
E : electrode potential, V
Eeq : equilibrium potential, V
T : absolute temperature, K
n : number of electrons involved in the electrode reaction
F : Faraday constant
R : universal gas constant
αc : so-called cathodic charge transfer coefficient, dimensionless
αa : so-called anodic charge transfer coefficient, dimensionless
η : activation overpotential (defined as η = E − Eeq).
The right hand figure shows plots valid for .
The limiting cases
There are two limiting cases of the Butler–Volmer equation:
the low overpotential region (called "polarization resistance", i.e., when E ≈ Eeq), where the Butler–Volmer equation simplifies to:
j = jo (αa + αc) n F (E − Eeq) / (R T);
the high overpotential region, where the Butler–Volmer equation simplifies to the Tafel equation. When E >> Eeq, the first (anodic) term dominates, and when E << Eeq, the second (cathodic) term dominates.
j = −jo exp[−αc n F (E − Eeq) / (R T)] for a cathodic reaction, when E << Eeq, or
j = jo exp[αa n F (E − Eeq) / (R T)] for an anodic reaction, when E >> Eeq
where a and b are constants (for a given reaction and temperature) and are called the Tafel equation constants. The theoretical values of the Tafel equation constants are different for the cathodic and anodic processes. However, the Tafel slope can be defined as:
b = ∂E / ∂(ln |jf|)
where jf is the faradaic current, expressed as jf = jc + ja, with jc and ja being the cathodic and anodic partial currents, respectively.
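As a numerical illustration of the equation and the two limiting cases above, the following sketch evaluates the current density over a range of overpotentials and compares it with the linearized and Tafel approximations; the exchange current density, transfer coefficients and temperature are arbitrary example values, not taken from any particular system.

```python
# Butler-Volmer current density and its low- and high-overpotential limits.
# Parameter values are arbitrary illustrations.
import numpy as np

F = 96485.332    # Faraday constant, C/mol
R = 8.314462     # gas constant, J/(mol K)
T = 298.15       # temperature, K
n = 1            # electrons transferred
alpha_a = 0.5    # anodic charge transfer coefficient
alpha_c = 0.5    # cathodic charge transfer coefficient
j0 = 1e-3        # exchange current density, A/m^2

def butler_volmer(eta):
    """Net current density for activation overpotential eta = E - Eeq (V)."""
    f = n * F / (R * T)
    return j0 * (np.exp(alpha_a * f * eta) - np.exp(-alpha_c * f * eta))

def linearized(eta):
    """Low-overpotential (polarization resistance) approximation."""
    return j0 * (alpha_a + alpha_c) * n * F * eta / (R * T)

def tafel_anodic(eta):
    """High anodic overpotential (Tafel) approximation."""
    return j0 * np.exp(alpha_a * n * F * eta / (R * T))

for eta in (0.005, 0.05, 0.25):
    print(f"eta = {eta:5.3f} V: BV = {butler_volmer(eta):.3e} A/m^2, "
          f"linear = {linearized(eta):.3e}, Tafel(anodic) = {tafel_anodic(eta):.3e}")
```

At 5 mV the linearized form matches the full equation closely, while at 250 mV the anodic Tafel term alone reproduces it, as described above.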
The extended Butler–Volmer equation
The more general form of the Butler–Volmer equation, applicable to mass transfer-influenced conditions, can be written as:
j = jo { [cr(0,t)/cr*] exp[αa n F η / (R T)] − [co(0,t)/co*] exp[−αc n F η / (R T)] }
where:
j is the current density, A/m2,
co and cr refer to the concentrations of the oxidized and reduced species, respectively,
c(0,t) is the time-dependent concentration at distance zero from the surface of the electrode, and co* and cr* are the corresponding bulk concentrations.
The above form simplifies to the conventional one (shown at the top of the article) when the concentration of the electroactive species at the surface is equal to that in the bulk.
There are two rates which determine the current-voltage relationship for an electrode. The first is the rate of the chemical reaction at the electrode, which consumes reactants and produces products; this is known as the charge transfer rate. The second is the rate at which reactants are provided, and products removed, from the electrode region by various processes including diffusion, migration, and convection; this is known as the mass-transfer rate. These two rates determine the concentrations of the reactants and products at the electrode, which in turn influence the rates themselves. The slowest of these rates will determine the overall rate of the process.
The simple Butler–Volmer equation assumes that the concentrations at the electrode are practically equal to the concentrations in the bulk electrolyte, allowing the current to be expressed as a function of potential only. In other words, it assumes that the mass transfer rate is much greater than the reaction rate, and that the reaction is dominated by the slower chemical reaction rate. Despite this limitation, the utility of the Butler–Volmer equation in electrochemistry is wide, and it is often considered to be "central in the phenomenological electrode kinetics".
The extended Butler–Volmer equation does not make this assumption, but rather takes the concentrations at the electrode as given, yielding a relationship in which the current is expressed as a function not only of potential, but of the given concentrations as well. The mass-transfer rate may be relatively small, but its only effect on the chemical reaction is through the altered (given) concentrations. In effect, the concentrations are a function of the potential as well. A full treatment, which yields the current as a function of potential only, will be expressed by the extended Butler–Volmer equation, but will require explicit inclusion of mass transfer effects in order to express the concentrations as functions of the potential.
Derivation
General expression
The following derivation of the extended Butler–Volmer equation is adapted from that of Bard and Faulkner and Newman and Thomas-Alyea. For a simple unimolecular, one-step reaction of the form:
O+ne− → R
The forward and backward reaction rates (vf and vb) and, from Faraday's laws of electrolysis, the associated electrical current densities (j), may be written as:
where kf and kb are the reaction rate constants, with units of frequency (1/time) and co and cr are the surface concentrations (mol/area) of the oxidized and reduced molecules, respectively (written as co(0,t) and cr(0,t) in the previous section). The net rate of reaction v and net current density j are then:
The figure above plots various Gibbs energy curves as a function of the reaction coordinate ξ. The reaction coordinate is roughly a measure of distance, with the body of the electrode being on the left, the bulk solution being on the right. The blue energy curve shows the increase in Gibbs energy for an oxidized molecule as it moves closer to the surface of the electrode when no potential is applied. The black energy curve shows the increase in Gibbs energy as a reduced molecule moves closer to the electrode. The two energy curves intersect at . Applying a potential E to the electrode will move the energy curve downward (to the red curve) by nFE and the intersection point will move to . and are the activation energies (energy barriers) to be overcome by the oxidized and reduced species respectively for a general E, while and are the activation energies for E=0.
Assume that the rate constants are well approximated by an Arrhenius equation,
where the Af and Ab are constants such that Af co = Ab cr is the "correctly oriented" O-R collision frequency, and the exponential term (Boltzmann factor) is the fraction of those collisions with sufficient energy to overcome the barrier and react.
Assuming that the energy curves are practically linear in the transition region, they may be represented there by:
(linear expressions for the blue, red, and black energy curves)
The charge transfer coefficient for this simple case is equivalent to the symmetry factor, and can be expressed in terms of the slopes of the energy curves:
It follows that:
For conciseness, define:
The rate constants can now be expressed as:
where the rate constants at zero potential are:
The current density j as a function of applied potential E may now be written:
Expression in terms of the equilibrium potential
At a certain voltage Ee, equilibrium will attain and the forward and backward rates (vf and vb) will be equal. This is represented by the green curve in the above figure. The equilibrium rate constants will be written as kfe and kbe, and the equilibrium concentrations will be written coe and cre. The equilibrium currents (jce and jae) will be equal and are written as jo, which is known as the exchange current density.
Note that the net current density at equilibrium will be zero. The equilibrium rate constants are then:
Solving the above for kfo and kbo in terms of the equilibrium concentrations coe and cre and the exchange current density jo, the current density j as a function of applied potential E may now be written:
Assuming that equilibrium holds in the bulk solution, with bulk concentrations co* and cr*, it follows that coe = co* and cre = cr*, and the above expression for the current density j is then the Butler–Volmer equation. Note that E − Ee is also known as η, the activation overpotential.
Expression in terms of the formal potential
For the simple reaction, the change in Gibbs energy is:
where aoe and are are the activities at equilibrium. The activities a are related to the concentrations c by a=γc where γ is the activity coefficient. The equilibrium potential is given by the Nernst equation:
Ee = E0 + (R T / (n F)) ln(aoe / are)
where E0 is the standard potential.
Defining the formal potential:
E0' = E0 + (R T / (n F)) ln(γoe / γre)
the equilibrium potential is then:
Ee = E0' + (R T / (n F)) ln(coe / cre)
Substituting this equilibrium potential into the Butler–Volmer equation yields:
which may also be written in terms of the standard rate constant ko as:
The standard rate constant is an important descriptor of electrode behavior, independent of concentrations. It is a measure of the rate at which the system will approach equilibrium. In the limit as ko → 0, the electrode becomes an ideal polarizable electrode and will behave electrically as an open circuit (neglecting capacitance). For nearly ideal electrodes with small ko, large changes in the overpotential are required to generate a significant current. In the limit as ko → ∞, the electrode becomes an ideal non-polarizable electrode and will behave as an electrical short. For nearly ideal electrodes with large ko, small changes in the overpotential will generate large changes in current.
See also
Advanced Simulation Library
Nernst equation
Goldman equation
Tafel equation
Notes
References
External links
Chemical kinetics
Electrochemical equations
Physical chemistry | Butler–Volmer equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,973 | [
"Chemical reaction engineering",
"Applied and interdisciplinary physics",
"Mathematical objects",
"Equations",
"Electrochemistry",
"nan",
"Chemical kinetics",
"Physical chemistry",
"Electrochemical equations"
] |
15,879,602 | https://en.wikipedia.org/wiki/SN%201999eu | SN 1999eu was a type IIP supernova that happened in NGC 1097, a barred spiral galaxy about 45 million light years away, in the constellation Fornax. It was discovered 5 November 1999, possibly three months after its initial brightening, and is unusually under-luminous for a type II supernova.
References
External links
Light curves and spectra on the Open Supernova Catalog
Supernovae
SN 1999eu
Astronomical objects discovered in 1999 | SN 1999eu | [
"Chemistry",
"Astronomy"
] | 92 | [
"Supernovae",
"Astronomical events",
"Constellations",
"Explosions",
"Fornax"
] |
15,881,178 | https://en.wikipedia.org/wiki/Displacement%20chromatography | Displacement chromatography is a chromatography technique in which a sample is placed onto the head of the column and is then displaced by a solute that is more strongly sorbed than the components of the original mixture. The result is that the components are resolved into consecutive "rectangular" zones of highly concentrated pure substances rather than solvent-separated "peaks". It is primarily a preparative technique; higher product concentration, higher purity, and increased throughput may be obtained compared to other modes of chromatography.
Discovery
The advent of displacement chromatography can be attributed to Arne Tiselius, who in 1943 first classified the modes of chromatography as frontal, elution, and displacement. Displacement chromatography found a variety of applications including isolation of transuranic elements and biochemical entities.
The technique was redeveloped by Csaba Horváth, who employed modern high-pressure columns and equipment. It has since found many applications, particularly in the realm of biological macromolecule purification.
Principle
The basic principle of displacement chromatography is: there are only a finite number of binding sites for solutes on the matrix (the stationary phase), and if a site is occupied by one molecule, it is unavailable to others. As in any chromatography, equilibrium is established between molecules of a given kind bound to the matrix and those of the same kind free in solution. Because the number of binding sites is finite, when the concentration of molecules free in solution is large relative to the dissociation constant for the sites, those sites will mostly be filled. This results in a downward-curvature in the plot of bound vs free solute, in the simplest case giving a Langmuir isotherm. A molecule with a high affinity for the matrix (the displacer) will compete more effectively for binding sites, leaving the mobile phase enriched in the lower-affinity solute. Flow of mobile phase through the column preferentially carries off the lower-affinity solute and thus at high concentration the higher-affinity solute will eventually displace all molecules with lesser affinities.
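The competition for a finite number of binding sites described above can be illustrated with a multicomponent (competitive) Langmuir isotherm. The sketch below uses invented affinity constants and concentrations rather than data for any real column; it simply shows how a concentrated, high-affinity displacer claims most of the stationary-phase capacity at the expense of the weaker-binding solutes.

```python
# Competitive Langmuir isotherm: fraction of a finite number of binding sites
# occupied by each solute. Affinities (K, per mM) and mobile-phase
# concentrations (mM) are invented for illustration.

def competitive_langmuir(K, c):
    """Fractional site occupancy for each component under competition."""
    denom = 1.0 + sum(K[name] * c[name] for name in K)
    return {name: K[name] * c[name] / denom for name in K}

K = {"solute A": 2.0, "solute B": 5.0, "displacer": 50.0}

# Loading: the feed contains the two solutes but no displacer yet.
c_load = {"solute A": 1.0, "solute B": 1.0, "displacer": 0.0}
print("during loading:    ",
      {k: round(v, 3) for k, v in competitive_langmuir(K, c_load).items()})

# Displacement: the feed is switched to a concentrated, high-affinity displacer.
c_disp = {"solute A": 1.0, "solute B": 1.0, "displacer": 10.0}
print("with displacer fed:",
      {k: round(v, 3) for k, v in competitive_langmuir(K, c_disp).items()})
```

With the displacer present, its occupancy rises to nearly all of the capacity while the solutes are pushed back into the mobile phase, which is the driving mechanism of the displacement train described below.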
Mode of operation
Loading
At the beginning of the run, a mixture of solutes to be separated is applied to the column, under conditions selected to promote high retention. The higher-affinity solutes are preferentially retained near the head of the column, with the lower-affinity solutes moving farther downstream. The fastest moving component begins to form a pure zone downstream. The other components also begin to form zones, but the continued supply of the mixed feed at head of the column prevents full resolution.
Displacement
After the entire sample is loaded, the feed is switched to the displacer, chosen to have higher affinity than any sample component. The displacer forms a sharp-edged zone at the head of the column, pushing the other components downstream. Each sample component now acts as a displacer for the lower-affinity solutes, and the solutes sort themselves out into a series of contiguous bands (a "displacement train"), all moving downstream at the rate set by the displacer. The size and loading of the column are chosen to let this sorting process reach completion before the components reach the bottom of the column. The solutes appear at the bottom of the column as a series of contiguous zones, each consisting of one purified component, with the concentration within each individual zone effectively uniform.
Regeneration
After the last solute has been eluted, it is necessary to strip the displacer from the column. Since the displacer was chosen for high affinity, this can pose a challenge. On reverse-phase materials, a wash with a high percentage of organic solvent may suffice. Large pH shifts are also often employed. One effective strategy is to remove the displacer by chemical reaction; for instance if hydrogen ion was used as displacer it can be removed by reaction with hydroxide, or a polyvalent metal ion can be removed by reaction with a chelating agent. For some matrices, reactive groups on the stationary phase can be titrated to temporarily eliminate the binding sites, for instance weak-acid ion exchangers or chelating resins can be converted to the protonated form. For gel-type ion exchangers, selectivity reversal at very high ionic strength can also provide a solution. Sometimes the displacer is specifically designed with a titratable functional group to shift its affinity. After the displacer is washed out, the column is washed as needed to restore it to its initial state for the next run.
Comparison with elution chromatography
Common fundamentals
In any form of chromatography, the rate at which the solute moves down the column is a direct reflection of the percentage of time the solute spends in the mobile phase. To achieve separation in either elution or displacement chromatography, there must be appreciable differences in the affinity of the respective solutes for the stationary phase. Both methods rely on movement down the column to amplify the effect of small differences in distribution between the two phases. Distribution between the mobile and stationary phases is described by the binding isotherm, a plot of solute bound to (or partitioned into) the stationary phase as a function of concentration in the mobile phase. The isotherm is often linear, or approximately so, at low concentrations, but commonly curves (concave-downward) at higher concentrations as the stationary phase becomes saturated.
Characteristics of elution mode
In elution mode, solutes are applied to the column as narrow bands and, at low concentration, move down the column as approximately Gaussian peaks. These peaks continue to broaden as they travel, in proportion to the square root of the distance traveled. For two substances to be resolved, they must migrate down the column at sufficiently different rates to overcome the effects of band spreading. Operating at high concentration, where the isotherm is curved, is disadvantageous in elution chromatography because the rate of travel then depends on concentration, causing the peaks to spread and distort.
Retention in elution chromatography is usually controlled by adjusting the composition of the mobile phase (in terms of solvent composition, pH, ionic strength, and so forth) according to the type of stationary phase employed and the particular solutes to be separated. The mobile phase components generally have lower affinity for the stationary phase than do the solutes being separated, but are present at higher concentration and achieve their effects due to mass action. Resolution in elution chromatography is generally better when peaks are strongly retained, but conditions that give good resolution of early peaks lead to long run-times and excessive broadening of later peaks unless gradient elution is employed. Gradient equipment adds complexity and expense, particularly at large scale.
Advantages and disadvantages of displacement mode
In contrast to elution chromatography, solutes separated in displacement mode form sharp-edged zones rather than spreading peaks. Zone boundaries in displacement chromatography are self-sharpening: if a molecule for some reason gets ahead of its band, it enters a zone in which it is more strongly retained, and will then run more slowly until its zone catches up. Furthermore, because displacement chromatography takes advantage of the non-linearity of the isotherms, loadings are deliberately high; more material can be separated on a given column, in a given time, with the purified components recovered at significantly higher concentrations. Retention conditions can still be adjusted, but the displacer controls the migration rate of the solutes. The displacer is selected to have higher affinity for the stationary phase than does any of the solutes being separated, and its concentration is set to approach saturation of the stationary phase and to give the desired migration rate of the concentration wave. High-retention conditions can be employed without gradient operation, because the displacer ensures removal of all solutes of interest in the designed run time.
Because of the concentrating effect of loading the column under high-retention conditions, displacement chromatography is well suited to purify components from dilute feed streams. However, it is also possible to concentrate material from a dilute stream at the head of a chromatographic column and then switch conditions to elute the adsorbed material in conventional isocratic or gradient modes. Therefore, this approach is not unique to displacement chromatography, although the higher loading capacity and less dilution allow greater concentration in displacement mode.
A disadvantage of displacement chromatography is that non-idealities always give rise to an overlap zone between each pair of components; this mixed zone must be collected separately for recycle or discard to preserve the purity of the separated materials. The strategy of adding spacer molecules to form zones between the components (sometimes termed "carrier displacement chromatography") has been investigated and can be useful when suitable, readily removable spacers are found. Another disadvantage is that the raw chromatogram, for instance a plot of absorbance or refractive index vs elution volume, can be difficult to interpret for contiguous zones, especially if the displacement train is not fully developed. Documentation and troubleshooting may require additional chemical analysis to establish the distribution of a given component. Another disadvantage is that the time required for regeneration limits throughput.
According to John C. Ford's article in the Encyclopedia of Chromatography, theoretical studies indicate that at least for some systems, optimized overloaded elution chromatography offers higher throughput than displacement chromatography, though limited experimental tests suggest that displacement chromatography is superior (at least before consideration of regeneration time).
Applications
Historically, displacement chromatography was applied to preparative separations of amino acids and rare earth elements and has also been investigated for isotope separation.
Proteins
The chromatographic purification of proteins from complex mixtures can be quite challenging, particularly when the mixtures contain similarly retained proteins or when it is desired to enrich trace components in the feed. Further, column loading is often limited when high resolutions are required using traditional modes of chromatography (e.g. linear gradient, isocratic chromatography). In these cases, displacement chromatography is an efficient technique for the purification of proteins from complex mixtures at high column loadings in a variety of applications.
An important advance in the state of the art of displacement chromatography was the development of low molecular mass displacers for protein purification in ion exchange systems. This research was significant in that it represented a major departure from the conventional wisdom that large polyelectrolyte polymers are required to displace proteins in ion exchange systems.
Low molecular mass displacers have significant operational advantages as compared to large polyelectrolyte displacers. For example, if there is any overlap between the displacer and the protein of interest, these low molecular mass materials can be readily separated from the purified protein during post-displacement processing using standard size-based purification methods (e.g. size exclusion chromatography, ultrafiltration). In addition, the salt-dependent adsorption behavior of these low MW displacers greatly facilitates column regeneration. These displacers have been employed for a wide variety of high resolution separations in ion exchange systems. In addition, the utility of displacement chromatography for the purification of recombinant growth factors, antigenic vaccine proteins and antisense oligonucleotides has also been demonstrated. There are several examples in which displacement chromatography has been applied to the purification of proteins using ion exchange, hydrophobic interaction, as well as reversed-phase chromatography.
Displacement chromatography is well suited for obtaining mg quantities of purified proteins from complex mixtures using standard analytical chromatography columns at the bench scale. It is also particularly well suited for enriching trace components in the feed. Displacement chromatography can be readily carried out using a variety of resin systems including, ion exchange, HIC and RPLC.
Two-dimensional chromatography
Two-dimensional chromatography represents the most thorough and rigorous approach to evaluation of the proteome. While previously accepted workflows have used elution-mode steps such as cation exchange followed by reversed-phase HPLC, yields are typically very low, requiring analytical sensitivities in the picomolar to femtomolar range. As displacement chromatography offers the advantage of concentrating trace components, two-dimensional chromatography using displacement rather than elution mode in the upstream chromatography step represents a potentially powerful tool for the analysis of trace components and modifications, and for the identification of minor expressed components of the proteome.
Notes
References
Chromatography | Displacement chromatography | [
"Chemistry"
] | 2,653 | [
"Chromatography",
"Separation processes"
] |
11,694,119 | https://en.wikipedia.org/wiki/Inverse%20second | The inverse second or reciprocal second (s−1), also called per second, is a unit defined as the multiplicative inverse of the second (a unit of time). It is applicable for physical quantities of dimension reciprocal time, such as frequency and strain rate.
It is dimensionally equivalent to:
hertz (Hz), historically known as cycles per second – the SI unit for frequency and rotational frequency
becquerel (Bq) – the SI unit for the rate of occurrence of aperiodic or stochastic radionuclide events
baud (Bd) – the unit for symbol rate over a communication link
bit per second (bit/s) – the unit of bit rate
However, for clarity, the special names and symbols above are recommended when s−1 is used for the corresponding quantities.
Reciprocal second should not be confused with radian per second (rad⋅s−1), the SI unit for angular frequency and angular velocity. As the radian is a dimensionless unit, radian per second is dimensionally consistent with reciprocal second. However, they are used for different kinds of quantity, frequency and angular frequency, whose numerical values differ by a factor of 2π.
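For reference, the well-known relation behind that factor (reconstructed here, not reproduced in the text above) connects ordinary frequency f in hertz and angular frequency ω in radians per second:
\[ \omega = 2\pi f . \]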
The inverse minute or reciprocal minute (min−1), also called per minute, is 60−1 s−1, as 1 min = 60 s; it is used in quantities of type "counts per minute", such as:
Actions per minute
Beats per minute
Counts per minute
Revolutions per minute (rpm)
Words per minute
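As a minimal sketch of the per-minute units listed above (the helper functions below are hypothetical, written for illustration rather than taken from any library), a per-minute rate converts to s−1 by dividing by 60, and a rotational rate additionally converts to rad/s by the factor of 2π noted above:

```python
import math

def per_minute_to_per_second(rate_per_min: float) -> float:
    # 1 min^-1 = (1/60) s^-1
    return rate_per_min / 60.0

def rpm_to_rad_per_s(rpm: float) -> float:
    # revolutions per second times 2*pi radians per revolution
    return per_minute_to_per_second(rpm) * 2.0 * math.pi

print(per_minute_to_per_second(3000))   # 3000 rpm -> 50 s^-1 (50 Hz)
print(rpm_to_rad_per_s(3000))           # about 314.16 rad/s
```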
Inverse square second (s−2) is involved in the units of linear acceleration, angular acceleration, and rotational acceleration.
See also
Aperiodic frequency
Inverse metre
Reciprocal length
Unit of time
Notes
References
Units of frequency | Inverse second | [
"Mathematics"
] | 339 | [
"Quantity",
"Units of frequency",
"Units of measurement"
] |
11,694,610 | https://en.wikipedia.org/wiki/Two-body%20problem%20in%20general%20relativity | The two-body problem in general relativity (or relativistic two-body problem) is the determination of the motion and gravitational field of two bodies as described by the field equations of general relativity. Solving the Kepler problem is essential to calculate the bending of light by gravity and the motion of a planet orbiting its sun. Solutions are also used to describe the motion of binary stars around each other, and estimate their gradual loss of energy through gravitational radiation.
General relativity describes the gravitational field by curved space-time; the field equations governing this curvature are nonlinear and therefore difficult to solve in a closed form. No exact solutions of the Kepler problem have been found, but an approximate solution has: the Schwarzschild solution. This solution pertains when the mass M of one body is overwhelmingly greater than the mass m of the other. If so, the larger mass may be taken as stationary and the sole contributor to the gravitational field. This is a good approximation for a photon passing a star and for a planet orbiting its sun. The motion of the lighter body (called the "particle" below) can then be determined from the Schwarzschild solution; the motion is a geodesic ("shortest path between two points") in the curved space-time. Such geodesic solutions account for the anomalous precession of the planet Mercury, which is a key piece of evidence supporting the theory of general relativity. They also describe the bending of light in a gravitational field, another prediction famously used as evidence for general relativity.
If both masses are considered to contribute to the gravitational field, as in binary stars, the Kepler problem can be solved only approximately. The earliest approximation method to be developed was the post-Newtonian expansion, an iterative method in which an initial solution is gradually corrected. More recently, it has become possible to solve Einstein's field equation using a computer instead of mathematical formulae. As the two bodies orbit each other, they will emit gravitational radiation; this causes them to lose energy and angular momentum gradually, as illustrated by the binary pulsar PSR B1913+16.
For binary black holes, the numerical solution of the two-body problem was achieved after four decades of research in 2005 when three groups devised breakthrough techniques.
Historical context
Classical Kepler problem
The Kepler problem derives its name from Johannes Kepler, who worked as an assistant to the Danish astronomer Tycho Brahe. Brahe took extraordinarily accurate measurements of the motion of the planets of the Solar System. From these measurements, Kepler was able to formulate Kepler's laws, the first modern description of planetary motion:
The orbit of every planet is an ellipse with the Sun at one of the two foci.
A line joining a planet and the Sun sweeps out equal areas during equal intervals of time.
The square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit.
Kepler published the first two laws in 1609 and the third law in 1619. They supplanted earlier models of the Solar System, such as those of Ptolemy and Copernicus. Kepler's laws apply only in the limited case of the two-body problem. Voltaire and Émilie du Châtelet were the first to call them "Kepler's laws".
Nearly a century later, Isaac Newton formulated his three laws of motion. In particular, Newton's second law states that a force F applied to a mass m produces an acceleration a given by the equation F=ma. Newton then posed the question: what must the force be that produces the elliptical orbits seen by Kepler? His answer came in his law of universal gravitation, which states that the force between a mass M and another mass m is given by the formula
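The displayed formula itself does not survive in this extract; Newton's law of universal gravitation in its standard form is
\[ F = \frac{G M m}{r^2}, \]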
where r is the distance between the masses and G is the gravitational constant. Given this force law and his equations of motion, Newton was able to show that two point masses attracting each other would each follow perfectly elliptical orbits. The ratio of sizes of these ellipses is m/M, with the larger mass moving on a smaller ellipse. If M is much larger than m, then the larger mass will appear to be stationary at the focus of the elliptical orbit of the lighter mass m. This model can be applied approximately to the Solar System. Since the mass of the Sun is much larger than those of the planets, the force acting on each planet is principally due to the Sun; the gravity of the planets for each other can be neglected to first approximation.
Apsidal precession
If the potential energy between the two bodies is not exactly the 1/r potential of Newton's gravitational law but differs only slightly, then the ellipse of the orbit gradually rotates (among other possible effects). This apsidal precession is observed for all the planets orbiting the Sun, primarily due to the oblateness of the Sun (it is not perfectly spherical) and the attractions of the other planets to one another. The apsides are the two points of closest and furthest distance of the orbit (the periapsis and apoapsis, respectively); apsidal precession corresponds to the rotation of the line joining the apsides. It also corresponds to the rotation of the Laplace–Runge–Lenz vector, which points along the line of apsides.
Newton's law of gravitation soon became accepted because it gave very accurate predictions of the motion of all the planets. These calculations were carried out initially by Pierre-Simon Laplace in the late 18th century, and refined by Félix Tisserand in the later 19th century. Conversely, if Newton's law of gravitation did not predict the apsidal precessions of the planets accurately, it would have to be discarded as a theory of gravitation. Such an anomalous precession was observed in the second half of the 19th century.
Anomalous precession of Mercury
In 1859, Urbain Le Verrier discovered that the orbital precession of the planet Mercury was not quite what it should be; the ellipse of its orbit was rotating (precessing) slightly faster than predicted by the traditional theory of Newtonian gravity, even after all the effects of the other planets had been accounted for. The effect is small (roughly 43 arcseconds of rotation per century), but well above the measurement error (roughly 0.1 arcseconds per century). Le Verrier realized the importance of his discovery immediately, and challenged astronomers and physicists alike to account for it. Several classical explanations were proposed, such as interplanetary dust, unobserved oblateness of the Sun, an undetected moon of Mercury, or a new planet named Vulcan. After these explanations were discounted, some physicists were driven to the more radical hypothesis that Newton's inverse-square law of gravitation was incorrect. For example, some physicists proposed a power law with an exponent that was slightly different from 2.
Others argued that Newton's law should be supplemented with a velocity-dependent potential. However, this implied a conflict with Newtonian celestial dynamics. In his treatise on celestial mechanics, Laplace had shown that if the gravitational influence does not act instantaneously, then the motions of the planets themselves will not exactly conserve momentum (and consequently some of the momentum would have to be ascribed to the mediator of the gravitational interaction, analogous to ascribing momentum to the mediator of the electromagnetic interaction.) As seen from a Newtonian point of view, if gravitational influence does propagate at a finite speed, then at all points in time a planet is attracted to a point where the Sun was some time before, and not towards the instantaneous position of the Sun. On the assumption of the classical fundamentals, Laplace had shown that if gravity would propagate at a velocity on the order of the speed of light then the solar system would be unstable, and would not exist for a long time. The observation that the solar system is old enough allowed him to put a lower limit on the speed of gravity that turned out to be many orders of magnitude faster than the speed of light.
Laplace's estimate for the speed of gravity is not correct in a field theory which respects the principle of relativity. Since electric and magnetic fields combine, the attraction of a point charge which is moving at a constant velocity is towards the extrapolated instantaneous position, not to the apparent position it seems to occupy when looked at. To avoid those problems, between 1870 and 1900 many scientists used the electrodynamic laws of Wilhelm Eduard Weber, Carl Friedrich Gauss, and Bernhard Riemann to produce stable orbits and to explain the perihelion shift of Mercury's orbit. In 1890, Maurice Lévy succeeded in doing so by combining the laws of Weber and Riemann, whereby the speed of gravity is equal to the speed of light in his theory. In another attempt, Paul Gerber (1898) even succeeded in deriving the correct formula for the perihelion shift (which was identical to the formula later used by Einstein). However, because the basic laws of Weber and others were wrong (for example, Weber's law was superseded by Maxwell's theory), those hypotheses were rejected. Another attempt by Hendrik Lorentz (1900), who already used Maxwell's theory, produced a perihelion shift which was too low.
Einstein's theory of general relativity
Around 1904–1905, the works of Hendrik Lorentz, Henri Poincaré, and finally Albert Einstein's special theory of relativity excluded the possibility of propagation of any effects faster than the speed of light. It followed that Newton's law of gravitation would have to be replaced with another law, compatible with the principle of relativity, while still obtaining the Newtonian limit for circumstances where relativistic effects are negligible. Such attempts were made by Henri Poincaré (1905), Hermann Minkowski (1907) and Arnold Sommerfeld (1910). In 1907, Einstein concluded that a successor to special relativity was needed to achieve this. From 1907 to 1915, Einstein worked towards a new theory, using his equivalence principle as a key concept to guide his way. According to this principle, a uniform gravitational field acts equally on everything within it and, therefore, cannot be detected by a free-falling observer. Conversely, all local gravitational effects should be reproducible in a linearly accelerating reference frame, and vice versa. Thus, gravity acts like a fictitious force such as the centrifugal force or the Coriolis force, which result from being in an accelerated reference frame; all fictitious forces are proportional to the inertial mass, just as gravity is. To effect the reconciliation of gravity and special relativity and to incorporate the equivalence principle, something had to be sacrificed; that something was the long-held classical assumption that our space obeys the laws of Euclidean geometry, e.g., that the Pythagorean theorem is true experimentally. Einstein used a more general geometry, pseudo-Riemannian geometry, to allow for the curvature of space and time that was necessary for the reconciliation; after eight years of work (1907–1915), he succeeded in discovering the precise way in which space-time should be curved in order to reproduce the physical laws observed in Nature, particularly gravitation. Gravity is distinct from the fictitious centrifugal and Coriolis forces in the sense that the curvature of spacetime is regarded as physically real, whereas the fictitious forces are not regarded as forces. The very first solutions of his field equations explained the anomalous precession of Mercury and predicted an unusual bending of light, which was confirmed after his theory was published. These solutions are explained below.
General relativity, special relativity and geometry
In normal Euclidean geometry, triangles obey the Pythagorean theorem, which states that the squared distance ds2 between two points in space is the sum of the squares of its perpendicular components
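The displayed formula does not survive in this extract; its standard form is
\[ ds^2 = dx^2 + dy^2 + dz^2, \]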
where dx, dy and dz represent the infinitesimal differences between the x, y and z coordinates of two points in a Cartesian coordinate system. Now imagine a world in which this is not quite true; a world where the distance is instead given by
where F, G and H are arbitrary functions of position. It is not hard to imagine such a world; we live on one. The surface of the earth is curved, which is why it is impossible to make a perfectly accurate flat map of the earth. Non-Cartesian coordinate systems illustrate this well; for example, in the spherical coordinates (r, θ, φ), the Euclidean distance can be written
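The spherical-coordinate expression referred to here is missing from this extract; the standard Euclidean line element in these coordinates is
\[ ds^2 = dr^2 + r^2\, d\theta^2 + r^2 \sin^2\theta\, d\varphi^2 . \]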
Another illustration would be a world in which the rulers used to measure length were untrustworthy, rulers that changed their length with their position and even their orientation. In the most general case, one must allow for cross-terms when calculating the distance ds
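The general expression with cross-terms is not reproduced here; written compactly (a reconstruction consistent with the description that follows), it is
\[ ds^2 = \sum_{i,j\,\in\,\{x,y,z\}} g_{ij}\, dx^i\, dx^j , \]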
where the nine functions gxx, gxy, ..., gzz constitute the metric tensor, which defines the geometry of the space in Riemannian geometry. In the spherical-coordinates example above, there are no cross-terms; the only nonzero metric tensor components are grr = 1, gθθ = r2 and gφφ = r2 sin2 θ.
In his special theory of relativity, Albert Einstein showed that the distance ds between two spatial points is not constant, but depends on the motion of the observer. However, there is a measure of separation between two points in space-time — called "proper time" and denoted with the symbol dτ — that is invariant; in other words, it does not depend on the motion of the observer.
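The defining formula is missing from this extract; in Cartesian coordinates, with one common sign convention, it reads
\[ c^2\, d\tau^2 = c^2\, dt^2 - dx^2 - dy^2 - dz^2 , \]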
which may be written in spherical coordinates as
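Reconstructing the missing display in spherical coordinates:
\[ c^2\, d\tau^2 = c^2\, dt^2 - dr^2 - r^2\, d\theta^2 - r^2 \sin^2\theta\, d\varphi^2 . \]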
This formula is the natural extension of the Pythagorean theorem and similarly holds only when there is no curvature in space-time. In general relativity, however, space and time may have curvature, so this distance formula must be modified to a more general form
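The more general form referred to here is not reproduced in this extract; it is the usual metric expression
\[ c^2\, d\tau^2 = g_{\mu\nu}\, dx^\mu\, dx^\nu , \]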
just as we generalized the formula to measure distance on the surface of the Earth. The exact form of the metric gμν depends on the gravitating mass, momentum and energy, as described by the Einstein field equations. Einstein developed those field equations to match the then known laws of Nature; however, they predicted never-before-seen phenomena (such as the bending of light by gravity) that were confirmed later.
Geodesic equation
According to Einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. In uncurved space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. The equation for the geodesic lines is
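The equation itself is missing from this extract; the standard geodesic equation is
\[ \frac{d^2 x^\mu}{dq^2} + \Gamma^\mu_{\ \alpha\beta}\,\frac{dx^\alpha}{dq}\,\frac{dx^\beta}{dq} = 0 , \]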
where Γ represents the Christoffel symbol and the variable q parametrizes the particle's path through space-time, its so-called world line. The Christoffel symbol depends only on the metric tensor gμν, or rather on how it changes with position. The variable q is a constant multiple of the proper time τ for timelike orbits (which are traveled by massive particles), and is usually taken to be equal to it. For lightlike (or null) orbits (which are traveled by massless particles such as the photon), the proper time is zero and, strictly speaking, cannot be used as the variable q. Nevertheless, lightlike orbits can be derived as the ultrarelativistic limit of timelike orbits, that is, the limit as the particle mass m goes to zero while holding its total energy fixed.
Schwarzschild solution
An exact solution to the Einstein field equations is the Schwarzschild metric, which corresponds to the external gravitational field of a stationary, uncharged, non-rotating, spherically symmetric body of mass M. It is characterized by a length scale rs, known as the Schwarzschild radius, which is defined by the formula
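The formula does not survive in this extract; the standard definition of the Schwarzschild radius is
\[ r_s = \frac{2 G M}{c^2}, \]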
where G is the gravitational constant. The classical Newtonian theory of gravity is recovered in the limit as the ratio rs/r goes to zero. In that limit, the metric returns to that defined by special relativity.
In practice, this ratio is almost always extremely small. For example, the Schwarzschild radius rs of the Earth is roughly 9 mm (about 3/8 inch); at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The Schwarzschild radius of the Sun is much larger, roughly 2953 meters, but at its surface, the ratio rs/r is roughly 4 parts in a million. A white dwarf star is much denser, but even here the ratio at its surface is roughly 250 parts in a million. The ratio only becomes large close to ultra-dense objects such as neutron stars (where the ratio is roughly 50%) and black holes.
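A minimal numerical check of the figures quoted above, using rs = 2GM/c² with rounded reference values for the constants and masses (the values below are illustrative inputs, not taken from this article):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    # rs = 2 G M / c^2
    return 2.0 * G * mass_kg / c**2

print(schwarzschild_radius(5.972e24))   # Earth: ~8.9e-3 m, i.e. about 9 mm
print(schwarzschild_radius(1.989e30))   # Sun:   ~2.95e3 m, i.e. about 2953 m
```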
Orbits about the central mass
The orbit of a test particle of infinitesimal mass about the central mass is given by the equation of motion
where h is the specific relative angular momentum and μ is the reduced mass. This can be converted into an equation for the orbit
where, for brevity, two length-scales, a and b, have been introduced. They are constants of the motion and depend on the initial conditions (position and velocity) of the test particle. Hence, the solution of the orbit equation is
Effective radial potential energy
The equation of motion for the particle derived above
can be rewritten using the definition of the Schwarzschild radius rs as
which is equivalent to a particle moving in a one-dimensional effective potential
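The potential itself is not reproduced in this extract. Written per unit mass of the orbiting particle, with h the specific relative angular momentum introduced above, the standard Schwarzschild effective potential (a reconstruction; the original's normalization may differ) is
\[ V(r) = -\frac{G M}{r} + \frac{h^2}{2 r^2} - \frac{G M h^2}{c^2 r^3} . \]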
The first two terms are well-known classical energies, the first being the attractive Newtonian gravitational potential energy and the second corresponding to the repulsive "centrifugal" potential energy; however, the third term is an attractive energy unique to general relativity. As shown below and elsewhere, this inverse-cubic energy causes elliptical orbits to precess gradually by an angle δφ per revolution
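The expression for the precession angle does not survive in this extract; the standard first-order result is
\[ \delta\varphi \approx \frac{6\pi G M}{c^2 A (1 - e^2)}, \]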
where A is the semi-major axis and e is the eccentricity. Here δφ is not the change in the φ-coordinate in (t, r, θ, φ) coordinates but the change in the argument of periapsis of the classical closed orbit.
The third term is attractive and dominates at small r values, giving a critical inner radius rinner at which a particle is drawn inexorably inwards to r = 0; this inner radius is a function of the particle's angular momentum per unit mass or, equivalently, the length-scale a defined above.
Circular orbits and their stability
The effective potential V can be re-written in terms of the length a = h/c:
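The rewritten form is missing here; substituting rs = 2GM/c² and a = h/c into the potential above gives
\[ V(r) = \frac{c^2}{2}\left(-\frac{r_s}{r} + \frac{a^2}{r^2} - \frac{r_s\, a^2}{r^3}\right). \]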
Circular orbits are possible when the effective force is zero:
i.e., when the two attractive forces—Newtonian gravity (first term) and the attraction unique to general relativity (third term)—are exactly balanced by the repulsive centrifugal force (second term). There are two radii at which this balancing can occur, denoted here as rinner and router:
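The two expressions do not survive in this extract; consistent with the definitions above, they are
\[ r_{\mathrm{outer}} = \frac{a^2}{r_s}\left(1 + \sqrt{1 - \frac{3 r_s^2}{a^2}}\right), \qquad r_{\mathrm{inner}} = \frac{a^2}{r_s}\left(1 - \sqrt{1 - \frac{3 r_s^2}{a^2}}\right), \]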
which are obtained using the quadratic formula. The inner radius rinner is unstable, because the attractive third force strengthens much faster than the other two forces when r becomes small; if the particle slips slightly inwards from rinner (where all three forces are in balance), the third force dominates the other two and draws the particle inexorably inwards to r = 0. At the outer radius, however, the circular orbits are stable; the third term is less important and the system behaves more like the non-relativistic Kepler problem.
When a is much greater than rs (the classical case), these formulae become approximately
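Reconstructed from the expressions above by expanding the square root for a much larger than rs:
\[ r_{\mathrm{outer}} \approx \frac{2 a^2}{r_s}, \qquad r_{\mathrm{inner}} \approx \frac{3}{2}\, r_s . \]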
Substituting the definitions of a and rs into router yields the classical formula for a particle of mass m orbiting a body of mass M.
The following equation
where ωφ is the orbital angular speed of the particle, is obtained in non-relativistic mechanics by setting the centrifugal force equal to the Newtonian gravitational force:
where μ is the reduced mass.
In our notation, the classical orbital angular speed equals
At the other extreme, when a2 approaches 3rs2 from above, the two radii converge to a single value, 3rs.
The quadratic solutions above ensure that router is always greater than 3rs, whereas rinner lies between 3rs/2 and 3rs. Circular orbits smaller than 3rs/2 are not possible. For massless particles, a goes to infinity, implying that there is a circular orbit for photons at rinner = 3rs/2. The sphere of this radius is sometimes known as the photon sphere.
Precession of elliptical orbits
The orbital precession rate may be derived using this radial effective potential V. A small radial deviation from a circular orbit of radius router will oscillate in a stable manner with an angular frequency
which equals
Taking the square root of both sides and expanding using the binomial theorem yields the formula
Multiplying by the period T of one revolution gives the precession of the orbit per revolution
where we have used ωφT = 2π and the definition of the length-scale a. Substituting the definition of the Schwarzschild radius rs gives
This may be simplified using the elliptical orbit's semi-major axis A and eccentricity e related by the formula
to give the precession angle
Since the closed classical orbit is an ellipse in general, the quantity A(1 − e2) is the semi-latus rectum l of the ellipse.
Hence, the final formula of angular apsidal precession for a unit complete revolution is
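The closed-form result does not survive in this extract; in terms of the semi-latus rectum l = A(1 − e²) it is the standard expression δφ = 6πGM/(c²l) = 3πrs/l per revolution. A minimal numerical check of the well-known Mercury figure quoted earlier (the constants and orbital elements below are rounded reference values, not taken from this article):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
A = 5.791e10         # Mercury semi-major axis, m
e = 0.2056           # Mercury orbital eccentricity
period_days = 87.97  # Mercury orbital period

dphi = 6 * math.pi * G * M_sun / (c**2 * A * (1 - e**2))   # rad per orbit
orbits_per_century = 100 * 365.25 / period_days
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
print(round(arcsec_per_century, 1))   # about 43 arcseconds per century
```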
Beyond the Schwarzschild solution
Post-Newtonian expansion
In the Schwarzschild solution, it is assumed that the larger mass M is stationary and it alone determines the gravitational field (i.e., the geometry of space-time) and, hence, the lesser mass m follows a geodesic path through that fixed space-time. This is a reasonable approximation for photons and the orbit of Mercury, which is roughly 6 million times lighter than the Sun. However, it is inadequate for binary stars, in which the masses may be of similar magnitude.
The metric for the case of two comparable masses cannot be solved in closed form and therefore one has to resort to approximation techniques such as the post-Newtonian approximation or numerical approximations. In passing, we mention one particular exception in lower dimensions (see R = T model for details). In (1+1) dimensions, i.e. a space made of one spatial dimension and one time dimension, the metric for two bodies of equal masses can be solved analytically in terms of the Lambert W function. However, the gravitational energy between the two bodies is exchanged via dilatons rather than gravitons which require three-space in which to propagate.
The post-Newtonian expansion is a calculational method that provides a series of ever more accurate solutions to a given problem. The method is iterative; an initial solution for particle motions is used to calculate the gravitational fields; from these derived fields, new particle motions can be calculated, from which even more accurate estimates of the fields can be computed, and so on. This approach is called "post-Newtonian" because the Newtonian solution for the particle orbits is often used as the initial solution.
The theory can be divided into two parts: first one finds the two-body effective potential that captures the GR corrections to the Newtonian potential. Secondly, one should solve the resulting equations of motion.
Modern computational approaches
Einstein's equations can also be solved on a computer using sophisticated numerical methods. Given sufficient computer power, such solutions can be more accurate than post-Newtonian solutions. However, such calculations are demanding because the equations must generally be solved in a four-dimensional space. Nevertheless, beginning in the late 1990s, it became possible to solve difficult problems such as the merger of two black holes, which is a very difficult version of the Kepler problem in general relativity.
Gravitational radiation
In the absence of incoming gravitational radiation, according to general relativity, two bodies orbiting one another will emit gravitational radiation, causing the orbits to gradually lose energy.
The formulae describing the loss of energy and angular momentum due to gravitational radiation from the two bodies of the Kepler problem have been calculated. The rate of losing energy (averaged over a complete orbit) is given by
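The averaged expression itself is not reproduced in this extract. Assuming the standard Peters–Mathews result is intended, with m1 and m2 the two masses, the orbit-averaged rate of energy loss reads
\[ -\left\langle \frac{dE}{dt} \right\rangle = \frac{32}{5}\,\frac{G^4\, m_1^2\, m_2^2\,(m_1 + m_2)}{c^5\, a^5\,(1 - e^2)^{7/2}} \left(1 + \frac{73}{24} e^2 + \frac{37}{96} e^4\right), \]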
where e is the orbital eccentricity and a is the semimajor axis of the elliptical orbit. The angular brackets on the left-hand side of the equation represent the averaging over a single orbit. Similarly, the average rate of losing angular momentum equals
The rate of period decrease is given by
where Pb is the orbital period.
The losses in energy and angular momentum increase significantly as the eccentricity approaches one, i.e., as the ellipse of the orbit becomes ever more elongated. The radiation losses also increase significantly with a decreasing size a of the orbit.
See also
Binet equation
Center of mass (relativistic)
Gravitational two-body problem
Kepler problem
Newton's theorem of revolving orbits
Schwarzschild geodesics
Notes
References
Bibliography
(See Gravitation (book).)
External links
Animation showing relativistic precession of stars around the Milky Way supermassive black hole
Excerpt from Reflections on Relativity by Kevin Brown.
Exact solutions in general relativity | Two-body problem in general relativity | [
"Mathematics"
] | 5,207 | [
"Exact solutions in general relativity",
"Mathematical objects",
"Equations"
] |
11,695,358 | https://en.wikipedia.org/wiki/On%20the%20Equilibrium%20of%20Heterogeneous%20Substances | In the history of thermodynamics, "On the Equilibrium of Heterogeneous Substances" is a 300-page paper written by American chemical physicist Willard Gibbs. It is one of the founding papers in thermodynamics, along with German physicist Hermann von Helmholtz's 1882 paper "Thermodynamik chemischer Vorgänge." Together they form the foundation of chemical thermodynamics as well as a large part of physical chemistry.
Gibbs's paper marked the beginning of chemical thermodynamics by integrating chemical, physical, electrical, and electromagnetic phenomena into a coherent system. It introduced concepts such as chemical potential, phase rule, and more, which form the basis for modern physical chemistry. American writer Bill Bryson describes Gibbs's paper as "the Principia of thermodynamics".
"On the Equilibrium of Heterogeneous Substances", was originally published in a relatively obscure American journal, the Transactions of the Connecticut Academy of Arts and Sciences, in several parts, during the years 1875 to 1878 (although most cite "1876" as the key year). It remained largely unknown until translated into German by Wilhelm Ostwald and into French by Henry Louis Le Châtelier.
Overview
Gibbs first contributed to mathematical physics with two papers published in 1873 in the Transactions of the Connecticut Academy of Arts and Sciences on "Graphical Methods in the Thermodynamics of Fluids," and "Method of Geometrical Representation of the Thermodynamic Properties of Substances by means of Surfaces." His subsequent and most important publication was "On the Equilibrium of Heterogeneous Substances" (in two parts, 1876 and 1878). In this monumental, densely woven, 300-page treatise, the first law of thermodynamics, the second law of thermodynamics, and the fundamental thermodynamic relation are applied to the prediction and quantification of thermodynamic reaction tendencies in any thermodynamic system, in a visual, three-dimensional graphical language of Lagrangian mechanics and phase transitions, among others. As stated by Le Chatelier, it "founded a new department of chemical science that is becoming comparable in importance to that created by [Antoine] Lavoisier." This work was translated into German by Ostwald (who styled its author the "founder of chemical energetics") in 1891 and into French by Le Châtelier in 1899.
Gibbs's Equilibrium paper is considered one of the greatest achievements in physical science in the 19th century and one of the foundations of the science of physical chemistry. In these papers Gibbs applied thermodynamics to the interpretation of physicochemical phenomena and showed the explanation and interrelationship of what had been known only as isolated, inexplicable facts.
Gibbs' papers on heterogeneous equilibria included:
Some chemical potential concepts
Some free energy concepts
A Gibbsian ensemble ideal (basis of the statistical mechanics field)
The phase rule
References
External links
At the Internet Archive, Part 1 and Part 2 in various file formats.
Thermodynamics literature
1870s in science
1876 in science
Works originally published in American magazines
1876 non-fiction books
Works originally published in science and technology magazines
Physics papers | On the Equilibrium of Heterogeneous Substances | [
"Physics",
"Chemistry"
] | 674 | [
"Thermodynamics literature",
"Thermodynamics"
] |
11,695,492 | https://en.wikipedia.org/wiki/Acetone%E2%80%93butanol%E2%80%93ethanol%20fermentation | Acetone–butanol–ethanol (ABE) fermentation, also known as the Weizmann process, is a process that uses bacterial fermentation to produce acetone, n-butanol, and ethanol from carbohydrates such as starch and glucose. It was developed by chemist Chaim Weizmann and was the primary process used to produce acetone, which was needed to make cordite, a substance essential for the British war industry during World War I.
Process
The process may be likened to how yeast ferments sugars to produce ethanol for wine, beer, or fuel, but the organisms that carry out the ABE fermentation are strictly anaerobic (obligate anaerobes). The ABE fermentation produces solvents in a ratio of 3 parts acetone to 6 parts butanol to 1 part ethanol. It usually uses a strain of bacteria from the Class Clostridia (Family Clostridiaceae). Clostridium acetobutylicum is the best-studied and most widely used. Although less effective, Clostridium beijerinckii and Clostridium saccharobutylicum bacterial strains have shown good results as well.
The ABE fermentation pathway generally proceeds in two phases. In the initial acidogenesis phase, the cells grow exponentially and accumulate acetate and butyrate. The low pH along with other factors then trigger a metabolic shift to the solventogenesis phase, in which acetate and butyrate are used to produce the solvents.
For gas stripping, the most common gases used are the off-gases from the fermentation itself, a mixture of carbon dioxide and hydrogen gas.
History
The production of butanol by biological means was first performed by Louis Pasteur in 1861. In 1905, Austrian biochemist Franz Schardinger found that acetone could similarly be produced. In 1910 Auguste Fernbach (1860–1939) developed a bacterial fermentation process using potato starch as a feedstock in the production of butanol.
Industrial exploitation of ABE fermentation started in 1916, during World War I, with Chaim Weizmann's isolation of Clostridium acetobutylicum, as described in U.S. patent 1315585.
The Weizmann process was operated by Commercial Solvents Corporation from about 1920 to 1964 with plants in the US (Terre Haute, IN, and Peoria, IL), and Liverpool, England. The Peoria plant was the largest of the three. It used molasses as feedstock and had 96 fermenters with a volume of 96,000 gallons each.
After World War II, ABE fermentation became generally unprofitable compared to the production of the same three solvents (acetone, butanol, ethanol) from petroleum. During the 1950s and 1960s, ABE fermentation was replaced by petroleum chemical plants. Due to different raw material costs, ABE fermentation remained viable in South Africa until the early 1980s, with the last plant closing in 1983. Green Biologics Ltd made the last attempt to revive the process at scale, but its plant in Minnesota closed in June 2019.
A new ABE biorefinery has been developed in Scotland by Celtic Renewables Ltd and will begin production in early 2022. The key difference in the process is the use of low value spent materials or residues from other processes removing the variable costs of raw feedstock crops and materials.
Improvement attempts
The most critical aspect in biomass fermentation processes is related to its productivity. The ABE fermentation via Clostridium beijerinckii or Clostridium acetobutylicum for instance is characterized by product inhibition. This means that there is a product concentration threshold that cannot be overcome, resulting in a product stream highly diluted in water.
For this reason, achieving productivity and profitability comparable to petrochemical processes requires cost- and energy-effective product purification sections that provide significant product recovery at the desired purity.
The main solutions adopted during the last decades have been as follows:
The use of less expensive raw materials, in particular lignocellulosic waste or algae;
Modification of the microorganisms, or the search for new strains less sensitive to butanol poisoning, to increase productivity and selectivity towards butanol;
Optimization of the fermentation reactor to increase productivity;
Reduction of the energy costs of downstream separation and purification, in particular by carrying out the separation in situ in the reactor;
The use of side products such as hydrogen and carbon dioxide, solid wastes, and discharged microorganisms, together with less expensive process wastewater treatment.
In the second half of the 20th century, these technologies allowed an increase in the final product concentration in the broth from 15 to 30 g/L, an increase in the final productivity from 0.46 to 4.6 g/(L*h) and an increase in the yield from 15 to 42%.
From a compound purification perspective, the main difficulties in ABE/water product recovery are due to the non-ideal interactions of the water–alcohol mixture, which lead to homogeneous and heterogeneous azeotropes, as shown by the ternary equilibrium diagram.
This makes separation by standard distillation particularly impractical but, on the other hand, allows the liquid–liquid demixing region to be exploited both by conventional and by alternative separation processes.
Therefore, in order to enhance the ABE fermentation yield, mainly in situ product recovery systems have been developed. These include gas stripping, pervaporation, liquid–liquid extraction, distillation via Dividing Wall Column, membrane distillation, membrane separation, adsorption, and reverse osmosis. Green Biologics Ltd. implemented many of these technologies at an industrial scale.
Moreover, unlike crude oil feedstocks, the nature of biomass fluctuates with the seasons and with geographical location. For this reason, biorefinery operations need to be not only effective but also flexible, able to switch between two operating conditions rather quickly.[citation needed]
Current perspectives
ABE fermentation is attracting renewed interest with a focus on butanol as a renewable biofuel.
Sustainability has been by far the topic of major concern in recent years. The energy challenge is the key point of the environmentally friendly policies adopted by the most developed and industrialized countries worldwide. For this purpose Horizon 2020, the biggest EU Research and Innovation programme, was funded by the European Union over the 2014–2020 period.
The International Energy Agency defines renewables as the centre of the transition to a less carbon-intensive and more sustainable energy system. Biofuels are believed to represent around 30% of energy consumption in transport by 2060. Their role is particularly important in sectors which are difficult to decarbonise, such as aviation, shipping and other long-haul transport. That is why several bioprocesses have seen a renewed interest in recent years, both from a research and an industrial perspective.
For this reason, the ABE fermentation process has been reconsidered from a different perspective. Although it was originally conceived to produce acetone, it is now considered a suitable production pathway for biobutanol, which has become the product of major interest.
Biogenic butanol is a possible substitute for bioethanol, and arguably a better one; it is already employed both as a fuel additive and as a pure fuel in place of standard gasoline because, unlike ethanol, it can be used directly and efficiently in gasoline engines. Moreover, it has the advantage that it can be shipped and distributed through existing pipelines and filling stations.
Finally biobutanol is widely used as a direct solvent for paints, coatings, varnishes, resins, dyes, camphor, vegetable oils, fats, waxes, shellac, rubbers and alkaloids due to its higher energy density, lower volatility, and lower hygroscopicity. It can be produced from different kinds of cellulosic biomass and can be used for further processing of advanced biofuels such as butyl levulinate as well.
The application of n-butanol in the production of butyl acrylate has a wide scope for its expansion, which in turn would help in increasing the consumption of n-butanol globally. Butyl acrylate was the biggest n-butanol application in 2014 and is projected to be worth US$3.9 billion by 2020.
References
Fermentation | Acetone–butanol–ethanol fermentation | [
"Chemistry",
"Biology"
] | 1,783 | [
"Biochemistry",
"Cellular respiration",
"Fermentation"
] |
11,698,208 | https://en.wikipedia.org/wiki/Diphthamide | Diphthamide is a post-translationally modified histidine amino acid found in archaeal and eukaryotic elongation factor 2 (eEF-2).
Diphthamide is named after the toxin produced by the bacterium Corynebacterium diphtheriae, which targets diphthamide. Besides this toxin, it is also targeted by exotoxin A from Pseudomonas aeruginosa. It is the only target of these toxins.
Structure and biosynthesis
Diphthamide is proposed to be a 2-[3-carboxyamido-3-(trimethylammonio)propyl]histidine. Though this structure has been confirmed by X-ray crystallography, its stereochemistry is uncertain.
Diphthamide is biosynthesized from histidine and S-adenosyl methionine (SAM). The side chain bound to imidazole group and all methyl groups come from SAM. The whole synthesis takes place in three steps:
transfer of 3-amino-3-carboxypropyl group from SAM
transfer of three methyl groups from SAM – synthesis of diphthine
amidation – synthesis of diphthamide
In eukaryotes, this biosynthetic pathway contains a total of 7 genes (Dph1-7).
Biological function
Diphthamide ensures translation fidelity.
The presence or absence of diphthamide is known to affect NF-κB or death receptor pathways.
References
Amino acids
Imidazoles
Quaternary ammonium compounds
Post-translational modification
Zwitterions | Diphthamide | [
"Physics",
"Chemistry"
] | 337 | [
"Biomolecules by chemical classification",
"Matter",
"Gene expression",
"Biochemical reactions",
"Amino acids",
"Post-translational modification",
"Zwitterions",
"Ions"
] |
11,699,110 | https://en.wikipedia.org/wiki/Primakoff%20effect | In particle physics, the Primakoff effect, named after Henry Primakoff, is the resonant production of neutral pseudoscalar mesons by high-energy photons interacting with an atomic nucleus. It can be viewed as the reverse process of the decay of the meson into two photons and has been used for the measurement of the decay width of neutral mesons.
It could also take place in stars and be a production mechanism of certain hypothetical particles, such as the axion. In this context, the Primakoff effect refers to the conversion of axions into photons in the presence of a very strong electromagnetic field.
The effect is predicted to lead to optical properties of the vacuum state in the presence of a strong magnetic field.
See also
Two-photon physics
References
Particle physics | Primakoff effect | [
"Physics"
] | 156 | [
"Particle physics stubs",
"Particle physics"
] |
11,699,678 | https://en.wikipedia.org/wiki/Vehicle%20frame | A vehicle frame, also historically known as its chassis, is the main supporting structure of a motor vehicle to which all other components are attached, comparable to the skeleton of an organism.
Until the 1930s, virtually every car had a structural frame separate from its body. This construction design is known as body-on-frame. By the 1960s, unibody construction in passenger cars had become common, and the trend to unibody for passenger cars continued over the ensuing decades.
Nearly all trucks, buses, and most pickups continue to use a separate frame as their chassis.
Functions
The main functions of a frame in a motor vehicle are:
To support the vehicle's mechanical components and body
To deal with static and dynamic loads without undue deflection or distortion
These include:
Weight of the body, passengers, and cargo loads.
Vertical and torsional twisting transmitted by going over uneven surfaces
Transverse lateral forces caused by road conditions, side wind, and steering of the vehicle
Torque from the engine and transmission
Longitudinal tensile forces from starting and acceleration, as well as compression from braking
Sudden impacts from collisions
Frame rails
Typically, the materials used to construct vehicle chassis and frames include carbon steel for strength, or aluminum alloys to achieve a more lightweight construction. In the case of a separate chassis, the frame is made up of structural elements called the rails or beams. These are ordinarily made of steel channel sections by folding, rolling, or pressing steel plate.
There are three main designs for these. If the material is folded twice, an open-ended cross-section, either C-shaped or hat-shaped (U-shaped), results.
"Boxed" frames contain closed chassis rails, either by welding them up or by using premanufactured metal tubing.
C-Shaped
By far the most common, the C-channel rail has been used on nearly every type of vehicle at one time or another. It is made by taking a flat piece of steel (usually ranging in thickness from 1/8" to 3/16", but up to 1/2" or more in some heavy-duty trucks) and rolling both sides over to form a C-shaped beam running the length of the vehicle. A C-channel rail is typically more flexible than a (fully) boxed rail of the same gauge.
Hat
Hat frames resemble a "U" and may be either right-side-up or inverted, with the open area facing down. They are not commonly used due to weakness and a propensity to rust. However, they can be found on 1936–1954 Chevrolet cars and some Studebakers.
Abandoned for a while, the hat frame regained popularity when companies started welding it to the bottom of unibody cars, effectively creating a boxed frame.
Boxed
Originally, boxed frames were made by welding two matching C-rails together to form a rectangular tube. Modern techniques, however, use a process similar to making C-rails in that a piece of steel is bent into four sides and then welded where both ends meet.
In the 1960s, the boxed frames of conventional American cars were spot-welded in multiple places down the seam; when turned into NASCAR "stock car" racers, the box was continuously welded from end to end for extra strength.
Design features
While appearing at first glance to be a simple form made of metal, frames encounter significant stress and are built accordingly. The first issue addressed is "beam height", or the height of the vertical side of a frame. The taller the frame, the better it can resist vertical flex when force is applied to the top of the frame. This is why semi-trucks have taller frame rails than other vehicles, rather than simply thicker ones.
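As a rough, idealized illustration of why rail height matters (treating the rail as a simple solid rectangular beam of width b and height h, which a real C-channel is not), the area moment of inertia governing bending stiffness is
\[ I = \frac{b\, h^3}{12}, \]
so doubling the rail height increases bending stiffness roughly eightfold, while doubling the thickness only doubles it.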
As looks, ride quality, and handling became more important to consumers, new shapes were incorporated into frames. The most visible of these are arches and kick-ups. Instead of running straight over both axles, arched frames sit lower—roughly level with their axles—and curve up over the axles and then back down on the other side for bumper placement. Kick-ups do the same thing without curving down on the other side and are more common on the front ends.
Another feature are the tapered rails that narrow vertically or horizontally in front of a vehicle's cabin. This is done mainly on trucks to save weight and slightly increase room for the engine since the front of the vehicle does not bear as much load as the back. Design developments include frames that use multiple shapes in the same frame rail. For example, some pickup trucks have a boxed frame in front of the cab, shorter, narrower rails underneath the cab, and regular C-rails under the bed.
On perimeter frames, the areas where the rails connect from front to center and center to rear are weak compared to regular frames, so that section is boxed in, creating what are called "torque boxes".
Types
Full under-body frames
Ladder frame
Named for its resemblance to a ladder, the ladder frame is one of the oldest, simplest, and most frequently used under-body, separate chassis/frame designs. It consists of two symmetrical beams, rails, or channels, running the length of the vehicle, connected by several transverse cross-members. Initially seen on almost all vehicles, the ladder frame was gradually phased out on cars in favor of perimeter frames and unitized body construction. It is now seen mainly on large trucks. This design offers good beam resistance because of its continuous rails from front to rear, but poor resistance to torsion or warping if simple, perpendicular cross-members are used. The vehicle's overall height will be greater due to the floor pan sitting above the frame instead of inside it.
Backbone tube
A backbone chassis is a type of automotive construction with chassis that is similar to the body-on-frame design. Instead of a relatively flat, ladder-like structure with two longitudinal, parallel frame rails, it consists of a central, strong tubular backbone (usually rectangular in cross-section) that carries the power-train and connects the front and rear suspension attachment structures. Although the backbone is frequently drawn upward into, and mostly above the floor of the vehicle, the body is still placed on or over (sometimes straddling) this structure from above.
X-frame
This is the design used for the full-size American models of General Motors in the late 1950s and early 1960s in which the rails from alongside the engine seemed to cross in the passenger compartment, each continuing to the opposite end of the crossmember at the extreme rear of the vehicle. It was specifically chosen to decrease the overall height of the vehicles regardless of the increase in the size of the transmission and propeller shaft humps since each row had to cover frame rails as well. Several models had the differential located not by the customary bar between axle and frame, but by a ball joint atop the differential connected to a socket in a wishbone hinged onto a crossmember of the frame.
The X-frame was claimed to improve on previous designs, but it lacked side rails and thus did not provide adequate side impact and collision protection. Perimeter frames replaced this design.
Perimeter frame
Similar to a ladder frame, but the middle sections of the frame rails sit outboard of the front and rear rails, routed around the passenger footwells, inside the rocker and sill panels. This allowed the floor pan to be lowered, especially the passenger footwells, lowering the passengers' seating height and thereby reducing both the roof-line and overall vehicle height, as well as the center of gravity, thus improving handling and road-holding in passenger cars.
This became the prevalent design for body-on-frame cars in the United States, but not in the rest of the world, until the unibody gained popularity. For example, Hudson introduced this construction on their 3rd generation Commodore models in 1948. This frame type allowed for annual model changes, and lower cars, introduced in the 1950s to increase sales – without costly structural changes.
The Ford Panther platform, discontinued in 2011, was one of the last perimeter frame passenger car platforms in the United States. The fourth to seventh generation Chevrolet Corvette used a perimeter frame integrated with an internal skeleton that serves as a clamshell.
In addition to a lowered roof, the perimeter frame allows lower seating positions when that is desirable, and offers better safety in the event of a side impact. However, the design lacks stiffness because the transition areas from front to center and center to rear reduce beam and torsional resistance and is used in combination with torque boxes and soft suspension settings.
Platform frame
This is a modification of the perimeter frame, or of the backbone frame, in which the passenger compartment floor, and sometimes the luggage compartment floor, have been integrated into the frame as loadbearing parts for strength and rigidity. The sheet metal used to assemble the components needs to be stamped with ridges and hollows to give it strength.
Platform chassis were used on several successful European cars, most notably the Volkswagen Beetle, where it was called "body-on-pan" construction. Another German example are the Mercedes-Benz "Ponton" cars of the 1950s and 1960s, where it was called a "frame floor" in English-language advertisements.
The French Renault 4, of which over eight million were made, also used a platform frame. The frame of the Citroën 2CV used a minimal interpretation of a platform chassis under its body.
Space frame
In a (tubular) spaceframe chassis, the suspension, engine, and body panels are attached to a three-dimensional skeletal frame of tubes, and the body panels have limited or no structural function. To maximize rigidity and minimize weight, the design frequently makes maximum use of triangles, and all the forces in each strut are either tensile or compressive, never bending, so they can be kept as thin as possible.
The first true spaceframe chassis were produced in the 1930s by Buckminster Fuller and William Bushnell Stout (the Dymaxion and the Stout Scarab) who understood the theory of the true spaceframe from either architecture or aircraft design.
The 1951 Jaguar C-Type racing sports car utilized a lightweight, multi-tubular, triangulated frame over which an aerodynamic aluminum body was crafted.
In 1994, the Audi A8 was the first mass-market car with an aluminium chassis, made feasible by integrating an aluminium space-frame into the bodywork. Audi A8 models have since used this construction method co-developed with Alcoa, and marketed as the Audi Space Frame.
The Italian term Superleggera (meaning 'super-light') was trademarked by Carrozzeria Touring for lightweight sports-car body construction that only resembles a space-frame chassis. Using a three-dimensional frame that consists of a cage of narrow tubes that, besides being under the body, run up the fenders and over the radiator, cowl, and roof, and under the rear window, it resembles a geodesic structure. A skin is attached to the outside of the frame, often made of aluminum. This body construction is, however, not stress-bearing and still requires the addition of a chassis.
Unibody
The terms "unibody" and "unit-body" are short for "unitized body", "unitary construction", or alternatively (fully) integrated body and frame/chassis. It is defined as:
Vehicle structure has shifted from the traditional body-on-frame architecture to the lighter unitized/integrated body structure that is now used for most cars.
Integral frame and body construction requires more than simply welding an unstressed body to a conventional frame. In a fully integrated body structure, the entire car is a load-carrying unit that handles all the loads experienced by the vehicle – forces from driving and cargo loads. Integral-type bodies for wheeled vehicles are typically manufactured by welding preformed metal panels and other components together, by forming or casting whole sections as one piece, or by combining these techniques. Although this is sometimes also referred to as a monocoque structure, because the car's outer skin and panels are made load-bearing, there are still ribs, bulkheads, and box sections to reinforce the body, making the description semi-monocoque more appropriate.
The first attempt to develop such a design technique was on the 1922 Lancia Lambda to provide structural stiffness and a lower body height for its torpedo car body. The Lambda had an open layout with unstressed roof, which made it less of a monocoque shell and more like a bowl. One thousand were produced.
A key role in developing the unitary body was played by the American firm the Budd Company, now ThyssenKrupp Budd. Budd supplied pressed-steel bodywork, fitted to separate frames, to automakers Dodge, Ford, Buick, and the French company, Citroën.
In 1930, Joseph Ledwinka, an engineer with Budd, designed an automobile prototype with a full unitary construction.
Citroën purchased this fully unitary body design for the Citroën Traction Avant. This high-volume, mass-production car was introduced in 1934 and sold 760,000 units over the next 23 years of production. This application was the first iteration of the modern structural integration of body and chassis, using spot welded deeply stamped steel sheets into a structural cage, including sills, pillars, and roof beams. In addition to a unitary body with no separate frame, the Traction Avant also featured other innovations such as front-wheel drive. The result was a low-slung vehicle with an open, flat-floored interior.
For the Chrysler Airflow (1934–1937), Budd supplied a variation – three main sections from the Airflow's body were welded into what Chrysler called a bridge-truss construction. Unfortunately, this method was not ideal because the panel fits were poor. To convince a skeptical public of the strength of unibody, both Citroën and Chrysler created advertising films showing cars surviving after being pushed off a cliff.
Opel was the second European and the first German car manufacturer to produce a car with a unibody structure – production of the compact Olympia started in 1935. A larger Kapitän went into production in 1938, although its front longitudinal beams were stamped separately and then attached to the main body. It was so successful that the post-war, mass-produced Soviet GAZ-M20 Pobeda of 1946 copied the unibody structure of the Opel Kapitän. The later Soviet limousine, the GAZ-12 ZIM of 1950, introduced the unibody design to automobiles with a wheelbase as long as 3.2 m (126 in).
The streamlined 1936 Lincoln-Zephyr with conventional front-engine, rear-wheel-drive layout utilized a unibody structure. By 1941, unit construction was no longer a new idea for cars, "but it was unheard of in the [American] low-price field [and] Nash wanted a bigger share of that market." The single unit-body construction of the Nash 600 provided weight savings and Nash's Chairman and CEO, George W. Mason was convinced "that unibody was the wave of the future."
Since then, more cars have been redesigned around the unibody structure, which is now "considered standard in the industry". By 1960, the unitized body design was used by Detroit's Big Three on their compact cars (Ford Falcon, Plymouth Valiant, and Chevrolet Corvair). After Nash merged with Hudson Motors to form American Motors Corporation, its Rambler-badged automobiles continued to be built exclusively on variations of the unibody.
The 1934 Chrysler Airflow had used a weaker-than-usual frame with a body framework welded to the chassis to provide stiffness; in 1960, Chrysler moved from body-on-frame construction to a unit-body design for most of its cars.
Most of the American-manufactured unibody automobiles used torque boxes in their vehicle design to reduce vibrations and chassis flex, except for the Chevy II, which had a bolt-on front apron (erroneously referred to as a subframe).
The unibody is now the preferred construction for mass-market automobiles. This design provides weight savings, improved space utilization, and ease of manufacture. Acceptance grew dramatically in the wake of the two energy crises of the 1970s and again during the energy crisis of the 2000s, when compact SUVs built on truck platforms (primarily in the US market) became subject to CAFE standards after 2005; by the late 2000s, truck-based compact SUVs had been phased out and replaced with crossovers. An additional advantage of a strong-bodied car lies in the improved crash protection for its passengers.
Uniframe
During the late 1970s, American Motors (with its partner Renault) incorporated unibody construction into the design of the Jeep Cherokee (XJ) platform, applying the manufacturing principles used in its passenger cars such as the Hornet and the all-wheel-drive Eagle (unisides, a floorpan with integrated frame rails and crumple zones, and a roof panel) to a new type of frame called the "Uniframe [...] a robust stamped steel frame welded to a strong unit-body structure, giving the strength of a conventional heavy frame with the weight advantages of Unibody construction." This design was also used with the XJC concept developed by American Motors before its absorption by Chrysler, which later became the Jeep Grand Cherokee (ZJ). The design is still used in modern-day sport utility vehicles such as the Jeep Grand Cherokee and Land Rover Defender. This design is also used in large vans such as the Ford Transit, VW Crafter and Mercedes Sprinter.
Partial frames
Subframe
A subframe is a distinct structural frame component, to reinforce or complement a particular section of a vehicle's structure. Typically attached to a unibody or a monocoque, the rigid subframe can handle great forces from the engine and drive train. It can transfer them evenly to a wide area of relatively thin sheet metal of a unitized body shell. Subframes are often found at the front or rear end of cars and are used to attach the suspension to the vehicle. A subframe may also contain the engine and transmission. It normally has pressed or box steel construction but may be tubular and/or other material.
Examples of passenger car use include the 1967–1981 GM F platform, the numerous years and models built on the GM X platform (1962), GM's M/L platform vans (Chevrolet Astro/GMC Safari, which included an all-wheel drive variant), and the unibody AMC Pacer that incorporated a front subframe to isolate the passenger compartment from the engine, suspension, and steering loads.
See also
Bicycle frame
Body-on-frame
Chassis
Coachbuilder
Locomotive frame
Monocoque
Motorcycle frame
C-channel
References
External links
What Is the A-Frame on a Car?
What Is a Car Frame?
Automotive chassis types
Automotive technologies
Structural engineering
Structural system
Vehicle parts | Vehicle frame | [
"Technology",
"Engineering"
] | 3,856 | [
"Structural engineering",
"Building engineering",
"Structural system",
"Construction",
"Civil engineering",
"Vehicle parts",
"Components"
] |
563,161 | https://en.wikipedia.org/wiki/Membrane%20potential | Membrane potential (also transmembrane potential or membrane voltage) is the difference in electric potential between the interior and the exterior of a biological cell. It equals the interior potential minus the exterior potential. This is the energy (i.e. work) per charge which is required to move a (very small) positive charge at constant velocity across the cell membrane from the exterior to the interior. (If the charge is allowed to change velocity, the change of kinetic energy and production of radiation must be taken into account.)
Typical values of membrane potential, normally given in units of millivolts and denoted as mV, range from –80 mV to –40 mV. For such typical negative membrane potentials, positive work is required to move a positive charge from the interior to the exterior. However, thermal kinetic energy allows ions to overcome the potential difference. For a selectively permeable membrane, this permits a net flow against the gradient. This is a kind of osmosis.
Description
All animal cells are surrounded by a membrane composed of a lipid bilayer with proteins embedded in it. The membrane serves as both an insulator and a diffusion barrier to the movement of ions. Transmembrane proteins, also known as ion transporter or ion pump proteins, actively push ions across the membrane and establish concentration gradients across the membrane, and ion channels allow ions to move across the membrane down those concentration gradients. Ion pumps and ion channels are electrically equivalent to a set of batteries and resistors inserted in the membrane, and therefore create a voltage between the two sides of the membrane.
All plasma membranes have an electrical potential across them, with the inside usually negative with respect to the outside. The membrane potential has two basic functions. First, it allows a cell to function as a battery, providing power to operate a variety of "molecular devices" embedded in the membrane. Second, in electrically excitable cells such as neurons and muscle cells, it is used for transmitting signals between different parts of a cell.
Signals in neurons and muscle cells
Signals are generated in excitable cells by opening or closing of ion channels at one point in the membrane, producing a local change in the membrane potential. This change in the electric field can be quickly sensed by either adjacent or more distant ion channels in the membrane. Those ion channels can then open or close as a result of the potential change, reproducing the signal.
In non-excitable cells, and in excitable cells in their baseline states, the membrane potential is held at a relatively stable value, called the resting potential. For neurons, resting potential is defined as ranging from –80 to –70 millivolts; that is, the interior of a cell has a negative baseline voltage of a bit less than one-tenth of a volt. The opening and closing of ion channels can induce a departure from the resting potential. This is called a depolarization if the interior voltage becomes less negative (say from –70 mV to –60 mV), or a hyperpolarization if the interior voltage becomes more negative (say from –70 mV to –80 mV). In excitable cells, a sufficiently large depolarization can evoke an action potential, in which the membrane potential changes rapidly and significantly for a short time (on the order of 1 to 100 milliseconds), often reversing its polarity. Action potentials are generated by the activation of certain voltage-gated ion channels.
In neurons, the factors that influence the membrane potential are diverse. They include numerous types of ion channels, some of which are chemically gated and some of which are voltage-gated. Because voltage-gated ion channels are controlled by the membrane potential, while the membrane potential itself is influenced by these same ion channels, feedback loops that allow for complex temporal dynamics arise, including oscillations and regenerative events such as action potentials.
Ion concentration gradients
Differences in the concentrations of ions on opposite sides of a cellular membrane lead to a voltage called the membrane potential.
Many ions have a concentration gradient across the membrane, including potassium (K+), which is at a high concentration inside and a low concentration outside the membrane. Sodium (Na+) and chloride (Cl−) ions are at high concentrations in the extracellular region, and low concentrations in the intracellular regions. These concentration gradients provide the potential energy to drive the formation of the membrane potential. This voltage is established when the membrane has permeability to one or more ions.
In the simplest case, illustrated in the top diagram ("Ion concentration gradients"), if the membrane is selectively permeable to potassium, these positively charged ions can diffuse down the concentration gradient to the outside of the cell, leaving behind uncompensated negative charges. This separation of charges is what causes the membrane potential.
The system as a whole is electro-neutral. The uncompensated positive charges outside the cell, and the uncompensated negative charges inside the cell, physically line up on the membrane surface and attract each other across the lipid bilayer. Thus, the membrane potential is physically located only in the immediate vicinity of the membrane. It is the separation of these charges across the membrane that is the basis of the membrane voltage.
The top diagram is only an approximation of the ionic contributions to the membrane potential. Other ions including sodium, chloride, calcium, and others play a more minor role, even though they have strong concentration gradients, because they have more limited permeability than potassium.
Physical basis
The membrane potential in a cell derives ultimately from two factors: electrical force and diffusion. Electrical force arises from the mutual attraction between particles with opposite electrical charges (positive and negative) and the mutual repulsion between particles with the same type of charge (both positive or both negative). Diffusion arises from the statistical tendency of particles to redistribute from regions where they are highly concentrated to regions where the concentration is low.
Voltage
Voltage, which is synonymous with difference in electrical potential, is the ability to drive an electric current across a resistance. Indeed, the simplest definition of a voltage is given by Ohm's law: V=IR, where V is voltage, I is current and R is resistance. If a voltage source such as a battery is placed in an electrical circuit, the higher the voltage of the source the greater the amount of current that it will drive across the available resistance. The functional significance of voltage lies only in potential differences between two points in a circuit. The idea of a voltage at a single point is meaningless. It is conventional in electronics to assign a voltage of zero to some arbitrarily chosen element of the circuit, and then assign voltages for other elements measured relative to that zero point. There is no significance in which element is chosen as the zero point—the function of a circuit depends only on the differences not on voltages per se. However, in most cases and by convention, the zero level is most often assigned to the portion of a circuit that is in contact with ground.
The same principle applies to voltage in cell biology. In electrically active tissue, the potential difference between any two points can be measured by inserting an electrode at each point, for example one inside and one outside the cell, and connecting both electrodes to the leads of what is in essence a specialized voltmeter. By convention, the zero potential value is assigned to the outside of the cell and the sign of the potential difference between the outside and the inside is determined by the potential of the inside relative to the outside zero.
In mathematical terms, the definition of voltage begins with the concept of an electric field E, a vector field assigning a magnitude and direction to each point in space. In many situations, the electric field is a conservative field, which means that it can be expressed as the gradient of a scalar function V, that is, E = –∇V. This scalar field V is referred to as the voltage distribution. The definition allows for an arbitrary constant of integration—this is why absolute values of voltage are not meaningful. In general, electric fields can be treated as conservative only if magnetic fields do not significantly influence them, but this condition usually applies well to biological tissue.
Because the electric field is the gradient of the voltage distribution, rapid changes in voltage within a small region imply a strong electric field; on the converse, if the voltage remains approximately the same over a large region, the electric fields in that region must be weak. A strong electric field, equivalent to a strong voltage gradient, implies that a strong force is exerted on any charged particles that lie within the region.
Ions and the forces driving their motion
Electrical signals within biological organisms are, in general, driven by ions. The most important cations for the action potential are sodium (Na+) and potassium (K+). Both of these are monovalent cations that carry a single positive charge. Action potentials can also involve calcium (Ca2+), which is a divalent cation that carries a double positive charge. The chloride anion (Cl−) plays a major role in the action potentials of some algae, but plays a negligible role in the action potentials of most animals.
Ions cross the cell membrane under two influences: diffusion and electric fields. A simple example wherein two solutions—A and B—are separated by a porous barrier illustrates that diffusion will ensure that they will eventually mix into equal solutions. This mixing occurs because of the difference in their concentrations. The region with high concentration will diffuse out toward the region with low concentration. To extend the example, let solution A have 30 sodium ions and 30 chloride ions. Also, let solution B have only 20 sodium ions and 20 chloride ions. Assuming the barrier allows both types of ions to travel through it, a steady state will be reached whereby both solutions have 25 sodium ions and 25 chloride ions. If, however, the porous barrier is selective to which ions are let through, then diffusion alone will not determine the resulting solution. Suppose instead that the barrier is permeable only to sodium ions. Now, only sodium is allowed to diffuse across the barrier from its higher concentration in solution A to the lower concentration in solution B. This will result in a greater accumulation of sodium ions than chloride ions in solution B and a lesser number of sodium ions than chloride ions in solution A.
This means that there is a net positive charge in solution B from the higher concentration of positively charged sodium ions than negatively charged chloride ions. Likewise, there is a net negative charge in solution A from the greater concentration of negative chloride ions than positive sodium ions. Since opposite charges attract and like charges repel, the ions are now also influenced by electrical fields as well as forces of diffusion. Therefore, positive sodium ions will be less likely to travel to the now-more-positive B solution and remain in the now-more-negative A solution. The point at which the forces of the electric fields completely counteract the force due to diffusion is called the equilibrium potential. At this point, the net flow of the specific ion (in this case sodium) is zero.
Plasma membranes
Every cell is enclosed in a plasma membrane, which has the structure of a lipid bilayer with many types of large molecules embedded in it. Because it is made of lipid molecules, the plasma membrane intrinsically has a high electrical resistivity, in other words a low intrinsic permeability to ions. However, some of the molecules embedded in the membrane are capable either of actively transporting ions from one side of the membrane to the other or of providing channels through which they can move.
In electrical terminology, the plasma membrane functions as a combined resistor and capacitor. Resistance arises from the fact that the membrane impedes the movement of charges across it. Capacitance arises from the fact that the lipid bilayer is so thin that an accumulation of charged particles on one side gives rise to an electrical force that pulls oppositely charged particles toward the other side. The capacitance of the membrane is relatively unaffected by the molecules that are embedded in it, so it has a more or less invariant value estimated at 2 μF/cm2 (the total capacitance of a patch of membrane is proportional to its area). The conductance of a pure lipid bilayer is so low, on the other hand, that in biological situations it is always dominated by the conductance of alternative pathways provided by embedded molecules. Thus, the capacitance of the membrane is more or less fixed, but the resistance is highly variable.
The thickness of a plasma membrane is estimated to be about 7-8 nanometers. Because the membrane is so thin, it does not take a very large transmembrane voltage to create a strong electric field within it. Typical membrane potentials in animal cells are on the order of 100 millivolts (that is, one tenth of a volt), but calculations show that this generates an electric field close to the maximum that the membrane can sustain—it has been calculated that a voltage difference much larger than 200 millivolts could cause dielectric breakdown, that is, arcing across the membrane.
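To make the field-strength claim above concrete, here is a minimal back-of-the-envelope sketch in Python. It treats the membrane as a uniform slab and uses only the roughly 100 mV potential and 7–8 nm thickness quoted above; the exact numbers chosen are illustrative assumptions.

```python
# Rough order-of-magnitude estimate of the electric field across the bilayer.
# Assumed illustrative values: ~100 mV potential, ~7.5 nm membrane thickness.
membrane_potential = 0.1      # volts
membrane_thickness = 7.5e-9   # metres

field_strength = membrane_potential / membrane_thickness  # volts per metre
print(f"Electric field across the membrane: {field_strength:.1e} V/m")
# Prints roughly 1.3e+07 V/m, showing why even a modest transmembrane voltage
# produces an extremely strong field within such a thin membrane.
```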
Facilitated diffusion and transport
The resistance of a pure lipid bilayer to the passage of ions across it is very high, but structures embedded in the membrane can greatly enhance ion movement, either actively or passively, via mechanisms called facilitated transport and facilitated diffusion. The two types of structure that play the largest roles are ion channels and ion pumps, both usually formed from assemblages of protein molecules. Ion channels provide passageways through which ions can move. In most cases, an ion channel is permeable only to specific types of ions (for example, sodium and potassium but not chloride or calcium), and sometimes the permeability varies depending on the direction of ion movement. Ion pumps, also known as ion transporters or carrier proteins, actively transport specific types of ions from one side of the membrane to the other, sometimes using energy derived from metabolic processes to do so.
Ion pumps
Ion pumps are integral membrane proteins that carry out active transport, i.e., use cellular energy (ATP) to "pump" the ions against their concentration gradient. Such ion pumps take in ions from one side of the membrane (decreasing its concentration there) and release them on the other side (increasing its concentration there).
The ion pump most relevant to the action potential is the sodium–potassium pump, which transports three sodium ions out of the cell and two potassium ions in. As a consequence, the concentration of potassium ions K+ inside the neuron is roughly 30-fold larger than the outside concentration, whereas the sodium concentration outside is roughly five-fold larger than inside. In a similar manner, other ions have different concentrations inside and outside the neuron, such as calcium, chloride and magnesium.
If the numbers of each type of ion were equal, the sodium–potassium pump would be electrically neutral, but, because of the three-for-two exchange, it gives a net movement of one positive charge from intracellular to extracellular for each cycle, thereby contributing to a positive voltage difference. The pump has three effects: (1) it makes the sodium concentration high in the extracellular space and low in the intracellular space; (2) it makes the potassium concentration high in the intracellular space and low in the extracellular space; (3) it gives the intracellular space a negative voltage with respect to the extracellular space.
The sodium-potassium pump is relatively slow in operation. If a cell were initialized with equal concentrations of sodium and potassium everywhere, it would take hours for the pump to establish equilibrium. The pump operates constantly, but becomes progressively less efficient as the concentrations of sodium and potassium available for pumping are reduced.
Ion pumps influence the action potential only by establishing the relative ratio of intracellular and extracellular ion concentrations. The action potential involves mainly the opening and closing of ion channels not ion pumps. If the ion pumps are turned off by removing their energy source, or by adding an inhibitor such as ouabain, the axon can still fire hundreds of thousands of action potentials before their amplitudes begin to decay significantly. In particular, ion pumps play no significant role in the repolarization of the membrane after an action potential.
Another functionally important ion pump is the sodium-calcium exchanger. This pump operates in a conceptually similar way to the sodium-potassium pump, except that in each cycle it exchanges three Na+ from the extracellular space for one Ca++ from the intracellular space. Because the net flow of charge is inward, this pump runs "downhill", in effect, and therefore does not require any energy source except the membrane voltage. Its most important effect is to pump calcium outward—it also allows an inward flow of sodium, thereby counteracting the sodium-potassium pump, but, because overall sodium and potassium concentrations are much higher than calcium concentrations, this effect is relatively unimportant. The net result of the sodium-calcium exchanger is that in the resting state, intracellular calcium concentrations become very low.
Ion channels
Ion channels are integral membrane proteins with a pore through which ions can travel between extracellular space and cell interior. Most channels are specific (selective) for one ion; for example, most potassium channels are characterized by 1000:1 selectivity ratio for potassium over sodium, though potassium and sodium ions have the same charge and differ only slightly in their radius. The channel pore is typically so small that ions must pass through it in single-file order. Channel pores can be either open or closed for ion passage, although a number of channels demonstrate various sub-conductance levels. When a channel is open, ions permeate through the channel pore down the transmembrane concentration gradient for that particular ion. Rate of ionic flow through the channel, i.e. single-channel current amplitude, is determined by the maximum channel conductance and electrochemical driving force for that ion, which is the difference between the instantaneous value of the membrane potential and the value of the reversal potential.
A channel may have several different states (corresponding to different conformations of the protein), but each such state is either open or closed. In general, closed states correspond either to a contraction of the pore—making it impassable to the ion—or to a separate part of the protein, stoppering the pore. For example, the voltage-dependent sodium channel undergoes inactivation, in which a portion of the protein swings into the pore, sealing it. This inactivation shuts off the sodium current and plays a critical role in the action potential.
Ion channels can be classified by how they respond to their environment. For example, the ion channels involved in the action potential are voltage-sensitive channels; they open and close in response to the voltage across the membrane. Ligand-gated channels form another important class; these ion channels open and close in response to the binding of a ligand molecule, such as a neurotransmitter. Other ion channels open and close with mechanical forces. Still other ion channels—such as those of sensory neurons—open and close in response to other stimuli, such as light, temperature or pressure.
Leakage channels
Leakage channels are the simplest type of ion channel, in that their permeability is more or less constant. The types of leakage channels that have the greatest significance in neurons are potassium and chloride channels. Even these are not perfectly constant in their properties: First, most of them are voltage-dependent in the sense that they conduct better in one direction than the other (in other words, they are rectifiers); second, some of them are capable of being shut off by chemical ligands even though they do not require ligands in order to operate.
Ligand-gated channels
Ligand-gated ion channels are channels whose permeability is greatly increased when some type of chemical ligand binds to the protein structure. Animal cells contain hundreds, if not thousands, of types of these. A large subset function as neurotransmitter receptors—they occur at postsynaptic sites, and the chemical ligand that gates them is released by the presynaptic axon terminal. One example of this type is the AMPA receptor, a receptor for the neurotransmitter glutamate that when activated allows passage of sodium and potassium ions. Another example is the GABAA receptor, a receptor for the neurotransmitter GABA that when activated allows passage of chloride ions.
Neurotransmitter receptors are activated by ligands that appear in the extracellular area, but there are other types of ligand-gated channels that are controlled by interactions on the intracellular side.
Voltage-dependent channels
Voltage-gated ion channels, also known as voltage dependent ion channels, are channels whose permeability is influenced by the membrane potential. They form another very large group, with each member having a particular ion selectivity and a particular voltage dependence. Many are also time-dependent—in other words, they do not respond immediately to a voltage change but only after a delay.
One of the most important members of this group is a type of voltage-gated sodium channel that underlies action potentials—these are sometimes called Hodgkin-Huxley sodium channels because they were initially characterized by Alan Lloyd Hodgkin and Andrew Huxley in their Nobel Prize-winning studies of the physiology of the action potential. The channel is closed at the resting voltage level, but opens abruptly when the voltage exceeds a certain threshold, allowing a large influx of sodium ions that produces a very rapid change in the membrane potential. Recovery from an action potential is partly dependent on a type of voltage-gated potassium channel that is closed at the resting voltage level but opens as a consequence of the large voltage change produced during the action potential.
Reversal potential
The reversal potential (or equilibrium potential) of an ion is the value of transmembrane voltage at which diffusive and electrical forces counterbalance, so that there is no net ion flow across the membrane. This means that the transmembrane voltage exactly opposes the force of diffusion of the ion, such that the net current of the ion across the membrane is zero and unchanging. The reversal potential is important because it gives the voltage that acts on channels permeable to that ion—in other words, it gives the voltage that the ion concentration gradient generates when it acts as a battery.
The equilibrium potential of a particular ion is usually designated by the notation Eion. The equilibrium potential for any ion can be calculated using the Nernst equation. For example, the reversal potential for potassium ions is as follows:
Eeq,K+ = (RT / zF) ln([K+]o / [K+]i)
where
Eeq,K+ = equilibrium potential for potassium, measured in volts
R = universal gas constant, equal to 8.314 joules·K−1·mol−1
T = absolute temperature, measured in kelvins (K = degrees Celsius + 273.15)
z = number of elementary charges of the ion in question involved in the reaction
F = Faraday constant, equal to 96,485 coulombs·mol−1 or J·V−1·mol−1
[K+]o = extracellular concentration of potassium, measured in mol·m−3 or mmol·l−1
[K+]i = intracellular concentration of potassium
Even if two different ions have the same charge (i.e., K+ and Na+), they can still have very different equilibrium potentials, provided their outside and/or inside concentrations differ. Take, for example, the equilibrium potentials of potassium and sodium in neurons. The potassium equilibrium potential EK is −84 mV with 5 mM potassium outside and 140 mM inside. On the other hand, the sodium equilibrium potential, ENa, is approximately +66 mV with approximately 12 mM sodium inside and 140 mM outside.
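As a hedged illustration of the Nernst equation above, the following Python sketch recomputes the potassium and sodium reversal potentials from the concentrations quoted in the preceding paragraph. The temperature (here about 295 K) is an assumption; the exact figures shift by a few millivolts with the temperature chosen.

```python
import math

def nernst_potential(conc_out, conc_in, z=1, temp_kelvin=295.0):
    """Reversal (equilibrium) potential in volts from the Nernst equation."""
    R = 8.314     # universal gas constant, J·K^-1·mol^-1
    F = 96485.0   # Faraday constant, C·mol^-1
    return (R * temp_kelvin) / (z * F) * math.log(conc_out / conc_in)

# Concentrations (mM) quoted in the text for a typical neuron.
E_K = nernst_potential(conc_out=5.0, conc_in=140.0)     # about -0.085 V, i.e. ~ -84 mV
E_Na = nernst_potential(conc_out=140.0, conc_in=12.0)   # about +0.062 V, i.e. ~ +60 mV
print(f"E_K  = {E_K * 1000:.0f} mV")
print(f"E_Na = {E_Na * 1000:.0f} mV")
```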
Changes to membrane potential during development
A neuron's resting membrane potential actually changes during the development of an organism. In order for a neuron to eventually adopt its full adult function, its potential must be tightly regulated during development. As an organism progresses through development the resting membrane potential becomes more negative. Glial cells are also differentiating and proliferating as development progresses in the brain. The addition of these glial cells increases the organism's ability to regulate extracellular potassium. The drop in extracellular potassium can lead to a decrease in membrane potential of 35 mV.
Cell excitability
Cell excitability is the change in membrane potential that is necessary for cellular responses in various tissues. Cell excitability is a property that is induced during early embryogenesis. Excitability of a cell has also been defined as the ease with which a response may be triggered. The resting and threshold potentials form the basis of cell excitability, and these processes are fundamental for the generation of graded and action potentials.
The most important regulators of cell excitability are the extracellular electrolyte concentrations (i.e. Na+, K+, Ca2+, Cl−, Mg2+) and associated proteins. Important proteins that regulate cell excitability are voltage-gated ion channels, ion transporters (e.g. Na+/K+-ATPase, magnesium transporters, acid–base transporters), membrane receptors and hyperpolarization-activated cyclic-nucleotide-gated channels. For example, potassium channels and calcium-sensing receptors are important regulators of excitability in neurons, cardiac myocytes and many other excitable cells like astrocytes. Calcium ion is also the most important second messenger in excitable cell signaling. Activation of synaptic receptors initiates long-lasting changes in neuronal excitability. Thyroid, adrenal and other hormones also regulate cell excitability, for example, progesterone and estrogen modulate myometrial smooth muscle cell excitability.
Many cell types are considered to have an excitable membrane. Excitable cells are neurons, muscle (cardiac, skeletal, smooth), vascular endothelial cells, pericytes, juxtaglomerular cells, interstitial cells of Cajal, many types of epithelial cells (e.g. beta cells, alpha cells, delta cells, enteroendocrine cells, pulmonary neuroendocrine cells, pinealocytes), glial cells (e.g. astrocytes), mechanoreceptor cells (e.g. hair cells and Merkel cells), chemoreceptor cells (e.g. glomus cells, taste receptors), some plant cells and possibly immune cells. Astrocytes display a form of non-electrical excitability based on intracellular calcium variations related to the expression of several receptors through which they can detect the synaptic signal. In neurons, there are different membrane properties in some portions of the cell, for example, dendritic excitability endows neurons with the capacity for coincidence detection of spatially separated inputs.
Equivalent circuit
Electrophysiologists model the effects of ionic concentration differences, ion channels, and membrane capacitance in terms of an equivalent circuit, which is intended to represent the electrical properties of a small patch of membrane. The equivalent circuit consists of a capacitor in parallel with four pathways each consisting of a battery in series with a variable conductance. The capacitance is determined by the properties of the lipid bilayer, and is taken to be fixed. Each of the four parallel pathways comes from one of the principal ions, sodium, potassium, chloride, and calcium. The voltage of each ionic pathway is determined by the concentrations of the ion on each side of the membrane; see the Reversal potential section above. The conductance of each ionic pathway at any point in time is determined by the states of all the ion channels that are potentially permeable to that ion, including leakage channels, ligand-gated channels, and voltage-gated ion channels.
For fixed ion concentrations and fixed values of ion channel conductance, the equivalent circuit can be further reduced, using the Goldman equation as described below, to a circuit containing a capacitance in parallel with a battery and conductance. In electrical terms, this is a type of RC circuit (resistance-capacitance circuit), and its electrical properties are very simple. Starting from any initial state, the current flowing across either the conductance or the capacitance decays with an exponential time course, with a time constant of τ = RC, where C is the capacitance of the membrane patch and R is the net resistance. For realistic situations, the time constant usually lies in the 1–100 millisecond range. In most cases, changes in the conductance of ion channels occur on a faster time scale, so an RC circuit is not a good approximation; however, the differential equation used to model a membrane patch is commonly a modified version of the RC circuit equation.
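A minimal numerical sketch of the reduced RC behaviour described above is shown below; all parameter values (patch capacitance, net resistance, starting and resting voltages) are illustrative assumptions rather than values taken from the text.

```python
import math

# Assumed illustrative values for a small membrane patch.
C = 100e-12   # farads (100 pF patch capacitance)
R = 1e8       # ohms (100 MΩ net membrane resistance)
tau = R * C   # time constant = 10 ms, within the 1-100 ms range noted above

V0, V_rest = -0.060, -0.070   # volts: initial and resting potentials (assumed)
for t_ms in (0, 5, 10, 20, 50):
    t = t_ms / 1000.0
    V = V_rest + (V0 - V_rest) * math.exp(-t / tau)   # exponential relaxation
    print(f"t = {t_ms:3d} ms   V = {V * 1000:6.2f} mV")
```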
Resting potential
When the membrane potential of a cell goes for a long period of time without changing significantly, it is referred to as a resting potential or resting voltage. This term is used for the membrane potential of non-excitable cells, but also for the membrane potential of excitable cells in the absence of excitation. In excitable cells, the other possible states are graded membrane potentials (of variable amplitude), and action potentials, which are large, all-or-nothing rises in membrane potential that usually follow a fixed time course. Excitable cells include neurons, muscle cells, and some secretory cells in glands. Even in other types of cells, however, the membrane voltage can undergo changes in response to environmental or intracellular stimuli. For example, depolarization of the plasma membrane appears to be an important step in programmed cell death.
The interactions that generate the resting potential are modeled by the Goldman equation. This is similar in form to the Nernst equation shown above, in that it is based on the charges of the ions in question, as well as the difference between their inside and outside concentrations. However, it also takes into consideration the relative permeability of the plasma membrane to each ion in question:
Em = (RT / F) ln( (PK[K+]o + PNa[Na+]o + PCl[Cl−]i) / (PK[K+]i + PNa[Na+]i + PCl[Cl−]o) )
The three ions that appear in this equation are potassium (K+), sodium (Na+), and chloride (Cl−). Calcium is omitted, but can be added to deal with situations in which it plays a significant role. Being an anion, the chloride terms are treated differently from the cation terms; the intracellular concentration is in the numerator, and the extracellular concentration in the denominator, which is reversed from the cation terms. Pi stands for the relative permeability of the ion type i.
In essence, the Goldman formula expresses the membrane potential as a weighted average of the reversal potentials for the individual ion types, weighted by permeability. (Although the membrane potential changes about 100 mV during an action potential, the concentrations of ions inside and outside the cell do not change significantly. They remain close to their respective concentrations when the membrane is at resting potential.) In most animal cells, the permeability to potassium is much higher in the resting state than the permeability to sodium. As a consequence, the resting potential is usually close to the potassium reversal potential. The permeability to chloride can be high enough to be significant, but, unlike the other ions, chloride is not actively pumped, and therefore equilibrates at a reversal potential very close to the resting potential determined by the other ions.
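A hedged sketch of the Goldman equation in code follows. The permeability ratios and concentrations below are illustrative assumptions chosen to resemble a typical mammalian neuron; they are not values given in the text.

```python
import math

def goldman_potential(P, K_out, K_in, Na_out, Na_in, Cl_out, Cl_in, temp_kelvin=295.0):
    """Membrane potential (volts) from the Goldman equation for K+, Na+ and Cl-.

    P maps ion names to relative permeabilities. The chloride concentrations are
    swapped (intracellular in the numerator) because chloride is an anion.
    """
    R, F = 8.314, 96485.0
    numerator = P["K"] * K_out + P["Na"] * Na_out + P["Cl"] * Cl_in
    denominator = P["K"] * K_in + P["Na"] * Na_in + P["Cl"] * Cl_out
    return (R * temp_kelvin / F) * math.log(numerator / denominator)

# Assumed relative permeabilities and concentrations (mM).
Vm = goldman_potential({"K": 1.0, "Na": 0.05, "Cl": 0.45},
                       K_out=5, K_in=140, Na_out=145, Na_in=12,
                       Cl_out=110, Cl_in=10)
print(f"Resting potential = {Vm * 1000:.0f} mV")   # roughly -62 mV with these numbers
```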
Values of resting membrane potential in most animal cells usually vary between the potassium reversal potential (usually around -80 mV) and around -40 mV. The resting potential in excitable cells (capable of producing action potentials) is usually near -60 mV—more depolarized voltages would lead to spontaneous generation of action potentials. Immature or undifferentiated cells show highly variable values of resting voltage, usually significantly more positive than in differentiated cells. In such cells, the resting potential value correlates with the degree of differentiation: undifferentiated cells in some cases may not show any transmembrane voltage difference at all.
Maintenance of the resting potential can be metabolically costly for a cell because of its requirement for active pumping of ions to counteract losses due to leakage channels. The cost is highest when the cell function requires an especially depolarized value of membrane voltage. For example, the resting potential in daylight-adapted blowfly (Calliphora vicina) photoreceptors can be as high as -30 mV. This elevated membrane potential allows the cells to respond very rapidly to visual inputs; the cost is that maintenance of the resting potential may consume more than 20% of overall cellular ATP.
On the other hand, the high resting potential in undifferentiated cells does not necessarily incur a high metabolic cost. This apparent paradox is resolved by examination of the origin of that resting potential. Little-differentiated cells are characterized by extremely high input resistance, which implies that few leakage channels are present at this stage of cell life. As an apparent result, potassium permeability becomes similar to that for sodium ions, which places resting potential in-between the reversal potentials for sodium and potassium as discussed above. The reduced leakage currents also mean there is little need for active pumping in order to compensate, therefore low metabolic cost.
Graded potentials
As explained above, the potential at any point in a cell's membrane is determined by the ion concentration differences between the intracellular and extracellular areas, and by the permeability of the membrane to each type of ion. The ion concentrations do not normally change very quickly (with the exception of Ca2+, where the baseline intracellular concentration is so low that even a small influx may increase it by orders of magnitude), but the permeabilities of the ions can change in a fraction of a millisecond, as a result of activation of ligand-gated ion channels. The change in membrane potential can be either large or small, depending on how many ion channels are activated and what type they are, and can be either long or short, depending on the lengths of time that the channels remain open. Changes of this type are referred to as graded potentials, in contrast to action potentials, which have a fixed amplitude and time course.
As can be derived from the Goldman equation shown above, the effect of increasing the permeability of a membrane to a particular type of ion shifts the membrane potential toward the reversal potential for that ion. Thus, opening Na+ channels shifts the membrane potential toward the Na+ reversal potential, which is usually around +100 mV. Likewise, opening K+ channels shifts the membrane potential toward about –90 mV, and opening Cl− channels shifts it toward about –70 mV (resting potential of most membranes). Thus, Na+ channels shift the membrane potential in a positive direction, K+ channels shift it in a negative direction (except when the membrane is hyperpolarized to a value more negative than the K+ reversal potential), and Cl− channels tend to shift it towards the resting potential.
Graded membrane potentials are particularly important in neurons, where they are produced by synapses—a temporary change in membrane potential produced by activation of a synapse by a single graded or action potential is called a postsynaptic potential. Neurotransmitters that act to open Na+ channels typically cause the membrane potential to become more positive, while neurotransmitters that activate K+ channels typically cause it to become more negative; those that inhibit these channels tend to have the opposite effect.
Whether a postsynaptic potential is considered excitatory or inhibitory depends on the reversal potential for the ions of that current, and the threshold for the cell to fire an action potential (around –50mV). A postsynaptic current with a reversal potential above threshold, such as a typical Na+ current, is considered excitatory. A current with a reversal potential below threshold, such as a typical K+ current, is considered inhibitory. A current with a reversal potential above the resting potential, but below threshold, will not by itself elicit action potentials, but will produce subthreshold membrane potential oscillations. Thus, neurotransmitters that act to open Na+ channels produce excitatory postsynaptic potentials, or EPSPs, whereas neurotransmitters that act to open K+ or Cl− channels typically produce inhibitory postsynaptic potentials, or IPSPs. When multiple types of channels are open within the same time period, their postsynaptic potentials summate (are added together).
Other values
From the viewpoint of biophysics, the resting membrane potential is merely the membrane potential that results from the membrane permeabilities that predominate when the cell is resting. The above equation of weighted averages always applies, but the following approach may be more easily visualized.
At any given moment, there are two factors for an ion that determine how much influence that ion will have over the membrane potential of a cell:
That ion's driving force
That ion's permeability
If the driving force is high, then the ion is being "pushed" across the membrane. If the permeability is high, it will be easier for the ion to diffuse across the membrane.
Driving force is the net electrical force available to move that ion across the membrane. It is calculated as the difference between the voltage that the ion "wants" to be at (its equilibrium potential) and the actual membrane potential (Em). So, in formal terms, the driving force for an ion = Em - Eion
For example, at our earlier calculated resting potential of −73 mV, the driving force on potassium is 7 mV : (−73 mV) − (−80 mV) = 7 mV. The driving force on sodium would be (−73 mV) − (60 mV) = −133 mV.
Permeability is a measure of how easily an ion can cross the membrane. It is normally measured as the (electrical) conductance and the unit, siemens, corresponds to 1 C·s−1·V−1, that is one coulomb per second per volt of potential.
So, in a resting membrane, while the driving force for potassium is low, its permeability is very high. Sodium has a huge driving force but almost no resting permeability. In this case, potassium carries about 20 times more current than sodium, and thus has 20 times more influence over Em than does sodium.
However, consider another case—the peak of the action potential. Here, permeability to Na is high and K permeability is relatively low. Thus, the membrane moves to near ENa and far from EK.
The more ions that are permeant, the more complicated it becomes to predict the membrane potential. However, this can be done using the Goldman-Hodgkin-Katz equation or the weighted means equation. By plugging in the concentration gradients and the permeabilities of the ions at any instant in time, one can determine the membrane potential at that moment. What the GHK equation means is that, at any time, the value of the membrane potential will be a weighted average of the equilibrium potentials of all permeant ions. The "weighting" is each ion's relative permeability across the membrane.
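The "weighted means" view can also be written as a conductance-weighted average of the ion equilibrium potentials. The short sketch below reproduces the resting value of about −73 mV used earlier in this section, assuming, as the text suggests, that potassium carries roughly 20 times more current than sodium at rest; the exact conductance ratios are assumptions.

```python
def weighted_mean_potential(conductances, equilibrium_potentials):
    """Membrane potential as a conductance-weighted mean of ion equilibrium potentials."""
    total = sum(conductances.values())
    return sum(conductances[ion] * equilibrium_potentials[ion]
               for ion in conductances) / total

E = {"K": -0.080, "Na": +0.060}       # volts, the equilibrium potentials used above
g_rest = {"K": 20.0, "Na": 1.0}       # resting state: K+ dominates (assumed 20:1)
g_peak = {"K": 1.0, "Na": 20.0}       # action-potential peak: Na+ dominates (assumed)

print(f"Resting Vm = {weighted_mean_potential(g_rest, E) * 1000:.0f} mV")   # about -73 mV
print(f"Peak Vm    = {weighted_mean_potential(g_peak, E) * 1000:.0f} mV")   # about +53 mV, near E_Na
```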
Effects and implications
While cells expend energy to transport ions and establish a transmembrane potential, they use this potential in turn to transport other ions and metabolites such as sugar. The transmembrane potential of the mitochondria drives the production of ATP, which is the common currency of biological energy.
Cells may draw on the energy they store in the resting potential to drive action potentials or other forms of excitation. These changes in the membrane potential enable communication with other cells (as with action potentials) or initiate changes inside the cell, which happens in an egg when it is fertilized by a sperm.
Changes in the dielectric properties of plasma membrane may act as hallmark of underlying conditions such as diabetes and dyslipidemia.
In neuronal cells, an action potential begins with a rush of sodium ions into the cell through sodium channels, resulting in depolarization, while recovery involves an outward rush of potassium through potassium channels. Both of these fluxes occur by passive diffusion.
A dose of salt may trigger the still-working neurons of a fresh cut of meat into firing, causing muscle spasms.
See also
Bioelectrochemistry
Chemiosmotic potential
Electrochemical potential
Goldman equation
Membrane biophysics
Microelectrode array
Saltatory conduction
Surface potential
Gibbs–Donnan effect
Synaptic potential
Notes
References
Further reading
Alberts et al. Molecular Biology of the Cell. Garland Publishing; 4th Bk&Cdr edition (March 2002). Undergraduate level.
Guyton, Arthur C., John E. Hall. Textbook of Medical Physiology. W.B. Saunders Company; 10th edition (August 15, 2000). Undergraduate level.
Hille, B. Ionic Channels of Excitable Membranes. Sinauer Associates, Sunderland, MA, USA; 1st edition, 1984.
Nicholls, J.G., Martin, A.R. and Wallace, B.G. From Neuron to Brain. Sinauer Associates, Inc., Sunderland, MA, USA; 3rd edition, 1992.
Ove-Sten Knudsen. Biological Membranes: Theory of Transport, Potentials and Electric Impulses. Cambridge University Press (September 26, 2002). Graduate level.
National Medical Series for Independent Study. Physiology. Lippincott Williams & Wilkins, Philadelphia, PA, USA; 4th edition, 2001.
External links
Functions of the Cell Membrane
Nernst/Goldman Equation Simulator
Nernst Equation Calculator
Goldman-Hodgkin-Katz Equation Calculator
Electrochemical Driving Force Calculator
The Origin of the Resting Membrane Potential - Online interactive tutorial (Flash)
Cell communication
Cell signaling
Cellular processes
Cellular neuroscience
Electrochemical concepts
Electrophysiology
Membrane biology | Membrane potential | [
"Chemistry",
"Biology"
] | 8,658 | [
"Cell communication",
"Membrane biology",
"Electrochemical concepts",
"Electrochemistry",
"Cellular processes",
"Molecular biology"
] |
563,239 | https://en.wikipedia.org/wiki/Biogenic%20substance | A biogenic substance is a product made by or of life forms. While the term originally was specific to metabolite compounds that had toxic effects on other organisms, it has developed to encompass any constituents, secretions, and metabolites of plants or animals. In the context of molecular biology, biogenic substances are referred to as biomolecules. They are generally isolated and measured through the use of chromatography and mass spectrometry techniques. Additionally, the transformation and exchange of biogenic substances can be modelled in the environment, particularly their transport in waterways.
The observation and measurement of biogenic substances is notably important in the fields of geology and biochemistry. A large proportion of isoprenoids and fatty acids in geological sediments are derived from plants and chlorophyll, and can be found in samples extending back to the Precambrian. These biogenic substances are capable of withstanding the diagenesis process in sediment, but may also be transformed into other materials. This makes them useful as biomarkers for geologists to verify the age, origin and degradation processes of different rocks.
Biogenic substances have been studied as part of marine biochemistry since the 1960s, which has involved investigating their production, transport, and transformation in the water, and how they may be used in industrial applications. A large fraction of biogenic compounds in the marine environment are produced by micro and macro algae, including cyanobacteria. Due to their antimicrobial properties they are currently the subject of research in both industrial projects, such as for anti-fouling paints, or in medicine.
History of discovery and classification
During a meeting of the New York Academy of Sciences' Section of Geology and Mineralogy in 1903, geologist Amadeus William Grabau proposed a new rock classification system in his paper 'Discussion of and Suggestions Regarding a New Classification of Rocks'. Within the primary subdivision of "Endogenetic rocks" – rocks formed through chemical processes – was a category termed "Biogenic rocks", which was used synonymously with "Organic rocks". Other secondary categories were "Igneous" and "Hydrogenic" rocks.
In the 1930s German chemist Alfred E. Treibs first detected biogenic substances in petroleum as part of his studies of porphyrins. Based on this research, there was a later increase in the 1970s in the investigation of biogenic substances in sedimentary rocks as part of the study of geology. This was facilitated by the development of more advanced analytical methods, and led to greater collaboration between geologists and organic chemists in order to research the biogenic compounds in sediments.
Researchers additionally began to investigate the production of compounds by microorganisms in the marine environment during the early 1960s. By 1975, different research areas had developed in the study of marine biochemistry. These were "marine toxins, marine bioproducts and marine chemical ecology". Following this in 1994, Teuscher and Lindequist defined biogenic substances as "chemical compounds which are synthesised by living organisms and which, if they exceed certain concentrations, cause temporary or permanent damage or even death of other organisms by chemical or physicochemical effects" in their book, Biogene Gifte. This emphasis in research and classification on the toxicity of biogenic substances was partly due to the cytotoxicity-directed screening assays that were used to detect the biologically active compounds. The diversity of biogenic products has since been expanded from cytotoxic substances through the use of alternative pharmaceutical and industrial assays.
In the environment
Hydroecology
Through studying the transport of biogenic substances in the Tatar Strait in the Sea of Japan, a Russian team noted that biogenic substances can enter the marine environment due to input from either external sources, transport inside the water masses, or development by metabolic processes within the water. They can likewise be expended due to biotransformation processes, or biomass formation by microorganisms. In this study the biogenic substance concentrations, transformation frequency, and turnover were all highest in the upper layer of the water. Additionally, in different regions of the strait the biogenic substances with the highest annual transfer were constant. These were O2, DOC, and DISi, which are normally found in large concentrations in natural water. The biogenic substances that tend to have lower input through the external boundaries of the strait and therefore least transfer were mineral and detrital components of N and P. These same substances take active part in biotransformation processes in the marine environment and have lower annual output as well.
Geological sites
Organic geochemists also have an interest in studying the diagenesis of biogenic substances in petroleum and how they are transformed in sediment and fossils. While 90% of this organic material is insoluble in common organic solvents – called kerogen – 10% is in a form that is soluble and can be extracted, from where biogenic compounds can then be isolated. Saturated linear fatty acids and pigments have the most stable chemical structures and are therefore suited to withstanding degradation from the diagenesis process and being detected in their original forms. However, macromolecules have also been found in protected geological regions. Typical sedimentation conditions involve enzymatic, microbial and physicochemical processes as well as increased temperature and pressure, which lead to transformations of biogenic substances. For example, pigments that arise from dehydrogenation of chlorophyll or hemin can be found in many sediments as nickel or vanadyl complexes. A large proportion of the isoprenoids in sediments are also derived from chlorophyll. Similarly, linear saturated fatty acids discovered in the Messel oil shale of the Messel Pit in Germany arise from organic material of vascular plants.
Additionally, alkanes and isoprenoids are found in soluble extracts of Precambrian rock, indicating the probable existence of biological material more than three billion years ago. However, there is the potential that these organic compounds are abiogenic in nature, especially in Precambrian sediments. While Studier et al.'s (1968) simulations of the synthesis of isoprenoids in abiogenic conditions did not produce the long-chain isoprenoids used as biomarkers in fossils and sediments, traces of C9-C14 isoprenoids were detected. It is also possible for polyisoprenoid chains to be stereoselectively synthesised using catalysts such as Al(C2H5)3 – VCl3. However, the probability of these compounds being available in the natural environment is unlikely.
Measurement
The different biomolecules that make up a plant's biogenic substances – particularly those in seed exudates – can be identified by using different varieties of chromatography in a lab environment. For metabolite profiling, gas chromatography-mass spectrometry is used to find flavonoids such as quercetin. Compounds can then be further differentiated using reversed-phase high-performance liquid chromatography-mass spectrometry.
When it comes to measuring biogenic substances in a natural environment such as a body of water, a hydroecological CNPSi model can be used to calculate the spatial transport of biogenic substances, in both the horizontal and vertical dimensions. This model takes into account the water exchange and flow rate, and yields the values of biogenic substance rates for any area or layer of the water for any month. There are two main evaluation methods involved: measuring per unit water volume (mg/m3 year) and measuring substances per entire water volume of layer (t of element/year). The former is mostly used to observe biogenic substance dynamics and individual pathways for flux and transformations, and is useful when comparing individual regions of the strait or waterway. The second method is used for monthly substance fluxes and must take into account that there are monthly variations in the water volume in the layers.
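The relationship between the two evaluation methods above is a simple unit conversion; the sketch below is not an implementation of the CNPSi model itself, and both the rate and the layer volume are made-up illustrative figures.

```python
# Convert a biogenic substance rate per unit volume (mg/m^3 per year) into a
# total for an entire water layer (tonnes of element per year).
rate_mg_per_m3_per_year = 120.0   # assumed illustrative rate
layer_volume_m3 = 2.5e9           # assumed illustrative layer volume

total_tonnes_per_year = rate_mg_per_m3_per_year * layer_volume_m3 * 1e-9  # mg -> t
print(f"Layer total: {total_tonnes_per_year:.0f} t of element per year")  # 300 t/year
```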
In the study of geochemistry, biogenic substances can be isolated from fossils and sediments through a process of scraping and crushing the target rock sample, then washing with 40% hydrofluoric acid, water, and benzene/methanol in the ratio 3:1. Following this, the rock pieces are ground and centrifuged to produce a residue. Chemical compounds are then derived through various chromatography and mass spectrometry separations. However, extraction should be accompanied by rigorous precautions to ensure there is no amino acid contaminants from fingerprints, or silicone contaminants from other analytical treatment methods.
Applications
Anti-fouling paints
Metabolites produced by marine algae have been found to have many antimicrobial properties. This is because they are produced by the marine organisms as chemical deterrents and as such contain bioactive compounds. The principal classes of marine algae that produce these types of secondary metabolites are Cyanophyceae, Chlorophyceae and Rhodophyceae. Observed biogenic products include polyketides, amides, alkaloids, fatty acids, indoles and lipopeptides. For example, over 10% of compounds isolated from Lyngbya majuscula, which is one of the most abundant cyanobacteria, have antifungal and antimicrobial properties. Additionally, a study by Ren et al. (2002) tested halogenated furanones produced by Delisea pulchra from the Rhodophyceae class against the growth of Bacillus subtilis. When applied at a 40 μg/mL concentration, the furanone inhibited the formation of a biofilm by the bacteria and reduced the biofilm's thickness by 25% and the number of live cells by 63%.
These characteristics have the potential to be utilised in man-made materials, for example to make anti-fouling paints without environmentally damaging chemicals. Environmentally safe alternatives are needed to TBT (a tin-based antifouling agent), which releases toxic compounds into the water and the environment and has been banned in several countries. A class of biogenic compounds that has had a sizeable effect against the bacteria and microalgae that cause fouling are acetylene sesquiterpenoid esters produced by Caulerpa prolifera (from the Chlorophyceae class), which Smyrniotopoulos et al. (2003) observed inhibiting bacterial growth with up to 83% of the efficacy of TBT oxide.
Current research also aims to produce these biogenic substances on a commercial level using metabolic engineering techniques. By pairing these techniques with biochemical engineering design, algae and their biogenic substances can be produced on a large scale using photobioreactors. Different system types can be used to yield different biogenic products.
Paleochemotaxonomy
In the field of paleochemotaxonomy the presence of biogenic substances in geological sediments is useful for comparing old and modern biological samples and species. These biological markers can be used to verify the biological origin of fossils and serve as paleo-ecological markers. For example, the presence of pristane indicates that the petroleum or sediment is of marine origin, while biogenic material of non-marine origin tends to be in the form of polycyclic compounds or phytane. The biological markers also provide valuable information about the degradation reactions of biological material in geological environments. Comparing the organic material between geologically old and recent rocks shows the conservation of different biochemical processes.
Metallic nanoparticle production
Another application of biogenic substances is in the synthesis of metallic nanoparticles. The current chemical and physical production methods for nanoparticles used are costly and produce toxic waste and pollutants in the environment. Additionally, the nanoparticles that are produced can be unstable and unfit for use in the body. Using plant-derived biogenic substances aims to create an environmentally-friendly and cost-effective production method. The biogenic phytochemicals used for these reduction reactions can be derived from plants in numerous ways, including a boiled leaf broth, biomass powder, whole plant immersion in solution, or fruit and vegetable juice extracts. C. annuum juices have been shown to produce Ag nanoparticles at room temperature when treated with silver ions and additionally deliver essential vitamins and amino acids when consumed, making them a potential nanomaterials agent. Another procedure is through the use of a different biogenic substance: the exudate of germinating seeds. When seeds are soaked, they passively release phytochemicals into the surrounding water, which after reaching equilibrium can be mixed with metal ions to synthesise metallic nanoparticles. M. sativa exudate in particular has had success in effectively producing Ag metallic particles, while L. culinaris is an effective reactant for manufacturing Au nanoparticles. This process can also be further adjusted by manipulating factors such as pH, temperature, exudate dilution and plant origin to produce different shapes of nanoparticles, including triangles, spheres, rods, and spirals. These biogenic metallic nanoparticles then have applications as catalysts, glass window coatings to insulate heat, in biomedicine, and in biosensor devices.
Examples
Coal and oil are possible examples of constituents which may have undergone changes over geologic time periods.
Chalk and limestone are examples of secretions (marine animal shells) which are of geologic age.
Grass and wood are biogenic constituents of contemporary origin.
Pearls, silk and ambergris are examples of secretions of contemporary origin.
Biogenic neurotransmitters.
Table of isolated biogenic compounds
Abiogenic (opposite)
An abiogenic substance or process does not result from the present or past activity of living organisms. Abiogenic products may, e.g., be minerals, other inorganic compounds, as well as simple organic compounds (e.g. extraterrestrial methane, see also abiogenesis).
See also
Biogenic minerals
Natural product
Microalgae
Phytochemical
References
Biosphere
Geological processes
Natural materials
Organic compounds
Phycology
Paleobiology | Biogenic substance | [
"Physics",
"Chemistry",
"Biology"
] | 2,900 | [
"Algae",
"Natural materials",
"Phycology",
"Organic compounds",
"Materials",
"Paleobiology",
"Matter"
] |
563,386 | https://en.wikipedia.org/wiki/Select%20Society%20of%20Sanitary%20Sludge%20Shovelers | The Select Society of Sanitary Sludge Shovelers (5S) is used by water environment associations (i.e., those working with sewage and sewage treatment) to honour those who have made a particular contribution to the industry.
Pennsylvania started the High Hat Society in 1937 and used the words "Sludge Shovelers Society" in its initiation ceremony. Later, this became known as the Ted Moses Sludge Shovelers Society. The second Chapter of the Five S Society was formed in Arizona in October 1940, the idea being conceived by A.W. "Dusty" Miller and F. Carlyle Roberts, Jr. There are chapters in the United States and in Canada, as well as the United Kingdom, Australia and New Zealand.
5S chapters do not accept applications, but select potential members. Each inductee receives a badge in the form of a gold tie bar in the shape of a round-nosed shovel.
References
Sewerage
Professional associations | Select Society of Sanitary Sludge Shovelers | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 191 | [
"Sewerage",
"Environmental engineering",
"Water pollution"
] |
563,847 | https://en.wikipedia.org/wiki/Electromigration | Electromigration is the transport of material caused by the gradual movement of the ions in a conductor due to the momentum transfer between conducting electrons and diffusing metal atoms. The effect is important in applications where high direct current densities are used, such as in microelectronics and related structures. As the structure size in electronics such as integrated circuits (ICs) decreases, the practical significance of this effect increases.
History
The phenomenon of electromigration has been known for over 100 years, having been discovered by the French scientist Gerardin. The topic first became of practical interest during the late 1960s when packaged ICs first appeared. The earliest commercially available ICs failed in a mere three weeks of use from runaway electromigration, which led to a major industry effort to correct this problem. The first observation of electromigration in thin films was made by I. Blech. Research in this field was pioneered by a number of investigators throughout the fledgling semiconductor industry. One of the most important engineering studies was performed by Jim Black of Motorola, after whom Black's equation is named. At the time, the metal interconnects in ICs were still about 10 micrometres wide. Currently interconnects are only hundreds to tens of nanometers in width, making research in electromigration increasingly important.
Practical implications of electromigration
Electromigration decreases the reliability of integrated circuits (ICs). It can cause the eventual loss of connections or failure of a circuit. Reliability is critically important for space travel, military applications, anti-lock braking systems, and medical equipment such as automated external defibrillators, and it matters even for personal computers and home entertainment systems; consequently, the reliability of chips (ICs) is a major focus of research efforts.
Due to the difficulty of testing under real-world conditions, Black's equation is used to predict the life span of integrated circuits.
To use Black's equation, the component is put through high temperature operating life (HTOL) testing. The component's expected life span under real conditions is extrapolated from data gathered during this testing.
Although damage from electromigration ultimately results in the failure of the affected IC, the first symptoms are intermittent glitches, which are quite challenging to diagnose. As some interconnects fail before others, the circuit exhibits seemingly random errors, which may be indistinguishable from other failure mechanisms (such as electrostatic discharge damage). In a laboratory setting, electromigration failure is readily imaged with an electron microscope, as interconnect erosion leaves telltale visual markers on the metal layers of the IC.
With increasing miniaturization, the probability of failure due to electromigration increases in VLSI and ULSI circuits because both the power density and the current density increase. Specifically, line widths will continue to decrease over time, as will wire cross-sectional areas. Currents are also reduced due to lower supply voltages and shrinking gate capacitances. However, as current reduction is constrained by increasing frequencies, the more marked decrease in cross-sectional areas (compared to current reduction) will give rise to increased current densities in ICs going forward.
In advanced semiconductor manufacturing processes, copper has replaced aluminium as the interconnect material of choice. Despite its greater fragility in the fabrication process, copper is preferred for its superior conductivity. It is also intrinsically less susceptible to electromigration. However, electromigration (EM) continues to be an ever-present challenge to device fabrication, and therefore the EM research for copper interconnects is ongoing (though a relatively new field).
In modern consumer electronic devices, ICs rarely fail due to electromigration effects. This is because proper semiconductor design practices incorporate the effects of electromigration into the IC's layout. Nearly all IC design houses use automated EDA tools to check and correct electromigration problems at the transistor layout-level. When operated within the manufacturer's specified temperature and voltage range, a properly designed IC device is more likely to fail from other (environmental) causes, such as cumulative damage from gamma-ray bombardment.
Nevertheless, there have been documented cases of product failures due to electromigration. In the late 1980s, one line of Western Digital's desktop drives suffered widespread, predictable failure after 12–18 months of field usage. Using forensic analysis of the returned bad units, engineers identified improper design-rules in a third-party supplier's IC controller. By replacing the bad component with that of a different supplier, WD was able to correct the flaw, but not before significant damage was done to the company's reputation.
Electromigration can be a cause of degradation in some power semiconductor devices such as low voltage power MOSFETs, in which the lateral current through the source contact metallisation (often aluminium) can reach the critical current densities during overload conditions. The degradation of the aluminium layer causes an increase in on-state resistance, and can eventually lead to complete failure.
Fundamentals
The material properties of the metal interconnects have a strong influence on their life span. The characteristics are predominantly the composition of the metal alloy and the dimensions of the conductor. The shape of the conductor, the crystallographic orientation of the grains in the metal, procedures for the layer deposition, heat treatment or annealing, characteristics of the passivation, and the interface to other materials also affect the durability of the interconnects. There are also important differences with time dependent current: direct current or different alternating current waveforms cause different effects.
Forces on ions in an electrical field
Two forces affect ionized atoms in a conductor: 1) the direct electrostatic force Fe resulting from the electric field, which acts in the same direction as the field, and 2) the force Fp from the exchange of momentum with other charge carriers, which acts in the direction of the charge-carrier flow, opposite to the electric field. In metallic conductors Fp is caused by a so-called "electron wind" or "ion wind".
The resulting force Fres on an activated ion in the electrical field can be written in the standard form

$\vec{F}_{res} = \vec{F}_e + \vec{F}_p = q\,(Z_{el} + Z_{wd})\,\rho\,\vec{j} = q\,Z^{*}\rho\,\vec{j},$

where $q$ is the electric charge of the ions, $Z_{el}$ and $Z_{wd}$ the valences corresponding to the electrostatic and wind force respectively, $Z^{*} = Z_{el} + Z_{wd}$ the so-called effective valence of the material, $\vec{j}$ the current density, and $\rho$ the resistivity of the material (so that $\rho\vec{j}$ is the local electric field).
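As a rough numerical illustration of the size of this driving force, the following Python sketch evaluates $F = q Z^{*} \rho j$ for an aluminium-like interconnect; all parameter values are illustrative assumptions, not data from the text.

```python
# Rough numeric sketch of the electromigration driving force per ion,
# F = q * Z_eff * rho * j, using illustrative values only:
# an aluminium-like interconnect carrying j = 1e10 A/m^2 (1e6 A/cm^2).

q = 1.602e-19        # elementary charge, C
Z_eff = -10          # assumed effective valence (sign: force along electron flow)
rho = 2.7e-8         # resistivity, ohm*m (approximate value for aluminium)
j = 1e10             # current density, A/m^2

F = q * Z_eff * rho * j   # force on one activated ion, newtons
print(f"Driving force per ion: {F:.2e} N")  # magnitude on the order of 1e-16 N
```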
Electromigration occurs when some of the momentum of a moving electron is transferred to a nearby activated ion. This causes the ion to move from its original position. Over time this force knocks a significant number of atoms far from their original positions. A break or gap can develop in the conducting material, preventing the flow of electricity. In narrow interconnect conductors, such as those linking transistors and other components in integrated circuits, this is known as a void or internal failure (open circuit). Electromigration can also cause the atoms of a conductor to pile up and drift toward other nearby conductors, creating an unintended electrical connection known as a hillock failure or whisker failure (short circuit). Both of these situations can lead to a malfunction of the circuit.
Failure mechanisms
Diffusion mechanisms
In a homogeneous crystalline structure, because of the uniform lattice structure of the metal ions, there is hardly any momentum transfer between the conduction electrons and the metal ions. However, this symmetry does not exist at the grain boundaries and material interfaces, and so here momentum is transferred much more vigorously. Since the metal ions in these regions are bonded more weakly than in a regular crystal lattice, once the electron wind has reached a certain strength, atoms become separated from the grain boundaries and are transported in the direction of the current. This direction is also influenced by the grain boundary itself, because atoms tend to move along grain boundaries.
Diffusion processes caused by electromigration can be divided into grain boundary diffusion, bulk diffusion and surface diffusion. In general, grain boundary diffusion is the major electromigration process in aluminum wires, whereas surface diffusion is dominant in copper interconnects.
Thermal effects
In an ideal conductor, where atoms are arranged in a perfect lattice structure, the electrons moving through it would experience no collisions and electromigration would not occur. In real conductors, defects in the lattice structure and the random thermal vibration of the atoms about their positions causes electrons to collide with the atoms and scatter, which is the source of electrical resistance (at least in metals; see electrical conduction). Normally, the amount of momentum imparted by the relatively low-mass electrons is not enough to permanently displace the atoms. However, in high-power situations (such as with the increasing current draw and decreasing wire sizes in modern VLSI microprocessors), if many electrons bombard the atoms with enough force to become significant, this will accelerate the process of electromigration by causing the atoms of the conductor to vibrate further from their ideal lattice positions, increasing the amount of electron scattering. High current density increases the number of electrons scattering against the atoms of the conductor, and hence the rate at which those atoms are displaced.
In integrated circuits, electromigration does not occur in semiconductors directly, but in the metal interconnects deposited onto them (see semiconductor device fabrication).
Electromigration is exacerbated by high current densities and the Joule heating of the conductor (see electrical resistance), and can lead to eventual failure of electrical components. Localized increase of current density is known as current crowding.
Balance of atom concentration
A governing equation which describes the atom concentration evolution throughout some interconnect segment is the conventional mass balance (continuity) equation

$\frac{\partial N}{\partial t} + \nabla \cdot \vec{J} = 0,$

where $N(\vec{r},t)$ is the atom concentration at the point with coordinates $\vec{r}$ at the moment of time $t$, and $\vec{J}$ is the total atomic flux at this location. The total atomic flux $\vec{J}$ is a combination of the fluxes caused by the different atom migration forces. The major forces are induced by the electric current, and by the gradients of temperature, mechanical stress and concentration: $\vec{J} = \vec{J}_E + \vec{J}_T + \vec{J}_S + \vec{J}_C$.

To define the fluxes mentioned above (given here in their standard form):

$\vec{J}_E = \frac{D N}{k T}\, e Z \rho \vec{j}.$

Here $e$ is the electron charge, $Z$ is the effective charge of the migrating atom, $\rho$ the resistivity of the conductor where atom migration takes place, $\vec{j}$ is the local current density, $k$ is the Boltzmann constant, $T$ is the absolute temperature, and $D = D(\vec{r},t)$ is the time and position dependent atom diffusivity.

$\vec{J}_T = -\frac{D N Q^{*}}{k T^{2}} \nabla T,$ where we use $Q^{*}$, the heat of thermal diffusion.

$\vec{J}_S = \frac{D N \Omega}{k T} \nabla \sigma_H,$

here $\Omega$ is the atomic volume and $N_0$ is the initial atomic concentration, $\sigma_H = (\sigma_{11} + \sigma_{22} + \sigma_{33})/3$ is the hydrostatic stress and $\sigma_{11}, \sigma_{22}, \sigma_{33}$ are the components of principal stress.

$\vec{J}_C = -D \nabla N.$

Assuming a vacancy mechanism for atom diffusion, we can express $D$ as a function of the hydrostatic stress, $D = D_0 \exp\!\left(-\frac{E_A - \Omega \sigma_H}{k T}\right)$, where $E_A$ is the effective activation energy of the thermal diffusion of metal atoms. The vacancy concentration represents availability of empty lattice sites, which might be occupied by a migrating atom.
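The sketch below shows, under the standard flux forms stated above, how the four contributions could be evaluated on a simple 1D grid; it is not taken from any TCAD tool, and every profile and constant in it is an assumption chosen only for illustration.

```python
import numpy as np

# Minimal 1D sketch of the four flux terms (standard forms, illustrative
# constants only). A real solver would couple these to the continuity
# equation and to self-consistent stress and thermal models.

k = 1.380649e-23   # Boltzmann constant, J/K
e = 1.602e-19      # electron charge, C

def atomic_fluxes(N, T, sigma_h, D, Z=-10, rho=2.7e-8, j=1e10,
                  Q=1e-19, Omega=1.66e-29, dx=1e-6):
    """Return (J_E, J_T, J_S, J_C) along a uniformly spaced 1D segment."""
    gradT = np.gradient(T, dx)
    gradSigma = np.gradient(sigma_h, dx)
    gradN = np.gradient(N, dx)
    J_E = (D * N / (k * T)) * e * Z * rho * j      # electromigration
    J_T = -(D * N * Q / (k * T**2)) * gradT        # thermomigration
    J_S = (D * N * Omega / (k * T)) * gradSigma    # stress migration
    J_C = -D * gradN                               # Fickian diffusion
    return J_E, J_T, J_S, J_C

# Hypothetical profiles along a 100-point interconnect segment.
x = np.linspace(0, 1e-4, 100)
N = np.full_like(x, 6e28)              # atoms/m^3
T = 400 + 20 * x / x[-1]               # K, small temperature gradient
sigma = 1e8 * (1 - x / x[-1])          # Pa, relaxing hydrostatic stress
D = np.full_like(x, 1e-18)             # m^2/s
print([flux.mean() for flux in atomic_fluxes(N, T, sigma, D)])
```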
Electromigration-aware design
Electromigration reliability of a wire (Black's equation)
At the end of the 1960s J. R. Black developed an empirical model to estimate the MTTF (mean time to failure) of a wire, taking electromigration into consideration. Since then, the formula has gained popularity in the semiconductor industry:
$\mathrm{MTTF} = \frac{A}{j^{n}} \exp\!\left(\frac{E_a}{k T}\right).$

Here $A$ is a constant based on the cross-sectional area of the interconnect, $j$ is the current density, $E_a$ is the activation energy (e.g. 0.7 eV for grain boundary diffusion in aluminum), $k$ is the Boltzmann constant, $T$ is the temperature in kelvins, and $n$ a scaling factor (usually set to 2 according to Black). The temperature of the conductor appears in the exponent, i.e. it strongly affects the MTTF of the interconnect. For an interconnect of a given construction to remain reliable as the temperature rises, the current density within the conductor must be reduced. However, as interconnect technology advances at the nanometer scale, the validity of Black's equation becomes increasingly questionable.
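Because the prefactor $A$ is process-specific, Black's equation is most safely used to compare operating conditions rather than to predict absolute lifetimes. The sketch below does exactly that; the numeric values are illustrative assumptions.

```python
import math

# Black's equation as a relative-lifetime calculator: the unknown,
# process-specific prefactor A cancels when two operating points are compared.

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def mttf_ratio(j1, T1, j2, T2, Ea=0.7, n=2):
    """MTTF(condition 1) / MTTF(condition 2) according to Black's equation."""
    return (j2 / j1) ** n * math.exp(Ea / K_B_EV * (1.0 / T1 - 1.0 / T2))

# Example: raising junction temperature from 358 K to 398 K at the same
# current density shortens the expected lifetime by roughly this factor:
print(1.0 / mttf_ratio(j1=1e6, T1=398, j2=1e6, T2=358))   # ~ 10x
```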
Wire material
Historically, aluminium has been used as conductor in integrated circuits, due to its good adherence to substrate, good conductivity, and ability to form ohmic contacts with silicon. However, pure aluminium is susceptible to electromigration. Research shows that adding 2-4% of copper to aluminium increases resistance to electromigration about 50 times. The effect is attributed to the grain boundary segregation of copper, which greatly inhibits the diffusion of aluminium atoms across grain boundaries.
Pure copper wires can withstand approximately five times more current density than aluminum wires while maintaining similar reliability requirements. This is mainly due to the higher electromigration activation energy levels of copper, caused by its superior electrical and thermal conductivity as well as its higher melting point. Further improvements can be achieved by alloying copper with about 1% palladium which inhibits diffusion of copper atoms along grain boundaries in the same way as the addition of copper to aluminium interconnect.
Bamboo structure and metal slotting
A wider wire results in smaller current density and, hence, less likelihood of electromigration. Also, the metal grain size has an influence: the smaller the grains, the more grain boundaries there are and the higher the likelihood of electromigration effects. However, if the wire width is reduced below the average grain size of the wire material, grain boundaries become "crosswise", more or less perpendicular to the length of the wire. The resulting structure resembles the joints in a stalk of bamboo. With such a structure, the resistance to electromigration increases, despite an increase in current density. This apparent contradiction is caused by the perpendicular position of the grain boundaries; the boundary diffusion factor is excluded, and material transport is correspondingly reduced.
However, the maximum wire width possible for a bamboo structure is usually too narrow for signal lines of large-magnitude currents in analog circuits or for power supply lines. In these circumstances, slotted wires are often used, whereby rectangular holes are carved in the wires. Here, the widths of the individual metal structures in between the slots lie within the area of a bamboo structure, while the resulting total width of all the metal structures meets power requirements.
Blech length
There is a lower limit for the length of the interconnect that will allow higher current carrying capability. It is known as the "Blech length". Any wire that has a length below this limit can sustain a higher current density before electromigration failure occurs. Here, a mechanical stress buildup causes an atom back-flow process which reduces or even compensates the effective material flow towards the anode. The Blech length must be considered when designing test structures to evaluate electromigration. This minimum length is typically some tens of microns for chip traces, and interconnections shorter than this are sometimes referred to as 'electromigration immortal'.
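A back-of-the-envelope sketch of the Blech criterion follows; the critical current-density–length product used in it is a placeholder assumption (the real value is technology dependent), so only the structure of the calculation should be taken from it.

```python
# Sketch of the Blech ("immortality") criterion j * L < (j*L)_critical.
# The critical product below is an assumed placeholder, not a datasheet figure.

JL_CRITICAL = 3000.0   # A/cm -- assumed critical Blech product

def max_immortal_length_um(j_a_per_cm2, jl_critical=JL_CRITICAL):
    """Longest interconnect (in micrometres) still below the Blech limit."""
    return jl_critical / j_a_per_cm2 * 1e4   # cm -> um

# At 1 MA/cm^2, segments shorter than this would be expected to be EM-immortal:
print(max_immortal_length_um(1e6))   # -> 30.0 um, i.e. "some tens of microns"
```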
Via arrangements and corner bends
Particular attention must be paid to vias and contact holes. The current carrying capacity of a via is much less than that of a metallic wire of the same length. Hence multiple vias are often used, whereby the geometry of the via array is very significant: multiple vias must be organized such that the resulting current is distributed as evenly as possible through all the vias.
Attention must also be paid to bends in interconnects. In particular, 90-degree corner bends must be avoided, since the current density in such bends is significantly higher than that in oblique angles (e.g., 135 degrees).
Electromigration in solder joints
The typical current density at which electromigration occurs in Cu or Al interconnects is 10^6 to 10^7 A/cm^2. For solder joints (SnPb or SnAgCu lead-free) used in IC chips, however, electromigration occurs at much lower current densities, e.g. 10^4 A/cm^2.
It causes a net atom transport along the direction of electron flow. The atoms accumulate at the anode, while voids are generated at the cathode and back stress is induced during electromigration. The typical failure of a solder joint due to electromigration will occur at the cathode side. Due to the current crowding effect, voids form first at the corners of the solder joint. Then the voids extend and join to cause a failure. Electromigration also influences formation of intermetallic compounds, as the migration rates are a function of atomic mass.
Electromigration and technology computer aided design
The complete mathematical model describing electromigration consists of several partial differential equations (PDEs) which need to be solved for three-dimensional geometrical domains representing segments of an interconnect structure. Such a mathematical model forms the basis for simulation of electromigration in modern technology computer aided design (TCAD) tools.
Use of TCAD tools for detailed investigations of electromigration induced interconnect degradation is gaining importance. Results of TCAD studies in combination with reliability tests lead to modification of design rules improving the interconnect resistance to electromigration.
Electromigration due to IR drop noise of the on-chip power grid network/interconnect
The electromigration degradation of the on-chip power grid network/interconnect depends on the IR drop noise of the power grid interconnect.
The electromigration-aware lifetime of the power grid interconnects as well as the chip decreases if the chip suffers from a high value of the IR drop noise.
Machine Learning Model for Electromigration-aware MTTF Prediction
Recent work demonstrates MTTF prediction using a machine learning model. The work uses a neural network-based supervised learning approach with current density, interconnect length, and interconnect temperature as input features to the model.
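The following is a hedged sketch of this general idea only, not of the cited work's actual model, data, or architecture: a small neural-network regressor is trained on synthetically generated lifetimes so that the example stays self-contained and runnable.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative sketch: map (current density, length, temperature) -> MTTF.
# Training data are generated from a Black-like expression purely so the
# example runs on its own; none of these numbers come from the cited work.

rng = np.random.default_rng(0)
n = 2000
j = rng.uniform(0.5e6, 5e6, n)      # current density, A/cm^2
L = rng.uniform(5, 200, n)          # interconnect length, um
T = rng.uniform(330, 420, n)        # interconnect temperature, K
mttf = 1e17 / j**2 * np.exp(0.7 / (8.617e-5 * T)) * (1 + 10.0 / L)  # synthetic

# Scale features to comparable ranges and regress the log-lifetime.
X = np.column_stack([j / 1e6, L / 100.0, (T - 300.0) / 100.0])
y = np.log(mttf)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(X, y)

query = [[2.0, 0.5, 0.8]]           # j = 2 MA/cm^2, L = 50 um, T = 380 K
print("predicted MTTF (arb. units):", float(np.exp(model.predict(query)[0])))
```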
Electromigrated nanogaps
Electromigrated nanogaps are gaps formed in metallic bridges by the process of electromigration. A nanosized contact formed by electromigration acts like a waveguide for electrons. The nanocontact essentially acts like a one-dimensional wire with a conductance of $G_0 = 2e^2/h$. The current in a wire is the velocity of the electrons multiplied by the charge and number per unit length, or $I = n e v$; for a single one-dimensional channel this gives a conductance of $G = 2e^2/h$. In nanoscale bridges the conductance falls in discrete steps of multiples of the quantum conductance $2e^2/h$.
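The conductance quantum that sets the step size can be evaluated directly; the short sketch below does so from the elementary charge and Planck's constant.

```python
# Quick numerical check of the conductance quantum G0 = 2 e^2 / h.

e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s

G0 = 2 * e**2 / h
print(f"G0 = {G0:.3e} S  (~ {1 / G0 / 1000:.1f} kOhm per channel)")
```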
Electromigrated Nanogaps have shown great promise as electrodes in use in molecular scale electronics. Researchers have used feedback controlled electromigration to investigate the magnetoresistance of a quantum spin valve.
Reference standards
EIA/JEDEC Standard EIA/JESD61: Isothermal Electromigration Test Procedure.
EIA/JEDEC Standard EIA/JESD63: Standard method for calculating the electromigration model parameters for current density and temperature.
Fundamentals of electromigration, Chapter 2
See also
Kirkendall effect
Sealing current
References
Further reading
Ghate, P. B.: Electromigration-Induced Failures in VLSI Interconnects, IEEE Conf. Publication, Vol. 20, pp. 292–299, March 1982.
Lienig, J.: , (Download paper) Proc. of the Int. Symposium on Physical Design (ISPD) 2006, pp. 39–46, April 2006.
Lienig, J., Thiele, M.: , (Download paper), Proc. of the Int. Symposium on Physical Design (ISPD) 2018, pp. 144–151, March 2018.
Louie Liu, H.C., Murarka, S.: "Modeling of Temperature Increase Due to Joule Heating During Elektromigration Measurements", Center for Integrated Electronics and Electronics Manufacturing, Materials Research Society Symposium Proceedings Vol. 427, pp. 113–119.
Books
External links
What is Electromigration?, Computer Simulation Laboratory, Middle East Technical University.
Electromigration for Designers: An Introduction for the Non-Specialist, J.R. Lloyd, EETimes.
Semiconductor electromigration in-depth at DWPG.Com
Modeling of electromigration process with void formation at UniPro R&D site
DoITPoMS Teaching and Learning Package- "Electromigration"
Electric and magnetic fields in matter
Electronic design automation
Semiconductor device defects
Transport phenomena
Electrochemistry | Electromigration | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 4,096 | [
"Transport phenomena",
"Physical phenomena",
"Chemical engineering",
"Technological failures",
"Semiconductor device defects",
"Electric and magnetic fields in matter",
"Materials science",
"Electrochemistry",
"Condensed matter physics"
] |
564,527 | https://en.wikipedia.org/wiki/Density%20matrix%20renormalization%20group | The density matrix renormalization group (DMRG) is a numerical variational technique devised to obtain the low-energy physics of quantum many-body systems with high accuracy. As a variational method, DMRG is an efficient algorithm that attempts to find the lowest-energy matrix product state wavefunction of a Hamiltonian. It was invented in 1992 by Steven R. White and it is nowadays the most efficient method for 1-dimensional systems.
History
The first application of the DMRG, by Steven R. White and Reinhard Noack, was a toy model: to find the spectrum of a spin 0 particle in a 1D box. This model had been proposed by Kenneth G. Wilson as a test for any new renormalization group method, because they all happened to fail with this simple problem. The DMRG overcame the problems of previous renormalization group methods by connecting two blocks with the two sites in the middle rather than just adding a single site to a block at each step as well as by using the density matrix to identify the most important states to be kept at the end of each step. After succeeding with the toy model, the DMRG method was tried with success on the quantum Heisenberg model.
Principle
The main problem of quantum many-body physics is the fact that the Hilbert space grows exponentially with size. In other words, if one considers a lattice with some Hilbert space of dimension d on each site of the lattice, then the total Hilbert space would have dimension d^N, where N is the number of sites on the lattice. For example, a spin-1/2 chain of length L has 2^L degrees of freedom. The DMRG is an iterative, variational method that reduces effective degrees of freedom to those most important for a target state. The state one is most often interested in is the ground state.
After a warmup cycle, the method splits the system into two subsystems, or blocks, which need not have equal sizes, and two sites in between. A set of representative states has been chosen for the block during the warmup. This set of left blocks + two sites + right blocks is known as the superblock. Now a candidate for the ground state of the superblock, which is a reduced version of the full system, may be found. It may have a rather poor accuracy, but the method is iterative and improves with the steps below.
The candidate ground state that has been found is projected into the Hilbert subspace for each block using a density matrix, hence the name. Thus, the relevant states for each block are updated.
Now one of the blocks grows at the expense of the other and the procedure is repeated. When the growing block reaches maximum size, the other starts to grow in its place. Each time we return to the original (equal sizes) situation, we say that a sweep has been completed. Normally, a few sweeps are enough to get a precision of a part in 10^10 for a 1D lattice.
Implementation guide
A practical implementation of the DMRG algorithm is a lengthy work. A few of the main computational tricks are these:
Since the size of the renormalized Hamiltonian is usually on the order of a few thousand to tens of thousands, while the sought eigenstate is just the ground state, the ground state for the superblock is obtained via an iterative algorithm such as the Lanczos algorithm of matrix diagonalization. Another choice is the Arnoldi method, especially when dealing with non-hermitian matrices.
The Lanczos algorithm usually starts with the best guess of the solution. If no guess is available a random vector is chosen. In DMRG, the ground state obtained in a certain DMRG step, suitably transformed, is a reasonable guess and thus works significantly better than a random starting vector at the next DMRG step.
In systems with symmetries, we may have conserved quantum numbers, such as total spin in a Heisenberg model. It is convenient to find the ground state within each of the sectors into which the Hilbert space is divided.
Applications
The DMRG has been successfully applied to get the low energy properties of spin chains: Ising model in a transverse field, Heisenberg model, etc., fermionic systems, such as the Hubbard model, problems with impurities such as the Kondo effect, boson systems, and the physics of quantum dots joined with quantum wires. It has been also extended to work on tree graphs, and has found applications in the study of dendrimers. For 2D systems with one of the dimensions much larger than the other DMRG is also accurate, and has proved useful in the study of ladders.
The method has been extended to study equilibrium statistical physics in 2D, and to analyze non-equilibrium phenomena in 1D.
The DMRG has also been applied to the field of quantum chemistry to study strongly correlated systems.
Example: Quantum Heisenberg model
Let us consider an "infinite" DMRG algorithm for the antiferromagnetic quantum Heisenberg chain. The recipe can be applied for every translationally invariant one-dimensional lattice.
DMRG is a renormalization-group technique because it offers an efficient truncation of the Hilbert space of one-dimensional quantum systems.
Starting point
To simulate an infinite chain, start with four sites. The first is the block site, the last the universe-block site and the remaining are the added sites, the right one is added to the universe-block site and the other to the block site.
The Hilbert space for the single site is the three-dimensional space of a spin 1, with the base of S_z eigenstates {|+1⟩, |0⟩, |−1⟩}. With this base the spin operators for the single site are S_x, S_y and S_z (equivalently S_z, S^+ and S^-). For every block, the two blocks and the two sites, there is its own Hilbert space, its base, and its own operators, where
block: , , , , ,
left-site: , , , ,
right-site: , , , ,
universe: , , , , ,
At the starting point all four Hilbert spaces are equivalent to , all spin operators are equivalent to , and and . In the following iterations, this is only true for the left and right sites.
Step 1: Form the Hamiltonian matrix for the superblock
The ingredients are the four block operators and the four universe-block operators, which at the first iteration are matrices, the three left-site spin operators and the three right-site spin operators, which are always matrices. The Hamiltonian matrix of the superblock (the chain), which at the first iteration has only four sites, is formed by these operators. In the Heisenberg antiferromagnetic S=1 model the Hamiltonian is:

$H = J \sum_{i} \vec{S}_i \cdot \vec{S}_{i+1} = J \sum_{i} \left[ S^z_i S^z_{i+1} + \tfrac{1}{2}\left( S^+_i S^-_{i+1} + S^-_i S^+_{i+1} \right) \right].$
These operators live in the superblock state space: , the base is . For example: (convention):
The Hamiltonian in the DMRG form is (we set ):
The operators are matrices, , for example:
Step 2: Diagonalize the superblock Hamiltonian
At this point you must choose the eigenstate of the Hamiltonian for which some observables will be calculated; this is the target state $|\psi\rangle$. At the beginning you can choose the ground state and use some advanced algorithm to find it, one of these is described in:
The Iterative Calculation of a Few of the Lowest Eigenvalues and Corresponding Eigenvectors of Large Real-Symmetric Matrices, Ernest R. Davidson; Journal of Computational Physics 17, 87-94 (1975)
This step is the most time-consuming part of the algorithm.
If $|\psi\rangle$ is the target state, expectation values of various operators $A$ can be measured at this point using $\langle\psi| A |\psi\rangle$.
Step 3: Reduce density matrix
Form the reduced density matrix $\rho$ for the first two-block subsystem, i.e. the block and the left-site. By definition it is the matrix

$\rho_{i,i'} = \sum_{j} \psi_{i,j}\, \psi^{*}_{i',j},$

where $\psi_{i,j}$ are the coefficients of the target state expanded in the product basis of the (block + left-site) subsystem (index $i$) and the (right-site + universe-block) subsystem (index $j$).
Diagonalize $\rho$ and form the matrix $T$, whose rows are the $m$ eigenvectors associated with the $m$ largest eigenvalues $\lambda_\alpha$ of $\rho$. So $T$ is formed by the most significant eigenstates of the reduced density matrix. You choose $m$ by looking at the parameter $P_m = \sum_{\alpha=1}^{m} \lambda_\alpha$: the closer $P_m$ is to 1, the more accurate the truncation.
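A minimal numerical sketch of steps 3 and 4 for a generic (random) target state is given below; the dimensions and the number of kept states m are arbitrary illustrative choices, and the code is not tied to the Heisenberg example above.

```python
import numpy as np

# Sketch of DMRG steps 3-4: build the reduced density matrix of the
# (block + left site) subsystem, keep the m eigenvectors with the largest
# eigenvalues, and use them as the truncation matrix T.

dim_sys, dim_env, m = 12, 12, 4           # subsystem/environment sizes, kept states

psi = np.random.rand(dim_sys * dim_env)   # stand-in for the superblock ground state
psi /= np.linalg.norm(psi)
psi = psi.reshape(dim_sys, dim_env)       # psi[i, j]: system index i, environment j

rho = psi @ psi.T                         # reduced density matrix rho_{i,i'}
evals, evecs = np.linalg.eigh(rho)        # eigenvalues in ascending order
T = evecs[:, -m:].T                       # rows = m most significant eigenvectors

P_m = evals[-m:].sum()                    # truncation weight, ideally close to 1
print("kept weight P_m =", P_m)

# A block operator O (dim_sys x dim_sys) is renormalized as T @ O @ T.T:
O_new = T @ np.eye(dim_sys) @ T.T
print(O_new.shape)                        # (m, m)
```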
Step 4: New block and universe-block operators
Form the matrix representation of operators for the system composite of the block and left-site, and for the system composite of right-site and universe-block, for example:
Now, form the matrix representations of the new block and universe-block operators; form a new block by changing basis with the transformation $T$, for example $\tilde{H} = T H T^{\dagger}$ (and similarly for the spin operators). At this point the iteration is ended and the algorithm goes back to step 1.
The algorithm stops successfully when the observable converges to some value.
Matrix product ansatz
The success of the DMRG for 1D systems is related to the fact that it is a variational method within the space of matrix product states (MPS). These are states of the form

$|\Psi\rangle = \sum_{s_1 \dots s_N} \operatorname{Tr}\left( A^{s_1} A^{s_2} \cdots A^{s_N} \right) |s_1 s_2 \dots s_N\rangle,$

where $s_1, \dots, s_N$ are the values of the e.g. z-component of the spin in a spin chain, and the $A^{s_i}$ are matrices of arbitrary dimension m. As m → ∞, the representation becomes exact. This theory was presented by S. Rommer and S. Ostlund in .
In quantum chemistry applications, $s_i$ stands for the four possibilities of the projection of the spin quantum number of the two electrons that can occupy a single orbital, thus $s_i \in \{|00\rangle, |10\rangle, |01\rangle, |11\rangle\}$, where the first (second) entry of these kets corresponds to the spin-up (down) electron. In quantum chemistry, $A^{s_1}$ (for a given $s_1$) and $A^{s_L}$ (for a given $s_L$) are traditionally chosen to be row and column matrices, respectively. This way, the result of the matrix product is a scalar value and the trace operation is unnecessary. $L$ is the number of sites (the orbitals, basically) used in the simulation.
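The following sketch shows how a single MPS coefficient can be evaluated as the trace of a matrix product, using random matrices as stand-ins; the site count, local dimension and bond dimension are arbitrary illustrative values.

```python
import numpy as np

# Evaluate one coefficient of a matrix product state: for a chosen spin
# configuration (s_1, ..., s_N), multiply the corresponding matrices and
# take the trace. Random matrices are used purely as placeholders.

N, d, m = 6, 2, 4     # sites, local dimension (e.g. spin-1/2), bond dimension
rng = np.random.default_rng(1)
A = rng.normal(size=(N, d, m, m))         # A[i, s] is the m x m matrix for site i

def mps_coefficient(config):
    """<s_1 ... s_N | Psi> = Tr(A^{s_1} A^{s_2} ... A^{s_N})."""
    M = np.eye(m)
    for site, s in enumerate(config):
        M = M @ A[site, s]
    return np.trace(M)

print(mps_coefficient([0, 1, 1, 0, 1, 0]))
```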
The matrices in the MPS ansatz are not unique: one can, for instance, insert $B B^{-1}$ between $A^{s_i}$ and $A^{s_{i+1}}$, then define $\tilde{A}^{s_i} = A^{s_i} B$ and $\tilde{A}^{s_{i+1}} = B^{-1} A^{s_{i+1}}$, and the state will stay unchanged. Such gauge freedom is employed to transform the matrices into a canonical form. Three types of canonical form exist: (1) left-normalized form, when

$\sum_{s_i} A^{s_i \dagger} A^{s_i} = I$

for all $i$, (2) right-normalized form, when

$\sum_{s_i} A^{s_i} A^{s_i \dagger} = I$

for all $i$, and (3) mixed-canonical form when both left- and right-normalized matrices exist among the matrices in the above MPS ansatz.
The goal of the DMRG calculation is then to solve for the elements of each of the matrices. The so-called one-site and two-site algorithms have been devised for this purpose. In the one-site algorithm, only one matrix (one site) has its elements solved for at a time. Two-site just means that two matrices are first contracted (multiplied) into a single matrix, and then its elements are solved. The two-site algorithm is proposed because the one-site algorithm is much more prone to getting trapped at a local minimum. Having the MPS in one of the above canonical forms has the advantage of making the computation more favorable – it leads to an ordinary eigenvalue problem. Without canonicalization, one would be dealing with a generalized eigenvalue problem.
Extensions
In 2004 the time-evolving block decimation method was developed to implement real-time evolution of matrix product states. The idea is based on the classical simulation of a quantum computer. Subsequently, a new method was devised to compute real-time evolution within the DMRG formalism - See the paper by A. Feiguin and S.R. White .
In recent years, some proposals to extend the method to 2D and 3D have been put forward, extending the definition of the matrix product states. See this paper by F. Verstraete and I. Cirac, .
Further reading
The original paper, by S. R. White, or
A textbook on DMRG and its origins: https://www.springer.com/gp/book/9783540661290
A broad review, by Karen Hallberg, .
Two reviews by Ulrich Schollwöck, one discussing the original formulation , and another in terms of matrix product states
The Ph.D. thesis of Javier Rodríguez Laguna .
An introduction to DMRG and its time-dependent extension .
A list of DMRG e-prints on arxiv.org .
A review article on DMRG for ab initio quantum chemistry .
An introduction video on DMRG for ab initio quantum chemistry .
Related software
The Matrix Product Toolkit: A free GPL set of tools for manipulating finite and infinite matrix product states written in C++
Uni10: a library implementing numerous tensor network algorithms (DMRG, TEBD, MERA, PEPS ...) in C++
Powder with Power: a free distribution of time-dependent DMRG code written in Fortran
The ALPS Project: a free distribution of time-independent DMRG code and Quantum Monte Carlo codes written in C++
DMRG++: a free implementation of DMRG written in C++
The ITensor (Intelligent Tensor) Library: a free library for performing tensor and matrix-product state based DMRG calculations written in C++
OpenMPS: an open source DMRG implementation based on Matrix Product States written in Python/Fortran2003.
Snake DMRG program: open source DMRG, tDMRG and finite temperature DMRG program written in C++
CheMPS2: open source (GPL) spin-adapted DMRG code for ab initio quantum chemistry written in C++
Block: open source DMRG framework for quantum chemistry and model Hamiltonians. Supports SU(2) and general non-Abelian symmetries. Written in C++.
Block2: An efficient parallel implementation of DMRG, dynamical DMRG, tdDMRG, and finite temperature DMRG for quantum chemistry and models. Written in Python/C++.
See also
Quantum Monte Carlo
Time-evolving block decimation
Configuration interaction
References
Theoretical physics
Computational physics
Statistical mechanics | Density matrix renormalization group | [
"Physics"
] | 2,829 | [
"Statistical mechanics",
"Theoretical physics",
"Computational physics"
] |
564,641 | https://en.wikipedia.org/wiki/Virtual%20ground | In electronics, a virtual ground (or virtual earth) is a node of a circuit that is maintained at a steady reference potential, without being connected directly to the reference potential. In some cases the reference potential is considered to be that of the surface of the earth, and the reference node is called "ground" or "earth" as a consequence.
The virtual ground concept aids circuit analysis in operational amplifiers and other circuits and provides useful practical circuit effects that would be difficult to achieve in other ways.
In circuit theory, a node may have any value of current or voltage but physical implementations of a virtual ground will have limitations in terms of current handling ability and a non-zero impedance which may have practical side effects.
Construction
A voltage divider, using two resistors, can be used to create a virtual ground node. If two voltage sources are connected in series with two resistors, it can be shown that the midpoint becomes a virtual ground if $\frac{V_1}{R_1} = \frac{V_2}{R_2}$.
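This balance condition can be checked numerically; the sketch below computes the midpoint potential of such a divider and shows that it sits at the reference potential exactly when the condition holds. Component values are illustrative.

```python
# Passive rail-splitter check: with supplies +V1 and -V2 in series with R1 and
# R2, the midpoint potential (relative to the supplies' common reference) is
# zero exactly when V1/R1 = V2/R2. All values below are illustrative.

def midpoint_voltage(V1, V2, R1, R2):
    """Potential of the divider midpoint for a +V1 / -V2 supply pair."""
    # One current flows through the series string: I = (V1 + V2) / (R1 + R2);
    # the midpoint sits I*R1 below the positive rail.
    I = (V1 + V2) / (R1 + R2)
    return V1 - I * R1

print(midpoint_voltage(9.0, 9.0, 10e3, 10e3))   # balanced   -> 0.0 V
print(midpoint_voltage(9.0, 6.0, 10e3, 10e3))   # unbalanced -> 1.5 V
```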
An active virtual ground circuit is sometimes called a rail splitter. Such a circuit uses an op-amp or some other circuit element that has gain. Since an operational amplifier has very high open-loop gain, the potential difference between its inputs tends to zero when a feedback network is implemented.
This means that the output supplies the inverting input (via the feedback network) with enough voltage to reduce the potential difference between the inputs to microvolts. More precisely, it can be shown that the output voltage of the amplifier in the figure is approximately equal to .
Thus, as far as the amplifier is working in its linear region (output not saturated, frequencies inside the range of the opamp), the voltage at the inverting input terminal remains constant with respect to the real ground, and independent from the loads to which the output may be connected.
This property is characterized as a "virtual ground".
Applications
Voltage is a differential quantity, which appears between two points. In order to deal only with a voltage (an electrical potential) of a single point, the second point has to be connected to a reference point (ground). Usually, the power supply terminals serve as steady grounds; when the internal points of compound power sources are accessible, they can also serve as real grounds.
If there are no accessible source internal points, external circuit points with steady voltage relative to the source terminals can serve as artificial virtual grounds. Such a point has to have steady potential, which does not vary when a load is attached.
See also
Voltage-to-current converter and Current-to-voltage converter show some typical virtual ground applications
Miller theorem applications
References
External links
Create a Virtual Ground with the LT1118-2.5 Sink/Source Voltage Regulator
Rail Splitter, from Abraham Lincoln to Virtual Ground Application note on creating an artificial virtual ground as a reference voltage.
Creating a Virtual Power Supply Ground
Inverting configuration shows the application of the virtual ground concept in an inverting amplifier (Archived)
Electrical circuits
Electricity concepts | Virtual ground | [
"Engineering"
] | 597 | [
"Electrical engineering",
"Electronic engineering",
"Electrical circuits"
] |
564,719 | https://en.wikipedia.org/wiki/Hybrid%20system | A hybrid system is a dynamical system that exhibits both continuous and discrete dynamic behavior – a system that can both flow (described by a differential equation) and jump (described by a state machine, automaton, or a difference equation). Often, the term "hybrid dynamical system" is used instead of "hybrid system", to distinguish from other usages of "hybrid system", such as the combination neural nets and fuzzy logic, or of electrical and mechanical drivelines. A hybrid system has the benefit of encompassing a larger class of systems within its structure, allowing for more flexibility in modeling dynamic phenomena.
In general, the state of a hybrid system is defined by the values of the continuous variables and a discrete mode. The state changes either continuously, according to a flow condition, or discretely according to a control graph. Continuous flow is permitted as long as so-called invariants hold, while discrete transitions can occur as soon as given jump conditions are satisfied. Discrete transitions may be associated with events.
Examples
Hybrid systems have been used to model several cyber-physical systems, including physical systems with impact, logic-dynamic controllers, and even Internet congestion.
Bouncing ball
A canonical example of a hybrid system is the bouncing ball, a physical system with impact. Here, the ball (thought of as a point-mass) is dropped from an initial height and bounces off the ground, dissipating its energy with each bounce. The ball exhibits continuous dynamics between each bounce; however, as the ball impacts the ground, its velocity undergoes a discrete change modeled after an inelastic collision. A mathematical description of the bouncing ball follows. Let $x_1$ be the height of the ball and $x_2$ be the velocity of the ball. A hybrid system describing the ball is as follows:

When $x_1 > 0$, flow is governed by

$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -g,$

where $g$ is the acceleration due to gravity. These equations state that when the ball is above ground, it is being drawn to the ground by gravity.

When $x_1 = 0$, jumps are governed by

$x_2 \leftarrow -\gamma\, x_2,$

where $0 < \gamma < 1$ is a dissipation factor. This is saying that when the height of the ball is zero (it has impacted the ground), its velocity is reversed and decreased by a factor of $\gamma$. Effectively, this describes the nature of the inelastic collision.
The bouncing ball is an especially interesting hybrid system, as it exhibits Zeno behavior. Zeno behavior has a strict mathematical definition, but can be described informally as the system making an infinite number of jumps in a finite amount of time. In this example, each time the ball bounces it loses energy, making the subsequent jumps (impacts with the ground) closer and closer together in time.
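A minimal event-driven simulation of this hybrid model (closed-form flow between impacts, the jump map at each impact) illustrates how the impact times bunch together; the parameter values are illustrative.

```python
import math

# Bouncing-ball hybrid system: flow under gravity between impacts (solved in
# closed form) and the discrete jump x2 <- -gamma * x2 at each impact.

g, gamma = 9.81, 0.8      # gravity (m/s^2), dissipation factor
x1, x2 = 1.0, 0.0         # initial height (m) and velocity (m/s)
t = 0.0

for bounce in range(10):
    # Time until x1 = 0, from x1(tau) = x1 + x2*tau - g*tau^2/2.
    t_impact = (x2 + math.sqrt(x2**2 + 2 * g * x1)) / g
    t += t_impact
    x2 = -(x2 - g * t_impact)     # velocity just before impact, sign flipped
    x2 *= gamma                   # jump map: dissipate energy
    x1 = 0.0
    print(f"bounce {bounce + 1}: t = {t:.3f} s, rebound speed = {x2:.3f} m/s")
# The impact times accumulate: a numerical illustration of the Zeno behaviour.
```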
It is noteworthy that the dynamical model is complete if and only if one adds the contact force between the ground and the ball. Indeed, without forces, one cannot properly define the bouncing ball and the model is, from a mechanical point of view, meaningless. The simplest contact model that represents the interactions between the ball and the ground is the complementarity relation between the force and the distance (the gap) between the ball and the ground. This is written as $0 \le x_1 \perp \lambda \ge 0$, i.e. $x_1 \ge 0$, $\lambda \ge 0$ and $x_1 \lambda = 0$, where $\lambda$ denotes the contact force.
Such a contact model does not incorporate magnetic forces, nor gluing effects. When the complementarity relations are included, one can continue to integrate the system after the impacts have accumulated and vanished: the equilibrium of the system is well-defined as the static equilibrium of the ball on the ground, under the action of gravity compensated by the contact force $\lambda$. One also notices from basic convex analysis that the complementarity relation can equivalently be rewritten as the inclusion into a normal cone, so that the bouncing ball dynamics is a differential inclusion into a normal cone to a convex set. See Chapters 1, 2 and 3 in Acary-Brogliato's book cited below (Springer LNACM 35, 2008). See also the other references on non-smooth mechanics.
Hybrid systems verification
There are approaches to automatically proving properties of hybrid systems (e.g., some of the tools mentioned below). Common techniques for proving safety of hybrid systems are computation of reachable sets, abstraction refinement, and barrier certificates.
Most verification tasks are undecidable, making general verification algorithms impossible. Instead, the tools are analyzed for their capabilities on benchmark problems. A possible theoretical characterization of this is algorithms that succeed at hybrid systems verification in all robust cases, implying that many problems for hybrid systems, while undecidable, are at least quasi-decidable.
Other modeling approaches
Two basic hybrid system modeling approaches can be classified, an implicit and an explicit one. The explicit approach is often represented by a hybrid automaton, a hybrid program or a hybrid Petri net. The implicit approach is often represented by guarded equations to result in systems of differential algebraic equations (DAEs) where the active equations may change, for example by means of a hybrid bond graph.
As a unified simulation approach for hybrid system analysis, there is a method based on the DEVS formalism in which integrators for differential equations are quantized into atomic DEVS models. These methods generate traces of system behaviors in a discrete event system manner, which differs from discrete time systems. Details of this approach can be found in references [Kofman2004] [CF2006] [Nutaro2010] and the software tool PowerDEVS.
Software Tools
Simulation
HyEQ Toolbox: Hybrid system solver for MATLAB and Simulink
PowerDEVS: General-purpose tool for DEVS (Discrete Event System) modeling and simulation oriented to the simulation of hybrid systems
Reachability
Ariadne: C++ library for (numerically rigorous) reachability analysis of nonlinear hybrid systems
CORA: A MATLAB Toolbox for reachability analysis of cyber-physical systems, including hybrid systems
Flow*: A tool for reachability analysis of nonlinear hybrid systems
HyCreate: A tool for overapproximating reachability of hybrid automata
HyPro: C++ library for state set representations for hybrid systems reachability analysis
JuliaReach: A toolbox for set-based reachability
Temporal Logic and Other Verification
C2E2: Nonlinear hybrid system verifier
HyTech: Model checker for hybrid systems
HSolver: Verification tool for hybrid systems
KeYmaera: Theorem prover for hybrid systems
PHAVer: Polyhedral hybrid automaton verifier
S-TaLiRo: MATLAB toolbox for verification of hybrid systems with respect to temporal logic specifications
Other
SCOTS: Tool for the synthesis of correct-by-construction controllers for hybrid systems
SpaceEx: State-space explorer
See also
Hybrid automaton
Sliding mode control
Variable structure system
Variable structure control
Joint spectral radius
Cyber-physical system
Behavior trees (artificial intelligence, robotics and control)
Jump process (in the context of probability), an example of a (stochastic) hybrid system with zero flow component
Piecewise-deterministic Markov process (PDMP), an example of a (stochastic) hybrid system and a generalization of the jump process
Jump diffusion, an example of a (stochastic) hybrid system and a generalization of the PDMP
Further reading
[Kofman2004]
[CF2006]
[Nutaro2010]
External links
IEEE CSS Committee on Hybrid Systems
References
Systems theory
Differential equations
Dynamical systems
Control theory | Hybrid system | [
"Physics",
"Mathematics"
] | 1,484 | [
"Applied mathematics",
"Control theory",
"Mathematical objects",
"Differential equations",
"Equations",
"Mechanics",
"Dynamical systems"
] |
564,746 | https://en.wikipedia.org/wiki/Closed-loop%20controller | A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller.
A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.
In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle; where external influences such as hills would cause speed changes, and the driver has the ability to alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimum way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine.
Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways.
Closed-loop controllers have the following advantages over open-loop controllers:
disturbance rejection (such as hills in the cruise control example above)
guaranteed performance even with model uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact
unstable processes can be stabilized
reduced sensitivity to parameter variations
improved reference tracking performance
improved rectification of random fluctuations
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance.
A common closed-loop controller architecture is the PID controller.
Open-loop and closed-loop
Closed-loop transfer function
The output of the system y(t) is fed back through a sensor measurement F to a comparison with the reference value r(t). The controller C then takes the error e (difference) between the reference and the output to change the inputs u to the system under control P. This is shown in the figure. This kind of controller is a closed-loop controller or feedback controller.
This is called a single-input-single-output (SISO) control system; MIMO (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions).
If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e., elements of their transfer function C(s), P(s), and F(s) do not depend on time), the systems above can be analysed using the Laplace transform on the variables. This gives the following relations:

$Y(s) = P(s)\, U(s),$
$U(s) = C(s)\, E(s),$
$E(s) = R(s) - F(s)\, Y(s).$

Solving for Y(s) in terms of R(s) gives

$Y(s) = \left( \frac{P(s) C(s)}{1 + P(s) C(s) F(s)} \right) R(s) = H(s)\, R(s).$

The expression $H(s) = \frac{P(s) C(s)}{1 + P(s) C(s) F(s)}$ is referred to as the closed-loop transfer function of the system. The numerator is the forward (open-loop) gain from r to y, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If $|P(s) C(s)| \gg 1$, i.e., it has a large norm with each value of s, and if $|F(s)| \approx 1$, then Y(s) is approximately equal to R(s) and the output closely tracks the reference input.
PID feedback control
A proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism control technique widely used in control systems.
A PID controller continuously calculates an error value as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms. PID is an initialism for Proportional-Integral-Derivative, referring to the three terms operating on the error signal to produce a control signal.
The theoretical understanding and application dates from the 1920s, and they are implemented in nearly all analogue control systems; originally in mechanical controllers, and then using discrete electronics and later in industrial process computers.
The PID controller is probably the most-used feedback control design.
If $u(t)$ is the control signal sent to the system, $y(t)$ is the measured output, $r(t)$ is the desired output, and $e(t) = r(t) - y(t)$ is the tracking error, a PID controller has the general form

$u(t) = K_P\, e(t) + K_I \int_0^t e(\tau)\, d\tau + K_D\, \frac{de(t)}{dt}.$

The desired closed loop dynamics is obtained by adjusting the three parameters $K_P$, $K_I$ and $K_D$, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response. PID controllers are the most well-established class of control systems: however, they cannot be used in several more complicated cases, especially if MIMO systems are considered.
Applying Laplace transformation results in the transformed PID controller equation
U(s) = (K_P + K_I/s + K_D·s)·E(s)
with the PID controller transfer function
C(s) = K_P + K_I/s + K_D·s
As an example of tuning a PID controller in the closed-loop system H(s), consider a 1st order plant given by
P(s) = A / (1 + s·T_P)
where A and T_P are some constants. The plant output is fed back through
F(s) = 1 / (1 + s·T_F)
where T_F is also a constant. Now if we set K_P = K·(1 + T_D/T_I), K_I = K/T_I, and K_D = K·T_D, we can express the PID controller transfer function in series form as
C(s) = K·(1 + 1/(s·T_I))·(1 + s·T_D)
Plugging P(s), F(s), and C(s) into the closed-loop transfer function H(s), we find that by setting
K = 1/A, T_I = T_F, T_D = T_P
we obtain H(s) = 1. With this tuning in this example, the system output follows the reference input exactly.
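The pole-zero cancellation behind this result can be verified symbolically. The sketch below uses the plant, sensor, and series-form controller in the forms given above (which are themselves a reconstruction of the example) and confirms that the stated tuning collapses the closed-loop transfer function to 1.

```python
# Symbolic check that K = 1/A, T_I = T_F, T_D = T_P gives H(s) = 1
# for the first-order plant and sensor forms used in the example above.
import sympy as sp

s, A, T_P, T_F = sp.symbols('s A T_P T_F', positive=True)

P = A / (1 + s * T_P)                         # plant
F = 1 / (1 + s * T_F)                         # sensor / feedback path
K, T_I, T_D = 1 / A, T_F, T_P                 # the tuning from the text
C = K * (1 + 1 / (s * T_I)) * (1 + s * T_D)   # PID in series form

H = sp.simplify(P * C / (1 + P * C * F))      # closed-loop transfer function
print(H)                                       # -> 1
```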
However, in practice, a pure differentiator is neither physically realizable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach or a differentiator with low-pass roll-off is used instead.
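One common realizable alternative mentioned above is a derivative term with a first-order low-pass roll-off, D(s) = K_D·s / (1 + s·T_f). The short sketch below shows one possible discrete-time form of such a filtered derivative; the gain and filter time constant are arbitrary example values.

```python
# Discrete filtered derivative: D(s) = Kd*s / (1 + s*Tf), discretized with backward Euler.
# Tf limits the high-frequency gain to Kd/Tf, so measurement noise is not amplified without bound.
def filtered_derivative(errors, dt, Kd=0.1, Tf=0.05):
    d, out = 0.0, []
    prev_e = errors[0]
    for e in errors[1:]:
        # backward-Euler update of Tf*d' + d = Kd*e'
        d = (Tf * d + Kd * (e - prev_e)) / (Tf + dt)
        out.append(d)
        prev_e = e
    return out
```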
References
Control theory | Closed-loop controller | [
"Mathematics"
] | 1,260 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
565,250 | https://en.wikipedia.org/wiki/Paternoster%20lift | A paternoster or paternoster lift is a passenger elevator which consists of a chain of open compartments (each usually designed for two people) that move slowly in a loop up and down inside a building without stopping. Passengers can step on or off at any floor they like. The same technique is also used for filing cabinets to store large amounts of (paper) documents or for small spare parts. The much smaller belt manlift, which consists of an endless belt with steps and rungs but no compartments, is also sometimes called a paternoster.
The name paternoster ("Our Father", the first two words of the Lord's Prayer in Latin) was originally applied to the device because the elevator is in the form of a loop and is thus similar to rosary beads used as an aid in reciting prayers.
The construction of new paternosters was stopped in the mid-1970s out of concern for safety, but public sentiment has kept many of the remaining examples open. By far, most remaining paternosters are in Europe, with 230 examples in Germany and 68 in the Czech Republic. Only three have been identified outside Europe; one each in Malaysia, Sri Lanka, and Peru.
History
British architect Peter Ellis obtained a patent in July 1866 for "an improved lift, hoist, or mechanical elevator" with two shafts and subsequently installed the first elevators that could be described as paternoster lifts in Oriel Chambers in Liverpool in 1868. This patent lapsed in July 1873. Another was used in 1876 to transport parcels at the General Post Office in London. In 1878, British engineer Frederick Hart obtained a patent on the paternoster. In 1884, the engineering firm of J & E Hall of Dartford, Kent, installed its first "Cyclic Elevator", using Hart's patent, in a London office block, and the firm is generally considered the company first involved in regular construction of the lifts.
The newly built Dovenhof in Hamburg was inaugurated in 1886. The prototype of the Hamburg office buildings equipped with the latest technology also had a paternoster. This first system outside of Great Britain already had the technology that would later become common, but was still driven by steam power like the British systems.
The highest paternoster lift in the world was located in Stuttgart in the 16-floor Tagblatt tower, which was completed in 1927. This was replaced with conventional elevators in 1959.
Paternosters were popular throughout the first half of the 20th century because they could carry more passengers than ordinary elevators. They were more common in continental Europe, especially in public buildings, than in the United Kingdom. They are relatively slow elevators, typically travelling at about 0.3 metres per second, to facilitate passengers embarking and disembarking.
Safety
Paternoster elevators are intended only for transporting people. Accidents have occurred when they have been misused for transporting large items such as ladders or library trolleys. Their overall rate of accidents is estimated as 30 times higher than conventional elevators. A representative of the Union of Technical Inspection Associations stated that Germany saw an average of one death per year due to paternosters prior to 2002, at which point many of them were made inaccessible to the general public.
Because the accident risk is much greater than for conventional elevators, the construction of new paternosters is no longer allowed in many countries. In 2012, an 81-year-old man was killed when he fell into the shaft of a paternoster in the Dutch city of The Hague. Elderly people, disabled people and children are most vulnerable.
In September 1975, the paternoster in Newcastle University's Claremont Tower was temporarily taken out of service after a passenger was killed when a car left its guide rail at the top of its journey and forced the two cars ascending behind it into the winding room above. In October 1988 a second, non-fatal accident occurred in the same lift. A conventional lift replaced it in 1989–1990.
In West Germany, new paternoster installations were banned in 1974, and in 1994 there was an attempt to shut down all existing installations. However, there was a wave of popular resistance to the ban, and to a similar attempt in 2015, and as a result many are still in operation. At the most recent count, Germany had 231 paternosters.
In April 2006, Hitachi announced plans for a modern paternoster-style elevator with computer-controlled cars and standard elevator doors to alleviate safety concerns. A prototype was subsequently revealed. In 2009, Solon received special permission to build a brand new paternoster in its Berlin headquarters.
Surviving examples
Austria
In Vienna, the Vienna City Hall, the Ringturm (headquarters of the Vienna Insurance Group), an office building at Trattnerhof 2 near Stephansplatz and the Haus der Industrie on Schwarzenbergplatz have the last four running and frequently used paternosters in the city. The university also had one or more.
In Klagenfurt, the headquarters of the energy company Kelag still has one paternoster in daily use.
Belgium
A paternoster lift dating from 1958 survives in Avenue Fonsny 47, Brussels, a currently disused office building forming part of Midi/Zuid railway station.
At the Huis van de Vlaamse Volksvertegenwoordigers (House of Flemish Representatives), previously the Postcheque Building, at Leuvenseweg/Rue de Louvain 86, the paternoster is operational but not used.
Czech Republic
In Prague, New City Hall – an early 20th century paternoster renovated in 2017. The lift was temporarily closed in April 2023 due to misuse by tourists.
In Prague, Czech Technical University – Faculty of Electrotechnical Engineering at Technická 2, Dejvice
In Prague, Czech Technical University – Faculty of Mechanical Engineering at Technická 4, Dejvice
In Prague, Charles University – Faculty of Law
In Prague, Ministry of Transport (Czech Republic) head office
In Prague, Ministry of Agriculture (Czech Republic)
In Prague, Lucerna Palace (near the southeast entrance)
In Prague, Czech Radio building (oldest paternoster lift in the Czech Republic, not publicly accessible)
In Prague, YMCA building
In Plzeň, municipal office – Škroupova 1900/5
In Brno, Brno Technical University – Faculty of Mechanical Engineering at Technická 2896/2
In Brno, municipal office – Malinovského Square 624/3
In Most, Business centrum, tř. Budovatelů 2957
In the offices of Czech Post at Brno railway station, (returned to use in 2013, after being out of service for six years)
In Jablonec nad Nisou, city hall built in 1933
In Ostrava, New City Hall built in 1930
In Liberec, Liberec Regional Office building built in 1971, the highest paternoster in the country (56.8 m high, with 35 wooden cabins)
In Zlín, Baťa's Skyscraper or Building No. 21 built in 1938
Denmark
In the Christiansborg Palace where the Danish parliament resides
At Vognmagergade 8. Today the building is used by KVUC – Københavns VUC (Copenhagen's adult-education center)
In the corporate office building Axelborg, located in central Copenhagen
In Frederiksberg Town Hall
In the 11-story main administrative building at Danfoss headquarters on the island of Als
In the hospital in Vejle
In Sydvestjysk Hospital in Esbjerg
In Regionshospitalet Randers
Finland
The following locations have paternosters:
In Turku, Town hall in Yliopistonkatu 27
In Helsinki, in the office building at Hämeentie 19
In Helsinki, at Eduskunta, the parliament of Finland at Mannerheimintie 30, accessible to staff only
In Helsinki, in Stockmann, Helsinki centre at Aleksanterinkatu 52, accessible to staff only
Germany
In Kiel, the State Parliament building for the state of Schleswig-Holstein has had a working paternoster since 1950.
In Kiel, the city hall has had a paternoster in use for over 100 years.
In Berlin, the offices of the alt-left newspaper Neues Deutschland contain a working paternoster, while those of the conservative tabloid Bild contain a 19-storey paternoster that is still in use but not open to the public. The Rathaus Schöneberg, including scenes with its paternoster elevator, was used to film the TV series Babylon Berlin.
In Berlin, the building at Kleiststr. 23–26 that houses Argentina's embassy contains an 8-story paternoster.
In the in Berlin paternosters are in use.
In the German Academy of Sciences in Berlin another paternoster is in use.
In the Siemens building in Berlin at Nonnendammallee 101 a paternoster is in use.
Berlin's Flughafen Tempelhof through at least 1967 (when it shared an identity as Tempelhof Air Base) had at least one fully functional paternoster in the tower on the left end (as seen from the Luftbrückeplatz) of the quarter-circular pre-WW2 building.
Bremen has a paternoster in the Bremen Cotton Exchange, at Wachtstraße 17-24, just off the market square.
In Hamburg, the building at 25 Deichstraße, Speicherstadt, has an operating paternoster; the Bezirksamt at Grindelberg 62–66 in Eimsbüttel, the Hapag Lloyd building on Ballindamm, the building at Stadthausbrücke 8, and the Laeiszhof building at Trostbrücke 1 also have working paternosters.
In Cologne, the building at Hansaring 97 has a working and in-use paternoster.
In Frankfurt, the former IG Farben Building has running and frequently used paternosters as seen in the movies "Berlin Express" (1948) and "Night People" (1954).
In Frankfurt the hotel Fleming's has an operational paternoster.
In Jena, a paternoster is in use at the headquarters of Jenapharm.
In Kassel, a paternoster is still in use at the headquarters of Wintershall Dea
In Lippstadt, a paternoster is still in use at the headquarters of Hella/Forvia.
In Wiesbaden, a paternoster is still in use at the Federal Statistical Office of Germany.
In Wetzlar, a paternoster is in use at the headquarters of Leica Microsystems
In Stuttgart, a paternoster is still in use at city hall (Stuttgart Rathaus).
In Leipzig, a paternoster is still in use at city hall (Leipzig Neues Rathaus)
Hungary
In Jahn Ferenc hospital in Budapest.
In Miskolc, the University of Miskolc, has a working and in-use paternoster.
In the central office of National Tax and Duty Administration Budapest.
In the MVM building in Budapest.
In the headquarters of BKV Budapesti Közlekedési Zrt. in Budapest. (operating in 2020)
In the Ministry of Education in Budapest (operating and in daily use in March 2022).
In Kiskun County Hospital, Kecskemét
ELMŰ-székház (HQ) (Váci út – Dráva u. sarok, Budapest)
Pesti Központi Kerületi Bíróság (Pest District Court)(Budapest, Markó utca 25.)
Tőzsdepalota (volt MTV-székház / HQ) (Budapest, Szabadság tér 17.)
Italy
In Fiat's Head Office Building, Mirafiori, Turin (Torino) [as of 1985].
Netherlands
In the Netherlands, seven paternoster lifts could be found in 2012, some of which were still operational:
In the former Ziggo building at Spaarneplein 2, The Hague: no longer in use. (Stork Hijsch 1922, conversion 1976 Starlift, damage repair 1999 Schindler.) On 13 April 2012, a fatal accident occurred when an 81-year-old man was trapped between the lift and the wall.
At the Dudokhuis, Tata Steel Europe in IJmuiden: shut down in 1999. (Eggers Kehrhan, 1957):
In the HaKa building (the old head office of the Coöperatieve Groothandelsvereniging 'de Handelskamer' ) on the Vierhavenstraat in Rotterdam. This 1936 Hensen-Schindler lift has been operational again since the end of 2011, although the building is empty. For safety reasons, the lift can only be visited with the building manager. The lift can be put into operation for interested parties on request.
In the former tax office on Puntegaalstraat in Rotterdam; it is put into operation during Heritage Days, but may not be used. To enforce this, gates have been built across the entrances. (Backer and Rueb Breda, 1948, conversion December 1975 by De Reus BV.)
In the former post office on the Coolsingel 42 in Rotterdam: disused.
Two examples in the Scheepvaarthuis (now Grand Hotel Amrâth Amsterdam) in Amsterdam: working, can be used on request. (Roux Combaluzier, 1928.)
In the old school building on the Mauritskade in Amsterdam: whether the elevator is still working is unknown.
Norway
In Oslo, Landbrukets hus, on Schweigaards Gate. The building was built in 1965 as the headquarters for Norges Bondelag, which vacated it in 2016.
Poland
Building of Silesian Parliament in Katowice.
In Wrocław, Poland, Santander Bank building, Main Square. Available for employees only.
In Opole, Poland, Urząd Wojewódzki building, Ostrówek.
Russia
In the building of the Ministry of Agriculture in Moscow
Serbia
In Belgrade in the headquarters building of Serbian Railways there is one operating paternoster lift and another one which is not in service.
Slovakia
In Bratislava there are at least 5 operating paternosters: Ministry of Transport and Construction, Ministry of Interior, Ministry of Finance, Ministry of Agriculture and Rural Development and the headquarters of Railways of the Slovak Republic.
In Košice, the Technical University of Košice has operated a paternoster in the main building, known as L9, since 1972. There is another paternoster in an administrative building of U.S. Steel Košice, a steel manufacturing company in Košice.
Sri Lanka
Ceylon Electricity Board Headquarters building in Colombo
Sweden
In Sweden there is at least one functional Paternoster lift at HSB-huset, Kungsholmen, Stockholm
Mäster Samuelsgatan 56, in central Stockholm, houses a multi-floor Paternoster lift.
Ukraine
One functional paternoster in the building of Zakarpattia Oblast Administration in Uzhhorod.
United Kingdom
Current
The Arts Tower at the University of Sheffield has a paternoster, which is said to be the largest in Europe. It has 38 two-person cars and serves 22 storeys. A journey between two floors takes 13 seconds.
The Albert Sloman Library at the University of Essex on the Colchester campus has a working paternoster which began operating in 1967. The lift was temporarily out of service for refurbishment between December 2019 and June 2021.
Northwick Park Hospital in Harrow, North West London (part of the London North West University Healthcare NHS Trust) has the last working paternoster in London. It had been out of commission for many years until July 2020, when it was reopened for staff use.
Former
Aston University in Birmingham operated paternoster lifts in the main building. These are no longer in use, but one remains and is visible on the 4th floor of the south wing. The lift cars are covered with a perspex wall, and some visual displays explain the story and operation of the lift.
On 8 December 2017 it was announced that the paternoster in the Attenborough tower at the University of Leicester which was constructed in 1968–70 would be taken out of service as maintenance had become too expensive. This was undertaken shortly afterwards.
At the University of Birmingham, both the main library and the Muirhead Tower had paternosters. The library was demolished in 2017, and replaced with a new library. The paternoster in the Muirhead Tower was closed for many years before a major refurbishment added two new lifts.
Birmingham Polytechnic (now Birmingham City University) had a paternoster in the 1970s in the Baker building on its City North Campus at Perry Barr. The building closed in 2018.
Birmingham College of Food, Tourism & Creative Studies, Summer Row, Birmingham. (now University College Birmingham)
Birmingham Dental School. The building was demolished during 2020–21
London School of Economics. The Clare Market Building had a paternoster until 1991
There was a paternoster in the Co-op's six-storey Fairfax House department store, in Bristol's Broadmead shopping centre. The store opened in March 1962 and was demolished in 1988.
Leeds University in the Roger Stevens building.
Newcastle University's Claremont Tower paternoster had a fatal accident in September 1975 after a car left its guide rail at the top of its journey and forced the two cars ascending behind it into the winding room above. Another accident in 1988 led to its subsequent closure and removal.
University of Glasgow. The Pontecorvo Building which housed The Institute of Genetics had a paternoster lift.
Oxford University Department of Engineering Science. The Thom Building had a paternoster lift through into the 1980s, now replaced by a pair of conventional lifts.
University of Salford Chemistry Tower had a paternoster lift. The building has been demolished.
Risley, Cheshire – Former United Kingdom Atomic Energy Authority (UKAEA) site, now Birchwood Park Business Park. The original management block 'A Block', and the later engineering building 'E Block' had paternoster lifts. Those in the former E Block (Chadwick House) survived into the 21st century (sealed off), and still exist in place. The adjacent ‘Y Block’ also had two sets, these are also sealed off.
UKAEA Winfrith Heath Dorset 4 floor Administration Building
BNFL Sellafield had a paternoster in its administration building B403. Demolished in 2002.
Viscount House, a British Airways office building at Hatton Cross. Now demolished.
Unipart House, Oxford had two of them. They were at each end of the building but were taken out due to the cost of maintenance. Bob Geldof and The Boomtown Rats filmed their video of Love or Something in them.
Schofields Department Store, Leeds had one in their Lands Lane building giving staff-only access to the staff restaurant; it was reportedly operational in the late 1970s.
See also
Belt manlift
Escalator
List of elevator manufacturers
Shabbat elevator
Revolving door
References
External links
A look at the last remaining paternoster lifts | Associated Press, 2017 (YouTube)
Information and photos regarding the GEC Marconi paternoster featured in "The Prisoner" TV series
Elevators
Vertical transport devices
English inventions
1884 introductions
Articles containing video clips | Paternoster lift | [
"Technology",
"Engineering"
] | 4,051 | [
"Building engineering",
"Vertical transport devices",
"Transport systems",
"Elevators"
] |
565,530 | https://en.wikipedia.org/wiki/Mesopelagic%20zone | The mesopelagic zone (Greek μέσον, middle), also known as the middle pelagic or twilight zone, is the part of the pelagic zone that lies between the photic epipelagic and the aphotic bathypelagic zones. It is defined by light, and begins at the depth where only 1% of incident light reaches and ends where there is no light; the depths of this zone are between approximately 200 to 1,000 meters (~656 to 3,280 feet) below the ocean surface.
The mesopelagic zone occupies about 60% of the planet's surface and about 20% of the ocean's volume, amounting to a large part of the total biosphere. It hosts a diverse biological community that includes bristlemouths, blobfish, bioluminescent jellyfish, giant squid, and a myriad of other unique organisms adapted to live in a low-light environment. It has long captivated the imagination of scientists, artists and writers; deep sea creatures are prominent in popular culture.
Physical conditions
The mesopelagic zone includes the region of sharp changes in temperature, salinity and density called the thermocline, halocline, and pycnocline respectively. The temperature variations are large; from over 20 °C (68 °F) at the upper layers to around 4 °C (39 °F) at the boundary with the bathyal zone. The variation in salinity is smaller, typically between 34.5 and 35 psu. The density ranges from 1023 to 1027 g/L of seawater. These changes in temperature, salinity, and density induce stratification, which creates ocean layers. These different water masses affect gradients and mixing of nutrients and dissolved gases, making this a dynamic zone.
The mesopelagic zone has some unique acoustic features. The Sound Fixing and Ranging (SOFAR) channel, where sound travels the slowest due to salinity and temperature variations, is located at the base of the mesopelagic zone at about 600–1,200m. It is a wave-guided zone where sound waves refract within the layer and propagate long distances. The channel got its name during World War II when the US Navy proposed using it as a life saving tool. Shipwreck survivors could drop a small explosive timed to explode in the SOFAR channel and then listening stations could determine the position of the life raft. During the 1950s, the US Navy tried to use this zone to detect Soviet submarines by creating an array of hydrophones called the Sound Surveillance System (SOSUS.) Oceanographers later used this underwater surveillance system to figure out the speed and direction of deep ocean currents by dropping SOFAR floats that could be detected with the SOSUS array.
The mesopelagic zone is important for water mass formation, such as mode water. Mode water is a water mass that is typically defined by its vertically mixed properties. It often forms as deep mixed layers at the depth of the thermocline. The mode water in the mesopelagic has residency times on decadal or century scales. These long overturning times contrast with the daily and shorter timescales on which a variety of animals move vertically through the zone and various debris sinks through it.
Biogeochemistry
Carbon
The mesopelagic zone plays a key role in the ocean's biological pump, which contributes to the oceanic carbon cycle. In the biological pump, organic carbon is produced in the surface euphotic zone where light promotes photosynthesis. A fraction of this production is exported out of the surface mixed layer and into the mesopelagic zone. One pathway for carbon export from the euphotic layer is through sinking of particles, which can be accelerated through repackaging of organic matter in zooplankton fecal pellets, ballasted particles, and aggregates.
In the mesopelagic zone, the biological pump is key to carbon cycling, as this zone is largely dominated by remineralization of particulate organic carbon (POC). When a fraction of POC is exported from the euphotic zone, an estimated 90% of that POC is respired in the mesopelagic zone. This is due to the microbial organisms that respire organic matter and remineralize the nutrients, while mesopelagic fish also package organic matter into quick-sinking parcels for deeper export.
Another key process occurring in this zone is the diel vertical migration of certain species, which move between the euphotic zone and mesopelagic zone and actively transport particulate organic matter to the deep. In one study in the Equatorial Pacific, myctophids in the mesopelagic zone were estimated to actively transport 15–28% of the passive POC sinking to the deep, while a study near the Canary Islands estimated 53% of vertical carbon flux was due to active transport from a combination of zooplankton and micronekton. When primary productivity is high, the contribution of active transport by vertical migration has been estimated to be comparable to sinking particle export.
Particle packaging and sinking
Mean particle sinking rates are 10 to 100 m/day. Sinking rates have been measured in the project VERTIGO (Vertical Transport in the Global Ocean) using settling velocity sediment traps. The variability in sinking rates is due to differences in ballast, water temperature, food web structure and the types of phytoplankton and zooplankton in different areas of the ocean. If the material sinks faster, then it gets respired less by bacteria, transporting more carbon from the surface layer to the deep ocean. Larger fecal pellets sink faster due to a lower surface-area-to-mass ratio (less frictional drag relative to their mass). More viscous waters could slow the sinking rate of particles.
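The trade-off between sinking speed and microbial respiration described above can be illustrated with a back-of-the-envelope calculation. The sketch below assumes, purely for illustration, simple first-order remineralization with a decay constant of 0.1 per day; that value and the layer thickness are assumptions, not figures from the text.

```python
# Back-of-the-envelope: fraction of particulate carbon surviving transit through
# the mesopelagic (~200-1000 m) as a function of sinking speed, assuming simple
# first-order remineralization. The decay constant is an illustrative assumption.
import math

LAYER_THICKNESS_M = 800.0      # ~200 m to ~1000 m depth
DECAY_PER_DAY = 0.1            # assumed first-order remineralization rate

for sinking_speed in (10.0, 50.0, 100.0):          # m/day, the range quoted above
    transit_days = LAYER_THICKNESS_M / sinking_speed
    surviving_fraction = math.exp(-DECAY_PER_DAY * transit_days)
    print(f"{sinking_speed:5.0f} m/day -> {transit_days:5.1f} days in transit, "
          f"{surviving_fraction:.0%} of carbon survives")
```

Under these assumed numbers, slowly sinking particles are almost entirely respired within the layer, while fast-sinking particles deliver a substantial fraction of their carbon to deeper waters, consistent with the qualitative argument above.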
Oxygen
Dissolved oxygen is a requirement for aerobic respiration, and while the surface ocean is usually oxygen-rich due to atmospheric gas exchange and photosynthesis, the mesopelagic zone is not in direct contact with the atmosphere, due to stratification at the base of the surface mixed layer. Organic matter is exported to the mesopelagic zone from the overlying euphotic layer, while the minimal light in the mesopelagic zone limits photosynthesis. The oxygen consumption due to respiration of most of the sinking organic matter, combined with the lack of gas exchange, often creates an oxygen minimum zone (OMZ) in the mesopelagic. The mesopelagic OMZ is particularly severe in the eastern tropical Pacific Ocean and tropical Indian Ocean due to poor ventilation and high rates of organic carbon export to the mesopelagic. Oxygen concentrations in the mesopelagic occasionally reach suboxic levels, making aerobic respiration difficult for organisms. In these anoxic regions, chemosynthesis may occur in which CO2 and reduced compounds such as sulfide or ammonia are taken up to form organic carbon, contributing to the organic carbon reservoir in the mesopelagic. This pathway of carbon fixation has been estimated to be comparable in rate to the contribution by heterotrophic production in this ocean realm.
Nitrogen
The mesopelagic zone, an area of significant respiration and remineralization of organic particles, is generally nutrient-rich. This is in contrast to the overlying euphotic zone, which is often nutrient-limited. Areas of low oxygen such as OMZs are a key area of denitrification by prokaryotes, a heterotrophic pathway in which nitrate is converted into nitrogen gas, resulting in a loss to the ocean reservoir of reactive nitrogen. At the suboxic interface that occurs at the edge of the OMZ, nitrite and ammonium can be coupled to produce nitrogen gas through anammox, also removing nitrogen from the biologically available pool.
Biology
Although some light penetrates the mesopelagic zone, it is insufficient for photosynthesis. The biological community of the mesopelagic zone has adapted to a low-light environment. This is a very efficient ecosystem with many organisms recycling the organic matter sinking from the epipelagic zone resulting in very little organic carbon making it to deeper ocean waters. The general types of life forms found are daytime-visiting herbivores, detritivores feeding on dead organisms and fecal pellets, and carnivores feeding on those detritivores.
Many organisms in the mesopelagic zone move up into the epipelagic zone at night, and retreat to the mesopelagic zone during the day, which is known as diel vertical migration. These migrators can therefore avoid visual predators during the day and feed at night, while some of their predators also migrate up at night to follow the prey. There is so much biomass in this migration that sonar operators in World War II would regularly misinterpret the signal returned by this thick layer of plankton as a false sea floor.
Estimates of the global biomass of mesopelagic fishes range from 1 gigatonne (Gt) based on net tows to 7–10 Gt based on measurements using active acoustics.
Virus and microbial ecology
Very little is known about the microbial community of the mesopelagic zone because it is a difficult part of the ocean to study. Recent work using DNA from seawater samples emphasized the role of viruses and microbes in recycling organic matter from the surface ocean, known as the microbial loop. These microbes can get their energy from different metabolic pathways: some are autotrophs, some are heterotrophs, and a 2006 study even discovered chemoautotrophs. This chemoautotrophic crenarchaeon (a Candidatus archaeal species) can oxidize ammonium as its energy source without oxygen, which could significantly impact the nitrogen and carbon cycles. One study estimates these ammonium-oxidizing microbes, which are only 5% of the microbial population, can annually capture 1.1 Gt of organic carbon.
Microbial biomass and diversity typically decline exponentially with depth in the mesopelagic zone, tracking the general decline of food from above. The community composition varies with depth in the mesopelagic as different organisms have evolved for varying light conditions. Microbial biomass in the mesopelagic is greater at higher latitudes and decreases towards the tropics, which is likely linked to the differing productivity levels in the surface waters. Viruses, however, are very abundant in the mesopelagic, with around 10^10–10^12 per cubic metre, which is fairly uniform throughout the mesopelagic zone.
Zooplankton ecology
The mesopelagic zone hosts a diverse zooplankton community. Common zooplankton include copepods, krill, jellyfish, siphonophores, larvaceans, cephalopods, and pteropods. Food is generally scarce in the mesopelagic, so predators have to be efficient in capturing food. Gelatinous organisms are thought to play an important role in the ecology of the mesopelagic and are common predators. Though previously thought to be passive predators just drifting through the water column, jellyfish could be more active predators. One study found that the helmet jellyfish Periphylla periphylla exhibits social behavior and can find others of its kind at depth and form groups. Such behavior was previously attributed to mating, but scientists speculate this could be a feeding strategy to allow a group of jellyfish to hunt together. Mesopelagic zooplankton have unique adaptations for the low light. Bioluminescence is a very common strategy in many zooplankton. This light production is thought to function as a form of communication between conspecifics, prey attraction, prey deterrence, and/or reproduction strategy. Another common adaptation is enhanced light organs, or eyes, common in krill and shrimp, which let them take advantage of the limited light. Some octopuses and krill even have tubular eyes that look upwards in the water column.
Most life processes, like growth rates and reproductive rates, are slower in the mesopelagic. Metabolic activity has been shown to decrease with increasing depth and decreasing temperature in colder-water environments. For example, the mesopelagic shrimp-like mysid, Gnathophausia ingens, lives for 6.4 to 8 years, while similar benthic shrimp only live for 2 years.
Fish ecology
The mesopelagic is home to a significant portion of the world's total fish biomass. Mesopelagic fish are found globally, with exceptions in the Arctic Ocean. A 1980 study puts the mesopelagic fish biomass at about one billion tons. Then a 2008 study estimated the world marine fish biomass at between 0.8 and 2 billion tons. A more recent study concluded mesopelagic fish could have a biomass amounting to 10 billion tons, equivalent to about 100 times the annual catch of traditional fisheries of about 100 million metric tons. However, there is a lot of uncertainty in this biomass estimate. This ocean realm could contain the largest fishery in the world and there is active development for this zone to become a commercial fishery.
There are currently thirty families of known mesopelagic fish. One dominant fish in the mesopelagic zone are lanternfish (Myctophidae), which include 245 species distributed among 33 different genera. They have prominent photophores along their ventral side. The Gonostomatidae, or bristlemouth, are also common mesopelagic fish. The bristlemouth could be the Earth's most abundant vertebrate, with numbers in the hundreds of trillions to quadrillions.
Mesopelagic fish are difficult to study due to their unique anatomy. Many of these fish have swim bladders to help them control their buoyancy, which makes them hard to sample because those gas-filled chambers typically burst as the fish come up in nets and the fish die. Scientists in California have made progress on mesopelagic fish sampling by developing a submersible chamber that can keep fish alive on the way up to the surface under a controlled atmosphere and pressure. A non-invasive method to estimate mesopelagic fish abundance is echosounding, locating the 'deep scattering layer' through the backscatter received by acoustic sounders. A 2015 study suggested that some areas have had a decline in abundance of mesopelagic fish, including off the coast of Southern California, using a long-term study dating back to the 1970s. Cold water species were especially vulnerable to decline.
Mesopelagic fish are adapted to a low-light environment. Many fish are black or red, because these colors appear dark due to the limited light penetration at depth. Some fish have rows of photophores, small light-producing organs, on their underside to mimic the surrounding environment. Other fish have mirrored bodies which are angled to reflect the surrounding ocean low-light colors and protect the fish from being seen, while another adaptation is countershading where fish have light colors on the ventral side and dark colors on the dorsal side.
Food is often limited and patchy in the mesopelagic, leading to dietary adaptations. Common adaptations fish may have include sensitive eyes and huge jaws for enhanced and opportunistic feeding. Fish are also generally small to reduce the energy requirement for growth and muscle formation. Other feeding adaptations include jaws that can unhinge, elastic throats, and massive, long teeth. Some predators develop bioluminescent lures, like the tasselled anglerfish, which can attract prey, while others respond to pressure or chemical cues instead of relying on vision.
Human impacts
Pollution
Marine debris
Marine debris, specifically in the plastic form, have been found in every ocean basin and have a wide range of impacts on the marine world.
One of the most critical issues is ingestion of plastic debris, specifically microplastics. Many mesopelagic fish species migrate to the surface waters to feast on their main prey species, zooplankton and phytoplankton, which are mixed with microplastics in the surface waters. Additionally, research has shown that even zooplankton are consuming the microplastics themselves. Mesopelagic fish play a key role in energy dynamics, meaning they provide food to a number of predators including birds, larger fish and marine mammals. The concentration of these plastics has the potential to increase, so more economically important species could become contaminated as well. Concentration of plastic debris in mesopelagic populations can vary depending on geographic location and the concentration of marine debris located there. In 2018, approximately 73% of approximately 200 fish sampled in the North Atlantic had consumed plastic.
Bioaccumulation
Bioaccumulation (a buildup of a certain substance in the adipose tissue) and biomagnification (the process in which the concentration of the substance grows higher at each successive level of the food chain) are growing issues in the mesopelagic zone. Mercury in fish can be traced back to a combination of anthropogenic factors (such as coal mining) in addition to natural factors. Mercury is a particularly important bioaccumulation contaminant because its concentration in the mesopelagic zone is increasing faster than in surface waters. Inorganic mercury occurs in anthropogenic atmospheric emissions in its gaseous elemental form, which then oxidizes and can be deposited in the ocean. Once there, the oxidized form can be converted to methylmercury, which is its organic form. Research suggests that current levels of anthropogenic emissions will not equilibrate between the atmosphere and ocean for a period of decades to centuries, which means we can expect current mercury concentrations in the ocean to keep rising. Mercury is a potent neurotoxin, and poses health risks to the whole food web, beyond the mesopelagic species that consume it. Many of the mesopelagic species, such as myctophids, that make their diel vertical migration to the surface waters, can transfer the neurotoxin when they are consumed by pelagic fish, birds and mammals.
Fishing
Historically, there have been few examples of efforts to commercialize the mesopelagic zone due to low economic value, technical feasibility and environmental impacts. While the biomass may be abundant, fish species at depth are generally smaller in size and slower to reproduce. Fishing with large trawl nets poses the risk of a high percentage of bycatch as well as potential impacts on carbon cycling processes. Additionally, reaching productive mesopelagic regions requires fairly long journeys offshore for ships. In 1977, a Soviet fishery opened but closed less than 20 years later due to low commercial profits, while a South African purse seine fishery closed in the mid-1980s due to processing difficulties from the high oil content of fish.
As the biomass in the mesopelagic is so abundant, there has been an increased interest to determine whether these populations could be of economic use in sectors other than direct human consumption. For example, it has been suggested that the high abundance of fish in this zone could potentially satisfy a demand for fishmeal and nutraceuticals. With a growing global population, the demand for fishmeal in support of a growing aquaculture industry is high. There is potential for an economically viable harvest. For example, 5 billion tons of mesopelagic biomass could result in the production of circa 1.25 billion tons of food for human consumption. Additionally, the demand for nutraceuticals is also rapidly growing, stemming from the popular human consumption of Omega-3 Fatty Acids in addition to the aquaculture industry that requires a specific marine oil for feed material. Lanternfish are of much interest to the aquaculture market, as they are especially high in fatty acids.
Climate change
The mesopelagic region plays an important role in the global carbon cycle, as it is the area where most of the surface organic matter is respired. Mesopelagic species also acquire carbon during their diel vertical migration to feed in surface waters, and they transport that carbon to the deep sea when they die. It is estimated that the mesopelagic cycles between 5 and 12 billion tons of carbon dioxide from the atmosphere per year, and until recently, this estimate was not included in many climate models. It is difficult to quantify the effects of climate change on the mesopelagic zone as a whole, as climate change does not have uniform impacts geographically. Research suggests that in warming waters, as long as there are adequate nutrients and food for fish, then mesopelagic biomass could actually increase due to higher trophic efficiency and increased temperature-driven metabolism. However, because ocean warming will not be uniform throughout the global mesopelagic zone, it is predicted that some areas may actually decrease in fish biomass, while others increase.
Water column stratification will also likely increase with ocean warming and climate change. Increased ocean stratification reduces the introduction of nutrients from the deep ocean into the euphotic zone resulting in decreases in both net primary production and sinking particulate matter. Additional research suggests shifts in the geographical range of many species could also occur with warming, with many of them shifting poleward. The combination of these factors could potentially mean that as global ocean basins continue to warm, there could be areas in the mesopelagic that increase in biodiversity and species richness, while declines in other areas, especially moving farther from the equator.
Research and exploration
There is a dearth of knowledge about the mesopelagic zone so researchers have begun to develop new technology to explore and sample this area. The Woods Hole Oceanographic Institution (WHOI), NASA, and the Norwegian Institute of Marine Research are all working on projects to gain a better understanding of this zone in the ocean and its influence on the global carbon cycle. Traditional sampling methods like nets have proved to be inadequate because they scare off creatures due to the pressure wave formed by the towed net and the light produced by the bioluminescent species caught in the net.
Mesopelagic activity was first investigated by use of sonar because the return bounces off of plankton and fish in the water. However, there are many challenges with acoustic survey methods and previous research has estimated errors in measured amounts of biomass of up to three orders of magnitude. This is due to inaccurate incorporation of depth, species size distribution, and acoustic properties of the species. Norway's Institute of Marine Research has launched a research vessel named Dr. Fridtjof Nansen to investigate mesopelagic activity using sonar with their focus being on the sustainability of fishing operations. To overcome the challenges faced with acoustic sampling, WHOI is developing remote operated vehicles (ROVs) and robots (Deep-See, Mesobot, and Snowclops) that are capable of studying this zone more precisely in a dedicated effort called the Ocean Twilight Zone project that launched in August 2018.
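Part of why acoustic biomass estimates vary so widely is the sensitivity of the conversion from backscatter to fish density. A commonly used simplified relation divides the measured volume backscattering coefficient by an assumed backscattering cross-section per fish (set by its target strength); the sketch below, with made-up example numbers, shows how a modest error in assumed target strength shifts the estimate by an order of magnitude.

```python
# Simplified conversion from volume backscattering strength Sv (dB re 1 m^-1)
# to numerical fish density, assuming every target has the same target strength TS.
# Numbers are made-up examples; real surveys must account for species mix, depth,
# swimbladder state, and beam geometry.
def fish_per_m3(sv_db, ts_db):
    sv = 10 ** (sv_db / 10.0)        # volume backscattering coefficient, m^-1
    sigma_bs = 10 ** (ts_db / 10.0)  # backscattering cross-section per fish, m^2
    return sv / sigma_bs

sv_measured = -70.0                   # example measured Sv in the scattering layer
for ts in (-50.0, -55.0, -60.0):      # assumed target strengths, dB
    print(f"TS = {ts} dB -> {fish_per_m3(sv_measured, ts):.4f} fish per m^3")
```

A 10 dB difference in the assumed target strength changes the inferred density tenfold, which helps explain the order-of-magnitude spread in published mesopelagic biomass estimates.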
Discovery and detection
The deep scattering layer often characterizes the mesopelagic due to the high amount of biomass that exists in the region. Acoustic sound sent into the ocean bounces off particles and organisms in the water column and returns a strong signal. The region was initially discovered by American researchers in 1942, during World War II anti-submarine research with sonar. Sonar at the time could not penetrate below this depth due to the large number of creatures obstructing sound waves. It is uncommon to detect deep scattering layers below 1000m. Until recently, sonar has been the predominant method for studying the mesopelagic.
The Malaspina Circumnavigation Expedition was a Spanish-led scientific quest in 2011 to gain a better understanding of the state of the ocean and the diversity in the deep oceans. The data collected, particularly through sonar observations showed that the biomass estimation in the mesopelagic was lower than previously thought.
Deep-See
WHOI is currently working on a project to characterize and document the pelagic ecosystem. They have developed a device named Deep-See weighing approximately 700 kg, which is designed to be towed behind a research vessel. The Deep-See is capable of reaching depths up to 2000 m and can estimate the amount of biomass and biodiversity in this mesopelagic ecosystem. Deep-See is equipped with cameras, sonars, sensors, water sample collection devices, and a real-time data transmission system.
Mesobot
WHOI is collaborating with the Monterey Bay Aquarium Research Institute (MBARI), Stanford University, and the University of Texas Rio Grande Valley to develop a small autonomous robot, Mesobot, weighing approximately 75 kg. Mesobot is equipped with high-definition cameras to track and record mesopelagic species on their daily migration over extended periods of time. The robot's thrusters were designed so that they do not disturb the life in the mesopelagic that it is observing. Traditional sample collection devices fail to preserve organisms captured in the mesopelagic due to the large pressure change associated with surfacing. The Mesobot also has a unique sampling mechanism that is capable of keeping the organisms alive during their ascent. The first sea trial of this device is expected to be in 2019.
MINIONS
Another mesopelagic robot developed by WHOI is the MINIONS. This device descends through the water column and takes images of the amount and size distribution of marine snow at various depths. These tiny particles are a food source for other organisms so it is important to monitor the different levels of marine snow to characterize the carbon cycling processes between the surface ocean and the mesopelagic.
SPLAT cam
The Harbor Branch Oceanographic Institute has developed the Spatial PLankton Analysis Technique (SPLAT) to identify and map distribution patterns of bioluminescent plankton. Each bioluminescent species produces a unique flash, which allows SPLAT to distinguish each species' flash characteristics and then map their 3-dimensional distribution patterns. Its intended use was not for investigating the mesopelagic zone, although it is capable of tracking movement patterns of bioluminescent species during their vertical migrations. It would be interesting to apply this mapping technique in the mesopelagic to obtain more information about the diurnal vertical migrations that occur in this zone of the ocean.
See also
Ocean Twilight Zone project at Woods Hole Oceanographic Institution
Ocean Twilight Zone creature features
Value of the ocean twilight zone to humans
Climate and the ocean twilight zone
Mesopelagic fish
References
External links
Aquatic biomes
Oceanography | Mesopelagic zone | [
"Physics",
"Environmental_science"
] | 5,422 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
565,536 | https://en.wikipedia.org/wiki/Bathypelagic%20zone | The bathypelagic zone or bathyal zone (from Greek βαθύς (bathýs), deep) is the part of the open ocean that extends from a depth of approximately 1,000 to 4,000 m (3,300 to 13,000 ft) below the ocean surface. It lies between the mesopelagic above and the abyssopelagic below. The bathypelagic is also known as the midnight zone because of the lack of sunlight; this feature does not allow for photosynthesis-driven primary production, preventing growth of phytoplankton or aquatic plants. Although larger by volume than the photic zone, human knowledge of the bathypelagic zone remains limited by ability to explore the deep ocean.
Physical characteristics
The bathypelagic zone is characterized by a nearly constant temperature of approximately 4 °C (39 °F) and a salinity range of 33-35 g/kg. This region has little to no light because sunlight does not reach this deep in the ocean and bioluminescence is limited. The hydrostatic pressure in this zone ranges from 100-400 atmospheres (atm) due to the increase of 1 atm for every 10 m depth. It is believed that these conditions have been consistent for the past 8000 years.
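The quoted pressure range follows directly from the approximate rule of one additional atmosphere per 10 m of depth; a trivial sketch of that conversion (hydrostatic approximation only) is given below.

```python
# Hydrostatic rule of thumb: pressure increases by ~1 atm per 10 m of depth.
def approx_pressure_atm(depth_m):
    return depth_m / 10.0  # the ~1 atm of surface air pressure is neglected here

for depth in (1000, 2000, 4000):   # the depth range usually given for the bathypelagic zone
    print(f"{depth} m -> ~{approx_pressure_atm(depth):.0f} atm")
```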
This ocean depth spans from the edge of the continental shelf down to the top of the abyssal zone, and along continental slope depths. The bathymetry of the bathypelagic zone consists of limited areas where the seafloor is in this depth range along the deepest parts of the continental margins, as well as seamounts and mid-ocean ridges. The continental slopes are mostly made up of accumulated sediment, while seamounts and mid-ocean ridges contain large areas of hard substrate that provide habitats for bathypelagic fishes and benthic invertebrates. Although currents at these depths are very slow, the topography of seamounts interrupts the currents and creates eddies that retain plankton in the seamount region, thus increasing fauna nearby as well.
Hydrothermal vents are also a common feature in some areas of the bathypelagic zone and are primarily formed from the spreading of Earth's tectonic plates at mid-ocean ridges. As the bathypelagic region lacks light, these vents play an important role in global ocean chemical processes, thus supporting unique ecosystems that have adapted to utilize chemicals as energy, via chemoautotrophy, instead of sunlight, to sustain themselves. In addition, hydrothermal vents facilitate precipitation of minerals on the seafloor, making them regions of interest for deep-sea mining.
Biogeochemistry
Many of the biogeochemical processes in the bathypelagic region are dependent upon the input of organic matter from the overlying epipelagic and mesopelagic zones. This organic material, sometimes called marine snow, sinks in the water column or is transported within downward convected water masses such as the Thermohaline Circulation. Hydrothermal vents also deliver heat and chemicals such as sulfide and methane. These chemicals can be utilized to sustain metabolism by organisms in the region. Our understanding of these biogeochemical processes has historically been limited due to the difficulty and cost of collecting samples from these ocean depths. Other technological challenges, such as measuring microbial activity under the pressure conditions experienced in the bathypelagic zone, have also restricted our knowledge of the region. Although scientific advancements have increased our understanding over the past several decades, many aspects remain a mystery. One of the major areas of current research is focused on understanding carbon remineralization rates in the region. Prior studies have struggled to quantify the rates at which prokaryotes in this region remineralize carbon because previously developed techniques may not be adequate for this region, and indicate remineralization rates much higher than expected. Further work is needed to explore this question, and may require revisions to our understanding of the global carbon cycle.
Particulate organic matter
Organic material from primary production in the epipelagic zone, and to a far lesser extent, organic inputs from terrestrial sources, make up a majority of the Particulate Organic Matter (POM) in the ocean. POM is delivered to the bathypelagic zone via sinking copepod fecal pellets and dead organisms; these parcels of organic matter fall through the water column and deliver organic carbon, nitrogen, and phosphorus, to organisms that live below the photic zone. These parcels are sometimes referred to as marine snow or ocean dandruff. This is also the dominant delivery mechanism of food to organisms in the bathypelagic zone because there is no sunlight for photosynthesis, with chemoautotrophy playing a more minor role as far as we know.
As POM sinks through the water column, it is consumed by organisms which deplete it of nutrients. The size and density of these particles affect their likelihood of reaching organisms in the bathypelagic zone. Smaller parcels of POM often become aggregated together as they fall, which quickens their descent and prohibits their consumption by other organisms, increasing their likelihood of reaching lower depths. In some regions, the density of these particles is increased by minerals associated with some forms of phytoplankton, such as biogenic silica and calcium carbonate, which act as "ballast" and result in more rapid transport to deeper depths.
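The effect of particle size and excess density on sinking speed can be illustrated with Stokes' law for small spherical particles. This is a simplification (real aggregates are porous and non-spherical, and Stokes' law is only approximate for larger, faster particles), and the numbers below are illustrative assumptions, not measurements.

```python
# Stokes' law sinking speed w = (2/9) * g * r^2 * (rho_particle - rho_seawater) / mu
# for small spherical particles; all values are illustrative assumptions only.
G = 9.81             # m s^-2
MU = 1.4e-3          # Pa s, approximate dynamic viscosity of seawater
RHO_SEAWATER = 1027  # kg m^-3

def stokes_speed_m_per_day(radius_m, rho_particle):
    w = (2.0 / 9.0) * G * radius_m**2 * (rho_particle - RHO_SEAWATER) / MU
    return w * 86400.0  # convert m/s to m/day

# a small, slightly dense aggregate vs. a larger, mineral-ballasted particle
print(stokes_speed_m_per_day(50e-6, 1060))   # ~10 m/day
print(stokes_speed_m_per_day(250e-6, 1100))  # ~600 m/day (Stokes' law only approximate here)
```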
Carbon
A majority of organic carbon is produced in the epipelagic zone, with a small portion transported deeper into the ocean interior. This process, known as the biological pump, plays a large role in the sequestration of carbon from the atmosphere into the ocean. Organic carbon is primarily exported to the bathypelagic zone in the form of particulate organic carbon (POC) and dissolved organic carbon (DOC).
POC is the largest component of organic carbon delivered to the bathypelagic zone; it primarily takes the form of fecal pellets and dead organisms that sink out of the surface waters and fall toward the ocean floor. Regions with higher primary productivity where particles are able to sink quickly, such as equatorial upwelling zones and the Arabian Sea, have the greatest amount of POC delivery to the bathypelagic zone.
The vertical mixing of DOC-rich surface waters is also a process that delivers carbon to the bathypelagic zone, however, it constitutes a substantially smaller portion of overall transport than POC delivery. DOC transport occurs most readily in regions with high rates of ventilation or ocean turnover, such as the interior of gyres or deep water formation sites along the thermohaline circulation.
Calcium carbonate dissolution
The region in the water column at which calcite dissolution begins to occur rapidly, known as the lysocline, is typically located near the base of the bathypelagic zone at approximately 3,500 m depth, but varies among ocean basins. The lysocline lies below the saturation depth (the transition to undersaturated conditions with respect to calcium carbonate) and above the carbonate compensation depth (below which there is no calcium carbonate preservation). In a supersaturated environment, the tests of calcite-forming organisms are preserved as they sink toward the sea floor, resulting in sediments with relatively high amounts of CaCO3. However, as depth and pressure increase and temperature decreases, the solubility of calcium carbonate also increases, which results in more dissolution and less net transport to the deeper, underlying seafloor. As a result of this rapid change in dissolution rates, sediments in the bathypelagic region vary widely in CaCO3 content and burial.
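The dissolution behaviour described here is often expressed through the saturation state Omega = [Ca2+][CO3 2-]/K'sp, where K'sp is the apparent solubility product, which increases with pressure and decreasing temperature; dissolution becomes thermodynamically favourable where Omega < 1. The sketch below only illustrates the depth trend; every concentration and solubility value in it is an assumed example, not a measurement.

```python
# Saturation state Omega = [Ca2+][CO3^2-] / Ksp'; calcite tends to dissolve where Omega < 1.
# All numbers below are illustrative assumptions chosen only to show the depth trend.
CA = 0.0103                      # mol/kg, roughly conservative in seawater

# (depth in m, assumed [CO3^2-] in mol/kg, assumed apparent Ksp' in mol^2/kg^2)
levels = [
    (500,  90e-6, 4.3e-7),
    (2000, 85e-6, 5.5e-7),
    (4000, 80e-6, 9.0e-7),       # higher pressure -> higher solubility product
]

for depth, co3, ksp in levels:
    omega = CA * co3 / ksp
    state = "supersaturated" if omega > 1 else "undersaturated (dissolution favoured)"
    print(f"{depth:5d} m: Omega = {omega:.2f} -> {state}")
```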
Ecology
The ecology of the bathypelagic ecosystem is constrained by its lack of sunlight and primary producers, with limited production of microbial biomass via autotrophy. The trophic networks in this region rely on particulate organic matter (POM) that sinks from the epipelagic and mesopelagic water, and oxygen inputs from the thermohaline circulation. Despite these limitations, this open-ocean ecosystem is home to microbial organisms, fish, and nekton.
Microbial ecology
A comprehensive understanding of the inputs driving the microbial ecology in the bathypelagic zone is lacking due to limited observational data, but has been improving with advancements in deep-sea technology. A majority of our knowledge of ocean microbial activity comes from studies of the shallower regions of the ocean because it is easier to access, and it was previously assumed that deeper water did not have suitable physical conditions for diverse microbial communities. The bathypelagic zone receives inputs of organic material and POM from the surface ocean on the order of 1-3.6 Pg C/year.
Prokaryote biomass in the bathypelagic is dependent on, and thus correlated with, the amount of sinking POM and organic carbon availability. These essential organic carbon inputs for microbes typically decrease with depth as they are utilized while sinking to the bathypelagic. Microbial production varies over six orders of magnitude based on resource availability in a given area. Prokaryote abundance can range from 0.03–2.3×10^5 cells per mL, and population turnover times can range from 0.1–30 years. Archaea make up a larger portion of the total prokaryote cell abundance, and different groups have different growth needs, with some archaea groups for example utilizing amino acid groups more readily than others. Some archaea, like Crenarchaeota, show 16S rRNA and archaeal amoA gene abundances correlated with dissolved inorganic carbon (DIC) fixation. The utilization of DIC is thought to be fueled by the oxidation of ammonium and is one form of chemoautotrophy. Estimates therefore vary regionally, reflecting differences in prokaryote abundance, heterotrophic prokaryote production, and particulate organic carbon (POC) inputs to the bathypelagic zone.
Research to quantify bacterial-consuming grazers, like heterotrophic eukaryotes, has been limited by difficulties in sampling. Oftentimes organisms do not survive being brought to the surface due to experiencing drastic pressure changes in a short amount of time. Work is underway to quantify cell abundance and biomass, but due to poor survival, it is difficult to get accurate counts. In more recent years there has been an effort to categorize the diversity of the eukaryotic assemblages in the bathypelagic zone using methods to assess the genetic compositions of microbial communities based on supergroups, which is a way to classify organisms that have common ancestry. Some important groups of bacterial grazers include Rhizaria, Alveolata, Fungi, Stramenopiles, Amoebozoa, and Excavata (listed from most to least abundant), with the remaining composition classified as uncertain or other.
Viruses influence biogeochemical cycling through the role they play in marine food webs. Their overall abundance can be up to two orders of magnitude lower than in the mesopelagic zone; however, there is often high viral abundance found around deep-sea hydrothermal vents. The magnitude of their impact on biological systems is demonstrated by the wide range of viral-to-prokaryote abundance ratios, from 1 to 223, which indicates that there are as many or more viruses than prokaryotes.
Fauna
Fish ecology
Despite the lack of light, vision plays a role in life within the bathypelagic, with bioluminescence being a common trait among both nektonic and planktonic organisms. In contrast to organisms in the water column, benthic organisms in this region tend to have limited or no bioluminescence. The bathypelagic zone contains sharks, squid, octopuses, and many species of fish, including deep-water anglerfish, gulper eels, amphipods, and dragonfish. The fish are characterized by weak muscles, soft skin, and slimy bodies. The adaptations of some of the fish that live there include small eyes and transparent skin. However, this zone is difficult for fish to live in since food is scarce, so many species have evolved slow metabolic rates in order to conserve energy. Occasionally, large sources of organic matter from decaying organisms, such as whale falls, create a brief burst of activity by attracting organisms from different bathypelagic communities.
Diel vertical migration
Some bathypelagic species undergo vertical migration, which differs from the diel vertical migration of mesopelagic species in that it is not driven by sunlight. Instead, the migration of bathypelagic organisms is driven by other factors, most of which remain unknown. Some research suggests the movement of species within the overlying pelagic region could prompt individual bathypelagic species, such as the squid Sthenoteuthis sp., to migrate. In this particular example, Sthenoteuthis sp. appears to migrate individually over the course of ~4–5 hours towards the surface and then form into groups. While in most regions migration patterns can be driven by predation, in this particular region, the migration patterns are not believed to result solely from predator-prey relations. Instead, these relations are commensalistic, with the species that remain in the bathypelagic benefitting from the POM mixing caused by the upward movement of another species. In addition, the timing of vertically migrating bathypelagic species appears linked to the lunar cycle. However, the exact indicators causing this timing are still unknown.
Research and exploration
This region is understudied due to a lack of data/observations and difficulty of access (i.e. cost, remote locations, extreme pressure). Historically in oceanography, continental margins were the most sampled and researched due to their relatively easy access. However, more recently locations further offshore and at greater depths, such as ocean ridges and seamounts, are being increasingly studied due to advances in technology and laboratory methods, as well as collaboration with industry. The first discovery of communities subsisting off of the chemical energy in hydrothermal vents was made during a 1977 expedition led by Jack Corliss, an oceanographer from Oregon State University. More recent advancements include remotely operated vehicles (ROVs), autonomous underwater vehicles (AUVs), and independent gliders and floats.
Specific technologies and research projects
SERPENT Project
Ocean Twilight Zone (OTZ) Project
DEEP SEARCH Project
DEEPEND Project
AUV Sentry
ROV Jason
Hybrid ROV Nereus
AUV Autosub Long Range
Climate change
The oceans act as a buffer for anthropogenic climate change due to their ability to take up atmospheric CO2 and absorb heat from the atmosphere. However, the ocean's ability to do so will be negatively affected as atmospheric CO2 concentrations continue to rise and global temperatures continue to warm. This will lead to changes such as deoxygenation, ocean acidification, temperature increase, and carbon sequestration decrease, among other physical and chemical alterations. These perturbations may have significant impacts on the organisms that dwell in the bathypelagic region and the properties that deliver organic carbon to the deep sea.
Carbon storage
The bathypelagic zone currently acts as a significant reservoir for carbon; because of its sheer volume and the century-to-millennial timescales over which these waters are isolated from the atmosphere, this ocean zone plays an important role in moderating the effects of anthropogenic climate change. The burial of particulate organic carbon (POC) in the underlying sediments via the biological carbon pump, and the solubility pump of dissolved inorganic carbon (DIC) into the ocean interior via the thermohaline conveyor are key processes for removing excess atmospheric carbon. However, as atmospheric CO2 concentrations and global temperatures continue to rise, the efficiency with which the bathypelagic will store and bury the influx of carbon will most likely decrease. While some regions may experience an increase in POC input, such as Arctic regions where increased periods of minimal sea ice coverage will increase the downward flux of carbon from the surface oceans, overall, there will likely be less carbon sequestered to the bathypelagic region.
References
External links
Woods Hole Oceanographic Institution - Midnight Zone
Oregon Coast Aquarium OceanScape - Midnight Zone
Oceanography | Bathypelagic zone | [
"Physics",
"Environmental_science"
] | 3,302 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
565,742 | https://en.wikipedia.org/wiki/Symbolic%20method | In mathematics, the symbolic method in invariant theory is an algorithm developed by Arthur Cayley, Siegfried Heinrich Aronhold, Alfred Clebsch, and Paul Gordan in the 19th century for computing invariants of algebraic forms. It is based on treating the form as if it were a power of a degree one form, which corresponds to embedding a symmetric power of a vector space into the symmetric elements of a tensor product of copies of it.
Symbolic notation
The symbolic method uses a compact, but rather confusing and mysterious notation for invariants, depending on the introduction of new symbols a, b, c, ... (from which the symbolic method gets its name) with apparently contradictory properties.
Example: the discriminant of a binary quadratic form
These symbols can be explained by the following example from Gordan. Suppose that
$$f(x) = A_0 x_1^2 + 2A_1 x_1 x_2 + A_2 x_2^2$$
is a binary quadratic form with an invariant given by the discriminant
$$\Delta = A_0 A_2 - A_1^2.$$
The symbolic representation of the discriminant is
$$2\Delta = (ab)^2,$$
where a and b are the symbols. The meaning of the expression (ab)^2 is as follows. First of all, (ab) is a shorthand form for the determinant of a matrix whose rows are a1, a2 and b1, b2, so
$$(ab) = a_1 b_2 - a_2 b_1.$$
Squaring this we get
$$(ab)^2 = a_1^2 b_2^2 - 2 a_1 a_2 b_1 b_2 + a_2^2 b_1^2.$$
Next we pretend that
$$f(x) = (a_1 x_1 + a_2 x_2)^2 = (b_1 x_1 + b_2 x_2)^2,$$
so that
$$a_1^2 = b_1^2 = A_0, \quad a_1 a_2 = b_1 b_2 = A_1, \quad a_2^2 = b_2^2 = A_2,$$
and we ignore the fact that this does not seem to make sense if f is not a power of a linear form. Substituting these values gives
$$(ab)^2 = a_1^2 b_2^2 - 2 a_1 a_2 b_1 b_2 + a_2^2 b_1^2 = A_0 A_2 - 2 A_1^2 + A_2 A_0 = 2(A_0 A_2 - A_1^2) = 2\Delta.$$
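The substitution above can also be checked mechanically with a computer algebra system. The sketch below (assuming the SymPy library; the symbol names are chosen purely for illustration) expands (ab)^2 and applies the identifications a1^2 = b1^2 = A0, a1a2 = b1b2 = A1, a2^2 = b2^2 = A2, recovering twice the discriminant.

```python
# Illustrative check of the symbolic method's discriminant computation (assumes SymPy).
from sympy import symbols, expand, factor

a1, a2, b1, b2 = symbols('a1 a2 b1 b2')
A0, A1, A2 = symbols('A0 A1 A2')

# The bracket (ab) = a1*b2 - a2*b1, squared and expanded.
bracket_squared = expand((a1*b2 - a2*b1)**2)

# Apply the symbolic identifications a1^2 = b1^2 = A0, a1*a2 = b1*b2 = A1, a2^2 = b2^2 = A2.
substitutions = {a1**2: A0, a1*a2: A1, a2**2: A2,
                 b1**2: A0, b1*b2: A1, b2**2: A2}
result = bracket_squared.subs(substitutions)

print(factor(result))  # twice the discriminant, 2*(A0*A2 - A1**2)
```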
Higher degrees
More generally if
$$f(x) = A_0 x_1^n + \binom{n}{1} A_1 x_1^{n-1} x_2 + \binom{n}{2} A_2 x_1^{n-2} x_2^2 + \cdots + A_n x_2^n$$
is a binary form of higher degree, then one introduces new variables a1, a2, b1, b2, c1, c2, with the properties
$$f(x) = (a_1 x_1 + a_2 x_2)^n = (b_1 x_1 + b_2 x_2)^n = (c_1 x_1 + c_2 x_2)^n = \cdots.$$
What this means is that the following two vector spaces are naturally isomorphic:
The vector space of homogeneous polynomials in A0, ..., An of degree m
The vector space of polynomials in 2m variables a1, a2, b1, b2, c1, c2, ... that have degree n in each of the m pairs of variables (a1, a2), (b1, b2), (c1, c2), ... and are symmetric under permutations of the m symbols a, b, ....,
The isomorphism is given by mapping $a_1^{n-j}a_2^j$, $b_1^{n-j}b_2^j$, ... to $A_j$. This mapping does not preserve products of polynomials.
More variables
The extension to a form f in more than two variables x1, x2, x3,... is similar: one introduces symbols a1, a2, a3 and so on with the properties
$$f(x) = (a_1 x_1 + a_2 x_2 + a_3 x_3 + \cdots)^n = (b_1 x_1 + b_2 x_2 + b_3 x_3 + \cdots)^n = \cdots.$$
Symmetric products
The rather mysterious formalism of the symbolic method corresponds to embedding a symmetric product Sn(V) of a vector space V into a tensor product of n copies of V, as the elements preserved by the action of the symmetric group. In fact this is done twice, because the invariants of degree n of a quantic of degree m are the invariant elements of SnSm(V), which gets embedded into a tensor product of mn copies of V, as the elements invariant under a wreath product of the two symmetric groups. The brackets of the symbolic method are really invariant linear forms on this tensor product, which give invariants of SnSm(V) by restriction.
See also
Umbral calculus
References
Footnotes
Further reading
pp. 32–7, "Invariants of n-ary forms: the symbolic method. Reprinted as
Algebra
Invariant theory | Symbolic method | [
"Physics",
"Mathematics"
] | 684 | [
"Algebra",
"Invariant theory",
"Group actions",
"Symmetry"
] |
14,350,137 | https://en.wikipedia.org/wiki/Through-silicon%20via | In electronic engineering, a through-silicon via (TSV) or through-chip via is a vertical electrical connection (via) that passes completely through a silicon wafer or die. TSVs are high-performance interconnect techniques used as an alternative to wire-bond and flip chips to create 3D packages and 3D integrated circuits. Compared to alternatives such as package-on-package, the interconnect and device density is substantially higher, and the length of the connections becomes shorter.
Classification
Dictated by the manufacturing process, there exist three different types of TSVs: via-first TSVs are fabricated before the individual components (transistors, capacitors, resistors, etc.) are patterned (front end of line, FEOL), via-middle TSVs are fabricated after the individual components are patterned but before the metal layers (back-end-of-line, BEOL), and via-last TSVs are fabricated after (or during) the BEOL process. Via-middle TSVs are currently a popular option for advanced 3D ICs as well as for interposer stacks.
TSVs through the front end of line (FEOL) have to be carefully accounted for during the EDA and manufacturing phases. That is because TSVs induce thermo-mechanical stress in the FEOL layer, thereby impacting the transistor behaviour.
Applications
Image sensors
CMOS image sensors (CIS) were among the first applications to adopt TSV(s) in volume manufacturing. In initial CIS applications, TSVs were formed on the backside of the image sensor wafer to form interconnects, eliminate wire bonds, and allow for reduced form factor and higher-density interconnects. Die stacking came about only with the advent of backside illuminated (BSI) CIS, and involved reversing the order of the lens, circuitry, and photodiode from traditional front-side illumination so that the light coming through the lens first hits the photodiode and then the circuitry. This was accomplished by flipping the photodiode wafer, thinning the backside, and then bonding it on top of the readout layer using a direct oxide bond, with TSVs as interconnects around the perimeter.
3D packages
A 3D package (System in Package, Chip Stack MCM, etc.) contains two or more dies stacked vertically so that they occupy less space and/or have greater connectivity. An alternate type of 3D package can be found in IBM's Silicon Carrier Packaging Technology, where ICs are not stacked but a carrier substrate containing TSVs is used to connect multiple ICs together in a package. In most 3D packages, the stacked chips are wired together along their edges; this edge wiring slightly increases the length and width of the package and usually requires an extra "interposer" layer between the dies. In some new 3D packages, TSVs replace edge wiring by creating vertical connections through the body of the dies. The resulting package has no added length or width. Because no interposer is required, a TSV 3D package can also be flatter than an edge-wired 3D package. This TSV technique is sometimes also referred to as TSS (Through-Silicon Stacking or Thru-Silicon Stacking).
3D integrated circuits
A 3D integrated circuit (3D IC) is a single integrated circuit built by stacking silicon wafers and/or dies and interconnecting them vertically so that they behave as a single device. By using TSV technology, 3D ICs can pack a great deal of functionality into a small "footprint". The different devices in the stack may be heterogeneous, e.g. combining CMOS logic, DRAM and III-V materials into a single IC. In addition, critical electrical paths through the device can be drastically shortened, leading to faster operation. The Wide I/O 3D DRAM memory standard (JEDEC JESD229) includes TSVs in the design.
History
The origins of the TSV concept can be traced back to William Shockley's patent "Semiconductive Wafer and Method of Making the Same" filed in 1958 and granted in 1962, which was further developed by IBM researchers Merlin Smith and Emanuel Stern with their patent "Methods of Making Thru-Connections in Semiconductor Wafers" filed in 1964 and granted in 1967, the latter describing a method for etching a hole through silicon. TSV was not originally designed for 3D integration, but the first 3D chips based on TSV were invented later in the 1980s.
The first three-dimensional integrated circuit (3D IC) stacked dies fabricated with a TSV process were invented in 1980s Japan. Hitachi filed a Japanese patent in 1983, followed by Fujitsu in 1984. In 1986, Fujitsu filed a Japanese patent describing a stacked chip structure using TSV. In 1989, Mitsumasa Koyanagi of Tohoku University pioneered the technique of wafer-to-wafer bonding with TSV, which he used to fabricate a 3D LSI chip in 1989. In 1999, the Association of Super-Advanced Electronics Technologies (ASET) in Japan began funding the development of 3D IC chips using TSV technology, called the "R&D on High Density Electronic System Integration Technology" project. The Koyanagi Group at Tohoku University used TSV technology to fabricate a three-layer stacked image sensor chip in 1999, a three-layer memory module in 2000, a three-layer artificial retina chip in 2001, a three-layer microprocessor in 2002, and a ten-layer memory chip in 2005.
The inter-chip via (ICV) method was developed in 1997 by a Fraunhofer–Siemens research team including Peter Ramm, D. Bollmann, R. Braun, R. Buchner, U. Cao-Minh, Manfred Engelhardt and Armin Klumpp. It was a variation of the TSV process, and was later called SLID (solid liquid inter-diffusion) technology.
The term "through-silicon via" (TSV) was coined by Tru-Si Technologies researchers Sergey Savastiouk, O. Siniaguine, and E. Korczynski, who proposed a TSV method for a 3D wafer-level packaging (WLP) solution in 2000.
CMOS image sensors utilising TSV were commercialized by companies including Toshiba, Aptina and STMicroelectronics during 2007–2008, with Toshiba naming their technology "Through Chip Via" (TCV). 3D-stacked random-access memory (RAM) was commercialized by Elpida Memory, which developed the first 8GB DRAM module (stacked with four DDR3 SDRAM dies) in September 2009, and released it in June 2011. TSMC announced plans for 3D IC production with TSV technology in January 2010. In 2011, SK Hynix introduced 16GB DDR3 SDRAM (40nm class) using TSV technology, Samsung introduced 3D-stacked 32GB DDR3 (30nm class) based on TSV in September, and then Samsung and Micron Technology announced TSV-based Hybrid Memory Cube (HMC) technology in October. In 2013, SK Hynix manufactured the first High Bandwidth Memory (HBM) module based on TSV technology. The via middle technology was developed by imec under the vision of Eric Beyne. The via middle provided the best trade-off in terms of cost and interconnect density. The work was supported by Qualcomm, and then later Nvidia, Xilinx and Altera, who were looking for ways to beat Intel at its own game: increasing on-die memory, but by stacking rather than scaling.
References
External links
Integrated circuits
Semiconductor device fabrication | Through-silicon via | [
"Materials_science",
"Technology",
"Engineering"
] | 1,601 | [
"Semiconductor device fabrication",
"Integrated circuits",
"Computer engineering",
"Microtechnology"
] |
14,350,461 | https://en.wikipedia.org/wiki/IdMOC | Integrated discrete Multiple Organ Culture (IdMOC) is an in vitro, cell culture based experimental model for the study of intercellular communication. In conventional in vitro systems, each cell type is studied in isolation ignoring critical interactions between organs or cell types. IdMOC technology is based on the concept that multiple organs signal or communicate via the systemic circulation (i.e., blood).
The IdMOC plate consists of multiple inner wells within a large interconnecting chamber. Multiple cell types are first individually seeded in the inner wells and, when required, are flooded with an overlying medium to facilitate well-to-well communication. Test material can be added to the overlying medium and both media and cells can be analyzed individually. Plating of hepatocytes with other organ-specific cells allows evaluation of drug metabolism and organotoxicity.
The IdMOC system has numerous applications in drug development, such as the evaluation of drug metabolism and toxicity. It can simultaneously evaluate the toxic potential of a drug on cells from multiple organs and evaluate drug stability, distribution, metabolite formation, and efficacy. By modeling multiple-organ interactions, IdMOC can examine the pharmacological effects of a drug and its metabolites on target and off-target organs as well as evaluate drug-drug interactions by measuring cytochrome P450 (CYP) induction or inhibition in hepatocytes.
IdMOC can also be used for routine and high throughput screening of drugs with desirable ADME or ADME-Tox properties. In vitro toxicity screening using hepatocytes in conjunction with other primary cells such as cardiomyocytes (cardiotoxicity model), kidney proximal tubule epithelial cells (nephrotoxicity model), astrocytes (neurotoxicity model), endothelial cells (vascular toxicity model), and airway epithelial cells (pulmonary toxicity model) is invaluable to the drug design and discovery process.
The IdMOC was patented by Dr. Albert P. Li in 2004.
See also
Cytochrome P450
Drug metabolism
Pharmacology
Toxicology
References
External links
http://www.apsciences.com
http://www.invitroadmet.com
"Scientist shows the way to take guinea pigs off lab," Karthika Gopalakrishnan. The Times of India. 17 February 2011. Retrieved 19 August 2015.
Drug development
Pharmacokinetics
Pharmacodynamics
Pharmaceutics
Metabolism
Biochemistry
Cell communication | IdMOC | [
"Chemistry",
"Biology"
] | 531 | [
"Pharmacology",
"Cell communication",
"Pharmacokinetics",
"Pharmacodynamics",
"Cellular processes",
"nan",
"Biochemistry",
"Metabolism"
] |
14,350,687 | https://en.wikipedia.org/wiki/Halogen%20bond | In chemistry, a halogen bond (XB or HaB) occurs when there is evidence of a net attractive interaction between an electrophilic region associated with a halogen atom in a molecular entity and a nucleophilic region in another, or the same, molecular entity. Like a hydrogen bond, the result is not a formal chemical bond, but rather a strong electrostatic attraction. Mathematically, the interaction can be decomposed in two terms: one describing an electrostatic, orbital-mixing charge-transfer and another describing electron-cloud dispersion. Halogen bonds find application in supramolecular chemistry; drug design and biochemistry; crystal engineering and liquid crystals; and organic catalysis.
Definition
Halogen bonds occur when a halogen atom is electrostatically attracted to a partial negative charge. Necessarily, the halogen atom must be covalently bonded through a σ-bond; the electron concentration associated with that bond leaves a positively charged "hole" on the opposite (antipodal) side. Although all halogens can theoretically participate in halogen bonds, the σ-hole shrinks if the electron cloud in question polarizes poorly or the halogen is so electronegative as to polarize the associated σ-bond. Consequently, halogen-bond propensity follows the trend F < Cl < Br < I.
There is no clear distinction between halogen bonds and expanded octet partial bonds; what is superficially a halogen bond may well turn out to be a full bond in an unexpectedly relevant resonance structure.
Donor characteristics
A halogen bond is almost collinear with the halogen atom's other, conventional bond, but the geometry of the electron-charge donor may be much more complex.
Multi-electron donors such as ethers and amines prefer halogen bonds collinear with the lone pair and donor nucleus.
Pyridine derivatives tend to donate halogen bonds approximately coplanar with the ring, and the two C–N–X angles are about 120°.
Carbonyl, thiocarbonyl, and selenocarbonyl groups, with a trigonal planar geometry around the Lewis donor atom, can accept one or two halogen bonds.
Anions are usually better halogen-bond acceptors than neutral species: the more dissociated an ion pair is, the stronger the halogen bond formed with the anion.
Comparison to other bond-like forces
A parallel relationship can easily be drawn between halogen bonding and hydrogen bonding. Both interactions revolve around an electron donor/electron acceptor relationship between an electron-poor atom (hydrogen or halogen) and an electron-dense one. But halogen bonding is both much stronger and more sensitive to direction than hydrogen bonding. A typical hydrogen bond has a lower energy of formation; known halogen bond energies range from 10 to 200 kJ/mol.
The σ-hole concept readily extends to pnictogen, chalcogen and aerogen bonds, corresponding to atoms of Groups 15, 16 and 18 (respectively).
History
In 1814, Jean-Jacques Colin discovered (to his surprise) that a mixture of dry gaseous ammonia and iodine formed a shiny, metallic-appearing liquid. Frederick Guthrie established the precise composition of the resulting I2···NH3 complex fifty years later, but the physical processes underlying the molecular interaction remained mysterious until the development of Robert S. Mulliken's theory of inner-sphere and outer-sphere interactions. In Mulliken's categorization, the intermolecular interactions associated with small partial charges affect only the "inner sphere" of an atom's electron distribution; the electron redistribution associated with Lewis adducts affects the "outer sphere" instead.
Then, in 1954, Odd Hassel fruitfully applied the distinction to rationalize the X-ray diffraction patterns associated with a mixture of 1,4-dioxane and bromine. The patterns suggested that only 2.71 Å separated the dioxane oxygen atoms and bromine atoms, much closer than the sum (3.35 Å) of the atoms' van der Waals radii; and that the angle between the O−Br and Br−Br bond was about 180°. From these facts, Hassel concluded that halogen atoms are directly linked to electron pair donors along a bond direction that coincides with the axes of the lone-pair orbitals in the electron pair donor molecule. For this work, Hassel was awarded the 1969 Nobel Prize in Chemistry.
Dumas and coworkers first coined the term "halogen bond" in 1978, during their investigations into complexes of CCl4, CBr4, SiCl4, and SiBr4 with tetrahydrofuran, tetrahydropyran, pyridine, anisole, and di-n-butyl ether in organic solvents.
However, it was not until the mid-1990s, that the nature and applications of the halogen bond began to be intensively studied. Through systematic and extensive microwave spectroscopy of gas-phase halogen bond adducts, Legon and coworkers drew attention to the similarities between halogen-bonding and better-known hydrogen-bonding interactions.
In 2007, computational calculations by Politzer and Murray showed that an anisotropic electron density distribution around the halogen nucleus — the "σ-hole" — underlay the high directionality of the halogen bond. This hole was then experimentally observed using Kelvin probe force microscopy.
In 2020, Kellett et al. showed that halogen bonds also have a π-covalent character similar to metal coordination bonds. In August 2023, the "π-hole" was also observed experimentally.
Applications
Crystal engineering
The strength and directionality of halogen bonds are a key tool in the discipline of crystal engineering, which attempts to shape crystal structures through close control of intermolecular interactions. Halogen bonds can stabilize copolymers or induce mesomorphism in otherwise isotropic liquids. Indeed, halogen bond-induced liquid crystalline phases are known in both alkoxystilbazoles and silsesquioxanes (pictured). Alternatively, the steric sensitivity of halogen bonds can cause bulky molecules to crystallize into porous structures; in one notable case, halogen bonds between iodine and aromatic π-orbitals caused molecules to crystallize into a pattern that was nearly 40% void.
Controlled polymerization
Conjugated polymers offer the tantalizing possibility of organic molecules with a manipulable electronic band structure, but current methods for production have an uncontrolled topology. Sun, Lauher, and Goroff discovered that certain amides ensure a linear polymerization of poly(diiododiacetylene). The underlying mechanism is a self-organization of the amides via hydrogen bonds that then transfers to the diiododiacetylene monomers via halogen bonds. Although pure diiododiacetylene crystals do not polymerize spontaneously, the halogen-bond induced organization is sufficiently strong that the cocrystals do spontaneously polymerize.
Biological macromolecules
Most biological macromolecules contain few or no halogen atoms. But when molecules do contain halogens, halogen bonds are often essential to understanding molecular conformation. Computational studies suggest that known halogenated nucleobases form halogen bonds with oxygen, nitrogen, or sulfur in vitro. Interestingly, oxygen atoms typically do not attract halogens with their lone pairs, but rather the π electrons in the carbonyl or amide group.
Halogen bonding can be significant in drug design as well. For example, inhibitor IDD 594 binds to human aldose reductase through a bromine halogen bond, as shown in the figure. The molecules fail to bind to each other if similar aldehyde reductase replaces the enzyme, or chlorine replaces the drug halogen, because the variant geometries inhibit the halogen bond.
Notes
References
Further reading
An early review:
Chemical bonding
Intermolecular forces | Halogen bond | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,655 | [
"Molecular physics",
"Materials science",
"Intermolecular forces",
"Condensed matter physics",
"nan",
"Chemical bonding"
] |
14,352,042 | https://en.wikipedia.org/wiki/FENE%20model | In polymer physics, the finite extensible nonlinear elastic (FENE) model, also called the FENE dumbbell model, represents the dynamics of a long-chained polymer. It simplifies the chain of monomers by connecting a sequence of beads with nonlinear springs.
Its direct extension, the FENE-P model, is more commonly used in computational fluid dynamics to simulate turbulent flow. The P stands for the last name of physicist Anton Peterlin, who developed an important approximation of the model in 1966. The FENE-P model was introduced by Robert Byron Bird et al. in the 1980s.
In 1991 the FENE-PM model (PM for modified Peterlin) was introduced, and in 1988 the FENE-CR model was introduced by M.D. Chilcott and J.M. Rallison.
Formulation
The spring force in the FENE model is given by Warner's spring force,
$$\vec{F}(\vec{r}) = \frac{-k\,\vec{r}}{1 - (r/L_\text{max})^2},$$
where $r = |\vec{r}|$, k is the spring constant and $L_\text{max}$ the upper limit for the length extension. The total stretching force on the i-th bead can be written as $\vec{F}_i = \vec{F}(\vec{r}_i) - \vec{F}(\vec{r}_{i-1})$, the difference of the forces exerted by the two springs attached to it.
Warner's spring force approximates the inverse Langevin function found in other models.
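As a minimal illustration of the formula above, the following sketch (Python with NumPy; the function name and parameter values are illustrative only, not taken from any particular simulation package) evaluates the Warner/FENE spring force for a given connector vector and shows how the force diverges as the extension approaches L_max.

```python
import numpy as np

def fene_spring_force(r_vec, k, L_max):
    """Warner/FENE restoring force F = -k*r / (1 - (|r|/L_max)^2) for one spring."""
    r = np.linalg.norm(r_vec)
    if r >= L_max:
        raise ValueError("FENE spring cannot be stretched beyond L_max")
    return -k * np.asarray(r_vec, dtype=float) / (1.0 - (r / L_max) ** 2)

# Example: the restoring force grows without bound as the extension approaches L_max.
k, L_max = 1.0, 1.0
for stretch in (0.1, 0.5, 0.9, 0.99):
    force = fene_spring_force(np.array([stretch, 0.0, 0.0]), k, L_max)
    print(f"extension {stretch:4.2f} -> force magnitude {np.linalg.norm(force):8.2f}")
```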
FENE-P model
The FENE-P model takes the FENE model and assumes the Peterlin statistical average for the restoring force,
$$\vec{F}(\vec{r}) = \frac{-k\,\vec{r}}{1 - \langle r^2 \rangle / L_\text{max}^2},$$
where $\langle \cdot \rangle$ indicates the statistical average.
Advantages and disadvantages
FENE-P is one of few polymer models that can be used in computational fluid dynamics simulations since it removes the need of statistical averaging at each grid point at any instant in time. It is demonstrated to be able to capture some of the most important polymeric flow behaviors such as polymer turbulence drag reduction and shear thinning. It is the most commonly used polymer model that can be used in a turbulence simulation since direct numerical simulation of turbulence is already extremely expensive.
Due to its simplifications FENE-P is not able to show the hysteresis effects that polymers have, while the FENE model can.
References
Dynamics of dissolved polymer chains in isotropic turbulence
External links
QPolymer: an open source (for Mac OS X) FENE model Brownian dynamics simulation software
Stretching of Polymers in Isotropic Turbulence: A Statistical Closure
Polymers | FENE model | [
"Chemistry",
"Materials_science"
] | 450 | [
"Polymers",
"Polymer chemistry"
] |
14,352,711 | https://en.wikipedia.org/wiki/Lateral%20flow%20test | A lateral flow test (LFT), is an assay also known as a lateral flow immunochromatographic test (ICT), or rapid test. It is a simple device intended to detect the presence of a target substance in a liquid sample without the need for specialized and costly equipment. LFTs are widely used in medical diagnostics in the home, at the point of care, and in the laboratory. For instance, the home pregnancy test is an LFT that detects a specific hormone. These tests are simple and economical and generally show results in around five to thirty minutes. Many lab-based applications increase the sensitivity of simple LFTs by employing additional dedicated equipment. Because the target substance is often a biological antigen, many lateral flow tests are rapid antigen tests (RAT or ART).
LFTs operate on the same principles of affinity chromatography as the enzyme-linked immunosorbent assays (ELISA). In essence, these tests run the liquid sample along the surface of a pad with reactive molecules that show a visual positive or negative result. The pads are based on a series of capillary beds, such as pieces of porous paper, microstructured polymer, or sintered polymer. Each of these pads has the capacity to transport fluid (e.g., urine, blood, saliva) spontaneously.
The sample pad acts as a sponge and holds an excess of sample fluid. Once soaked, the fluid flows to the second conjugate pad in which the manufacturer has stored freeze dried bio-active particles called conjugates (see below) in a salt–sugar matrix. The conjugate pad contains all the reagents required for an optimized chemical reaction between the target molecule (e.g., an antigen) and its chemical partner (e.g., antibody) that has been immobilized on the particle's surface. This marks target particles as they pass through the pad and continue across to the test and control lines. The test line shows a signal, often a color as in pregnancy tests. The control line contains affinity ligands which show whether the sample has flowed through and the bio-molecules in the conjugate pad are active. After passing these reaction zones, the fluid enters the final porous material, the wick, that simply acts as a waste container.
LFTs can operate as either competitive or sandwich assays.
History
LFTs derive from paper chromatography, which was developed in 1943 by Martin and Synge, and elaborated in 1944 by Consden, Gordon and Martin. There was an explosion of activity in this field after 1945. The ELISA technology was developed in 1971. A set of LFT patents, including the litigated US 6,485,982 described below, were filed by Armkel LLC starting in 1988.
Synopsis
Colored particles
In principle, any colored particle can be used, but latex (blue color) or nanometer-sized particles of gold (red color) are most commonly used. The gold particles are red in color due to localized surface plasmon resonance. Fluorescent or magnetic labelled particles can also be used, but these require the use of an electronic reader to assess the test result.
Sandwich assays
Sandwich assays are generally used for larger analytes because they tend to have multiple binding sites. As the sample migrates through the assay it first encounters a conjugate, which is an antibody specific to the target analyte labelled with a visual tag, usually colloidal gold. The antibodies bind to the target analyte within the sample and migrate together until they reach the test line. The test line also contains immobilized antibodies specific to the target analyte, which bind to the migrated analyte bound conjugate molecules. The test line then presents a visual change due to the concentrated visual tag, hence confirming the presence of the target molecules. The majority of sandwich assays also have a control line which will appear whether or not the target analyte is present to ensure proper function of the lateral flow pad.
The rapid, low-cost sandwich-based assay is commonly used for home pregnancy tests which detect human chorionic gonadotropin, hCG, in the urine of pregnant women.
Competitive assays
Competitive assays are generally used for smaller analytes since smaller analytes have fewer binding sites. The sample first encounters antibodies to the target analyte labelled with a visual tag (colored particles). The test line contains the target analyte fixed to the surface. When the target analyte is absent from the sample, unbound antibody will bind to these fixed analyte molecules, meaning that a visual marker will show. Conversely, when the target analyte is present in the sample, it binds to the antibodies to prevent them binding to the fixed analyte in the test line, and thus no visual marker shows. This differs from sandwich assays in that no band means the analyte is present.
Quantitative tests
Most LFTs are intended to operate on a purely qualitative basis. However, it is possible to measure the intensity of the test line to determine the quantity of analyte in the sample. Handheld diagnostic devices known as lateral flow readers are used by several companies to provide a fully quantitative assay result. By utilizing unique wavelengths of light for illumination in conjunction with either CMOS or CCD detection technology, a signal-rich image can be produced of the actual test lines. Using image processing algorithms specifically designed for a particular test type and medium, line intensities can then be correlated with analyte concentrations. One such handheld lateral flow device platform is made by Detekt Biomedical L.L.C. Alternative non-optical techniques are also able to report quantitative assay results. One such example is a magnetic immunoassay (MIA), which in the LFT format also allows a quantified result. Reducing variations in the capillary pumping of the sample fluid is another approach to move from qualitative to quantitative results. Recent work has, for example, demonstrated capillary pumping with a constant flow rate independent of the liquid viscosity and surface energy.
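To illustrate how a reader can turn line intensities into concentrations, the sketch below uses hypothetical calibration data (assuming NumPy and SciPy; all values are invented for illustration) to fit a four-parameter logistic calibration curve, a common choice for immunoassay dose-response data, and then inverts the fitted curve for an unknown sample.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """Four-parameter logistic: signal as a function of analyte concentration."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical calibration standards: concentration (ng/mL) vs. measured line intensity.
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
intensity = np.array([2.0, 7.5, 13.0, 38.0, 55.0, 88.0])

params, _ = curve_fit(four_pl, conc, intensity, p0=[1.0, 1.0, 5.0, 100.0], maxfev=10000)

# Invert the fitted curve to estimate the concentration of an unknown sample.
a, b, c, d = params
unknown_intensity = 45.0
estimated_conc = c * ((a - d) / (unknown_intensity - d) - 1.0) ** (1.0 / b)
print(f"estimated concentration: {estimated_conc:.2f} ng/mL")
```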
Control line
Most tests will incorporate a second line which contains a further antibody (one which is not specific to the analyte) that binds some of the remaining colored particles which did not bind to the test line. This confirms that fluid has passed successfully from the sample-application pad, past the test line. By giving confirmation that the sample has had a chance to interact with the test line, this increases confidence that a visibly-unchanged test line can be interpreted as a negative result (or that a changed test line can be interpreted as a negative result in a competitive assay).
Blood plasma extraction
Because the intense red color of hemoglobin interferes with the readout of colorimetric or optical detection-based diagnostic tests, blood plasma separation is a common first step to increase diagnostic test accuracy. Plasma can be extracted from whole blood via integrated filters or via agglutination.
Speed and simplicity
Time to obtain the test result is a key driver for these products. Tests results can be available in as little as a few minutes. Generally there is a trade off between time and sensitivity: more sensitive tests may take longer to develop. The other key advantage of this format of test compared to other immunoassays is the simplicity of the test, by typically requiring little or no sample or reagent preparation.
Patents
This is a highly competitive area and a number of people claim patents in the field, most notably Alere (formerly Inverness Medical Innovations, now owned by Abbott) who own patents originally filed by Unipath. The US 6,485,982 patent, that has been litigated, expired in 2019. A number of other companies also hold patents in this arena. A group of competitors are challenging the validity of the patents. The original patent is apparently from 1988.
Applications
Lateral flow assays have a wide array of applications and can test a variety of samples including urine, blood, saliva, sweat, serum, and other fluids. They are currently used by clinical laboratories, hospitals, physicians and veterinary clinics, food analysis labs and environmental testing facilities.
Immediacy in obtaining results is normally the key factor in choosing this technique, although simplicity and lack of a need for formal equipment are also important factors. These features allow ICTs to be used as at-home tests or in pharmacies. Because of their exceptional quality, rapid tests are also used routinely in well-equipped laboratories when the demand for tests is low.
The broad applications of rapid tests can be realized because of their simplicity accompanied by high-quality analytical production. The sensitivity and specificity of these techniques tend to be comparable to those of other more complex methods, and on occasion significantly better.
Other uses for lateral flow assays are in food and environmental safety and in veterinary medicine, for targets such as diseases and toxins. LFTs are also commonly used for disease identification, such as Ebola, but the most common LFTs are the home pregnancy and SARS-CoV-2 tests.
COVID-19 testing
Lateral flow assays have played a critical role in COVID-19 testing as they have the benefit of delivering a result in 15–30 minutes. The systematic evaluation of lateral flow assays during the COVID-19 pandemic was initiated at Oxford University as part of a UK collaboration with Public Health England. A study that started in June 2020 in the United Kingdom, FALCON-C19, confirmed the sensitivity of some lateral flow devices (LFDs) in this setting. Four out of 64 LFDs tested had desirable performance characteristics according to these early tests; the Innova SARS-CoV-2 Antigen Rapid Qualitative Test performed moderately in viral antigen detection/sensitivity with excellent specificity, although kit failure rates and the impact of training were potential issues. The Innova test's specificity is more widely publicised, but sensitivity in phase 4 trials was 50.1%. This describes a device for which one out of every two patients infected with COVID-19 and tested in real-world conditions would receive a false-negative result. After closure of schools in January 2021, biweekly LFTs were introduced in England for teachers, pupils, and households of pupils when schools re-opened on March 8, 2021 for asymptomatic testing. Biweekly LFT were made universally available to everyone in England on April 9, 2021. LFTs have been used for mass testing for COVID-19 globally and complement other public health measures for COVID-19.
Some scientists outside government expressed serious misgivings in late 2020 about the use of Innova LFDs for screening for Covid. According to Jon Deeks, a professor of biostatistics at the University of Birmingham, England, the Innova test is "entirely unsuitable" for community testing: "as the test may miss up to half of cases, a negative test result indicates a reduced risk of Covid, but does not exclude Covid".
Sensitivity of tests used in 2022 was around 70%.
See also
: LFT test for ovulation
References
Further reading
Porex Clinical Sciences (manufacturer)
Medical terminology
Molecular biology
Biotechnology
Molecular biology techniques
Chromatography
Immunologic tests | Lateral flow test | [
"Chemistry",
"Biology"
] | 2,313 | [
"Chromatography",
"Separation processes",
"Immunologic tests",
"Biotechnology",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Biochemistry"
] |
14,356,754 | https://en.wikipedia.org/wiki/Hatta%20number | The Hatta number (Ha) was developed by Shirôji Hatta (1895-1973 ) in 1932, who taught at Tohoku University from 1925 to 1958. It is a dimensionless parameter that compares the rate of reaction in a liquid film to the rate of diffusion through the film. For a second order reaction (), the maximum rate of reaction assumes that the liquid film is saturated with gas at the interfacial concentration ; thus, the maximum rate of reaction is .
For a reaction order in and order in :
For gas-liquid absorption with chemical reactions, a high Hatta number indicates the reaction is much faster than diffusion. In this case, the reaction occurs within a thin film, and the surface area limits the overall rate. Conversely, a Hatta number smaller than unity suggests the reaction is the limiting factor, and the reaction takes place in the bulk fluid, requiring larger volumes.
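A small numerical sketch (illustrative parameter values; the second-order form Ha = sqrt(k2·CB·DA)/kL, which follows from the general expression above with m = n = 1, is assumed) shows how the Hatta number separates a fast, film-limited reaction from a slow reaction that proceeds in the bulk.

```python
import math

def hatta_second_order(k2, c_b, d_a, k_l):
    """Hatta number for an irreversible second-order reaction A + B -> products,
    assumed here as Ha = sqrt(k2 * C_B * D_A) / k_L."""
    return math.sqrt(k2 * c_b * d_a) / k_l

# Illustrative values (SI units): liquid-side mass-transfer coefficient k_L,
# diffusivity D_A, bulk concentration of B, and second-order rate constant k2.
k_l = 1.0e-4        # m/s
d_a = 1.5e-9        # m^2/s
c_b = 500.0         # mol/m^3

for k2 in (1.0e-3, 1.0e3):   # slow vs. fast rate constant, m^3/(mol*s)
    ha = hatta_second_order(k2, c_b, d_a, k_l)
    regime = "fast (reaction within the film)" if ha > 3 else "slow (reaction in the bulk)"
    print(f"k2 = {k2:g}: Ha = {ha:.2f} -> {regime}")
```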
References
See also
Dimensionless quantity
Dimensional analysis
Catalysis
Dimensionless numbers of chemistry
Transport phenomena | Hatta number | [
"Physics",
"Chemistry",
"Engineering"
] | 202 | [
"Transport phenomena",
"Catalysis",
"Physical phenomena",
"Chemical engineering",
"Chemical reaction stubs",
"Chemical kinetics",
"Dimensionless numbers of chemistry",
"Chemical process stubs"
] |
14,356,889 | https://en.wikipedia.org/wiki/Gamow%20factor | The Gamow factor, Sommerfeld factor or Gamow–Sommerfeld factor, named after its discoverer George Gamow or after Arnold Sommerfeld, is a probability factor for two nuclear particles' chance of overcoming the Coulomb barrier in order to undergo nuclear reactions, for example in nuclear fusion. By classical physics, there is almost no possibility for protons to fuse by crossing each other's Coulomb barrier at temperatures commonly observed to cause fusion, such as those found in the Sun. When George Gamow instead applied quantum mechanics to the problem, he found that there was a significant chance for the fusion due to tunneling.
The probability of two nuclear particles overcoming their electrostatic barriers is given by the following equation:
$$P_G(E) = e^{-\sqrt{E_G/E}},$$
where $E_G$ is the Gamow energy,
$$E_G = 2 m_r c^2 (\pi \alpha Z_a Z_b)^2.$$
Here, $m_r$ is the reduced mass of the two particles. The constant $\alpha$ is the fine-structure constant, $c$ is the speed of light, and $Z_a$ and $Z_b$ are the respective atomic numbers of each particle.
While the probability of overcoming the Coulomb barrier increases rapidly with increasing particle energy, for a given temperature, the probability of a particle having such an energy falls off very fast, as described by the Maxwell–Boltzmann distribution. Gamow found that, taken together, these effects mean that for any given temperature, the particles that fuse are mostly in a temperature-dependent narrow range of energies known as the Gamow window.
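A short numerical sketch (assuming the formulas above and SciPy's physical constants) evaluates the Gamow energy and the barrier-penetration probability for proton-proton fusion at thermal energies typical of the solar core, showing how steeply the probability falls at low energy.

```python
import math
from scipy.constants import fine_structure, physical_constants

def gamow_energy(m_reduced_c2, z_a, z_b):
    """Gamow energy E_G = 2 * m_r c^2 * (pi * alpha * Z_a * Z_b)^2 (same units as m_r c^2)."""
    return 2.0 * m_reduced_c2 * (math.pi * fine_structure * z_a * z_b) ** 2

def gamow_probability(energy, e_gamow):
    """Barrier-penetration probability P(E) = exp(-sqrt(E_G / E))."""
    return math.exp(-math.sqrt(e_gamow / energy))

proton_mass_mev = physical_constants['proton mass energy equivalent in MeV'][0]
m_reduced_c2 = proton_mass_mev / 2.0           # reduced mass of two protons, in MeV

e_g = gamow_energy(m_reduced_c2, 1, 1)         # roughly 0.49 MeV for proton-proton fusion
print(f"Gamow energy: {e_g*1000:.0f} keV")

for energy_kev in (1.0, 2.0, 5.0, 10.0):       # typical solar-core thermal energies
    p = gamow_probability(energy_kev / 1000.0, e_g)
    print(f"E = {energy_kev:4.1f} keV -> P = {p:.2e}")
```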
Derivation
Gamow first solved the one-dimensional case of quantum tunneling using the WKB approximation. Considering a wave function of a particle of mass m, we take area 1 to be where a wave is emitted, area 2 the potential barrier which has height V and width l (at ), and area 3 its other side, where the wave is arriving, partly transmitted and partly reflected. For a wave number k and energy E we get:
where and .
This is solved for given A and α by taking the boundary conditions at both barrier edges, at $x = 0$ and $x = l$, where both the wave function and its derivative must be equal on both sides.
For , this is easily solved by ignoring the time exponential and considering the real part alone (the imaginary part has the same behavior). We get, up to factors depending on the phases which are typically of order 1, and up to factors of the order of (assumed not very large, since V is greater than E not marginally):
Next Gamow modeled the alpha decay as a symmetric one-dimensional problem, with a standing wave between two symmetric potential barriers at and , and emitting waves at both outer sides of the barriers.
Solving this can in principle be done by taking the solution of the first problem, translating it by and gluing it to an identical solution reflected around .
Due to the symmetry of the problem, the emitting waves on both sides must have equal amplitudes (A), but their phases (α) may be different. This gives a single extra parameter; however, gluing the two solutions at requires two boundary conditions (for both the wave function and its derivative), so in general there is no solution. In particular, re-writing (after translation by ) as a sum of a cosine and a sine of , each having a different factor that depends on k and α, the factor of the sine must vanish, so that the solution can be glued symmetrically to its reflection. Since the factor is in general complex (hence its vanishing imposes two constraints, representing the two boundary conditions), this can in general be solved by adding an imaginary part of k, which gives the extra parameter needed. Thus E will have an imaginary part as well.
The physical meaning of this is that the standing wave in the middle decays; the emitted waves newly emitted have therefore smaller amplitudes, so that their amplitude decays in time but grows with distance. The decay constant, denoted λ, is assumed small compared to .
λ can be estimated without solving explicitly, by noting its effect on the probability current conservation law. Since the probability flows from the middle to the sides, we have:
Note the factor of 2 is due to having two emitted waves.
Taking , this gives:
Since the quadratic dependence in is negligible relative to its exponential dependence, we may write:
Remembering the imaginary part added to k is much smaller than the real part, we may now neglect it and get:
Note that is the particle velocity, so the first factor is the classical rate by which the particle trapped between the barriers hits them.
Finally, moving to the three-dimensional problem, the spherically symmetric Schrödinger equation reads (expanding the wave function in spherical harmonics and looking at the n-th term):
Since amounts to enlarging the potential, and therefore substantially reducing the decay rate (given its exponential dependence on ), we focus on , and get a very similar problem to the previous one with , except that now the potential as a function of r is not a step function.
The main effect of this on the amplitudes is that we must replace the argument in the exponent, taking an integral of over the distance where rather than multiplying by l. We take the Coulomb potential:
where is the vacuum electric permittivity, e the electron charge, z = 2 is the charge number of the alpha particle and Z the charge number of the nucleus (Z-z after emitting the particle). The integration limits are then , where we assume the nuclear potential energy is still relatively small, and , which is where the nuclear negative potential energy is large enough so that the overall potential is smaller than E. Thus, the argument of the exponent in λ is:
This can be solved by substituting and then and solving for θ, giving:
where .
Since x is small, the x-dependent factor is of order 1.
Gamow assumed , thus replacing the x-dependent factor by , giving:
with:
which is the same as the formula given in the beginning of the article with ,
and the fine-structure constant .
For a radium alpha decay, Z = 88, z = 2 and m = 4mp, EG is approximately 50 GeV. Gamow calculated the slope of with respect to E at an energy of 5 MeV to be ~ 1014 J−1, compared to the experimental value of .
References
External links
Modeling Alpha Half-life (Georgia State University)
Nuclear physics
George Gamow | Gamow factor | [
"Physics"
] | 1,306 | [
"Nuclear physics"
] |
14,357,725 | https://en.wikipedia.org/wiki/In%20vivo%20magnetic%20resonance%20spectroscopy | In vivo magnetic resonance spectroscopy (MRS) is a specialized technique associated with magnetic resonance imaging (MRI).
Magnetic resonance spectroscopy (MRS), also known as nuclear magnetic resonance (NMR) spectroscopy, is a non-invasive, ionizing-radiation-free analytical technique that has been used to study metabolic changes in brain tumors, strokes, seizure disorders, Alzheimer's disease, depression, and other diseases affecting the brain. It has also been used to study the metabolism of other organs such as muscles. In the case of muscles, NMR is used to measure the intramyocellular lipids content (IMCL).
Magnetic resonance spectroscopy is an analytical technique that can be used to complement the more common magnetic resonance imaging (MRI) in the characterization of tissue. Both techniques typically acquire signal from hydrogen protons (other endogenous nuclei such as those of Carbon, Nitrogen, and Phosphorus are also used), but MRI acquires signal primarily from protons which reside within water and fat, which are approximately a thousand times more abundant than the molecules detected with MRS. As a result, MRI often uses the larger available signal to produce very clean 2D images, whereas MRS very frequently only acquires signal from a single localized region, referred to as a "voxel". MRS can be used to determine the relative concentrations and physical properties of a variety of biochemicals frequently referred to as "metabolites" due to their role in metabolism.
Data Acquisition
Acquiring an MRS scan is very similar to that of MRI with a few additional steps preceding data acquisition. These steps include:
Shimming the magnetic field: this step is taken to correct for the inhomogeneity of the magnetic field by tuning different pulses in the x, y, and z directions. This step is usually automated but can be performed manually.
Suppressing the water signal: because water molecules contain hydrogen, and the relative concentration of water to metabolite is about 10,000:1, the water signal is often suppressed or the metabolite peaks will not be discernible in the spectra. This is achieved by adding water suppression pulses. Recent advances allow proton MRS without water suppression.
Choosing a spectroscopic technique: careful planning of measurements is important in the context of a specific experiment.
Single Voxel Spectroscopy (SVS): has a minimum spatial resolution of approximately 1 cm3, and has the cleanest spectrum free from unwanted artifacts due to the small acquired volume leading to easy shim and less unwanted signals from outside the voxel.
Magnetic Resonance Spectroscopic Imaging (MRSI): a 2-dimensional (or 3-dimensional) MRS technique which uses two/three phase-encoding directions to create a two/three-dimensional map of spectra. The drawbacks of this technique is that having two/three phase encoding directions requires lengthy scan time, and the larger volume of acquisition is more likely to introduce artefacts due to poorer shimming, unsuppressed water, as well as the inherent sinc point-spread-function due to the finite sampling of k-space which results in the signal from one voxel bleeding into all others.
Data Quantification
During data acquisition, the scan acquires raw data in the form of spectra. This raw data must be quantified to achieve a meaningful understanding of the spectrum. This quantification is achieved via linear combination. Linear combination requires knowledge of the underlying spectral shapes, referred to as basis sets. Basis sets are acquired either via numerical simulation or via experimental measurement in phantoms. There are numerous packages available to numerically simulate basis sets, including MARSS, FID-A, among others such as GAMMA, VESPA and Spinach. With the basis sets, the raw data can now be quantified as measured concentrations of different chemical species. Software is used to complete this. LCModel, a commercial software package, has for most of the field's history been the standard quantification package. However, now there are many freeware packages for quantification: AMARES, AQSES, Gannet, INSPECTOR, jMRUI, TARQUIN, and more.
Before linear combination, peak extraction was used for data quantification. However, this is no longer popular nor recommended. Peak extraction is a technique which integrates the area underneath a signal. Despite its seeming straightforwardness, there are several confounds with this technique. Chiefly, the individual Lorentzian shapes employed do not scale up to match the complexity of the spectral shapes of J-coupled metabolites and are too simple to discern between overlapping peaks.
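The core idea of linear-combination quantification can be illustrated with a toy least-squares fit on synthetic data (assuming NumPy; real packages such as LCModel additionally model baselines, lineshape, and phase, which are omitted here, and the peak positions and "concentrations" below are invented): the measured spectrum is expressed as a weighted sum of basis spectra, and the fitted weights are the estimated metabolite concentrations.

```python
import numpy as np

rng = np.random.default_rng(0)
ppm = np.linspace(0.5, 4.5, 512)

def lorentzian(center, width=0.05):
    return 1.0 / (1.0 + ((ppm - center) / width) ** 2)

# Toy basis set: simplified single-resonance "spectra" for three metabolites.
basis = np.column_stack([
    lorentzian(2.01),            # NAA-like peak
    lorentzian(3.03),            # creatine-like peak
    lorentzian(3.22),            # choline-like peak
])

true_conc = np.array([12.0, 8.0, 2.5])
measured = basis @ true_conc + rng.normal(scale=0.3, size=ppm.size)  # synthetic spectrum

# Linear-combination fit: solve for the weights that best reproduce the measurement.
est_conc, *_ = np.linalg.lstsq(basis, measured, rcond=None)
print("true  :", true_conc)
print("fitted:", np.round(est_conc, 2))
```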
Pulse Sequences
Similar to MRI, MRS uses pulse sequences to acquire signal from several different molecules to generate a spectrum instead of an image. In MRS, STEAM (Stimulated Echo Acquisition Method) and PRESS (Point Resolved Spectroscopy) are the two primary pulse sequence techniques used. In terms of advantages, STEAM is best for imaging metabolites with shorter T2 and has lower SAR, while PRESS has higher SNR than STEAM. STEAM and PRESS are most widely used due to their implementation on the major vendors' MR scanners. Beyond STEAM and PRESS there are sequences which utilize adiabatic pulses. Adiabatic pulses produce uniform flip angles even when there is extreme B1 inhomogeneity. Thus, these sequences allow excitation that achieves the sought-for insensitivity to B1 and off-resonance effects in the RF coil and sampled object. Specifically, adiabatic pulses solve the problem of signal dropout that comes from the different B1 flux patterns that result from the surface transmit coils used and the usage of normal pulses. Adiabatic pulses are also useful for constraints on RF peak power for excitation and lowering tissue heating. Additionally, adiabatic pulses have substantially higher bandwidth, which reduces chemical shift displacement artefact, which is particularly important at high field strengths and when a large range of frequencies is desired to be measured (i.e., measuring both the signals upfield and downfield of water in proton MRS).
Spatial Localization Sequences
In PRESS, the two chief drawbacks are lengthy echo time (TE) and chemical shift displacement (CSD) artifacts. Lengthy echo time arises from the fact that PRESS uses two 180° pulses, unlike STEAM which uses exclusively 90° pulses. The duration of 180° pulses is generally longer than that of 90° pulses because it takes more energy to flip a net magnetization vector completely as opposed to only 90°. Chemical shift displacement artifacts arise partly because of less optimal slice selection profiles. Multiple 180° pulses do not allow a very short TE, resulting in a less optimal slice selection profile. Additionally, multiple 180° pulses mean smaller bandwidth and thus larger chemical shift displacement. Specifically, the chemical shift displacement artifacts occur because signals with different chemical shifts experience different frequency-encoded slice selections and thus do not originate from the same volume. Additionally, this effect becomes greater at higher magnetic field strengths.
SPECIAL consists of a spatially selective pre-excitation inversion pulse (typically AFP) followed by spatially selective excitation and refocusing pulses, both of which are usually SLR or truncated sinc pulses.
SPECIAL is a hybrid of PRESS and Image-Selected In Vivo Spectroscopy (ISIS). ISIS achieves spatial localization in the three spatial dimensions through a series of eight slice-selective preinversion pulses that can be appropriately positioned so that the sum of the eight cycles removes all signal outside the desired 3D region. SPECIAL obtains spatial localization from only a single dimension with pre-excitation inversion pulses (cycled on and off every other repetition time [TR]), making it a two-cycle sequence.
The use of the preinversion pulse to remove one refocusing pulse (as compared with PRESS) is what allows SPECIAL to achieve a short TE, reaching a minimum of 2.2 msec on a preclinical scanner in rat brain while being able to recover the full signal and as low as 6 msec on a clinical 3T scanner.
The largest drawback of SPECIAL and SPECIAL-sLASER is that they are two-cycle schemes, and systematic variations between cycles will manifest in their difference spectrum. Lipid contamination is a particularly large problem with SPECIAL and similar sequences.
The state-of-the-art localization sequence is sLASER, which utilizes two pairs of adiabatic refocusing pulses. This has recently been recommended by consensus.
Several modifications can reduce this lipid contamination. The first is through outer volume suppression (OVS), which will reduce the contamination of lipid signals that originate from outside the voxel, although this comes at the cost of an increase in SAR. The second is not to set the amplitude of the pre-excitation inversion pulse to zero every other TR, but instead to shift the location of this ISIS plane such that the excited volume for the off condition is outside the object. This has been shown to greatly reduce lipid contamination, speculated to have arisen from the interaction between the RF pulse and lipid compartments due to incomplete relaxation, magnetization transfer, or the homonuclear Overhauser effect, although the exact mechanism remains unknown. The third is to use an echo-planar readout that dephases magnetization from outside the voxel, also shown to substantially reduce lipid artifacts. All three methods could be combined to overcome lipid contamination.
One important aspect of a pulse sequence is its coherence pathway: the sequence of quantum coherence order(s) the signal takes prior to its acquisition. All coherence pathways end in -1, as this is the only coherence order detected by quadrature coils. The spin echo-type sequences (PRESS, sLASER, LASER) simply alternate between +1 and -1. For example, the coherence pathway for PRESS (expressed as a vector) is [-1, 1, -1]. This indicates that after the initial RF pulse (excitation pulse) the spins have a -1 quantum coherence. The refocusing pulses then swap the -1 to +1, then back from +1 to -1 (where it is then detected). Similarly, the coherence pathway for sLASER is [-1, 1, -1, 1, -1], and that for LASER is [-1, 1, -1, 1, -1, 1, -1]. The coherence pathway for SPECIAL is [0, 1, -1]; this indicates that after the first RF pulse the signal resides as a population, due to its quantum coherence order of 0. Coherence pathways are critical as they explain how the sequences are affected by crushers and phase cycling. As such, coherence pathway analysis has been used to develop optimized crusher schemes and phase cycling schemes for an arbitrary MRS experiment.
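The vector notation above lends itself to a simple programmatic check. The sketch below (plain Python; the dictionary and helper function are only illustrative) encodes the coherence pathways listed in this paragraph and verifies the stated rule that a detectable pathway must terminate at coherence order -1.

# Coherence pathways from the paragraph above, written as lists of coherence
# orders after each RF pulse. Only order -1 is detected by quadrature coils.
SEQUENCE_PATHWAYS = {
    "PRESS":   [-1, +1, -1],
    "sLASER":  [-1, +1, -1, +1, -1],
    "LASER":   [-1, +1, -1, +1, -1, +1, -1],
    "SPECIAL": [0, +1, -1],   # order 0 after the first pulse: signal stored as a population
}

def ends_detectable(pathway):
    # A pathway contributes to the acquired signal only if it finishes at -1.
    return pathway[-1] == -1

for name, pathway in SEQUENCE_PATHWAYS.items():
    print(f"{name:8s} {pathway} detectable: {ends_detectable(pathway)}")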
Uses
MRS allows doctors and researchers to obtain biochemical information about the tissues of the human body in a non-invasive way (without the need for a biopsy), whereas MRI only gives them information about the structure of the body (the distribution of water and fat).
For example, whereas MRI can be used to assist in the diagnosis of cancer, MRS could potentially be used to provide information about the aggressiveness of a tumor. Furthermore, because many pathologies appear similar in diagnostic imaging (such as radiation-induced necrosis and recurring tumor following radiotherapy), MRS may in the future be used to help distinguish between such similar-appearing conditions.
MRS equipment can be tuned (just like a radio receiver) to pick up signals from different chemical nuclei within the body. The most common nuclei to be studied are protons (hydrogen), phosphorus, carbon, sodium and fluorine.
The types of biochemicals (metabolites) which can be studied include choline-containing compounds (which are used to make cell membranes), creatine (a chemical involved in energy metabolism), inositol and glucose (both sugars), N-acetylaspartate, and alanine and lactate which are elevated in some tumors.
At present MRS is mainly used as a tool by scientists (e.g. medical physicists and biochemists) for medical research projects, but it is becoming clear that it can also give doctors useful clinical information, especially since the discovery that it can be used to probe the concentration of alpha-hydroxyglutaric acid, which is present only in IDH1- and IDH2-mutated gliomas, a finding that alters the prescribed treatment regimen.
MRS is currently used to investigate a number of diseases in the human body, most notably cancer (in brain, breast and prostate), epilepsy, Alzheimer's disease, Parkinson's disease, and Huntington's chorea. MRS has been used to diagnose pituitary tuberculosis.
Prostate cancer: combined with magnetic resonance imaging (MRI), three-dimensional MRS can predict malignant degeneration of prostate tissue with an accuracy of approximately 90%. The combination of both methods may be helpful in planning biopsies and therapies of the prostate, as well as in monitoring the success of a therapy.
Example
Shown below is an MRI brain scan (in the axial plane, that is slicing from front-to-back and side-to-side through the head) showing a brain tumor (meningioma) at the bottom right. The red box shows the volume of interest from which chemical information was obtained by MRS (a cube with 2 cm sides which produces a square when intersecting the 5 mm thick slice of the MRI scan).
Each biochemical, or metabolite, has a different peak in the spectrum which appears at a known frequency. The peaks corresponding to the amino acid alanine are highlighted in red (at 1.4 ppm). This is an example of the kind of biochemical information which can help doctors to make their diagnosis. Other metabolites of note are choline (3.2 ppm) and creatine (3.0 ppm).
Applications of MRS
In 1H magnetic resonance spectroscopy, each proton can be visualized at a specific chemical shift (peak position along the x-axis) depending on its chemical environment. This chemical shift is dictated by neighboring protons within the molecule. Therefore, metabolites can be characterized by their unique set of 1H chemical shifts. The metabolites that MRS probes for have known (1H) chemical shifts that have previously been identified in NMR spectra. These metabolites include the following (a brief numerical illustration of their chemical shifts appears after the list):
N-acetyl aspartate (NAA): with its major resonance peak at 2.02 ppm, a decrease in the level of NAA indicates loss of or damage to neuronal tissue, which results from many types of insults to the brain. Its presence in normal conditions indicates neuronal and axonal integrity.
Choline: with its major peak at 3.2 ppm, choline is known to be associated with membrane turnover, or increase in cell division. Increased choline indicates increase in cell production or membrane breakdown, which can suggest demyelination or presence of malignant tumors.
Creatine and phosphocreatine: with its major peak at 3.0 ppm, creatine is a marker of brain energy metabolism. Gradual loss of creatine in conjunction with other major metabolites indicates tissue death or major cell death resulting from disease, injury or lack of blood supply. An increase in creatine concentration could be a response to craniocerebral trauma. Absence of creatine may be indicative of a rare congenital disease.
Lipids: with their major aliphatic peaks located in the 0.9–1.5 ppm range, an increase in lipids is also indicative of necrosis. These spectra are easily contaminated, as lipids are present not only in the brain, but also in other biological tissue such as the fat in the scalp and in the area between the scalp and skull.
Lactate: an AX3 system which results in a doublet (two symmetric peaks) centered about 1.31 ppm, and a quartet (four peaks with relative peak heights of 1:2:2:1) centered about 4.10 ppm. The doublet at 1.31 ppm is typically quantified, as the quartet may be suppressed through water saturation or obscured by residual water. In healthy subjects lactate is not visible, for its concentration is lower than the detection limit of MRS; however, presence of this peak indicates that glycolysis has been initiated in an oxygen-deficient environment. Several causes of this include ischemia, hypoxia, mitochondrial disorders, and some types of tumors.
Myo-inositol: with its major peak at 3.56 ppm, elevated myo-inositol has been observed in patients with Alzheimer's disease, dementia, and HIV infection.
Glutamate and glutamine: these amino acids are marked by a series of resonance peaks between 2.2 and 2.4 ppm. Hyperammonemia and hepatic encephalopathy are two major conditions that result in elevated levels of glutamine and glutamate. MRS, used in conjunction with MRI or some other imaging technique, can be used to detect changes in the concentrations of these metabolites, or significantly abnormal concentrations of these metabolites.
GABA can be detected primarily from its peaks at approximately 3.0 ppm; however, because creatine has a strong singlet at 3.0 ppm with approximately 20 times the amplitude, a technique which exploits J-coupling must be used to accurately quantify GABA. The most common techniques for this are J-difference editing (MEGA) and J-resolved spectroscopy (as used in JPRESS).
Glutathione can also be detected, from its peak at 3.0 ppm; however, similar to GABA, a method which exploits J-coupling must be used to remove the overlying creatine signal.
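To make the chemical shift values in the list above more tangible, the short sketch below (Python) converts each metabolite's shift into a frequency offset from water, which is what the scanner actually resolves. The 3 T field strength and the 4.7 ppm water reference are assumptions chosen for illustration; the gyromagnetic ratio is the textbook 1H value.

GAMMA_1H_MHZ_PER_T = 42.577     # 1H gyromagnetic ratio, MHz per tesla
WATER_PPM = 4.7                 # assumed reference position of the water peak
B0_TESLA = 3.0                  # assumed field strength

METABOLITE_PPM = {
    "NAA": 2.02,
    "creatine": 3.0,
    "choline": 3.2,
    "myo-inositol": 3.56,
    "lactate (doublet)": 1.31,
}

def ppm_to_hz_offset(ppm, b0_tesla=B0_TESLA):
    # 1 ppm corresponds to one millionth of the Larmor frequency, so at 3 T
    # (about 127.7 MHz for protons) each ppm is roughly 127.7 Hz.
    larmor_mhz = GAMMA_1H_MHZ_PER_T * b0_tesla
    return (ppm - WATER_PPM) * larmor_mhz

for name, ppm in METABOLITE_PPM.items():
    print(f"{name:18s} {ppm:5.2f} ppm -> {ppm_to_hz_offset(ppm):+8.1f} Hz from water at {B0_TESLA:.0f} T")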
Limitations of MRS
The major limitation to MRS is its low available signal due to the low concentration of metabolites as compared to water. As such, it has inherently poor temporal and spatial resolution. Nevertheless, no alternate technique is able to quantify metabolism in vivo non-invasively and thus MRS remains a valuable tool for research and clinical scientists.
In addition, despite recent efforts toward international expert consensus on methodological details (such as shimming, motion correction, spectral editing, spectroscopic neuroimaging, other advanced acquisition methods, data processing and quantification, application to the brain, proton and phosphorus spectroscopy applied to skeletal muscle, methods description, and results reporting), currently published implementations of in vivo magnetic resonance spectroscopy cluster into literatures exhibiting a broad variety of individualized acquisition, processing, quantification, and reporting techniques. This situation may contribute to the low sensitivity and specificity of, for example, in vivo proton magnetic resonance spectroscopy to disorders such as multiple sclerosis, which continue to fall below clinically beneficial thresholds for, e.g., diagnosis.
Non-Proton (1H) MRS
31Phosphorus Magnetic Resonance Spectroscopy
1H MRS's clinical success is rivaled only by that of 31P MRS. This is in large part because of the relatively high sensitivity of phosphorus NMR (7% of that of protons) combined with its 100% natural abundance.
Consequently, high-quality spectra can be acquired within minutes. Even at low field strengths, good spectral resolution is obtained because of the relatively large (~30 ppm) chemical shift dispersion of in vivo phosphates. Clinically, phosphorus NMR excels because it detects all metabolites playing key roles in tissue energy metabolism and can indirectly deduce intracellular pH. However, phosphorus NMR is chiefly limited by the small number of metabolites it can detect.
13Carbon Magnetic Resonance Spectroscopy
In contrast to phosphorus NMR, carbon NMR is an insensitive technique. This arises from the fact that 13C has a low natural abundance (1.1%) and carbon has a low gyromagnetic ratio. The reliance on this low-abundance isotope arises because 12C has no magnetic moment and is therefore not NMR active, leaving 13C as the nucleus used for spectroscopy. This low sensitivity can, however, be improved via decoupling, averaging, polarization transfer, and larger volumes. Despite the low natural abundance and sensitivity of 13C, 13C MRS has been used to study several metabolites, especially glycogen and triglycerides. It has proven especially useful in providing insight into metabolic fluxes from 13C-labeled precursors. There is great overlap in the information obtainable by 1H MRS and 13C MRS, which, combined with 1H MRS's much higher sensitivity, is a large part of the reason why 13C MRS has never seen application as wide as that of 1H MRS. See also Hyperpolarized carbon-13 MRI.
23Sodium Magnetic Resonance Spectroscopy
Sodium NMR is infamous for its low sensitivity (9.2% relative to proton sensitivity) and low SNR because of the low sodium concentration in tissue (30–100 mM), especially compared to protons (40–50 M). However, interest in sodium NMR has been reinspired by recent significant gains in SNR at high magnetic fields, along with improved coil designs and optimized pulse sequences. There is much hope for sodium NMR's clinical potential because the detection of abnormal intracellular sodium in vivo may have significant diagnostic potential and reveal new insights into tissue electrolyte homeostasis.
19Fluorine Magnetic Resonance Spectroscopy
Fluorine NMR has high sensitivity (82% relative to proton sensitivity) and 100% natural abundance. However, it is important to note that no endogenous 19F-containing compounds are found in biological tissues, and thus the fluorine signal comes from an external reference compound. Because 19F is not found in biological tissues, 19F does not have to contend with interference from background signals the way in vivo 1H MRS does with water, making it especially powerful for pharmacokinetic studies. 1H MRI provides the anatomical landmarks, while 19F MRI/MRS allows the specific interactions of specific compounds to be followed and mapped. In vivo 19F MRS can be used to monitor the uptake and metabolism of drugs, study the metabolism of anesthetics, determine cerebral blood flow, and measure, via fluorinated compounds ("probes"), various parameters such as pH, oxygen levels, and metal concentration.
See also
Functional magnetic resonance spectroscopy of the brain
Magnetic resonance imaging
Magnetization transfer
NMR
NMR spectroscopy
References
External links
Online Physics Tutorial for MRI and MRS
https://aclarion.com/
NOCISCAN (aclarion) – The first, evidence-supported, SaaS platform to leverage MR Spectroscopy to noninvasively help physicians distinguish between painful and nonpainful discs in the spine.
In vivo
Nuclear magnetic resonance spectroscopy | In vivo magnetic resonance spectroscopy | [
"Physics",
"Chemistry"
] | 4,724 | [
"Nuclear magnetic resonance",
"Spectrum (physical sciences)",
"Magnetic resonance imaging",
"Nuclear magnetic resonance spectroscopy",
"Spectroscopy"
] |
14,357,843 | https://en.wikipedia.org/wiki/System%20requirements%20%28spacecraft%20system%29 | System requirements in spacecraft systems are the specific system requirements needed to design and operate a spacecraft or a spacecraft subsystem.
Overview
Spacecraft systems are normally developed under the responsibility of space agencies such as NASA, ESA, etc. In the space sector, standardized terms and processes have been introduced to allow for unambiguous communication between all partners and efficient usage of all documents. For instance, the life cycle of space systems is divided into phases:
Phase A: Feasibility Study
Phase B: Requirements Definition
Phase C/D: Design / Manufacturing / Verification
Phase E: Operational usage.
At the end of phase B, the system requirements, together with a statement of work, are sent out to request proposals from industry.
Technical systems requirement
Both technical and nontechnical system requirements are contained in the statement of work.
The technical system requirements documented in the System Specification remain at mission level: system functions and performance, orbit, launch vehicle, etc.
Non-technical system (task) requirements: Cost and progress reporting, Documentation maintenance, etc.
The customer (requirements) specification is answered by the contractor with a design-to specification.
For example, the requirement "Columbus shall be launched by the Space Shuttle." is detailed in the contractor system specification "Columbus shall be a cylindrical pressurized module with max. length of 6.9 meters and 4.5 meters diameter as agreed in the Shuttle/Columbus ICD."
Operations environment
The spacecraft's systems specification, according to David Michael Harland (2005), usually also defines the operational environment of the spacecraft. It is mostly defined "as a model - often provided by the scientific community from available data - in the form of a set of curves, numerical tables, or software, usually with a nominal expectation and the minimal and maximum profiles which the environment is not expected to exceed".
System specification structure
A typical industry-generated system specification for a spacecraft has the following structure (e.g. the Columbus Design Spec, COL-RIBRE-SPE-0028, iss. 10/F, 06.25.2004):
Document change record
1. Scope
1.1 Purpose
1.2 Summary description
1.3 Classification
1.4 Applicability
2. Related documents
2.1 Applicable documents (incl. order of precedence)
2.2 Reference documents
3. Functional /Performance Requirements
4. Support requirements
4.1 Product assurance
4.2 Electro-magnetic compatibility
4.3 Contamination
4.4 etc.
5. Interface requirements
5.1 System interfaces
5.1.1 Launcher
5.1.2 Ground stations
5.1.3 etc.
5.2 Subsystem interfaces
5.2.1 Electrical power
5.2.2 Data
5.2.3 etc.
6. Implementation requirements
6.1 Configuration
6.2 Budget Allocations
6.2.1 Mass
6.2.2 Electrical power
6.2.3 etc.
7. Preparation for delivery
Attachments (Abbreviation list etc.)
Each requirement paragraph consists of the requirement to be fulfilled by the product to be delivered and the verification requirement (Review of design, analysis, test, inspection).
Specification tree
The spacecraft system specification also defines the subsystems of the spacecraft, e.g.: structure, data management subsystem (incl. software), electrical power, mechanical, etc.
For each subsystem a subsystem specification is prepared by the Prime Contractor with the same specification structure shown above including references to the parent paragraph in the system specification. In the same way the subsystem contractor prepares an assembly or unit specification. All these specifications are listed in a so-called specification tree showing all specifications and their linkage as well as the issue / date of each specification.
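As an informal illustration only, the following sketch (Python; all document numbers, titles and issue strings are hypothetical and not taken from any real programme) shows one way a specification tree could be represented, with each node carrying its identifier, issue/date, and links to its child specifications.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Specification:
    identifier: str                      # hypothetical document number
    title: str
    issue: str                           # issue / date of this specification
    children: List["Specification"] = field(default_factory=list)

def print_tree(spec, depth=0):
    # Walk the tree and print each specification with its issue, indented by level.
    print("  " * depth + f"{spec.identifier} ({spec.issue}): {spec.title}")
    for child in spec.children:
        print_tree(child, depth + 1)

system = Specification("SYS-SPEC-001", "Spacecraft system specification", "iss. 3 / 2004")
dms = Specification("SUB-SPEC-010", "Data management subsystem specification", "iss. 2 / 2004")
dms.children.append(Specification("UNIT-SPEC-101", "On-board computer unit specification", "iss. 1 / 2003"))
system.children.extend([dms, Specification("SUB-SPEC-020", "Electrical power subsystem specification", "iss. 2 / 2004")])

print_tree(system)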
Literature
2005, David Michael Harland, Ralph Lorenz, Space Systems Failures: Disasters and Rescues of Satellites, Rockets, Springer, p. 178.
2003, Peter W. Fortescue, Graham Swinerd, Spacecraft Systems Engineering, John Wiley and Sons, 704 pp.
2001, DoD - Systems Management College, Systems Engineering Fundamentals. Defense Acquisition University Press, January 2001.
See also
Requirements
Requirements analysis
Requirements engineering
Requirements management
Verification of system requirements
Verification (spaceflight)
References
External links
NASA Completes Milestone Review Of Next Human Spacecraft System Nasa article 1999.
Example of a SYSTEM REQUIREMENTS sheet for a spacecraft
Columbus System Specification COL-RIBRE-SPE-0028 for phase C/D
Spaceflight concepts
Systems engineering | System requirements (spacecraft system) | [
"Engineering"
] | 889 | [
"Systems engineering"
] |
9,211,703 | https://en.wikipedia.org/wiki/Leptosphaeria%20maculans | Leptosphaeria maculans (anamorph Phoma lingam) is a fungal pathogen of the phylum Ascomycota that is the causal agent of blackleg disease on Brassica crops. Its genome has been sequenced, and L. maculans is a well-studied model phytopathogenic fungus. Symptoms of blackleg generally include basal stem cankers, small grey lesions on leaves, and root rot. The major yield loss is due to stem canker. The fungus is dispersed by the wind as ascospores or rain splash in the case of the conidia. L. maculans grows best in wet conditions and a temperature range of 5–20 degrees Celsius. Rotation of crops, removal of stubble, application of fungicide, and crop resistance are all used to manage blackleg. The fungus is an important pathogen of Brassica napus (canola) crops.
Host and symptoms
Leptosphaeria maculans causes phoma stem canker or blackleg. Symptoms generally include basal stem cankers, small grey oval lesions on the leaf tissue and root rot (as the fungus can directly penetrate roots). L. maculans infects a wide variety of Brassica crops including cabbage (Brassica oleracea) and oilseed rape (Brassica napus). L. maculans is especially virulent on Brassica napus. The first dramatic epidemic of L. maculans occurred in Wisconsin on cabbage. The disease is diagnosed by the presence of small black pycnidia which occur on the edge of the leaf lesions. The presence of these pycnidia allows this disease to be distinguished from Alternaria brassicae, another foliar pathogen with similar lesions but no pycnidia.
Disease cycle
Leptosphaeria maculans has a complicated life cycle. The pathogen begins as a saprophyte on stem residue and survives in the stubble. It then begins a hemibiotrophic stage that results in the production of leaf spots. Colonizing the plant tissue systemically, it begins its endophytic stage within the stem. (Due to its systemic parasitism, quantitative assessment of L. maculans's impact cannot rely on lesion size or number.) When the growing season ends, the fungus causes cankers at the base of the plant, thereby beginning another necrotrophic stage.
Leptosphaeria maculans has both a teleomorph phase (sexual reproduction to generate pseudothecia that release ascospores) and an anamorph phase (asexual reproduction to produce pycnidia that release pycnidiospores). The disease spreads by wind-borne dispersal of ascospores and rain splash of conidia. In addition, phoma stem canker can also be spread by infected seeds when the fungus infects the seed pods of Brassica napus during the growing season, but this is far less frequent. The disease is polycyclic in nature even though the conidia are not as virulent as the ascospores. The disease cycle starts with airborne ascospores which are released from the pseudothecia in the spring. The ascospores enter through the stomata to infect the plant. Soon after the infection, grey lesions and black pycnidia form on the leaves.
During the growing season, these pycnidia produce conidia that are dispersed by rain splash. These spores cause a secondary infection which is usually less severe than primary infection with ascospores. Stem cankers form from the disease moving systemically through the plant. Following the colonization of the intercellular spaces, the fungus will reach a vascular strand and spread down the stalk between the leaf and the stem. The disease will spread into as well as between the cells of the xylem. This colonization leads to the invasion and destruction of the stem cortex, which leads to the formation of stem canker.
Stubble forms after the growing season due to residual plant material left in the field after harvest. The disease overwinters as pseudothecia and mycelium in the stubble. In spring the pseudothecia release their ascospores and the cycle repeats itself.
Virulence genetics
AvrLm3 is a gene which produces an effector that is recognized by Rlm3, in which case it acts as an avirulence gene.
Environment
Temperature and moisture are the two most important environmental conditions for the development of L. maculans spores. A temperature of 5-20 degrees Celsius is the optimal temperature range for pseudothecia to mature. A wet humid environment increases the severity of the disease due to the dispersal of conidia by rain splash. As well as rain, hail storms also increase the severity of the disease.
Management
Cultural methods such as removing stubble and crop rotation can be very effective. By removing the stubble, overwintering pseudothecia and mycelium are less prevalent, reducing the risk of infection. In Canada, crop rotation decreases blackleg dramatically in canola crops. It is suggested to have a 3-year crop rotation of canola and to plant non-host plants such as cereals in between these periods. Chemical methods, such as the application of fungicides, can decrease instances of disease. EBI and MBC fungicides are typically used. EBI fungicides inhibit ergosterol biosynthesis, whereas MBC fungicides disrupt beta-tubulin assembly in mitosis. EBIs are the best option for control of L. maculans as they inhibit the growth of conidia. Although fungicides such as EBIs are effective on conidia, they have no effect on ascospores, which will grow regardless of the fungicide concentration. Resistance methods can also be used to great effect. Typically, race-specific Rlm genes (Rlm1-Rlm9) are used for resistance in Brassica napus crops.
Plant disease resistance
Leptosphaeria maculans is controlled by both race-specific gene-for-gene resistance via so-called resistance (R) genes detecting corresponding avirulence (Avr) genes and quantitative, broad, resistance traits. Since L. maculans is sequenced and due to the importance of this pathogen, many different Avr genes have been identified and cloned.
Arabidopsis thaliana model system
Arabidopsis thaliana is a commonly used model organism in plant sciences which is closely related to Brassica. Interestingly, this model organism shows a very high degree of resistance to L. maculans in all accessions tested (except An-1, which provided the source for the rlm3 allele, see below), with no virulent races known to date, which makes this pathosystem close to a non-host interaction. This high level of resistance can, however, be broken by mutation, and some resistance can be transferred from A. thaliana to Brassica napus - for example, a B. napus chromosome addition line carrying A. thaliana chromosome 3 is more resistant to L. maculans.
RLM1 and RLM2
Despite all A. thaliana accessions being resistant to L. maculans, it was discovered that this resistance could be regulated by different loci. In crosses between different accessions, two loci were discovered: RLM1 on chromosome 1 and RLM2 on chromosome 4. The R gene responsible for RLM1 resistance was identified as an R gene of the TIR-NB-LRR family, but the T-DNA insertion mutants were less susceptible than the natural rlm1 allele, indicating that multiple genes at the locus could contribute to resistance.
RLM3
In contrast to RLM1 and RLM2, RLM3 is not specific to L. maculans, and mutant alleles of this gene cause broad susceptibility to multiple fungi.
Camalexin
Camalexin is a phytoalexin which is induced independently of RLM1-mediated resistance and mutants disrupted in camalexin biosynthesis show susceptibility to L. maculans, indicating that this is a critical resistance mechanism.
Phytohormones
Mutations in the signaling and biosynthesis of the traditional plant disease resistance hormones salicylic acid (SA), jasmonic acid (JA) and ethylene (ET) do not disrupt A. thaliana resistance to L. maculans. On the other hand, mutants disrupted in abscisic acid (ABA) biosynthesis or signaling are susceptible to L. maculans. SA and JA do, however, contribute to tolerance in a compatible interaction in which RLM1- and camalexin-mediated resistances have been mutated, and a quadruple mutant (in which RLM1, camalexin, JA- and SA-dependent responses are all blocked) is hyper-susceptible. In contrast, ET appears to be detrimental for disease resistance.
Brassica crops
The Brassica crops consist of combinations of 3 major ancestral genomes (A, B and C), with the most important canola crop being Brassica napus, which has an AACC genome. Most resistance traits have been introgressed into B. napus from wild Brassica rapa (AA genome) relatives. In contrast, no or very few L. maculans resistance traits can be found in the Brassica oleracea (CC genome) parental species. Additionally, some resistance traits have been introgressed from the "B" genomes of Brassica nigra (BB genome), Brassica juncea (AABB genome) or Brassica carinata (BBCC genome) into B. napus. In the Brassica-L. maculans interactions, many race-specific resistance genes are known, and some of the corresponding fungal avirulence genes have also been identified.
Rlm1
Rlm1 has been mapped to Brassica chromosome A07. Rlm1 will induce a resistance response against an L. maculans strain harboring the AvrLm1 avirulence gene.
Rlm2
Rlm2 will induce a resistance response against an L. maculans strain harboring the AvrLm2 avirulence gene. Rlm2 is located on chromosome A10 at the same locus as LepR3 and has been cloned. The Rlm2 gene encodes a receptor-like protein with a transmembrane domain and extracellular leucine-rich repeats.
Rlm3
Rlm3 has been mapped to Brassica chromosome A07. Rlm3 will induce a resistance response against an L. maculans strain harboring AvrLm3 (see above).
Rlm4
Rlm4 has been mapped to Brassica chromosome A07. Rlm4 will induce a resistance response against an L. maculans strain harboring the AvrLm4-7 avirulence gene.
Rlm5
Rlm5 and RlmJ1 have been found in Brassica juncea but it is still uncertain whether they reside on the A or B genomes.
Rlm6
Rlm6 is normally found in the B genome in Brassica juncea or Brassica nigra. This resistance gene was introgressed into Brassica napus from the mustard Brassica juncea.
Rlm7
Rlm7 has been mapped to Brassica chromosome A07.
Rlm8
Rlm8 resides on the A genome in Brassica rapa and Brassica napus, but it has not yet been mapped further.
Rlm9
The Rlm9 gene (mapped to chromosome A07) has been cloned and it encodes a Wall-associated-kinase-like (WAKL) protein. Rlm9 responds to the AvrLm5-9 avirulence gene.
Rlm10
Like with Rlm6, Rlm10 is present in the B genome of Brassica juncea or Brassica nigra, but it has not yet been introgressed into Brassica napus.
Rlm11
Rlm11 resides on the A genome in Brassica rapa and Brassica napus, but it has not yet been mapped further.
LepR3
LepR3 was introduced into the Australian B. napus cultivar Surpass 400 from a wild B. rapa var. sylvestris. This resistance became ineffective within three years of commercial cultivation. LepR3 will induce a resistance response against an L. maculans strain harboring the AvrLm1 avirulence gene. LepR3 is located at the same locus as Rlm2, and this gene has also been cloned. Like the Rlm2 allele, the encoded LepR3 protein is a receptor-like protein with a transmembrane domain and extracellular leucine-rich repeats. The predicted protein structure indicates that the LepR3 and Rlm2 R genes (in contrast to the intracellular Arabidopsis RLM1 R gene) sense L. maculans in the extracellular space (apoplast).
Importance
Leptosphaeria maculans is the most damaging pathogen of Brassica napus, which is used as a feed source for livestock and for its rapeseed oil. L. maculans destroys around 5–20% of canola yields in France. The disease is very important in England as well: from 2000 to 2002, the disease resulted in approximately £56 million worth of damage per season. Rapeseed oil is the preferred European oil source for biofuel due to its high yield. B. napus produces more oil per land area than other sources like soybeans. Major losses to oilseed crops have also occurred in Australia. The most recent significant losses were in 2003, to the widely planted B. napus cultivars containing a resistance gene from B. rapa.
L. maculans metabolizes brassinin, an important phytoalexin produced by Brassica species, into indole-3-carboxaldehyde and indole-3-carboxylic acid. Virulent isolates proceed through the (3-indolylmethyl)dithiocarbamate S-oxide intermediate, while avirulent isolates first convert brassinin to N-acetyl-3-indolylmethylamine and 3-indolylmethylamine. Research has shown that brassinin could be important as a chemo-preventative agent in the treatment of cancer.
As a bioengineering innovation, in 2010 it was shown that a light-driven protein from L. maculans could be used to mediate, alongside earlier reagents, multi-color silencing of neurons in the mammalian nervous system.
References
Further reading
Pleosporales
Fungal plant pathogens and diseases
Canola diseases
Fungi described in 1803
Taxa named by James Sowerby
Fungus species | Leptosphaeria maculans | [
"Biology"
] | 3,097 | [
"Fungi",
"Fungus species"
] |
9,212,193 | https://en.wikipedia.org/wiki/Bow%20shock%20%28aerodynamics%29 | A bow shock, also called a detached shock or bowed normal shock, is a curved propagating disturbance wave characterized by an abrupt, nearly discontinuous, change in pressure, temperature, and density. It occurs when a supersonic flow encounters a body, around which the necessary deviation angle of the flow is higher than the maximum achievable deviation angle for an attached oblique shock (see detachment criterion). Then, the oblique shock transforms in a curved detached shock wave. As bow shocks occur for high flow deflection angles, they are often seen forming around blunt bodies, because of the high deflection angle that the body impose to the flow around it.
The thermodynamic transformation across a bow shock is non-isentropic and the shock decreases the flow velocity from supersonic velocity upstream to subsonic velocity downstream.
Applications
The bow shock significantly increases the drag in a vehicle traveling at a supersonic speed. This property was utilized in the design of the return capsules during space missions such as the Apollo program, which need a high amount of drag in order to slow down during atmospheric reentry.
Shock relations
As with normal and oblique shocks:
The upstream static pressure is lower than the downstream static pressure.
The upstream static density is lower than the downstream static density.
The upstream static temperature is lower than the downstream static temperature.
The upstream total pressure is greater than the downstream total pressure.
The upstream total density is lower than the downstream total density.
The upstream total temperature is equal to the downstream total temperature, as the shock wave is assumed to be isenthalpic.
For a curved shock, the shock angle varies and the shock thus has variable strength across the entire shock front. The post-shock flow velocity and vorticity can therefore be computed via Crocco's theorem, which is independent of any EOS (equation of state), assuming inviscid flow.
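At the apex of a bow shock, where the detached shock is locally normal to the incoming flow, the relations listed above reduce to the classical normal-shock (Rankine–Hugoniot) relations for a calorically perfect gas. The sketch below (Python; the perfect-gas assumption and gamma = 1.4 for air are simplifying assumptions) evaluates those ratios for a few upstream Mach numbers, showing the static quantities rising, the downstream flow becoming subsonic, and the total pressure dropping, as stated above.

def normal_shock(M1, gamma=1.4):
    # Classical perfect-gas normal-shock relations (upstream state 1, downstream state 2).
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)             # p2/p1 > 1
    rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)        # rho2/rho1 > 1
    T_ratio = p_ratio / rho_ratio                                            # T2/T1 > 1
    M2 = ((1.0 + 0.5 * (gamma - 1.0) * M1**2) /
          (gamma * M1**2 - 0.5 * (gamma - 1.0))) ** 0.5                      # subsonic downstream
    p0_ratio = rho_ratio ** (gamma / (gamma - 1.0)) * p_ratio ** (-1.0 / (gamma - 1.0))  # p02/p01 < 1
    return p_ratio, rho_ratio, T_ratio, M2, p0_ratio

for M1 in (1.5, 3.0, 5.0):
    p, rho, T, M2, p0 = normal_shock(M1)
    print(f"M1={M1}: p2/p1={p:.2f}, rho2/rho1={rho:.2f}, T2/T1={T:.2f}, M2={M2:.3f}, p02/p01={p0:.3f}")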
See also
Bow shock
Gas dynamics
Moving shock
Prandtl–Meyer expansion fan
References
Aerodynamics
Shock waves | Bow shock (aerodynamics) | [
"Physics",
"Chemistry",
"Engineering"
] | 406 | [
"Physical phenomena",
"Shock waves",
"Aerodynamics",
"Waves",
"Aerospace engineering",
"Fluid dynamics"
] |
9,223,226 | https://en.wikipedia.org/wiki/Gullstrand%E2%80%93Painlev%C3%A9%20coordinates | Gullstrand–Painlevé coordinates are a particular set of coordinates for the Schwarzschild metric – a solution to the Einstein field equations which describes a black hole. The ingoing coordinates are such that the time coordinate follows the proper time of a free-falling observer who starts from far away at zero velocity, and the spatial slices are flat. There is no coordinate singularity at the Schwarzschild radius (event horizon). The outgoing ones are simply the time reverse of ingoing coordinates (the time is the proper time along outgoing particles that reach infinity with zero velocity).
The solution was proposed independently by Paul Painlevé in 1921 and Allvar Gullstrand in 1922. It was not explicitly shown that these solutions were simply coordinate transformations of the usual Schwarzschild solution until 1933 in Lemaître's paper, although Einstein immediately believed that to be true.
Derivation
The derivation of GP coordinates requires defining the following coordinate systems and understanding how data measured for events in one coordinate system is interpreted in another coordinate system.
Convention: The units for the variables are all geometrized. Time and mass have units in meters. The speed of light in flat spacetime has a value of 1. The gravitational constant has a value of 1.
The metric is expressed in the +−−− sign convention.
Schwarzschild coordinates
A Schwarzschild observer is a far observer or a bookkeeper. He does not directly make measurements of events that occur in different places. Instead, he is far away from the black hole and the events. Observers local to the events are enlisted to make measurements and send the results to him. The bookkeeper gathers and combines the reports from various places. The numbers in the reports are translated into data in Schwarzschild coordinates, which provide a systematic means of evaluating and describing the events globally. Thus, the physicist can compare and interpret the data intelligently. He can find meaningful information from these data. The Schwarzschild form of the Schwarzschild metric using Schwarzschild coordinates is given by
$$ds^2 = \left(1 - \frac{2M}{r}\right)dt^2 - \frac{dr^2}{1 - \frac{2M}{r}} - r^2 d\theta^2 - r^2 \sin^2\theta \, d\phi^2$$
where
G=1=c
t, r, θ, φ are the Schwarzschild coordinates,
M is the mass of the black hole.
GP coordinates
Define a new time coordinate by
for some arbitrary function . Substituting in the Schwarzschild metric one gets
where .
If we now choose such that the term multiplying is unity, we get
and the metric becomes
$$ds^2 = \left(1 - \frac{2M}{r}\right)dt_r^2 - 2\sqrt{\frac{2M}{r}}\,dt_r\,dr - dr^2 - r^2 d\theta^2 - r^2 \sin^2\theta \, d\phi^2.$$
The spatial metric (i.e. the restriction of the metric to a surface where $t_r$ is constant) is simply the flat metric expressed in spherical polar coordinates. This metric is regular along the horizon where r=2M, since, although the temporal term goes to zero, the off-diagonal term in the metric is still non-zero and ensures that the metric is still invertible (the determinant of the metric is $-r^4\sin^2\theta$).
The function is given by
where .
The function is clearly singular at r=2M as it must be to remove that singularity in the Schwarzschild metric.
Other choices for f(r) lead to other coordinate charts for the Schwarzschild vacuum; a general treatment is given in Francis & Kosowsky.
Motion of raindrop
Define a raindrop as an object which plunges radially toward a black hole from rest at infinity.
In Schwarzschild coordinates, the velocity of a raindrop is given by
$$\frac{dr}{dt} = -\left(1 - \frac{2M}{r}\right)\sqrt{\frac{2M}{r}}.$$
The speed tends to 0 as r approaches the event horizon. The raindrop appears to have slowed as it gets nearer the event horizon and halted at the event horizon as measured by the bookkeeper. Indeed, an observer outside the event horizon would see the raindrop plunge slower and slower. Its image infinitely redshifts and never makes it through the event horizon. However, the bookkeeper does not physically measure the speed directly. He translates data relayed by the shell observer into Schwarzschild values and computes the speed. The result is only an accounting entry.
In GP coordinates, the velocity is given by
$$\frac{dr}{dt_r} = -\sqrt{\frac{2M}{r}}.$$
The speed of the raindrop is inversely proportional to the square root of the radius and equals the negative of the Newtonian escape velocity. At points very far away from the black hole, the speed is extremely small. As the raindrop plunges toward the black hole, the speed increases. At the event horizon, the speed has the value 1. There is no discontinuity or singularity at the event horizon.
Inside the event horizon, the speed increases as the raindrop gets closer to the singularity. Eventually, the speed becomes infinite at the singularity. As shown below the speed is always less than the speed of light. The results may not be correctly predicted by the equation at and very near the singularity since the true solution may be quite different when quantum mechanics is incorporated.
Despite the problem with the singularity, it is still possible to compute the travel time for the raindrop from the horizon to the center of the black hole mathematically.
Integrate the equation of motion:
$$\int \sqrt{\frac{r}{2M}}\,dr = -\int dt_r.$$
The result is
$$r = \left[\tfrac{3}{2}\sqrt{2M}\,(t_0 - t_r)\right]^{2/3},$$
where $t_0$ is the time at which the raindrop reaches the central singularity.
Using this result for the speed of the raindrop, we can find the proper time along the trajectory of the raindrop in terms of the time $t_r$. We have
$$d\tau = dt_r \quad \text{along the raindrop's trajectory.}$$
That is, along the raindrop's trajectory, the elapse of the coordinate time $t_r$ is exactly the proper time along the trajectory. One could have defined the GP coordinates by this requirement, rather than by demanding that the spatial surfaces be flat.
A closely related set of coordinates is the Lemaître coordinates, in which the "radial" coordinate is chosen to be constant along the paths of the raindrops. Since r changes as the raindrops fall, this metric is time dependent while the GP metric is time independent.
The metric obtained if, in the above, we take the function f(r) to be the negative of what we choose above is also called the GP coordinate system. The only change in the metric is that the cross term changes sign. This metric is regular for outgoing raindrops, i.e. particles which leave the black hole travelling outward with just escape velocity so that their speed at infinity is zero. In the usual GP coordinates, such particles cannot be described for r<2M. They have a zero value of dr/dt at r=2M. This is an indication that the Schwarzschild black hole has two horizons, a past horizon and a future horizon. The original form of the GP coordinates is regular across the future horizon (through which particles fall when they fall into a black hole), while the alternative negative version is regular across the past horizon (from which particles come out of the black hole if they do so).
The Kruskal–Szekeres coordinates are regular across both horizons at the expense of making the metric strongly dependent on the time coordinate.
Speeds of light
Assume radial motion. For light, $ds = 0$. Therefore,
$$\frac{dr}{dt_r} = \pm 1 - \sqrt{\frac{2M}{r}},$$
where the plus sign corresponds to light directed radially outward and the minus sign to light directed radially inward.
At places very far away from the black hole, $\sqrt{2M/r} \to 0$ and the speed of light is 1, the same as in special relativity.
At the event horizon, the speed of light shining outward away from the center of the black hole is $1 - \sqrt{2M/r} = 0$. It cannot escape from the event horizon. Instead, it gets stuck at the event horizon. Since light moves faster than all others, matter can only move inward at the event horizon. Everything inside the event horizon is hidden from the outside world.
Inside the event horizon, the rain observer measures that the light moves toward the center with speed greater than 2. This is plausible. Even in special relativity, the proper speed of a moving object is
$$\frac{dx}{d\tau} = \frac{v}{\sqrt{1 - v^2}},$$
which is unbounded as $v \to 1$.
There are two important points to consider:
No object should have speed greater than the speed of light as measured in the same reference frame. Thus, the principle of causality is preserved. Indeed, the speed of the raindrop is less than that of light:
$$\sqrt{\frac{2M}{r}} < 1 + \sqrt{\frac{2M}{r}}.$$
The time of travel for light shining inward from the event horizon to the center of the black hole can be obtained by integrating the equation for the velocity of light,
$$t_r = \int_0^{2M} \frac{dr}{1 + \sqrt{2M/r}}.$$
The result is
$$t_r = \left(4\ln 2 - 2\right)M \approx 0.77\,M.$$
The light travel time for a stellar black hole with a typical size of 3 solar masses is about 11 microseconds.
Ignoring effects of rotation, for Sagittarius A*, the supermassive black hole residing at the center of the Milky Way, with mass of 3.7 million solar masses, the light travel time is about 14 seconds.
The supermassive black hole at the center of Messier 87, a giant elliptical galaxy in the Virgo Cluster, is one of the largest known supermassive black holes. With a mass of 3 billion solar masses, it takes about 3 hours for light to travel to the central singularity and 5 hours for a raindrop.
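The travel times quoted above can be reproduced with a short numerical integration. The sketch below (Python) assumes the GP-coordinate speeds |dr/dt| = sqrt(2M/r) for a raindrop and 1 + sqrt(2M/r) for ingoing light, geometrized units with the mass expressed in metres, and standard values of G, c and the solar mass; it integrates dt = dr/|dr/dt| from the horizon at r = 2M down to r = 0.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def horizon_to_center_time(mass_solar, ingoing_light):
    # Midpoint-rule integration of dt = dr / |dr/dt| from r = 2M to r = 0,
    # where |dr/dt| = 1 + sqrt(2M/r) for ingoing light and sqrt(2M/r) for a raindrop.
    M = G * mass_solar * M_SUN / c**2          # mass in metres (geometrized units)
    n = 200_000
    dr = 2.0 * M / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr                      # midpoint of each integration step
        speed = math.sqrt(2.0 * M / r) + (1.0 if ingoing_light else 0.0)
        total += dr / speed
    return total / c                            # metres of coordinate time -> seconds

for name, mass in [("3 solar masses", 3.0), ("Sgr A* (3.7e6 solar masses)", 3.7e6), ("M87 (3e9 solar masses)", 3e9)]:
    print(f"{name}: light {horizon_to_center_time(mass, True):.3g} s, raindrop {horizon_to_center_time(mass, False):.3g} s")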
A rain observer's view of the universe
What does the universe look like as seen by a rain observer plunging into the black hole? The view can be described by the following equations:
where
are the rain observer's and shell observer's viewing angles with respect to the radially outward direction.
is the angle between the distant star and the radially outward direction.
is the impact parameter. Each incoming light ray can be backtraced to a corresponding ray at infinity. The Impact parameter for the incoming light ray is the distance between the corresponding ray at infinity and a ray parallel to it that plunges directly into the black hole.
Because of spherical symmetry, the trajectory of light always lies in a plane passing through the center of sphere. It's possible to simplify the metric by assuming .
The impact parameter can be computed knowing the rain observer's r-coordinate and viewing angle. The actual angle of the distant star is then determined by numerically integrating from the observer's radial position out to infinity. A chart of the sample results is shown at right.
At r/M = 500, the black hole is still very far away. It subtends a diametrical angle of ~ 1 degree in the sky. The stars are not distorted much by the presence of the black hole, except for the stars directly behind it. Due to gravitational lensing, these obstructed stars are now deflected 5 degrees away from the back. In between these stars and the black hole is a circular band of secondary images of the stars. The duplicate images are instrumental in the identification of the black hole.
At r/M = 30, the black hole has become much bigger, spanning a diametrical angle of ~15 degrees in the sky. The band of secondary images has also grown to 10 degrees. It's now possible to find faint tertiary images in the band, which are produced by the light rays that have looped around the black hole once already. The primary images are distributed more tightly in the rest of the sky. The pattern of distribution is similar to that previously exhibited.
At r/M = 2, the event horizon, the black hole now occupies a substantial portion of the sky. The rain observer would see an area up to 42 degrees from the radially inward direction that is pitch dark. The band of secondary and tertiary images, rather than increasing, has decreased in size to 5 degrees. The aberration effect is now quite dominant. The speed of plunging has reached the light speed. The distribution pattern of primary images is changing drastically. The primary images are shifting toward the boundary of the band. The edge near the band is now crowded with stars. Due to Doppler effect, the primary image of the stars which were originally located behind the rain observer have their images appreciably red-shifted, while those that were in front are blue-shifted and appear very bright.
At r/M=0.001, the curve of distant star angle versus view angle appears to form a right angle at the 90 degrees view angle. Almost all of the star images are congregated in a narrow ring 90 degrees from the radially inward direction. Between the ring and the radially inward direction is the enormous black hole. On the opposite side, only a few stars shine faintly.
As the rain observer approaches the singularity, most of the stars, and their images caused by multiple orbits of the light around the black hole, are squeezed into a narrow band at the 90° viewing angle. The observer sees a magnificent bright ring of stars bisecting the dark sky.
History
Although the publication of Gullstrand's paper came after Painlevé's, Gullstrand's paper was dated 25 May 1921, whereas Painlevé's publication was a writeup of his presentation before the Academie des Sciences in Paris on 24 October 1921. In this way, Gullstrand's work appears to have priority.
Both Painlevé and Gullstrand used this solution to argue that Einstein's theory was incomplete in that it gave multiple solutions for the gravitational field of a spherical body, and moreover gave different physics (they argued that the lengths of rods could sometimes be longer and sometimes shorter in the radial than in the tangential directions). The "trick" of the Painlevé proposal was that he no longer stuck to a full quadratic (static) form but instead allowed a cross time-space product, making the metric form no longer static but stationary, and no longer direction-symmetric but preferentially oriented.
In a second, longer paper (November 14, 1921), Painlevé explains how he derived his solution by directly solving Einstein's equations for a generic spherically symmetric form of the metric.
The result, equation (4) of his paper, depended on two arbitrary functions of the r coordinate yielding a double infinity of solutions. We now know that these simply represent a variety of choices of both the time and radial coordinates.
Painlevé wrote to Einstein to introduce his solution and invited Einstein to Paris for a debate. In Einstein's reply letter (December 7),
he apologized for not being in a position to visit soon and explained why he was not pleased with Painlevé's arguments, emphasising that the coordinates themselves have no meaning. Finally, Einstein came to Paris in early April. On the 5th of April 1922, in a debate at the "Collège de France" with Painlevé, Becquerel, Brillouin, Cartan, De Donder, Hadamard, Langevin and Nordmann on "the infinite potentials", Einstein, baffled by the non-quadratic cross term in the line element, rejected the Painlevé solution.
See also
Isotropic coordinates
Eddington–Finkelstein coordinates
Kruskal–Szekeres coordinates
Lemaître coordinates
References
External links
The River Model of Black Holes
Dr. Andrew J S Hamilton's video "Inside Black Holes"
Black hole orbit simulation in GP coordinates.
Coordinate charts in general relativity
Lorentzian manifolds
Black holes | Gullstrand–Painlevé coordinates | [
"Physics",
"Astronomy",
"Mathematics"
] | 2,964 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Coordinate systems",
"Stellar phenomena",
"Astronomical objects",
"Coordinate charts in general relativity"
] |
6,928,455 | https://en.wikipedia.org/wiki/Early%20prostate%20cancer%20antigen-2 | Early prostate cancer antigen-2 (EPCA-2) is a protein of which blood levels are elevated in prostate cancer. It appears to provide more accuracy in identifying early prostate cancer than the standard prostate cancer marker, PSA.
"EPCA-2" is not the name of a gene. EPCA-2 gets its name because it is the second prostate cancer marker identified by the research team. This earlier marker was previously known as "EPCA", but is now called "EPCA-1".
EPCA-2 versus PSA
Leman, Getzenberg and colleagues describe, in the April 2007 issue of Urology, the performance characteristic of EPCA-2, a novel nuclear protein marker for prostate cancer cells. This paper has since been retracted by the publisher.
A study was initiated which suggested that the EPCA-2 protein serum assay exhibits favorable performance characteristics which are potentially superior to serum PSA. However more studies are necessary to see if this test will retain its sensitivity when used in a screening population.
In September 2008 the industry sponsor of EPCA-2, Onconome sued Dr Robert Getzenberg, JHU, and the University of Pittsburgh, his previous institution, claiming that Getzenberg misrepresented and falsified data related to EPCA-2 after Onconome sponsored 13 million dollars of research over five years in Getzenberg's labs at University of Pittsburgh and Johns Hopkins for a blood test for prostate cancer. Onconome claimed that the test was "essentially as reliable as flipping a coin". Robert H. Getzenberg (Ph.D-JHU 1992), first developed EPCA-2 as a graduate student with Professor Donald Coffey at Johns Hopkins and later as a faculty member at University of Pittsburgh. Getzenberg, former professor of Urology and Director of Research of the James Buchanan Brady Urological Institute, left Johns Hopkins University School of Medicine in 2013 for undisclosed reasons.
References
External links
Medical Today - EPCA-2: A Highly Specific Serum Marker For Prostate Cancer
consumeraffairs.com - Hopkins Researchers Find Better Blood Test for Prostate Cancer
Tumor markers
Prostate cancer | Early prostate cancer antigen-2 | [
"Chemistry",
"Biology"
] | 438 | [
"Chemical pathology",
"Tumor markers",
"Biomarkers"
] |
6,930,483 | https://en.wikipedia.org/wiki/Pederin | Pederin is a vesicant toxic amide with two tetrahydropyran rings, found in the haemolymph of the beetle genus Paederus, including the Nairobi fly, belonging to the family Staphylinidae. It was first characterized by processing 25 million field-collected P. fuscipes. It makes up approximately 0.025% of an insects weight (for P. fuscipes).
It has been demonstrated that the production of pederin relies on the activities of an endosymbiont (a Pseudomonas sp.) within Paederus.
The manufacture of pederin is largely confined to adult female beetles—larvae and males only store pederin acquired maternally (i.e., through eggs) or by ingestion.
Physical effects
Skin contact with pederin from the coelomic fluid exuded from a female Paederus beetle causes Paederus dermatitis. This is a rash that varies from a slight erythema to severe blistering, depending on the concentration and duration of exposure. Treatment involves washing the irritated area with cool soapy water. Application of a topical steroid is also recommended for more intense exposures. These measures can significantly reduce the physical effects the toxin has on the affected area.
Synthesis
An efficient total synthesis of pederin is known. Beginning with (+)-benzoylselenopederic acid, Zn(BH4)2 reduction is applied, introducing stereoselective reduction of the acyclic ketone. Michael addition of nitromethane is performed. After several steps of Moffatt oxidation, phenylselenation, hydrolysis, and reduction, pederic acid is reached.
The final steps of the synthesis of pederin are shown to the right. Here, pederic acid is added to the protected compound in LiHMDS and THF, producing a 75% yield. The protecting groups are then removed using TBAF and a hydrolytic quench. This step gives an 88% yield.
Mode of action
Pederin blocks mitosis at levels as low as 1 ng/ml, by inhibiting protein and DNA synthesis without affecting RNA synthesis, prevents cell division, and has been shown to extend the life of mice bearing a variety of tumors. For these reasons, it has garnered interest as a potential anti-cancer treatment.
Uses
Pederin and its derivatives are being researched as anticancer drugs. This family of compounds is able to inhibit protein and DNA biosynthesis, making it useful to slow the division of cancer cells. One derivative of pederin, psymberin, has been found to be highly selective in targeting solid tumor cells.
See also
Psymberin
Paederus dermatitis
Cycloheximide
Christmas eye
References
Acetamides
Ethers
Tetrahydropyrans
Blister agents | Pederin | [
"Chemistry"
] | 599 | [
"Blister agents",
"Chemical weapons",
"Functional groups",
"Organic compounds",
"Ethers"
] |
6,932,634 | https://en.wikipedia.org/wiki/Gregori%20Aminoff%20Prize | The Gregori Aminoff Prize is an international prize awarded since 1979 by the Royal Swedish Academy of Sciences in the field of crystallography, rewarding "a documented, individual contribution in the field of crystallography, including areas concerned with the dynamics of the formation and dissolution of crystal structures. Some preference should be shown for work evincing elegance in the approach to the problem."
The prize, which is named in memory of the Swedish scientist and artist Gregori Aminoff (1883–1947), Professor of Mineralogy at the Swedish Museum of Natural History from 1923, was endowed through a bequest by his widow Birgit Broomé-Aminoff. The prize can be shared by several winners. It is considered the Nobel prize for crystallography.
Recipients of the Prize
Source: Royal Swedish Academy of Sciences
See also
List of chemistry awards
List of physics awards
References
Notes
A. The form and spelling of the names in the name column is according to www.kva.se, the official website of the Royal Swedish Academy of Sciences. Alternative spellings and name forms, where they exist, are given at the articles linked from this column.
B. The information in the country column is according to www.kva.se, the official website of the Royal Swedish Academy of Sciences. This information may not necessarily reflect the recipient's birthplace or citizenship.
C. The information in the institution column is according to www.kva.se, the official website of the Royal Swedish Academy of Sciences. This information may not necessarily reflect the recipient's current institution.
D. The citation for each award is quoted (not always in full) from www.kva.se, the official website of the Royal Swedish Academy of Sciences. The links in this column are to articles (or sections of articles) on the history and areas of physics for which the awards were presented. The links are intended only as a guide and explanation. For a full account of the work done by each prize winner, please see the biography articles linked from the name column.
Citations
External links
awardee of the Gregori Aminoff Prize
Awards of the Royal Swedish Academy of Sciences
Chemistry awards
Crystallography awards
Physics awards
Awards established in 1979 | Gregori Aminoff Prize | [
"Chemistry",
"Materials_science",
"Technology"
] | 447 | [
"Crystallography awards",
"Chemistry awards",
"Crystallography",
"Science and technology awards",
"Physics awards"
] |
6,933,302 | https://en.wikipedia.org/wiki/Sound%20from%20ultrasound | Sound from ultrasound is the name given here to the generation of audible sound from modulated ultrasound without using an active receiver. This happens when the modulated ultrasound passes through a nonlinear medium which acts, intentionally or unintentionally, as a demodulator.
Parametric array
Since the early 1960s, researchers have been experimenting with creating directive low-frequency sound from nonlinear interaction of an aimed beam of ultrasound waves produced by a parametric array using heterodyning. Ultrasound has much shorter wavelengths than audible sound, so that it propagates in a much narrower beam than any normal loudspeaker system using audio frequencies. Most of the work was performed in liquids (for underwater sound use).
The first modern device for air acoustic use was created in 1998, and is now known by the trademark name "Audio Spotlight", a term first coined in 1983 by the Japanese researchers who abandoned the technology as infeasible in the mid-1980s.
A transducer can be made to project a narrow beam of modulated ultrasound that is powerful enough, at 100 to 110 dBSPL, to substantially change the speed of sound in the air that it passes through. The air within the beam behaves nonlinearly and extracts the modulation signal from the ultrasound, resulting in sound that can be heard only along the path of the beam, or that appears to radiate from any surface that the beam strikes. This technology allows a beam of sound to be projected over a long distance to be heard only in a small well-defined area; for a listener outside the beam the sound pressure decreases substantially. This effect cannot be achieved with conventional loudspeakers, because sound at audible frequencies cannot be focused into such a narrow beam.
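The self-demodulation in air can be illustrated with a crude numerical model. The sketch below (Python with NumPy) amplitude-modulates a 40 kHz carrier with a 1 kHz tone and stands in for the air's nonlinearity with a simple square-law term followed by low-pass filtering. This is a deliberate simplification of the real physics (Berktay's far-field result ties the audible pressure to the second time derivative of the squared envelope), but it shows why an audible tone, plus distortion products that practical systems must pre-correct, emerges along the beam.

import numpy as np

fs = 1_000_000                       # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)       # 10 ms of signal
f_carrier = 40_000                   # ultrasonic carrier, Hz
f_audio = 1_000                      # audio tone to reproduce, Hz

envelope = 1.0 + 0.5 * np.sin(2 * np.pi * f_audio * t)      # AM envelope
ultrasound = envelope * np.sin(2 * np.pi * f_carrier * t)    # emitted beam

demodulated = ultrasound ** 2        # square-law stand-in for the air's nonlinearity

# Crude low-pass filter: discard all spectral content above the audio band.
spectrum = np.fft.rfft(demodulated)
freqs = np.fft.rfftfreq(len(demodulated), 1 / fs)
spectrum[freqs > 20_000] = 0.0
audible = np.fft.irfft(spectrum, n=len(demodulated))

# Strongest non-DC component of the recovered audio (expected at f_audio,
# with a weaker harmonic at 2*f_audio illustrating the distortion problem).
magnitudes = np.abs(np.fft.rfft(audible))
peak_hz = freqs[1:][np.argmax(magnitudes[1:])]
print(f"dominant audible component: {peak_hz:.0f} Hz")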
There are some limitations with this approach. Anything that interrupts the beam will prevent the ultrasound from propagating, like interrupting a spotlight's beam. For this reason, most systems are mounted overhead, like lighting.
Applications
Commercial advertising
A sound signal can be aimed so that only a particular passer-by, or somebody very close, can hear it. In commercial applications, it can target sound to a single person without the peripheral sound and related noise of a loudspeaker.
Personal audio
It can be used for personal audio, either to make sounds audible to only one person, or to deliver sounds that a group wants to listen to. Navigation instructions, for example, are only of interest to the driver in a car, not to the passengers. Another possibility is future applications for true stereo sound, where one ear does not hear what the other is hearing.
Train signaling device
Directional audio train signaling may be accomplished through the use of an ultrasonic beam which will warn of the approach of a train while avoiding the nuisance of loud train signals on surrounding homes and businesses.
History
This technology was originally developed by the US Navy and Soviet Navy for underwater sonar in the mid-1960s, and was briefly investigated by Japanese researchers in the early 1980s, but these efforts were abandoned due to extremely poor sound quality (high distortion) and substantial system cost. These problems went unsolved until a paper published by Dr. F. Joseph Pompei of the Massachusetts Institute of Technology in 1998 fully described a working device that reduced audible distortion essentially to that of a traditional loudspeaker.
Products
Five devices are known to have been marketed that use ultrasound to create an audible beam of sound.
Audio Spotlight
F. Joseph Pompei of MIT developed technology he calls the "Audio Spotlight", and made it commercially available in 2000 through his company Holosonics, which according to its website claims to have sold "thousands" of its "Audio Spotlight" systems. Disney was among the first major corporations to adopt it for use at the Epcot Center, and many other application examples are shown on the Holosonics website.
Audio Spotlight is a narrow beam of sound that can be controlled with similar precision to light from a spotlight. It uses a beam of ultrasound as a "virtual acoustic source", enabling control of sound distribution.
The ultrasound has wavelengths only a few millimeters long, much smaller than the source, and therefore naturally travels in an extremely narrow beam.
The ultrasound, which contains frequencies far outside the range of human hearing, is completely inaudible. But as the ultrasonic beam travels through the air, the inherent properties of the air cause the ultrasound to change shape in a predictable way. This gives rise to frequency components in the audible band, which can be predicted and controlled.
HyperSonic Sound
Elwood "Woody" Norris, founder and Chairman of American Technology Corporation (ATC), announced he had successfully created a device which achieved ultrasound transmission of sound in 1996. This device used piezoelectric transducers to send two ultrasonic waves of differing frequencies toward a point, giving the illusion that the audible sound from their interference pattern was originating at that point. ATC named and trademarked their device as "HyperSonic Sound" (HSS). In December 1997, HSS was one of the items in the Best of What's New issue of Popular Science. In December 2002, Popular Science named HyperSonic Sound the best invention of 2002. Norris received the 2005 Lemelson–MIT Prize for his invention of a "hypersonic sound". ATC (now named LRAD Corporation) spun off the technology to Parametric Sound Corporation in September 2010 to focus on their long-range acoustic device (LRAD) products, according to their quarterly reports, press releases, and executive statements.
Mitsubishi Electric Engineering Corporation
Mitsubishi Electric Engineering Corporation apparently offers a sound-from-ultrasound product named the "MSP-50E", which is commercially available.
AudioBeam
German audio company Sennheiser Electronic once listed their "AudioBeam" product for about $4,500. There is no indication that the product has been used in any public applications. The product has since been discontinued.
Literature survey
The first experimental systems were built over 30 years ago, although these first versions only played simple tones. It was not until much later (see above) that the systems were built for practical listening use.
Experimental ultrasonic nonlinear acoustics
A chronological summary of the experimental approaches taken to examine Audio Spotlight systems in the past will be presented here. At the turn of the millennium, working versions of an Audio Spotlight capable of reproducing speech and music could be bought from Holosonics, a company founded on Dr. Pompei's work in the MIT Media Lab.
Related topics were researched almost 40 years earlier in the context of underwater acoustics.
The first article consisted of a theoretical formulation of the half pressure angle of the demodulated signal.
The second article provided an experimental comparison to the theoretical predictions.
Both articles were supported by the U.S. Office of Naval Research, specifically for the use of the phenomenon for underwater sonar pulses. The goal of these systems was not high directivity per se, but rather higher usable bandwidth of a typically band-limited transducer.
The 1970s saw some experimental activity, both in air and underwater. Again supported by the U.S. Office of Naval Research, the primary aim of the underwater experiments was to determine the range limitations of sonar pulse propagation due to nonlinear distortion. The airborne experiments were aimed at recording quantitative data about the directivity and propagation loss of both the ultrasonic carrier and demodulated waves, rather than developing the capability to reproduce an audio signal.
In 1983 the idea was again revisited experimentally, but this time with the firm intent to analyze the use of the system in air to form a more complex baseband signal in a highly directional manner. The signal processing used to achieve this was simple DSB-AM with no precompensation; because no precompensation was applied to the input signal, the THD (total harmonic distortion) levels of this system would probably have been satisfactory for speech reproduction, but prohibitive for the reproduction of music. An interesting feature of the experimental set-up was the use of 547 ultrasonic transducers to produce a 40 kHz ultrasonic sound source of over 130 dB at 4 m, which would demand significant safety considerations. Even though this experiment clearly demonstrated the potential to reproduce audio signals using an ultrasonic system, it also showed that the system suffered from heavy distortion, especially when no precompensation was used.
Theoretical ultrasonic nonlinear acoustics
The equations that govern nonlinear acoustics are quite complex and unfortunately they do not have general analytical solutions. They usually require the use of a computer simulation. However, as early as 1965, Berktay performed an analysis under some simplifying assumptions that allowed the demodulated SPL to be written in terms of the amplitude-modulated ultrasonic carrier wave pressure Pc and various physical parameters. Note that the demodulation process is extremely lossy, with a minimum loss in the order of 60 dB from the ultrasonic SPL to the audible wave SPL. A precompensation scheme can be based on Berktay's expression, shown in Equation 1, by taking the square root of the baseband signal envelope E and then integrating twice to invert the effect of the double partial-time derivative. The analogue electronic circuit equivalent of a square root function is simply an op-amp with feedback, and an equalizer is analogous to an integration function; however, these topics lie outside the scope of this article.
p2(t) = K · Pc² · ∂²/∂t² [E(t)²]   (Equation 1)
where
p2 is the audible secondary pressure wave,
K collects miscellaneous physical parameters,
Pc is the SPL of the ultrasonic carrier wave, and
E is the envelope function (such as DSB-AM).
This equation says that the audible demodulated ultrasonic pressure wave (output signal) is proportional to the twice differentiated, squared version of the envelope function (input signal). Precompensation refers to the trick of anticipating these transforms and applying the inverse transforms on the input, hoping that the output is then closer to the untransformed input.
By the 1990s, it was well known that the Audio Spotlight could work but suffered from heavy distortion. It was also known that the precompensation schemes placed an added demand on the frequency response of the ultrasonic transducers. In effect the transducers needed to keep up with what the digital precompensation demanded of them, namely a broader frequency response. In 1998 the negative effects on THD of an insufficiently broad frequency response of the ultrasonic transducers were quantified with computer simulations, using a precompensation scheme based on Berktay's expression. In 1999 Pompei's article discussed how a new prototype transducer met the increased frequency response demands placed on the ultrasonic transducers by the precompensation scheme, which was once again based on Berktay's expression. In addition, impressive reductions in the THD of the output when the precompensation scheme was employed were graphed against the case of using no precompensation.
In summary, the technology that originated with underwater sonar 40 years ago has been made practical for reproduction of audible sound in air by Pompei's paper and device, which, according to his AES paper (1998), demonstrated that distortion had been reduced to levels comparable to traditional loudspeaker systems.
Modulation scheme
The nonlinear interaction mixes ultrasonic tones in air to produce sum and difference frequencies. A DSB (double-sideband) amplitude-modulation scheme with an appropriately large baseband DC offset, to produce the demodulating tone superimposed on the modulated audio spectrum, is one way to generate the signal that encodes the desired baseband audio spectrum. This technique suffers from extremely heavy distortion, because not only does the demodulating tone interfere, but all other frequencies present also interfere with one another. The modulated spectrum is convolved with itself, doubling its bandwidth by the length property of the convolution. The baseband distortion in the bandwidth of the original audio spectrum is inversely proportional to the magnitude of the DC offset (demodulation tone) superimposed on the signal. A larger tone results in less distortion.
Further distortion is introduced by the second order differentiation property of the demodulation process. The result is a multiplication of the desired signal by the function -ω² in frequency. This distortion may be equalized out with the use of preemphasis filtering (increase amplitude of high frequency signal).
By the time-convolution property of the Fourier transform, multiplication in the time domain is a convolution in the frequency domain. Convolution between a baseband signal and a unity gain pure carrier frequency shifts the baseband spectrum in frequency and halves its magnitude, though no energy is lost. One half-scale copy of the replica resides on each half of the frequency axis. This is consistent with Parseval's theorem.
The modulation depth m is a convenient experimental parameter when assessing the total harmonic distortion in the demodulated signal. It is inversely proportional to the magnitude of the DC offset. THD increases proportionally with m².
These distorting effects may be better mitigated by using another modulation scheme that takes advantage of the differential squaring device nature of the nonlinear acoustic effect. Modulation of the second integral of the square root of the desired baseband audio signal, without adding a DC offset, results in convolution in frequency of the modulated square-root spectrum, half the bandwidth of the original signal, with itself due to the nonlinear channel effects. This convolution in frequency is a multiplication in time of the signal by itself, or a squaring. This again doubles the bandwidth of the spectrum, reproducing the second time integral of the input audio spectrum. The double integration corrects for the -ω² filtering characteristic associated with the nonlinear acoustic effect. This recovers the scaled original spectrum at baseband.
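A minimal numerical sketch of the two schemes above is given below (assuming NumPy; the sample rate, 1 kHz test tone, and modulation depth are illustrative choices rather than values from the sources cited here). It models the channel only through the idealised relation output ∝ ∂²/∂t²[E²], so it illustrates the distortion mechanism rather than any particular commercial system; the precompensated branch is written as double integration followed by a square root, the form that exactly inverts that relation.

```python
import numpy as np

# Illustrative parameters (not values from the literature):
fs = 200_000                      # sample rate, Hz
t = np.arange(0, 0.02, 1.0 / fs)  # 20 ms of signal
g = np.sin(2 * np.pi * 1000 * t)  # desired baseband audio: a 1 kHz tone
m = 0.8                           # modulation depth

def audible_output(envelope, dt):
    """Berktay-style self-demodulation: output proportional to the second
    time derivative of the squared envelope."""
    return np.gradient(np.gradient(envelope ** 2, dt), dt)

def second_time_integral(x, fs):
    """Double integration implemented in the frequency domain (DC removed)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    out = np.zeros_like(spec)
    out[1:] = spec[1:] / (1j * 2 * np.pi * freqs[1:]) ** 2
    return np.fft.irfft(out, len(x))

dt = 1.0 / fs

# Scheme 1: plain DSB-AM; the DC offset of 1 provides the demodulating tone.
env_am = 1.0 + m * g
out_am = audible_output(env_am, dt)   # contains g'' plus an m**2 distortion term

# Scheme 2: double integration (to cancel the -omega**2 characteristic)
# followed by a square root, so the squaring in the channel returns the audio.
g_ii = second_time_integral(g, fs)
g_ii *= m / np.max(np.abs(g_ii))                 # normalise to depth m
env_sqrt = np.sqrt(np.clip(1.0 + g_ii, 0.0, None))
out_sqrt = audible_output(env_sqrt, dt)          # ideally proportional to g

def tone_level(x, f):
    """Magnitude of the windowed spectrum at frequency f."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return spec[int(round(f * len(x) / fs))]

for name, y in (("DSB-AM", out_am), ("sqrt precompensation", out_sqrt)):
    ratio = tone_level(y, 2000) / tone_level(y, 1000)
    print(f"{name}: 2nd harmonic / fundamental = {ratio:.3f}")
```

For the 1 kHz test tone the DSB-AM branch shows a second-harmonic level close to m times the fundamental, while the precompensated branch leaves only numerical residue, mirroring the distortion argument above.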
The harmonic distortion process has to do with the high frequency replicas associated with each squaring demodulation, for either modulation scheme. These iteratively demodulate and self-modulate, adding a spectrally smeared-out and time-exponentiated copy of the original signal to baseband and twice the original center frequency each time, with one iteration corresponding to one traversal of the space between the emitter and target. Only sound with parallel collinear phase velocity vectors interfere to produce this nonlinear effect. Even-numbered iterations will produce their modulation products, baseband and high frequency, as reflected emissions from the target. Odd-numbered iterations will produce their modulation products as reflected emissions off the emitter.
This effect still holds when the emitter and the reflector are not parallel, though due to diffraction effects the baseband products of each iteration will originate from a different location each time, with the originating location corresponding to the path of the reflected high frequency self-modulation products.
These harmonic copies are largely attenuated by the natural losses at those higher frequencies when propagating through air.
Attenuation of ultrasound in air
Published absorption curves give an estimation of the attenuation that the ultrasound would suffer as it propagated through air. The figures from such a graph correspond to completely linear propagation, and the exact effect of the nonlinear demodulation phenomena on the attenuation of the ultrasonic carrier waves in air was not considered. There is an interesting dependence on humidity. Nevertheless, a 50 kHz wave suffers an attenuation level in the order of 1 dB per meter at one atmosphere of pressure, so, for example, a 10 m path costs roughly 10 dB of carrier level.
Safe use of high-intensity ultrasound
For the nonlinear effect to occur, relatively high-intensity ultrasonics are required. The SPL involved was typically greater than 100 dB of ultrasound at a nominal distance of 1 m from the face of the ultrasonic transducer. Exposure to more intense ultrasound over 140 dB near the audible range (20–40 kHz) can lead to a syndrome involving manifestations of nausea, headache, tinnitus, pain, dizziness, and fatigue, but this is around 100 times the 100 dB level cited above, and is generally not a concern. Dr Joseph Pompei of Audio Spotlight has published data showing that their product generates ultrasonic sound pressure levels around 130 dB (at 60 kHz) measured at 3 meters.
The UK's independent Advisory Group on Non-ionising Radiation (AGNIR) produced a 180-page report on the health effects of human exposure to ultrasound and infrasound in 2010. The UK Health Protection Agency (HPA) published their report, which recommended an exposure limit for the general public to airborne ultrasound sound pressure levels (SPL) of 100 dB (at 25 kHz and above).
OSHA specifies a safe ceiling value of ultrasound as 145 dB SPL exposure at the frequency range used by commercial systems in air, as long as there is no possibility of contact with the transducer surface or coupling medium (i.e. submerged). This is several times the highest levels used by commercial Audio Spotlight systems, so there is a significant margin for safety. In a review of international acceptable exposure limits Howard et al. (2005) noted the general agreement among standards organizations, but expressed concern with the decision by United States of America's Occupational Safety and Health Administration (OSHA) to increase the exposure limit by an additional 30 dB under some conditions (equivalent to a factor of 1000 in intensity).
For frequencies of ultrasound from 25 to 50 kHz, a guideline of 110 dB had been recommended by Canada, Japan, the USSR, and the International Radiation Protection Agency, and 115 dB by Sweden in the late 1970s to early 1980s, but these were primarily based on subjective effects. The more recent OSHA guidelines above are based on ACGIH (American Conference of Governmental Industrial Hygienists) research from 1987.
Lawton (2001) reviewed international guidelines for airborne ultrasound in a report published by the United Kingdom's Health and Safety Executive; this included a discussion of the guidelines issued by the American Conference of Governmental Industrial Hygienists (ACGIH) in 1988. Lawton states "This reviewer believes that the ACGIH has pushed its acceptable exposure limits to the very edge of potentially injurious exposure". The ACGIH document also mentioned the possible need for hearing protection.
See also
Directional sound
Infrasound
Further resources
A patent application filed on 17 August 2004 describes an HSS system for using ultrasound to:
Direct distinct 'in-car entertainment' directly to passengers in different positions.
Shape the airwaves in the vehicle to deaden unwanted noises.
References
External links
Holosonics Audio Spotlight
Hypersonic Sound
NextFest
Acoustics
Sound
Ultrasound | Sound from ultrasound | [
"Physics"
] | 3,784 | [
"Classical mechanics",
"Acoustics"
] |
6,935,971 | https://en.wikipedia.org/wiki/Alpha%20cleavage | Alpha-cleavage (α-cleavage) in organic chemistry refers to the act of breaking the carbon-carbon bond adjacent to the carbon bearing a specified functional group.
Mass spectrometry
This topic is generally discussed when covering fragmentation in tandem mass spectrometry, and it generally occurs by the same mechanisms.
As an example of an alpha-cleavage mechanism, an electron is knocked off an atom (usually by electron collision) to form a radical cation. Electron removal generally happens in the following order: 1) lone pair electrons, 2) pi bond electrons, 3) sigma bond electrons.
One of the lone pair electrons moves down to form a pi bond with an electron from an adjacent (alpha) bond. The other electron from the bond moves to an adjacent atom (not one adjacent to the lone pair atom) creating a radical. This creates a double bond adjacent to the lone pair atom (oxygen is a good example) and breaks/cleaves the bond from which the two electrons were removed.
In molecules containing carbonyl groups, alpha-cleavage often competes with McLafferty rearrangement.
Photochemistry
In photochemistry, it is the homolytic cleavage of a bond adjacent to a specified group.
See also
Inductive cleavage
References
Organic reactions
Tandem mass spectrometry | Alpha cleavage | [
"Physics",
"Chemistry"
] | 262 | [
"Organic reactions",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Tandem mass spectrometry"
] |
10,716,352 | https://en.wikipedia.org/wiki/Hamilton%20Wetland%20Restoration%20Project | The Hamilton Wetland Restoration Project, now known as the Hamilton/Bel Marin Keys Wetlands Restoration, is a wetlands habitat restoration project at the former Hamilton Air Force Base—Hamilton Army Airfield (1930−1988) site and adjacent Bel Marin Keys shoreline, in Marin County, California.
It is located at Whiteside Marsh on the northwestern shore of San Pablo Bay, in and adjacent to the city of Novato in the North Bay region of the San Francisco Bay Area.
Project
The restoration project is a joint venture between two public agencies: the U.S. Army Corps of Engineers is the lead federal agency, with the California Coastal Conservancy as the local sponsoring agency. In addition, the San Francisco Bay Conservation and Development Commission serves as a collaborating partner.
The U.S. Congress authorized the Hamilton Wetland Restoration Project in 1999, and the addition of the Bel Marin Keys property to the project in 2007. The combined project site comprises the former airfield together with the adjacent Bel Marin Keys property.
Together, these three agencies are working to restore the Whiteside Marsh section of the closed Hamilton Air Force Base—Hamilton Army Airfield site to its former natural estuary and wetlands condition, and to create valuable endangered species habitat in the urbanized San Francisco Bay Area.
The Hamilton Wetlands Restoration Project "represents an unprecedented opportunity to contribute to the restoration of the San Francisco Bay, which has lost over 85% of its natural wetlands since the 1880s."
External links
San Pablo Bay
Wetlands of the San Francisco Bay Area
Ecological restoration
Estuaries of California
Landforms of Marin County, California
Natural history of Marin County, California
Protected areas of Marin County, California
Protected areas established in 1999
1999 establishments in California
Environment of the San Francisco Bay Area | Hamilton Wetland Restoration Project | [
"Chemistry",
"Engineering"
] | 333 | [
"Ecological restoration",
"Environmental engineering"
] |
10,721,076 | https://en.wikipedia.org/wiki/Grease%20trap | A grease trap (also known as grease interceptor, grease recovery device, grease capsule and grease converter) is a plumbing device (a type of trap) designed to intercept most greases and solids before they enter a wastewater disposal system. Common wastewater contains small amounts of oils which enter into septic tanks and treatment facilities to form a floating scum layer. This scum layer is very slowly digested and broken down by microorganisms in the anaerobic digestion process. Large amounts of oil from food preparation in restaurants can overwhelm a septic tank or treatment facility, causing the release of untreated sewage into the environment. High-viscosity fats and cooking grease such as lard solidify when cooled, and can combine with other disposed solids to block drain pipes.
Grease traps have been in use since the Victorian era; in the late 1800s, Nathaniel Whiting was granted the first patent. The quantity of fats, oils, greases, and solids (FOGS) that enter sewers is decreased by the traps. They consist of boxes within the drain run that flows between the sinks in a kitchen and the sewer system. They have only kitchen wastewater flowing through them and do not serve any other drainage system, such as toilets. They can be made from various materials, such as stainless steel, plastics, concrete and cast iron. They range from 35-liter capacity to 45,000 litres and greater. They can be located above ground, below ground, inside the kitchen, or outside the building.
Types
There are three primary types of devices. The most common are those specified by the American Society of Mechanical Engineers (ASME), utilizing either baffles or a proprietary inlet diffuser.
Grease trap sizing is based on the size of the 2- or 3-compartment sink, dishwasher, pot sinks, and mop sinks. Many manufacturers and vendors offer online sizing tools to make these calculations easy. The cumulative flow rates of these devices, as well as overall grease retention capacity (in pounds or kilograms) are considered. Currently, ASME Standard (ASME A112.14.3) is being adopted by both of the national model plumbing codes (International Plumbing Code and Uniform Plumbing Code) that cover most of the US. This standard requires that grease interceptors remove a minimum of 90% of incoming FOGs. It also requires that grease interceptors are third-party tested and certified to 90 days compliance with the standard pumping. This third-party testing must be conducted by a recognized and approved testing laboratory.
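As a rough illustration of the kind of calculation such sizing tools perform, the sketch below applies the common sink-volume rule of thumb (roughly 75% usable volume drained over a one- or two-minute period). The fill factor, drain period, and sink dimensions are assumptions for illustration, not requirements taken from this article or from ASME A112.14.3; real sizing should follow the applicable plumbing code and manufacturer data.

```python
def sink_flow_rate_gpm(length_in, width_in, depth_in, compartments=1,
                       fill_factor=0.75, drain_period_min=1.0):
    """Estimate the peak drain flow (US gallons per minute) from sink size.

    fill_factor and drain_period_min are rule-of-thumb assumptions
    (about 75% usable volume, drained over 1-2 minutes), not values
    taken from any particular standard cited here.
    """
    cubic_inches = length_in * width_in * depth_in * compartments
    gallons = cubic_inches / 231.0      # 231 cubic inches per US gallon
    return gallons * fill_factor / drain_period_min


# Hypothetical example: a 3-compartment sink, 18 x 18 x 12 inches per bowl.
flow = sink_flow_rate_gpm(18, 18, 12, compartments=3)
print(f"Design flow: {flow:.1f} GPM")
# A trap or interceptor would then be chosen with a rated flow (and grease
# retention capacity) at or above this figure.
```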
Passive grease traps are generally smaller, point-of-use units used under three-compartment sinks or adjacent to dishwashers in kitchens.
Large in-ground tanks are also passive grease interceptors. These units, made of concrete, fiberglass, or steel, have greater grease and solid storage capacities for high-flow applications such as a restaurant or hospital. They are commonly called gravity interceptors. Interceptors require a retention time of 30 minutes to allow the fats, oils, grease, and food solids to settle in the tank. As more wastewater enters the tank, the grease-free water is pushed out of the tank. The rotting brown grease inside a grease trap or grease interceptor must be pumped out on a scheduled basis. The brown grease is not recycled and goes to landfills; on average, a substantial quantity of brown grease goes to landfill annually from each restaurant.
Passive grease traps and passive grease interceptors must be emptied and cleaned when 25% full. As the passive devices fill with fats, oils, and grease, they become less productive for grease recovery. A full grease trap does not stop any FOG from entering the sanitary sewer system. The emptied contents or "brown grease" is considered hazardous waste in many jurisdictions.
A third system type, hydromechanical grease interceptors (HGIs), has become more popular in recent years as restaurants open in more nontraditional sites. Often, these sites don't have space for a large concrete grease interceptor. HGIs take up less space and hold more grease as a percent of their liquid capacity, often between 70 and 85% of their liquid capacity, or even higher in the case of some "Trapzilla" models. These interceptors are third-party certified to meet efficiency standards. Most are made out of durable plastic or fiberglass, lasting much longer than concrete gravity grease interceptors. They are usually lightweight and easy to install without heavy equipment. Most manufacturers test beyond the minimum standard to demonstrate the full capacity of the unit.
Finally, automatic grease removal devices or recovery units offer an alternative to hydromechanical grease interceptors in kitchens. While their tanks passively intercept grease, they have an automatic, motorized mechanism for removing the grease from the tank and isolating it in a container. These interceptors must meet the same efficiency standards as a passive HGI, but must also meet an additional standard that proves they are capable of skimming the grease effectively.
They are often designed to be installed unobtrusively in a commercial kitchen, in a corner, or under a sink. The upfront cost of these units can be higher, but kitchen staff can handle the minimal maintenance required, avoiding pumping fees. The compact design of these units allows them to fit in tight spaces, and simplifies installation.
Uses
Restaurant and food service kitchens produce waste grease which is present in the drain lines from various sinks, dishwashers and cooking equipment such as combi ovens and commercial woks. Rotisserie ovens have also become big sources of waste grease. If not removed, the grease can clump and cause blockage and back-up in the sewer.
In the US, sewers back up an estimated 400,000 times annually, and municipal sewers overflow on 40,000 occasions. The U.S. Environmental Protection Agency has determined that sewer pipe blockages are the leading cause of sewer overflows, and grease is the primary cause of sewer blockages in the United States. Even if accumulated FOG does not escalate into blockages and sanitary sewer overflows, it can disrupt wastewater utility operations and increase operations and maintenance requirements.
For these reasons, depending on the country, nearly all municipalities require commercial kitchen operations to use some type of interceptor device to collect grease before it enters sewers. Where FOG is a concern in the local wastewater system, communities have established inspection programs to ensure that these grease traps and/or interceptors are being routinely maintained.
It is estimated that 50% of all sewer overflows are caused by grease blockages, resulting in large volumes of raw sewage spilled annually.
Method of operation
When the outflow from the kitchen sink enters the grease trap, the solid food particles sink to the bottom, while lighter grease and oil float to the top. The relatively grease-free water is then fed into the normal septic system. The food solids at the bottom and floating oil and grease must be periodically removed in a manner similar to septic tank pumping. A traditional grease trap is not a food disposal unit. Unfinished food must be scraped into the garbage or food recycling bin. Gravy, sauces and food solids must be scraped off dishes before entering the sink or dishwasher.
To maintain some degree of efficiency, there has been a trend to specify larger traps. Unfortunately, providing a large tank for the effluent to stand in also means that food waste has time to settle to the bottom of the tank, reducing available volume and adding to clean-out problems. Also, rotting food contained within an interceptor breaks down, producing toxic waste (such as sulfur gases); hydrogen sulfide combines with moisture and is oxidized to form sulfuric acid. This attacks mild steel and concrete materials, resulting in "rot out". Polyethylene, on the other hand, has acid-resisting properties. A larger interceptor is not a better interceptor. In most cases, multiple interceptors in series will separate grease much better.
Because it has been in the trap for some time, grease thus collected will be contaminated and is unsuitable for further use. This type of grease is called brown grease.
Brown grease
Waste from passive grease traps and gravity interceptors is called brown grease. Brown grease is rotted food solids in combination with fats, oils, and grease (FOG). Brown grease is pumped from the traps and interceptors by grease pumping trucks. Unlike the collected yellow grease, the majority of brown grease goes to landfill sites. New facilities (2012) and new technology are beginning to allow brown grease to be recycled.
References
External links
A112.14.3 Grease Interceptors Standard and A112.14.6 FOG (Fats, Oils, & Greases) Disposal Systems Standard, American Society of Mechanical Engineers (ASME)
Plumbing
Sewerage infrastructure
Sanitation | Grease trap | [
"Chemistry",
"Engineering"
] | 1,781 | [
"Water treatment",
"Plumbing",
"Sewerage infrastructure",
"Construction"
] |
10,721,443 | https://en.wikipedia.org/wiki/Journal%20of%20Hydrologic%20Engineering | The Journal of Hydrologic Engineering is a monthly engineering journal, first published by the American Society of Civil Engineers in 1996. The journal provides information on the development of new hydrologic methods, theories, and applications to current engineering problems. It publishes papers on analytical, experimental, and numerical methods with regard to the investigation and modeling of hydrological processes. It also publishes technical notes, book reviews, and forum discussions. Though the journal is based in the United States, articles dealing with subjects from around the world are accepted and published. The journal requires the use of the metric system, but allows for authors to also submit their papers in other systems of measure in addition to the SI system.
The journal is run by an editor-in-chief and a number of associate editors, who are respected professionals in the fields of hydrology and hydraulic engineering. The editors come from both academic and professional backgrounds and are responsible for screening submissions and forwarding articles to journal reviewers. The journal reviewers are subject matter experts who volunteer to review articles in order to determine if they should be published by the journal. The current editor-in-chief is R. S. Govindaraju of Purdue University.
G. V. Loganathan of Virginia Polytechnic Institute and State University (a victim of the Virginia Tech massacre on 16 April 2007) was an associate editor.
Editors
The following individuals have served as the editor-in-chief:
Rao S. Govindaraju (2013 – present)
Vijay P. Singh (2005 – 2013)
M. Levent Kavvas (1996–2005)
Indexes
The journal is indexed in Google Scholar, Baidu, Elsevier (Ei Compendex), Clarivate Analytics (Web of Science), ProQuest, Civil engineering database, TRDI, OCLC (WorldCat), IET/INSPEC, Crossref, Scopus, and EBSCOHost.
See also
List of scientific journals
References
External links
ASCE Library
Journal website
Academic journals established in 1996
Hydrology journals
Hydraulic engineering
Hydrologic Engineering
American Society of Civil Engineers academic journals | Journal of Hydrologic Engineering | [
"Physics",
"Engineering",
"Environmental_science"
] | 421 | [
"Hydrology",
"Hydrology journals",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
10,730,216 | https://en.wikipedia.org/wiki/Calcium%20hexaboride | Calcium hexaboride (sometimes calcium boride) is a compound of calcium and boron with the chemical formula CaB6. It is an important material due to its high electrical conductivity , hardness, chemical stability, and melting point. It is a black, lustrous, chemically inert powder with a low density. It has the cubic structure typical for metal hexaborides, with octahedral units of 6 boron atoms combined with calcium atoms. CaB6 and lanthanum-doped CaB6 both show weak ferromagnetic properties, which is a remarkable fact because calcium and boron are neither magnetic, nor have inner 3d or 4f electronic shells, which are usually required for ferromagnetism.
Properties
CaB6 has been investigated in the past due to a variety of peculiar physical properties, such as superconductivity, valence fluctuation and Kondo effects. However, the most remarkable property of CaB6 is its ferromagnetism. It occurs at an unexpectedly high temperature (600 K) and with a low magnetic moment (below 0.07 μB per atom). The origin of this high-temperature ferromagnetism has been attributed variously to the ferromagnetic phase of a dilute electron gas, to linkage with the presumed excitonic state in calcium boride, or to external impurities on the surface of the sample. The impurities might include iron and nickel, probably coming from impurities in the boron used to prepare the sample.
CaB6 is insoluble in H2O, MeOH (methanol), and EtOH (ethanol) and dissolves slowly in acids. Its microhardness is 27 GPa, Knoop hardness is 2600 kg/mm2, Young modulus is 379 GPa, and electrical resistivity is greater than 2·10^10 Ω·m for pure crystals. CaB6 is a semiconductor with an energy gap estimated as 1.0 eV. The low, semi-metallic conductivity of many CaB6 samples can be explained by unintentional doping due to impurities and possible non-stoichiometry.
Structural information
The crystal structure of calcium hexaboride is a cubic lattice with calcium at the cell centre and compact, regular octahedra of boron atoms linked at the vertices by B-B bonds to give a three-dimensional boron network. Each calcium has 24 nearest-neighbor boron atoms. The calcium atoms are arranged in simple cubic packing so that there are holes between groups of eight calcium atoms situated at the vertices of a cube. The simple cubic structure is expanded by the introduction of the octahedral B6 groups, and the structure is a CsCl-like packing of the calcium and hexaboride groups. Another way of describing calcium hexaboride is as a metal cation with B6^2− octahedral polymeric anions in a CsCl-type structure, where the calcium atoms occupy the Cs sites and the B6 octahedra the Cl sites. The Ca-B bond length is 3.05 Å and the B-B bond length is 1.7 Å.
43Ca NMR data contain a δpeak at −56.0 ppm and a δiso at −41.3 ppm, where δiso is taken as the peak maximum + 0.85 × width; the negative shift is due to the high coordination number.
Raman Data: Calcium hexaboride has three Raman peaks at 754.3, 1121.8, and 1246.9 cm−1 due to the active modes A1g, Eg, and T2g respectively.
Observed vibrational frequencies (cm−1): 1270 (strong) from the A1g stretch; 1154 (medium) and 1125 (shoulder) from the Eg stretch; 526, 520, 485, and 470 from the F1g rotation; 775 (strong) and 762 (shoulder) from the F2g bend; 1125 (strong) and 1095 (weak) from the F1u bend; 330 and 250 from the F1u translation; and 880 (medium) and 779 from the F2u bend.
Preparation
One of the main reactions for industrial production is:
CaO + 3 B2O3 + 10 Mg → CaB6 + 10 MgO
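As a simple illustration of the mass balance implied by the reaction above, the sketch below (assuming standard atomic masses; the figures are rounded and ignore excess reagent and yield losses) converts the molar ratios into approximate reactant masses per kilogram of CaB6.

```python
# Approximate standard atomic masses (g/mol)
M = {"Ca": 40.08, "B": 10.81, "O": 16.00, "Mg": 24.31}

M_CaB6 = M["Ca"] + 6 * M["B"]      # ~104.9 g/mol
M_CaO  = M["Ca"] + M["O"]
M_B2O3 = 2 * M["B"] + 3 * M["O"]
M_Mg   = M["Mg"]

# CaO + 3 B2O3 + 10 Mg -> CaB6 + 10 MgO
mol_product = 1000.0 / M_CaB6       # moles of CaB6 per kilogram of product
print(f"Per kg of CaB6: "
      f"{mol_product * M_CaO / 1000:.2f} kg CaO, "
      f"{mol_product * 3 * M_B2O3 / 1000:.2f} kg B2O3, "
      f"{mol_product * 10 * M_Mg / 1000:.2f} kg Mg")
# -> roughly 0.5 kg CaO, 2.0 kg B2O3 and 2.3 kg Mg per kg of CaB6
```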
Other methods of producing CaB6 powder include:
Direct reaction of calcium or calcium oxide and boron at 1000 °C;
Ca + 6B → CaB6
Reacting Ca(OH)2 with boron in vacuum at about 1700 °C (carbothermal reduction);
Ca(OH)2 +7B → CaB6 + BO(g) + H2O(g)
Reacting calcium carbonate with boron carbide in vacuum at above 1400 °C (carbothermal reduction)
Reacting CaO, H3BO3, and Mg at 1100 °C.
Low-temperature (500 °C) synthesis
CaCl2 + 6NaBH4 → CaB6 + 2NaCl + 12H2 + 4Na
results in relatively poor quality material.
To produce pure CaB6 single crystals, e.g., for use as cathode material, the thus obtained CaB6 powder is further recrystallized and purified with the zone melting technique. The typical growth rate is 30 cm/h and crystal size ~1x10 cm.
Single-crystal CaB6 Nanowires (diameter 15–40 nm, length 1–10 micrometres) can be obtained by pyrolysis of diborane (B2H6) over calcium oxide (CaO) powders at 860–900 °C, in presence of Ni catalyst.
Uses
Calcium hexaboride is used in the manufacturing of boron-alloyed steel and as a deoxidation agent in production of oxygen-free copper. The latter results in higher conductivity than conventionally phosphorus-deoxidized copper owing to the low solubility of boron in copper. CaB6 can also serve as a high temperature material, surface protection, abrasives, tools, and wear resistant material.
CaB6 is highly conductive, has a low work function, and thus can be used as a hot cathode material. When used at elevated temperature, calcium hexaboride will oxidize, degrading its properties and shortening its usable lifespan.
CaB6 is also a promising candidate for n-type thermoelectric materials, because its power factor is larger than or comparable to that of common thermoelectric materials Bi2Te3 and PbTe.
CaB6 can also be used as an antioxidant in carbon-bonded refractories.
Precautions
Calcium hexaboride is irritating to the eyes, skin, and respiratory system. This product should be handled with proper protective eyeware and clothing. Never put calcium hexaboride down the drain or add water to it.
See also
Boride
Calcium
References
Further reading
Borides
Calcium compounds
Deoxidizers
Non-stoichiometric compounds
Ferromagnetic materials | Calcium hexaboride | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,433 | [
"Non-stoichiometric compounds",
"Deoxidizers",
"Ferromagnetic materials",
"Metallurgy",
"Materials",
"Matter"
] |
7,090,506 | https://en.wikipedia.org/wiki/Dielectric%20loss | In electrical engineering, dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.g. heat). It can be parameterized in terms of either the loss angle or the corresponding loss tangent . Both refer to the phasor in the complex plane whose real and imaginary parts are the resistive (lossy) component of an electromagnetic field and its reactive (lossless) counterpart.
Electromagnetic field perspective
For time-varying electromagnetic fields, the electromagnetic energy is typically viewed as waves propagating either through free space, in a transmission line, in a microstrip line, or through a waveguide. Dielectrics are often used in all of these environments to mechanically support electrical conductors and keep them at a fixed separation, or to provide a barrier between different gas pressures yet still transmit electromagnetic power. Maxwell's equations are solved for the electric and magnetic field components of the propagating waves that satisfy the boundary conditions of the specific environment's geometry. In such electromagnetic analyses, the parameters permittivity ε, permeability μ, and conductivity σ represent the properties of the media through which the waves propagate. The permittivity can have real and imaginary components (the latter excluding σ effects, see below) such that
ε = ε′ − jε″
If we assume that we have a wave function such that
E = E0 e^(jωt)
then Maxwell's curl equation for the magnetic field can be written as:
∇ × H = jωε′E + (ωε″ + σ)E
where ε″ is the imaginary component of permittivity attributed to bound charge and dipole relaxation phenomena, which gives rise to energy loss that is indistinguishable from the loss due to the free charge conduction that is quantified by σ. The component ε′ represents the familiar lossless permittivity, given by the product of the free space permittivity and the relative real/absolute permittivity, or
ε′ = ε0 εr
Loss tangent
The loss tangent is then defined as the ratio (or angle in a complex plane) of the lossy reaction to the electric field in the curl equation to the lossless reaction:
tan δ = (ωε″ + σ) / (ωε′)
Solution for the electric field of the electromagnetic wave is
E = E0 e^(jωt) e^(−j(2π/λ)√(1 − j·tan δ)·z)
where:
ω is the angular frequency of the wave, and
λ is the wavelength in the dielectric material.
For dielectrics with small loss, the square root can be approximated using only the zeroth and first order terms of the binomial expansion, √(1 − j·tan δ) ≈ 1 − j·(tan δ)/2. Also, tan δ ≈ δ for small δ, so that
E ≈ E0 e^(jωt) e^(−j(2π/λ)z) e^(−(π·tan δ/λ)z)
Since power is proportional to the square of the electric field intensity, it turns out that the power decays with propagation distance z as
P = P0 e^(−(2π·tan δ/λ)z)
where:
P0 is the initial power
There are often other contributions to power loss for electromagnetic waves that are not included in this expression, such as due to the wall currents of the conductors of a transmission line or waveguide. Also, a similar analysis could be applied to the magnetic permeability where
μ = μ′ − jμ″
with the subsequent definition of a magnetic loss tangent
tan δm = μ″ / μ′
The electric loss tangent can be similarly defined:
tan δe = ε″ / ε′
upon introduction of an effective dielectric conductivity (see relative permittivity#Lossy medium).
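A small numerical sketch of the loss-tangent and power-decay expressions above is shown below. The material values (relative permittivity 4.4 and loss tangent 0.02, roughly typical of an FR-4-like laminate in the low gigahertz range) are illustrative assumptions, not values from this article.

```python
import math

C0 = 299_792_458.0   # speed of light in vacuum, m/s

def power_loss_db_per_m(freq_hz, eps_r, tan_delta):
    """Dielectric power attenuation in dB/m for a low-loss material,
    from P = P0 * exp(-(2*pi*tan_delta / wavelength) * z)."""
    wavelength = C0 / (freq_hz * math.sqrt(eps_r))  # wavelength in the dielectric
    alpha = 2 * math.pi * tan_delta / wavelength    # power attenuation, Np/m
    return 10 * math.log10(math.e) * alpha          # convert nepers to dB

# Illustrative values, roughly FR-4-like: eps_r = 4.4, tan_delta = 0.02
print(f"{power_loss_db_per_m(2.4e9, 4.4, 0.02):.1f} dB/m at 2.4 GHz")
# -> about 9 dB/m of dielectric loss alone (conductor losses excluded)
```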
Discrete circuit perspective
A capacitor is a discrete electrical circuit component typically made of a dielectric placed between conductors. One lumped element model of a capacitor includes a lossless ideal capacitor in series with a resistor termed the equivalent series resistance (ESR), as shown in the figure below. The ESR represents losses in the capacitor. In a low-loss capacitor the ESR is very small (the conduction in the dielectric is low), and in a lossy capacitor the ESR can be large. Note that the ESR is not simply the resistance that would be measured across a capacitor by an ohmmeter. The ESR is a derived quantity representing the loss due to both the dielectric's conduction electrons and the bound dipole relaxation phenomena mentioned above. In a dielectric, either the conduction electrons or the dipole relaxation typically dominates loss in a particular dielectric and manufacturing method. For the case of the conduction electrons being the dominant loss, then
ESR = σ / (ε′ ω² C)
where C is the lossless capacitance.
When representing the electrical circuit parameters as vectors in a complex plane, known as phasors, a capacitor's loss tangent is equal to the tangent of the angle between the capacitor's impedance vector and the negative reactive axis, as shown in the adjacent diagram. The loss tangent is then
tan δ = ESR / |Xc| = ω·C·ESR
Since the same AC current flows through both ESR and Xc, the loss tangent is also the ratio of the resistive power loss in the ESR to the reactive power oscillating in the capacitor. For this reason, a capacitor's loss tangent is sometimes stated as its dissipation factor, or the reciprocal of its quality factor Q, as follows
DF = tan δ = 1/Q
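The relationships above can be turned into a short calculation; in the sketch below the capacitance, ESR, and frequency are illustrative values only.

```python
import math

def capacitor_loss(c_farads, esr_ohms, freq_hz):
    """Return (|Xc|, loss tangent / dissipation factor, quality factor)."""
    omega = 2 * math.pi * freq_hz
    xc = 1.0 / (omega * c_farads)   # magnitude of the capacitive reactance
    tan_delta = esr_ohms / xc       # = omega * C * ESR
    return xc, tan_delta, 1.0 / tan_delta

# Illustrative values: a 10 uF capacitor with 50 milliohm ESR at 100 kHz
xc, df, q = capacitor_loss(10e-6, 0.05, 100e3)
print(f"|Xc| = {xc:.3f} ohm, DF = tan(delta) = {df:.3f}, Q = {q:.1f}")
```

For these values the reactance is about 0.16 Ω, giving a dissipation factor of roughly 0.31 and a quality factor near 3.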
References
Electromagnetism
Electrical engineering
External links
Loss in dielectrics, frequency dependence | Dielectric loss | [
"Physics",
"Engineering"
] | 982 | [
"Electromagnetism",
"Electrical engineering",
"Physical phenomena",
"Fundamental interactions"
] |
7,095,521 | https://en.wikipedia.org/wiki/Uranium%20tailings | Uranium tailings or uranium tails are a radioactive waste byproduct (tailings) of conventional uranium mining and uranium enrichment. They contain the radioactive decay products from the uranium decay chains, mainly the U-238 chain, and heavy metals. Long-term storage or disposal of tailings may pose a danger for public health and safety.
Production
Uranium mill tailings are primarily the sandy process waste material from a conventional uranium mill. Milling is the first step in making fuel for nuclear reactors from natural uranium ore. The uranium extract is transformed into yellowcake.
The raw uranium ore is brought to the surface and crushed into a fine sand. The valuable uranium-bearing minerals are then removed via heap leaching with the use of acids or bases, and the remaining radioactive sludge, called "uranium tailings", is stored in huge impoundments. A short ton (907 kg) of ore yields one to five pounds (0.45 to 2.3 kg) of uranium depending on the uranium content of the mineral. Uranium tailings can retain up to 85% of the ore's original radioactivity.
Composition
The tailings contain mainly decay products from the decay chain involving Uranium-238. Uranium tailings contain over a dozen radioactive nuclides, which are the primary hazard posed by the tailings. The most important of these are thorium-230, radium-226, radon-222 (radon gas) and the daughter isotopes of radon decay, including polonium-210. All of those are naturally occurring radioactive materials or "NORM".
Health risks
Tailings contain heavy metals and radioactive radium. Radium then decays over thousands of years and radioactive radon gas is produced. Tailings are kept in piles for long-term storage or disposal and need to be maintained and monitored for leaks over the long term.
If uranium tailings are stored aboveground and allowed to dry out, the radioactive sand can be carried great distances by the wind, entering the food chain and bodies of water. The danger posed by such sand dispersal is uncertain at best given the dilution effect of dispersal. The majority of tailing mass will be inert rock, just as it was in the raw ore before the extraction of the uranium, but physically altered, ground up, mixed with large amounts of water and exposed to atmospheric oxygen, which can substantially alter chemical behaviour.
An EPA estimate of risk based on uranium tailings deposits existing in the United States in 1983 gave the figure of 500 lung cancer deaths per century if no countermeasures are taken.
See also
List of uranium mines
Uranium Mill Tailings Radiation Control Act
References
Radioactive waste
Uranium mining | Uranium tailings | [
"Physics",
"Chemistry",
"Technology"
] | 539 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Environmental impact of nuclear power",
"Radioactivity",
"Nuclear physics",
"Hazardous waste",
"Radioactive waste"
] |
7,096,085 | https://en.wikipedia.org/wiki/Construction%20aggregate | Construction aggregate, or simply aggregate, is a broad category of coarse- to medium-grained particulate material used in construction. Traditionally, it includes natural materials such as sand, gravel, crushed stone. As with other types of aggregates, it is a component of composite materials, particularly concrete and asphalt.
Aggregates are the most mined materials in the world, being a significant part of 6 billion tons of concrete produced per year.
Aggregate serves as reinforcement to add strength to the resulting material.
Due to the relatively high hydraulic conductivity as compared to most soil types, aggregates are widely used in drainage applications such as foundation and French drains, septic drain fields, retaining wall drains, and roadside edge drains. Aggregates are also used as base material under building foundations, roads, and railroads (aggregate base). It has predictable, uniform properties, preventing differential settling under the road or building.
Aggregates are also used as a low-cost extender that binds with more expensive cement or asphalt to form concrete. Although most kinds of aggregate require a form of binding agent, there are types of self-binding aggregate which require no form of binding agent.
More recently, recycled concrete and geosynthetic materials have also been used as aggregates.
Sources
Sources for these basic materials can be grouped into three main areas: mining of mineral aggregate deposits, including sand, gravel, and stone; use of waste slag from the manufacture of iron and steel; and recycling of concrete, which is itself chiefly manufactured from mineral aggregates. In addition, there are some (minor) materials that are used as specialty lightweight aggregates: clay, pumice, perlite, and vermiculite. Other minerals include:
basalt
dolomite
granite
gravel
limestone
sand
sandstone
Specifications
In Europe, sizing ranges are specified as d/D, where the d shows the smallest and D shows the largest square mesh grating that the particles can pass. Application-specific preferred sizings are covered in European Standard EN 13043 for road construction, EN 13383 for larger armour stone, EN 12620 for concrete aggregate, EN 13242 for base layers of road construction, and EN 13450 for railway ballast.
The American Society for Testing and Materials publishes an exhaustive listing of specifications including ASTM D 692 and ASTM D 1073 for various construction aggregate products, which, by their individual design, are suitable for specific construction purposes. These products include specific types of coarse and fine aggregate designed for such uses as additives to asphalt and concrete mixes, as well as other construction uses. State transportation departments further refine aggregate material specifications in order to tailor aggregate use to the needs and available supply in their particular locations.
History
People have used sand and stone for foundations for thousands of years. Significant refinement of the production and use of aggregate occurred during the Roman Empire, which used aggregate to build its vast network of roads and aqueducts. The invention of concrete, which was essential to architecture utilizing arches, created an immediate, permanent demand for construction aggregates.
Vitruvius writes in De architectura:
Economy denotes the proper management of materials and of site, as well as a thrifty balancing of cost and common sense in the construction of works. This will be observed if, in the first place, the architect does not demand things which cannot be found or made ready without great expense. For example: it is not everywhere that there is plenty of pit-sand, rubble, fir, clear fir, and marble... Where there is no pit sand, we must use the kinds washed up by rivers or by the sea... and other problems we must solve in similar ways.
Modern production
The advent of modern blasting methods enabled the development of quarries, which are now used throughout the world, wherever competent bedrock deposits of aggregate quality exist. In many places, good limestone, granite, marble or other quality stone bedrock deposits do not exist. In these areas, natural sand and gravel are mined for use as aggregate. Where neither stone, nor sand and gravel, are available, construction demand is usually satisfied by shipping in aggregate by rail, barge or truck. Additionally, demand for aggregates can be partially satisfied through the use of slag and recycled concrete. However, the available tonnages and lesser quality of these materials prevent them from being a viable replacement for mined aggregates on a large scale.
Large stone quarry and sand and gravel operations exist near virtually all population centers due to the high cost of transportation relative to the low value of the product. Trucking aggregate more than 40 kilometers is typically uneconomical. These are capital-intensive operations, utilizing large earth-moving equipment, belt conveyors, and machines specifically designed for crushing and separating various sizes of aggregate, to create distinct product stockpiles.
According to the USGS, 2006 U.S. crushed stone production was 1.72 billion tonnes valued at $13.8 billion (compared to 1.69 billion tonnes valued at $12.1 billion in 2005), of which limestone was 1,080 million tonnes valued at $8.19 billion from 1,896 quarries, granite was 268 million tonnes valued at $2.59 billion from 378 quarries, trap rock was 148 million tonnes valued at $1.04 billion from 355 quarries, and the balance other kinds of stone from 729 quarries. Limestone and granite are also produced in large amounts as dimension stone. The great majority of crushed stone is moved by heavy truck from the quarry/plant to the first point of sale or use. According to the USGS, 2006 U.S. sand and gravel production was 1.32 billion tonnes valued at $8.54 billion (compared to 1.27 billion tonnes valued at $7.46 billion in 2005), of which 264 million tonnes valued at $1.92 billion was used as concrete aggregates. The great majority of this was again moved by truck, instead of by electric train.
In recent years, total U.S. aggregate demand by final market sector has been roughly 30%–35% for non-residential building (offices, hotels, stores, manufacturing plants, government and institutional buildings, and others), 25% for highways, and 25% for housing.
Recycled materials
Recycled material such as blast furnace and steel furnace slag can be used as aggregate or partly substitute for portland cement. Blast furnace and steel slag is either air-cooled or water-cooled. Air-cooled slag can be used as aggregate. Water-cooled slag produces sand-sized glass-like particles (granulated). Adding free lime to the water during cooling gives granulated slag hydraulic cementitious properties.
In 2006, according to the USGS, air-cooled blast furnace slag sold or used in the U.S. was 7.3 million tonnes valued at $49 million, granulated blast furnace slag sold or used in the U.S. was 4.2 million tonnes valued at $318 million, and steel furnace slag sold or used in the U.S. was 8.7 million tonnes valued at $40 million. Air-cooled blast furnace slag sales in 2006 were for use in road bases and surfaces (41%), asphaltic concrete (13%), ready-mixed concrete (16%), and the balance for other uses. Granulated blast furnace slag sales in 2006 were for use in cementitious materials (94%), and the balance for other uses. Steel furnace slag sales in 2006 were for use in road bases and surfaces (51%), asphaltic concrete (12%), for fill (18%), and the balance for other uses.
Recycled glass aggregate, crushed to a small size, is used in many construction and utility projects in place of pea gravel or crushed rock. Glass aggregate is not dangerous to handle. It can be used as pipe bedding—placed around sewer, storm water or drinking water pipes to transfer weight from the surface and protect the pipe. Another common use is as fill to bring the level of a concrete floor even with a foundation. Use of glass aggregate helps close the loop in glass recycling in many places where glass cannot be smelted into new glass.
Aggregates themselves can be recycled as aggregates. Recyclable aggregate tends to be concentrated in urban areas. The supply of recycled aggregate depends on physical decay and demolition of structures. Mobile recycling plants eliminate the cost of transporting the material to a central site. The recycled material is typically of variable quality.
Many aggregate products are recycled for other industrial purposes. Contractors save on disposal costs and less aggregate is buried or piled and abandoned. In Bay City, Michigan, for example, a recycle program exists for unused products such as mixed concrete, block, brick, gravel, pea stone, and other used materials. The material is crushed to provide subbase for roads and driveways, among other purposes.
According to the USGS in 2006, 2.9 million tonnes of Portland cement concrete (including aggregate) worth $21.9 million was recycled, and 1.6 million tonnes of asphalt concrete (including aggregate) worth $11.8 million was recycled, both by crushed stone operations. Much more of both materials are recycled by construction and demolition firms not included in the USGS survey. For sand and gravel, the survey showed that 4.7 million tonnes of cement concrete valued at $32.0 million was recycled, and 6.17 million tonnes of asphalt concrete valued at $45.1 million was recycled. Again, more of both materials are recycled by construction and demolition firms not in this USGS survey. The Construction Materials Recycling Association indicates that there are 325 million tonnes of recoverable construction and demolition materials produced annually.
Organic materials
Many geosynthetic aggregates are made from recycled materials. Recyclable plastics can be reused in aggregates. For example, Ring Industrial Group's EZflow product lines are produced with geosynthetic aggregate pieces that are more than 99.9% recycled polystyrene. This polystyrene, otherwise destined for a landfill, is gathered, melted, mixed, reformulated and expanded to create low density aggregates that maintain high strength properties under compressive loads. Such geosynthetic aggregates replace conventional gravel while simultaneously increasing porosity, increasing hydraulic conductivity and eliminating the fine dust "fines" inherent to gravel aggregates which otherwise serve to clog and disrupt the operation of many drainage applications.
Several groups have attempted to use minced tires as part of concrete aggregate. The result is tougher than regular concrete, because it can bend instead of breaking under pressure. However, tires reduce compressive strength partially because the cement bonds poorly with the rubber. Pores in the rubber fill with water when the concrete is mixed, but become voids as the concrete sets. One group put the concrete under pressure as it sets, reducing pore volumes.
Recycled aggregates in the UK
Recycled aggregate in the UK results from the processing of construction material. To ensure the aggregate is inert, it is manufactured from material tested and characterised under European Waste Codes.
In 2008, 210 million tonnes of aggregate were produced including 67 million tonnes of recycled product, according to the Quarry Products Association. The Waste and Resource Action Programme has produced a Quality Protocol for the regulated production of recycled aggregates.
See also
Aggregate (composite)
Aggregate base
Aggregate industry in the United States
Alkali-aggregate reaction
Alkali–silica reaction
Concrete
Crushed stone
Dimension stone – stone recycling and reuse
Hoggin
Interfacial transition zone (ITZ)
Marble
Pozzolanic reaction
Road metal
Saturated-surface-dry
Tumble finishing
References
Citations
Sources
UEPG – The European Aggregates Association
Samscreen International
The National Stone, Sand & Gravel Association
Pit and Quarry University
"Rock to Road" (Industry publication - Canada)
The American Society for Testing Materials
Gravel Watch Ontario
Oregon Concrete & Aggregate Producers Association
Portland Cement Association
Pavement Interactive article on Aggregates
2006 USGS Minerals Yearbook: Stone, Crushed
2005 USGS Minerals Yearbook: Stone, Crushed
2006 USGS Minerals Yearbook: Construction Sand and Gravel
2005 USGS Minerals Yearbook: Construction Sand and Gravel
Construction Aggregate, in June 2007 Mining Engineering (private membership)
2006 USGS Minerals Yearbook: Iron & Steel Slag
Aggregates from Natural and Recycled Sources-Economic Assessments
Construction Materials Recycling Association
MN DNR Aggregate Resource Mapping Program – Division of Lands and Minerals
Quarrying in Depth Recycling
Recycling Tonnages and Primary aggregate production figures
Alberta Sand and Gravel Association (Canada)
Aggregate (composite)
Building stone
Concrete
Granularity of materials
Pavements
Stone (material)
Quarrying
Industrial minerals | Construction aggregate | [
"Physics",
"Chemistry",
"Engineering"
] | 2,555 | [
"Structural engineering",
"Materials",
"Concrete",
"Particle technology",
"Granularity of materials",
"Matter"
] |
7,096,097 | https://en.wikipedia.org/wiki/Gas%20thermometer | A gas thermometer is a thermometer that measures temperature by the variation in volume or pressure of a gas.
Volume Thermometer
This thermometer functions by Charles's Law, which states that, at constant pressure, the volume of a gas increases in proportion to its temperature.
Using Charles's Law, the temperature can be determined from the measured volume of the gas with the formula written below, and then translated into graduation marks on the device holding the gas. This works on the same principle as mercury thermometers.
V = kT, or equivalently V/T = k,
where:
V is the volume,
T is the thermodynamic temperature,
k is the constant for the system.
k is not a fixed constant across all systems and therefore needs to be found experimentally for a given system through testing with known temperature values.
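A minimal Python sketch of this calibrate-and-read procedure, assuming constant pressure; the reference volume and temperature are illustrative values, not figures from this article:

```python
# Volume gas thermometer: Charles's Law gives V = k*T at constant pressure.
# Calibrate k from one known reference point, then read temperature from volume.

def calibrate_k(v_ref_ml: float, t_ref_kelvin: float) -> float:
    """System constant k = V/T from a single reference measurement."""
    return v_ref_ml / t_ref_kelvin

def temperature_from_volume(v_ml: float, k: float) -> float:
    """Invert Charles's Law: T = V/k."""
    return v_ml / k

# Illustrative calibration: 100 mL observed at the ice point (273.15 K).
k = calibrate_k(100.0, 273.15)
print(temperature_from_volume(109.2, k))  # about 298 K for a 9.2% larger volume
```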
Pressure Thermometer and Absolute Zero
The constant volume gas thermometer plays a crucial role in understanding how absolute zero could be discovered long before the advent of cryogenics. Consider a graph of pressure versus temperature made not far from standard conditions (well above absolute zero) for three different samples of any ideal gas. To the extent that the gas is ideal, the pressure depends linearly on temperature, and the extrapolation to zero pressure occurs at absolute zero. Note that data could have been collected with three different amounts of the same gas, which would have rendered this experiment easy to do in the eighteenth century.
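The extrapolation described above can be illustrated with a small sketch; the pressure readings are made-up illustrative values for a fixed amount of gas at constant volume, and the fitted intercept estimates absolute zero on the Celsius scale:

```python
import numpy as np

# Constant-volume gas thermometer: pressure is linear in Celsius temperature,
# P = a*t + b, so extrapolating to P = 0 gives t = -b/a, an estimate of absolute zero.
t_celsius = np.array([0.0, 25.0, 50.0, 75.0, 100.0])          # illustrative
pressure_kpa = np.array([100.0, 109.2, 118.3, 127.5, 136.6])  # illustrative readings

a, b = np.polyfit(t_celsius, pressure_kpa, 1)  # slope and intercept of the fit
print(f"estimated absolute zero: {-b / a:.1f} degrees Celsius")  # close to -273
```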
History
See also
Thermodynamic instruments
Boyle's law
Combined gas law
Gay-Lussac's law
Avogadro's law
Ideal gas law
References
Thermometers
Gases
fr:Thermomètre#Thermomètre à gaz | Gas thermometer | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 348 | [
"Thermodynamics stubs",
"Statistical mechanics stubs",
"Matter",
"Phases of matter",
"Measuring instruments",
"Thermodynamics",
"Thermometers",
"Statistical mechanics",
"Physical chemistry stubs",
"Gases"
] |
7,096,967 | https://en.wikipedia.org/wiki/Ground%20source%20heat%20pump | A ground source heat pump (also geothermal heat pump) is a heating/cooling system for buildings that uses a type of heat pump to transfer heat to or from the ground, taking advantage of the relative constancy of ground temperatures through the seasons. Ground-source heat pumps (GSHPs), or geothermal heat pumps (GHPs) as they are commonly termed in North America, are among the most energy-efficient technologies for providing HVAC and water heating, using less energy than resistive electric heaters.
Efficiency is given as a coefficient of performance (COP), typically in the range of 3 to 6, meaning that the device provides 3 to 6 units of heat for each unit of electricity used. Setup costs are higher than for other heating systems, owing to the requirement to install ground loops over large areas or to drill boreholes, so ground source heat pumps are often installed when new blocks of flats are built. Air-source heat pumps have lower set-up costs.
Thermal properties of the ground
Ground-source heat pumps take advantage of the difference between the ambient temperature and the temperature at various depths in the ground.
The thermal properties of the ground near the surface can be described as follows:
In the surface layer to a depth of about 1 meter, the temperature is very sensitive to sunlight and weather.
In the shallow layer to a depth of about 8–20 meters (depending on soil type), the thermal mass of the ground causes temperature variation to decrease exponentially with depth until it is close to the local annual average air temperature; it also lags behind the surface temperature, so that the peak temperature is about 6 months after the surface peak temperature.
Below that, in the deeper layer, the temperature is effectively constant, rising about 0.025 °C per metre according to the geothermal gradient.
The "penetration depth" is defined as the depth at which the temperature variation is less than 0.01 of the variation at the surface; it too depends on the type of soil.
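The damping and lag described above follow from one-dimensional heat conduction with a sinusoidal surface temperature. The sketch below uses that standard textbook model; the mean temperature, annual swing and soil thermal diffusivity are assumed illustrative values, not figures from this article:

```python
import math

def ground_temperature(depth_m, day_of_year, t_mean=10.0, amplitude=12.0,
                       diffusivity_m2_per_day=0.05):
    """Damped, lagged annual temperature wave T(z, t) from 1-D heat conduction.

    t_mean    -- annual mean surface temperature, degrees C (assumed)
    amplitude -- annual surface temperature swing, degrees C (assumed)
    diffusivity_m2_per_day -- soil thermal diffusivity (assumed)
    """
    omega = 2.0 * math.pi / 365.0                        # rad/day
    d = math.sqrt(2.0 * diffusivity_m2_per_day / omega)  # damping depth, m
    return t_mean + amplitude * math.exp(-depth_m / d) * math.sin(
        omega * day_of_year - depth_m / d)

# Depth at which the annual variation falls below 1% of the surface swing:
omega = 2.0 * math.pi / 365.0
d = math.sqrt(2.0 * 0.05 / omega)
print("penetration depth ~", round(math.log(100.0) * d, 1), "m")  # ~11 m here
```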
History
The heat pump was described by Lord Kelvin in 1853 and developed by Peter Ritter von Rittinger in 1855. Heinrich Zoelly had patented the idea of using it to draw heat from the ground in 1912.
After experimentation with a freezer, Robert C. Webber built the first direct exchange ground source heat pump in the late 1940s; sources disagree, however, as to the exact timeline of his invention. The first successful commercial project was installed in the Commonwealth Building (Portland, Oregon) in 1948, and has been designated a National Historic Mechanical Engineering Landmark by ASME. Professor Carl Nielsen of Ohio State University built the first residential open loop version in his home in 1948.
As a result of the 1973 oil crisis, ground source heat pumps became popular in Sweden and have since grown slowly in worldwide popularity as the technology has improved. Open loop systems dominated the market until the development of polybutylene pipe in 1979 made closed loop systems economically viable.
As of 2004, there are over a million units installed worldwide, providing 12 GW of thermal capacity with a growth rate of 10% per year. Each year (as of 2011/2004, respectively), about 80,000 units are installed in the US and 27,000 in Sweden. In Finland, a geothermal heat pump was the most common heating system choice for new detached houses between 2006 and 2011 with market share exceeding 40%.
Arrangement
Internal arrangement
A heat pump is the central unit for the building's heating and cooling. It usually comes in two main variants:
Liquid-to-water heat pumps (also called water-to-water) are hydronic systems that distribute heating or cooling through the building via pipes to conventional radiators, underfloor heating, baseboard radiators and hot water tanks. These heat pumps are also preferred for pool heating. Heat pumps can only heat water efficiently to lower temperatures than boilers typically deliver. The size of radiators designed for the higher temperatures achieved by boilers may be too small for use with heat pumps, requiring replacement with larger radiators when retrofitting a home from boiler to heat pump. When used for cooling, the temperature of the circulating water must normally be kept above the dew point to ensure that atmospheric humidity does not condense on the radiator.
Liquid-to-air heat pumps (also called water-to-air) output forced air, and are most commonly used to replace legacy forced air furnaces and central air conditioning systems. There are variations that allow for split systems, high-velocity systems, and ductless systems. Heat pumps cannot achieve as high a fluid temperature as a conventional furnace, so they require a higher volume flow rate of air to compensate. When retrofitting a residence, the existing ductwork may have to be enlarged to reduce the noise from the higher air flow.
Ground heat exchanger
Ground source heat pumps employ a ground heat exchanger in contact with the ground or groundwater to extract or dissipate heat. Incorrect design can result in the system freezing after a number of years or very inefficient system performance; thus accurate system design is critical to a successful system
Pipework for the ground loop is typically made of high-density polyethylene pipe and contains a mixture of water and anti-freeze (propylene glycol, denatured alcohol or methanol). Monopropylene glycol has the least damaging potential when it might leak into the ground, and is, therefore, the only allowed anti-freeze in ground sources in an increasing number of European countries.
Horizontal
A horizontal closed loop field is composed of pipes that are arrayed in a plane in the ground. A long trench, deeper than the frost line, is dug and U-shaped or slinky coils are spread out inside the same trench. Shallow horizontal heat exchangers experience seasonal temperature cycles due to solar gains and transmission losses to ambient air at ground level. These temperature cycles lag behind the seasons because of thermal inertia, so the heat exchanger will harvest heat deposited by the sun several months earlier, while in late winter and spring it is held back by cold accumulated over the preceding winter. Systems in wet ground or in water are generally more efficient than drier ground loops since water conducts and stores heat better than solids in sand or soil. If the ground is naturally dry, soaker hoses may be buried with the ground loop to keep it wet.
Vertical
A vertical system consists of a number of boreholes fitted with U-shaped pipes through which a heat-carrying fluid that absorbs (or discharges) heat from (or to) the ground is circulated. Boreholes are spaced at least 5–6 m apart and the depth depends on ground and building characteristics. Alternatively, pipes may be integrated with the foundation piles used to support the building. Vertical systems rely on migration of heat from surrounding geology, unless recharged during the summer and at other times when surplus heat is available. Vertical systems are typically used where there is insufficient available land for a horizontal system.
Pipe pairs in the hole are joined with a U-shaped cross connector at the bottom of the hole, or the loop comprises two small-diameter high-density polyethylene (HDPE) tubes thermally fused to form a U-shaped bend at the bottom. The space between the wall of the borehole and the U-shaped tubes is usually grouted completely with grouting material or, in some cases, partially filled with groundwater. For illustration, a detached house needing 10 kW (3 ton) of heating capacity might need three boreholes.
Radial or directional drilling
As an alternative to trenching, loops may be laid by mini horizontal directional drilling (mini-HDD). This technique can lay piping under yards, driveways, gardens or other structures without disturbing them, with a cost between those of trenching and vertical drilling. This system also differs from horizontal & vertical drilling as the loops are installed from one central chamber, further reducing the ground space needed. Radial drilling is often installed retroactively (after the property has been built) due to the small nature of the equipment used and the ability to bore beneath existing constructions.
Open loop
In an open-loop system (also called a groundwater heat pump), the secondary loop pumps natural water from a well or body of water into a heat exchanger inside the heat pump. Since the water chemistry is not controlled, the appliance may need to be protected from corrosion by using different metals in the heat exchanger and pump. Limescale may foul the system over time and require periodic acid cleaning. This is much more of a problem with cooling systems than heating systems. A standing column well system is a specialized type of open-loop system where water is drawn from the bottom of a deep rock well, passed through a heat pump, and returned to the top of the well. A growing number of jurisdictions have outlawed open-loop systems that drain to the surface because these may drain aquifers or contaminate wells. This forces the use of more environmentally sound injection wells or a closed-loop system.
Pond
A closed pond loop consists of coils of pipe similar to a slinky loop attached to a frame and located at the bottom of an appropriately sized pond or water source. Artificial ponds are used as heat storage (up to 90% efficient) in some central solar heating plants, which later extract the heat (similar to ground storage) via a large heat pump to supply district heating.
Direct exchange (DX)
The direct exchange geothermal heat pump (DX) is the oldest type of geothermal heat pump technology, in which the refrigerant itself is passed through the ground loop. Developed during the 1980s, this approach faced issues with the refrigerant and oil management system, especially after the ban of CFC refrigerants in 1989, and DX systems are now infrequently used.
Installation
Because of the technical knowledge and equipment needed to design and size the system properly (and install the piping if heat fusion is required), a GSHP system installation requires a professional's services. Several installers have published real-time views of system performance in an online community of recent residential installations. The International Ground Source Heat Pump Association (IGSHPA), Geothermal Exchange Organization (GEO), Canadian GeoExchange Coalition and Ground Source Heat Pump Association maintain listings of qualified installers in the US, Canada and the UK. Furthermore, detailed analysis of soil thermal conductivity for horizontal systems and formation thermal conductivity for vertical systems will generally result in more accurately designed systems with a higher efficiency.
Thermal performance
Cooling performance is typically expressed in units of BTU/hr/watt as the energy efficiency ratio (EER), while heating performance is typically reduced to dimensionless units as the coefficient of performance (COP). The conversion factor is 3.41 BTU/hr/watt. Since a heat pump moves three to five times more heat energy than the electric energy it consumes, the total energy output is much greater than the electrical input. This results in net thermal efficiencies greater than 300% as compared to radiant electric heat being 100% efficient. Traditional combustion furnaces and electric heaters can never exceed 100% efficiency. Ground source heat pumps can reduce energy consumption – and corresponding air pollution emissions – up to 72% compared to electric resistance heating with standard air-conditioning equipment.
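A small sketch of that unit conversion and of what a given COP means for delivered heat; the example numbers are illustrative:

```python
BTU_PER_HR_PER_WATT = 3.412  # conversion factor between EER and COP quoted above

def eer_to_cop(eer: float) -> float:
    """Convert a cooling EER (BTU/hr per watt) to a dimensionless COP."""
    return eer / BTU_PER_HR_PER_WATT

def heat_delivered_kwh(electric_input_kwh: float, cop: float) -> float:
    """Heat moved by the pump for a given electrical input."""
    return electric_input_kwh * cop

print(round(eer_to_cop(14.1), 2))       # an EER of 14.1 is a COP of about 4.1
print(heat_delivered_kwh(1000.0, 3.3))  # 3300 kWh of heat from 1000 kWh of electricity
```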
Efficient compressors, variable speed compressors and larger heat exchangers all contribute to heat pump efficiency. Residential ground source heat pumps on the market today have standard COPs ranging from 2.4 to 5.0 and EERs ranging from 10.6 to 30. To qualify for an Energy Star label, heat pumps must meet certain minimum COP and EER ratings which depend on the ground heat exchanger type. For closed-loop systems, the ISO 13256-1 heating COP must be 3.3 or greater and the cooling EER must be 14.1 or greater.
Standards ARI 210 and 240 define Seasonal Energy Efficiency Ratio (SEER) and Heating Seasonal Performance Factors (HSPF) to account for the impact of seasonal variations on air source heat pumps. These numbers are normally not applicable and should not be compared to ground source heat pump ratings. However, Natural Resources Canada has adapted this approach to calculate typical seasonally adjusted HSPFs for ground-source heat pumps in Canada. The NRC HSPFs ranged from 8.7 to 12.8 BTU/hr/watt (2.6 to 3.8 in nondimensional factors, or 255% to 375% seasonal average electricity utilization efficiency) for the most populated regions of Canada.
For the sake of comparing heat pump appliances to each other, independently from other system components, a few standard test conditions have been established by the American Refrigerant Institute (ARI) and more recently by the International Organization for Standardization. Standard ARI 330 ratings were intended for closed-loop ground-source heat pumps and assume fixed secondary loop water temperatures for air conditioning and for heating, typical of installations in the northern US. Standard ARI 325 ratings were intended for open-loop ground-source heat pumps, and include two sets of ratings for two different groundwater temperatures. ARI 325 budgets more electricity for water pumping than ARI 330. Neither of these standards attempts to account for seasonal variations. Standard ARI 870 ratings are intended for direct exchange ground-source heat pumps. ASHRAE transitioned to ISO 13256-1 in 2001, which replaces ARI 320, 325 and 330. The new ISO standard produces slightly higher ratings because it no longer budgets any electricity for water pumps.
Soil without artificial heat addition or subtraction and at depths of several metres or more remains at a relatively constant temperature year round. This temperature equates roughly to the average annual air temperature of the chosen location at typical loop depths in the northern US. Because this temperature remains more constant than the air temperature throughout the seasons, ground source heat pumps perform with far greater efficiency during extreme air temperatures than air conditioners and air-source heat pumps.
Analysis of heat transfer
A challenge in predicting the thermal response of a ground heat exchanger (GHE) is the diversity of the time and space scales involved. Four space scales and eight time scales are involved in the heat transfer of GHEs. The first space scale having practical importance is the diameter of the borehole (~ 0.1 m) and the associated time is on the order of 1 hr, during which the effect of the heat capacity of the backfilling material is significant. The second important space dimension is the half distance between two adjacent boreholes, which is on the order of several meters. The corresponding time is on the order of a month, during which the thermal interaction between adjacent boreholes is important. The largest space scale can be tens of meters or more, such as the half-length of a borehole and the horizontal scale of a GHE cluster. The time scale involved is as long as the lifetime of a GHE (decades).
The short-term hourly temperature response of the ground is vital for analyzing the energy of ground-source heat pump systems and for their optimum control and operation. By contrast, the long-term response determines the overall feasibility of a system from the standpoint of the life cycle.
The main questions that engineers may ask in the early stages of designing a GHE are (a) what the heat transfer rate of a GHE as a function of time is, given a particular temperature difference between the circulating fluid and the ground, and (b) what the temperature difference as a function of time is, given a required heat exchange rate. In the language of heat transfer, the two questions can be expressed as
Tf − T0 = ql · R(t)
where Tf is the average temperature of the circulating fluid, T0 is the effective, undisturbed temperature of the ground, ql is the heat transfer rate of the GHE per unit time per unit length (W/m), and R(t) is the total thermal resistance (m·K/W). R(t) is often an unknown variable that needs to be determined by heat transfer analysis. Despite R(t) being a function of time, analytical models typically decompose it into a time-independent part and a time-dependent part to simplify the analysis.
Various models for the time-independent and time-dependent R can be found in the references. Further, a thermal response test is often performed to make a deterministic analysis of ground thermal conductivity to optimize the loopfield size, especially for larger commercial sites (e.g., over 10 wells).
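One classical choice for the time-dependent resistance is the infinite line-source model; the article does not single out a particular model, so the sketch below is only an illustration, and the borehole radius, soil conductivity and diffusivity are assumed values:

```python
import math
from scipy.special import exp1  # exponential integral E1

def line_source_resistance(t_seconds, r_m=0.055, conductivity=2.0,
                           diffusivity=1.0e-6):
    """Ground thermal resistance R(t) in m.K/W from the infinite line-source model.

    r_m          -- borehole radius where the temperature is evaluated, m (assumed)
    conductivity -- soil thermal conductivity, W/(m.K) (assumed)
    diffusivity  -- soil thermal diffusivity, m^2/s (assumed)
    """
    return exp1(r_m ** 2 / (4.0 * diffusivity * t_seconds)) / (4.0 * math.pi * conductivity)

# Temperature difference needed to reject 30 W per metre of borehole after one month,
# from Tf - T0 = ql * R(t):
q_l = 30.0            # W/m
t = 30 * 24 * 3600.0  # one month, in seconds
print(round(q_l * line_source_resistance(t), 1), "K")  # roughly 9 K here
```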
Seasonal thermal storage
The efficiency of ground source heat pumps can be greatly improved by using seasonal thermal energy storage and interseasonal heat transfer. Heat captured and stored in thermal banks in the summer can be retrieved efficiently in the winter. Heat storage efficiency increases with scale, so this advantage is most significant in commercial or district heating systems.
Geosolar combisystems have been used to heat and cool a greenhouse using an aquifer for thermal storage. In summer, the greenhouse is cooled with cold ground water. This heats the water in the aquifer which can become a warm source for heating in winter. The combination of cold and heat storage with heat pumps can be combined with water/humidity regulation. These principles are used to provide renewable heat and renewable cooling to all kinds of buildings.
Also the efficiency of existing small heat pump installations can be improved by adding large, cheap, water-filled solar collectors. These may be integrated into a to-be-overhauled parking lot, or in walls or roof constructions by installing one-inch PE pipes into the outer layer.
Environmental impact
The US Environmental Protection Agency (EPA) has called ground source heat pumps the most energy-efficient, environmentally clean, and cost-effective space conditioning systems available. Heat pumps offer significant emission reductions potential where the electricity is produced from renewable resources.
GSHPs have unsurpassed thermal efficiencies and produce zero emissions locally, but their electricity supply includes components with high greenhouse gas emissions unless it is a 100% renewable energy supply. Their environmental impact, therefore, depends on the characteristics of the electricity supply and the available alternatives.
The GHG emissions savings from a heat pump over a conventional furnace can be calculated based on the following formula (a worked example follows the variable definitions below):
GHG savings (kg CO2 per year) = HL × FI / AFUE − HL × EI / (3.6 × COP)
where 3.6 is the number of gigajoules per megawatt-hour, and:
HL = seasonal heat load ≈ 80 GJ/yr for a modern detached house in the northern US
FI = emissions intensity of fuel = 50 kg(CO2)/GJ for natural gas, 73 for heating oil, 0 for 100% renewable energy such as wind, hydro, photovoltaic or solar thermal
AFUE = furnace efficiency ≈ 95% for a modern condensing furnace
COP = heat pump coefficient of performance ≈ 3.2 seasonally adjusted for northern US heat pump
EI = emissions intensity of electricity ≈ 200–800 ton(CO2)/GWh, depending on the region's mix of electric power plants (Coal vs Natural Gas vs Nuclear, Hydro, Wind & Solar)
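A short sketch evaluating the formula above with the representative values just listed; the function name and default arguments are only for illustration, and the grid emissions intensity is a parameter because it varies widely by region:

```python
def ghg_savings_kg(hl_gj=80.0, fi_kg_per_gj=50.0, afue=0.95,
                   cop=3.2, ei_ton_per_gwh=500.0):
    """Annual CO2 savings (kg) of a heat pump versus a fuel-burning furnace.

    hl_gj          -- seasonal heat load, GJ/yr
    fi_kg_per_gj   -- fuel emissions intensity, kg CO2/GJ
    afue           -- furnace efficiency, as a fraction
    cop            -- heat pump seasonal coefficient of performance
    ei_ton_per_gwh -- grid emissions intensity, t CO2/GWh (equals kg CO2/MWh)
    """
    furnace = hl_gj * fi_kg_per_gj / afue
    heat_pump = (hl_gj / (3.6 * cop)) * ei_ton_per_gwh  # 3.6 GJ per MWh
    return furnace - heat_pump

# Oil furnace (73 kg CO2/GJ) versus a heat pump on a low-carbon grid (200 t/GWh):
print(round(ghg_savings_kg(fi_kg_per_gj=73.0, ei_ton_per_gwh=200.0)))  # ~4800 kg/yr
```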
Ground-source heat pumps always produce fewer greenhouse gases than air conditioners, oil furnaces, and electric heating, but natural gas furnaces may be competitive depending on the greenhouse gas intensity of the local electricity supply. In countries like Canada and Russia with low emitting electricity infrastructure, a residential heat pump may save 5 tons of carbon dioxide per year relative to an oil furnace, or about as much as taking an average passenger car off the road. But in cities like Beijing or Pittsburgh that are highly reliant on coal for electricity production, a heat pump may result in 1 or 2 tons more carbon dioxide emissions than a natural gas furnace. For areas not served by utility natural gas infrastructure, however, no better alternative exists.
The fluids used in closed loops may be designed to be biodegradable and non-toxic, but the refrigerant used in the heat pump cabinet and in direct exchange loops was, until recently, chlorodifluoromethane, which is an ozone-depleting substance. Although harmless while contained, leaks and improper end-of-life disposal contribute to enlarging the ozone hole. For new construction, this refrigerant is being phased out in favor of the ozone-friendly but potent greenhouse gas R410A. Open-loop systems (i.e. those that draw ground water as opposed to closed-loop systems using a borehole heat exchanger) need to be balanced by reinjecting the spent water. This prevents aquifer depletion and the contamination of soil or surface water with brine or other compounds from underground.
Before drilling, the underground geology needs to be understood, and drillers need to be prepared to seal the borehole, including preventing penetration of water between strata. An unfortunate example is a geothermal heating project in Staufen im Breisgau, Germany, which appears to have caused considerable damage to historic buildings there. In 2008, the city centre was reported to have risen 12 cm, after initially sinking a few millimeters. The boring tapped a naturally pressurized aquifer, and via the borehole this water entered a layer of anhydrite, which expands when wet as it forms gypsum. The swelling will stop when the anhydrite is fully reacted, and reconstruction of the city centre "is not expedient until the uplift ceases". By 2010 sealing of the borehole had not been accomplished. By 2010, some sections of town had risen by 30 cm.
Economics
Ground source heat pumps are characterized by high capital costs and low operational costs compared to other HVAC systems. Their overall economic benefit depends primarily on the relative costs of electricity and fuels, which are highly variable over time and across the world. Based on recent prices, ground-source heat pumps currently have lower operational costs than any other conventional heating source almost everywhere in the world. Natural gas is the only fuel with competitive operational costs, and only in a handful of countries where it is exceptionally cheap, or where electricity is exceptionally expensive. In general, a homeowner may save anywhere from 20% to 60% annually on utilities by switching from an ordinary system to a ground-source system.
Capital costs and system lifespan have received much less study until recently, and the return on investment is highly variable. The rapid escalation in system price has been accompanied by rapid improvements in efficiency and reliability. Capital costs are known to benefit from economies of scale, particularly for open-loop systems, so they are more cost-effective for larger commercial buildings and harsher climates. The initial cost can be two to five times that of a conventional heating system in most residential applications, new construction or existing. In retrofits, the cost of installation is affected by the size of the living area, the home's age, insulation characteristics, the geology of the area, and the location of the property. Proper duct system design and mechanical air exchange should be considered in the initial system cost.
Capital costs may be offset by government subsidies; for example, Ontario offered $7000 for residential systems installed in the 2009 fiscal year. Some electric companies offer special rates to customers who install a ground-source heat pump for heating or cooling their building. Where electrical plants have larger loads during summer months and idle capacity in the winter, this increases electrical sales during the winter months. Heat pumps also lower the load peak during the summer due to the increased efficiency of heat pumps, thereby avoiding the costly construction of new power plants. For the same reasons, other utility companies have started to pay for the installation of ground-source heat pumps at customer residences. They lease the systems to their customers for a monthly fee, at a net overall saving to the customer.
The lifespan of the system is longer than that of conventional heating and cooling systems. Good data on system lifespan is not yet available because the technology is too recent, but many early systems are still operational today after 25–30 years with routine maintenance. Most loop fields have warranties for 25 to 50 years and are expected to last 50 to 200 years. Ground-source heat pumps use electricity for heating the house. The higher investment above conventional oil, propane or electric systems may be returned in energy savings in 2–10 years for residential systems in the US. The payback period for larger commercial systems in the US is 1–5 years, even when compared to natural gas. Additionally, because geothermal heat pumps usually have no outdoor compressors or cooling towers, the risk of vandalism is reduced or eliminated, potentially extending a system's lifespan.
Ground source heat pumps are recognized as one of the most efficient heating and cooling systems on the market. They are often the second-most cost-effective solution in extreme climates (after co-generation), despite reductions in thermal efficiency due to ground temperature. (The ground source is warmer in climates that need strong air conditioning, and cooler in climates that need strong heating.) The financial viability of these systems depends on the adequate sizing of ground heat exchangers (GHEs), which generally contribute the most to the overall capital costs of GSHP systems.
Commercial systems maintenance costs in the US have historically been between $0.11 to $0.22 per m2 per year in 1996 dollars, much less than the average $0.54 per m2 per year for conventional HVAC systems.
Governments that promote renewable energy will likely offer incentives for the consumer (residential), or industrial markets. For example, in the United States, incentives are offered both on the state and federal levels of government.
See also
Ground-coupled heat exchanger
Deep water source cooling
Solar thermal cooling
Renewable heat
International Ground Source Heat Pump Association
Glossary of geothermal heating and cooling
Uniform Mechanical Code
References
External links
Geothermal Heat Pumps. (EERE/USDOE)
Cost calculation
Geothermal Heat Pump Consortium
International Ground Source Heat Pump Association
Ground Source Heat Pump Association (GSHPA)
Energy conversion
Building engineering
Heat pumps
Sustainable technologies | Ground source heat pump | [
"Engineering"
] | 5,277 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
7,097,405 | https://en.wikipedia.org/wiki/Nina%20Bari | Nina Karlovna Bari (19 November 1901 – 15 July 1961) was a Soviet mathematician known for her work on trigonometric series. She is also well-known for two textbooks, Higher Algebra and The Theory of Series.
Early life and education
Nina Bari was born in Russia on 19 November 1901, the daughter of Olga and Karl Adolfovich Bari, a physician. In 1918, she became one of the first women to be accepted to the Department of Physics and Mathematics at the prestigious Moscow State University. She graduated in 1921—just three years after entering the university. After graduation, Bari began her teaching career. She lectured at the Moscow Forestry Institute, the Moscow Polytechnic Institute, and the Sverdlov Communist Institute. Bari applied for and received the only paid research fellowship awarded by the newly created Research Institute of Mathematics and Mechanics. As a student, Bari was drawn to an elite group nicknamed the Luzitania—an informal academic and social organization. She studied trigonometric series and functions under the tutelage of Nikolai Luzin, becoming one of his star students. She presented the main result of her research to the Moscow Mathematical Society in 1922—the first woman to address the society.
In 1926, Bari completed her doctoral work on the topic of trigonometric expansions, winning the Glavnauk Prize for her thesis work. In 1927, Bari took advantage of an opportunity to study in Paris at the Sorbonne and the Collège de France. She then attended the Polish Mathematical Congress in Lwów, Poland; a Rockefeller grant enabled her to return to Paris to continue her studies. Bari's decision to travel may have been influenced by the disintegration of the Luzitanians. Luzin's irascible, demanding personality had alienated many of the mathematicians who had gathered around him. By 1930, all traces of the Luzitania movement had vanished, and Luzin left Moscow State for the Academy of Science's Steklov Institute of Mathematics. In 1932, she became a professor at Moscow State University and in 1935 was awarded the title of Doctor of Physical and Mathematical Sciences, a more prestigious research degree than a traditional Ph.D. By this time, she had completed foundational work on trigonometric series.
Career and later life
She was a close collaborator with Dmitrii Menshov on a number of research projects. She and Menshov took charge of function theory work at Moscow State during the 1940s. In 1952, she published an important piece on primitive functions and on trigonometric series and their almost everywhere convergence. Bari also presented work at the 1956 Third All-Union Congress in Moscow and the 1958 International Congress of Mathematicians in Edinburgh.
Mathematics was the center of Bari's intellectual life, but she enjoyed literature and the arts. She was also a mountain hiking enthusiast and tackled the Caucasus, Altai, Pamir and Tian Shan mountain ranges in Russia. Bari's interest in mountain hiking was inspired by her husband, Viktor Vladimirovich Nemytskii, a Soviet mathematician, Moscow State professor and an avid mountain explorer. There is no documentation of their marriage available, but contemporaries believe the two married later in life. Bari's last work—her 55th publication—was a 900-page monograph on the state of the art of trigonometric series theory, which is recognized as a standard reference work for those specializing in function and trigonometric series theory.
Death
On 15 July 1961, Bari died after being hit by a train. It was possibly a suicide due to depression caused by Luzin's death eleven years earlier.
References
1901 births
1961 deaths
Soviet mathematicians
Soviet women mathematicians
20th-century Russian mathematicians
Mathematical analysts
Moscow State University alumni
Academic staff of Moscow State University
20th-century women mathematicians
Railway accident deaths in Russia | Nina Bari | [
"Mathematics"
] | 793 | [
"Mathematical analysis",
"Mathematical analysts"
] |
1,496,348 | https://en.wikipedia.org/wiki/Samphire | Samphire is a name given to a number of succulent salt-tolerant plants (halophytes) that tend to be associated with water bodies.
Rock samphire (Crithmum maritimum) is a coastal species with white flowers that grows in Ireland, the United Kingdom and the Isle of Man. This is probably the species mentioned by Shakespeare in King Lear.
Golden samphire (Limbarda crithmoides) is a coastal species with yellow flowers that grows across Eurasia.
Several species in the genus Salicornia, known as "marsh samphire" in Britain.
Blutaparon vermiculare, Central America, southeastern North America
Tecticornia, Australia
Sarcocornia, cosmopolitan
Following the construction of the Channel Tunnel, the nature reserve created on new land near Folkestone made from excavated rock was named "Samphire Hoe", a name coined by Mrs Gillian Janaway.
Etymology
Originally "sampiere", a corruption of the French "Saint Pierre" (Saint Peter), samphire was named after the patron saint of fishermen because all of the original plants with its name grow in rocky salt-sprayed regions along the sea coast of northern Europe or in its coastal marsh areas. It is sometimes called rock samphire or seafennel. In North Wales, especially along the River Dee's marshes, it has long been known as sampkin.
Uses
Marsh samphire ashes were used to make soap and glass (hence its other old name in English, "glasswort") as it is a source of sodium carbonate, also known as soda ash. In the 14th century glassmakers located their workshops near regions where this plant grew, since it was so closely linked to their trade.
Many samphires are edible. In England the leaves were gathered early in the year and pickled or eaten in salads with oil and vinegar.
Marsh samphire (Salicornia bigelovii) was investigated as a potential biodiesel source that can be grown in coastal areas where conventional crops cannot be grown.
Rock samphire is another kind of samphire, also called sea fennel. It is mentioned by Shakespeare in King Lear, in a passage referring to the dangers involved in collecting rock samphire on sea cliffs.
Aboriginal Australians have long used samphire as bush tucker, due to its abundance, flavour, and nutritional value. It is high in Vitamin A and a good source of calcium and iron. Other Australians have recently discovered the potential of the species as a food plant and it has begun to appear on restaurant menus across the country.
A variety of rock samphire known as Paccasasso del Conero, or sea fennel, is well known in Italy along the Adriatic coast. This variety is typically used in local recipes such as a mortadella and paccasasso sandwich, pasta with mussels and paccasassi, or in fresh salad.
References
External links
How to cook samphire
Halophytes
Vegetables
Plant common names
Rock Samphire in Italy: history and recipes | Samphire | [
"Chemistry",
"Biology"
] | 634 | [
"Common names of organisms",
"Plants",
"Plant common names",
"Halophytes",
"Salts"
] |
1,496,757 | https://en.wikipedia.org/wiki/Quest%20Diagnostics | Quest Diagnostics Incorporated is an American clinical laboratory. A Fortune 500 company, Quest operates in the United States, Puerto Rico, Mexico, and Brazil. Quest also maintains collaborative agreements with various hospitals and clinics across the globe.
As of 2020, the company had approximately 48,000 employees, and it generated more than $7.7 billion in revenue in 2019. The company offers access to diagnostic testing services for cancer, cardiovascular disease, infectious disease, neurological disorders, COVID-19, and employment and court-ordered drug testing.
History
1960–1995
Originally founded as Metropolitan Pathology Laboratory, Inc. in 1967 by Paul A. Brown, MD, the clinical laboratory underwent a variety of name changes. In 1969, the company's name changed to MetPath, Inc. with headquarters in Teaneck, New Jersey. By 1982, MetPath was acquired by what was then known as Corning Glass Works and was subsequently renamed Corning Clinical Laboratories.
1996–2000
On December 31, 1996, Quest Diagnostics became an independent company as a spin-off from Corning. Kenneth W. Freeman was appointed as CEO during this transition. Over the next year, Quest acquired a clinical laboratory division of Branford, Connecticut–based Diagnostic Medical Laboratory, Inc. (DML). Two years later in 1999, Quest added SmithKline Beecham Clinical Laboratories to their subsidiaries; which includes a joint venture ownership with CompuNet Clinical Laboratory. The purchase of SmithKline Beecham also included the lab's medical sample transport airline originally founded in 1988.
In 1997, Quest and Banner Health formed a joint venture creating the Arizona based Sonora Quest laboratory, a business unit of Laboratory Sciences of Arizona. This entity represents the operations of Quest Diagnostics in the Arizona regional market.
2001–2015
From May 2004 to April 2012, Surya Mohapatra served as the company's President and CEO. In 2007 Quest acquired diagnostic testing equipment company AmeriPath. In response to Mohapatra's resignation after eight years with Quest, former Philips Healthcare CEO Stephen Rusckowski was appointed. Under Rusckowski, Quest Diagnostics teamed up with central New England's largest health care system, UMass Memorial Health Care, to purchase its clinical outreach laboratory.
2016–present
In 2016, Quest collaborated with Safeway to bring testing services to twelve of its stores in California, Maryland, Virginia, Texas and Colorado.
By the end of 2017, Quest, in partnership with Walmart, incorporated laboratory testing in about 15 of their locations in Texas and Florida.
In May 2018, the company announced it will become an in-network laboratory provider to UnitedHealthcare starting in 2019, providing access to 48 million plan members.
In September 2018, Quest moved its headquarters from Madison, where it was located since 2007, to Secaucus, New Jersey.
In November 2018, Quest launched QuestDirect, a consumer-initiated testing service that allows patients to order health and wellness lab testing from home.
In March 2020, the company launched a COVID-19 testing service. As of July 2020, Quest had performed more than 9.2 million COVID-19 molecular tests and 2.8 million serology tests.
In April 2024, Quest added a new blood screening test to its AD-Detect product line. The test analyzes blood for a specific Alzheimer's protein, pTau-217.
Acquisitions
Partnerships
2005: Forms a strategic alliance with Ciphergen Biosystems to commercialize novel proteomic tests.
Controversies
Quest Diagnostics set a record in April 2009 when it paid $302 million to the government to settle a Medicare fraud case alleging the company sold faulty medical testing kits. It was the largest qui tam (whistleblower) settlement paid by a medical lab for manufacturing and distributing a faulty product. In May 2011, Quest paid $241 million to the state of California to settle a False Claims Act case that alleged the company had overcharged Medi-Cal, the state's Medicaid program, and provided illegal kickbacks as incentives for healthcare providers to use Quest labs.
In 2018, Quest Diagnostics was among a number of US based labs linked to inaccuracies of over 200 women's cervical smear tests for CervicalCheck, Ireland's national screening program. Audits of the testing performed by Quest (and another subcontractor Clinical Pathology Laboratories, Inc. of Austin Texas) showed a high rate of errors in analysis of samples which led to lawsuits and a government inquiry. Quest and the Irish government continue to settle the resulting lawsuits.
On June 3, 2019, Quest announced that American Medical Collection Agency (AMCA), a billing collections service provider, had informed Quest Diagnostics that an unauthorized user had access to AMCA’s system containing personal information AMCA received from various entities, including from Quest. AMCA provides billing collections services to Optum360, which in turn is a Quest contractor. AMCA later went bankrupt after the breach.
References
External links
Medical technology companies of the United States
Companies listed on the New York Stock Exchange
American companies established in 1967
Companies based in Hudson County, New Jersey
Secaucus, New Jersey
1967 establishments in New York City
Corning Inc.
Life sciences industry
Health care companies based in New Jersey
1982 mergers and acquisitions
Corporate spin-offs
Alzheimer's disease research | Quest Diagnostics | [
"Biology"
] | 1,075 | [
"Life sciences industry"
] |
1,496,984 | https://en.wikipedia.org/wiki/Flue-gas%20desulfurization | Flue-gas desulfurization (FGD) is a set of technologies used to remove sulfur dioxide () from exhaust flue gases of fossil-fuel power plants, and from the emissions of other sulfur oxide emitting processes such as waste incineration, petroleum refineries, cement and lime kilns.
Methods
Since stringent environmental regulations limiting SO2 emissions have been enacted in many countries, SO2 is being removed from flue gases by a variety of methods. Common methods used:
Wet scrubbing using a slurry of alkaline sorbent, usually limestone or lime, or seawater to scrub gases;
Spray-dry scrubbing using similar sorbent slurries;
Wet sulfuric acid process recovering sulfur in the form of commercial quality sulfuric acid;
SNOX Flue gas desulfurization removes sulfur dioxide, nitrogen oxides and particulates from flue gases;
Dry sorbent injection systems that introduce powdered hydrated lime (or other sorbent material) into exhaust ducts to eliminate SO2 and SO3 from process emissions.
For a typical coal-fired power station, flue-gas desulfurization (FGD) may remove 90 per cent or more of the SO2 in the flue gases.
History
Methods of removing sulfur dioxide from boiler and furnace exhaust gases have been studied for over 150 years. Early ideas for flue gas desulfurization were established in England around 1850.
With the construction of large-scale power plants in England in the 1920s, the problems associated with large volumes of SO2 from a single site began to concern the public. The emissions problem did not receive much attention until 1929, when the House of Lords upheld the claim of a landowner against the Barton Electricity Works of the Manchester Corporation for damages to his land resulting from emissions. Shortly thereafter, a press campaign was launched against the erection of power plants within the confines of London. This outcry led to the imposition of controls on all such power plants.
The first major FGD unit at a utility was installed in 1931 at Battersea Power Station, owned by London Power Company. In 1935, an FGD system similar to that installed at Battersea went into service at Swansea Power Station. The third major FGD system was installed in 1938 at Fulham Power Station. These three early large-scale FGD installations were suspended during World War II, because the characteristic white vapour plumes would have aided location finding by enemy aircraft. The FGD plant at Battersea was recommissioned after the war and, together with FGD plant at the new Bankside B power station opposite the City of London, operated until the stations closed in 1983 and 1981 respectively. Large-scale FGD units did not reappear at utilities until the 1970s, where most of the installations occurred in the United States and Japan.
The Clean Air Act of 1970 (CAA) and its amendments have influenced the implementation of FGD. In 2017, the revised PTC 40 Standard was published. This revised standard (PTC 40-2017) covers Dry and Regenerable FGD systems and provides a more detailed Uncertainty Analysis section. The standard is in use today by companies around the world.
As of June 1973, there were 42 FGD units in operation, 36 in Japan and 6 in the United States, ranging in capacity from 5 MW to 250 MW. As of around 1999 and 2000, FGD units were being used in 27 countries, and there were 678 FGD units operating at a total power plant capacity of about 229 gigawatts. About 45% of the FGD capacity was in the U.S., 24% in Germany, 11% in Japan, and 20% in various other countries. Approximately 79% of the units, representing about 199 gigawatts of capacity, were using lime or limestone wet scrubbing. About 18% (or 25 gigawatts) utilized spray-dry scrubbers or sorbent injection systems.
FGD on ships
The International Maritime Organization (IMO) has adopted guidelines on the approval, installation and use of exhaust gas scrubbers (exhaust gas cleaning systems) on board ships to ensure compliance with the sulphur regulation of MARPOL Annex VI. Flag States must approve such systems and port States can (as part of their port state control) ensure that such systems are functioning correctly. If a scrubber system is not functioning properly (and the IMO procedures for such malfunctions are not adhered to), port States can sanction the ship. The United Nations Convention on the Law of the Sea also gives port States the right to regulate (and even ban) the use of open loop scrubber systems within ports and internal waters.
Sulfuric acid mist formation
Fossil fuels such as coal and oil can contain a significant amount of sulfur. When fossil fuels are burned, about 95 percent or more of the sulfur is generally converted to sulfur dioxide (SO2). Such conversion happens under normal conditions of temperature and of oxygen present in the flue gas. However, there are circumstances under which such reaction may not occur.
SO2 can further oxidize into sulfur trioxide (SO3) when excess oxygen is present and gas temperatures are sufficiently high. At about 800 °C, formation of SO3 is favored. Another way that SO3 can be formed is through catalysis by metals in the fuel. Such reaction is particularly true for heavy fuel oil, where a significant amount of vanadium is present. In whatever way SO3 is formed, it does not behave like SO2 in that it forms a liquid aerosol known as sulfuric acid (H2SO4) mist that is very difficult to remove. Generally, about 1% of the sulfur dioxide will be converted to SO3. Sulfuric acid mist is often the cause of the blue haze that often appears as the flue gas plume dissipates. Increasingly, this problem is being addressed by the use of wet electrostatic precipitators.
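A back-of-the-envelope sketch of that 95% conversion figure; the coal tonnage and sulfur content below are illustrative assumptions:

```python
# Estimate SO2 produced from the sulfur in a fuel, assuming ~95% of the sulfur
# is converted (S + O2 -> SO2; molar masses ~32 g/mol for S, ~64 g/mol for SO2).

def so2_tonnes(fuel_tonnes: float, sulfur_fraction: float,
               conversion: float = 0.95) -> float:
    sulfur = fuel_tonnes * sulfur_fraction
    return sulfur * conversion * (64.06 / 32.06)

# Illustrative: one million tonnes of 2%-sulfur coal burned per year.
print(round(so2_tonnes(1_000_000, 0.02)))  # roughly 38,000 tonnes of SO2
```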
FGD chemistry
Principles
Most FGD systems employ two stages: one for fly ash removal and the other for SO2 removal. Attempts have been made to remove both the fly ash and SO2 in one scrubbing vessel. However, these systems experienced severe maintenance problems and low removal efficiency. In wet scrubbing systems, the flue gas normally passes first through a fly ash removal device, either an electrostatic precipitator or a baghouse, and then into the SO2 absorber. However, in dry injection or spray drying operations, the SO2 is first reacted with the lime, and then the flue gas passes through a particulate control device.
Another important design consideration associated with wet FGD systems is that the flue gas exiting the absorber is saturated with water and still contains some SO2. These gases are highly corrosive to any downstream equipment such as fans, ducts, and stacks. Two methods that may minimize corrosion are: (1) reheating the gases to above their dew point, or (2) using materials of construction and designs that allow equipment to withstand the corrosive conditions. Both alternatives are expensive. Engineers determine which method to use on a site-by-site basis.
Scrubbing with an alkali solid or solution
SO2 is an acid gas, and, therefore, the typical sorbent slurries or other materials used to remove the SO2 from the flue gases are alkaline. The reaction taking place in wet scrubbing using a CaCO3 (limestone) slurry produces calcium sulfite (CaSO3) and may be expressed in the simplified dry form as:
CaCO3 (s) + SO2 (g) → CaSO3 (s) + CO2 (g)
Wet scrubbing can also be conducted with Ca(OH)2 (hydrated lime) or Mg(OH)2 (magnesium hydroxide) slurries:
SO2 (g) + M(OH)2 (s) → MSO3 (s) + H2O (l)   (M = Ca, Mg)
To partially offset the cost of the FGD installation, some designs, particularly dry sorbent injection systems, further oxidize the CaSO3 (calcium sulfite) to produce marketable CaSO4·2H2O (gypsum) that can be of high enough quality to use in wallboard and other products. The process by which this synthetic gypsum is created is also known as forced oxidation:
CaSO3 (aq) + ½O2 (g) + 2H2O (l) → CaSO4·2H2O (s)
A natural alkaline sorbent usable to absorb SO2 is seawater. The SO2 is absorbed in the water, and when oxygen is added it reacts to form sulfate ions (SO4^2−) and free H+. The surplus of H+ is offset by the carbonates in seawater pushing the carbonate equilibrium to release CO2 gas:
SO2 (g) + H2O (l) + ½O2 (g) → SO4^2− (aq) + 2H+
HCO3− + H+ → H2O + CO2 (g)
In industry, caustic soda (NaOH) is often used to scrub SO2, producing sodium sulfite:
2 NaOH (aq) + SO2 (g) → Na2SO3 (aq) + H2O (l)
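As a rough illustration of the limestone demand implied by the first reaction above (one mole of CaCO3 per mole of SO2 captured), a small sketch follows; the sorbent utilization factor is an assumption, since real scrubbers feed excess sorbent:

```python
# Limestone needed to capture a given mass of SO2, from CaCO3 + SO2 -> CaSO3 + CO2.
M_CACO3 = 100.09  # g/mol
M_SO2 = 64.06     # g/mol

def limestone_tonnes(so2_tonnes: float, utilization: float = 0.9) -> float:
    """Tonnes of CaCO3 required to remove the given tonnage of SO2."""
    return so2_tonnes * (M_CACO3 / M_SO2) / utilization

print(round(limestone_tonnes(38_000)))  # ~66,000 t of limestone for 38,000 t of SO2
```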
Types of wet scrubbers used in FGD
To promote maximum gas–liquid surface area and residence time, a number of wet scrubber designs have been used, including spray towers, venturis, plate towers, and mobile packed beds. Because of scale buildup, plugging, or erosion, which affect FGD dependability and absorber efficiency, the trend is to use simple scrubbers such as spray towers instead of more complicated ones. The configuration of the tower may be vertical or horizontal, and flue gas can flow concurrently, countercurrently, or crosscurrently with respect to the liquid. The chief drawback of spray towers is that they require a higher liquid-to-gas ratio for equivalent SO2 removal than other absorber designs.
FGD scrubbers produce a scaling wastewater that requires treatment to meet U.S. federal discharge regulations. However, technological advancements in ion-exchange membranes and electrodialysis systems has enabled high-efficiency treatment of FGD wastewater to meet EPA discharge limits. The treatment approach is similar for other highly scaling industrial wastewaters.
Venturi-rod scrubbers
A venturi scrubber is a converging/diverging section of duct. The converging section accelerates the gas stream to high velocity. When the liquid stream is injected at the throat, which is the point of maximum velocity, the turbulence caused by the high gas velocity atomizes the liquid into small droplets, which creates the surface area necessary for mass transfer to take place. The higher the pressure drop in the venturi, the smaller the droplets and the higher the surface area. The penalty is in power consumption.
For simultaneous removal of SO2 and fly ash, venturi scrubbers can be used. In fact, many of the industrial sodium-based throwaway systems are venturi scrubbers originally designed to remove particulate matter. These units were slightly modified to inject a sodium-based scrubbing liquor. Although removal of both particles and SO2 in one vessel can be economic, the problems of high pressure drops and finding a scrubbing medium to remove heavy loadings of fly ash must be considered. However, in cases where the particle concentration is low, such as from oil-fired units, it can be more effective to remove particulates and SO2 simultaneously.
Packed bed scrubbers
A packed scrubber consists of a tower with packing material inside. This packing material can be in the shape of saddles, rings, or some highly specialized shapes designed to maximize the contact area between the dirty gas and liquid. Packed towers typically operate at much lower pressure drops than venturi scrubbers and are therefore cheaper to operate. They also typically offer higher removal efficiency. The drawback is that they have a greater tendency to plug up if particles are present in excess in the exhaust air stream.
Spray towers
A spray tower is the simplest type of scrubber. It consists of a tower with spray nozzles, which generate the droplets for surface contact. Spray towers are typically used when circulating a slurry (see below). The high speed of a venturi would cause erosion problems, while a packed tower would plug up if it tried to circulate a slurry.
Counter-current packed towers are infrequently used because they have a tendency to become plugged by collected particles or to scale when lime or limestone scrubbing slurries are used.
Scrubbing reagent
As explained above, alkaline sorbents are used for scrubbing flue gases to remove SO2. Depending on the application, the two most important are lime and sodium hydroxide (also known as caustic soda). Lime is typically used on large coal- or oil-fired boilers as found in power plants, as it is very much less expensive than caustic soda. The problem is that it results in a slurry being circulated through the scrubber instead of a solution. This makes it harder on the equipment. A spray tower is typically used for this application. The use of lime results in a slurry of calcium sulfite (CaSO3) that must be disposed of. Fortunately, calcium sulfite can be oxidized to produce by-product gypsum (CaSO4·2H2O), which is marketable for use in the building products industry.
Caustic soda is limited to smaller combustion units because it is more expensive than lime, but it has the advantage that it forms a solution rather than a slurry. This makes it easier to operate. It produces a "spent caustic" solution of sodium sulfite/bisulfite (depending on the pH), or sodium sulfate that must be disposed of. This is not a problem in a kraft pulp mill for example, where this can be a source of makeup chemicals to the recovery cycle.
Scrubbing with sodium sulfite solution
It is possible to scrub sulfur dioxide by using a cold solution of sodium sulfite; this forms a sodium hydrogen sulfite solution. By heating this solution it is possible to reverse the reaction to form sulfur dioxide and the sodium sulfite solution. Since the sodium sulfite solution is not consumed, it is called a regenerative treatment. The application of this reaction is also known as the Wellman–Lord process.
In some ways this can be thought of as being similar to the reversible liquid–liquid extraction of an inert gas such as xenon or radon (or some other solute which does not undergo a chemical change during the extraction) from water to another phase. While a chemical change does occur during the extraction of the sulfur dioxide from the gas mixture, it is the case that the extraction equilibrium is shifted by changing the temperature rather than by the use of a chemical reagent.
Gas-phase oxidation followed by reaction with ammonia
A new, emerging flue gas desulfurization technology has been described by the IAEA. It is a radiation technology where an intense beam of electrons is fired into the flue gas at the same time as ammonia is added to the gas. The Chengdu power plant in China started up such a flue gas desulfurization unit on a 100 MW scale in 1998. The Pomorzany power plant in Poland also started up a similarly sized unit in 2003, and that plant removes both sulfur and nitrogen oxides. Both plants are reported to be operating successfully. However, the accelerator design principles and manufacturing quality need further improvement for continuous operation in industrial conditions.
No radioactivity is required or created in the process. The electron beam is generated by a device similar to the electron gun in a TV set. This device is called an accelerator. This is an example of a radiation chemistry process where the physical effects of radiation are used to process a substance.
The action of the electron beam is to promote the oxidation of sulfur dioxide to sulfur(VI) compounds. The ammonia reacts with the sulfur compounds thus formed to produce ammonium sulfate, which can be used as a nitrogenous fertilizer. In addition, it can be used to lower the nitrogen oxide content of the flue gas. This method has attained industrial plant scale.
Facts and statistics
The information in this section was obtained from a US EPA published fact sheet.
Flue gas desulfurization scrubbers have been applied to combustion units firing coal and oil that range in size from 5 MW to 1,500 MW. Scottish Power are spending £400 million installing FGD at Longannet power station, which has a capacity of over 2,000 MW. Dry scrubbers and spray scrubbers have generally been applied to units smaller than 300 MW.
FGD has been fitted by RWE npower at Aberthaw Power Station in south Wales using the seawater process and works successfully on the 1,580 MW plant.
Approximately 85% of the flue gas desulfurization units installed in the US are wet scrubbers, 12% are spray dry systems, and 3% are dry injection systems.
The highest removal efficiencies (greater than 90%) are achieved by wet scrubbers and the lowest (less than 80%) by dry scrubbers. However, the newer designs for dry scrubbers are capable of achieving efficiencies in the order of 90%.
In spray drying and dry injection systems, the flue gas must first be cooled to about 10–20 °C above adiabatic saturation to avoid wet solids deposition on downstream equipment and plugging of baghouses.
The capital, operating and maintenance costs per short ton of SO2 removed (in 2001 US dollars) are listed below; a rough annualized-cost illustration based on these figures is sketched after the list:
For wet scrubbers larger than 400 MW, the cost is $200 to $500 per ton
For wet scrubbers smaller than 400 MW, the cost is $500 to $5,000 per ton
For spray dry scrubbers larger than 200 MW, the cost is $150 to $300 per ton
For spray dry scrubbers smaller than 200 MW, the cost is $500 to $4,000 per ton
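As a rough illustration of what these per-ton figures imply for a single plant, the sketch below converts an assumed removal rate into an annualized cost. Every plant parameter (coal burn, sulfur content, removal efficiency, chosen cost per ton) is an illustrative assumption.

```python
# Rough annualized scrubbing cost implied by the per-ton figures above.
# Every input below is an illustrative assumption for a single large unit.

coal_burned_short_tons = 2_000_000   # short tons of coal per year (assumed)
sulfur_fraction = 0.02               # 2% sulfur coal (assumed)
removal_efficiency = 0.95            # wet scrubber (assumed)
cost_per_ton_so2 = 350               # $/short ton SO2 removed; mid-range value
                                     # for a large wet scrubber from the list above

# Each ton of sulfur burned yields roughly 2 tons of SO2 (molar masses 32 vs 64).
so2_generated = coal_burned_short_tons * sulfur_fraction * 2.0
so2_removed = so2_generated * removal_efficiency

annual_cost = so2_removed * cost_per_ton_so2
print(f"SO2 removed:  {so2_removed:,.0f} short tons/yr")
print(f"Annual cost: ${annual_cost / 1e6:,.1f} million/yr")
```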
Alternative methods of reducing sulfur dioxide emissions
An alternative to removing sulfur from the flue gases after burning is to remove the sulfur from the fuel before or during combustion. Hydrodesulfurization of fuel has been used for treating fuel oils before use. Fluidized bed combustion adds lime to the fuel during combustion. The lime reacts with the SO2 to form sulfates which become part of the ash.
This elemental sulfur is then separated and finally recovered at the end of the process for further usage in, for example, agricultural products. Safety is one of the greatest benefits of this method, as the whole process takes place at atmospheric pressure and ambient temperature. This method has been developed by Paqell, a joint venture between Shell Global Solutions and Paques.
See also
Incineration
Scrubber
Flue-gas emissions from fossil-fuel combustion
Flue-gas stacks
Wellman–Lord process
References
External links
Schematic process flow of FGD plant
5000 MW FGD Plant (includes a detailed process flow diagram)
Alstom presentation to UN-ECE on air pollution control (includes process flow diagram for dry, wet and seawater FGD)
Flue Gas Treatment article covering the removal of hydrogen chloride, sulfur trioxide, and heavy metals such as mercury.
Institute of Clean Air Companies – national trade association representing emissions control manufacturers
Pollution control technologies
Air pollution control systems
Acid gas control
Incineration
Environmental engineering
Environmental impact of the energy industry
Gas technologies
Desulfurization | Flue-gas desulfurization | [
"Chemistry",
"Engineering"
] | 3,804 | [
"Desulfurization",
"Separation processes",
"Chemical engineering",
"Combustion engineering",
"Pollution control technologies",
"Incineration",
"Civil engineering",
"Environmental engineering"
] |
1,498,040 | https://en.wikipedia.org/wiki/Rydberg%20atom | A Rydberg atom is an excited atom with one or more electrons that have a very high principal quantum number, n. The higher the value of n, the farther the electron is from the nucleus, on average. Rydberg atoms have a number of peculiar properties including an exaggerated response to electric and magnetic fields, long decay periods and electron wavefunctions that approximate, under some conditions, classical orbits of electrons about the nuclei. The core electrons shield the outer electron from the electric field of the nucleus such that, from a distance, the electric potential looks identical to that experienced by the electron in a hydrogen atom.
In spite of its shortcomings, the Bohr model of the atom is useful in explaining these properties. Classically, an electron in a circular orbit of radius r, about a hydrogen nucleus of charge +e, obeys Newton's second law:
ke²/r² = mv²/r,
where k = 1/(4πε0).
Orbital momentum is quantized in units of ħ:
mvr = nħ.
Combining these two equations leads to Bohr's expression for the orbital radius in terms of the principal quantum number, n:
r = n²ħ²/(ke²m).
It is now apparent why Rydberg atoms have such peculiar properties: the radius of the orbit scales as n2 (the n = 137 state of hydrogen has an atomic radius ~1 μm) and the geometric cross-section as n4. Thus, Rydberg atoms are extremely large, with loosely bound valence electrons, easily perturbed or ionized by collisions or external fields.
Because the binding energy of a Rydberg electron is proportional to 1/r and hence falls off like 1/n2, the energy level spacing falls off like 1/n3 leading to ever more closely spaced levels converging on the first ionization energy. These closely spaced Rydberg states form what is commonly referred to as the Rydberg series. Figure 2 shows some of the energy levels of the lowest three values of orbital angular momentum in lithium.
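These scalings are easy to check numerically from the Bohr-model expressions above. The short sketch below uses standard constant values and reproduces, for example, the ~1 μm radius quoted for the n = 137 state, along with the 1/n2 binding energy and roughly 1/n3 level spacing.

```python
# Bohr-model scaling of Rydberg-state properties with principal quantum number n.
a0 = 5.29177e-11   # Bohr radius, m
Ry = 13.605693     # Rydberg energy, eV

def radius(n):           # orbital radius, m            (r = n^2 * a0)
    return n ** 2 * a0

def binding_energy(n):   # binding energy magnitude, eV (Ry / n^2)
    return Ry / n ** 2

def level_spacing(n):    # E(n) - E(n+1), eV            (~ 2 Ry / n^3 for large n)
    return binding_energy(n) - binding_energy(n + 1)

for n in (10, 50, 137):
    print(f"n = {n:3d}:  r = {radius(n) * 1e9:8.1f} nm,  "
          f"E_b = {binding_energy(n) * 1e3:7.3f} meV,  "
          f"spacing = {level_spacing(n) * 1e6:8.2f} micro-eV")
```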
History
The existence of the Rydberg series was first demonstrated in 1885 when Johann Balmer discovered a simple empirical formula for the wavelengths of light associated with transitions in atomic hydrogen. Three years later, the Swedish physicist Johannes Rydberg presented a generalized and more intuitive version of Balmer's formula that came to be known as the Rydberg formula. This formula indicated the existence of an infinite series of ever more closely spaced discrete energy levels converging on a finite limit.
This series was qualitatively explained in 1913 by Niels Bohr with his semiclassical model of the hydrogen atom in which quantized values of angular momentum lead to the observed discrete energy levels. A full quantitative derivation of the observed spectrum was derived by Wolfgang Pauli in 1926 following development of quantum mechanics by Werner Heisenberg and others.
Methods of production
The only truly stable state of a hydrogen-like atom is the ground state with n = 1. The study of Rydberg states requires a reliable technique for exciting ground state atoms to states with a large value of n.
Electron impact excitation
Much early experimental work on Rydberg atoms relied on the use of collimated beams of fast electrons incident on ground-state atoms. Inelastic scattering processes can use the electron kinetic energy to increase the atoms' internal energy exciting to a broad range of different states including many high-lying Rydberg states,
e− + A → A* + e−.
Because the electron can retain any arbitrary amount of its initial kinetic energy, this process results in a population with a broad spread of different energies.
Charge exchange excitation
Another mainstay of early Rydberg atom experiments relied on charge exchange between a beam of ions and a population of neutral atoms of another species, resulting in the formation of a beam of highly excited atoms,
A+ + B → A* + B+.
Again, because the kinetic energy of the interaction can contribute to the final internal energies of the constituents, this technique populates a broad range of energy levels.
Optical excitation
The arrival of tunable dye lasers in the 1970s allowed a much greater level of control over populations of excited atoms. In optical excitation, the incident photon is absorbed by the target atom, resulting in a precise final state energy. The problem of producing single state, mono-energetic populations of Rydberg atoms thus becomes the somewhat simpler problem of precisely controlling the frequency of the laser output,
A + γ → A*.
This form of direct optical excitation is generally limited to experiments with the alkali metals, because the ground state binding energy in other species is generally too high to be accessible with most laser systems.
For atoms with a large valence electron binding energy (equivalent to a large first ionization energy), the excited states of the Rydberg series are inaccessible with conventional laser systems. Initial collisional excitation can make up the energy shortfall allowing optical excitation to be used to select the final state. Although the initial step excites to a broad range of intermediate states, the precision inherent in the optical excitation process means that the laser light only interacts with a specific subset of atoms in a particular state, exciting to the chosen final state.
Hydrogenic potential
An atom in a Rydberg state has a valence electron in a large orbit far from the ion core; in such an orbit, the outermost electron feels an almost hydrogenic Coulomb potential, UC, from a compact ion core consisting of a nucleus with Z protons and the lower electron shells filled with Z-1 electrons. An electron in the spherically symmetric Coulomb potential has potential energy:
UC = −ke²/r.
The similarity of the effective potential "seen" by the outer electron to the hydrogen potential is a defining characteristic of Rydberg states and explains why the electron wavefunctions approximate to classical orbits in the limit of the correspondence principle. In other words, the electron's orbit resembles the orbit of planets inside a solar system, similar to what was seen in the obsolete but visually useful Bohr and Rutherford models of the atom.
There are three notable exceptions that can be characterized by the additional term added to the potential energy:
An atom may have two (or more) electrons in highly excited states with comparable orbital radii. In this case, the electron-electron interaction gives rise to a significant deviation from the hydrogen potential. For an atom in a multiple Rydberg state, the additional term, Uee, includes a summation of each pair of highly excited electrons:
Uee = Σi<j ke²/|ri − rj|.
If the valence electron has very low angular momentum (interpreted classically as an extremely eccentric elliptical orbit), then it may pass close enough to polarise the ion core, giving rise to a 1/r4 core polarization term in the potential. The interaction between an induced dipole and the charge that produces it is always attractive so this contribution is always negative,
where αd is the dipole polarizability. Figure 3 shows how the polarization term modifies the potential close to the nucleus.
If the outer electron penetrates the inner electron shells, it will “see” more of the charge of the nucleus and hence experience a greater force. In general, the modification to the potential energy is not simple to calculate and must be based on knowledge of the geometry of the ion core.
Quantum-mechanical details
Quantum-mechanically, a state with abnormally high n refers to an atom in which the valence electron(s) have been excited into a formerly unpopulated electron orbital with higher energy and lower binding energy. In hydrogen the binding energy is given by:
EB = −Ry/n²,
where Ry = 13.6 eV is the Rydberg constant. The low binding energy at high values of n explains why Rydberg states are susceptible to ionization.
Additional terms in the potential energy expression for a Rydberg state, on top of the hydrogenic Coulomb potential energy, require the introduction of a quantum defect, δℓ, into the expression for the binding energy:
EB = −Ry/(n − δℓ)².
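A short numerical illustration of the quantum-defect formula, comparing the hydrogenic case (δℓ = 0) with an alkali-metal s series; the quantum defect used here (δs ≈ 1.35, roughly appropriate for sodium) should be read as an illustrative assumption.

```python
# Binding-energy magnitude with a quantum defect: |E_B| = Ry / (n - delta_l)^2.
Ry = 13.605693  # eV

def E_binding(n, delta=0.0):
    return Ry / (n - delta) ** 2

delta_s = 1.35  # approximate quantum defect of sodium ns states (illustrative)

for n in (10, 20, 40):
    print(f"n = {n:2d}:  hydrogenic {E_binding(n) * 1e3:7.2f} meV,  "
          f"alkali ns {E_binding(n, delta_s) * 1e3:7.2f} meV")
```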
Electron wavefunctions
The long lifetimes of Rydberg states with high orbital angular momentum can be explained in terms of the overlapping of wavefunctions. The wavefunction of an electron in a high ℓ state (high angular momentum, “circular orbit”) has very little overlap with the wavefunctions of the inner electrons and hence remains relatively unperturbed.
The three exceptions to the definition of a Rydberg atom as an atom with a hydrogenic potential, have an alternative, quantum mechanical description that can be characterized by the additional term(s) in the atomic Hamiltonian:
If a second electron is excited into a state ni, energetically close to the state of the outer electron no, then its wavefunction becomes almost as large as the first (a double Rydberg state). This occurs as ni approaches no and leads to a condition where the size of the two electron’s orbits are related; a condition sometimes referred to as radial correlation. An electron-electron repulsion term must be included in the atomic Hamiltonian.
Polarization of the ion core produces an anisotropic potential that causes an angular correlation between the motions of the two outermost electrons. This can be thought of as a tidal locking effect due to a non-spherically symmetric potential. A core polarization term must be included in the atomic Hamiltonian.
The wavefunction of the outer electron in states with low orbital angular momentum ℓ, is periodically localised within the shells of inner electrons and interacts with the full charge of the nucleus. Figure 4 shows a semi-classical interpretation of angular momentum states in an electron orbital, illustrating that low-ℓ states pass closer to the nucleus potentially penetrating the ion core. A core penetration term must be added to the atomic Hamiltonian.
In external fields
The large separation between the electron and ion-core in a Rydberg atom makes possible an extremely large electric dipole moment, d. There is an energy associated with the presence of an electric dipole in an electric field, F, known in atomic physics as a Stark shift,
ES = −d·F.
Depending on the sign of the projection of the dipole moment onto the local electric field vector, a state may have energy that increases or decreases with field strength (low-field and high-field seeking states respectively). The narrow spacing between adjacent n-levels in the Rydberg series means that states can approach degeneracy even for relatively modest field strengths. The theoretical field strength at which a crossing would occur assuming no coupling between the states is given by the Inglis–Teller limit,
F_IT = e/(12πε0 a0² n⁵).
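For an order-of-magnitude feel, in atomic units the Inglis–Teller field is roughly 1/(3n⁵); the sketch below converts this to V/cm for a few values of n, ignoring quantum defects and fine structure. It is a minimal estimate, not a precise calculation.

```python
# Inglis-Teller limit: field at which Stark manifolds of adjacent n overlap,
# F_IT ~ 1/(3 n^5) in atomic units of electric field.
F_ATOMIC = 5.14221e11   # atomic unit of electric field, V/m

def inglis_teller_V_per_cm(n):
    return (1.0 / (3.0 * n ** 5)) * F_ATOMIC / 100.0   # convert V/m -> V/cm

for n in (20, 30, 50, 100):
    print(f"n = {n:3d}:  F_IT ~ {inglis_teller_V_per_cm(n):10.3f} V/cm")
```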
In the hydrogen atom, the pure 1/r Coulomb potential does not couple Stark states from adjacent n-manifolds resulting in real crossings as shown in figure 5. The presence of additional terms in the potential energy can lead to coupling resulting in avoided crossings as shown for lithium in figure 6.
Applications and further research
Precision measurements of trapped Rydberg atoms
The radiative decay lifetimes of atoms in metastable states to the ground state are important for understanding astrophysical observations and tests of the standard model.
Investigating diamagnetic effects
The large sizes and low binding energies of Rydberg atoms lead to a high magnetic susceptibility, χ. As diamagnetic effects scale with the area of the orbit, and the area is proportional to the radius squared (A ∝ n4), effects impossible to detect in ground-state atoms become obvious in Rydberg atoms, which demonstrate very large diamagnetic shifts.
Rydberg atoms exhibit strong electric-dipole coupling to electromagnetic fields, and this has been used to detect radio communications.
In plasmas
Rydberg atoms form commonly in plasmas due to the recombination of electrons and positive ions; low-energy recombination results in fairly stable Rydberg atoms, while recombination of electrons and positive ions with high kinetic energy often forms autoionising Rydberg states. Rydberg atoms’ large sizes and susceptibility to perturbation and ionisation by electric and magnetic fields are an important factor in determining the properties of plasmas.
Condensation of Rydberg atoms forms Rydberg matter, most often observed in the form of long-lived clusters. De-excitation is significantly impeded in Rydberg matter by exchange-correlation effects in the non-uniform electron liquid formed on condensation by the collective valence electrons, which causes the extended lifetime of the clusters.
In astrophysics (radio recombination lines)
Rydberg atoms occur in space due to the dynamic equilibrium between photoionization by hot stars and recombination with electrons, which at these very low densities usually proceeds via the electron re-joining the atom in a very high n state, and then gradually dropping through the energy levels to the ground state, giving rise to a sequence of recombination spectral lines spread across the electromagnetic spectrum. The very small differences in energy between Rydberg states differing in n by one or a few means that photons emitted in transitions between such states have low frequencies and long wavelengths, even up to radio waves. The first detection of such a radio recombination line (RRL) was by Soviet radio astronomers in 1964; the line, designated H90α, was emitted by hydrogen atoms in the n = 90 state. Today, Rydberg atoms of hydrogen, helium and carbon in space are routinely observed via RRLs, the brightest of which are the Hnα lines corresponding to transitions from n+1 to n. Weaker lines, Hnβ and Hnγ, with Δn = 2 and 3 are also observed. Corresponding lines for helium and carbon are Henα, Cnα, and so on. The discovery of lines with n > 100 was surprising, as even in the very low densities of interstellar space, many orders of magnitude lower than the best laboratory vacuums attainable on Earth, it had been expected that such highly-excited atoms would be frequently destroyed by collisions, rendering the lines unobservable. Improved theoretical analysis showed that this effect had been overestimated, although collisional broadening does eventually limit detectability of the lines at very high n. The record wavelength for hydrogen is λ = 73 cm for H253α, implying atomic diameters of a few microns, and for carbon, λ = 18 metres, from C732α, from atoms with a diameter of 57 micron.
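The RRL frequencies quoted above follow directly from the Rydberg formula with a reduced-mass correction for the emitting species. The sketch below computes Hnα frequencies (transitions n+1 → n) and reproduces, for example, H90α near 8.87 GHz; the constants are standard values and the rest is straightforward arithmetic.

```python
# Radio recombination line (RRL) frequencies from the Rydberg formula:
#   nu = R_M * c * (1/n^2 - 1/(n+dn)^2),   R_M = R_inf / (1 + m_e/M)
R_INF = 1.0973731568e7        # Rydberg constant, m^-1
C = 2.99792458e8              # speed of light, m/s
ME_OVER_MH = 1.0 / 1836.15    # electron/proton mass ratio (hydrogen nucleus)

def rrl_freq_GHz(n, dn=1, me_over_M=ME_OVER_MH):
    R_M = R_INF / (1.0 + me_over_M)
    return R_M * C * (1.0 / n ** 2 - 1.0 / (n + dn) ** 2) / 1e9

for n in (90, 109, 166, 253):
    print(f"H{n}alpha: {rrl_freq_GHz(n):8.4f} GHz")
```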
RRLs from hydrogen and helium are produced in highly ionized regions (H II regions and the Warm Ionised Medium). Carbon has a lower ionization energy than hydrogen, and so singly-ionized carbon atoms, and the corresponding recombining Rydberg states, exist further from the ionizing stars, in so-called C II regions which form thick shells around H II regions. The larger volume partially compensates for the low abundance of C compared to H, making the carbon RRLs detectable.
In the absence of collisional broadening, the wavelengths of RRLs are modified only by the Doppler effect, so the measured wavelength, λ, is usually converted to a radial velocity, v = c(λ − λ0)/λ0, where λ0 is the rest-frame wavelength. H II regions in our Galaxy can have radial velocities up to ±150 km/s, due to their motion relative to Earth as both orbit the centre of the Galaxy. These motions are regular enough that they can be used to estimate the position of the H II region on the line of sight and so its 3D position in the Galaxy. Because all astrophysical Rydberg atoms are hydrogenic, the frequencies of transitions for H, He, and C are given by the same formula, except for the slightly different reduced mass of the valence electron for each element. This gives helium and carbon lines apparent Doppler shifts of −100 and −140 km/s, respectively, relative to the corresponding hydrogen line.
RRLs are used to detect ionized gas in distant regions of our Galaxy, and also in external galaxies, because the radio photons are not absorbed by interstellar dust, which blocks photons from the more familiar optical transitions. They are also used to measure the temperature of the ionized gas, via the ratio of line intensity to the continuum bremsstrahlung emission from the plasma. Since the temperature of H II regions is regulated by line emission from heavier elements such as C, N, and O, recombination lines also indirectly measure their abundance (metallicity).
RRLs are spread across the radio spectrum with relatively small intervals in wavelength between them, so they frequently occur in radio spectral observations primarily targeted at other spectral lines. For instance, H166α, H167α, and H168α are very close in wavelength to the 21-cm line from neutral hydrogen. This allows radio astronomers to study both the neutral and the ionized interstellar medium from the same set of observations. Since RRLs are numerous and weak, common practice is to average the velocity spectra of several neighbouring lines, to improve sensitivity.
There are a variety of other potential applications of Rydberg atoms in cosmology and astrophysics.
Strongly interacting systems
Due to their large size, Rydberg atoms can exhibit very large electric dipole moments. Calculations using perturbation theory show that this results in strong interactions between two close Rydberg atoms. Coherent control of these interactions combined with their relatively long lifetime makes them a suitable candidate to realize a quantum computer. In 2010 two-qubit gates were achieved experimentally. Strongly interacting Rydberg atoms also feature quantum critical behavior, which makes them interesting to study on their own.
Current research directions
Since the 2000s, research on Rydberg atoms has broadly encompassed five directions: sensing, quantum optics, quantum computation, quantum simulation and Rydberg states of matter. High electric dipole moments between Rydberg atomic states are used for radio-frequency and terahertz sensing and imaging, including non-demolition measurements of individual microwave photons. Electromagnetically induced transparency has been used in combination with the strong interactions between two atoms excited to Rydberg states to provide a medium that exhibits strongly nonlinear behaviour at the level of individual optical photons. The tuneable interaction between Rydberg states also enabled the first quantum simulation experiments.
In October 2018, the United States Army Research Laboratory publicly discussed efforts to develop a super wideband radio receiver using Rydberg atoms. In March 2020, the laboratory announced that its scientists analysed the Rydberg sensor's sensitivity to oscillating electric fields over an enormous range of frequencies—from 0 to 10¹² hertz (i.e. down to a wavelength of 0.3 mm). The Rydberg sensor can reliably detect signals over the entire spectrum and compares favourably with other established electric-field sensor technologies, such as electro-optic crystals and dipole-antenna-coupled passive electronics.
Classical simulation
A simple 1/r potential results in a closed Keplerian elliptical orbit. In the presence of an external electric field Rydberg atoms can obtain very large electric dipole moments making them extremely susceptible to perturbation by the field. Figure 7 shows how application of an external electric field (known in atomic physics as a Stark field) changes the geometry of the potential, dramatically changing the behaviour of the electron. A Coulombic potential does not apply any torque as the force is always antiparallel to the position vector (always pointing along a line running between the electron and the nucleus):
τ = r × F,
|τ| = rF sin θ = 0 (θ = π, as F is antiparallel to r).
With the application of a static electric field, the electron feels a continuously changing torque. The resulting trajectory becomes progressively more distorted over time, eventually going through the full range of angular momentum from L = LMAX, to a straight line (L = 0), to the initial orbit in the opposite sense (L = −LMAX).
The time period of the oscillation in angular momentum (the time to complete the trajectory in figure 8), almost exactly matches the quantum mechanically predicted period for the wavefunction to return to its initial state, demonstrating the classical nature of the Rydberg atom.
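This classical picture can be reproduced by directly integrating the electron's equations of motion in a Coulomb potential plus a weak uniform field. The sketch below works in atomic units and tracks the slow oscillation of the orbital angular momentum; the chosen orbit size, field strength, softening length and integration time are illustrative, and the softening is purely a numerical regularization for the near-radial passes.

```python
# Classical electron in a Coulomb potential plus a weak uniform electric field,
# in atomic units (m_e = e = hbar = 1). The orbital angular momentum slowly
# oscillates between roughly +L_max and -L_max, as described in the text.
import numpy as np
from scipy.integrate import solve_ivp

F = 2.0e-6    # uniform field along x, below the classical ionization limit (assumed)
EPS = 1.0     # small softening length to regularize near-radial passes (numerical)

def deriv(t, y):
    x, vx, z, vz = y                                  # planar motion, x-z plane
    r3 = (x * x + z * z + EPS * EPS) ** 1.5
    return [vx, -x / r3 - F, vz, -z / r3]             # Coulomb attraction + field

# Start on a circular orbit of radius r0 = n^2 (n = 10 -> r0 = 100 Bohr radii).
r0 = 100.0
v0 = np.sqrt(1.0 / r0)                                # circular-orbit speed
sol = solve_ivp(deriv, (0.0, 1.0e6), [r0, 0.0, 0.0, v0], rtol=1e-8, atol=1e-10)

x, vx, z, vz = sol.y
L = x * vz - z * vx                                   # angular momentum about y
print(f"L varies between {L.min():.2f} and {L.max():.2f} "
      f"(initial circular-orbit value {r0 * v0:.2f})")
```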
See also
Heavy Rydberg system
Old quantum theory
Quantum chaos
Rydberg molecule
Rydberg polaron
References
Atoms | Rydberg atom | [
"Physics"
] | 4,005 | [
"Atoms",
"Matter"
] |
1,498,102 | https://en.wikipedia.org/wiki/Femtotechnology | Femtotechnology is a term used in reference to the hypothetical manipulation of matter on the scale of a femtometer, or 10−15 m. This is three orders of magnitude lower than picotechnology, at the scale of 10−12 m, and six orders of magnitude lower than nanotechnology, at the scale of 10−9 m.
Theory
Work in the femtometer range involves manipulation of excited energy states within atomic nuclei, specifically nuclear isomers, to produce metastable (or otherwise stabilized) states with unusual properties. In the extreme case, excited states of the individual nucleons that make up the atomic nucleus (protons and neutrons) are considered, ostensibly to tailor the behavioral properties of these particles.
The most advanced form of molecular nanotechnology is often imagined to involve self-replicating molecular machines, and there have been some speculations suggesting something similar might be possible with analogues of molecules composed of nucleons rather than atoms. For example, the astrophysicist Frank Drake once speculated about the possibility of self-replicating organisms composed of such nuclear molecules living on the surface of a neutron star, a suggestion taken up in the science fiction novel Dragon's Egg by the physicist Robert Forward. It is thought by physicists that nuclear molecules may be possible, but they would be very short-lived, and whether they could actually be made to perform complex tasks such as self-replication, or what type of technology could be used to manipulate them, is unknown.
Applications
Practical applications of femtotechnology are currently considered to be unlikely. The spacings between nuclear energy levels require equipment capable of efficiently generating and processing gamma rays, without equipment degradation. The nature of the strong interaction is such that excited nuclear states tend to be very unstable (unlike the excited electron states in Rydberg atoms), and there are a finite number of excited states below the nuclear binding energy, unlike the (in principle) infinite number of bound states available to an atom's electrons. Similarly, what is known about the excited states of individual nucleons seems to indicate that these do not produce behavior that in any way makes nucleons easier to use or manipulate, and indicates instead that these excited states are even less stable and fewer in number than the excited states of atomic nuclei.
In fiction
Femtotechnology plays a critical role in the 2005 science-fiction novel Pushing Ice. It also features in various stories by Greg Egan such as Riding the Crocodile, where he proposes the idea of a "strong bullet" which overcomes the instability of high atomic weight femto-structures by being accelerated to near light speed, letting it travel interstellar distances before impacting a target and constructing a stable nano-scale structure as it decays.
See also
Attophysics
Femtochemistry
Mode-locking, a laser technique producing pulses in the femtosecond range
Ultrashort pulse
References
External links
Femtotech? (Sub)Nuclear Scale Engineering and Computation
There’s Plenty More Room at the Bottom: Beyond Nanotech to Femtotech
Femtocomputing
Hypothetical technology
Nanotechnology | Femtotechnology | [
"Materials_science",
"Engineering"
] | 641 | [
"Nanotechnology",
"Materials science"
] |
1,498,615 | https://en.wikipedia.org/wiki/Rudolf%20Haag | Rudolf Haag (17 August 1922 – 5 January 2016) was a German theoretical physicist, who mainly dealt with fundamental questions of quantum field theory. He was one of the founders of the modern formulation of quantum field theory and he identified the formal structure in terms of the principle of locality and local observables. He also made important advances in the foundations of quantum statistical mechanics.
Biography
Rudolf Haag was born on 17 August 1922, in Tübingen, a university town in the middle of Baden-Württemberg. His family belonged to the cultured middle class. Haag's mother was the writer and politician Anna Haag. His father, Albert Haag, was a teacher of mathematics at a Gymnasium. After finishing high-school in 1939, he visited his sister in London shortly before the beginning of World War II. He was interned as an enemy alien and spent the war in a camp of German civilians in Manitoba. There he used his spare-time after the daily compulsory labour to study physics and mathematics as an autodidact.
After the war, Haag returned to Germany and enrolled at the Technical University of Stuttgart in 1946, where he graduated as a physicist in 1948. In 1951, he received his doctorate at the University of Munich under the supervision of Fritz Bopp and became his assistant until 1956. In April 1953, he joined the CERN theoretical study group in Copenhagen directed by Niels Bohr. After a year, he returned to his assistant position in Munich and completed the German habilitation in 1954. From 1956 to 1957 he worked with Werner Heisenberg at the Max Planck Institute for Physics in Göttingen.
From 1957 to 1959, he was a visiting professor at Princeton University and from 1959 to 1960 he worked at the University of Marseille. He became a professor of Physics at the University of Illinois Urbana-Champaign in 1960. In 1965, he and Res Jost founded the journal Communications in Mathematical Physics. Haag remained the first editor-in-chief until 1973. In 1966, he accepted the professorship position for theoretical physics at the University of Hamburg, where he stayed until he retired in 1987. After retirement, he worked on the concept of the quantum physical event.
Haag developed an interest in music at an early age. He began learning the violin, but later preferred the piano, which he played almost every day. In 1948, Haag married Käthe Fues, with whom he had four children, Albert, Friedrich, Elisabeth, and Ulrich. After retirement, he moved together with his second wife Barbara Klie to Schliersee, a pastoral village in the Bavarian mountains. He died on 5 January 2016, in Fischhausen-Neuhaus, in southern Bavaria.
Scientific career
At the beginning of his career, Haag contributed significantly to the concepts of quantum field theory, including Haag's theorem, from which follows that the interaction picture of quantum mechanics does not exist in quantum field theory. A new approach to the description of scattering processes of particles became necessary. In the following years Haag developed what is known as Haag–Ruelle scattering theory.
During this work, he realized that the rigid relationship between fields and particles that had been postulated up to that point, did not exist, and that the particle interpretation should be based on Albert Einstein's principle of locality, which assigns operators to regions of spacetime. These insights found their final formulation in the Haag–Kastler axioms for local observables of quantum field theories. This framework uses elements of the theory of operator algebras and is therefore referred to as algebraic quantum field theory or, from the physical point of view, as local quantum physics.
This concept proved fruitful for understanding the fundamental properties of any theory in four-dimensional Minkowski space. Without making assumptions about non-observable charge-changing fields, Haag, in collaboration with Sergio Doplicher and John E. Roberts, elucidated the possible structure of the superselection sectors of the observables in theories with short-range forces. Sectors can always be composed with one another, each sector satisfies either para-Bose or para-Fermi statistics, and for each sector there is a conjugate sector. These insights correspond to the additivity of charges in the particle interpretation, to the Bose–Fermi alternative for particle statistics, and to the existence of antiparticles. In the special case of simple sectors, a global gauge group and charge-carrying fields, which can generate all sectors from the vacuum state, were reconstructed from the observables. These results were later generalized for arbitrary sectors in the Doplicher–Roberts duality theorem. The application of these methods to theories in low-dimensional spaces also led to an understanding of the occurrence of braid group statistics and quantum groups.
In quantum statistical mechanics, Haag, together with Nicolaas M. Hugenholtz and Marinus Winnink, succeeded in generalizing the Gibbs–von Neumann characterization of thermal equilibrium states using the KMS condition (named after Ryogo Kubo, Paul C. Martin, and Julian Schwinger) in such a way that it extends to infinite systems in the thermodynamic limit. It turned out that this condition also plays a prominent role in the theory of von Neumann algebras and resulted in the Tomita–Takesaki theory. This theory has proven to be a central element in structural analysis and recently also in the construction of concrete quantum field theoretical models. Together with Daniel Kastler and Ewa Trych-Pohlmeyer, Haag also succeeded in deriving the KMS condition from the stability properties of thermal equilibrium states. Together with Huzihiro Araki, Daniel Kastler, and Masamichi Takesaki, he also developed a theory of chemical potential in this context.
The framework created by Haag and Kastler for studying quantum field theories in Minkowski space can be transferred to theories in curved spacetime. By working with Klaus Fredenhagen, Heide Narnhofer, and Ulrich Stein, Haag made important contributions to the understanding of the Unruh effect and Hawking radiation.
Haag had a certain mistrust towards what he viewed as speculative developments in theoretical physics but occasionally dealt with such questions. The best-known contribution is the Haag–Łopuszański–Sohnius theorem, which classifies the possible supersymmetries of the S-matrix that are not covered by the Coleman–Mandula theorem.
Honors and awards
In 1970 Haag received the Max Planck Medal for outstanding achievements in theoretical physics and in 1997 the Henri Poincaré Prize for his fundamental contributions to quantum field theory as one of the founders of the modern formulation. Since 1980 Haag was a member of the German National Academy of Sciences Leopoldina and since 1981 of the Göttingen Academy of Sciences. Since 1979 he was a corresponding member of the Bavarian Academy of Sciences and since 1987 of the Austrian Academy of Sciences.
Publications
Textbook
Selected scientific works
(Haag's theorem.)
(Haag–Ruelle scattering theory.)
(Haag–Kastler axioms.)
(Doplicher-Haag-Roberts analysis of the superselection structure.)
(KMS condition.)
(Stability and KMS condition.)
(KMS condition and chemical potential.)
(Unruh effect.)
(Hawking radiation.)
(Classification of Supersymmetry.)
(Concept of Event.)
Others
See also
Axiomatic quantum field theory
Communications in Mathematical Physics
Constructive quantum field theory
Haag–Łopuszański–Sohnius theorem
Haag–Ruelle scattering theory
Haag's theorem
Hilbert's sixth problem
Local quantum physics
Principle of locality
Quantum field theory
Quantum field theory in curved spacetime
Notes
References
Further reading
(With photo).
(With photo).
External links
.
.
.
.
Theoretical physicists
Mathematical physicists
German theoretical physicists
20th-century German physicists
21st-century German physicists
Academic staff of the University of Hamburg
Winners of the Max Planck Medal
Members of the Austrian Academy of Sciences
Members of the Bavarian Academy of Sciences
Members of the German National Academy of Sciences Leopoldina
1922 births
2016 deaths
People associated with CERN | Rudolf Haag | [
"Physics"
] | 1,675 | [
"Theoretical physics",
"Theoretical physicists"
] |
1,498,625 | https://en.wikipedia.org/wiki/Mellin%20inversion%20theorem | In mathematics, the Mellin inversion formula (named after Hjalmar Mellin) tells us conditions under
which the inverse Mellin transform, or equivalently the inverse two-sided Laplace transform, are defined and recover the transformed function.
Method
If φ(s) is analytic in the strip a < Re(s) < b,
and if it tends to zero uniformly as Im(s) → ±∞ for any real value c between a and b, with its integral along such a line converging absolutely, then if
f(x) = (1/(2πi)) ∫_{c−i∞}^{c+i∞} x^(−s) φ(s) ds
we have that
φ(s) = ∫_0^∞ x^(s−1) f(x) dx.
Conversely, suppose f(x) is piecewise continuous on the positive real numbers, taking a value halfway between the limit values at any jump discontinuities, and suppose the integral
φ(s) = ∫_0^∞ x^(s−1) f(x) dx
is absolutely convergent when a < Re(s) < b. Then f is recoverable via the inverse Mellin transform from its Mellin transform φ. These results can be obtained by relating the Mellin transform to the Fourier transform by a change of variables and then applying an appropriate version of the Fourier inversion theorem.
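A quick numerical sanity check of the inversion formula, using the classic pair f(x) = e^(−x), φ(s) = Γ(s): the inverse-Mellin integral is evaluated along the line Re(s) = c by simple quadrature. The contour truncation and grid spacing below are ad hoc illustrative choices.

```python
# Numerical check of Mellin inversion for f(x) = exp(-x), whose Mellin
# transform is the gamma function, phi(s) = Gamma(s):
#   f(x) = (1/(2*pi*i)) * integral over Re(s)=c of x^(-s) * Gamma(s) ds
import numpy as np
from scipy.special import gamma

def inverse_mellin_of_gamma(x, c=2.0, t_max=60.0, n_pts=200_001):
    t = np.linspace(-t_max, t_max, n_pts)   # s = c + i*t along the contour
    dt = t[1] - t[0]
    s = c + 1j * t
    integrand = x ** (-s) * gamma(s)        # |Gamma(c+it)| decays fast in |t|
    return float(np.real(np.sum(integrand)) * dt / (2.0 * np.pi))

for x in (0.5, 1.0, 2.0, 4.0):
    print(f"x = {x:3.1f}:  inversion = {inverse_mellin_of_gamma(x):.6f},  "
          f"exp(-x) = {np.exp(-x):.6f}")
```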
Boundedness condition
The boundedness condition on φ(s) can be strengthened if
f(x) is continuous. If φ(s) is analytic in the strip a < Re(s) < b, and if |φ(s)| < K|s|^(−2), where K is a positive constant, then f(x) as defined by the inversion integral exists and is continuous; moreover the Mellin transform of f is φ for at least a < Re(s) < b.
On the other hand, if we are willing to accept an original f which is a
generalized function, we may relax the boundedness condition on φ to
simply make it of polynomial growth in any closed strip contained in the open strip a < Re(s) < b.
We may also define a Banach space version of this theorem. If we call by
the weighted Lp space of complex valued functions on the positive reals such that
where ν and p are fixed real numbers with , then if
is in with , then
belongs to with and
Here functions, identical everywhere except on a set of measure zero, are identified.
Since the two-sided Laplace transform can be defined as
{B f}(s) = {M f(−ln x)}(s),
these theorems can be immediately applied to it also.
See also
Mellin transform
Nachbin's theorem
References
External links
Tables of Integral Transforms at EqWorld: The World of Mathematical Equations.
Integral transforms
Theorems in complex analysis
Laplace transforms | Mellin inversion theorem | [
"Mathematics"
] | 410 | [
"Theorems in mathematical analysis",
"Theorems in complex analysis"
] |
1,499,203 | https://en.wikipedia.org/wiki/Transocean | Transocean Ltd. is an American drilling company. It is the world's largest offshore drilling contractor based on revenue and is based in Steinhausen, Switzerland. The company has offices in 20 countries, including Canada, the United States, Norway, United Kingdom, India, Brazil, Singapore, Indonesia, and Malaysia.
In 2010, Transocean was found partially responsible (30% of total liability) for the Deepwater Horizon oil spill resulting from the explosion of one of its oil rigs in the Gulf of Mexico.
The primary business of Transocean is contracts with other large companies in the oil and gas industry. In 2019, Royal Dutch Shell accounted for 26% of the company's revenues, while Equinor accounted for 21% of the company's revenues, and Chevron accounted for 17% of the company's revenues.
History
Transocean was formed as a result of the merger of Southern Natural Gas Company, later Sonat, with many smaller drilling companies.
In 1953, the Birmingham, Alabama-based Southern Natural Gas Company created The Offshore Company after acquiring the joint drilling operation DeLong-McDermott from DeLong Engineering and J. Ray McDermott. In 1954, the company launched Rig 51, the first mobile jackup rig, in the Gulf of Mexico. In 1967, the Offshore Company went public. In 1978, SNG turned it into a wholly owned subsidiary. In 1982, it was changed to Sonat Offshore Drilling Inc., reflecting a change in its parent's name. William C. O'Malley, an executive at Sonat's headquarters in Birmingham, was named the company's first Chief Executive Officer in 1992. In 1993, Sonat spun off the majority of its ownership in the company. Sonat sold its remaining 40% stake in the company during a secondary public offering in late 1995.
In 1996, the company acquired Norwegian group Transocean ASA for US$1.5 billion. Transocean started in the 1970s as a whaling company and expanded through a series of mergers. The new company was called Transocean Offshore. It began building massive drilling operations with drills capable of going to 10,000 feet (as opposed to 3,000 feet at the time) and operating two drill operations on the same ship. Its first ship, Discoverer Enterprise, cost nearly US$430 million.
In 1999, Transocean merged with Sedco Forex, the offshore drilling subsidiary of Schlumberger in a $3.2 billion stock transaction in which Schlumberger shareholders received shares of Transocean.
Sedco Forex had been formed from a merger of two drilling companies, the Southeastern Drilling Company (Sedco), founded in 1947 by Bill Clements and acquired by Schlumberger in 1985 for $1 billion and French drilling company Forages et Exploitations Pétrolières (Forex) founded in 1942 in German-occupied France for drilling in North Africa. Schlumberger first got a foothold in the company in 1959 and then assumed total control in 1964, and renamed it Forex Neptune Drilling Company.
In 2000, Transocean acquired R&B Falcon Corporation, owner of 115 drilling rigs, in a deal valued at $17.7 billion. With the acquisition, Transocean gained control of what at the time was the world's largest offshore operation. Among R&B Falcon's assets was the Deepwater Horizon. R&B Falcon had acquired Cliffs Drilling Company in 1998.
In 2005, the company's Discoverer Spirit rig set a world record for the deepest offshore oil and gas well.
In 2007, the US Department of Justice and the Securities and Exchange Commission filed a case against Transocean, alleging violations of the Foreign Corrupt Practices Act. The case alleged that Transocean paid bribes through its freight forwarding agents to Nigerian customs officials. Transocean later admitted to approving the bribes and agreed to pay US$13,440,000 to settle the matter.
In 2007, the company merged with GlobalSantaFe Corporation in a transaction that created a company with an enterprise value of $53 billion. Shareholders of GlobalSantaFe Corporation received $15 billion of cash as well as stock in the new company for their shares. Robert E. Rose, who was non-executive chairman of GlobalSantaFe, was made Transocean's chairman. Rose had been chairman of Global Marine prior to its 2001 merger with Santa Fe International Corporation.
In 2008, the company moved its headquarters to Switzerland, resulting in a significantly lower tax rate.
In September 2009, its Deepwater Horizon rig drilled the deepest well in history – more than 5,000 feet deeper than the rig's stated design specification.
In 2010, Transocean was implicated in the Deepwater Horizon oil spill resulting from the explosion of one of its oil rigs in the Gulf of Mexico that was leased to BP.
In 2011, the company acquired Aker Drilling, which owned 4 harsh environment rigs used for drilling near Norway.
In 2012, the company sold 38 shallow water rigs and narrowed its focus on high-specification deepwater rigs.
In 2013, the company was added to the S&P 500 index.
In February 2015, CEO Steven Newman quit following a $2.2 billion quarterly loss.
Effective on 30 March 2016, the company delisted its shares from the SIX Swiss Exchange, at which time its shares were removed from the Swiss Market Index.
Effective on January 30, 2018, the company completed its acquisition of Songa Offshore.
In December 2018, the company acquired Ocean Rig.
Controversies
Accidents and incidents
Transocean was rated as a leader in its industry for many years. However, since the company's 2007 merger with GlobalSantaFe, Transocean's reputation has suffered considerably, according to EnergyPoint Research, an independent oil service industry rating firm. From 2004 to 2007, Transocean was the leader or near the top among deep-water drillers in "job quality" and "overall satisfaction." In 2008 and 2009, surveys ranked Transocean as last among deep-water drillers for "job quality" and next to last in "overall satisfaction." In 2008 and 2009, Transocean ranked first for in-house safety and environmental policies, and in the middle of the pack for perceived environmental and safety record. The Deepwater Horizon explosion and massive oil spill, starting in April 2010, further hurt its reputation. "Transocean is dominant, but the accident has definitely tarnished its reputation for worker safety and for being able to manage and deliver on extraordinarily complex deepwater projects," said Christopher Ruppel, an energy expert and managing director of capital markets at Execution Noble, an investment bank.
Transocean Leader accident (2002)
On 2 March 2002, a Scottish man was killed in an accident aboard the Transocean Leader drilling rig operated for BP, located about 138 kilometers (86 miles) west of Shetland, Scotland.
Galveston Bay explosion (2003)
On 17 June 2003, one worker was killed, four others were hospitalised and 21 were evacuated after an explosion on a Transocean gas drilling rig in Galveston Bay, Texas.
Maintenance citation on Transocean Rather (2005)
On 24 August 2005, the UK Health and Safety Executive issued a notice to Transocean saying that it had failed to maintain its "remote blowout preventor control panel … in an efficient state, efficient working order and in good repair." On 21 November 2005, Transocean was found to be in compliance for this matter.
Sinking of Bourbon Dolphin supply boat and Transocean Rather accident (2007)
On 12 April 2007, the Bourbon Dolphin supply boat sank off the coast of Scotland while servicing the Transocean Rather drilling rig, killing eight people. The Norwegian Ministry of Justice established a Commission of Inquiry to investigate the incident, and the commission's report found a series of "unfortunate circumstances" led to the accident "with many of them linked to Bourbon Offshore and Transocean."
2008 fatalities
In 2008, two Transocean workers were reportedly killed on the company's vessels.
Deepwater Horizon drilling rig explosion (2010)
On 20 April 2010, a fire was reported on a Transocean-owned semisubmersible drilling rig, Deepwater Horizon. Deepwater Horizon was an RBS8D design of Reading & Bates Falcon, a firm that was acquired by Transocean in 2001. The fire broke out at 10:00 p.m. CDT (UTC−5) in US waters of Mississippi Canyon Block 252 in the Gulf of Mexico. The rig was off the Louisiana coast. The US Coast Guard launched a rescue operation after the explosion, which killed 11 workers and critically injured seven of the 126-member crew.
Deepwater Horizon was completely destroyed and subsequently sank.
As the Deepwater Horizon sank, the riser pipe that connected the well-head to the rig was severed. As a result, oil began to spill into the Gulf of Mexico. Estimates of the leak were about 80,000 barrels per day – for 87 days.
Louisiana Governor Bobby Jindal declared a state of emergency on 29 April, as the oil slick grew and headed toward the most important and most sensitive wetlands in North America, threatening to destroy wildlife and the livelihood of thousands of fishermen. The head of BP Group told CNN's Brian Todd on 28 April that the accident could have been prevented and focused blame on Transocean, which owned and partly manned the rig.
Transocean came under fire from lawyers, representing the fishing and tourism businesses that were hit by the oil spill, and the United States Department of Justice for seeking to use a Limitation of Liability Act of 1851 to restrict its liability for economic damages to $26.7 million.
During Congressional testimony, Transocean and BP blamed each other for the disaster. It emerged that a "heated argument" broke out on the platform 11 hours before the accident, in which Transocean and BP personnel disagreed on an engineering decision related to the closing of the well. On 14 May 2010, US President Barack Obama commented, "I did not appreciate what I considered to be a ridiculous spectacle… executives of BP and Transocean and Halliburton [the firm responsible for cementing the well] falling over each other to point the finger of blame at somebody else. The American people could not have been impressed with that display, and I certainly wasn't."
Transocean later claimed that 2010, the year in which the disaster occurred, was "the best year in safety performance in our company’s history". In a regulatory filing, Transocean said, "Notwithstanding the tragic loss of life in the Gulf of Mexico, we achieved an exemplary statistical safety record as measured by our total recordable incident rate and total potential severity rate." They used this justification to award employees about two-thirds of the maximum possible safety bonuses. In response to broad criticism, including from Interior Secretary Ken Salazar, the company announced that its executives would donate the safety portion of the bonuses to a fund supporting the victims' families.
Offshore drilling leak off the Brazilian coast (2011)
The offshore drilling facility "Sedco 706", operated by Transocean under contract from Chevron, began to leak in November 2011 while working on the "Frade" oil field. Oil began leaking from the seabed at a depth of approximately 1100 to 1200m. Damage included an oil slick (oil floating on the ocean surface) covering an area of approximately 80 km2 and growing. This put the oil at a distance of about 370 km from Rio de Janeiro, but other beautiful beaches are much closer (estimated 140 km). The Brazilian government sued Transocean and attempted to force the company to cease operations in Brazil, but a settlement was reached without a finding of fault or liability.
Transocean Winner grounding on the Isle of Lewis, Scotland (2016)
In the early hours of Monday 8 August 2016, the semi-submersible drilling rig Transocean Winner ran aground near Dalmore in the Carloway district of the Isle of Lewis in the Outer Hebrides, Scotland. The rig had been under tow by the tug Alp Forward in gale-force winds when the tow line broke. The rig subsequently drifted ashore at Dalmore and became stuck fast on rocks at 07.30 BST. Continuing poor weather made a damage inspection by salvors practically impossible, as personnel had to be airlifted onto the rig despite it being close to the shore. The rig was carrying approximately 280 tons of diesel to power its generators, of which 53 tons is thought to have leaked into the sea and dispersed or evaporated in rough conditions. Environmental monitoring of plant and animal life is ongoing, particularly in view of the economically important fish-farming operations in nearby Loch Ròg.
See also
List of oilfield service companies
List of Texas companies (T)
References
External links
Subsidiaries of Transocean Ltd. worldwide (as of December 31, 2018) (U.S. Securities and Exchange Commission)
Companies listed on the New York Stock Exchange
Service companies of Switzerland
Drilling rig operators
Swiss companies established in 1973
Energy engineering and contractor companies
Norwegian companies established in 1973
Vernier, Switzerland
Tax inversions
Energy companies established in 1973 | Transocean | [
"Engineering"
] | 2,719 | [
"Energy engineering and contractor companies",
"Engineering companies"
] |
2,146,034 | https://en.wikipedia.org/wiki/CRISPR | CRISPR () (an acronym for clustered regularly interspaced short palindromic repeats) is a family of DNA sequences found in the genomes of prokaryotic organisms such as bacteria and archaea. Each sequence within an individual prokaryotic cell is derived from a DNA fragment of a bacteriophage that had previously infected the prokaryote or one of its ancestors. These sequences are used to detect and destroy DNA from similar bacteriophages during subsequent infections. Hence these sequences play a key role in the antiviral (i.e. anti-phage) defense system of prokaryotes and provide a form of heritable, acquired immunity. CRISPR is found in approximately 50% of sequenced bacterial genomes and nearly 90% of sequenced archaea.
Cas9 (or "CRISPR-associated protein 9") is an enzyme that uses CRISPR sequences as a guide to recognize and open up specific strands of DNA that are complementary to the CRISPR sequence. Cas9 enzymes together with CRISPR sequences form the basis of a technology known as CRISPR-Cas9 that can be used to edit genes within living organisms. This editing process has a wide variety of applications including basic biological research, development of biotechnological products, and treatment of diseases. The development of the CRISPR-Cas9 genome editing technique was recognized by the Nobel Prize in Chemistry in 2020 awarded to Emmanuelle Charpentier and Jennifer Doudna.
History
Repeated sequences
The discovery of clustered DNA repeats took place independently in three parts of the world. The first description of what would later be called CRISPR is from Osaka University researcher Yoshizumi Ishino and his colleagues in 1987. They accidentally cloned part of a CRISPR sequence together with the "iap" gene (isozyme conversion of alkaline phosphatase) from their target genome, that of Escherichia coli. The organization of the repeats was unusual. Repeated sequences are typically arranged consecutively, without interspersing different sequences. They did not know the function of the interrupted clustered repeats.
In 1993, researchers of Mycobacterium tuberculosis in the Netherlands published two articles about a cluster of interrupted direct repeats (DR) in that bacterium. They recognized the diversity of the sequences that intervened in the direct repeats among different strains of M. tuberculosis and used this property to design a typing method called spoligotyping, still in use today.
Francisco Mojica at the University of Alicante in Spain studied the function of repeats in the archaeal species Haloferax and Haloarcula. Mojica's supervisor surmised that the clustered repeats had a role in correctly segregating replicated DNA into daughter cells during cell division, because plasmids and chromosomes with identical repeat arrays could not coexist in Haloferax volcanii. Transcription of the interrupted repeats was also noted for the first time; this was the first full characterization of CRISPR. By 2000, Mojica and his students, after an automated search of published genomes, identified interrupted repeats in 20 species of microbes as belonging to the same family. Because those sequences were interspaced, Mojica initially called these sequences "short regularly spaced repeats" (SRSR). In 2001, Mojica and Ruud Jansen, who were searching for additional interrupted repeats, proposed the acronym CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) to unify the numerous acronyms used to describe these sequences. In 2002, Tang et al. showed evidence that CRISPR repeat regions from the genome of Archaeoglobus fulgidus were transcribed into long RNA molecules subsequently processed into unit-length small RNAs, plus some longer forms of 2, 3, or more spacer-repeat units.
In 2005, yogurt researcher Rodolphe Barrangou discovered that Streptococcus thermophilus, after iterative phage infection challenges, develops increased phage resistance due to the incorporation of additional CRISPR spacer sequences. Barrangou's employer, the Danish food company Danisco, then developed phage-resistant S. thermophilus strains for yogurt production. Danisco was later bought by DuPont, which owns about 50 percent of the global dairy culture market, and the technology spread widely.
CRISPR-associated systems
A major advance in understanding CRISPR came with Jansen's observation that the prokaryote repeat cluster was accompanied by four homologous genes that make up CRISPR-associated systems, cas 1–4. The Cas proteins showed helicase and nuclease motifs, suggesting a role in the dynamic structure of the CRISPR loci. In this publication, the acronym CRISPR was used as the universal name of this pattern, but its function remained enigmatic.
In 2005, three independent research groups showed that some CRISPR spacers are derived from phage DNA and extrachromosomal DNA such as plasmids. In effect, the spacers are fragments of DNA gathered from viruses that previously attacked the cell. The source of the spacers was a sign that the CRISPR-cas system could have a role in adaptive immunity in bacteria. All three studies proposing this idea were initially rejected by high-profile journals, but eventually appeared in other journals.
The first publication proposing a role of CRISPR-Cas in microbial immunity, by Mojica and collaborators at the University of Alicante, predicted a role for the RNA transcript of spacers on target recognition in a mechanism that could be analogous to the RNA interference system used by eukaryotic cells. Koonin and colleagues extended this RNA interference hypothesis by proposing mechanisms of action for the different CRISPR-Cas subtypes according to the predicted function of their proteins.
Experimental work by several groups revealed the basic mechanisms of CRISPR-Cas immunity. In 2007, the first experimental evidence that CRISPR was an adaptive immune system was published. A CRISPR region in Streptococcus thermophilus acquired spacers from the DNA of an infecting bacteriophage. The researchers manipulated the resistance of S. thermophilus to different types of phages by adding and deleting spacers whose sequence matched those found in the tested phages. In 2008, Brouns and Van der Oost identified a complex of Cas proteins called Cascade, that in E. coli cut the CRISPR RNA precursor within the repeats into mature spacer-containing RNA molecules called CRISPR RNA (crRNA), which remained bound to the protein complex. Moreover, it was found that Cascade, crRNA and a helicase/nuclease (Cas3) were required to provide a bacterial host with immunity against infection by a DNA virus. By designing an anti-virus CRISPR, they demonstrated that two orientations of the crRNA (sense/antisense) provided immunity, indicating that the crRNA guides were targeting dsDNA. That year Marraffini and Sontheimer confirmed that a CRISPR sequence of S. epidermidis targeted DNA and not RNA to prevent conjugation. This finding was at odds with the proposed RNA-interference-like mechanism of CRISPR-Cas immunity, although a CRISPR-Cas system that targets foreign RNA was later found in Pyrococcus furiosus. A 2010 study showed that CRISPR-Cas cuts strands of both phage and plasmid DNA in S. thermophilus.
Cas9
A simpler CRISPR system from Streptococcus pyogenes relies on the protein Cas9. The Cas9 endonuclease is a four-component system that includes two small RNA molecules: crRNA and trans-activating CRISPR RNA (tracrRNA). In 2012, Jennifer Doudna and Emmanuelle Charpentier re-engineered the Cas9 endonuclease into a more manageable two-component system by fusing the two RNA molecules into a "single-guide RNA" that, when combined with Cas9, could find and cut the DNA target specified by the guide RNA. This contribution was so significant that it was recognized by the Nobel Prize in Chemistry in 2020. By manipulating the nucleotide sequence of the guide RNA, the artificial Cas9 system could be programmed to target any DNA sequence for cleavage. Another collaboration comprising Virginijus Šikšnys, Gasiūnas, Barrangou, and Horvath showed that Cas9 from the S. thermophilus CRISPR system can also be reprogrammed to target a site of their choosing by changing the sequence of its crRNA. These advances fueled efforts to edit genomes with the modified CRISPR-Cas9 system.
Groups led by Feng Zhang and George Church simultaneously published descriptions of genome editing in human cell cultures using CRISPR-Cas9 for the first time. It has since been used in a wide range of organisms, including baker's yeast (Saccharomyces cerevisiae), the opportunistic pathogen Candida albicans, zebrafish (Danio rerio), fruit flies (Drosophila melanogaster), ants (Harpegnathos saltator and Ooceraea biroi), mosquitoes (Aedes aegypti), nematodes (Caenorhabditis elegans), plants, mice (Mus musculus domesticus), monkeys and human embryos.
CRISPR has been modified to make programmable transcription factors that allow activation or silencing of targeted genes.
The CRISPR-Cas9 system has been shown to make effective gene edits in human tripronuclear zygotes, as first described in a 2015 paper by Chinese scientists P. Liang and Y. Xu. The system made a successful cleavage of mutant beta-hemoglobin (HBB) in 28 out of 54 embryos. Four out of the 28 embryos were successfully recombined using a donor template. The scientists showed that during DNA recombination of the cleaved strand, the homologous endogenous sequence HBD competes with the exogenous donor template. DNA repair in human embryos is much more complicated and particular than in derived stem cells.
Cas12a
In 2015, the nuclease Cas12a (formerly called Cpf1) was characterized in the CRISPR-Cpf1 system of the bacterium Francisella novicida. Its original name, from a TIGRFAMs protein family definition built in 2012, reflects the prevalence of its CRISPR-Cas subtype in the Prevotella and Francisella lineages. Cas12a showed several key differences from Cas9 including: causing a 'staggered' cut in double stranded DNA as opposed to the 'blunt' cut produced by Cas9, relying on a 'T rich' PAM (providing alternative targeting sites to Cas9), and requiring only a CRISPR RNA (crRNA) for successful targeting. By contrast, Cas9 requires both crRNA and a trans-activating crRNA (tracrRNA).
These differences may give Cas12a some advantages over Cas9. For example, Cas12a's small crRNAs are ideal for multiplexed genome editing, as more of them can be packaged in one vector than can Cas9's sgRNAs. The sticky 5′ overhangs left by Cas12a can also be used for DNA assembly that is much more target-specific than traditional restriction enzyme cloning. Finally, Cas12a cleaves DNA 18–23 base pairs downstream from the PAM site. This means there is no disruption to the recognition sequence after repair, and so Cas12a enables multiple rounds of DNA cleavage. By contrast, since Cas9 cuts only 3 base pairs upstream of the PAM site, the NHEJ pathway results in indel mutations that destroy the recognition sequence, thereby preventing further rounds of cutting. In theory, repeated rounds of DNA cleavage should cause an increased opportunity for the desired genomic editing to occur. A distinctive feature of Cas12a, as compared to Cas9, is that after cutting its target, Cas12a remains bound to the target and then cleaves other ssDNA molecules non-discriminately. This property is called "collateral cleavage" or "trans-cleavage" activity and has been exploited for the development of various diagnostic technologies.
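As an illustration of the cut-site geometry just described, the following Python sketch scans a sequence for candidate PAM sites and reports approximate cut positions. The 5'-NGG motif used here for Cas9 and the 5'-TTTV motif used for Cas12a are commonly cited PAMs that are not stated in the text above, so treat them, the offsets, and the demo sequence as illustrative assumptions rather than a validated guide-design tool.
```python
import re

def find_cas9_sites(seq):
    """Scan for 5'-NGG PAMs and report a blunt-cut position roughly 3 bp upstream of the PAM."""
    sites = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start()
        cut = pam_start - 3  # approximate blunt cut, proximal to the PAM
        if cut >= 0:
            sites.append({"pam": seq[pam_start:pam_start + 3], "pam_pos": pam_start, "cut_pos": cut})
    return sites

def find_cas12a_sites(seq):
    """Scan for 5'-TTTV PAMs and report the staggered-cut window ~18-23 bp downstream of the PAM."""
    sites = []
    for m in re.finditer(r"(?=(TTT[ACG]))", seq):
        pam_start = m.start()
        window = (pam_start + 4 + 18, pam_start + 4 + 23)  # staggered cut, distal to the PAM
        if window[1] < len(seq):
            sites.append({"pam": seq[pam_start:pam_start + 4], "pam_pos": pam_start, "cut_window": window})
    return sites

if __name__ == "__main__":
    demo = "ATGTTTACCGGATCGGTACGTTGGAACCTTTGCCGTTAGGCATCGGAATTCCGG"  # synthetic example
    print(find_cas9_sites(demo))
    print(find_cas12a_sites(demo))
```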
Cas13
In 2016, the nuclease Cas13a (formerly known as C2c2) from the bacterium Leptotrichia shahii was characterized. Cas13 is an RNA-guided RNA endonuclease, which means that it does not cleave DNA, but only single-stranded RNA. Cas13 is guided by its crRNA to a ssRNA target and binds and cleaves the target. Similar to Cas12a, Cas13 remains bound to the target and then cleaves other ssRNA molecules non-discriminately. This collateral cleavage property has been exploited for the development of various diagnostic technologies.
In 2021, Dr. Hui Yang characterized novel miniature Cas13 protein (mCas13) variants, Cas13X and Cas13Y. Using a small portion of the N gene sequence from SARS-CoV-2 as a target, the characterization of mCas13 revealed the sensitivity and specificity of mCas13 coupled with RT-LAMP for detection of SARS-CoV-2 in both synthetic and clinical samples, comparable to other available standard tests such as RT-qPCR (1 copy/μL).
Locus structure
Repeats and spacers
The CRISPR array is made up of an AT-rich leader sequence followed by short repeats that are separated by unique spacers. CRISPR repeats typically range in size from 28 to 37 base pairs (bps), though there can be as few as 23 bp and as many as 55 bp. Some show dyad symmetry, implying the formation of a secondary structure such as a stem-loop ('hairpin') in the RNA, while others appear to be unstructured. The size of spacers in different CRISPR arrays is typically 32 to 38 bp (range 21 to 72 bp). New spacers can appear rapidly as part of the immune response to phage infection. There are usually fewer than 50 units of the repeat-spacer sequence in a CRISPR array.
CRISPR RNA structures
Cas genes and CRISPR subtypes
Small clusters of cas genes are often located next to CRISPR repeat-spacer arrays. Collectively the 93 cas genes are grouped into 35 families based on sequence similarity of the encoded proteins. 11 of the 35 families form the cas core, which includes the protein families Cas1 through Cas9. A complete CRISPR-Cas locus has at least one gene belonging to the cas core.
CRISPR-Cas systems fall into two classes. Class 1 systems use a complex of multiple Cas proteins to degrade foreign nucleic acids. Class 2 systems use a single large Cas protein for the same purpose. Class 1 is divided into types I, III, and IV; class 2 is divided into types II, V, and VI. The 6 system types are divided into 33 subtypes. Each type and most subtypes are characterized by a "signature gene" found almost exclusively in the category. Classification is also based on the complement of cas genes that are present. Most CRISPR-Cas systems have a Cas1 protein. The phylogeny of Cas1 proteins generally agrees with the classification system, but exceptions exist due to module shuffling. Many organisms contain multiple CRISPR-Cas systems suggesting that they are compatible and may share components. The sporadic distribution of the CRISPR-Cas subtypes suggests that the CRISPR-Cas system is subject to horizontal gene transfer during microbial evolution.
Mechanism
CRISPR-Cas immunity is a natural process of bacteria and archaea. CRISPR-Cas prevents bacteriophage infection, conjugation and natural transformation by degrading foreign nucleic acids that enter the cell.
Spacer acquisition
When a microbe is invaded by a bacteriophage, the first stage of the immune response is to capture phage DNA and insert it into a CRISPR locus in the form of a spacer. Cas1 and Cas2 are found in both types of CRISPR-Cas immune systems, which indicates that they are involved in spacer acquisition. Mutation studies confirmed this hypothesis, showing that removal of Cas1 or Cas2 stopped spacer acquisition, without affecting CRISPR immune response.
Multiple Cas1 proteins have been characterised and their structures resolved. Cas1 proteins have diverse amino acid sequences. However, their crystal structures are similar and all purified Cas1 proteins are metal-dependent nucleases/integrases that bind to DNA in a sequence-independent manner. Representative Cas2 proteins have been characterised and possess either (single strand) ssRNA- or (double strand) dsDNA- specific endoribonuclease activity.
In the I-E system of E. coli, Cas1 and Cas2 form a complex where a Cas2 dimer bridges two Cas1 dimers. In this complex Cas2 performs a non-enzymatic scaffolding role, binding double-stranded fragments of invading DNA, while Cas1 binds the single-stranded flanks of the DNA and catalyses their integration into CRISPR arrays. New spacers are usually added at the beginning of the CRISPR next to the leader sequence, creating a chronological record of viral infections. In E. coli a histone-like protein called integration host factor (IHF), which binds to the leader sequence, is responsible for the accuracy of this integration. IHF also enhances integration efficiency in the type I-F system of Pectobacterium atrosepticum, but in other systems different host factors may be required.
Protospacer adjacent motifs (PAM)
Bioinformatic analysis of regions of phage genomes that were excised as spacers (termed protospacers) revealed that they were not randomly selected but instead were found adjacent to short (3–5 bp) DNA sequences termed protospacer adjacent motifs (PAM). Analysis of CRISPR-Cas systems showed PAMs to be important for type I and type II, but not type III systems during acquisition. In type I and type II systems, protospacers are excised at positions adjacent to a PAM sequence, with the other end of the spacer cut using a ruler mechanism, thus maintaining the regularity of the spacer size in the CRISPR array. The conservation of the PAM sequence differs between CRISPR-Cas systems and appears to be evolutionarily linked to Cas1 and the leader sequence.
New spacers are added to a CRISPR array in a directional manner, occurring preferentially, but not exclusively, adjacent to the leader sequence. Analysis of the type I-E system from E. coli demonstrated that the first direct repeat adjacent to the leader sequence is copied, with the newly acquired spacer inserted between the first and second direct repeats.
The PAM sequence appears to be important during spacer insertion in type I-E systems. That sequence contains a strongly conserved final nucleotide (nt) adjacent to the first nt of the protospacer. This nt becomes the final base in the first direct repeat. This suggests that the spacer acquisition machinery generates single-stranded overhangs in the second-to-last position of the direct repeat and in the PAM during spacer insertion. However, not all CRISPR-Cas systems appear to share this mechanism as PAMs in other organisms do not show the same level of conservation in the final position. It is likely that in those systems, a blunt end is generated at the very end of the direct repeat and the protospacer during acquisition.
Insertion variants
Analysis of Sulfolobus solfataricus CRISPRs revealed further complexities to the canonical model of spacer insertion, as one of its six CRISPR loci inserted new spacers randomly throughout its CRISPR array, as opposed to inserting closest to the leader sequence.
Multiple CRISPRs contain many spacers to the same phage. The mechanism that causes this phenomenon was discovered in the type I-E system of E. coli. A significant enhancement in spacer acquisition was detected where spacers already target the phage, even ones with mismatches to the protospacer. This 'priming' requires the Cas proteins involved in both acquisition and interference to interact with each other. Newly acquired spacers that result from the priming mechanism are always found on the same strand as the priming spacer. This observation led to the hypothesis that the acquisition machinery slides along the foreign DNA after priming to find a new protospacer.
Biogenesis
CRISPR-RNA (crRNA), which later guides the Cas nuclease to the target during the interference step, must be generated from the CRISPR sequence. The crRNA is initially transcribed as part of a single long transcript encompassing much of the CRISPR array. This transcript is then cleaved by Cas proteins to form crRNAs. The mechanism to produce crRNAs differs among CRISPR-Cas systems. In type I-E and type I-F systems, the proteins Cas6e and Cas6f respectively, recognise stem-loops created by the pairing of identical repeats that flank the crRNA. These Cas proteins cleave the longer transcript at the edge of the paired region, leaving a single crRNA along with a small remnant of the paired repeat region.
Type III systems also use Cas6; however, their repeats do not produce stem-loops. Cleavage instead occurs by the longer transcript wrapping around Cas6 to allow cleavage just upstream of the repeat sequence.
Type II systems lack the Cas6 gene and instead utilize RNaseIII for cleavage. Functional type II systems encode an extra small RNA that is complementary to the repeat sequence, known as a trans-activating crRNA (tracrRNA). Transcription of the tracrRNA and the primary CRISPR transcript results in base pairing and the formation of dsRNA at the repeat sequence, which is subsequently targeted by RNaseIII to produce crRNAs. Unlike the other two systems, the crRNA does not contain the full spacer, which is instead truncated at one end.
CrRNAs associate with Cas proteins to form ribonucleoprotein complexes that recognize foreign nucleic acids. CrRNAs show no preference between the coding and non-coding strands, which is indicative of an RNA-guided DNA-targeting system. The type I-E complex (commonly referred to as Cascade) requires five Cas proteins bound to a single crRNA.
Interference
During the interference stage in type I systems, the PAM sequence is recognized on the crRNA-complementary strand and is required along with crRNA annealing. In type I systems correct base pairing between the crRNA and the protospacer signals a conformational change in Cascade that recruits Cas3 for DNA degradation.
Type II systems rely on a single multifunctional protein, Cas9, for the interference step. Cas9 requires both the crRNA and the tracrRNA to function and cleave DNA using its dual HNH and RuvC/RNaseH-like endonuclease domains. Basepairing between the PAM and the phage genome is required in type II systems. However, the PAM is recognized on the same strand as the crRNA (the opposite strand to type I systems).
Type III systems, like type I, require six or seven Cas proteins binding to crRNAs. The type III systems analysed from S. solfataricus and P. furiosus both target the mRNA of phages rather than the phage DNA genome, which may make these systems uniquely capable of targeting RNA-based phage genomes. Type III systems were also found to target DNA in addition to RNA using a different Cas protein in the complex, Cas10. The DNA cleavage was shown to be transcription dependent.
The mechanism for distinguishing self from foreign DNA during interference is built into the crRNAs and is therefore likely common to all three systems. Throughout the distinctive maturation process of each major type, all crRNAs contain a spacer sequence and some portion of the repeat at one or both ends. It is the partial repeat sequence that prevents the CRISPR-Cas system from targeting the chromosome as base pairing beyond the spacer sequence signals self and prevents DNA cleavage. RNA-guided CRISPR enzymes are classified as type V restriction enzymes.
Evolution
The cas genes in the adaptor and effector modules of the CRISPR-Cas system are believed to have evolved from two different ancestral modules. A transposon-like element called casposon encoding the Cas1-like integrase and potentially other components of the adaptation module was inserted next to the ancestral effector module, which likely functioned as an independent innate immune system. The highly conserved cas1 and cas2 genes of the adaptor module evolved from the ancestral module while a variety of class 1 effector cas genes evolved from the ancestral effector module. The evolution of these various class 1 effector module cas genes was guided by various mechanisms, such as duplication events. On the other hand, each type of class 2 effector module arose from subsequent independent insertions of mobile genetic elements. These mobile genetic elements took the place of the multiple gene effector modules to create single gene effector modules that produce large proteins which perform all the necessary tasks of the effector module. The spacer regions of CRISPR-Cas systems are taken directly from foreign mobile genetic elements and thus their long-term evolution is hard to trace. The non-random evolution of these spacer regions has been found to be highly dependent on the environment and the particular foreign mobile genetic elements it contains.
CRISPR-Cas can immunize bacteria against certain phages and thus halt transmission. For this reason, Koonin described CRISPR-Cas as a Lamarckian inheritance mechanism. However, this was disputed by a critic who noted, "We should remember [Lamarck] for the good he contributed to science, not for things that resemble his theory only superficially. Indeed, thinking of CRISPR and other phenomena as Lamarckian only obscures the simple and elegant way evolution really works". But as more recent studies have been conducted, it has become apparent that the acquired spacer regions of CRISPR-Cas systems are indeed a form of Lamarckian evolution because they are genetic mutations that are acquired and then passed on. On the other hand, the evolution of the Cas gene machinery that facilitates the system evolves through classic Darwinian evolution.
Coevolution
Analysis of CRISPR sequences revealed coevolution of host and viral genomes.
The basic model of CRISPR evolution is newly incorporated spacers driving phages to mutate their genomes to avoid the bacterial immune response, creating diversity in both the phage and host populations. To resist a phage infection, the sequence of the CRISPR spacer must correspond perfectly to the sequence of the target phage gene. Phages can continue to infect their hosts given point mutations in the spacer. Similar stringency is required in the PAM, or the bacterial strain remains phage sensitive.
Rates
A study of 124 S. thermophilus strains showed that 26% of all spacers were unique and that different CRISPR loci showed different rates of spacer acquisition. Some CRISPR loci evolve more rapidly than others, which allowed the strains' phylogenetic relationships to be determined. A comparative genomic analysis showed that E. coli and S. enterica evolve much more slowly than S. thermophilus. The latter's strains that diverged 250,000 years ago still contained the same spacer complement.
Metagenomic analysis of two acid-mine-drainage biofilms showed that one of the analyzed CRISPRs contained extensive deletions and spacer additions versus the other biofilm, suggesting a higher phage activity/prevalence in one community than the other. In the oral cavity, a temporal study determined that 7–22% of spacers were shared over 17 months within an individual while less than 2% were shared across individuals.
From the same environment, a single strain was tracked using PCR primers specific to its CRISPR system. Broad-level results of spacer presence/absence showed significant diversity. However, this CRISPR added three spacers over 17 months, suggesting that even in an environment with significant CRISPR diversity some loci evolve slowly.
CRISPRs were analysed from the metagenomes produced for the Human Microbiome Project. Although most were body-site specific, some within a body site are widely shared among individuals. One of these loci originated from streptococcal species and contained ≈15,000 spacers, 50% of which were unique. Similar to the targeted studies of the oral cavity, some showed little evolution over time.
CRISPR evolution was studied in chemostats using S. thermophilus to directly examine spacer acquisition rates. In one week, S. thermophilus strains acquired up to three spacers when challenged with a single phage. During the same interval, the phage developed single-nucleotide polymorphisms that became fixed in the population, suggesting that targeting had prevented phage replication absent these mutations.
Another S. thermophilus experiment showed that phages can infect and replicate in hosts that have only one targeting spacer. Yet another showed that sensitive hosts can exist in environments with high-phage titres. The chemostat and observational studies suggest many nuances to CRISPR and phage (co)evolution.
Identification
CRISPRs are widely distributed among bacteria and archaea and show some sequence similarities. Their most notable characteristic is their repeating spacers and direct repeats. This characteristic makes CRISPRs easily identifiable in long sequences of DNA, since the number of repeats decreases the likelihood of a false positive match.
Analysis of CRISPRs in metagenomic data is more challenging, as CRISPR loci do not typically assemble, due to their repetitive nature or through strain variation, which confuses assembly algorithms. Where many reference genomes are available, polymerase chain reaction (PCR) can be used to amplify CRISPR arrays and analyse spacer content. However, this approach yields information only for specifically targeted CRISPRs and for organisms with sufficient representation in public databases to design reliable PCR primers. Degenerate repeat-specific primers can be used to amplify CRISPR spacers directly from environmental samples; amplicons containing two or three spacers can then be computationally assembled to reconstruct long CRISPR arrays.
The alternative is to extract and reconstruct CRISPR arrays from shotgun metagenomic data. This is computationally more difficult, particularly with second generation sequencing technologies (e.g. 454, Illumina), as the short read lengths prevent more than two or three repeat units appearing in a single read. CRISPR identification in raw reads has been achieved using purely de novo identification or by using direct repeat sequences in partially assembled CRISPR arrays from contigs (overlapping DNA segments that together represent a consensus region of DNA) and direct repeat sequences from published genomes as a hook for identifying direct repeats in individual reads.
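As a toy illustration of the repeat-spacer signature that identification tools exploit, the following Python sketch reports exact k-mers that recur with spacer-sized gaps. Real CRISPR finders tolerate mismatches, variable repeat lengths, and fragmented reads; the parameter defaults here simply echo the repeat and spacer size ranges quoted earlier, and the synthetic test array is an invented example, not data from any genome.
```python
from collections import defaultdict

def find_candidate_crispr(seq, repeat_len=25, min_spacer=21, max_spacer=72, min_units=3):
    """Naive de novo scan: report k-mers that recur with spacer-sized gaps, a hallmark of repeat-spacer arrays."""
    positions = defaultdict(list)
    for i in range(len(seq) - repeat_len + 1):
        positions[seq[i:i + repeat_len]].append(i)
    candidates = []
    for kmer, hits in positions.items():
        if len(hits) < min_units:
            continue
        gaps = [b - a - repeat_len for a, b in zip(hits, hits[1:])]  # lengths of would-be spacers
        if all(min_spacer <= g <= max_spacer for g in gaps):
            candidates.append({"repeat": kmer, "starts": hits, "spacer_lengths": gaps})
    return candidates

if __name__ == "__main__":
    repeat = "GTTTTAGAGCTATGCTGTTTTGAAT"      # 25 bp, arbitrary
    spacer1 = "A" * 15 + "C" * 15           # 30 bp spacers with distinct sequences
    spacer2 = "G" * 15 + "T" * 15
    array = repeat + spacer1 + repeat + spacer2 + repeat
    print(find_candidate_crispr(array))
```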
Use by phages
Another way for bacteria to defend against phage infection is by having chromosomal islands. A subtype of chromosomal islands called phage-inducible chromosomal island (PICI) is excised from a bacterial chromosome upon phage infection and can inhibit phage replication. PICIs are induced, excised, replicated, and finally packaged into small capsids by certain staphylococcal temperate phages. PICIs use several mechanisms to block phage reproduction. In the first mechanism, PICI-encoded Ppi differentially blocks phage maturation by binding or interacting specifically with phage TerS, hence blocking phage TerS/TerL complex formation responsible for phage DNA packaging. In the second mechanism, PICI CpmAB redirects the phage capsid morphogenetic protein so that 95% of capsids are SaPI-sized; phages can package only one-third of their genome in these small capsids and hence become nonviable. The third mechanism involves two proteins, PtiA and PtiB, that target LtrC, which is responsible for the production of virion and lysis proteins. This interference mechanism is modulated by a modulatory protein, PtiM, which binds to one of the interference-mediating proteins, PtiA, and hence achieves the required level of interference.
One study showed that lytic ICP1 phage, which specifically targets Vibrio cholerae serogroup O1, has acquired a CRISPR-Cas system that targets a V. cholerae PICI-like element. The system has 2 CRISPR loci and 9 Cas genes. It seems to be homologous to the I-F system found in Yersinia pestis. Moreover, like the bacterial CRISPR-Cas system, ICP1 CRISPR-Cas can acquire new sequences, which allows phage and host to co-evolve.
Certain archaeal viruses were shown to carry mini-CRISPR arrays containing one or two spacers. It has been shown that spacers within the virus-borne CRISPR arrays target other viruses and plasmids, suggesting that mini-CRISPR arrays represent a mechanism of heterotypic superinfection exclusion and participate in interviral conflicts.
Applications
CRISPR gene editing is a revolutionary technology that allows for precise, targeted modifications to the DNA of living organisms. Developed from a natural defense mechanism found in bacteria, CRISPR-Cas9 is the most commonly used system; it allows "cutting" of DNA at specific locations so that genetic material can be deleted, modified, or inserted. This technology has transformed fields such as genetics, medicine, and agriculture, offering potential treatments for genetic disorders, advancements in crop engineering, and research into the fundamental workings of life. However, its ethical implications and potential unintended consequences have sparked significant debate.
See also
CRISPR activation
Anti-CRISPR
CRISPR/Cas Tools
CRISPR gene editing
The CRISPR Journal
"Designer baby"
DRACO
Gene knockout
Genome-wide CRISPR-Cas9 knockout screens
Glossary of genetics
Human germline engineering
Human Nature (2019 documentary film)
MAGESTIC
New eugenics
Prime editing
RNAi
SiRNA
Surveyor nuclease assay
Synthetic biology
Zinc finger
Notes
References
Further reading
External links
Protein Data Bank
1987 in biotechnology
2015 in biotechnology
Biological engineering
Biotechnology
Genetic engineering
Genome editing
Jennifer Doudna
Molecular biology
Non-coding RNA
Repetitive DNA sequences
Immune system
Prokaryote genes | CRISPR | [
"Chemistry",
"Engineering",
"Biology"
] | 7,130 | [
"Genetics techniques",
"Biological engineering",
"Prokaryote genes",
"Genome editing",
"Immune system",
"Prokaryotes",
"Genetic engineering",
"Biotechnology",
"Organ systems",
"Molecular genetics",
"Repetitive DNA sequences",
"nan",
"Molecular biology",
"Biochemistry"
] |
2,146,043 | https://en.wikipedia.org/wiki/Thermochromic%20ink | Thermochromic ink (also called thermochromatic ink) is a type of dye that changes color in response to a change in temperature. It was first used in the 1970s in novelty toys like mood rings, but has found some practical uses in things such as thermometers, product packaging, and pens. The ink has also found applications within the medical field for specific medical simulations in medical training. Thermochromic ink can also turn transparent when heat is applied; an example of this type of ink can be found on the corners of an examination mark sheet to prove that the sheet has not been edited or photocopied.
Composition
There are two main variants of thermochromic ink, one composed of leuco dyes and one composed of liquid crystals. For both types of ink, the chemicals need to be contained within capsules around 3 to 5 microns long. This protects the dyes and crystals from mixing with other chemicals that might affect the functionality of the ink.
Leuco dyes
The leuco dye variant is typically composed of leuco dyes with additional chemicals to add different desired effects. It is the most commonly used type because it is easier to manufacture. They can be designed to react to changes in temperature that range from -15 °C to 60 °C. Most common applications of the ink have activation temperatures at -10 °C (cold), 31 °C (body temperature), or 43 °C (warm). At lower temperatures, the ink appears to be a certain color, and once the temperature increases, the ink becomes either translucent or lightly colored, allowing hidden patterns to be seen. This gives the effect of a change in color, and the process can also be reversed by lowering the temperature again.
Liquid crystals
Liquid crystals can change from liquid to solid in response to a change in temperature. At lower temperatures, the crystals are mostly solid and hardly reflect any light, causing the ink to appear black. As the temperature gradually increases, the crystals become more spaced out, causing light to reflect differently and changing the color of the crystals. The temperatures at which these crystals change their properties can range from -30 °C to 90 °C.
Applications
On June 20, 2017, the United States Postal Service released the first application of thermochromic ink to postage stamps in its Total Eclipse of the Sun Forever stamp to commemorate the solar eclipse of August 21, 2017. When pressed with a finger, body heat turns the black circle in the center of the stamp into an image of the full moon. The stamp image is a photo of a total solar eclipse seen in Jalu, Libya, on March 29, 2006. The photo was taken by retired NASA astrophysicist Fred Espenak, aka "Mr. Eclipse".
Medical uses
In medical training, thermochromic ink can be used to imitate human blood because it shares a similar color-changing property. It is currently being tested in medical simulations involving extracorporeal membrane oxygenation (ECMO). In these procedures, a change in the color of blood between dark and light red indicates blood oxygenation and blood deoxygenation, which describes the oxygen concentration levels within a person's blood sample. It is important to accurately identify this change in order to safely and correctly operate the ECMO machines. This has led to simulation-based training (SBT), which allows medical students to run simulations that mimic real ECMO machines before using them in serious situations. By using thermochromic ink in these simulations, the color-changing effect can be realistically reproduced and observed without using real human blood or other costly methods.
Artificial blood or animal blood is typically used in these simulations; however, there are some advantages in using thermochromic ink as an alternative. It can be reused for multiple simulations with minimal variance in the outcomes and it is more cost effective. There are limitations to using this as the ink does not share any other properties with blood, so its only practical use is to observe the change in color of blood.
Product packaging
Product packaging is an important aspect of maintaining the quality of consumer goods. Modern-day packaging is split into two categories: active packaging and smart packaging. Thermochromic ink has found use in smart packaging, which is the aspect of packaging that deals with monitoring the condition of the products. Since most consumer goods are affected by changes in temperature, using thermochromic ink as an indicator of those temperature changes allows consumers to recognize when the quality of a product has changed. It can also be used to tell consumers the right temperature at which to consume the product.
Erasable ink pens
In 2006, Pilot Corporation of Japan developed a pen with erasable ink that utilized thermochromic ink. It was composed of a solvent, a colorant, and a resin film-forming agent. At temperatures below 65 °C, the ink stayed in a colored state. Once temperatures went above 65 °C, the ink began to melt and became colorless, creating the effect of erasable ink. The ink could return to its colored state by cooling it down to below -10 °C.
See also
Thermochromism
Security printing
Active packaging
References
Thermochromism
Dyes
Spectroscopy
Materials science | Thermochromic ink | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,066 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Applied and interdisciplinary physics",
"Instrumental analysis",
"Chromism",
"Materials science",
"nan",
"Smart materials",
"Spectroscopy",
"Thermochromism"
] |
2,147,274 | https://en.wikipedia.org/wiki/Supercritical%20fluid%20extraction | Supercritical fluid extraction (SFE) is the process of separating one component (the extractant) from another (the matrix) using supercritical fluids as the extracting solvent. Extraction is usually from a solid matrix, but can also be from liquids. SFE can be used as a sample preparation step for analytical purposes, or on a larger scale to either strip unwanted material from a product (e.g. decaffeination) or collect a desired product (e.g. essential oils). These essential oils can include limonene and other straight solvents. Carbon dioxide (CO2) is the most used supercritical fluid, sometimes modified by co-solvents such as ethanol or methanol. Extraction conditions for supercritical carbon dioxide are above the critical temperature of 31 °C and critical pressure of 74 bar. Addition of modifiers may slightly alter this. The discussion below will mainly refer to extraction with CO2, except where specified.
Advantages
Selectivity
The properties of the supercritical fluid can be altered by varying the pressure and temperature, allowing selective extraction. For example, volatile oils can be extracted from a plant with low pressures (100 bar), whereas liquid extraction would also remove lipids. Lipids can be removed using pure CO2 at higher pressures, and then phospholipids can be removed by adding ethanol to the solvent. The same principle can be used to extract polyphenols and unsaturated fatty acids separately from wine wastes.
Speed
Extraction is a diffusion-based process, in which the solvent is required to diffuse into the matrix and the extracted material to diffuse out of the matrix into the solvent. Diffusivities are much faster in supercritical fluids than in liquids, and therefore extraction can occur faster. In addition, due to the lack of surface tension and negligible viscosities compared to liquids, the solvent can penetrate more into the matrix inaccessible to liquids. An extraction using an organic liquid may take several hours, whereas supercritical fluid extraction can be completed in 10 to 60 minutes.
Limitations
The requirement for high pressures increases the cost compared to conventional liquid extraction, so SFE will only be used where there are significant advantages. Carbon dioxide itself is non-polar, and has somewhat limited dissolving power, so cannot always be used as a solvent on its own, particularly for polar solutes. The use of modifiers increases the range of materials which can be extracted. Food grade modifiers such as ethanol can often be used, and can also help in the collection of the extracted material, but reduces some of the benefits of using a solvent which is gaseous at room temperature.
Procedure
The system must contain a pump for the CO2, a pressure cell to contain the sample, a means of maintaining pressure in the system and a collecting vessel. The liquid is pumped to a heating zone, where it is heated to supercritical conditions. It then passes into the extraction vessel, where it rapidly diffuses into the solid matrix and dissolves the material to be extracted. The dissolved material is swept from the extraction cell into a separator at lower pressure, and the extracted material settles out. The CO2 can then be cooled, re-compressed and recycled, or discharged to atmosphere.
Pumps
Carbon dioxide (CO2) is usually pumped as a liquid, typically below 5 °C (41 °F) and at a pressure of about 50 bar. The solvent is pumped as a liquid as it is then almost incompressible; if it were pumped as a supercritical fluid, much of the pump stroke would be "used up" in compressing the fluid, rather than pumping it. For small scale extractions (up to a few grams / minute), reciprocating pumps or syringe pumps are often used. For larger scale extractions, diaphragm pumps are most common. The pump heads will usually require cooling, and the CO2 will also be cooled before entering the pump.
Pressure vessels
Pressure vessels can range from simple tubing to more sophisticated purpose built vessels with quick release fittings. The pressure requirement is at least 74 bar, and most extractions are conducted at under 350 bar. However, sometimes higher pressures will be needed, such as extraction of vegetable oils, where pressures of 800 bar are sometimes required for complete miscibility of the two phases.
The vessel must be equipped with a means of heating. It can be placed inside an oven for small vessels, or an oil or electrically heated jacket for larger vessels. Care must be taken if rubber seals are used on the vessel, as the supercritical carbon dioxide may dissolve in the rubber, causing swelling, and the rubber will rupture on depressurization.
Pressure maintenance
The pressure in the system must be maintained from the pump right through the pressure vessel. In smaller systems (up to about 10 mL / min) a simple restrictor can be used. This can be either a capillary tube cut to length, or a needle valve which can be adjusted to maintain pressure at different flow rates. In larger systems a back pressure regulator will be used, which maintains pressure upstream of the regulator by means of a spring, compressed air, or electronically driven valve. Whichever is used, heating must be supplied, as the adiabatic expansion of the CO2 results in significant cooling. This is problematic if water or other extracted material is present in the sample, as this may freeze in the restrictor or valve and cause blockages.
Collection
The supercritical solvent is passed into a vessel at lower pressure than the extraction vessel. The density, and hence dissolving power, of supercritical fluids varies sharply with pressure, and hence the solubility in the lower density CO2 is much lower, and the material precipitates for collection. It is possible to fractionate the dissolved material using a series of vessels at reducing pressure. The CO2 can be recycled or depressurized to atmospheric pressure and vented. For analytical SFE, the pressure is usually dropped to atmospheric, and the now gaseous carbon dioxide bubbled through a solvent to trap the precipitated components.
Heating and cooling
This is an important aspect. The fluid is cooled before pumping to maintain liquid conditions, then heated after pressurization. As the fluid is expanded into the separator, heat must be provided to prevent excessive cooling. For small scale extractions, such as for analytical purposes, it is usually sufficient to pre-heat the fluid in a length of tubing inside the oven containing the extraction cell. The restrictor can be electrically heated, or even heated with a hairdryer. For larger systems, the energy required during each stage of the process can be calculated using the thermodynamic properties of the supercritical fluid.
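As a rough illustration of how such an energy balance might be set up, the sketch below estimates the heater duty from the enthalpy change of CO2 between the chilled pump outlet and extraction conditions. The use of the CoolProp property library, and all of the flow and state values, are assumptions chosen for illustration; the document does not prescribe any particular software or operating point.
```python
# Estimate the heater duty needed to bring pumped liquid CO2 up to extraction
# conditions, using the CoolProp property library (one possible choice of
# equation-of-state package). All numbers below are illustrative only.
from CoolProp.CoolProp import PropsSI

mass_flow = 0.010          # kg/s of CO2 (assumed)
p_extract = 300e5          # 300 bar extraction pressure, in Pa (assumed)
t_pump_out = 278.15        # ~5 degC liquid leaving the chilled pump head, in K
t_extract = 323.15         # 50 degC extraction temperature, in K

h_in = PropsSI("H", "T", t_pump_out, "P", p_extract, "CO2")   # J/kg after pressurization
h_out = PropsSI("H", "T", t_extract, "P", p_extract, "CO2")   # J/kg at extraction conditions

heater_duty_w = mass_flow * (h_out - h_in)
print(f"Approximate heater duty: {heater_duty_w:.0f} W")
```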
Simple model of SFE
There are two essential steps to SFE, transport (by diffusion or otherwise) of the solid particles to the surface, and dissolution in the supercritical fluid. Other factors, such as diffusion into the particle by the SF and reversible release such as desorption from an active site are sometimes significant, but not dealt with in detail here. Figure 2 shows the stages during extraction from a spherical particle where at the start of the extraction the level of extractant is equal across the whole sphere (Fig. 2a). As extraction commences, material is initially extracted from the edge of the sphere, and the concentration in the center is unchanged (Fig 2b). As the extraction progresses, the concentration in the center of the sphere drops as the extractant diffuses towards the edge of the sphere (Figure 2c).
The relative rates of diffusion and dissolution are illustrated by two extreme cases in Figure 3. Figure 3a shows a case where dissolution is fast relative to diffusion. The material is carried away from the edge faster than it can diffuse from the center, so the concentration at the edge drops to zero. The material is carried away as fast as it arrives at the surface, and the extraction is completely diffusion limited. Here the rate of extraction can be increased by increasing diffusion rate, for example raising the temperature, but not by increasing the flow rate of the solvent. Figure 3b shows a case where solubility is low relative to diffusion. The extractant is able to diffuse to the edge faster than it can be carried away by the solvent, and the concentration profile is flat. In this case, the extraction rate can be increased by increasing the rate of dissolution, for example by increasing flow rate of the solvent.
The extraction curve of % recovery against time can be used to elucidate the type of extraction occurring. Figure 4(a) shows a typical diffusion controlled curve. The extraction is initially rapid, until the concentration at the surface drops to zero, and the rate then becomes much slower. The % extracted eventually approaches 100%. Figure 4(b) shows a curve for a solubility limited extraction. The extraction rate is almost constant, and only flattens off towards the end of the extraction. Figure 4(c) shows a curve where there are significant matrix effects, where there is some sort of reversible interaction with the matrix, such as desorption from an active site. The recovery flattens off, and if the 100% value is not known, then it is hard to tell that extraction is less than complete.
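A minimal numerical toy model of the two limiting curve shapes described above might look like the following. The exponential approach used for the diffusion-limited case and the constant-rate ramp used for the solubility-limited case, along with all parameter values, are illustrative assumptions rather than a physical extraction model.
```python
import numpy as np

def recovery_curves(t, k_diff=0.15, rate_sol=1.2, total=100.0):
    """Toy illustration of the two limiting extraction-curve shapes."""
    # Diffusion-limited: fast at first, then slows as the surface concentration falls.
    diffusion_limited = total * (1.0 - np.exp(-k_diff * t))
    # Solubility-limited: the solvent leaves saturated, so recovery rises at a
    # nearly constant rate until the extractable material runs out.
    solubility_limited = np.minimum(rate_sol * t, total)
    return diffusion_limited, solubility_limited

t = np.linspace(0, 120, 7)   # minutes
d, s = recovery_curves(t)
for ti, di, si in zip(t, d, s):
    print(f"t={ti:5.0f} min  diffusion-limited={di:5.1f}%  solubility-limited={si:5.1f}%")
```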
Optimization
The optimum will depend on the purpose of the extraction. For an analytical extraction to determine, say, antioxidant content of a polymer, then the essential factors are complete extraction in the shortest time. However, for production of an essential oil extract from a plant, then quantity of CO2 used will be a significant cost, and "complete" extraction not required, a yield of 70 - 80% perhaps being sufficient to provide economic returns. In another case, the selectivity may be more important, and a reduced rate of extraction will be preferable if it provides greater discrimination. Therefore, few comments can be made which are universally applicable. However, some general principles are outlined below.
Maximizing diffusion
This can be achieved by increasing the temperature, swelling the matrix, or reducing the particle size. Matrix swelling can sometimes be increased by increasing the pressure of the solvent, and by adding modifiers to the solvent. Some polymers and elastomers in particular are swelled dramatically by CO2, with diffusion being increased by several orders of magnitude in some cases.
Maximizing solubility
Generally, higher pressure will increase solubility. The effect of temperature is less certain, as close to the critical point, increasing the temperature causes decreases in density, and hence dissolving power. At pressures well above the critical pressure, solubility is likely to increase with temperature. Addition of low levels of modifiers (sometimes called entrainers), such as methanol and ethanol, can also significantly increase solubility, particularly of more polar compounds.
Optimizing flow rate
The flow rate of supercritical carbon dioxide should be measured in terms of mass flow rather than by volume because the density of the CO2 changes with temperature both before entering the pump heads and during compression. Coriolis flow meters are best used to achieve such flow confirmation. To maximize the rate of extraction, the flow rate should be high enough for the extraction to be completely diffusion limited (but this will be very wasteful of solvent). However, to minimize the amount of solvent used, the extraction should be completely solubility limited (which will take a very long time). Flow rate must therefore be determined depending on the competing factors of time and solvent costs, and also capital costs of pumps, heaters and heat exchangers. The optimum flow rate will probably be somewhere in the region where both solubility and diffusion are significant factors.
See also
Laboratory equipment
Steam distillation
Accelerated solvent extraction
References
Further reading
Industrial processes
Extraction (chemistry)
Microtechnology | Supercritical fluid extraction | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,416 | [
"Extraction (chemistry)",
"Materials science",
"Microtechnology",
"Separation processes"
] |
2,147,801 | https://en.wikipedia.org/wiki/Gell-Mann%E2%80%93Nishijima%20formula | The Gell-Mann–Nishijima formula (sometimes known as the NNG formula) relates the baryon number B, the strangeness S, the isospin I3 of quarks and hadrons to the electric charge Q. It was originally given by Kazuhiko Nishijima and Tadao Nakano in 1953, and led to the proposal of strangeness as a concept, which Nishijima originally called "eta-charge" after the eta meson. Murray Gell-Mann proposed the formula independently in 1956. The modern version of the formula relates all flavour quantum numbers (isospin up and down, strangeness, charm, bottomness, and topness) with the baryon number and the electric charge.
Formula
The original form of the Gell-Mann–Nishijima formula is:
Q = I3 + (B + S)/2
This equation was originally based on empirical experiments. It is now understood as a result of the quark model. In particular, the electric charge Q of a quark or hadron particle is related to its isospin I3 and its hypercharge Y via the relation:
Q = I3 + Y/2, where Y = B + S
Since the discovery of charm, top, and bottom quark flavors, this formula has been generalized. It now takes the form:
Q = I3 + (B + S + C + B′ + T)/2
where Q is the charge, I3 the 3rd-component of the isospin, B the baryon number, and S, C, B′, T are the strangeness, charm, bottomness and topness numbers.
Expressed in terms of quark content, these would become:
Q = (2/3)(nu + nc + nt) − (1/3)(nd + ns + nb)
where nq denotes the number of quarks of flavor q minus the number of the corresponding antiquarks in the particle.
By convention, the flavor quantum numbers (strangeness, charm, bottomness, and topness) carry the same sign as the electric charge of the particle. So, since the strange and bottom quarks have a negative charge, they have flavor quantum numbers equal to −1. And since the charm and top quarks have positive electric charge, their flavor quantum numbers are +1.
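A small sketch can verify the generalized formula numerically by summing quark quantum numbers for a few hadrons and comparing the directly summed charge with the value given by the formula. The quantum-number table and the example particles below are standard assignments added here for illustration; they are not part of the original text.
```python
# Quark quantum numbers: (charge, I3, B, S, C, B', T) per quark.
QUARKS = {
    "u": ( 2/3,  1/2, 1/3,  0, 0,  0, 0),
    "d": (-1/3, -1/2, 1/3,  0, 0,  0, 0),
    "s": (-1/3,  0,   1/3, -1, 0,  0, 0),
    "c": ( 2/3,  0,   1/3,  0, 1,  0, 0),
    "b": (-1/3,  0,   1/3,  0, 0, -1, 0),
    "t": ( 2/3,  0,   1/3,  0, 0,  0, 1),
}

def check(name, quarks, antiquarks=()):
    """Sum quark quantum numbers and compare the directly summed charge with the formula."""
    totals = [0.0] * 7
    for q in quarks:
        totals = [a + b for a, b in zip(totals, QUARKS[q])]
    for q in antiquarks:                         # antiquarks carry the opposite quantum numbers
        totals = [a - b for a, b in zip(totals, QUARKS[q])]
    q_direct, i3, bnum, s, c, bp, t = totals
    q_formula = i3 + (bnum + s + c + bp + t) / 2
    print(f"{name:8s} Q(direct)={q_direct:+.2f}  Q(formula)={q_formula:+.2f}")

check("proton", "uud")
check("neutron", "udd")
check("Lambda", "uds")
check("K+", "u", antiquarks="s")   # u s-bar meson
```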
From a quantum chromodynamics point of view, the Gell-Mann–Nishijima formula and its generalized version can be derived using an approximate SU(3) flavour symmetry because the charges can be defined using the corresponding conserved Noether currents.
Weak interaction analog
In 1961 Glashow proposed that a similar formula would also apply to the weak interaction:
Q = T3 + YW/2
Here the charge Q is related to the projection T3 of the weak isospin and the weak hypercharge YW.
References
Further reading
Standard Model | Gell-Mann–Nishijima formula | [
"Physics"
] | 510 | [
"Standard Model",
"Particle physics"
] |
2,147,961 | https://en.wikipedia.org/wiki/Invariants%20of%20tensors | In mathematics, in the fields of multilinear algebra and representation theory, the principal invariants of the second rank tensor A are the coefficients of the characteristic polynomial
p(λ) = det(A − λ I),
where I is the identity operator and the roots λ of the polynomial are the eigenvalues of A.
More broadly, any scalar-valued function f(A) is an invariant of A if and only if f(Q A Q^T) = f(A) for all orthogonal Q. This means that a formula expressing an invariant in terms of components, A_ij, will give the same result for all Cartesian bases. For example, even though individual diagonal components of A will change with a change in basis, the sum of diagonal components will not change.
Properties
The principal invariants do not change with rotations of the coordinate system (they are objective, or in more modern terminology, satisfy the principle of material frame-indifference) and any function of the principal invariants is also objective.
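A quick numerical check of this objectivity property, assuming a Python/NumPy environment, is to rotate a tensor with a random orthogonal matrix and confirm that scalar invariants such as the trace and determinant are unchanged; the random seed and tensor are arbitrary.
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# Build a random orthogonal matrix Q from a QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

A_rot = Q @ A @ Q.T   # the same tensor expressed in a rotated Cartesian basis

print(np.trace(A), np.trace(A_rot))            # equal up to round-off
print(np.linalg.det(A), np.linalg.det(A_rot))  # equal up to round-off
```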
Calculation of the invariants of rank two tensors
In a majority of engineering applications, the principal invariants of (rank two) tensors of dimension three are sought, such as those for the right Cauchy–Green deformation tensor C, which has the eigenvalues λ1^2, λ2^2, and λ3^2, where λ1, λ2, and λ3 are the principal stretches, i.e. the eigenvalues of the right stretch tensor U.
Principal invariants
For such tensors, the principal invariants are given by:
I1 = tr(A)
I2 = (1/2)[(tr A)^2 − tr(A^2)]
I3 = det(A)
For symmetric tensors, these definitions are reduced.
The correspondence between the principal invariants and the characteristic polynomial of a tensor, in tandem with the Cayley–Hamilton theorem, reveals that
A^3 − I1 A^2 + I2 A − I3 1 = 0
where 1 is the second-order identity tensor.
Main invariants
In addition to the principal invariants listed above, it is also possible to introduce the notion of main invariants, which are functions of the principal invariants above. These are the coefficients of the characteristic polynomial of the deviator of the tensor, which is traceless. The separation of a tensor into a component that is a multiple of the identity and a traceless component is standard in hydrodynamics, where the former is called isotropic, providing the modified pressure, and the latter is called deviatoric, providing shear effects.
Mixed invariants
Furthermore, mixed invariants between pairs of rank two tensors may also be defined.
Calculation of the invariants of order two tensors of higher dimension
These may be extracted by evaluating the characteristic polynomial directly, using the Faddeev-LeVerrier algorithm for example.
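As one possible sketch of this approach, the following Python function applies the Faddeev-LeVerrier recursion to obtain the characteristic-polynomial coefficients of an n×n matrix, from which the principal invariants of a 3×3 tensor follow by a sign change; the example matrix is arbitrary.
```python
import numpy as np

def char_poly_coefficients(A):
    """Faddeev-LeVerrier recursion: coefficients c of det(lambda*I - A) = lambda^n + c[0]*lambda^(n-1) + ... + c[n-1]."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    c_prev = 1.0
    coeffs = []
    for k in range(1, n + 1):
        M = A @ M + c_prev * np.eye(n)
        c_prev = -np.trace(A @ M) / k
        coeffs.append(c_prev)
    return coeffs

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
c = char_poly_coefficients(A)
# For n = 3 the principal invariants follow directly from the coefficients:
I1, I2, I3 = -c[0], c[1], -c[2]
print(I1, np.trace(A))        # I1 equals tr(A)
print(I3, np.linalg.det(A))   # I3 equals det(A)
```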
Calculation of the invariants of higher order tensors
The invariants of rank three, four, and higher order tensors may also be determined.
Engineering applications
A scalar function that depends entirely on the principal invariants of a tensor is objective, i.e., independent of rotations of the coordinate system. This property is commonly used in formulating closed-form expressions for the strain energy density, or Helmholtz free energy, of a nonlinear material possessing isotropic symmetry.
This technique was first introduced into isotropic turbulence by Howard P. Robertson in 1940 where he was able to derive Kármán–Howarth equation from the invariant principle. George Batchelor and Subrahmanyan Chandrasekhar exploited this technique and developed an extended treatment for axisymmetric turbulence.
Invariants of non-symmetric tensors
A real tensor in 3D (i.e., one with a 3x3 component matrix) has as many as six independent invariants, three being the invariants of its symmetric part and three characterizing the orientation of the axial vector of the skew-symmetric part relative to the principal directions of the symmetric part. For example, given the Cartesian components of such a tensor T,
the first step would be to evaluate the axial vector associated with the skew-symmetric part (T − T^T)/2; for a skew tensor W, the axial vector has components (W32, W13, W21).
The next step finds the principal values of the symmetric part (T + T^T)/2. Even though the eigenvalues of a real non-symmetric tensor might be complex, the eigenvalues of its symmetric part will always be real and therefore can be ordered from largest to smallest. The corresponding orthonormal principal basis directions can be assigned senses to ensure that the axial vector points within the first octant. With respect to that special basis, the diagonal components of T are the ordered principal values of its symmetric part.
The first three invariants of T are these diagonal components (equal to the ordered principal values of the tensor's symmetric part). The remaining three invariants are the axial vector's components in this basis. Note: the magnitude of the axial vector is the sole invariant of the skew part of T, whereas these distinct three invariants characterize (in a sense) the "alignment" between the symmetric and skew parts of T. Incidentally, it is a myth that a tensor is positive definite if its eigenvalues are positive. Instead, it is positive definite if and only if the eigenvalues of its symmetric part are positive.
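A minimal NumPy sketch of the procedure described above, ignoring the sign conventions used to orient the principal basis, might look like this; the example tensor is arbitrary.
```python
import numpy as np

def six_invariants(T):
    """Invariants of a real 3x3 tensor: principal values of the symmetric part plus
    the axial-vector components of the skew part expressed in that principal basis."""
    S = 0.5 * (T + T.T)                         # symmetric part
    W = 0.5 * (T - T.T)                         # skew-symmetric part
    w = np.array([W[2, 1], W[0, 2], W[1, 0]])   # axial vector of W
    vals, vecs = np.linalg.eigh(S)              # real eigenvalues, ascending order
    order = np.argsort(vals)[::-1]              # reorder principal values largest to smallest
    vals, vecs = vals[order], vecs[:, order]
    # (Choosing senses of the basis vectors so that w points into the first
    #  octant is omitted in this sketch.)
    w_principal = vecs.T @ w                    # axial vector in the principal basis of S
    return vals, w_principal

T = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 2.0]])
print(six_invariants(T))
```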
See also
Symmetric polynomial
Elementary symmetric polynomial
Newton's identities
Invariant theory
References
Tensors
Invariant theory
Linear algebra | Invariants of tensors | [
"Physics",
"Mathematics",
"Engineering"
] | 996 | [
"Symmetry",
"Tensors",
"Group actions",
"Invariant theory",
"Linear algebra",
"Algebra"
] |
2,148,329 | https://en.wikipedia.org/wiki/Mathematical%20universe%20hypothesis | In physics and cosmology, the mathematical universe hypothesis (MUH), also known as the ultimate ensemble theory, is a speculative "theory of everything" (TOE) proposed by cosmologist Max Tegmark. According to the hypothesis, the universe is a mathematical object in and of itself. Tegmark extends this idea to hypothesize that all mathematical objects exist, which he describes as a form of Platonism or Modal realism.
The hypothesis has proved controversial. Jürgen Schmidhuber argues that it is not possible to assign an equal weight or probability to all mathematical objects a priori due to there being infinitely many of them. Physicists Piet Hut and Mark Alford have suggested that the idea is incompatible with Gödel's first incompleteness theorem.
Tegmark replies that not only is the universe mathematical, but it is also computable.
Description
Tegmark's MUH is the hypothesis that our external physical reality is a mathematical structure. That is, the physical universe is not merely described by mathematics, but is mathematics — specifically, a mathematical structure. Mathematical existence equals physical existence, and all structures that exist mathematically exist physically as well. Observers, including humans, are "self-aware substructures (SASs)". In any mathematical structure complex enough to contain such substructures, they "will subjectively perceive themselves as existing in a physically 'real' world".
The theory can be considered a form of Pythagoreanism or Platonism in that it proposes the existence of mathematical entities; a form of mathematicism in that it denies that anything exists except mathematical objects; and a formal expression of ontic structural realism.
Tegmark claims that the hypothesis has no free parameters and is not observationally ruled out. Thus, he reasons, it is preferred over other theories-of-everything by Occam's Razor. Tegmark also considers augmenting the MUH with a second assumption, the computable universe hypothesis (CUH), which says that the mathematical structure that is our external physical reality is defined by computable functions.
The MUH is related to Tegmark's categorization of four levels of the multiverse. This categorization posits a nested hierarchy of increasing diversity, with worlds corresponding to different sets of initial conditions (level 1), physical constants (level 2), quantum branches (level 3), and altogether different equations or mathematical structures (level 4).
Criticisms and responses
Andreas Albrecht of Imperial College in London called it a "provocative" solution to one of the central problems facing physics. Although he "wouldn't dare" go so far as to say he believes it, he noted that "it's actually quite difficult to construct a theory where everything we see is all there is".
Definition of the ensemble
Jürgen Schmidhuber argues that "Although Tegmark suggests that '... all mathematical structures are a priori given equal statistical weight,' there is no way of assigning equal non-vanishing probability to all (infinitely many) mathematical structures." Schmidhuber puts forward a more restricted ensemble which admits only universe representations describable by constructive mathematics, that is, computer programs; e.g., the Global Digital Mathematics Library and Digital Library of Mathematical Functions, linked open data representations of formalized fundamental theorems intended to serve as building blocks for additional mathematical results. He explicitly includes universe representations describable by non-halting programs whose output bits converge after finite time, although the convergence time itself may not be predictable by a halting program, due to the undecidability of the halting problem.
In response, Tegmark notes that a constructive mathematics formalized measure of free parameter variations of physical dimensions, constants, and laws over all universes has not yet been constructed for the string theory landscape either, so this should not be regarded as a "show-stopper".
Consistency with Gödel's theorem
It has also been suggested that the MUH is inconsistent with Gödel's incompleteness theorem. In a three-way debate between Tegmark and fellow physicists Piet Hut and Mark Alford, the "secularist" (Alford) states that "the methods allowed by formalists cannot prove all the theorems in a sufficiently powerful system... The idea that math is 'out there' is incompatible with the idea that it consists of formal systems."
Tegmark's response is to offer a new hypothesis "that only Gödel-complete (fully decidable) mathematical structures have physical existence. This drastically shrinks the Level IV multiverse, essentially placing an upper limit on complexity, and may have the attractive side effect of explaining the relative simplicity of our universe." Tegmark goes on to note that although conventional theories in physics are Gödel-undecidable, the actual mathematical structure describing our world could still be Gödel-complete, and "could in principle contain observers capable of thinking about Gödel-incomplete mathematics, just as finite-state digital computers can prove certain theorems about Gödel-incomplete formal systems like Peano arithmetic." He later gives a more detailed response, proposing as an alternative to MUH the more restricted "Computable Universe Hypothesis" (CUH) which only includes mathematical structures that are simple enough that Gödel's theorem does not require them to contain any undecidable or uncomputable theorems. Tegmark admits that this approach faces "serious challenges", including (a) it excludes much of the mathematical landscape; (b) the measure on the space of allowed theories may itself be uncomputable; and (c) "virtually all historically successful theories of physics violate the CUH".
Observability
Stoeger, Ellis, and Kircher note that in a true multiverse theory, "the universes are then completely disjoint and nothing that happens in any one of them is causally linked to what happens in any other one. This lack of any causal connection in such multiverses really places them beyond any scientific support". Ellis specifically criticizes the MUH, stating that an infinite ensemble of completely disconnected universes is "completely untestable, despite hopeful remarks sometimes made, see, e.g., Tegmark (1998)." Tegmark maintains that MUH is testable, stating that it predicts (a) that "physics research will uncover mathematical regularities in nature", and (b) by assuming that we occupy a typical member of the multiverse of mathematical structures, one could "start testing multiverse predictions by assessing how typical our universe is".
Plausibility of radical Platonism
The MUH is based on the radical Platonist view that math is an external reality. However, Jannes argues that "mathematics is at least in part a human construction", on the basis that if it is an external reality, then it should be found in some other animals as well: "Tegmark argues that, if we want to give a complete description of reality, then we will need a language independent of us humans, understandable for non-human sentient entities, such as aliens and future supercomputers". Brian Greene argues similarly: "The deepest description of the universe should not require concepts whose meaning relies on human experience or interpretation. Reality transcends our existence and so shouldn't, in any fundamental way, depend on ideas of our making."
However, there are many non-human entities, plenty of which are intelligent, and many of which can apprehend, memorise, compare and even approximately add numerical quantities. Several animals have also passed the mirror test of self-consciousness. But a few surprising examples of mathematical abstraction notwithstanding (for example, chimpanzees can be trained to carry out symbolic addition with digits, or the report of a parrot understanding a "zero-like concept"), all examples of animal intelligence with respect to mathematics are limited to basic counting abilities. He adds, "non-human intelligent beings should exist that understand the language of advanced mathematics. However, none of the non-human intelligent beings that we know of confirm the status of (advanced) mathematics as an objective language." In the paper "On Math, Matter and Mind" the secularist viewpoint examined argues that math is evolving over time, there is "no reason to think it is converging to a definite structure, with fixed questions and established ways to address them", and also that "The Radical Platonist position is just another metaphysical theory like solipsism... In the end the metaphysics just demands that we use a different language for saying what we already knew." Tegmark responds that "The notion of a mathematical structure is rigorously defined in any book on Model Theory", and that non-human mathematics would only differ from our own "because we are uncovering a different part of what is in fact a consistent and unified picture, so math is converging in this sense." In his 2014 book on the MUH, Tegmark argues that the resolution is not that we invent the language of mathematics, but that we discover the structure of mathematics.
Coexistence of all mathematical structures
Don Page has argued that "At the ultimate level, there can be only one world and, if mathematical structures are broad enough to include all possible worlds or at least our own, there must be one unique mathematical structure that describes ultimate reality. So I think it is logical nonsense to talk of Level 4 in the sense of the co-existence of all mathematical structures." This means there can only be one mathematical corpus. Tegmark responds that "This is less inconsistent with Level IV than it may sound, since many mathematical structures decompose into unrelated substructures, and separate ones can be unified."
Consistency with our "simple universe"
Alexander Vilenkin comments that "The number of mathematical structures increases with increasing complexity, suggesting that 'typical' structures should be horrendously large and cumbersome. This seems to be in conflict with the beauty and simplicity of the theories describing our world". He goes on to note that Tegmark's solution to this problem, the assigning of lower "weights" to the more complex structures seems arbitrary ("Who determines the weights?") and may not be logically consistent ("It seems to introduce an additional mathematical structure, but all of them are supposed to be already included in the set").
Occam's razor
Tegmark has been criticized as misunderstanding the nature and application of Occam's razor; Massimo Pigliucci reminds us that "Occam's razor is just a useful heuristic, it should never be used as the final arbiter to decide which theory is to be favored".
See also
Abstract object theory
Anthropic principle
Church–Turing thesis
Digital physics
Pancomputationalism
Impossible world
Mathematicism
Measure problem (cosmology)
Modal realism
Ontology
Permutation City
Structuralism (philosophy of science)
"The Unreasonable Effectiveness of Mathematics in the Natural Sciences"
Hilbert's sixth problem
References
Sources
Our Mathematical Universe: written by Max Tegmark and published on January 7, 2014, this book describes Tegmark's theory.
Further reading
Schmidhuber, J. (1997) "A Computer Scientist's View of Life, the Universe, and Everything" in C. Freksa, ed., Foundations of Computer Science: Potential - Theory - Cognition. Lecture Notes in Computer Science, Springer: p. 201-08.
Tegmark, Max (2014), Our Mathematical Universe: My Quest for the Ultimate Nature of Reality.
Woit, P. (17 January 2014), "Book Review: 'Our Mathematical Universe' by Max Tegmark", The Wall Street Journal.
Hamlin, Colin (2017). "Towards a Theory of Universes: Structure Theory and the Mathematical Universe Hypothesis". Synthese 194 (581–591). https://link.springer.com/article/10.1007/s11229-015-0959-y
External links
Jürgen Schmidhuber "The ensemble of universes describable by constructive mathematics."
Page maintained by Max Tegmark with links to his technical and popular writings.
"The 'Everything' mailing list" (and archives). Discusses the idea that all possible universes exist.
Richard Carrier Blogs: Our Mathematical Universe
Interview with Sam Harris Tegmark and Harris discuss efficacy of mathematics, multiverses, artificial intelligence.
Collection of interviews with Max Tegmark in 'Closer to truth"
"Is the Universe made of math?" Excerpt in Scientific American
Abstract object theory
Metaphysical realism
Multiverse
Ontology
Physical cosmology | Mathematical universe hypothesis | [
"Physics",
"Astronomy",
"Mathematics"
] | 2,651 | [
"Astronomical hypotheses",
"Astronomical sub-disciplines",
"Mathematical Platonism",
"Mathematical logic",
"Theoretical physics",
"Mathematical objects",
"Astrophysics",
"Computability theory",
"Multiverse",
"Physical cosmology"
] |
2,148,918 | https://en.wikipedia.org/wiki/Anionic%20addition%20polymerization | In polymer chemistry, anionic addition polymerization is a form of chain-growth polymerization or addition polymerization that involves the polymerization of monomers initiated with anions. The type of reaction has many manifestations, but traditionally vinyl monomers are used. Often anionic polymerization involves living polymerizations, which allows control of structure and composition.
History
As early as 1936, Karl Ziegler proposed that anionic polymerization of styrene and butadiene by consecutive addition of monomer to an alkyl lithium initiator occurred without chain transfer or termination. Twenty years later, living polymerization was demonstrated by Michael Szwarc and coworkers. In one of the breakthrough events in the field of polymer science, Szwarc elucidated that electron transfer occurred from the radical anion of sodium naphthalene to styrene. This resulted in the formation of an organosodium species, which rapidly added styrene to form a "two-ended living polymer." In an important aspect of his work, Szwarc employed the aprotic solvent tetrahydrofuran. Being a physical chemist, Szwarc elucidated the kinetics and the thermodynamics of the process in considerable detail. At the same time, he explored the structure–property relationships of the various ion pairs and radical ions involved. This work provided the foundations for the synthesis of polymers with improved control over molecular weight, molecular weight distribution, and architecture.
The use of alkali metals to initiate polymerization of 1,3-dienes led to the discovery by Stavely and co-workers at Firestone Tire and Rubber company of cis-1,4-polyisoprene. This sparked the development of commercial anionic polymerization processes that utilize alkyllithium initiators.
Roderic Quirk won the 2019 Charles Goodyear Medal in recognition of his contributions to anionic polymerization technology. He was introduced to the subject while working in a Phillips Petroleum lab with Henry Hsieh.
Monomer characteristics
Two broad classes of monomers are susceptible to anionic polymerization.
Vinyl monomers have the formula CH2=CHR; the most important are styrene (R = C6H5), butadiene (R = CH=CH2), and isoprene (R = C(Me)=CH2). A second major class consists of acrylic monomers bearing electron-withdrawing groups, such as acrylonitrile, methacrylates, cyanoacrylates, and acrolein. Other vinyl monomers include vinylpyridine, vinyl sulfone, vinyl sulfoxide, and vinyl silanes.
Cyclic monomers
Many cyclic compounds are susceptible to ring-opening polymerization; these include epoxides, cyclic trisiloxanes, some lactones, lactides, cyclic carbonates, and amino acid N-carboxyanhydrides.
In order for polymerization to occur with vinyl monomers, the substituents on the double bond must be able to stabilize a negative charge. Stabilization occurs through delocalization of the negative charge. Because of the nature of the carbanion propagating center, substituents that react with bases or nucleophiles either must not be present or be protected.
Initiation
Initiators are selected based on the reactivity of the monomers. Highly electrophilic monomers such as cyanoacrylates require only weakly nucleophilic initiators, such as amines, phosphines, or even halides. Less reactive monomers such as styrene require powerful nucleophiles such as butyl lithium. Initiators of intermediate strength are used for monomers of intermediate reactivity such as vinylpyridine.
The solvents used in anionic addition polymerizations are determined by the reactivity of both the initiator and nature of the propagating chain end. Anionic species with low reactivity, such as heterocyclic monomers, can use a wide range of solvents.
Initiation by electron transfer
Initiation of styrene polymerization with sodium naphthalene proceeds by electron transfer from the naphthalene radical anion to the monomer. The resulting radical dimerizes to give a disodium compound, which then functions as the initiator. Polar solvents are necessary for this type of initiation both for stability of the anion-radical and to solvate the cation species formed. The anion-radical can then transfer an electron to the monomer.
Initiation can also involve the transfer of an electron from the alkali metal to the monomer to form an anion-radical. Initiation occurs on the surface of the metal, with the reversible transfer of an electron to the adsorbed monomer.
Initiation by strong anions
Nucleophilic initiators include covalent or ionic metal amides, alkoxides, hydroxides, cyanides, phosphines, amines and organometallic compounds (alkyllithium compounds and Grignard reagents). The initiation process involves the addition of a neutral (B:) or negative (:B−) nucleophile to the monomer.
The most commercially useful of these initiators have been the alkyllithiums. They are primarily used for the polymerization of styrenes and dienes.
Monomers activated by strongly electron-withdrawing groups may be initiated even by weak anionic or neutral nucleophiles (i.e. amines, phosphines). The most prominent example is the curing of cyanoacrylate, which constitutes the basis for superglue. Here, only traces of basic impurities are sufficient to induce an anionic addition polymerization or zwitterionic addition polymerization, respectively.
Propagation
Propagation in anionic addition polymerization results in the complete consumption of monomer. This stage is often fast, even at low temperatures.
Living anionic polymerization
Living anionic polymerization is a living polymerization technique involving an anionic propagating species.
Living anionic polymerization was demonstrated by Szwarc and co-workers in 1956. Their initial work was based on the polymerization of styrene and dienes.
One of the remarkable features of living anionic polymerization is that the mechanism involves no formal termination step. In the absence of impurities, the carbanion would still be active and capable of adding another monomer. The chains will remain active indefinitely unless there is inadvertent or deliberate termination or chain transfer. This gave rise to two important consequences:
The number average molecular weight, Mn, of the polymer resulting from such a system could be calculated by the amount of consumed monomer and the initiator used for the polymerization, as the degree of polymerization would be the ratio of the moles of the monomer consumed to the moles of the initiator added.
Mn = Mo · [M]o/[I], where Mo = formula weight of the repeating unit, [M]o = initial concentration of the monomer, and [I] = concentration of the initiator (a numerical sketch of this calculation follows this list).
All the chains are initiated at roughly the same time. The final result is that the polymer synthesis can be done in a much more controlled manner in terms of the molecular weight and molecular weight distribution (Poisson distribution).
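The following short Python sketch illustrates this relationship; the monomer, its formula weight, and the concentrations are hypothetical values chosen for the example, assuming complete monomer consumption and one growing chain per initiator molecule.

```python
# Hypothetical example: predicted number-average molecular weight (Mn) for a
# living anionic polymerization, assuming complete monomer consumption and
# one growing chain per initiator molecule.
M0 = 104.15            # g/mol, formula weight of a styrene repeating unit
monomer_conc = 1.0     # mol/L, initial monomer concentration [M]o
initiator_conc = 0.01  # mol/L, initiator concentration [I]

degree_of_polymerization = monomer_conc / initiator_conc  # monomer units per chain
Mn = M0 * degree_of_polymerization
print(f"DP = {degree_of_polymerization:.0f}, Mn = {Mn:.0f} g/mol")  # DP = 100, Mn = 10415 g/mol
```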
The following experimental criteria have been proposed as a tool for identifying a system as living polymerization system.
Polymerization proceeds until the monomer is completely consumed and resumes upon the addition of further monomer.
Constant number of active centers or propagating species.
Poisson distribution of molecular weight
Chain end functionalization can be carried out quantitatively.
However, in practice, even in the absence of terminating agents, the concentration of the living anions will decrease with time due to a decay mechanism termed spontaneous termination.
Consequences of living polymerization
Block copolymers
Synthesis of block copolymers is one of the most important applications of living polymerization as it offers the best control over structure. The nucleophilicity of the resulting carbanion will govern the order of monomer addition, as the monomer forming the less nucleophilic propagating species may inhibit the addition of the more nucleophilic monomer onto the chain. An extension of the above concept is the formation of triblock copolymers where each step of such a sequence aims to prepare a block segment with predictable, known molecular weight and narrow molecular weight distribution without chain termination or transfer.
Sequential monomer addition is the dominant method, although this simple approach suffers some limitations: for common A-b-B structures, sequential block copolymerization gives access to well-defined block copolymers only if the crossover reaction rate constant is significantly higher than the rate constant of the homopolymerization of the second monomer, i.e., kAB >> kBB. Alternative strategies can, moreover, enable the synthesis of linear block copolymer structures that are not accessible via sequential monomer addition.
End-group functionalization/termination
One of the remarkable features of living anionic polymerization is the absence of a formal termination step. In the absence of impurities, the carbanion would remain active, awaiting the addition of new monomer. Termination can occur through unintentional quenching by impurities, often present in trace amounts. Typical impurities include oxygen, carbon dioxide, or water. Termination intentionally allows the introduction of tailored end groups.
Living anionic polymerization allows the incorporation of functional end-groups, usually added to quench polymerization. End-groups that have been used in the functionalization of α-haloalkanes include hydroxide, -NH2, -OH, -SH, -CHO, -COCH3, -COOH, and epoxides.
An alternative approach for functionalizing end-groups is to begin polymerization with a functional anionic initiator. In this case, the functional groups are protected, since the end of the anionic polymer chain is a strong base. This method leads to polymers with controlled molecular weights and narrow molecular weight distributions.
Additional reading
Cowie, J.; Arrighi,V. Polymers: Chemistry and Physics of Modern Materials; CRC Press: Boca Raton, FL, 2008.
References
Polymerization reactions
ja:重合反応#アニオン重合 | Anionic addition polymerization | [
"Chemistry",
"Materials_science"
] | 2,161 | [
"Polymerization reactions",
"Polymer chemistry"
] |
2,149,887 | https://en.wikipedia.org/wiki/Reductive%20amination | Reductive amination (also known as reductive alkylation) is a form of amination that converts a carbonyl group to an amine via an intermediate imine. The carbonyl group is most commonly a ketone or an aldehyde. It is a common method to make amines and is widely used in green chemistry since it can be done catalytically in one pot under mild conditions. In biochemistry, dehydrogenase enzymes use reductive amination to produce the amino acid glutamate. Additionally, there is ongoing research on alternative synthesis mechanisms with various metal catalysts which allow the reaction to be less energy-taxing and to require milder reaction conditions. Investigation into biocatalysts, such as imine reductases, has allowed for higher selectivity in the synthesis of chiral amines, which is an important factor in pharmaceutical synthesis.
Reaction process
Reductive amination occurs between a carbonyl such as an aldehyde or ketone and an amine in the presence of a reducing agent. The reaction conditions are neutral or weakly acidic.
Reaction Steps
The nucleophilic amine reacts at the carbon of the carbonyl group to form a hemiaminal species.
Reversible loss of one molecule of water from the hemiaminal species by alkylimino-de-oxo-bisubstitution forms the imine intermediate. The equilibrium between aldehyde/ketone and imine is shifted toward imine formation by dehydration.
The intermediate imine can be isolated or reacted in-situ with a suitable reducing agent (e.g., sodium borohydride) to produce the amine product. Intramolecular reductive amination can also occur to afford a cyclic amine product if the amine and carbonyl are on the same molecule of starting material.
There are two ways to conduct a reductive amination reaction: direct and indirect.
Direct Reductive Amination
In a direct reaction, the carbonyl and amine starting materials and the reducing agent are combined and the reductions are done sequentially. These are often one-pot reactions since the imine intermediate is not isolated before the final reduction to the product. Instead, as the reaction proceeds, the imine becomes favoured for reduction over the carbonyl starting material. The two most common methods for direct reductive amination are hydrogenation with catalytic platinum, palladium, or nickel catalysts and the use of hydride reducing agents like cyanoborohydride (NaBH3CN).
Indirect Reductive Amination
Indirect reductive amination, also called a stepwise reduction, isolates the imine intermediate. In a separate step, the isolated imine intermediate is reduced to form the amine product.
Designing a reductive amination reaction
There are many considerations to be made when designing a reductive amination reaction.
Chemoselectivity issues may arise since the carbonyl group can also be reduced.
The reaction between the carbonyl and the amine is in equilibrium, favouring the carbonyl unless water is removed from the system.
Reduction-sensitive intermediates may form in the reaction, which can affect chemoselectivity.
The amine substrate, imine intermediate, or amine product might deactivate the catalyst.
Acyclic imines have E/Z isomers. This makes it difficult to create enantiopure chiral compounds through stereoselective reductions.
To solve the last issue, asymmetric reductive amination reactions can be used to synthesize enantiopure chiral amines. In asymmetric reductive amination, a prochiral carbonyl compound (achiral, but giving a stereocentre upon amination) is used. The carbonyl undergoes condensation with an amine in the presence of H2 and a chiral catalyst to form the imine intermediate, which is then reduced to form the amine. However, this method remains limited for the synthesis of primary amines, which are prone to non-selective reaction and overalkylation.
Common reducing agents
Palladium Hydride
Palladium hydride (H2/Pd) is a versatile reducing agent commonly used in reductive amination reactions. Its catalytic efficiency stems from the ability of palladium to adsorb hydrogen gas, forming active hydride species. These hydrides facilitate the reduction of imines or iminium ions—key intermediates in reductive amination—into secondary or tertiary amines. This reaction typically occurs under mild conditions with excellent selectivity, which often makes H2/Pd the first choice for synthesizing amines in pharmaceuticals and fine chemicals. Additionally, H2/Pd is compatible with a wide range of functional groups, further enhancing its utility in complex organic synthesis.
Sodium Borohydride
Sodium borohydride (NaBH4) reduces both imines and carbonyl groups. However, it is not very selective and can reduce other reducible functional groups present in the reaction. To ensure that this does not occur, reagents with weakly electrophilic carbonyl groups, poorly nucleophilic amines and sterically hindered reactive centres should not be used, as these properties do not favour conversion of the carbonyl to the imine and increase the chance that other functional groups will be reduced instead.
Sodium Cyanoborohydride
Sodium cyanoborohydride (NaBH3CN) is soluble in hydroxylic solvents, stable in acidic solutions, and has different selectivities depending on the pH. At low pH values, it efficiently reduces aldehydes and ketones. As the pH increases, the reduction rate slows and the imine intermediate instead becomes the preferred substrate for reduction. For this reason, NaBH3CN is an ideal reducing agent for one-pot direct reductive amination reactions that do not isolate the intermediate imine.
When used as a reducing agent, NaBH3CN can release toxic by-products like HCN and NaCN during work up.
Sodium Triacetoxyborohydride
Sodium triacetoxyborohydride (STAB, NaBH(OAc)3) is a common reducing agent for reductive aminations. STAB selectively reduces the imine intermediate formed through dehydration of the molecule. STAB is a weaker reductant than NaBH4, and can preferentially reduce the imine group in the presence of other reduction-sensitive functional groups. While STAB has also been reported as a selective reducing agent for aldehydes in the presence of keto groups, standard reductive amination reaction conditions greatly favour imine reduction to form an amine.
Variations and related reactions
The reductive amination reaction is related to the Eschweiler–Clarke reaction, in which amines are methylated to tertiary amines, the Leuckart–Wallach reaction, and other amine alkylation methods such as the Mannich reaction and Petasis reaction.
A classic named reaction is the Mignonac reaction (1921), involving the reaction of a ketone with ammonia over a nickel catalyst. An example of this reaction is the synthesis of 1-phenylethylamine from acetophenone.
Additionally, many systems catalyze reductive aminations with hydrogenation catalysts. Generally, catalysis is preferred to stoichiometric reactions as they may improve reaction efficiency and atom economy, and produce less waste. These reactions can utilize homogeneous or heterogeneous catalyst systems. These systems provide alternative synthesis routes which are efficient, require fewer volatile reagents and are redox-economical. As well, this method can be used in the reduction of alcohols, along with aldehydes and ketones to form the amine product. One example of a heterogeneous catalytic system is the Ni-catalyzed reductive amination of alcohols. Nickel is commonly used as a catalyst for reductive amination because of its abundance and relatively good catalytic activity.
An example of a homogeneous catalytic system is the reductive amination of ketones done with an iridium catalyst. Homogenous Iridium (III) catalysts have been shown to be effective in the reductive amination of carboxylic acids, which in the past has been more difficult than aldehydes and ketones. Homogeneous catalysts are often favored because they are more environmentally and economically friendly compared to most heterogeneous systems.
In industry, tertiary amines such as triethylamine and diisopropylethylamine are formed directly from ketones with a gaseous mixture of ammonia and hydrogen and a suitable catalyst.
In green chemistry
Reductive amination is commonly used over other methods for introducing amines to alkyl substrates, such as SN2-type reactions with halides, since it can be done in mild conditions and has high selectivity for nitrogen-containing compounds. Reductive amination can occur sequentially in one-pot reactions, which eliminates the need for intermediate purifications and reduces waste. Some multistep synthetic pathways have been reduced to one step through one-pot reductive amination. This makes it a highly appealing method to produce amines in green chemistry.
Biochemistry
In biochemistry, dehydrogenase enzymes can catalyze the reductive amination of α-keto acids and ammonia to yield α-amino acids. Reductive amination is predominantly used for the synthesis of the amino acid glutamate starting from α-ketoglutarate, while biochemistry largely relies on transamination to introduce nitrogen in the other amino acids. The use of enzymes as a catalyst is advantageous because the enzyme active sites are often stereospecific and have the ability to selectively synthesize a certain enantiomer. This is useful in the pharmaceutical industry, particularly for drug-development, because enantiomer pairs can have different reactivities in the body. Additionally, enzyme biocatalysts are often quite selective in reactivity so they can be used in the presence of other functional groups, without the use of protecting groups. For instance a class of enzymes called imine reductases, IREDs, can be used to catalyze direct asymmetric reductive amination to form chiral amines.
In popular culture
In the critically acclaimed drama Breaking Bad, main character Walter White uses the reductive amination reaction to produce his high purity methamphetamine, relying on phenyl-2-propanone and methylamine.
See also
Forster–Decker method
Leuckart reaction
References
External links
Current methods for reductive amination
Industrial reductive amination at BASF
Organic redox reactions | Reductive amination | [
"Chemistry"
] | 2,236 | [
"Coupling reactions",
"Organic redox reactions",
"Organic reactions"
] |
2,149,972 | https://en.wikipedia.org/wiki/Convective%20inhibition | Convective inhibition (CIN or CINH) is a numerical measure in meteorology that indicates the amount of energy that will prevent an air parcel from rising from the surface to the level of free convection.
CIN is the amount of energy required to overcome the negatively buoyant energy the environment exerts on an air parcel. In most cases, when CIN exists, it covers a layer from the ground to the level of free convection (LFC). The negatively buoyant energy exerted on an air parcel is a result of the air parcel being cooler (denser) than the air which surrounds it, which causes the air parcel to accelerate downward. The layer of air dominated by CIN is warmer and more stable than the layers above or below it.
Convective inhibition is measured when layers of warmer air lie above a particular region of air. The effect of having warm air above a cooler air parcel is to prevent the cooler air parcel from rising into the atmosphere, creating a stable region of air. Convective inhibition indicates the amount of energy that will be required to force the cooler parcel of air to rise. This energy comes from fronts, heating, moistening, mesoscale convergence boundaries such as outflow and sea breeze boundaries, or orographic lift.
Typically, an area with a high convective inhibition value is considered stable and has very little likelihood of developing a thunderstorm. Conceptually, it is the opposite of CAPE.
CIN hinders the updrafts necessary to produce convective weather, such as thunderstorms. However, when large amounts of CIN are reduced by heating and moistening during a convective storm, the storm will be more severe than if no CIN had been present.
CIN is strengthened by low altitude dry air advection and surface air cooling. Surface cooling causes a small capping inversion to form aloft allowing the air to become stable. Incoming weather fronts and short waves influence the strengthening or weakening of CIN.
CIN is calculated from measurements recorded electronically by a rawinsonde (weather balloon), which carries devices that measure weather parameters such as air temperature and pressure. A single value for CIN is calculated from one balloon ascent by use of the equation below:

    CIN = ∫_{z_bottom}^{z_top} g · (Tv,parcel − Tv,env) / Tv,env dz

The z_bottom and z_top limits of integration in the equation represent the bottom and top altitudes (in meters) of a single CIN layer, Tv,parcel is the virtual temperature of the specific parcel and Tv,env is the virtual temperature of the environment. In many cases, the z_bottom value is the ground and the z_top value is the LFC. CIN is an energy per unit mass and the units of measurement are joules per kilogram (J/kg). CIN is expressed as a negative energy value. CIN magnitudes greater than 200 J/kg are sufficient to prevent convection in the atmosphere.
The CIN energy value is an important figure on a skew-T log-P diagram and is a helpful value in evaluating the severity of a convective event. On a skew-T log-P diagram, CIN is any area between the warmer environment virtual temperature profile and the cooler parcel virtual temperature profile.
CIN is effectively negative buoyancy, expressed B-; the opposite of convective available potential energy (CAPE), which is expressed as B+ or simply B. As with CAPE, CIN is usually expressed in J/kg but may also be expressed as m2/s2, as the values are equivalent. In fact, CIN is sometimes referred to as negative buoyant energy (NBE).
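For illustration, the integral above can be approximated from a discretized sounding with a simple trapezoidal sum. The following minimal Python sketch uses made-up heights and virtual temperatures; the profile, layer depth, and variable names are illustrative assumptions rather than real sounding data.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

# Illustrative sounding between the surface (z_bottom) and the LFC (z_top)
z = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])            # height, m
tv_parcel = np.array([300.0, 297.5, 295.0, 292.5, 290.0])   # parcel virtual temperature, K
tv_env = np.array([300.5, 298.5, 296.5, 294.0, 291.0])      # environmental virtual temperature, K

# CIN = integral of g * (Tv_parcel - Tv_env) / Tv_env dz; it comes out negative
# when the parcel is cooler (denser) than its environment throughout the layer.
integrand = g * (tv_parcel - tv_env) / tv_env
cin = np.trapz(integrand, z)
print(f"CIN = {cin:.1f} J/kg")
```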
See also
Atmospheric thermodynamics
Convective instability
Equilibrium level
Thermodynamic diagrams
References
External links
CINH Help Page
Atmospheric thermodynamics
Meteorological quantities
Severe weather and convection | Convective inhibition | [
"Physics",
"Mathematics"
] | 783 | [
"Quantity",
"Physical quantities",
"Meteorological quantities"
] |
2,150,292 | https://en.wikipedia.org/wiki/Sergei%20Winogradsky | Sergei Nikolaevich Winogradsky (1856, Kyiv – 24 February 1953, Brie-Comte-Robert), also published under the name Sergius Winogradsky, was a Ukrainian and Russian microbiologist, ecologist and soil scientist who pioneered the cycle-of-life concept. Winogradsky discovered the first known form of lithotrophy during his research with Beggiatoa in 1887. He reported that Beggiatoa oxidized hydrogen sulfide (H2S) as an energy source and formed intracellular sulfur droplets. This research provided the first example of lithotrophy, but not autotrophy. He was born in the capital of present-day Ukraine, and his legacy is also celebrated by this nation.
His research on nitrifying bacteria would report the first known form of chemoautotrophy, showing how a lithotroph fixes carbon dioxide (CO2) to make organic compounds.
He is best known in school science as the inventor of the Winogradsky column technique for the study of sediment microbes.
Biography
Winogradsky was born in Kyiv, Russian Empire to a family of wealthy lawyers. Among his paternal ancestors were Cossack atamans, and on the maternal side he was linked to the Skoropadsky family. In his youth Winogradsky was "strictly devoted to the Orthodox faith", though he later became irreligious.
After graduating from the 2nd Kiev Gymnasium in 1873, he began studying law, but he entered the Imperial Conservatoire of Music in Saint Petersburg in 1875 to study piano. However, after two years of music training, he entered the Saint Petersburg Imperial University in 1877 to study chemistry under Nikolai Menshutkin and botany under Andrei Famintsyn, receiving his degree in 1881 and staying on for a master's in botany, which he received in 1884. In 1885, he moved to the University of Straßburg to work under the renowned botanist Anton de Bary, subsequently becoming renowned for his work on sulfur bacteria.
In 1888, after de Bary's death, he relocated to Zürich, where he began investigation into the process of nitrification, identifying the genera Nitrosomonas and Nitrosococcus, which oxidizes ammonium to nitrite, and Nitrobacter, which oxidizes nitrite to nitrate.
He returned to St. Petersburg for the period 1891–1905, obtaining his doctoral degree in 1902 and from then on heading the division of general microbiology of the Institute of Experimental Medicine. During this period, he identified the obligate anaerobe Clostridium pasteurianum, which is capable of fixing atmospheric nitrogen. In St. Petersburg he trained Vasily Omelianski, who popularized Winogradsky's concepts and methodology in the Soviet Union during the next decades.
In 1901, he was elected an honorary member of the Moscow Society of Naturalists and, in 1902, a corresponding member of the French Academy of Sciences. In 1905, due to ill health, the scientist left the institute and moved from St. Petersburg to the town of Gorodok in Podolia, where from 1892 he owned a huge estate. In fact, while working as the director of the Institute of Experimental Medicine, Winogradsky renounced his salary, which was transferred to a special account, and then used these funds to build a room for a scientific library, the director of which lived on the income from the estate, where agricultural work was carried out.
In Gorodok Winogradsky addressed the problems of agriculture and soil science. He introduced new management methods, bought the best varieties of seeds, plants, and livestock, and advanced technology. His estate became one of the richest and most successful in Podolia, and remained profitable even during the First World War, falling under Austro-Hungarian occupation.
He retired from active scientific work in 1905, dividing his time between his private estate in Gorodok and Switzerland.
After the revolution of 1917, Winogradsky went first to Switzerland and then to Belgrade. In 1922, he accepted an invitation to head the Pasteur Institute's division of agricultural bacteriology at an experimental station at Brie-Comte-Robert, France, about 30 km from Paris. During this period, he worked on a number of topics, among them iron bacteria, nitrifying bacteria, nitrogen fixation by Azotobacter, cellulose-decomposing bacteria, and culture methods for soil microorganisms. In 1923 Winogradsky became an honorary member of the Russian Academy of Sciences despite his emigration. He retired from active life in 1940 and died in Brie-Comte-Robert in 1953.
Discoveries
Winogradsky discovered various biogeochemical cycles and parts of these cycles. These discoveries include
His work on bacterial sulfide oxidation for which he first became renowned, including the first known form of lithotrophy (in Beggiatoa).
His work on the nitrogen cycle, including
The identification of the obligate anaerobe Clostridium pasteurianum, a free-living microbe capable of fixing atmospheric nitrogen without living in legume root nodules.
Chemosynthesis – his most noted discovery
The Winogradsky column
Chemosynthesis
Winogradsky is best known for discovering chemoautotrophy, which soon became popularly known as chemosynthesis, the process by which organisms derive energy from a number of different inorganic compounds and obtain carbon in the form of carbon dioxide. Previously, it was believed that autotrophs obtained their energy solely from light, not from reactions of inorganic compounds. With the discovery of organisms that oxidized inorganic compounds such as hydrogen sulfide and ammonium as energy sources, autotrophs could be divided into two groups: photoautotrophs and chemoautotrophs. Winogradsky was one of the first researchers to attempt to understand microorganisms outside of the medical context, making him among the first students of microbial ecology and environmental microbiology.
The Winogradsky column remains an important display of chemoautotrophy and microbial ecology, demonstrated in microbiology lectures around the world.
Memorials
The Institute of Microbiology of the Russian Academy of Sciences bears Winogradsky's name since 2003.
In 2012, a bust of the scientist was unveiled on the grounds of his former estate in Horodok, Khmelnytskyi Oblast, Ukraine.
In Ukraine, the study and popularization of the life and activities of Sergey Winogradsky are promoted by the Winogradsky Club, whose centre is located in the Horodok Museum of Local History (G-MUSEUM). One of the museum's exhibitions is a reconstruction of Winogradsky's laboratory in Brie-Comte-Robert including a wax figure of the scientist.
See also
Hermann Hellriegel
Martinus Beijerinck
Further reading
Ackert, Lloyd. Sergei Vinogradskii and the Cycle of Life: From the Thermodynamics of Life to Ecological Microbiology, 1850-1950. Vol. 34.; Dordrecht; London: Springer, 2013.
References
External links
Sergei Winogradsky at Cycle of Life website including images.
page Winogradsky Club
official website G-Museum
1856 births
1953 deaths
Environmental microbiology
Foreign members of the Royal Society
Former Russian Orthodox Christians
Nitrogen cycle
Ukrainian biochemists
Ukrainian microbiologists
Ukrainian biologists
Ukrainian ecologists
Leeuwenhoek Medal winners
Academic staff of the University of Strasbourg
Russian scientists | Sergei Winogradsky | [
"Chemistry",
"Environmental_science"
] | 1,550 | [
"Environmental microbiology",
"Nitrogen cycle",
"Metabolism"
] |
2,150,441 | https://en.wikipedia.org/wiki/Quantifier%20elimination | Quantifier elimination is a concept of simplification used in mathematical logic, model theory, and theoretical computer science. Informally, a quantified statement "∃x such that φ(x)" can be viewed as a question "When is there an x such that φ(x)?", and the statement without quantifiers can be viewed as the answer to that question.
One way of classifying formulas is by the amount of quantification. Formulas with less depth of quantifier alternation are thought of as being simpler, with the quantifier-free formulas as the simplest.
A theory has quantifier elimination if for every formula α, there exists another formula α_QF without quantifiers that is equivalent to it (modulo this theory).
Examples
An example from mathematics says that a single-variable quadratic polynomial has a real root if and only if its discriminant is non-negative:

    ∃x ∈ ℝ (ax² + bx + c = 0)  ⟺  b² − 4ac ≥ 0   (assuming a ≠ 0)

Here the sentence on the left-hand side involves a quantifier ∃x, whereas the equivalent sentence on the right does not.
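As a quick numerical illustration (the sample coefficients below are arbitrary and the check is a spot test, not a proof), the quantifier-free condition on the right can be compared against an explicit search for real roots:

```python
import numpy as np

# Spot check: "there exists a real x with a*x^2 + b*x + c = 0" agrees with the
# quantifier-free condition b^2 - 4ac >= 0 (for a != 0).
samples = [(1.0, 2.0, 1.0), (1.0, 0.0, 1.0), (2.0, -3.0, -5.0), (1.0, 1.0, 5.0)]
for a, b, c in samples:
    has_real_root = any(abs(np.imag(r)) < 1e-9 for r in np.roots([a, b, c]))
    discriminant_nonnegative = b * b - 4 * a * c >= 0
    assert has_real_root == discriminant_nonnegative
    print((a, b, c), has_real_root, discriminant_nonnegative)
```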
Examples of theories that have been shown decidable using quantifier elimination are Presburger arithmetic, algebraically closed fields, real closed fields, atomless Boolean algebras, term algebras, dense linear orders, abelian groups, random graphs, as well as many of their combinations such as Boolean algebra with Presburger arithmetic, and term algebras with queues.
A quantifier eliminator for the theory of the real numbers as an ordered additive group is Fourier–Motzkin elimination; for the theory of the field of real numbers it is the Tarski–Seidenberg theorem.
Quantifier elimination can also be used to show that "combining" decidable theories leads to new decidable theories (see Feferman–Vaught theorem).
Algorithms and decidability
If a theory has quantifier elimination, then a specific question can be addressed: Is there a method of determining α_QF for each formula α? If there is such a method we call it a quantifier elimination algorithm. If there is such an algorithm, then decidability for the theory reduces to deciding the truth of the quantifier-free sentences. Quantifier-free sentences have no variables, so their validity in a given theory can often be computed, which enables the use of quantifier elimination algorithms to decide validity of sentences.
Related concepts
Various model-theoretic ideas are related to quantifier elimination, and there are various equivalent conditions.
Every first-order theory with quantifier elimination is model complete. Conversely, a model-complete theory, whose theory of universal consequences has the amalgamation property, has quantifier elimination.
The models of the theory of the universal consequences of a theory T are precisely the substructures of the models of T. The theory of linear orders does not have quantifier elimination. However, the theory of its universal consequences has the amalgamation property.
Basic ideas
To show constructively that a theory has quantifier elimination, it suffices to show that we can eliminate an existential quantifier applied to a conjunction of literals, that is, show that each formula of the form:

    ∃x (L1 ∧ … ∧ Ln)

where each Li is a literal, is equivalent to a quantifier-free formula. Indeed, suppose we know how to eliminate quantifiers from conjunctions of literals; then if F is a quantifier-free formula, we can write it in disjunctive normal form

    C1 ∨ … ∨ Cm,

where each Cj is a conjunction of literals, and use the fact that

    ∃x (C1 ∨ … ∨ Cm)

is equivalent to

    (∃x C1) ∨ … ∨ (∃x Cm).

Finally, to eliminate a universal quantifier

    ∀x F,

where F is quantifier-free, we transform ¬F into disjunctive normal form, and use the fact that

    ∀x F

is equivalent to

    ¬∃x ¬F.
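As a worked instance of this recipe (a sketch; the ambient theory is taken to be that of dense linear orders without endpoints, one of the theories listed above as admitting quantifier elimination), the existential quantifier over a conjunction of order literals can be removed directly:

```latex
% Worked example: eliminating an existential quantifier applied to a
% conjunction of literals, in the theory of dense linear orders without
% endpoints.
\exists x\,\bigl(a < x \;\land\; x < b\bigr) \;\Longleftrightarrow\; a < b
% Density supplies a witness strictly between a and b whenever a < b, and no
% witness can exist otherwise, so the right-hand side is the quantifier-free
% equivalent.
```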
Relationship with decidability
In early model theory, quantifier elimination was used to demonstrate that various theories possess properties like decidability and completeness. A common technique was to show first that a theory admits elimination of quantifiers and thereafter prove decidability or completeness by considering only the quantifier-free formulas. This technique can be used to show that Presburger arithmetic is decidable.
Theories can be decidable yet not admit quantifier elimination. Strictly speaking, the theory of the additive natural numbers does not admit quantifier elimination, but it was an expansion of the additive natural numbers that was shown to be decidable. Whenever a theory is decidable, and the language of its valid formulas is countable, it is possible to extend the theory with countably many relations to have quantifier elimination (for example, one can introduce, for each formula of the theory, a relation symbol that relates the free variables of the formula).
Example: Nullstellensatz for algebraically closed fields and for differentially closed fields.
See also
Cylindrical algebraic decomposition
Elimination theory
Conjunction elimination
Notes
References
, see for an English translation
Model theory | Quantifier elimination | [
"Mathematics"
] | 1,008 | [
"Mathematical logic",
"Model theory"
] |
2,150,549 | https://en.wikipedia.org/wiki/Oxidative%20stress | Oxidative stress reflects an imbalance between the systemic manifestation of reactive oxygen species and a biological system's ability to readily detoxify the reactive intermediates or to repair the resulting damage. Disturbances in the normal redox state of cells can cause toxic effects through the production of peroxides and free radicals that damage all components of the cell, including proteins, lipids, and DNA. Oxidative stress from oxidative metabolism causes base damage, as well as strand breaks in DNA. Base damage is mostly indirect and caused by the reactive oxygen species generated, e.g., O2•− (superoxide radical), •OH (hydroxyl radical) and H2O2 (hydrogen peroxide). Further, some reactive oxidative species act as cellular messengers in redox signaling. Thus, oxidative stress can cause disruptions in normal mechanisms of cellular signaling.
In humans, oxidative stress is thought to be involved in the development of attention deficit hyperactivity disorder, cancer, Parkinson's disease, Lafora disease, Alzheimer's disease, atherosclerosis, heart failure, myocardial infarction, fragile X syndrome, sickle-cell disease, lichen planus, vitiligo, autism, infection, chronic fatigue syndrome, and depression; however, reactive oxygen species can be beneficial, as they are used by the immune system as a way to attack and kill pathogens. Short-term oxidative stress may also be important in prevention of aging by induction of a process named mitohormesis, and is required to initiate stress response processes in plants.
Chemical and biological effects
Chemically, oxidative stress is associated with increased production of oxidizing species or a significant decrease in the effectiveness of antioxidant defenses, such as glutathione. The effects of oxidative stress depend upon the size of these changes, with a cell being able to overcome small perturbations and regain its original state. However, more severe oxidative stress can cause cell death, and even moderate oxidation can trigger apoptosis, while more intense stresses may cause necrosis.
Production of reactive oxygen species is a particularly destructive aspect of oxidative stress. Such species include free radicals and peroxides. Some of the less reactive of these species (such as superoxide) can be converted by oxidoreduction reactions with transition metals or other redox cycling compounds (including quinones) into more aggressive radical species that can cause extensive cellular damage. Most long-term effects are caused by damage to DNA. DNA damage induced by ionizing radiation is similar to oxidative stress, and these lesions have been implicated in aging and cancer. Biological effects of single-base damage by radiation or oxidation, such as 8-oxoguanine and thymine glycol, have been extensively studied. Recently the focus has shifted to some of the more complex lesions. Tandem DNA lesions are formed at substantial frequency by ionizing radiation and metal-catalyzed reactions. Under anoxic conditions, the predominant double-base lesion is a species in which C8 of guanine is linked to the 5-methyl group of an adjacent 3'-thymine (G[8,5- Me]T). Most of these oxygen-derived species are produced by normal aerobic metabolism. Normal cellular defense mechanisms destroy most of these. Repair of oxidative damages to DNA is frequent and ongoing, largely keeping up with newly induced damages. In rat urine, about 74,000 oxidative DNA adducts per cell are excreted daily. There is also a steady state level of oxidative damages in the DNA of a cell. There are about 24,000 oxidative DNA adducts per cell in young rats and 66,000 adducts per cell in old rats. Likewise, any damage to cells is constantly repaired. However, under the severe levels of oxidative stress that cause necrosis, the damage causes ATP depletion, preventing controlled apoptotic death and causing the cell to simply fall apart.
Polyunsaturated fatty acids, particularly arachidonic acid and linoleic acid, are primary targets for free radical and singlet oxygen oxidations. For example, in tissues and cells, the free radical oxidation of linoleic acid produces racemic mixtures of 13-hydroxy-9Z,11E-octadecadienoic acid, 13-hydroxy-9E,11E-octadecadienoic acid, 9-hydroxy-10E,12-E-octadecadienoic acid (9-EE-HODE), and 11-hydroxy-9Z,12-Z-octadecadienoic acid as well as 4-Hydroxynonenal while singlet oxygen attacks linoleic acid to produce (presumed but not yet proven to be racemic mixtures of) 13-hydroxy-9Z,11E-octadecadienoic acid, 9-hydroxy-10E,12-Z-octadecadienoic acid, 10-hydroxy-8E,12Z-octadecadienoic acid, and 12-hydroxy-9Z-13-E-octadecadienoic (see 13-Hydroxyoctadecadienoic acid and 9-Hydroxyoctadecadienoic acid). Similar attacks on arachidonic acid produce a far larger set of products including various isoprostanes, hydroperoxy- and hydroxy- eicosatetraenoates, and 4-hydroxyalkenals. While many of these products are used as markers of oxidative stress, the products derived from linoleic acid appear far more predominant than arachidonic acid products and therefore easier to identify and quantify in, for example, atheromatous plaques. Certain linoleic acid products have also been proposed to be markers for specific types of oxidative stress. For example, the presence of racemic 9-HODE and 9-EE-HODE mixtures reflects free radical oxidation of linoleic acid whereas the presence of racemic 10-hydroxy-8E,12Z-octadecadienoic acid and 12-hydroxy-9Z-13-E-octadecadienoic acid reflects singlet oxygen attack on linoleic acid. In addition to serving as markers, the linoleic and arachidonic acid products can contribute to tissue and/or DNA damage but also act as signals to stimulate pathways which function to combat oxidative stress.
Production and consumption of oxidants
One source of reactive oxygen under normal conditions in humans is the leakage of activated oxygen from mitochondria during oxidative phosphorylation. E. coli mutants that lack an active electron transport chain produce as much hydrogen peroxide as wild-type cells, indicating that other enzymes contribute the bulk of oxidants in these organisms. One possibility is that multiple redox-active flavoproteins all contribute a small portion to the overall production of oxidants under normal conditions.
Other enzymes capable of producing superoxide are xanthine oxidase, NADPH oxidases and cytochromes P450. Hydrogen peroxide is produced by a wide variety of enzymes including several oxidases. Reactive oxygen species play important roles in cell signalling, a process termed redox signaling. Thus, to maintain proper cellular homeostasis, a balance must be struck between reactive oxygen production and consumption.
The best studied cellular antioxidants are the enzymes superoxide dismutase (SOD), catalase, and glutathione peroxidase. Less well studied (but probably just as important) enzymatic antioxidants are the peroxiredoxins and the recently discovered sulfiredoxin. Other enzymes that have antioxidant properties (though this is not their primary role) include paraoxonase, glutathione-S transferases, and aldehyde dehydrogenases.
The amino acid methionine is prone to oxidation, but this oxidation is reversible. Oxidation of methionine has been shown to inhibit the phosphorylation of adjacent Ser/Thr/Tyr sites in proteins. This provides a plausible mechanism for cells to couple oxidative stress signals with mainstream cellular signaling such as phosphorylation.
Diseases
Oxidative stress is suspected to be important in neurodegenerative diseases including Lou Gehrig's disease (aka MND or ALS), Parkinson's disease, Alzheimer's disease, Huntington's disease, depression, and multiple sclerosis. It is also indicated in Neurodevelopmental conditions such as Autism Spectrum Disorder. Indirect evidence via monitoring biomarkers such as reactive oxygen species, and reactive nitrogen species production indicates oxidative damage may be involved in the pathogenesis of these diseases, while cumulative oxidative stress with disrupted mitochondrial respiration and mitochondrial damage are related to Alzheimer's disease, Parkinson's disease, and other neurodegenerative diseases.
Oxidative stress is thought to be linked to certain cardiovascular diseases, since oxidation of LDL in the vascular endothelium is a precursor to plaque formation. Oxidative stress also plays a role in the ischemic cascade due to oxygen reperfusion injury following hypoxia. This cascade includes both strokes and heart attacks. Oxidative stress has also been implicated in chronic fatigue syndrome (ME/CFS). Oxidative stress also contributes to tissue injury following irradiation and hyperoxia, as well as in diabetes. In hematological cancers, such as leukemia, the impact of oxidative stress can be bilateral. Reactive oxygen species can disrupt the function of immune cells, promoting immune evasion of leukemic cells. On the other hand, high levels of oxidative stress can also be selectively toxic to cancer cells.
Oxidative stress is likely to be involved in the age-related development of cancer. The reactive species produced in oxidative stress can cause direct damage to DNA and are therefore mutagenic, and oxidative stress may also suppress apoptosis and promote proliferation, invasiveness and metastasis. Infection by Helicobacter pylori, which increases the production of reactive oxygen and nitrogen species in the human stomach, is also thought to be important in the development of gastric cancer.
Oxidative stress can cause DNA damage in neurons. In neuronal progenitor cells, DNA damage is associated with increased secretion of amyloid beta proteins Aβ40 and Aβ42. This association supports the existence of a causal relationship between oxidative DNA damage and Aβ accumulation and suggests that oxidative DNA damage may contribute to Alzheimer's disease (AD) pathology. AD is associated with an accumulation of DNA damage (double-strand breaks) in vulnerable neuronal and glial cell populations from early stages onward, and DNA double-strand breaks are increased in the hippocampus of AD brains compared to non-AD control brains.
Antioxidants as supplements
The use of antioxidants to prevent some diseases is controversial. In a high-risk group like smokers, high doses of beta carotene increased the rate of lung cancer, since high doses of beta-carotene in conjunction with the high oxygen tension caused by smoking produce a pro-oxidant effect; an antioxidant effect is seen only when oxygen tension is not high. In less high-risk groups, the use of vitamin E appears to reduce the risk of heart disease. However, while consumption of food rich in vitamin E may reduce the risk of coronary heart disease in middle-aged to older men and women, the use of vitamin E supplements also appears to result in an increase in total mortality, heart failure, and hemorrhagic stroke. The American Heart Association therefore recommends the consumption of food rich in antioxidant vitamins and other nutrients, but does not recommend the use of vitamin E supplements to prevent cardiovascular disease. In other diseases, such as Alzheimer's, the evidence on vitamin E supplementation is also mixed. Since dietary sources contain a wider range of carotenoids and vitamin E tocopherols and tocotrienols from whole foods, ex post facto epidemiological studies can reach conclusions that differ from artificial experiments using isolated compounds. AstraZeneca's radical-scavenging nitrone drug NXY-059 shows some efficacy in the treatment of stroke.
Oxidative stress (as formulated in Denham Harman's free-radical theory of aging) is also thought to contribute to the aging process. While there is good evidence to support this idea in model organisms such as Drosophila melanogaster and Caenorhabditis elegans, recent evidence from Michael Ristow's laboratory suggests that oxidative stress may also promote life expectancy of Caenorhabditis elegans by inducing a secondary response to initially increased levels of reactive oxygen species. The situation in mammals is even less clear. Recent epidemiological findings support the process of mitohormesis, but a 2007 meta-analysis finds that in studies with a low risk of bias (randomization, blinding, follow-up), some popular antioxidant supplements (vitamin A, beta carotene, and vitamin E) may increase mortality risk (although studies more prone to bias reported the reverse).
The USDA removed the table showing the Oxygen Radical Absorbance Capacity (ORAC) of Selected Foods Release 2 (2010) table due to the lack of evidence that the antioxidant level present in a food translated into a related antioxidant effect in the body.
Metal catalysts
Metals such as iron, copper, chromium, vanadium, and cobalt are capable of redox cycling in which a single electron may be accepted or donated by the metal. This action catalyzes production of reactive radicals and reactive oxygen species. The presence of such metals in biological systems in an uncomplexed form (not in a protein or other protective metal complex) can significantly increase the level of oxidative stress. These metals are thought to induce Fenton reactions and the Haber-Weiss reaction, in which hydroxyl radical is generated from hydrogen peroxide. The hydroxyl radical then can modify amino acids. For example, meta-tyrosine and ortho-tyrosine form by hydroxylation of phenylalanine. Other reactions include lipid peroxidation and oxidation of nucleobases. Metal-catalyzed oxidations also lead to irreversible modification of arginine, lysine, proline, and threonine. Excessive oxidative-damage leads to protein degradation or aggregation.
The reaction of transition metals with proteins oxidated by reactive oxygen or nitrogen species can yield reactive products that accumulate and contribute to aging and disease. For example, in Alzheimer's patients, peroxidized lipids and proteins accumulate in lysosomes of the brain cells.
Non-metal redox catalysts
Certain organic compounds in addition to metal redox catalysts can also produce reactive oxygen species. One of the most important classes of these is the quinones. Quinones can redox cycle with their conjugate semiquinones and hydroquinones, in some cases catalyzing the production of superoxide from dioxygen or hydrogen peroxide from superoxide.
Immune defense
The immune system uses the lethal effects of oxidants by making the production of oxidizing species a central part of its mechanism of killing pathogens, with activated phagocytes producing both reactive oxygen and nitrogen species. These include superoxide (O2•−), nitric oxide (•NO) and their particularly reactive product, peroxynitrite (ONOO−). Although the use of these highly reactive compounds in the cytotoxic response of phagocytes causes damage to host tissues, the non-specificity of these oxidants is an advantage since they will damage almost every part of their target cell. This prevents a pathogen from escaping this part of the immune response by mutation of a single molecular target.
Male infertility
Sperm DNA fragmentation appears to be an important factor in the cause of male infertility, since men with high DNA fragmentation levels have significantly lower odds of conceiving. Oxidative stress is the major cause of DNA fragmentation in spermatozoa. A high level of the oxidative DNA damage 8-oxo-2'-deoxyguanosine is associated with abnormal spermatozoa and male infertility.
Aging
In a rat model of premature aging, oxidative stress-induced DNA damage in the neocortex and hippocampus was substantially higher than in normally aging control rats. Numerous studies have shown that the level of 8-oxo-2'-deoxyguanosine, a product of oxidative stress, increases with age in the brain and muscle DNA of the mouse, rat, gerbil and human. Further information on the association of oxidative DNA damage with aging is presented in the article DNA damage theory of aging. However, it was recently shown that the fluoroquinolone antibiotic enoxacin can diminish aging signals and promote lifespan extension in the nematode C. elegans by inducing oxidative stress.
Origin of eukaryotes
The great oxygenation event began with the biologically induced appearance of oxygen in the Earth's atmosphere about 2.45 billion years ago. The rise of oxygen levels due to cyanobacterial photosynthesis in ancient microenvironments was probably highly toxic to the surrounding biota. Under these conditions, the selective pressure of oxidative stress is thought to have driven the evolutionary transformation of an archaeal lineage into the first eukaryotes. Oxidative stress might have acted in synergy with other environmental stresses (such as ultraviolet radiation and/or desiccation) to drive this selection. Selective pressure for efficient repair of oxidative DNA damages may have promoted the evolution of eukaryotic sex involving such features as cell-cell fusions, cytoskeleton-mediated chromosome movements and emergence of the nuclear membrane. Thus, the evolution of meiotic sex and eukaryogenesis may have been inseparable processes that evolved in large part to facilitate repair of oxidative DNA damages.
COVID-19 and cardiovascular injury
It has been proposed that oxidative stress may play a major role in determining cardiac complications in COVID-19.
See also
Antioxidative stress
Acatalasia
Bruce Ames
Malondialdehyde, an oxidative stress marker
Mitochondrial free radical theory of aging
Mitohormesis
Nitric oxide
Pro-oxidant
Reductive stress
References
Cell biology
Chemical pathology
Alzheimer's disease
Senescence | Oxidative stress | [
"Chemistry",
"Biology"
] | 3,923 | [
"Cell biology",
"Senescence",
"Cellular processes",
"Biochemistry",
"Chemical pathology",
"Metabolism"
] |
5,425,217 | https://en.wikipedia.org/wiki/Full%20state%20feedback | Full state feedback (FSF), or pole placement, is a method employed in feedback control system theory to place the closed-loop poles of a plant in predetermined locations in the s-plane. Placing poles is desirable because the location of the poles corresponds directly to the eigenvalues of the system, which control the characteristics of the response of the system. The system must be considered controllable in order to implement this method.
Principle
If the closed-loop dynamics can be represented by the state space equation (see State space (controls))
$\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}$
with output equation
$\mathbf{y} = C\mathbf{x} + D\mathbf{u},$
then the poles of the system transfer function are the roots of the characteristic equation given by
$\det\left[sI - A\right] = 0.$
Full state feedback is utilized by commanding the input vector $\mathbf{u}$. Consider an input proportional (in the matrix sense) to the state vector,
$\mathbf{u} = -K\mathbf{x}.$
Substituting into the state space equations above, we have
$\dot{\mathbf{x}} = (A - BK)\mathbf{x}, \qquad \mathbf{y} = (C - DK)\mathbf{x}.$
The poles of the FSF system are given by the characteristic equation of the matrix $A - BK$, that is $\det\left[sI - (A - BK)\right] = 0$. Comparing the terms of this equation with those of the desired characteristic equation yields the values of the feedback matrix $K$ which force the closed-loop eigenvalues to the pole locations specified by the desired characteristic equation.
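The gain computation can also be carried out numerically; below is a minimal sketch, assuming a hypothetical two-state single-input system and using SciPy's place_poles routine. The matrices A and B and the desired pole locations are illustrative values only and are not taken from the worked example that follows.

```python
# Minimal pole-placement sketch; A, B and the desired poles are illustrative only.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [-6.0, -5.0]])          # toy open-loop dynamics (poles at -2 and -3)
B = np.array([[0.0],
              [1.0]])
desired_poles = np.array([-1.0 + 1.0j, -1.0 - 1.0j])

K = place_poles(A, B, desired_poles).gain_matrix   # feedback law u = -K x

# The eigenvalues of (A - B K) should match the requested closed-loop poles.
print("K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```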
Example of FSF
Consider a system given by the following state space equations:
The uncontrolled system has open-loop poles at and . These poles are the eigenvalues of the matrix and they are the roots of . Suppose, for considerations of the response, we wish the controlled system eigenvalues to be located at and , which are not the poles we currently have. The desired characteristic equation is then , from .
Following the procedure given above, the FSF controlled system characteristic equation is
where
Upon setting this characteristic equation equal to the desired characteristic equation, we find
.
Therefore, setting forces the closed-loop poles to the desired locations, affecting the response as desired.
This procedure only works for single-input systems. Multiple-input systems will have a feedback matrix that is not unique, so choosing the best gain values is not trivial. A linear-quadratic regulator might be used for such applications.
See also
Pole splitting
Step response
Ackermann's Formula
Linear-quadratic regulator
References
External links
Mathematica function to compute the state feedback gains
Control theory
Feedback | Full state feedback | [
"Mathematics"
] | 449 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
5,426,393 | https://en.wikipedia.org/wiki/Advection%20upstream%20splitting%20method | The Advection Upstream Splitting Method (AUSM) is a numerical method used to solve the advection equation in computational fluid dynamics. It is particularly useful for simulating compressible flows with shocks and discontinuities.
The AUSM was developed as a numerical inviscid flux function for solving a general system of conservation equations. It is based on the upwind concept and was motivated to provide an alternative approach to other upwind methods, such as the Godunov method, the flux difference splitting methods of Roe and of Solomon and Osher, and the flux vector splitting methods of Van Leer and of Steger and Warming.
The AUSM first recognizes that the inviscid flux consists of two physically distinct parts, i.e., convective and pressure fluxes. The former is associated with the flow (advection) speed, while the latter with the acoustic speed; they are respectively classified as the linear and nonlinear fields. Currently, the convective and pressure fluxes are formulated using the eigenvalues of the flux Jacobian matrices. The method was originally proposed by Liou and Steffen for typical compressible aerodynamic flows, and was later substantially improved to yield a more accurate and robust version. To extend its capabilities, it has been further developed for all speed regimes and for multiphase flow. Its variants have also been proposed.
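As an illustration of the splitting idea, the following sketch implements the subsonic/supersonic Mach-number and pressure splitting polynomials commonly associated with the original AUSM of Liou and Steffen. It is a hedged toy example rather than the code of any particular solver, and the left/right interface states are made-up numbers.

```python
# Toy sketch of AUSM-style Mach-number and pressure splitting (textbook forms).
import numpy as np

def mach_split(M, sign):
    """Plus (+1) or minus (-1) contribution of the Mach number M."""
    s = sign
    sub = s * 0.25 * (M + s) ** 2            # |M| <= 1: quadratic polynomial
    sup = 0.5 * (M + s * np.abs(M))          # |M| > 1: pure upwinding
    return np.where(np.abs(M) <= 1.0, sub, sup)

def pressure_split(M, p, sign):
    """Plus (+1) or minus (-1) contribution of the static pressure p."""
    s = sign
    sub = 0.25 * p * (M + s) ** 2 * (2.0 - s * M)   # |M| <= 1
    sup = 0.5 * p * (1.0 + s * np.sign(M))          # |M| > 1
    return np.where(np.abs(M) <= 1.0, sub, sup)

# Interface values from left (L) and right (R) states (invented numbers):
M_L, M_R, p_L, p_R = 0.3, -0.1, 1.0, 0.9
M_half = mach_split(M_L, +1) + mach_split(M_R, -1)                    # convective part
p_half = pressure_split(M_L, p_L, +1) + pressure_split(M_R, p_R, -1)  # pressure part
print(M_half, p_half)
```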
Features
The Advection Upstream Splitting Method has many features. The main features are:
accurate capturing of shock and contact discontinuities
entropy-satisfying solution
positivity-preserving solution
algorithmic simplicity (not requiring explicit eigen-structure of the flux Jacobian matrices) and straightforward extension to additional conservation laws
free of “carbuncle” phenomena
uniform accuracy and convergence rate for all Mach numbers.
Since the method does not specifically require eigenvectors, it is especially attractive for the system whose eigen-structure is not known explicitly, as the case of two-fluid equations for multiphase flow.
Applications
The AUSM has been employed to solve a wide range of problems, low-Mach to hypersonic aerodynamics, large eddy simulation and aero-acoustics, direct numerical simulation, multiphase flow, galactic relativistic flow etc.
See also
Euler equations
Finite volume method
Flux limiter
Godunov's theorem
High resolution scheme
Numerical method of lines
Sergei K. Godunov
Total variation diminishing
References
Computational fluid dynamics
Numerical differential equations | Advection upstream splitting method | [
"Physics",
"Chemistry"
] | 498 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
5,427,586 | https://en.wikipedia.org/wiki/Strahler%20number | In mathematics, the Strahler number or Horton–Strahler number of a mathematical tree is a numerical measure of its branching complexity.
These numbers were first developed in hydrology, as a way of measuring the complexity of rivers and streams, by Robert E. Horton and Arthur Newell Strahler. In this application, they are referred to as the Strahler stream order and are used to define stream size based on a hierarchy of tributaries.
The same numbers also arise in the analysis of L-systems and of hierarchical biological structures such as (biological) trees and animal respiratory and circulatory systems, in register allocation for compilation of high-level programming languages and in the analysis of social networks.
Definition
All trees in this context are directed graphs, oriented from the root towards the leaves; in other words, they are arborescences. The degree of a node in a tree is just its number of children. One may assign a Strahler number to all nodes of a tree, in bottom-up order, as follows:
If the node is a leaf (has no children), its Strahler number is one.
If the node has one child with Strahler number i, and all other children have Strahler numbers less than i, then the Strahler number of the node is i again.
If the node has two or more children with Strahler number i, and no children with greater number, then the Strahler number of the node is i + 1.
The Strahler number of a tree is the number of its root node.
Algorithmically, these numbers may be assigned by performing a depth-first search and assigning each node's number in postorder.
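A minimal sketch of this postorder computation, assuming a toy representation of the tree as a dictionary that maps each node to its list of children:

```python
# Compute Strahler numbers by the postorder rule described above.
def strahler(tree, node):
    """Return the Strahler number of `node` in `tree` (dict: node -> children)."""
    children = tree.get(node, [])
    if not children:                       # a leaf has Strahler number 1
        return 1
    child_orders = [strahler(tree, c) for c in children]
    top = max(child_orders)
    # If two or more children attain the maximum, the order increases by one.
    return top + 1 if child_orders.count(top) >= 2 else top

# Example: a root whose two children are leaves -> Strahler number 2.
toy_tree = {"root": ["a", "b"], "a": [], "b": []}
print(strahler(toy_tree, "root"))  # prints 2
```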
The same numbers may also be generated via a pruning process in which the tree is simplified in a sequence of stages, where in each stage one removes all leaf nodes and all of the paths of degree-one nodes leading to leaves: the Strahler number of a node is the stage at which it would be removed by this process, and the Strahler number of a tree is the number of stages required to remove all of its nodes. Another equivalent definition of the Strahler number of a tree is that it is the height of the largest complete binary tree that can be homeomorphically embedded into the given tree; the Strahler number of a node in a tree is similarly the height of the largest complete binary tree that can be embedded below that node.
Any node with Strahler number i must have at least two descendants with Strahler number i − 1, at least four descendants with Strahler number i − 2, etc., and at least 2^(i − 1) leaf descendants. Therefore, in a tree with n nodes, the largest possible Strahler number is log₂(n + 1). However, unless the tree forms a complete binary tree, its Strahler number will be less than this bound. In an n-node binary tree, chosen uniformly at random among all possible binary trees, the expected index of the root is with high probability very close to log₄ n.
Applications
River networks
In the application of the Strahler stream order to hydrology, each segment of a stream or river within a river network is treated as a node in a tree, with the next segment downstream as its parent. When two first-order streams come together, they form a second-order stream. When two second-order streams come together, they form a third-order stream. Streams of lower order joining a higher order stream do not change the order of the higher stream. Thus, if a first-order stream joins a second-order stream, it remains a second-order stream. It is not until a second-order stream combines with another second-order stream that it becomes a third-order stream. As with mathematical trees, a segment with index i must be fed by at least 2^(i − 1) different tributaries of index 1. Shreve noted that Horton's and Strahler's Laws should be expected from any topologically random distribution. A later review of the relationships confirmed this argument, establishing that, from the properties the laws describe, no conclusion can be drawn to explain the structure or origin of the stream network.
To qualify as a stream a hydrological feature must be either recurring or perennial. Recurring (or "intermittent") streams have water in the channel for at least part of the year. The index of a stream or river may range from 1 (a stream with no tributaries) to 12 (globally the most powerful river, the Amazon, at its mouth). The Ohio River is of order eight and the Mississippi River is of order 10. Estimates are that 80% of the streams on the planet are first to third order headwater streams.
If the bifurcation ratio of a river network is high, then there is a higher chance of flooding. There would also be a lower time of concentration. The bifurcation ratio can also show which parts of a drainage basin are more likely to flood, comparatively, by looking at the separate ratios. Most British rivers have a bifurcation ratio of between 3 and 5.
A method to compute Strahler stream order values in a GIS application has been described and is implemented by RivEX, an ESRI ArcGIS Pro 3.3.x tool. The input to the algorithm is a network of the centre lines of the bodies of water, represented as arcs (or edges) joined at nodes. Lake boundaries and river banks should not be used as arcs, as these will generally form a non-tree network with an incorrect topology.
Alternative stream ordering systems have been developed by Shreve and Hodgkinson et al. A statistical comparison of Strahler and Shreve systems, together with an analysis of stream/link lengths, is given by Smart.
Other hierarchical systems
The Strahler numbering may be applied in the statistical analysis of any hierarchical system, not just to rivers.
The Horton–Strahler index has also been applied in the analysis of social networks.
A variant of Strahler numbering (starting with zero at the leaves instead of one), called tree-rank, has been applied to the analysis of L-systems.
Strahler numbering has also been applied to biological hierarchies such as the branching structures of trees and of animal respiratory and circulatory systems.
Register allocation
When translating a high-level programming language to assembly language the minimum number of registers required to evaluate an expression tree is exactly its Strahler number. In this context, the Strahler number may also be called the register number.
For expression trees that require more registers than are available, the Sethi–Ullman algorithm may be used to translate an expression tree into a sequence of machine instructions that uses the registers as efficiently as possible, minimizing the number of times intermediate values are spilled from registers to main memory and the total number of instructions in the resulting compiled code.
Related parameters
Bifurcation ratio
Associated with the Strahler numbers of a tree are bifurcation ratios, numbers describing how close to balanced a tree is. For each order i in a hierarchy, the ith bifurcation ratio is
b_i = n_i / n_(i+1),
where n_i denotes the number of nodes with order i.
The bifurcation ratio of an overall hierarchy may be taken by averaging the bifurcation ratios at different orders. In a complete binary tree, the bifurcation ratio will be 2, while other trees will have larger bifurcation ratios. It is a dimensionless number.
Pathwidth
The pathwidth of an arbitrary undirected graph G may be defined as the smallest number w such that there exists an interval graph H containing G as a subgraph, with the largest clique in H having w + 1 vertices. For trees (viewed as undirected graphs by forgetting their orientation and root) the pathwidth differs from the Strahler number, but is closely related to it: in a tree with pathwidth w and Strahler number s, these two numbers are related by the inequalities
w ≤ s ≤ 2w + 2.
The ability to handle graphs with cycles and not just trees gives pathwidth extra versatility compared to the Strahler number.
However, unlike the Strahler number, the pathwidth is defined only for the whole graph, and not separately for each node in the graph.
See also
Main stem of a river, typically found by following the branch with the highest Strahler number
Pfafstetter Coding System
Notes
References
Hydrology
Geomorphology
Physical geography
Trees (graph theory)
Graph invariants | Strahler number | [
"Chemistry",
"Mathematics",
"Engineering",
"Environmental_science"
] | 1,773 | [
"Hydrology",
"Graph theory",
"Graph invariants",
"Mathematical relations",
"Environmental engineering"
] |
5,434,119 | https://en.wikipedia.org/wiki/Misalignment%20mechanism | It is a well known fact that a quarter of the energy density of the universe is in the form of dark matter (DM). One can corroborate the presence of DM by alluding to the observational data such as anisotropies in Cosmic Microwave Background (CMB) radiation and the formation of Large scale structure in the universe. There are various schools of thought with differing positions on the nature of DM, but they mostly converge on the fact that the mass of DM lies within the range of eV to .
Such light-weight, spinless DM, with no or little self-interaction between themselves
is described by the classical scalar field. Axion is the example of field-like DM.
The interaction of axions with the other particles is assumed to be too weak for axions to reach thermal equilibrium with the rest of the early universe plasma, implying that they were produced non-thermally. The production mechanism of such particles is the vacuum misalignment mechanism which is a hypothesized effect in the Peccei–Quinn theory proposed solution to the strong-CP problem in quantum mechanics. The effect occurs when a particle's field has an initial value that is not at or near a potential minimum. This causes the particle's field to oscillate around the nearest minimum, eventually dissipating energy by decaying into other particles until the minimum is attained.
In the case of hypothesized axions created in the early universe, the initial field values are random because the axions are massless in the high-temperature plasma. Near the critical temperature of quantum chromodynamics, axions acquire a temperature-dependent mass, and the field enters a damped oscillation until the potential minimum is reached.
There are other production mechanisms for cold DM axions, but the misalignment mechanism is the least model-dependent, provided that the Hubble parameter is much greater than the axion mass well before matter-radiation equality. The expansion of the universe acts as a friction term, freezing the axion amplitude at a constant value.
The action of the minimally coupled scalar field theory is given by
$S = \int d^4x\, \sqrt{-g}\,\left[\tfrac{1}{2}\, g^{\mu\nu}\, \partial_\mu\phi\, \partial_\nu\phi - V(\phi)\right],$
where $g$ is the determinant of the FLRW metric $g_{\mu\nu}$. The dynamics of the field $\phi$ obey a Klein-Gordon equation in a homogeneous and isotropic space-time, whose scale factor $a(t)$ evolves as determined by the Hubble parameter $H = \dot{a}/a$. Near the minimum of its potential, where $V(\phi) \approx \tfrac{1}{2} m^2 \phi^2$, the field behaves cosmologically as a damped harmonic oscillator:
$\ddot{\phi} + 3H\dot{\phi} + m^2\phi = 0.$
Due to the expansion of the universe, once $H$ has dropped below $m$, the damping becomes undercritical and the field can roll down and start oscillating near the bottom of the potential. In this regime the solution of the field equation can be deduced by the WKB approximation.
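The freeze-then-oscillate behaviour described above can be illustrated by integrating the damped-oscillator equation numerically. The sketch below assumes a toy radiation-dominated background with H = 1/(2t) and arbitrary units; the mass, time span and initial value are illustrative choices only.

```python
# Toy integration of  phi'' + 3 H phi' + m^2 phi = 0  in a radiation-like background.
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0                      # axion mass (arbitrary units)

def hubble(t):
    return 1.0 / (2.0 * t)   # radiation-dominated toy background, H = 1/(2t)

def rhs(t, y):
    phi, dphi = y
    return [dphi, -3.0 * hubble(t) * dphi - m**2 * phi]

# Start when H >> m: the field is frozen at its misaligned value phi_i.
phi_i = 1.0
sol = solve_ivp(rhs, (0.01, 100.0), [phi_i, 0.0], max_step=0.05)

# Early on phi stays near phi_i; once H drops below ~m it oscillates and decays.
print("phi at early times:", sol.y[0][:3])
print("phi near the end:  ", sol.y[0][-3:])
```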
The energy density of these fields dilutes with the scale factor. It can be shown that the oscillating axion field contributes a fraction $\Omega_a$ of the critical density, with a value that depends on the axion mass and on the initial misalignment amplitude.
The φ oscillations, which can be interpreted as a set of particles, therefore have the red shifting behavior of (non-relativistic) matter, making this a suitable dark matter candidate.
References
Physics beyond the Standard Model
Astroparticle physics | Misalignment mechanism | [
"Physics"
] | 729 | [
"Astroparticle physics",
"Unsolved problems in physics",
"Astrophysics",
"Particle physics",
"Physics beyond the Standard Model"
] |
5,434,910 | https://en.wikipedia.org/wiki/Bonner%20sphere | A Bonner sphere is a device used to determine the energy spectrum of a neutron beam. The method was first described in 1960 by Rice University's Bramblett, Ewing and Tom W. Bonner and employs thermal neutron detectors embedded in moderating spheres of different sizes. Comparison of the neutrons detected by each sphere allows accurate determination of the neutron energy. This detector system utilizes a few channel unfolding techniques to determine the coarse, few group neutron spectrum. The original detector system was capable of measuring neutrons between thermal energies up to ~20 MeV. These detectors have been modified to provide additional resolution above 20 MeV to energies up to 1 GeV.
Bonner sphere spectroscopy
Because of the complexity with which neutrons interact with the environment, precise determination of the neutron energy is quite difficult. Bonner sphere spectroscopy (BSS) is one of the few methods that provide an accurate measure of the neutron spectrum.
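As a rough illustration of few-channel unfolding, the count rates of the sphere set can be modelled as a response matrix acting on a coarse group spectrum and inverted under a non-negativity constraint. The response matrix and count values below are invented for the example; real unfolding codes use measured or calculated response functions and more elaborate regularisation.

```python
# Toy few-channel unfolding: counts ≈ R @ flux, solved as non-negative least squares.
import numpy as np
from scipy.optimize import nnls

# Rows: spheres of increasing moderator diameter; columns: coarse energy groups
# (thermal, intermediate, fast).  Entries are assumed relative sensitivities.
R = np.array([[0.9, 0.3, 0.05],
              [0.5, 0.8, 0.30],
              [0.1, 0.6, 0.90]])

counts = np.array([120.0, 150.0, 110.0])     # measured counts per sphere (toy data)

flux, residual = nnls(R, counts)             # group fluences, constrained >= 0
print("unfolded group fluences:", flux)
print("fit residual:", residual)
```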
Remball
A single Bonner sphere of an appropriate size can be used for dosimetry, as the sensitivity of the detector will approximate the radiation weighting factor across a range of neutron energies. Such Bonner spheres are sometimes known as a remball.
See also
Neutron detection
References
Particle detectors
Spectrometers
Neutron instrumentation
Neutron-related techniques | Bonner sphere | [
"Physics",
"Chemistry",
"Astronomy",
"Technology",
"Engineering"
] | 256 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Astronomy stubs",
"Measuring instruments",
"Spectrometers",
"Particle detectors",
"Neutron instrumentation",
"Particle physics",
"Particle physics stubs",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
2,956,909 | https://en.wikipedia.org/wiki/Nicolaou%20Taxol%20total%20synthesis | The Nicolaou Taxol total synthesis, published by K. C. Nicolaou and his group in 1994 concerns the total synthesis of taxol. Taxol is an important drug in the treatment of cancer but also expensive because the compound is harvested from a scarce resource, namely the pacific yew.
This synthetic route to taxol is one of several; other groups have presented their own solutions, notably the group of Holton with a linear synthesis starting from borneol, the Samuel Danishefsky group starting from the Wieland-Miescher ketone and the Wender group from pinene.
The Nicolaou synthesis is an example of convergent synthesis because the molecule is assembled from three pre-assembled synthons. Two major parts are cyclohexene rings A and C that are connected by two short bridges creating an 8-membered ring in the middle (ring B). The third pre-assembled part is an amide tail. Ring D is an oxetane ring fused to ring C. Two key chemical transformations are the Shapiro reaction and the pinacol coupling reaction.
The overall synthesis was published in 1995 in a series of four papers.
Retrosynthesis
As illustrated in Retrosynthetic Scheme I, Taxol was derived from diol 7.2 by an ester bond formation, according to the Ojima-Holton method. This diol comes from carbonate 6.3 by the addition of phenyllithium. The oxetane ring in compound 6.3 was obtained via an SN2 reaction involving a mesylate derived from acetal 4.9. Ring B was closed via a McMurry reaction involving dialdehyde 4.8 which ultimately was derived from aldehyde 4.2 and hydrazone 3.6 using a Shapiro coupling reaction.
Retrosynthetic Scheme II indicates that both the aldehyde and the hydrazone used in the Shapiro coupling reaction were synthesized using Diels-Alder reactions.
C Ring synthesis
As shown in Scheme 1, the ring synthesis of ring C began with a Diels-Alder reaction between diene 1.3 and dienophile 1.1 in the presence of phenylboronic acid (1.2), which, after addition of 2,2-dimethyl-1,3-propanediol, gave five-membered lactone 1.8 in 62% yield. Boron served as a molecular tether and aligned both diene and dienophile for this endo Diels-Alder cycloaddition. After protection of the hydroxyl groups as tert-butyldimethylsilyl ethers, reduction of the ester with lithium aluminium hydride and selective deprotection of the secondary hydroxyl group gave lactone diol 1.11. The unusual lactone hydrates 1.9 and 1.10 were isolated as synthetic intermediates in this process.
Lactone diol 2.1, after selective protection, was reduced with lithium aluminium hydride to give triol 2.4. This triol, after conversion to the acetonide, was selectively oxidized to the aldehyde using tetrapropylammonium perruthenate (TPAP) and N-methylmorpholine N-oxide. Aldehyde 2.6 served as a starting point for the construction of ring B (Scheme 4, compound 4.2).
A ring synthesis
The A ring synthesis (Scheme 3) started with a Diels-Alder reaction of diene 3.1 with the commercially available dienophile 2-chloroacrylonitrile 3.2 to give cyclohexene 3.3 with complete regioselectivity. Hydrolysis of the cyanochloro group and simultaneous cleavage of the acetate group led to hydroxyketone 3.4. The hydroxyl group was protected as a tert-butyldimethylsilyl ether (3.5). In preparation for a Shapiro reaction, this ketone was converted to hydrazone 3.6.
B ring synthesis
The coupling of ring A and ring C created the 8 membered B ring. One connection was made via a nucleophilic addition of a vinyllithium compound to an aldehyde and the other connection through a pinacol coupling reaction of two aldehydes (Scheme 4).
A Shapiro reaction of the vinyllithium compound derived from hydrazone 4.1 with aldehyde 4.2 makes the first connection that will become the B ring. The control of stereochemistry in 4.3 is thought to be derived from the relative hindrance of the Si face in the orientation shown on the right, due to the proximity of the axial methyl group. Epoxidation with vanadyl(acetylacetate) converted alkene 4.3 to epoxide 4.4, which, upon reduction with lithium aluminium hydride, gave diol 4.5. This diol was then protected as carbonate ester 4.6. The carbonate group also served to create rigidity in the ring structure for the imminent pinacol coupling reaction. The two silyl ether groups were removed, and diol 4.7 was then oxidized to give dialdehyde 4.8 using N-methylmorpholine N-oxide in the presence of a catalytic amount of tetrapropylammonium perruthenate. In the final step of the formation of Ring B, a pinacol coupling using conditions developed by McMurry (titanium(III) chloride and a zinc/copper alloy) gave diol 4.9.
Resolution
At this point in the synthesis of Taxol, the material was a racemic mixture. To obtain the desired enantiomer, allylic alcohol 4.9 was acylated with (1S)-(−)-camphanic chloride and dimethylaminopyridine, giving two diastereomers. These were then separated using standard column chromatography. The desired enantiomer was then isolated when one of the separated diastereomers was treated with potassium bicarbonate in methanol.
D ring synthesis
The desired enantiomer from resolution, allylic alcohol 5.1 (Scheme 5) was acetylated with acetic anhydride and 4-(dimethylamino)pyridine in methylene chloride to yield monoacetate 5.2. It is noteworthy that this reaction was exclusive for the allylic alcohol, and the adjacent hydroxyl group was not acetylated. Alcohol 5.2 was oxidized with tetrapropylammonium perruthenate and N-methylmorpholine N-oxide to give ketone 5.3. Alkene 5.3 underwent hydroboration in tetrahydrofuran. Oxidation with basic hydrogen peroxide and sodium bicarbonate gave alcohol 5.4 in 35% yield, with 15% yield of a regioisomer. The acetonide was removed, giving triol 5.5. This alcohol was monoacetylated, to give acetate 5.6. The benzyl group was removed and replaced with a triethylsilyl group. Diol 5.7 was selectively activated using methanesulfonyl chloride and 4-(dimethylamino)pyridine to give mesylate 5.8, in 78% yield.
The acetyl group in 6.1 (Scheme 6) was removed to give primary alcohol 6.2. The Taxol ring (D) was added by an intramolecular nucleophilic substitution involving this hydroxyl group to give oxetane 6.3. After acetylation, phenyllithium was used to open the carbonate ester ring to give alcohol 6.5. Allylic oxidation with pyridinium chlorochromate, sodium acetate, and celite gave ketone 6.6, which was subsequently reduced using sodium borohydride to give secondary alcohol 6.7. This was the last compound before the addition of the amide tail.
Tail addition
As shown in Scheme 7, Ojima lactam 7.1 reacted with alcohol 7.2 with sodium bis(trimethylsilyl)amide as a base. This alcohol is the triethylsilyl ether of the naturally occurring compound baccatin III. The related compound, 10-deacetylbaccatin III, is found in Taxus baccata, also known as the European Yew, in concentrations of 1 gram per kilogram leaves. Removal of the triethylsilyl protecting group gave Taxol.
Precursor synthesis
Synthesis of the Diels-Alder dienophile for Ring C
The ethyl ester of propionic acid (1) was brominated and then converted to the Wittig reagent using triphenylphosphine. Aldehyde 6 was obtained from allyl alcohol (4) by protection as the tert-butyldiphenylsilyl ether (5) followed by ozonolysis. Wittig reagent 3 and aldehyde 6 reacted in a Wittig reaction to give unsaturated ester 7, which was deprotected to give dienophile 8 (Scheme 1, compound 1).
Synthesis of the Diels-Alder diene for Ring A
Aldol condensation of acetone and ethyl acetoacetate gave β-keto-ester 3. A Grignard reaction involving methylmagnesium bromide provided alcohol 4, which was subjected to acid catalyzed elimination to give diene 5. Reduction and acetylation gave diene 7 (Scheme 3, compound 1).
Protecting groups
The synthesis makes use of various protecting groups.
See also
Paclitaxel total synthesis
Danishefsky Taxol total synthesis
Holton Taxol total synthesis
Kuwajima Taxol total synthesis
Mukaiyama Taxol total synthesis
Wender Taxol total synthesis
External links
Nicolaou Taxol Synthesis @ SynArchive.com
Taxol in dynamic 3D
References
Total synthesis
Scripps Research
Taxanes | Nicolaou Taxol total synthesis | [
"Chemistry"
] | 2,110 | [
"Total synthesis",
"Chemical synthesis"
] |
2,957,626 | https://en.wikipedia.org/wiki/Pimaric%20acid | Pimaric acid is a carboxylic acid that is classified as a resin acid. It is a major component of the rosin obtained from pine trees.
When heated above 100 °C, pimaric acid converts to abietic acid, which it usually accompanies in mixtures like rosin.
It is soluble in alcohols, acetone, and ethers. The compound is colorless, but almost invariably samples are yellow or brown owing to air oxidation. As a mixture with abietic acid, it is often hydrogenated, esterified, or otherwise modified to produce materials of commerce.
See also
Isopimaric acid
References
Carboxylic acids
Diterpenes
Phenanthrenes
Vinyl compounds | Pimaric acid | [
"Chemistry"
] | 149 | [
"Carboxylic acids",
"Functional groups",
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
2,957,962 | https://en.wikipedia.org/wiki/Quenching%20%28fluorescence%29 | In chemistry, quenching refers to any process which decreases the fluorescent intensity of a given substance. A variety of processes can result in quenching, such as excited state reactions, energy transfer, complex-formation and collisions. As a consequence, quenching is often heavily dependent on pressure and temperature. Molecular oxygen, iodine ions and acrylamide are common chemical quenchers. The chloride ion is a well known quencher for quinine fluorescence. Quenching poses a problem for non-instant spectroscopic methods, such as laser-induced fluorescence.
Quenching is made use of in optode sensors; for instance the quenching effect of oxygen on certain ruthenium complexes allows the measurement of oxygen saturation in solution. Quenching is the basis for Förster resonance energy transfer (FRET) assays. Quenching and dequenching upon interaction with a specific molecular biological target is the basis for activatable optical contrast agents for molecular imaging. Many dyes undergo self-quenching, which can decrease the brightness of protein-dye conjugates for fluorescence microscopy, or can be harnessed in sensors of proteolysis.
Mechanisms
Förster resonance energy transfer
There are a few distinct mechanisms by which energy can be transferred non-radiatively (without absorption or emission of photons) between two dyes, a donor and an acceptor. Förster resonance energy transfer (FRET or FET) is a dynamic quenching mechanism because energy transfer occurs while the donor is in the excited state. FRET is based on classical dipole-dipole interactions between the transition dipoles of the donor and acceptor and is extremely dependent on the donor-acceptor distance, R, falling off at a rate of 1/R6. FRET also depends on the donor-acceptor spectral overlap (see figure) and the relative orientation of the donor and acceptor transition dipole moments. FRET can typically occur over distances up to 100 Å.
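The steep 1/R^6 distance dependence can be illustrated with the standard expression for the transfer efficiency, E = 1/(1 + (R/R0)^6); the Förster radius of 50 Å assumed below is only a typical order-of-magnitude value for a donor-acceptor pair, not a property of any specific dye.

```python
# Illustration of the 1/R^6 distance dependence of FRET efficiency.
R0 = 50.0   # Förster radius in angstroms (assumed, illustrative value)

def fret_efficiency(R, R0=R0):
    """Transfer efficiency E = 1 / (1 + (R/R0)^6)."""
    return 1.0 / (1.0 + (R / R0) ** 6)

for R in (25, 40, 50, 60, 75, 100):   # donor-acceptor separations in angstroms
    print(f"R = {R:3d} A  ->  E = {fret_efficiency(R):.3f}")
```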
Dexter electron transfer
Dexter (also known as Dexter exchange or collisional energy transfer, colloquially known as Dexter Energy Transfer) is another dynamic quenching mechanism. Dexter electron transfer is a short-range phenomenon that falls off exponentially with distance (proportional to e−kR where k is a constant that depends on the inverse of the van der Waals radius of the atom) and depends on spatial overlap of donor and quencher molecular orbitals. In most donor-fluorophore–quencher-acceptor situations, the Förster mechanism is more important than the Dexter mechanism. With both Förster and Dexter energy transfer, the shapes of the absorption and fluorescence spectra of the dyes are unchanged.
Dexter electron transfer can be significant between the dye and the solvent especially when hydrogen bonds are formed between them.
Exciplex
Exciplex (excited state complex) formation is a third dynamic quenching mechanism.
Static quenching
The remaining energy transfer mechanism is static quenching (also referred to as contact quenching). Static quenching can be a dominant mechanism for some reporter-quencher probes. Unlike dynamic quenching, static quenching occurs when the molecules form a complex in the ground state, i.e. before excitation occurs. The complex has its own unique properties, such as being nonfluorescent and having a unique absorption spectrum. Dye aggregation is often due to hydrophobic effects—the dye molecules stack together to minimize contact with water. Planar aromatic dyes that are matched for association through hydrophobic forces can enhance static quenching. High temperatures and addition of surfactants tend to disrupt ground state complex formation.
Collisional quenching
Collisional quenching occurs when the excited fluorophore experiences contact with an atom or molecule that can facilitate non-radiative transitions to the ground state: the excited-state molecule collides with a quencher molecule and returns to the ground state non-radiatively.
See also
Dark quencher, for use in molecular biology.
Förster resonance energy transfer, a phenomenon on which some quenching techniques rely
References
Fluorescence
Reaction mechanisms | Quenching (fluorescence) | [
"Chemistry"
] | 850 | [
"Reaction mechanisms",
"Luminescence",
"Fluorescence",
"Physical organic chemistry",
"Chemical kinetics"
] |
2,962,357 | https://en.wikipedia.org/wiki/Relativistic%20electromagnetism | Relativistic electromagnetism is a physical phenomenon explained in electromagnetic field theory due to Coulomb's law and Lorentz transformations.
Electromechanics
After Maxwell proposed the differential equation model of the electromagnetic field in 1873, the mechanism of action of fields came into question, for instance in Kelvin's master class held at Johns Hopkins University in 1884 and commemorated a century later.
The requirement that the equations remain consistent when viewed from various moving observers led to special relativity, a geometric theory of 4-space where intermediation is by light and radiation. The spacetime geometry provided a context for technical description of electric technology, especially generators, motors, and lighting at first. The Coulomb force was generalized to the Lorentz force. For example, with this model transmission lines and power grids were developed and radio frequency communication explored.
An effort to mount a full-fledged electromechanics on a relativistic basis is seen in the work of Leigh Page, from the project outline in 1912 to his textbook Electrodynamics (1940). The interplay (according to the differential equations) of electric and magnetic fields as viewed by moving observers is examined. What is charge density in electrostatics becomes proper charge density and generates a magnetic field for a moving observer.
A revival of interest in this method for education and training of electrical and electronics engineers broke out in the 1960s after Richard Feynman's textbook.
Rosser's book Classical Electromagnetism via Relativity was popular, as was Anthony French's treatment in his textbook which illustrated diagrammatically the proper charge density. One author proclaimed, "Maxwell — Out of Newton, Coulomb, and Einstein".
The use of retarded potentials to describe electromagnetic fields from source-charges is an expression of relativistic electromagnetism.
Principle
The question of how an electric field in one inertial frame of reference looks in different reference frames moving with respect to the first is crucial to understanding fields created by moving sources. In the special case, the sources that create the field are at rest with respect to one of the reference frames. Given the electric field in the frame where the sources are at rest, one can ask: what is the electric field in some other frame? Knowing the electric field at some point (in space and time) in the rest frame of the sources, and knowing the relative velocity of the two frames provided all the information needed to calculate the electric field at the same point in the other frame. In other words, the electric field in the other frame does not depend on the particular distribution of the source charges, only on the local value of the electric field in the first frame at that point. Thus, the electric field is a complete representation of the influence of the far-away charges.
Alternatively, introductory treatments of magnetism introduce the Biot–Savart law, which describes the magnetic field associated with an electric current. An observer at rest with respect to a system of static, free charges will see no magnetic field. However, a moving observer looking at the same set of charges does perceive a current, and thus a magnetic field. That is, the magnetic field is simply the electric field, as seen in a moving coordinate system.
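This statement can be made explicit with the standard Lorentz transformation of the field components (SI units; primed quantities refer to a frame moving with velocity v relative to the unprimed frame, with components resolved parallel and perpendicular to v):

```latex
\mathbf{E}'_{\parallel} = \mathbf{E}_{\parallel}, \qquad
\mathbf{B}'_{\parallel} = \mathbf{B}_{\parallel},
\qquad
\mathbf{E}'_{\perp} = \gamma\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)_{\perp}, \qquad
\mathbf{B}'_{\perp} = \gamma\left(\mathbf{B} - \frac{\mathbf{v}\times\mathbf{E}}{c^{2}}\right)_{\perp},
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
```

A set of charges that is static in the unprimed frame thus produces a purely electric field there, while the moving observer sees the additional magnetic term above.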
Redundancy
The title of this article is redundant since all mathematical theories of electromagnetism are relativistic.
Indeed, as Einstein wrote, "The special theory of relativity ... was simply a systematic development of the electrodynamics of Clerk Maxwell and Lorentz".
Combination of spatial and temporal variables in Maxwell's theory required admission of a four-manifold. Finite light speed and other constant motion lines were described with analytic geometry. Orthogonality of electric and magnetic vector fields in space was extended by hyperbolic orthogonality for the temporal factor.
When Ludwik Silberstein published his textbook The Theory of Relativity (1914) he related the new geometry to electromagnetism. Faraday's law of induction was suggestive to Einstein when he wrote in 1905 about the "reciprocal electrodynamic action of a magnet and a conductor".
Nevertheless, the aspiration, reflected in references for this article, is for an analytic geometry of spacetime and charges providing a deductive route to forces and currents in practice. Such a royal route to electromagnetic understanding may be lacking, but a path has been opened with differential geometry: The tangent space at an event in spacetime is a four-dimensional vector space, operable by linear transformations. Symmetries observed by electricians find expression in linear algebra and differential geometry. Using exterior algebra to construct a 2-form F from electric and magnetic fields, and the implied dual 2-form ★F, the equations $dF = 0$ and $d{\star}F = {\star}J$ (where $J$ represents the current) express Maxwell's theory with a differential form approach.
See also
Covariant formulation of classical electromagnetism
Special relativity
Liénard–Wiechert potential
Moving magnet and conductor problem
Wheeler–Feynman absorber theory
Paradox of a charge in a gravitational field
Notes and references
Further reading
Electromagnetism
Electromagnetism | Relativistic electromagnetism | [
"Physics"
] | 1,034 | [
"Electromagnetism",
"Physical phenomena",
"Special relativity",
"Fundamental interactions",
"Theory of relativity"
] |
11,708,593 | https://en.wikipedia.org/wiki/Interferometric%20microscopy | Interferometric microscopy or imaging interferometric microscopy is the concept of microscopy which
is related to holography, synthetic-aperture imaging, and off-axis-dark-field illumination techniques.
Interferometric microscopy allows enhancement of resolution of optical microscopy due to interferometric (holographic)
registration of several partial images (amplitude and phase) and the numerical combining.
Combining of partial images
In interferometric microscopy, the image of a micro-object is synthesized numerically as a coherent combination of partial images with registered amplitude and phase.
For registration of partial images, a conventional holographic set-up is used with a reference wave, as is usual in optical holography. Capturing multiple exposures allows the numerical emulation of a large numerical aperture objective from images obtained with an objective lens with smaller-value numerical aperture.
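As a toy numerical sketch of why registering the phase matters, the partial images below are random complex arrays standing in for holographically recorded amplitude-and-phase data; coherent combination sums the complex fields before taking the intensity, while incoherent combination sums intensities and discards the phase.

```python
# Coherent vs. incoherent combination of partial images (toy complex fields).
import numpy as np

rng = np.random.default_rng(0)
partial_fields = [rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
                  for _ in range(4)]

# Coherent combination: sum the complex fields first, then take the intensity.
coherent_image = np.abs(sum(partial_fields)) ** 2

# Incoherent combination: sum the intensities, losing the phase information.
incoherent_image = sum(np.abs(f) ** 2 for f in partial_fields)

print(coherent_image.mean(), incoherent_image.mean())
```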
Similar techniques allow scanning and precise detection of small particles.
As the combined image keeps both amplitude and phase information, interferometric microscopy can be especially efficient for phase objects, allowing detection of slight variations in the index of refraction, which shift the phase of the light passing through by only a small fraction of a radian.
Non-optical waves
Although interferometric microscopy has been demonstrated only for optical images (visible light), this technique may find application in high-resolution atom optics, or optics of neutral atom beams (see Atomic de Broglie microscope), where the numerical aperture is usually very limited.
See also
Digital holographic microscopy
Holography
Numerical aperture
Raman microscope
Diffraction limited
References
Microscopy
Interferometry
Atomic, molecular, and optical physics
Holography | Interferometric microscopy | [
"Physics",
"Chemistry"
] | 329 | [
"Atomic",
"Microscopy",
" molecular",
" and optical physics"
] |
11,709,087 | https://en.wikipedia.org/wiki/Turbine%20engine%20failure | A turbine engine failure occurs when a gas turbine engine unexpectedly stops producing power due to a malfunction other than fuel exhaustion. It often applies for aircraft, but other turbine engines can also fail, such as ground-based turbines used in power plants or combined diesel and gas vessels and vehicles.
Reliability
Turbine engines in use on today's turbine-powered aircraft are very reliable. Engines operate efficiently with regularly scheduled inspections and maintenance. These units can have lives ranging in the tens of thousands of hours of operation. However, engine malfunctions or failures occasionally occur that require an engine to be shut down in flight. Since multi-engine airplanes are designed to fly with one engine inoperative and flight crews are trained to fly with one engine inoperative, the in-flight shutdown of an engine typically does not constitute a serious safety of flight issue.
The Federal Aviation Administration (FAA) was quoted as stating that turbine engines have a failure rate of one per 375,000 flight hours, compared to one every 3,200 flight hours for aircraft piston engines.
Due to "gross under-reporting" of general aviation piston engine in-flight shutdowns (IFSD), the FAA has no reliable data and assessed the rate "between 1 per 1,000 and 1 per 10,000 flight hours".
Continental Motors reports that, according to the FAA, general aviation engines experience one failure or IFSD every 10,000 flight hours, and states that the rate for its Centurion engines is one per flight hours, lowering to one per flight hours in 2013–2014.
The General Electric GE90 has an in-flight shutdown rate (IFSD) of one per million engine flight-hours.
The Pratt & Whitney Canada PT6 is known for its reliability with an in-flight shutdown rate of one per hours from 1963 to 2016, lowering to one per hours over 12 months in 2016.
Emergency landing
Following an engine shutdown, a precautionary landing is usually performed with airport fire and rescue equipment positioned near the runway. The prompt landing is a precaution against the risk that another engine will fail later in the flight or that the engine failure that has already occurred may have caused or been caused by other as-yet unknown damage or malfunction of aircraft systems (such as fire or damage to aircraft flight controls) that may pose a continuing risk to the flight. Once the aircraft lands, fire department personnel assist with inspecting the aircraft to ensure it is safe before it taxis to its parking position.
Rotorcraft
Turboprop-powered aircraft and turboshaft-powered helicopters are also powered by turbine engines and are subject to engine failures for many similar reasons as jet-powered aircraft. In the case of an engine failure in a helicopter, it is often possible for the pilot to enter autorotation, using the unpowered rotor to slow the aircraft's descent and provide a measure of control, usually allowing for a safe emergency landing even without engine power.
Shutdowns that are not engine failures
Most in-flight shutdowns are harmless and likely to go unnoticed by passengers. For example, it may be prudent for the flight crew to shut down an engine and perform a precautionary landing in the event of a low oil pressure or high oil temperature warning in the cockpit. However, passengers in a jet powered aircraft may become quite alarmed by other engine events such as a compressor surge — a malfunction that is typified by loud bangs and even flames from the engine's inlet and tailpipe. A compressor surge is a disruption of the airflow through a gas turbine jet engine that can be caused by engine deterioration, a crosswind over the engine's inlet, ice accumulation around the engine inlet, ingestion of foreign material, or an internal component failure such as a broken blade. While this situation can be alarming, the engine may recover with no damage.
Other events that can happen with jet engines, such as a fuel control fault, can result in excess fuel in the engine's combustor. This additional fuel can result in flames extending from the engine's exhaust pipe. As alarming as this would appear, at no time is the engine itself actually on fire.
Also, the failure of certain components in the engine may result in a release of oil into bleed air that can cause an odor or oily mist in the cabin. This is known as a fume event. The dangers of fume events are the subject of debate in both aviation and medicine.
Possible causes
Engine failures can be caused by mechanical problems in the engine itself, such as damage to portions of the turbine or oil leaks, as well as damage outside the engine such as fuel pump problems or fuel contamination. A turbine engine failure can also be caused by entirely external factors, such as volcanic ash, bird strikes or weather conditions like precipitation or icing. Weather risks such as these can sometimes be countered through the usage of supplementary ignition or anti-icing systems.
Failures during takeoff
A turbine-powered aircraft's takeoff procedure is designed around ensuring that an engine failure will not endanger the flight. This is done by planning the takeoff around three critical V speeds, V1, VR and V2. V1 is the critical engine failure recognition speed, the speed at which a takeoff can be continued with an engine failure, and the speed at which stopping distance is no longer guaranteed in the event of a rejected takeoff. VR is the speed at which the nose is lifted off the runway, a process known as rotation. V2 is the single-engine safety speed, the single engine climb speed. The use of these speeds ensure that either sufficient thrust to continue the takeoff, or sufficient stopping distance to reject it will be available at all times.
Failure during extended operations
In order to allow twin-engined aircraft to fly longer routes that are over an hour from a suitable diversion airport, a set of rules known as ETOPS (Extended Twin-engine Operational Performance Standards) is used to ensure a twin turbine engine powered aircraft is able to safely arrive at a diversionary airport after an engine failure or shutdown, as well as to minimize the risk of a failure. ETOPS includes maintenance requirements, such as frequent and meticulously logged inspections and operation requirements such as flight crew training and ETOPS-specific procedures.
Contained and uncontained failures
Engine failures may be classified as either as "contained" or "uncontained".
A contained engine failure is one in which all internal rotating components remain within or embedded in the engine's case (including any containment wrapping that is part of the engine), or exit the engine through the tail pipe or air inlet.
An uncontained engine event occurs when an engine failure results in fragments of rotating engine parts penetrating and escaping through the engine case.
The very specific technical distinction between a contained and uncontained engine failure derives from regulatory requirements for design, testing, and certification of aircraft engines under Part 33 of the U.S. Federal Aviation Regulations, which has always required turbine aircraft engines to be designed to contain damage resulting from rotor blade failure. Under Part 33, engine manufacturers are required to perform blade off tests to ensure containment of shrapnel if blade separation occurs. Blade fragments exiting the inlet or exhaust can still pose a hazard to the aircraft, and this should be considered by the aircraft designers. A nominally contained engine failure can still result in engine parts departing the aircraft as long as the engine parts exit via the existing openings in the engine inlet or outlet, and do not create new openings in the engine case containment. Fan blade fragments departing via the inlet may also cause airframe parts such as the inlet duct and other parts of the engine nacelle to depart the aircraft due to deformation from the fan blade fragment's residual kinetic energy.
The containment of failed rotating parts is a complex process which involves high energy, high speed interactions of numerous locally and remotely located engine components (e.g., failed blade, other blades, containment structure, adjacent cases, bearings, bearing supports, shafts, vanes, and externally mounted components). Once the failure event starts, secondary events of a random nature may occur whose course and ultimate conclusion cannot be precisely predicted. Some of the structural interactions that have been observed to affect containment are the deformation and/or deflection of blades, cases, rotor, frame, inlet, casing rub strips, and the containment structure.
Uncontained turbine engine disk failures within an aircraft engine present a direct hazard to an airplane and its crew and passengers because high-energy disk fragments can penetrate the cabin or fuel tanks, damage flight control surfaces, or sever flammable fluid or hydraulic lines. Engine cases are not designed to contain failed turbine disks. Instead, the risk of uncontained disk failure is mitigated by designating disks as safety-critical parts, defined as the parts of an engine whose failure is likely to present a direct hazard to the aircraft.
Notable uncontained engine failure accidents
National Airlines Flight 27: a McDonnell Douglas DC-10 flying from Miami to San Francisco in 1973 had an overspeed failure of a General Electric CF6-6, resulting in one fatality.
Two LOT Polish Airlines flights, both Ilyushin Il-62s, suffered catastrophic uncontained engine failures in the 1980s. The first was in 1980 on LOT Polish Airlines Flight 7 where flight controls were destroyed, killing all 87 on board. In 1987, on LOT Polish Airlines Flight 5055, the failure of the aircraft's inner left (#2) engine damaged the outer left (#1) engine, setting both on fire and causing loss of flight controls, leading to a crash that killed all 183 people on board. In both cases, the turbine shaft in engine #2 disintegrated due to production defects in the engines' bearings, which were missing rollers.
The Tu-154 crash near Krasnoyarsk was a major aircraft accident that occurred on Sunday, December 23, 1984, in the vicinity of Krasnoyarsk. The Tu-154B-2 airliner of the 1st Krasnoyarsk united aviation unit (Aeroflot) was operating passenger flight SU-3519 on the Krasnoyarsk-Irkutsk route when engine No. 3 failed during the climb. The crew decided to return to the airport of departure, but during the landing approach a fire broke out that destroyed the control systems; as a result, the aircraft struck the ground 3200 meters from the threshold of the runway at Yemelyanovo airport and broke apart. Of the 111 people on board (104 passengers and 7 crew members), one survived. The cause of the catastrophe was the failure of the first-stage disk of the low-pressure section of engine No. 3, which broke up because of fatigue cracks. The cracks were caused by a manufacturing defect: an inclusion of a titanium-nitrogen compound with a higher microhardness than the surrounding material. The disk manufacturing, repair, and inspection methods in use at the time were found to be partly outdated and could not reliably detect such a defect. The defect itself probably arose from the accidental inclusion of a nitrogen-enriched piece in the titanium sponge or charge used to smelt the ingot.
Cameroon Airlines Flight 786: a Boeing 737 flying between Douala and Garoua, Cameroon in 1984 had a failure of a Pratt & Whitney JT8D-15 engine. Two people died.
British Airtours Flight 28M: a Boeing 737 flying from Manchester to Corfu in 1985 suffered an uncontained engine failure and fire on takeoff. The takeoff was aborted and the plane turned onto a taxiway and began evacuating. Fifty-five passengers and crew were unable to escape and died of smoke inhalation. The accident led to major changes to improve the survivability of aircraft evacuations.
United Airlines Flight 232: a McDonnell Douglas DC-10 flying from Denver to Chicago in 1989. The failure of the rear General Electric CF6-6 engine caused the loss of all hydraulics, forcing the pilots to attempt a landing using differential thrust. There were 111 fatalities. Prior to this crash, the probability of a simultaneous failure of all three hydraulic systems had been considered as low as one in a billion. However, the statistical models did not account for the position of the number-two engine, mounted at the tail close to the hydraulic lines, or for the effects of fragments released in many directions. Since then, aircraft engine designs have focused on keeping shrapnel from puncturing the cowling or ductwork, increasingly utilizing high-strength composite materials to achieve penetration resistance while keeping the weight low.
Baikal Airlines Flight 130: a starter of engine No. 2 on a Tu-154 heading from Irkutsk to Domodedovo, Moscow in 1994, failed to stop after engine startup and continued to operate at over 40,000 rpm with open bleed valves from engines, which caused an uncontained failure of the starter. A detached turbine disk damaged fuel and oil supply lines (which caused fire) and hydraulic lines. The fire-extinguishing system failed to stop the fire, and the plane diverted back to Irkutsk. However, due to loss of hydraulic pressure the crew lost control of the plane, which subsequently crashed into a dairy farm killing all 124 on board and one on the ground.
ValuJet 597: A DC-9-32 taking off from Hartsfield Jackson Atlanta International Airport on June 8, 1995, suffered an uncontained engine failure of the 7th stage high pressure compressor disk due to inadequate inspection of the corroded disk. The resulting rupture caused jet fuel to flow into the cabin and ignite, and the fire caused the jet to be a write-off.
Delta Air Lines Flight 1288: a McDonnell Douglas MD-88 flying from Pensacola, Florida to Atlanta in 1996 had a cracked compressor rotor hub failure on one of its Pratt & Whitney JT8D-219 engines. Two died.
TAM Flight 9755: a Fokker 100, departing Recife/Guararapes–Gilberto Freyre International Airport for São Paulo/Guarulhos International Airport on 15 September 2001, suffered an uncontained failure of a Rolls-Royce RB.183 Tay engine in which fragments of the engine shattered three cabin windows, causing decompression and pulling a passenger partly out of the plane. Another passenger held the victim in until the aircraft landed, but the passenger who had been blown partly out of the window died.
Qantas Flight 32: an Airbus A380 flying from London Heathrow to Sydney (via Singapore) in 2010 had an uncontained failure in a Rolls-Royce Trent 900 engine. The failure was found to have been caused by a misaligned counter bore within a stub oil pipe leading to a fatigue fracture. This in turn led to an oil leakage followed by an oil fire in the engine. The fire led to the release of the Intermediate Pressure Turbine (IPT) disc. The airplane, however, landed safely. This led to the grounding of the entire Qantas A380 fleet.
British Airways Flight 2276: a Boeing 777-200ER flying from Las Vegas to London in 2015 suffered an uncontained engine failure on its #1 GE90 engine during takeoff, resulting in a large fire on its port side. The aircraft successfully aborted takeoff and the plane was evacuated with no fatalities.
American Airlines Flight 383: a Boeing 767-300ER flying from Chicago to Miami in 2016 suffered an uncontained engine failure on its #2 engine (General Electric CF6) during takeoff resulting in a large fire which destroyed the outer right wing. The aircraft aborted takeoff and was evacuated with 21 minor injuries, but no fatalities.
Air France Flight 66: an Airbus A380, registration F-HPJE, performing a flight from Paris, France, to Los Angeles, United States, was en route southeast of Nuuk, Greenland, when it suffered a catastrophic engine failure in 2017 (a General Electric / Pratt & Whitney Engine Alliance GP7000). The crew descended the aircraft and diverted to Goose Bay, Canada, for a safe landing about two hours later.
References
This article contains text from a publication of the United States National Transportation Safety Board, which can be found here. As a work of the United States Federal Government, the source is in the public domain and may be adapted freely per USC Title 17, Chapter 1, §105 (see Wikipedia:Public Domain).
Turbines
Jet engines
Aviation safety
Aviation risks
Emergency aircraft operations
Aircraft engines | Turbine engine failure | [
"Chemistry",
"Technology"
] | 3,379 | [
"Engines",
"Turbomachinery",
"Turbines",
"Jet engines",
"Aircraft engines"
] |
11,710,718 | https://en.wikipedia.org/wiki/Well%20drainage | Well drainage means drainage of agricultural lands by wells. Agricultural land is drained by pumped wells (vertical drainage) to improve the soils by controlling water table levels and soil salinity.
Introduction
Subsurface (groundwater) drainage for water table and soil salinity in agricultural land can be done by horizontal and vertical drainage systems.
Horizontal drainage systems are drainage systems using open ditches (trenches) or buried pipe drains.
Vertical drainage systems are drainage systems using pumped wells, either open dug wells or tube wells.
Both systems serve the same purposes, namely water table control and soil salinity control.
Both systems can facilitate the reuse of drainage water (e.g. for irrigation), but wells offer more flexibility.
Reuse is only feasible if the quality of the groundwater is acceptable and the salinity is low.
Design
Although one well may be sufficient to solve groundwater and soil salinity problems in a few hectares, one usually needs a number of wells, because the problems may be widely spread.
The wells may be arranged in a triangular, square or rectangular pattern.
The design of the well field concerns depth, capacity, discharge, and spacing of the wells.
The discharge is found from a water balance.
The depth is selected in accordance to aquifer properties. The well filter must be placed in a permeable soil layer.
The spacing can be calculated with a well spacing equation using discharge, aquifer properties, well depth and optimal depth of the water table.
The determination of the optimum depth of the water table is the realm of drainage research.
Flow to wells
The basic, steady state, equation for flow to fully penetrating wells (i.e. wells reaching the impermeable base) in a regularly spaced well field in a uniform unconfined (phreatic) aquifer with a hydraulic conductivity that is isotropic is:
where Q = safe well discharge - i.e. the steady-state discharge at which no overdraught or groundwater depletion occurs - (m3/day), K = uniform hydraulic conductivity of the soil (m/day), D = depth below soil surface, Db = depth of the bottom of the well, equal to the depth of the impermeable base (m), Dm = depth of the water table midway between the wells (m), Dw = depth of the water level inside the well (m), Ri = radius of influence of the well (m) and Rw = radius of the well (m).
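A display consistent with this parameter list is the standard Dupuit–Thiem expression for radial flow to a fully penetrating well; the subscripted symbol names used here (Db, Dm, Dw, Ri, Rw) are editorial, introduced to match the definitions above rather than copied from the source.

```latex
Q \;=\; \frac{\pi K \left[ (D_b - D_m)^2 \;-\; (D_b - D_w)^2 \right]}{\ln\!\left( R_i / R_w \right)}
```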
The radius of influence of the wells depends on the pattern of the well field, which may be triangular, square, or rectangular. It can be found as:
where A = total surface area of the well field (m2) and N = number of wells in the well field.
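If each well is assumed to command an equal share A/N of the total area, so that the circle of influence satisfies πRi² = A/N, the radius of influence follows as below (the symbol A for the total area is editorial, matching the definition just given):

```latex
R_i \;=\; \sqrt{\frac{A}{\pi N}}
```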
The safe well discharge (Q) can also be found from:
where q is the safe yield or drainable surplus of the aquifer (m/day) and Fw is the operation intensity of the wells (hours/24 per day). Thus the basic equation can also be written as:
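A sketch of these two relations, using the editorial symbols introduced above (A for the well-field area and Fw for the operation intensity), would be:

```latex
Q \;=\; \frac{q\,A}{N\,F_w},
\qquad\text{so that}\qquad
\frac{q\,A}{N\,F_w} \;=\; \frac{\pi K \left[ (D_b - D_m)^2 - (D_b - D_w)^2 \right]}{\ln\!\left( R_i / R_w \right)} .
```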
Well spacing
With a well spacing equation one can calculate various design alternatives to arrive at the most attractive or economical solution for watertable control in agricultural land.
The basic flow equation cannot be used for determining the well spacing in a partially penetrating well-field in a non-uniform and anisotropic aquifer, but one needs a numerical solution of more complicated equations.
The costs of the most attractive solution can be compared with the costs of a horizontal drainage system - for which the drain spacing can be calculated with a drainage equation - serving the same purpose, to decide which system deserves preference.
The well design proper is described in the references.
An illustration of the parameters involved is shown in the figure. The hydraulic conductivity can be found from an aquifer test.
Software
The numerical computer program WellDrain for well spacing calculations takes into account fully and partially penetrating wells, layered aquifers, anisotropy (different vertical and horizontal hydraulic conductivity or permeability) and entrance resistance.
Modelling
With a groundwater model that includes the possibility to introduce wells, one can study the impact of a well drainage system on the hydrology of the project area. There are also models that give the opportunity to evaluate the water quality.
SahysMod is such a polygonal groundwater model permitting to assess the use of well water for irrigation, the effects on soil salinity and on depth of the water table.
References
External links
Salinity Control and Reclamation Program (SCARP) using wells in the Indus valley of Pakistan.
Website on waterlogging and land reclamation by horizontal and vertical drainage systems:
Drainage
Hydrology
Hydrogeology
Hydraulic engineering
Land management
Land reclamation
Water and the environment
de:Schluckbrunnen | Well drainage | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 974 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering",
"Hydrogeology"
] |
11,711,216 | https://en.wikipedia.org/wiki/Tinsel%20wire | Tinsel wire is a type of electrical wire used for applications that require high mechanical flexibility but low current-carrying capacity. Tinsel wire is commonly used in cords of telephones, handsets, headphones, and small electrical appliances. It is far more resistant to metal fatigue failure than either stranded wire or solid wire.
Construction
Tinsel wire is produced by wrapping several strands of thin metal foil around a flexible nylon or textile core. Because the foil is very thin, the bend radius imposed on the foil is much greater than the thickness of the foil, leading to a low probability of metal fatigue. Meanwhile, the core provides high tensile strength without impairing flexibility.
Typically, multiple tinsel wires are jacketed with an insulating layer to form one conductor. A cord is formed from several conductors in either a round profile or a flat cable.
Connections
Tinsel wire is commonly connected to equipment with crimped terminal lugs that pierce the insulation to make contact with the metal ribbons, rather than stripping insulation. Separated from the core, the individual ribbons are relatively fragile, and the core can be damaged by high temperatures. These factors make it difficult or impractical to terminate tinsel wire by soldering during equipment manufacture, although soldering is possible, with some difficulty, to repair a failed connection. However, the conductors tend to break at their junction with the rigid solder.
Applications
Tinsel wires or cords are used for telephony and audio applications in which frequent bending of electric cords occurs, such as for headsets and telephone handsets. It is also used in power cords for small appliances such as electric shavers or clocks, where stranded cable conductors of adequate mechanical size would be too stiff. Tinsel cords are recognized as type TPT or TST in the US and Canadian electrical codes, and are rated at 0.5 amperes.
Manufacturers and suppliers
Maeden
Dacon Systems, Inc.
Gavitt Wire & Cable Co., Inc.
See also
Litz wire
References
Electrical wiring
Telephony equipment | Tinsel wire | [
"Physics",
"Engineering"
] | 408 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
314,366 | https://en.wikipedia.org/wiki/H-infinity%20methods%20in%20control%20theory | H∞ (i.e. "H-infinity") methods are used in control theory to synthesize controllers to achieve stabilization with guaranteed performance. To use H∞ methods, a control designer expresses the control problem as a mathematical optimization problem and then finds the controller that solves this optimization. H∞ techniques have the advantage over classical control techniques in that H∞ techniques are readily applicable to problems involving multivariate systems with cross-coupling between channels; disadvantages of H∞ techniques include the level of mathematical understanding needed to apply them successfully and the need for a reasonably good model of the system to be controlled. It is important to keep in mind that the resulting controller is only optimal with respect to the prescribed cost function and does not necessarily represent the best controller in terms of the usual performance measures used to evaluate controllers such as settling time, energy expended, etc. Also, non-linear constraints such as saturation are generally not well-handled. These methods were introduced into control theory in the late 1970s-early 1980s
by George Zames (sensitivity minimization), J. William Helton (broadband matching),
and Allen Tannenbaum (gain margin optimization).
The phrase H∞ control comes from the name of the mathematical space over which the optimization takes place: H∞ is the Hardy space of matrix-valued functions that are analytic and bounded in the open right-half of the complex plane defined by Re(s) > 0; the H∞ norm is the supremum singular value of the matrix over that space. In the case of a scalar-valued function, the elements of the Hardy space that extend continuously to the boundary and are continuous at infinity is the disk algebra. For a matrix-valued function, the norm can be interpreted as a maximum gain in any direction and at any frequency; for SISO systems, this is effectively the maximum magnitude of the frequency response.
H∞ techniques can be used to minimize the closed loop impact of a perturbation: depending on the problem formulation, the impact will either be measured in terms of stabilization or performance. Simultaneously optimizing robust performance and robust stabilization is difficult. One method that comes close to achieving this is H∞ loop-shaping, which allows the control designer to apply classical loop-shaping concepts to the multivariable frequency response to get good robust performance, and then optimizes the response near the system bandwidth to achieve good robust stabilization.
Commercial software is available to support H∞ controller synthesis.
Problem formulation
First, the process has to be represented according to the following standard configuration:
The plant P has two inputs: the exogenous input w, which includes reference signals and disturbances, and the manipulated variables u. There are two outputs: the error signals z that we want to minimize, and the measured variables v that we use to control the system. v is used in K to calculate the manipulated variables u. Note that all of these are generally vectors, whereas P and K are matrices.
In formulae, the system is:
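In the standard notation for this configuration, the partitioned plant and the feedback law read:

```latex
\begin{bmatrix} z \\ v \end{bmatrix}
= P(s) \begin{bmatrix} w \\ u \end{bmatrix}
= \begin{bmatrix} P_{11}(s) & P_{12}(s) \\ P_{21}(s) & P_{22}(s) \end{bmatrix}
  \begin{bmatrix} w \\ u \end{bmatrix},
\qquad
u = K(s)\, v .
```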
It is therefore possible to express the dependency of z on w as:
This map, called the lower linear fractional transformation F_l(P, K), is defined as follows (the subscript l comes from lower):
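Eliminating u and v from the partitioned equations gives the standard closed-loop expression and the usual definition of the lower LFT:

```latex
z = F_{\ell}(P, K)\, w,
\qquad
F_{\ell}(P, K) = P_{11} + P_{12}\, K \left( I - P_{22} K \right)^{-1} P_{21} .
```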
Therefore, the objective of H∞ control design is to find a controller K such that F_l(P, K) is minimised according to the H∞ norm. The same definition applies to H2 control design. The infinity norm of the transfer function matrix F_l(P, K) is defined as:
where σ̄ denotes the maximum singular value of the matrix F_l(P, K)(jω).
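Written out, the standard definition of this norm for the closed-loop transfer matrix is:

```latex
\left\| F_{\ell}(P, K) \right\|_{\infty}
= \sup_{\omega} \, \bar{\sigma}\!\left( F_{\ell}(P, K)(j\omega) \right) .
```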
The achievable H∞ norm of the closed loop system is mainly given through the matrix D11 (when the system P is given in the form (A, B1, B2, C1, C2, D11, D12, D22, D21)). There are several ways to come to an H∞ controller:
A Youla–Kucera parametrization of the closed loop often leads to a very high-order controller.
Riccati-based approaches solve two Riccati equations to find the controller, but require several simplifying assumptions.
An optimization-based reformulation of the Riccati equation uses linear matrix inequalities and requires fewer assumptions.
See also
Blaschke product
Hardy space
H square
H-infinity loop-shaping
Linear-quadratic-Gaussian control (LQG)
Rosenbrock system matrix
References
Bibliography
Control theory
Hardy spaces | H-infinity methods in control theory | [
"Mathematics"
] | 886 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
314,402 | https://en.wikipedia.org/wiki/Liquid%20oxygen | Liquid oxygen, sometimes abbreviated as LOX or LOXygen, is a clear cyan liquid form of dioxygen . It was used as the oxidizer in the first liquid-fueled rocket invented in 1926 by Robert H. Goddard, an application which is ongoing.
Physical properties
Liquid oxygen has a clear cyan color and is strongly paramagnetic: it can be suspended between the poles of a powerful horseshoe magnet. Liquid oxygen has a density of 1.141 g/cm3, slightly denser than liquid water, and is cryogenic with a freezing point of 54.36 K (−218.79 °C) and a boiling point of 90.19 K (−182.96 °C) at 101.325 kPa. Liquid oxygen has an expansion ratio of 1:861 and because of this, it is used in some commercial and military aircraft as a transportable source of breathing oxygen.
Because of its cryogenic nature, liquid oxygen can cause the materials it touches to become extremely brittle. Liquid oxygen is also a very powerful oxidizing agent: organic materials will burn rapidly and energetically in liquid oxygen. Further, if soaked in liquid oxygen, some materials such as coal briquettes, carbon black, etc., can detonate unpredictably from sources of ignition such as flames, sparks or impact from light blows. Petrochemicals, including asphalt, often exhibit this behavior.
The tetraoxygen molecule (O4) was predicted in 1924 by Gilbert N. Lewis, who proposed it to explain why liquid oxygen defied Curie's law. Modern computer simulations indicate that, although there are no stable O4 molecules in liquid oxygen, O2 molecules do tend to associate in pairs with antiparallel spins, forming transient O4 units.
Liquid nitrogen has a lower boiling point at −196 °C (77 K) than oxygen's −183 °C (90 K), and vessels containing liquid nitrogen can condense oxygen from air: when most of the nitrogen has evaporated from such a vessel, there is a risk that liquid oxygen remaining can react violently with organic material. Conversely, liquid nitrogen or liquid air can be oxygen-enriched by letting it stand in open air; atmospheric oxygen dissolves in it, while nitrogen evaporates preferentially.
The surface tension of liquid oxygen at its normal pressure boiling point is .
Uses
In commerce, liquid oxygen is classified as an industrial gas and is widely used for industrial and medical purposes. Liquid oxygen is obtained from the oxygen found naturally in air by fractional distillation in a cryogenic air separation plant.
Air forces have long recognized the strategic importance of liquid oxygen, both as an oxidizer and as a supply of gaseous oxygen for breathing in hospitals and high-altitude aircraft flights. In 1985, the USAF started a program of building its own oxygen-generation facilities at all major consumption bases.
In rocket propellant
Liquid oxygen is the most common cryogenic liquid oxidizer propellant for spacecraft rocket applications, usually in combination with liquid hydrogen, kerosene or methane.
Liquid oxygen was used in the first liquid-fueled rocket. The World War II V-2 missile also used liquid oxygen under the names A-Stoff and Sauerstoff. In the 1950s, during the Cold War, both the United States' Redstone and Atlas rockets and the Soviet R-7 Semyorka used liquid oxygen. Later, in the 1960s and 1970s, the ascent stages of the Apollo Saturn rockets and the Space Shuttle main engines used liquid oxygen.
As of 2024, many active rockets use liquid oxygen:
Chinese space program
CASC: Long March 5, Long March 6, Long March 7, Long March 8, Long March 12, Long March 9 (under development), Long March 10 (under development)
Galactic Energy: Pallas-1 (under development)
i-Space: Hyperbola-3 (under development)
LandSpace: Zhuque-2
Orienspace: Gravity-2 (under development)
Space Pioneer: Tianlong-2
European Space Agency: Ariane 6
Indian Space Research Organisation: GSLV
JAXA (Japan): H-IIA, H3
Korea Aerospace Research Institute: Naro-1, Nuri
Roscosmos (Russia): Soyuz-2, Angara
United States
Blue Origin: New Shepard, New Glenn (under development)
Firefly Aerospace: Firefly Alpha
NASA: Space Launch System
Northrop Grumman: Antares 300 (under development)
Rocket Lab: Electron, Neutron (under development)
SpaceX: Falcon 9, Falcon Heavy, Starship
United Launch Alliance: Atlas V, Vulcan
History
By 1845, Michael Faraday had managed to liquefy most gases then known to exist. Six gases, however, resisted every attempt at liquefaction and were known at the time as "permanent gases". They were oxygen, hydrogen, nitrogen, carbon monoxide, methane, and nitric oxide.
In 1877, Louis Paul Cailletet in France and Raoul Pictet in Switzerland succeeded in producing the first droplets of liquid air.
In 1883, Polish professors Zygmunt Wróblewski and Karol Olszewski produced the first measurable quantity of liquid oxygen.
See also
Oxygen storage
Industrial gas
Cryogenics
Liquid hydrogen
Liquid helium
Liquid nitrogen
List of stoffs
Natterer compressor
Rocket fuel
Solid oxygen
Tetraoxygen
References
Further reading
Rocket oxidizers
Cryogenics
Oxygen
Industrial gases
Liquids
1883 in science | Liquid oxygen | [
"Physics",
"Chemistry"
] | 1,082 | [
"Applied and interdisciplinary physics",
"Phases of matter",
"Cryogenics",
"Oxidizing agents",
"Rocket oxidizers",
"Industrial gases",
"Chemical process engineering",
"Matter",
"Liquids"
] |
314,647 | https://en.wikipedia.org/wiki/GROMOS | GROningen MOlecular Simulation (GROMOS) is the name of a force field for molecular dynamics simulation, and a related computer software package. Both are developed at the University of Groningen, and at the Computer-Aided Chemistry Group at the Laboratory for Physical Chemistry at the Swiss Federal Institute of Technology (ETH Zurich). At Groningen, Herman Berendsen was involved in its development.
The united atom force field was optimized with respect to the condensed phase properties of alkanes.
Versions
GROMOS87
Aliphatic and aromatic hydrogen atoms were included implicitly by representing the carbon atom and attached hydrogen atoms as one group centered on the carbon atom, a united atom force field. The van der Waals force parameters were derived from calculations of the crystal structures of hydrocarbons, and on amino acids using short (0.8 nm) nonbonded cutoff radii.
GROMOS96
In 1996, a substantial rewrite of the software package was released. The force field was also improved; for example, aliphatic CHn groups were represented as united atoms with van der Waals interactions reparametrized on the basis of a series of molecular dynamics simulations of model liquid alkanes using long (1.4 nm) nonbonded cutoff radii. This version is continually being refined and several different parameter sets are available. GROMOS96 includes studies of molecular dynamics, stochastic dynamics, and energy minimization. The energy component was also part of the prior version, GROMOS87. GROMOS96 was planned and developed over a period of 20 months. The package is made up of 40 different programs, each with a different essential function. Two important programs within GROMOS96 are PROGMT, which constructs the molecular topology, and PROPMT, which converts the classical molecular topology into the path-integral molecular topology.
GROMOS05
An updated version of the software package was introduced in 2005.
GROMOS11
The current GROMOS release is dated in May 2011.
Parameter sets
Several force field parameter sets are based on the GROMOS force field. The A-version applies to aqueous or apolar solutions of proteins, nucleotides, and sugars. The B-version applies to isolated molecules (gas phase).
54
54A7 - 53A6 taken and adjusted torsional angle terms to better reproduce helical propensities, altered N–H, C=O repulsion, new CH3 charge group, parameterisation of Na+ and Cl− to improve free energy of hydration and new improper dihedrals.
54B7 - 53B6 in vacuo taken and changed in same manner as 53A6 to 54A7.
53
53A5 - optimised by first fitting to reproduce the thermodynamic properties of pure liquids of a range of small polar molecules and the solvation free enthalpies of amino acid analogs in cyclohexane, is an expansion and renumbering of 45A3.
53A6 - 53A5 taken and adjusted partial charges to reproduce hydration free enthalpies in water, recommended for simulations of biomolecules in explicit water.
45
45A3 - suitable to apply to lipid aggregates such as membranes and micelles, for mixed systems of aliphatics with or without water, for polymers, and other apolar systems that may interact with different biomolecules.
45A4 - 45A3 reparameterised to improve DNA representation.
43
43A1
43A2
See also
GROMACS
Ascalaph Designer
Comparison of software for molecular mechanics modeling
Comparison of force field implementations
References
External links
C++ software
Fortran software
Molecular dynamics software
Force fields (chemistry) | GROMOS | [
"Chemistry"
] | 794 | [
"Molecular dynamics software",
"Computational chemistry software",
"Molecular dynamics",
"Computational chemistry",
"Force fields (chemistry)"
] |
314,650 | https://en.wikipedia.org/wiki/Enamel%20paint | Enamel paint is paint that air-dries to a hard, usually glossy, finish, used for coating surfaces that are outdoors or otherwise subject to hard wear or variations in temperature; it should not be confused with decorated objects in "painted enamel", where vitreous enamel is applied with brushes and fired in a kiln. The name is something of a misnomer, as in reality most commercially available enamel paints are significantly softer than either vitreous enamel or stoved synthetic resins, and are totally different in composition; vitreous enamel is applied as a powder or paste and then fired at high temperature. There is no generally accepted definition or standard for use of the term "enamel paint", and not all enamel-type paints may use it.
Paint
Typically the term "enamel paint" is used to describe oil-based covering products, usually with a significant amount of gloss in them; however, recently many latex or water-based paints have adopted the term as well. The term today means "hard surfaced paint" and usually refers to paint brands of higher quality, floor coatings with a high gloss finish, or spray paints. Most enamel paints are alkyd resin based. Some enamel paints have been made by adding varnish to oil-based paint.
Although "enamels" and "painted enamel" in art normally refer to vitreous enamel, in the 20th century some artists used commercial enamel paints in art, including Pablo Picasso (mixing it with oil paint), Hermann-Paul, Jackson Pollock, and Sidney Nolan. The Trial (1947) is one of a number of works by Nolan to use enamel paint, usually Ripolin, a commercial paint not intended for art, also Picasso's usual brand. Some "enamel paints" are now produced specifically for artists.
Enamel paint can also refer to nitrocellulose-based paints, one of the first modern commercial paints of the 20th century. They have since been superseded by newer synthetic coatings such as alkyd, acrylic and vinyl, due to toxicity, safety, and conservation (tendency to yellow with age) concerns. In art, such paint was also used by Pollock under the commercial name Duco. The artist experimented and created with many types of commercial or house paints during his career. Other artists: "after discovering various types of industrial materials produced in the United States in the 1930s, Siqueiros produced most of his easel works with uncommon materials which include Duco paint, a DuPont brand name for pyroxyline paint, a tough and resilient type of nitro-cellulose paint manufactured for the automotive industry". Nitrocellulose enamels are also commonly known as modern lacquers. Enamel paint comes in a variety of hues and can be custom blended to produce a particular tint. It is also available in water-based and solvent-based formulations, with solvent-based enamel being more prevalent in industrial applications. For the best results, use a high-quality brush, roller, or spray gun when applying enamel paint. When dried, enamel paint forms a durable, hard-wearing surface that resists chipping, fading, and discoloration, making it a good choice for a wide range of surfaces and applications.
Uses and categories
Floor enamel – May be used for concrete, stairs, basements, porches, and patios.
Fast dry enamel – Can dry within 10–15 minutes of application. Ideal for refrigerators, counters, and other industrial finishes.
High-temp enamel – May be used for engines, brake calipers, exhaust pipe and BBQs.
Enamel paint is also used on wood to make it resistant to the elements via the waterproofing and rotproofing properties of enamel. Generally, treated surfaces last much longer and are much more resistant to wear than untreated surfaces.
Model building – Xtracolor and Humbrol are mainstream UK brands. Colourcoats model paint is a high quality brand with authentic accurate military colours. Testors, a US company, offers the Floquil, Pactra, Model Master and Testors brands.
Nail enamel – to color nails, it comes in many varieties for fast drying, color retention, gloss retention, etc.
Epoxy enamel, polyurethane enamel, etc. used in protective coating / industrial painting purpose in chemical and petrochemical industries for anti-corrosion purposes.
Notes
Coatings
Paints | Enamel paint | [
"Chemistry"
] | 904 | [
"Paints",
"Coatings"
] |
314,692 | https://en.wikipedia.org/wiki/One-form%20%28differential%20geometry%29 | In differential geometry, a one-form (or covector field) on a differentiable manifold is a differential form of degree one, that is, a smooth section of the cotangent bundle. Equivalently, a one-form on a manifold is a smooth mapping of the total space of the tangent bundle of to whose restriction to each fibre is a linear functional on the tangent space. Symbolically,
where is linear.
Often one-forms are described locally, particularly in local coordinates. In a local coordinate system, a one-form is a linear combination of the differentials of the coordinates:
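In coordinates x_1, …, x_n this local expression reads as follows, with f_i denoting the smooth coefficient functions referred to just below (the letter f is the conventional choice here, not necessarily the source's notation):

```latex
\alpha = f_1 \, dx_1 + f_2 \, dx_2 + \cdots + f_n \, dx_n = \sum_{i=1}^{n} f_i \, dx_i .
```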
where the coefficient functions f_i are smooth. From this perspective, a one-form has a covariant transformation law on passing from one coordinate system to another. Thus a one-form is an order-1 covariant tensor field.
Examples
The most basic non-trivial differential one-form is the "change in angle" form dθ. This is defined as the derivative of the angle "function" θ (which is only defined up to an additive constant), which can be explicitly defined in terms of the atan2 function. Taking the derivative yields the following formula for the total derivative:
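The standard expression for this one-form on the punctured plane is:

```latex
d\theta = \frac{-y \, dx + x \, dy}{x^2 + y^2} .
```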
While the angle "function" cannot be continuously defined – the function atan2 is discontinuous along the negative x-axis – which reflects the fact that angle cannot be continuously defined, this derivative is continuously defined except at the origin, reflecting the fact that infinitesimal (and indeed local) changes in angle can be defined everywhere except the origin. Integrating this derivative along a path gives the total change in angle over the path, and integrating over a closed loop gives the winding number times 2π.
In the language of differential geometry, this derivative is a one-form on the punctured plane. It is closed (its exterior derivative is zero) but not exact, meaning that it is not the derivative of a 0-form (that is, a function): the angle is not a globally defined smooth function on the entire punctured plane. In fact, this form generates the first de Rham cohomology of the punctured plane. This is the most basic example of such a form, and it is fundamental in differential geometry.
Differential of a function
Let U ⊆ R be open (for example, an open interval), and consider a differentiable function f : U → R with derivative f′. The differential df assigns to each point x of U a linear map from the tangent space to the real numbers. In this case, each tangent space is naturally identifiable with the real number line, and the linear map in question is given by scaling by f′(x). This is the simplest example of a differential (one-)form.
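Written out, the differential at a point x acts on a tangent vector v (identified with a real number) by scaling, which is the coordinate expression of the general definition above:

```latex
df_x(v) = f'(x)\, v, \qquad\text{equivalently}\qquad df = f'(x)\, dx .
```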
See also
References
Differential forms
1 (number) | One-form (differential geometry) | [
"Engineering"
] | 538 | [
"Tensors",
"Differential forms"
] |
314,983 | https://en.wikipedia.org/wiki/Hydroxylamine | Hydroxylamine (also known as hydroxyammonia) is an inorganic compound with the chemical formula . The compound is in a form of a white hygroscopic crystals. Hydroxylamine is almost always provided and used as an aqueous solution. It is consumed almost exclusively to produce Nylon-6. The oxidation of to hydroxylamine is a step in biological nitrification.
History
Hydroxylamine was first prepared as hydroxylammonium chloride in 1865 by the German chemist Wilhelm Clemens Lossen (1838-1906); he reacted tin and hydrochloric acid in the presence of ethyl nitrate. It was first prepared in pure form in 1891 by the Dutch chemist Lobry de Bruyn and by the French chemist Léon Maurice Crismer (1858-1944). The coordination complex (zinc dichloride di(hydroxylamine)), known as Crismer's salt, releases hydroxylamine upon heating.
Production
Hydroxylamine or its salts (salts containing hydroxylammonium cations ) can be produced via several routes but only two are commercially viable. It is also produced naturally as discussed in a section on biochemistry.
From nitric oxide
Hydroxylamine is mainly produced as its sulfuric acid salt, hydroxylammonium hydrogen sulfate, by the hydrogenation of nitric oxide over platinum catalysts in the presence of sulfuric acid.
Raschig process
Another route to hydroxylamine is the Raschig process: aqueous ammonium nitrite is reduced by sulfur dioxide/bisulfite at 0 °C to yield a hydroxylamido-N,N-disulfonate anion:
This anion is then hydrolyzed to give hydroxylammonium sulfate:
Solid hydroxylamine can be collected by treatment with liquid ammonia. Ammonium sulfate, a side-product insoluble in liquid ammonia, is removed by filtration; the liquid ammonia is evaporated to give the desired product.
The net reaction is:
A base then frees the hydroxylamine from the salt:
Other methods
Julius Tafel discovered that hydroxylamine hydrochloride or sulfate salts can be produced by electrolytic reduction of nitric acid with HCl or H2SO4, respectively:
Hydroxylamine can also be produced by the reduction of nitrous acid or potassium nitrite with bisulfite:
(100 °C, 1 h)
Hydrochloric acid disproportionates nitromethane to hydroxylamine hydrochloride and carbon monoxide via the hydroxamic acid.
A direct lab synthesis of hydroxylamine from molecular nitrogen in water plasma was demonstrated in 2024.
Reactions
Hydroxylamine reacts with electrophiles, such as alkylating agents, which can attach to either the oxygen or the nitrogen atoms:
The reaction of hydroxylamine with an aldehyde or ketone produces an oxime.
(in NaOH solution)
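For a generic ketone R2C=O (the R groups here are placeholders, not specific compounds from the source), this condensation can be written as:

```latex
R_2C{=}O \;+\; NH_2OH \;\longrightarrow\; R_2C{=}N{-}OH \;+\; H_2O
```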
This reaction is useful in the purification of ketones and aldehydes: if hydroxylamine is added to an aldehyde or ketone in solution, an oxime forms, which generally precipitates from solution; heating the precipitate with an inorganic acid then restores the original aldehyde or ketone.
Oximes such as dimethylglyoxime are also employed as ligands.
Hydroxylamine reacts with chlorosulfonic acid to give hydroxylamine-O-sulfonic acid:
When heated, hydroxylamine explodes. A detonator can easily explode aqueous solutions concentrated above 80% by weight, and even 50% solution might prove detonable if tested in bulk. In air, the combustion is rapid and complete:
Absent air, pure hydroxylamine requires stronger heating and the detonation does not complete combustion:
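One balanced stoichiometry consistent with these two statements (complete combustion in air, and anaerobic decomposition with incomplete oxidation) is:

```latex
4\, NH_2OH + O_2 \;\longrightarrow\; 2\, N_2 + 6\, H_2O
\qquad\text{and}\qquad
3\, NH_2OH \;\longrightarrow\; NH_3 + N_2 + 3\, H_2O .
```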
Partial isomerisation to the amine oxide contributes to the high reactivity.
Functional group
Hydroxylamine derivatives substituted in place of the hydroxyl or amine hydrogen are (respectively) called O- or N-hydroxylamines. In general, N-hydroxylamines are more common. Examples are N-tert-butylhydroxylamine or the glycosidic bond in calicheamicin. N,O-Dimethylhydroxylamine is a precursor to Weinreb amides.
Similarly to amines, one can distinguish hydroxylamines by their degree of substitution: primary, secondary and tertiary. When stored exposed to air for weeks, secondary hydroxylamines degrade to nitrones.
N-Organylhydroxylamines, RNHOH, where R is an organyl group, can be reduced to amines RNH2:
Synthesis
Amine oxidation with benzoyl peroxide is the most common method to synthesize hydroxylamines. Care must be taken to prevent over-oxidation to a nitrone. Other methods include:
Hydrogenation of an oxime
Alkylating a precursor hydroxylamine
Amine oxide pyrolysis (the Cope reaction)
Uses
Approximately 95% of hydroxylamine is used in the synthesis of cyclohexanone oxime, a precursor to Nylon 6. The treatment of this oxime with acid induces the Beckmann rearrangement to give caprolactam. The latter can then undergo a ring-opening polymerization to yield Nylon 6.
Laboratory uses
Hydroxylamine and its salts are commonly used as reducing agents in myriad organic and inorganic reactions. They can also act as antioxidants for fatty acids.
High concentrations of hydroxylamine are used by biologists to introduce mutations by acting as a DNA nucleobase amine-hydroxylating agent. It is thought to act mainly via hydroxylation of cytidine to hydroxyaminocytidine, which is misread as thymidine, thereby inducing C:G to T:A transition mutations. However, high concentrations or over-reaction of hydroxylamine in vitro can apparently also modify other regions of the DNA and lead to other types of mutations. This may be due to the ability of hydroxylamine to undergo uncontrolled free-radical chemistry in the presence of trace metals and oxygen; in fact, in the absence of these free-radical effects, Ernst Freese noted that hydroxylamine was unable to induce reversion of its C:G to T:A transition mutations and even considered hydroxylamine to be the most specific mutagen known. In practice, it has been largely surpassed by more potent mutagens such as EMS, ENU, or nitrosoguanidine, but being a very small mutagenic compound with high specificity, it has found some specialized uses, such as mutation of DNA packed within bacteriophage capsids and mutation of purified DNA in vitro.
An alternative industrial synthesis of paracetamol developed by Hoechst–Celanese involves the conversion of a ketone to a ketoxime with hydroxylamine.
Some non-chemical uses include removal of hair from animal hides and photographic developing solutions. In the semiconductor industry, hydroxylamine is often a component in the "resist stripper", which removes photoresist after lithography.
Hydroxylamine can also be used to better characterize the nature of a post-translational modification onto proteins. For example, poly(ADP-Ribose) chains are sensitive to hydroxylamine when attached to glutamic or aspartic acids but not sensitive when attached to serines. Similarly, Ubiquitin molecules bound to serines or threonines residues are sensitive to hydroxylamine, but those bound to lysine (isopeptide bond) are resistant.
Biochemistry
In biological nitrification, the oxidation of ammonia (NH3) to hydroxylamine is mediated by ammonia monooxygenase (AMO). Hydroxylamine oxidoreductase (HAO) further oxidizes hydroxylamine to nitrite.
Cytochrome P460, an enzyme found in the ammonia-oxidizing bacterium Nitrosomonas europaea, can convert hydroxylamine to nitrous oxide, a potent greenhouse gas.
Hydroxylamine can also be used to highly selectively cleave asparaginyl-glycine peptide bonds in peptides and proteins. It also bonds to and permanently disables (poisons) heme-containing enzymes. It is used as an irreversible inhibitor of the oxygen-evolving complex of photosynthesis on account of its similar structure to water.
Safety and environmental concerns
Hydroxylamine can be an explosive, with a theoretical decomposition energy of about 5 kJ/g, and aqueous solutions above 80% can be easily detonated by detonator or strong heating under confinement. At least two factories dealing in hydroxylamine have been destroyed since 1999 with loss of life. It is known, however, that ferrous and ferric iron salts accelerate the decomposition of 50% solutions. Hydroxylamine and its derivatives are more safely handled in the form of salts.
It is an irritant to the respiratory tract, skin, eyes, and other mucous membranes. It may be absorbed through the skin, is harmful if swallowed, and is a possible mutagen.
See also
Amine
Amino acid
References
Further reading
Hydroxylamine
Walters, Michael A. and Andrew B. Hoem. "Hydroxylamine." e-Encyclopedia of Reagents for Organic Synthesis. 2001.
Schupf Computational Chemistry Lab
M. W. Rathke, A. A. Millard "Boranes in Functionalization of Olefins to Amines: 3-Pinanamine" Organic Syntheses, Coll. Vol. 6, p. 943; Vol. 58, p. 32. (preparation of hydroxylamine-O-sulfonic acid).
External links
Calorimetric studies of hydroxylamine decomposition
Chemical company BASF info
MSDS
Deadly detonation of hydroxylamine at Concept Sciences facility
Functional groups
Inorganic amines
Photographic chemicals
Rocket fuels
Reducing agents
Nitrogen oxoacids | Hydroxylamine | [
"Chemistry"
] | 2,090 | [
"Functional groups",
"Hydroxylamines",
"Redox",
"Reducing agents"
] |
315,008 | https://en.wikipedia.org/wiki/Magnetoresistive%20RAM | Magnetoresistive random-access memory (MRAM) is a type of non-volatile random-access memory which stores data in magnetic domains. Developed in the mid-1980s, proponents have argued that magnetoresistive RAM will eventually surpass competing technologies to become a dominant or even universal memory. Currently, memory technologies in use such as flash RAM and DRAM have practical advantages that have so far kept MRAM in a niche role in the market.
Description
Unlike conventional RAM chip technologies, data in MRAM is not stored as electric charge or current flows, but by magnetic storage elements. The elements are formed from two ferromagnetic plates, each of which can hold a magnetization, separated by a thin insulating layer. One of the two plates is a permanent magnet set to a particular polarity; the other plate's magnetization can be changed to match that of an external field to store memory. This configuration is known as a magnetic tunnel junction (MTJ) and is the simplest structure for an MRAM bit. A memory device is built from a grid of such "cells".
The simplest method of reading is accomplished by measuring the electrical resistance of the cell. A particular cell is (typically) selected by powering an associated transistor that switches current from a supply line through the cell to ground. Because of tunnel magnetoresistance, the electrical resistance of the cell changes with the relative orientation of the magnetization in the two plates. By measuring the resulting current, the resistance inside any particular cell can be determined, and from this the magnetization polarity of the writable plate. Typically if the two plates have the same magnetization alignment (low resistance state) this is considered to mean "1", while if the alignment is antiparallel the resistance will be higher (high resistance state) and this means "0".
Data is written to the cells using a variety of means. In the simplest "classic" design, each cell lies between a pair of write lines arranged at right angles to each other, parallel to the cell, one above and one below the cell. When current is passed through them, an induced magnetic field is created at the junction, which the writable plate picks up. This pattern of operation is similar to magnetic-core memory, a system commonly used in the 1960s.
However, due to process and material variations, an array of memory cells has a distribution of switching fields with a deviation σ. Therefore, to program all the bits in a large array with the same current, the applied field needs to be larger than the mean "selected" switching field by more than 6σ. In addition, the applied field must be kept below a maximum value. Thus, this "conventional" MRAM must keep these two distributions well separated. As a result, there is a narrow operating window for programming fields, and only inside this window can all the bits be programmed without errors or disturbs. In 2005, "Savtchenko switching", which relies on the unique behavior of a synthetic antiferromagnet (SAF) free layer, was applied to solve this problem. The SAF layer is formed from two ferromagnetic layers separated by a nonmagnetic coupling spacer layer. For a synthetic antiferromagnet having some net anisotropy Hk in each layer, there exists a critical spin-flop field Hsw at which the two antiparallel layer magnetizations will rotate (flop) to be orthogonal to the applied field H, with each layer scissoring slightly in the direction of H. Therefore, if only a single line current is applied (half-selected bits), the 45° field angle cannot switch the state. Below the toggling transition, there are no disturbs all the way up to the highest fields.
This approach still requires a fairly substantial current to generate the field, however, which makes it less interesting for low-power uses, one of MRAM's primary disadvantages. Additionally, as the device is scaled down in size, there comes a time when the induced field overlaps adjacent cells over a small area, leading to potential false writes. This problem, the half-select (or write disturb) problem, appears to set a fairly large minimal size for this type of cell. One experimental solution to this problem was to use circular domains written and read using the giant magnetoresistive effect, but it appears that this line of research is no longer active.
A newer technique, spin-transfer torque (STT) or spin-transfer switching, uses spin-aligned ("polarized") electrons to directly torque the domains. Specifically, if the electrons flowing into a layer have to change their spin, this will develop a torque that will be transferred to the nearby layer. This lowers the amount of current needed to write the cells, making it about the same as the read process. There are concerns that the "classic" type of MRAM cell will have difficulty at high densities because of the amount of current needed during writes, a problem that STT avoids. For this reason, the STT proponents expect the technique to be used for devices of 65 nm and smaller. The downside is the need to maintain the spin coherence. Overall, the STT requires much less write current than conventional or toggle MRAM. Research in this field indicates that STT current can be reduced up to 50 times by using a new composite structure. However, higher-speed operation still requires higher current.
Other potential arrangements include "vertical transport MRAM" (VMRAM), which uses current through a vertical column to change magnetic orientation, a geometric arrangement that reduces the write disturb problem and so can be used at higher density.
A review article provides the details of materials and challenges associated with MRAM in the perpendicular geometry. The authors describe a new term called "Pentalemma", which represents a conflict in five different requirements such as write current, stability of the bits, readability, read/write speed and the process integration with CMOS. The selection of materials and the design of MRAM to fulfill those requirements are discussed.
Comparison with other systems
Density
The main determinant of a memory system's cost is the density of the components used to make it up. Smaller components, and fewer of them, mean that more "cells" can be packed onto a single chip, which in turn means more can be produced at once from a single silicon wafer. This improves yield, which is directly related to cost.
DRAM uses a small capacitor as a memory element, wires to carry current to and from it, and a transistor to control it – referred to as a "1T1C" cell. This makes DRAM the highest-density RAM currently available, and thus the least expensive, which is why it is used for the majority of RAM found in computers.
MRAM is physically similar to DRAM in makeup, and often does require a transistor for the write operation (though not strictly necessary). The scaling of transistors to higher density necessarily leads to lower available current, which could limit MRAM performance at advanced nodes.
Power consumption
Since the capacitors used in DRAM lose their charge over time, memory assemblies that use DRAM must refresh all the cells in their chips several times a second, reading each one and re-writing its contents. As DRAM cells decrease in size it is necessary to refresh the cells more often, resulting in greater power consumption.
In contrast, MRAM never requires a refresh. This means that not only does it retain its memory with the power turned off but also there is no constant power-draw. While the read process in theory requires more power than the same process in a DRAM, in practice the difference appears to be very close to zero. However, the write process requires more power to overcome the existing field stored in the junction, varying from three to eight times the power required during reading. Although the exact amount of power savings depends on the nature of the work — more frequent writing will require more power – in general MRAM proponents expect much lower power consumption (up to 99% less) compared to DRAM. STT-based MRAMs eliminate the difference between reading and writing, further reducing power requirements.
It is also worth comparing MRAM with another common memory system — flash RAM. Like MRAM, flash does not lose its memory when power is removed, which makes it very common in applications requiring persistent storage. When used for reading, flash and MRAM are very similar in power requirements. However, flash is re-written using a large pulse of voltage (about 10 V) that is stored up over time in a charge pump, which is both power-hungry and time-consuming. In addition, the current pulse physically degrades the flash cells, which means flash can only be written to some finite number of times before it must be replaced.
In contrast, MRAM requires only slightly more power to write than read, and no change in the voltage, eliminating the need for a charge pump. This leads to much faster operation, lower power consumption, and an indefinitely long lifetime.
Data retention
MRAM is often touted as being a non-volatile memory. However, the current mainstream high-capacity MRAM, spin-transfer torque memory, provides improved retention at the cost of higher power consumption, i.e., higher write current. In particular, the critical (minimum) write current is directly proportional to the thermal stability factor Δ. The retention is in turn proportional to exp(Δ). The retention, therefore, degrades exponentially with reduced write current.
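As a rough numerical illustration of this exponential dependence, the sketch below assumes the usual Néel–Arrhenius model with an attempt time of about 1 ns; the specific values of Δ and the resulting retention times are illustrative only, not figures taken from the source.

```python
import math

def retention_time_seconds(delta: float, attempt_time_s: float = 1e-9) -> float:
    """Mean time before a thermally activated bit flip in the Neel-Arrhenius model."""
    return attempt_time_s * math.exp(delta)

SECONDS_PER_YEAR = 3600 * 24 * 365
for delta in (40, 60, 80):
    years = retention_time_seconds(delta) / SECONDS_PER_YEAR
    print(f"thermal stability factor {delta}: ~{years:.2g} years of retention")
```

A small change in Δ therefore translates into orders of magnitude of retention time, which is why the write current (proportional to Δ) cannot be reduced arbitrarily without sacrificing data retention.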
Speed
Dynamic random-access memory (DRAM) performance is limited by the rate at which the charge stored in the cells can be drained (for reading) or stored (for writing). MRAM operation is based on measuring voltages rather than charges or currents, so there is less "settling time" needed. IBM researchers have demonstrated MRAM devices with access times on the order of 2 ns, somewhat better than even the most advanced DRAMs built on much newer processes. A team at the German Physikalisch-Technische Bundesanstalt have demonstrated MRAM devices with 1 ns settling times, better than the currently accepted theoretical limits for DRAM, although the demonstration was a single cell. The differences compared to flash are far more significant, with write speeds as much as thousands of times faster. However, these speed comparisons are not for like-for-like current. High-density memory requires small transistors with reduced current, especially when built for low standby leakage. Under such conditions, write times shorter than 30 ns may not be reached so easily. In particular, to meet solder reflow stability of 260 °C over 90 seconds, 250 ns pulses have been required. This is related to the elevated thermal stability requirement driving up the write bit error rate. In order to avoid breakdown from higher current, longer pulses are needed.
For the perpendicular STT MRAM, the switching time is largely determined by the thermal stability Δ as well as the write current. A larger Δ (better for data retention) would require a larger write current or a longer pulse. A combination of high speed and adequate retention is only possible with a sufficiently high write current.
The only current memory technology that easily competes with MRAM in terms of performance at comparable density is static random-access memory (SRAM). SRAM consists of a series of transistors arranged in a flip-flop, which will hold one of two states as long as power is applied. Since the transistors have a very low power requirement, their switching time is very low. However, since an SRAM cell consists of several transistors, typically four or six, its density is much lower than DRAM. This makes it expensive, which is why it is used only for small amounts of high-performance memory, notably the CPU cache in almost all modern central processing unit designs.
Although MRAM is not quite as fast as SRAM, it is close enough to be interesting even in this role. Given its much higher density, a CPU designer may be inclined to use MRAM to offer a much larger but somewhat slower cache, rather than a smaller but faster one. It remains to be seen how this trade-off will play out in the future.
Endurance
The endurance of MRAM is affected by the write current, just as retention and speed are, and also by the read current. When the write current is large enough for speed and retention, the probability of MTJ breakdown needs to be considered. If the ratio of read current to write current is not small enough, read disturb becomes more likely, i.e., an unintended bit flip occurs during one of the many read cycles. The read disturb error rate is given by

P = 1 − exp(−t_read / (τ · exp(Δ · (1 − I_read/I_crit)))),

where τ is the relaxation time (about 1 ns), t_read is the read pulse duration, I_read is the read current, and I_crit is the critical write current. Higher endurance requires a sufficiently low I_read/I_crit ratio. However, a lower I_read also reduces read speed.
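A short numerical sketch of this read-disturb estimate follows. The Δ value, read pulse width, and read count are assumptions chosen for illustration; only the form of the expression follows from the text above.

```python
# Illustrative sketch only: read-disturb probability from the expression above.
# Assumptions: relaxation time tau ~ 1 ns (as stated), Delta = 60, 10 ns read
# pulses, and 1e12 reads over the device lifetime.
import math

TAU = 1e-9       # relaxation time in seconds (as stated above)
DELTA = 60       # assumed thermal stability factor
T_READ = 10e-9   # assumed read pulse width in seconds

def read_disturb_probability(i_read_over_i_crit, n_reads=1.0):
    """P = 1 - exp(-n_reads * t_read / (tau * exp(Delta * (1 - Iread/Icrit))))."""
    mean_flip_time = TAU * math.exp(DELTA * (1.0 - i_read_over_i_crit))
    return 1.0 - math.exp(-n_reads * T_READ / mean_flip_time)

for ratio in (0.2, 0.4, 0.6):
    p = read_disturb_probability(ratio, n_reads=1e12)
    print(f"Iread/Icrit = {ratio}: disturb probability over 1e12 reads ~ {p:.1e}")

# Around Iread/Icrit = 0.2 the accumulated disturb probability stays negligible,
# but by 0.6 it approaches certainty -- which is why the read current must be
# kept well below the critical write current, at the expense of read speed.
```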
Endurance is mainly limited by the possible breakdown of the thin MgO layer.
Overall
MRAM can achieve performance similar to SRAM, but only by using a sufficiently large write current. This dependence on write current, however, makes it difficult to reach densities comparable to mainstream DRAM and Flash. Nevertheless, opportunities exist for MRAM where density need not be maximized. From a fundamental physics point of view, the spin-transfer torque approach to MRAM is bound to a "rectangle of death" formed by retention, endurance, speed, and power requirements, as covered above.
While the power-speed tradeoff is universal for electronic devices, the endurance-retention tradeoff at high current and the degradation of both at low Δ are problematic. Endurance is largely limited to 10^8 cycles.
Alternatives to MRAM
The limited write cycles of Flash and EEPROM are a serious problem for any truly RAM-like role. In addition, the high power needed to write the cells is a problem in low-power nodes, where non-volatile RAM is often used. The power also needs time to be "built up" in a device known as a charge pump, which makes writing dramatically slower than reading, often as little as 1/1000 the speed. While MRAM was certainly designed to address some of these issues, a number of other new memory devices are in production or have been proposed to address these shortcomings.
To date, the only similar system to enter widespread production is ferroelectric RAM, or F-RAM (sometimes referred to as FeRAM).
Also seeing renewed interest are silicon-oxide-nitride-oxide-silicon (SONOS) memory and ReRAM. 3D XPoint has also been in development, but is known to have a higher power budget than DRAM.
History
1955 — Magnetic-core memory used the same read/write principle as MRAM
1984 — Arthur V. Pohm and James M. Daughton, while working for Honeywell, developed the first magnetoresistance memory devices.
1988 — European scientists (Albert Fert and Peter Grünberg) discovered the "giant magnetoresistive effect" in thin-film structures.
1989 — Pohm and Daughton left Honeywell to form Nonvolatile Electronics, Inc. (later renamed NVE Corp.), sublicensing the MRAM technology they had created.
1995 — Motorola (later to become Freescale Semiconductor, and subsequently NXP Semiconductors) initiates work on MRAM development
1996 — Spin torque transfer is proposed
1997 — Sony published the first Japan Patent Application for S.P.I.N.O.R. (Spin Polarized Injection Non-Volatile Orthogonal Read/Write RAM), a forerunner of STT RAM.
1998 — Motorola develops 256Kb MRAM test chip.
2000 — IBM and Infineon established a joint MRAM development program.
2000 — Spintec laboratory's first Spin-Torque Transfer patent.
2002
NVE announces technology exchange with Cypress Semiconductor.
Toggle patent granted to Motorola
2003 — A 128 kbit MRAM chip was introduced, manufactured with a 180 nm lithographic process
2004
June — Infineon unveiled a 16-Mbit prototype, manufactured with a 180 nm lithographic process
September — MRAM becomes a standard product offering at Freescale.
October — Taiwan developers of MRAM tape out 1 Mbit parts at TSMC.
October — Micron drops MRAM, mulls other memories.
December — TSMC, NEC and Toshiba describe novel MRAM cells.
December — Renesas Technology promotes a high performance, high-reliability MRAM technology.
Spintec laboratory's first observation of Thermally Assisted Switching (TAS) as an MRAM approach.
Crocus Technology is founded; the company is a developer of second-generation MRAM
2005
January — Cypress Semiconductor samples MRAM, using NVE IP.
March — Cypress to Sell MRAM Subsidiary.
June — Honeywell posts data sheet for 1-Mbit rad-hard MRAM using a 150 nm lithographic process.
August — MRAM record: memory cell runs at 2 GHz.
November — Renesas Technology and Grandis collaborate on development of 65 nm MRAM employing spin torque transfer (STT).
November — NVE receives an SBIR grant to research cryptographic tamper-responsive memory.
December — Sony announced Spin-RAM, the first lab-produced spin-torque-transfer MRAM, which utilizes a spin-polarized current through the tunneling magnetoresistance layer to write data. This method consumes less power and is more scalable than conventional MRAM. With further advances in materials, this process should allow for densities higher than those possible in DRAM.
December — Freescale Semiconductor Inc. demonstrates an MRAM that uses magnesium oxide, rather than an aluminum oxide, allowing for a thinner insulating tunnel barrier and improved bit resistance during the write cycle, thereby reducing the required write current.
Spintec laboratory gives Crocus Technology exclusive license on its patents.
2006
February — Toshiba and NEC announced a 16 Mbit MRAM chip with a new "power-forking" design. It achieves a transfer rate of 200 Mbit/s, with a 34 ns cycle time, the best performance of any MRAM chip. It also boasts the smallest physical size in its class — 78.5 square millimeters — and the low voltage requirement of 1.8 volts.
July — On July 10, Freescale Semiconductor (Austin, Texas) begins marketing a 4-Mbit MRAM chip, which sells for approximately $25.00 per chip.
2007
R&D moving to spin transfer torque RAM (SPRAM)
February — Tohoku University and Hitachi developed a prototype 2-Mbit non-volatile RAM chip employing spin-transfer torque switching.
August — IBM and TDK partner in magnetic memory research on spin-transfer torque switching, aiming to lower the cost and boost the performance of MRAM and bring a product to market.
November — Toshiba applied and demonstrated spin-transfer torque switching in a perpendicular magnetic anisotropy MTJ device.
November — NEC develops world's fastest SRAM-compatible MRAM with operation speed of 250 MHz.
2008
Japanese satellite, SpriteSat, to use Freescale MRAM to replace SRAM and FLASH components
June — Samsung and Hynix become partners on STT-MRAM
June — Freescale spins off MRAM operations as new company Everspin
August — Scientists in Germany have developed next-generation MRAM that is said to operate as fast as fundamental performance limits allow, with write cycles under 1 nanosecond.
November — Everspin announces BGA packages, product family from 256 Kb to 4 Mb
2009
June — Hitachi and Tohoku University demonstrated a 32-Mbit spin-transfer torque RAM (SPRAM).
June — Crocus Technology and Tower Semiconductor announce deal to port Crocus' MRAM process technology to Tower's manufacturing environment
November — Everspin releases SPI MRAM product family and ships first embedded MRAM samples
2010
April — Everspin releases 16 Mb density
June — Hitachi and Tohoku University announce multi-level SPRAM
2011
March — PTB, Germany, announces below 500 ps (2 Gbit/s) write cycle
2012
November — Chandler, Arizona, USA, Everspin debuts 64 Mb ST-MRAM on a 90 nm process.
December — A team from University of California, Los Angeles presents voltage-controlled MRAM at IEEE International Electron Devices Meeting.
2013
November — Buffalo Technology and Everspin announce a new industrial SATA III SSD that incorporates Everspin's Spin-Torque MRAM (ST-MRAM) as cache memory.
2014
January — Researchers announce the ability to control the magnetic properties of core/shell antiferromagnetic nanoparticles using only temperature and magnetic field changes.
October — Everspin partners with GlobalFoundries to produce ST-MRAM on 300 mm wafers.
2016
April — Samsung's semiconductor chief Kim Ki-nam says Samsung is developing an MRAM technology that "will be ready soon".
July — IBM and Samsung report an MRAM device capable of scaling down to 11 nm with a switching current of 7.5 microamps at 10 ns.
August — Everspin announced it was shipping samples of the industry's first 256 Mb ST-MRAM to customers.
October — Avalanche Technology partners with Sony Semiconductor Manufacturing to manufacture STT-MRAM on 300 mm wafers, based on "a variety of manufacturing nodes".
December — Inston and Toshiba independently present results on voltage-controlled MRAM at International Electron Devices Meeting.
2019
January — Everspin starts shipping samples of 28 nm 1 Gb STT-MRAM chips.
March — Samsung commences commercial production of its first embedded STT-MRAM, based on a 28 nm process.
May — Avalanche partners with United Microelectronics Corporation to jointly develop and produce embedded MRAM based on the latter's 28 nm CMOS manufacturing process.
2020
December — IBM announces a 14 nm MRAM node.
2021
May — TSMC revealed a roadmap for developing eMRAM technology at the 12/14 nm node as an offering to replace eFlash.
November — Taiwan Semiconductor Research Institute announced the development of a SOT-MRAM device.
Applications
Possible practical applications of MRAM include virtually every device that contains some type of memory, such as aerospace and military systems, digital cameras, notebooks, smart cards, mobile telephones, cellular base stations, personal computers, battery-backed SRAM replacements, data-logging specialty memories (black-box solutions), media players, and e-book readers.
See also
Magnetic bubble memory
EEPROM
Everspin Technologies
F-RAM
Ferromagnetism
Magnetoresistance
Memristor
MOSFET
NRAM
nvSRAM
Phase-change memory (PRAM)
Spin valve
Spin-transfer torque
Tunnel magnetoresistance
References
External links
Wired News article from February, 2006
NEC Press Release from February, 2006
BBC news article from July, 2006
Freescale MRAM – an in-depth examination from August 2006
MRAM – The Birth of the Super Memory – An article and an interview with Freescale about their MRAM technology
Spin torque applet – An applet illustrating the principles underlying spin-torque transfer MRAM
New Speed Record for Magnetic Memories – The Future of Things article
Types of RAM
Non-volatile memory
Spintronics